The 2025 AI Paradigm

Context Engineering

"In industrial-strength LLM apps, filling the context window is a delicate art and science"

— Andrej Karpathy, June 2025

The CPU/RAM Analogy: A New Operating System Paradigm

According to Karpathy, LLMs are like a new kind of operating system: LLM = CPU, Context Window = RAM. The context window serves as the model's working memory, where every token must be carefully placed.

Science

Task descriptions, few-shot examples, RAG, multimodal data, tools, state, and history

Art

Guiding intuition around LLM psychology and behavior

Balance

Too little context = poor performance; too much = high cost and performance degradation

Why Context Engineering in 2025?

This paradigm shift, popularized by Tobi Lütke (Shopify CEO) and Andrej Karpathy, reflects the realities of industrial-strength AI applications.

Old Approach: "Prompt Engineering"

  • Short task descriptions

Prompts thought of as the simple requests you'd type in day-to-day use

  • "ChatGPT wrapper" misconception

    Karpathy: "This term is tired and really, really wrong"

  • One-shot instructions

    Static, unchanging pieces of information

New Reality: Context Engineering

  • Thick software layer

    Complex systems coordinating LLM calls

  • Dynamic context assembly

    Precise orchestration of information for each step

  • System prompt learning

    LLMs learning by taking their own notes

Software 3.0: Software in the Age of AI

The new software paradigm defined by Karpathy at YC AI Startup School 2025

Software 1.0

Classical programming

  • Explicit instructions
  • Deterministic behavior
  • Human-written code

Software 2.0

Neural network era

  • Data-driven
  • Learned behaviors
  • Weight optimization

Software 3.0

Context engineering era

  • Context orchestration
  • AI agent systems
  • Dynamic adaptation

The "Jagged Intelligence" Phenomenon

Karpathy's paradox: LLMs can solve complex math problems but fail at simple tasks. This "jagged intelligence" profile shows why context engineering is critical.

Strong Points

  • Complex reasoning
  • Creative problem solving
  • Language understanding

Weak Points

  • Simple arithmetic errors
  • Context drift
  • Inconsistent behaviors

Core Components of Context Engineering

Critical building blocks for industrial-strength LLM applications

RAG (Retrieval-Augmented Generation)

Dynamic information retrieval enables LLMs to access current and accurate information through vector databases and semantic search.
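A minimal retrieval sketch (not any specific vector database's API): rank candidate documents against the query with bag-of-words cosine similarity and keep the top-k. A production system would use embeddings and a vector store, but the retrieve-then-inject shape is the same.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over bag-of-words term counts.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top-k.
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "Shipping rates are configured in store settings.",
    "The context window is the model's working memory.",
    "Few-shot examples anchor the output format.",
]
print(retrieve("what is the context window", docs, k=1))
```

The retrieved snippets are then placed into the context window alongside the task description.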

State & History Management

Intelligent management of conversation history, user preferences, and application state. Critical for efficient context window usage.


Few-Shot Examples

Carefully selected examples for the task. Ensures LLMs produce output in the desired format and quality.
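The selection of examples is task-specific, but the assembly step is mechanical. A sketch of a few-shot prompt builder; the function name and the Input/Output format are illustrative, not from any particular framework:

```python
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]],
                          query: str) -> str:
    # Each (input, output) pair shows the model the expected format.
    lines = [task, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("I love this store", "positive"), ("The checkout is broken", "negative")],
    "Shipping was fast",
)
print(prompt)
```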

Tool Use & Function Calling

LLM interaction with external systems. Required for API calls, database queries, and computations.
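A sketch of the receiving end of function calling: the model emits a JSON tool call, and application code dispatches it to the matching function. `get_order_status` and the call format here are invented for illustration; real providers each define their own schema.

```python
import json

# Hypothetical tool; a real registry would map names to API clients.
def get_order_status(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {"get_order_status": get_order_status}

def dispatch(tool_call_json: str) -> dict:
    # Parse a model-emitted tool call and route it to the named function.
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

result = dispatch('{"name": "get_order_status", "arguments": {"order_id": "A-123"}}')
print(result)
```

The tool result is then fed back into the context window so the model can use it in its next step.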

Multimodal Context

Combining text, images, audio, and other data types. Critical for rich context creation.

Context Compaction

Maximum information density without exceeding token limits. Summarization, filtering, and prioritization techniques.
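Prioritization can be as simple as a greedy knapsack over scored snippets. A sketch, with token cost crudely approximated by word count; a real system would use the model's tokenizer:

```python
def compact(snippets: list[tuple[float, str]], budget: int) -> list[str]:
    # Greedily keep the highest-priority snippets that fit the budget.
    kept, used = [], 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = len(text.split())  # crude stand-in for a token count
        if used + cost <= budget:
            kept.append(text)
            used += cost
    return kept

snippets = [
    (0.9, "Refund policy: 30 days."),
    (0.2, "Our founding story began in a garage with two friends."),
    (0.7, "Shipping takes 3-5 business days."),
]
print(compact(snippets, budget=10))
```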

Implementation Strategies

Modern techniques for filling the context window effectively

01

Context Window Planning

Strategically distribute your token budget

System prompt (10-20%), Examples (20-30%), RAG content (30-40%), History (10-20%), Buffer (10%)
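The split above can be expressed as a simple allocation over the window size. The shares below pick one point inside each suggested range; they are illustrative, not fixed constants:

```python
def plan_budget(window_tokens: int, shares: dict[str, float]) -> dict[str, int]:
    # Convert fractional shares into per-section token budgets.
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {name: int(window_tokens * share) for name, share in shares.items()}

budget = plan_budget(8192, {
    "system_prompt": 0.15,  # 10-20%
    "examples": 0.25,       # 20-30%
    "rag": 0.35,            # 30-40%
    "history": 0.15,        # 10-20%
    "buffer": 0.10,
})
print(budget)
```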

02

Dynamic Context Assembly

Create custom context for each request

Task analysis → Relevant retrieval → Priority sorting → Token optimization → Context injection
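The pipeline above can be sketched as one assembly function: fixed-priority sections go in first, then recent history fills whatever budget remains. Section names and word-count "tokens" are simplifications:

```python
def assemble(system: str, examples: list[str], retrieved: list[str],
             history: list[str], budget: int) -> str:
    # Fixed priority: system prompt, then examples, then retrieved context;
    # finally as much recent history as the remaining budget allows.
    parts = [system] + examples + retrieved
    used = sum(len(p.split()) for p in parts)
    kept_history: list[str] = []
    for msg in reversed(history):  # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > budget:
            break
        kept_history.insert(0, msg)
        used += cost
    return "\n\n".join(parts + kept_history)

prompt = assemble(
    system="You are a helpful store assistant.",
    examples=["Q: hours? A: 9-5."],
    retrieved=["Returns accepted within 30 days."],
    history=["user: hi", "assistant: hello, how can I help?",
             "user: what is your return policy?"],
    budget=25,
)
print(prompt)
```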

03

Cascading Context Strategy

Break down and chain complex tasks

Decompose large tasks into subtasks, use optimized context for each, merge results
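A sketch of the cascade shape with plug-in steps; here `solve` is a toy stand-in for an LLM call made with a subtask-specific context:

```python
def cascade(task: str, decompose, solve, merge):
    # Break the task into subtasks, solve each with its own focused
    # context, then merge the partial results.
    subtasks = decompose(task)
    results = [solve(sub) for sub in subtasks]
    return merge(results)

# Toy example: process each sentence independently, then join.
report = cascade(
    "Sentence one. Sentence two. Sentence three.",
    decompose=lambda t: [s.strip() for s in t.split(".") if s.strip()],
    solve=lambda s: s.upper(),
    merge=lambda rs: " | ".join(rs),
)
print(report)  # SENTENCE ONE | SENTENCE TWO | SENTENCE THREE
```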

04

Context Decay & Refresh

Clean old information, add new

Temporal relevance scoring, sliding window approach, importance-based retention
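Temporal relevance scoring plus importance-based retention can be sketched with an exponential half-life decay; the half-life of 4 steps is an arbitrary illustrative choice:

```python
import math

def retention_score(importance: float, age_steps: int,
                    half_life: float = 4.0) -> float:
    # Importance weighted by exponential recency decay.
    return importance * math.exp(-math.log(2) * age_steps / half_life)

def refresh(memory: list[dict], keep: int) -> list[dict]:
    # Drop the lowest-scoring entries so the window stays bounded.
    ranked = sorted(memory,
                    key=lambda m: retention_score(m["importance"], m["age"]),
                    reverse=True)
    return ranked[:keep]

memory = [
    {"id": 1, "importance": 1.0, "age": 8},  # important but stale
    {"id": 2, "importance": 0.5, "age": 0},  # fresh but minor
    {"id": 3, "importance": 0.9, "age": 1},  # important and recent
]
print(refresh(memory, keep=2))
```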

05

Multi-Agent Orchestration

Specialized agents with different contexts

Each agent has its own context, coordinator agent management, shared memory systems
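A minimal sketch of the structure, with orchestration reduced to routing by skill; `Agent` and `Coordinator` are illustrative names, not a framework API:

```python
class Agent:
    # Each agent keeps its own private context.
    def __init__(self, skill: str):
        self.skill = skill
        self.context: list[str] = []

    def handle(self, task: str) -> str:
        self.context.append(task)  # private working memory
        return f"[{self.skill}] done: {task}"

class Coordinator:
    # Routes tasks to specialists and records results in shared memory.
    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents
        self.shared_memory: list[str] = []  # visible to all agents

    def run(self, skill: str, task: str) -> str:
        result = self.agents[skill].handle(task)
        self.shared_memory.append(result)
        return result

crew = Coordinator({"code": Agent("code"), "docs": Agent("docs")})
crew.run("code", "write parser")
crew.run("docs", "draft README")
print(crew.shared_memory)
```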

Real-World Applications

The power of context engineering in industrial applications

Code Generation Systems

Systems like GitHub Copilot and Cursor use context engineering to understand entire codebases and generate consistent code.

  • Understanding project structure
  • Maintaining code style
  • Import and dependency management

Enterprise AI Assistants

Corporate AI assistants use context engineering to understand organizational knowledge and processes.

  • Enterprise knowledge base integration
  • Role-based access control
  • Compliance and security layers

Autonomous Agents

Systems like AutoGPT use context engineering to execute long-running tasks independently.

  • Task decomposition
  • Memory management
  • Self-reflection loops

Educational Systems

Personalized learning platforms use student context to provide adaptive learning experiences.

  • Learning history tracking
  • Personalized curriculum
  • Adaptive difficulty adjustment

Challenges and Solutions

Core problems in context engineering and modern solution approaches

Lost in the Middle

Strategic positioning

Place critical information at the beginning and end of the context to prevent mid-context loss
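One way to implement the positioning rule: alternate ranked chunks between the front and the back of the context so the least relevant material ends up in the middle. A sketch:

```python
def position_by_relevance(chunks: list[tuple[float, str]]) -> list[str]:
    # Alternate the highest-scoring chunks between start and end,
    # pushing the least relevant material toward the middle.
    ranked = sorted(chunks, key=lambda c: c[0], reverse=True)
    front, back = [], []
    for i, (_, text) in enumerate(ranked):
        (front if i % 2 == 0 else back).append(text)
    return front + back[::-1]

order = position_by_relevance([(0.9, "A"), (0.1, "D"), (0.7, "B"), (0.4, "C")])
print(order)  # ['A', 'C', 'D', 'B']: top two at the edges, weakest in the middle
```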

Context Window Limits

Smart compression

Token efficiency through semantic chunking, summarization, and prioritization

Hallucination Risk

Grounding techniques

Accuracy control with RAG, fact-checking, and validation gates

Context Switching

State management

Maintain continuity with session persistence and memory systems

Performance Degradation

Selective loading

Performance optimization with relevance scoring and lazy loading

Cost Explosion

Token economy

Cost control through caching, reuse strategies, and efficient encoding
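The cheapest token is the one you never resend. A sketch of exact-match response caching keyed on a prompt hash; `fake_model` stands in for a paid API call, and provider-side prompt caching works differently but serves the same goal:

```python
import hashlib

class PromptCache:
    # Cache responses keyed on a hash of the exact prompt, so repeated
    # identical calls don't spend tokens twice.
    def __init__(self):
        self.store: dict[str, str] = {}
        self.hits = 0

    def get_or_call(self, prompt: str, call_model) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.store:
            self.hits += 1
            return self.store[key]
        response = call_model(prompt)
        self.store[key] = response
        return response

cache = PromptCache()
fake_model = lambda p: f"answer to: {p}"  # stand-in for a paid API call
cache.get_or_call("What is RAG?", fake_model)
cache.get_or_call("What is RAG?", fake_model)
print(cache.hits)  # 1
```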

Measurable Results

Proven impact of context engineering in industrial applications

  • 85% accuracy improvement (with RAG and grounding)
  • 10x productivity gain (in autonomous agent systems)
  • 60% token savings (through smart compression)

Case Study: Shopify's Magic AI Assistant

Led by CEO Tobi Lütke, Shopify Magic uses context engineering principles to provide AI support to millions of merchants.

Techniques Applied

  • Store context and product catalog integration
  • Merchant behavior history analysis
  • E-commerce best practices injection

Results Achieved

  • 70% faster store setup
  • 50% higher conversion rate
  • 40% increase in customer satisfaction

The Future: Autonomy Slider and Beyond

Karpathy's vision for the future of context engineering

Autonomy Slider Concept

Users can dynamically adjust the autonomy level of AI systems. A continuous spectrum from full manual control to fully autonomous operation.

Manual → Semi-autonomous → Fully autonomous

Self-Improving Systems

System prompt learning enables LLMs to learn from their own experiences. Each interaction becomes a data point that improves the system's context strategy. Mechanisms similar to human brain's note-taking and learning processes.

AI-Native Architecture

Systems designed from the ground up for AI agents. Human interfaces become secondary, with API and context-first approaches taking priority. "Build for agents, adapt for humans" philosophy.

Build the Future with Context Engineering

With Karpathy's vision, develop industrial-strength AI systems. Filling the context window is now both an art and a science.