Module 2: Application & Agent Architectures

Designing an LLM application is not just about picking a model; it is about picking (and often combining) the right architecture pattern along a spectrum that runs from a single LLM call to fully autonomous multi-agent swarms.
This chapter unifies the perspectives from three excellent deep-dives: Harrison Chase on agent architectures, Anthropic's Building Effective Agents, and Philipp Schmid's Zero to One: Learning Agentic Patterns (all linked under Further Reading below).

Below is a distilled map, guidance on when to stop at a workflow versus when to move to an agent, and concrete patterns you can apply in Langfuse-instrumented projects.


The Architecture Ladder

The three sources use slightly different names; we merge them into a single ladder of rungs with increasing “agency”, from R0 (the simplest) up to R6 (the most autonomous).

Rule of thumb – climb only as high as you need:

  • Workflows (R0-R4) shine when you value predictability, testability, low latency, and tight context control.
  • Agents (R5-R6) shine when the path is unknown a priori, tooling decisions are dynamic, or the user expects open-ended autonomy. The sketch below contrasts the two ends of the spectrum.
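
To make the difference concrete, here is a minimal sketch contrasting a workflow whose control flow is fixed in code with an agent loop where the model picks the next step. `call_llm` and `TOOLS` are hypothetical placeholders, not part of any particular SDK.

```python
# Minimal sketch contrasting a bounded workflow with an autonomous agent loop.
# `call_llm` and `TOOLS` are hypothetical placeholders for your model client
# and tool registry; swap in your own implementations.
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to your model provider."""
    raise NotImplementedError

# Workflow (low rungs): control flow is fixed in code; the LLM only fills in steps.
def summarise_ticket(ticket: str) -> str:
    triage = call_llm(f"Classify the urgency of this ticket:\n{ticket}")
    summary = call_llm(f"Summarise the ticket in two sentences:\n{ticket}")
    return f"[{triage.strip()}] {summary.strip()}"

# Agent (high rungs): the LLM decides which tool to call next; code only loops and guards.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"<results for {q}>",  # placeholder tool
}

def run_agent(task: str, max_iterations: int = 5) -> str:
    scratchpad = task
    for _ in range(max_iterations):
        decision = call_llm(
            "Reply with `tool_name: argument` to act, or `final: answer` to finish.\n"
            f"Available tools: {list(TOOLS)}\n\n{scratchpad}"
        )
        name, _, arg = decision.partition(":")
        if name.strip() == "final":
            return arg.strip()
        observation = TOOLS[name.strip()](arg.strip())
        scratchpad += f"\n{decision}\nObservation: {observation}"
    return "Stopped: iteration budget exhausted."
```

The workflow is trivially testable step by step; the agent loop needs the guardrails and tracing discussed later because its path is decided at runtime.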

Canonical Patterns

| Pattern | Typical Use-Case | Key Pros | Key Cons |
|---|---|---|---|
| Prompt Chaining | Deterministic multi-step doc generation | Easy to debug | Rigid, brittle when input drifts |
| Routing / Handoff | Tier-1 support → specialised prompts | Cheap requests go to smaller models | Mis-routing tanks quality |
| Parallelisation | Map-reduce summarisation, guardrails | Reduces latency | Cost × N, aggregation complexity |
| Evaluator–Optimizer | “Draft → critique → revise” loops | Builds quality offline or online | Adds tokens & delay |
| Orchestrator–Workers | Retrieval + synthesis workflows | Clear separation of concerns | Needs robust state passing |
| Tool-Calling ReAct | One-shot Q&A with calculator / web | Simple mental model | Parsing / hallucination risk |
| Planning Agent | Multi-file code refactor, research | Deeper reasoning | Planning errors snowball |
| Reflection | Self-consistency, safety checks | Cuts hallucinations | Extra calls and $$ |
| Memory-Augmented | Long customer sessions | Personalised UX | Memory staleness / cost |
| Multi-Agent Swarm | Brainstorming, negotiation sims | Diverse reasoning | Hardest to debug |
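
As one worked example from the table, the Evaluator–Optimizer pattern reduces to a short draft → critique → revise loop. The sketch below assumes a generic `call_llm` helper and hand-written prompts; it is illustrative, not a library recipe.

```python
# Sketch of the Evaluator-Optimizer pattern: draft -> critique -> revise,
# looping until the critic approves or the round budget runs out.
# `call_llm` is a hypothetical placeholder for your model client.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in your provider's chat-completion call

def draft_and_refine(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Write a first draft for this task:\n{task}")
    for _ in range(max_rounds):
        critique = call_llm(
            "Critique the draft below. Reply APPROVED if it fully satisfies the task, "
            f"otherwise list concrete problems.\n\nTask: {task}\n\nDraft:\n{draft}"
        )
        if "APPROVED" in critique:
            break
        draft = call_llm(
            f"Revise the draft to address this critique.\n\nCritique:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft
```

Each extra round adds tokens and latency, which is exactly the trade-off the table flags for this pattern.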

Selecting the Right Approach

  1. Define “good” first. Accuracy? Cost? Latency? Trust?
  2. Prototype as R1 (a single call). Measure offline with Langfuse datasets (a minimal evaluation loop is sketched below).
  3. When the metric plateaus, move to R2 → R3.
  4. Adopt agents only if the task cannot be expressed as a bounded graph.
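
A minimal sketch of step 2, offline measurement of the R1 prototype. The example items, `call_llm`, and the `exact_match` scorer are hypothetical stand-ins; in a Langfuse project you would load the items from a Langfuse dataset via the SDK and attach the scores to a dataset run so experiments stay comparable.

```python
# Sketch: measure the single-call (R1) prototype against a fixed set of examples
# before adding architectural complexity. `call_llm`, EXAMPLES and `exact_match`
# are hypothetical placeholders; with Langfuse you would pull the items from a
# dataset and log one scored run per item.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # the R1 prototype: one call, no tools, no routing

EXAMPLES = [
    {"input": "What is 2 + 2?", "expected": "4"},
    {"input": "What is the capital of France?", "expected": "Paris"},
]

def exact_match(output: str, expected: str) -> float:
    return 1.0 if expected.lower() in output.lower() else 0.0

def evaluate_prototype() -> float:
    scores = [exact_match(call_llm(item["input"]), item["expected"]) for item in EXAMPLES]
    return sum(scores) / len(scores)  # the metric you watch for a plateau (step 3)
```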

“The hard part of reliable agents is passing the right context at every step.” — Harrison Chase

Langfuse provides the tracing you need to see that context. Every node/tool invocation you build becomes a traced span that you can later debug, evaluate, and cost-optimise.
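
For example, wrapping each node and tool in the Langfuse `@observe` decorator yields one trace per request with nested spans. This is a minimal sketch: the import path differs between SDK versions (older versions use `from langfuse.decorators import observe`), and `call_llm` / `search_web` are placeholders for your own functions.

```python
# Sketch: each decorated function becomes a span in the Langfuse trace of the
# top-level call, with inputs and outputs captured.
from langfuse import observe  # older SDKs: from langfuse.decorators import observe

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for your model provider call

@observe()  # nested call -> child span
def search_web(query: str) -> str:
    return f"<results for {query}>"  # placeholder tool

@observe()  # top-level call -> one trace per request
def answer_question(question: str) -> str:
    context = search_web(question)
    return call_llm(f"Answer using this context:\n{context}\n\nQuestion: {question}")
```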


Implementation Tips (from all three sources)

  • Tool schema = prompt. Document arguments, edge cases, and examples.
  • Guardrails hierarchy: JSON schema → allow-list APIs → max-iterations → human approval (see the sketch below).
  • Persist state (checkpoints) for fault tolerance and to enable offline re-runs in Langfuse.
  • Add reflection early. A cheap second-model critique catches many hallucinations.
  • Cost caps. Track usage.total_cost in traces; autonomy creep is real.
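
A compact sketch of the first two tips: a tool schema that doubles as documentation, plus the cheapest guardrail layers in code. The schema follows the generic JSON-schema style most function-calling APIs accept; the tool names and the `validate_call` helper are illustrative, not tied to a specific provider.

```python
# Sketch: tool schema as prompt-level documentation, plus layered guardrails
# (schema/argument validation -> allow-list -> iteration cap -> human approval).
# All names are illustrative; adapt them to your provider's function-calling format.
import json

SEARCH_ORDERS_TOOL = {
    "name": "search_orders",
    "description": (
        "Look up a customer's recent orders. Use only for order questions. "
        "Example arguments: {\"customer_id\": \"C-123\", \"limit\": 5}. Never guess customer_id."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string", "description": "Internal ID, format C-###"},
            "limit": {"type": "integer", "minimum": 1, "maximum": 20, "default": 5},
        },
        "required": ["customer_id"],
    },
}

ALLOWED_TOOLS = {"search_orders"}        # allow-list: anything else is rejected outright
MAX_ITERATIONS = 5                       # hard stop on runaway agent loops
NEEDS_HUMAN_APPROVAL = {"refund_order"}  # pause and escalate before executing these

def validate_call(name: str, raw_args: str) -> dict:
    """Cheapest guardrails first: unknown tools and malformed arguments never execute."""
    if name not in ALLOWED_TOOLS:
        raise ValueError(f"Tool {name!r} is not on the allow-list")
    args = json.loads(raw_args)  # raises on invalid JSON from the model
    missing = set(SEARCH_ORDERS_TOOL["parameters"]["required"]) - args.keys()
    if missing:
        raise ValueError(f"Missing required arguments: {sorted(missing)}")
    return args
```

The iteration cap and human-approval set would be enforced in the agent loop itself (compare the run_agent sketch earlier in this chapter).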

Further Reading

  • Harrison Chase, Agent architectures (tweet thread)
  • Anthropic, Building Effective Agents (2024-12)
  • Philipp Schmid, Zero to One: Learning Agentic Patterns (2025-05)

These links are the perfect starting points if you want to dive deeper or turn the patterns above into code (Philipp provides full Python snippets, Harrison shows LangGraph recipes, and Anthropic offers high-level design guidance).

