Event streaming works with both @llm_chat and @llm_function and is one of the key pieces behind TUI rendering, progress visualization, tool monitoring, and custom telemetry.
Quick Overview
LLM Call Events
Observe model-call start, streaming chunks, and completion.
Tool Call Events
Observe tool start, completion, batching, and failure details.
Execution Metrics
Inspect token usage, latency, call counts, and runtime statistics.
Custom UI
Drive progress bars, live message panes, and tool activity panels.
State Management
Build wrappers for context compression, persistence, or custom orchestration.
Why It Matters
Without event streaming, you usually only get a final result. With event streaming, you can:
- monitor model and tool activity in real time
- collect detailed performance metrics
- build richer interactive interfaces
- debug the ReAct loop more effectively
- add wrapper-based state management around otherwise stateless functions
Stateless Agent Design
SimpleLLMFunc treats the Agent itself as a function, not as a stateful object. That means:
- @llm_chat functions are fundamentally stateless
- state such as history is passed in and returned explicitly
- advanced behavior can be layered through wrappers instead of hidden inside the decorator
Enabling Event Streaming
Event streaming works for both @llm_chat and @llm_function. In both cases, the output type is ReactOutput.
llm_chat example
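Since the exact call signature depends on your setup, here is a hedged, self-contained sketch of the consumption pattern in event mode. The `EventYield` and `ResponseYield` dataclasses below are simplified stand-ins for SimpleLLMFunc's real types, and `fake_chat` stands in for an @llm_chat function; the branching logic is the part that carries over.

```python
# Self-contained sketch: the dataclasses below are simplified stand-ins for
# SimpleLLMFunc's ResponseYield / EventYield, and fake_chat stands in for an
# @llm_chat function running in event mode.
from dataclasses import dataclass

@dataclass
class EventYield:
    event: dict    # intermediate event payload
    origin: dict   # origin metadata (session_id, event_seq, ...)

@dataclass
class ResponseYield:
    content: str   # model output
    messages: list # updated message list, returned explicitly

def fake_chat(history):
    """Stand-in for an @llm_chat function with event streaming enabled."""
    yield EventYield(event={"type": "llm_call_start"}, origin={"event_seq": 0})
    yield EventYield(event={"type": "llm_chunk", "delta": "Hi"}, origin={"event_seq": 1})
    yield ResponseYield(
        content="Hi there",
        messages=history + [{"role": "assistant", "content": "Hi there"}],
    )

history = [{"role": "user", "content": "hello"}]
for output in fake_chat(history):
    if isinstance(output, ResponseYield):
        history = output.messages        # state comes back explicitly
    else:
        print(output.event["type"])      # render or log the intermediate event
```

The key pattern is the isinstance branch: intermediate EventYield items drive UI or logging, while the single ResponseYield carries the final content plus the updated history.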
llm_function example
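For @llm_function the same iteration pattern applies, except the final item carries a structured result instead of a chat reply. The sketch below uses plain tagged tuples rather than the real ReactOutput variants, and `fake_summarize` is a hypothetical stand-in for a decorated function in event mode.

```python
# Sketch: fake_summarize stands in for an @llm_function in event mode; it
# yields intermediate events first and the final structured result last.
def fake_summarize(text):
    yield ("event", {"type": "tool_call_start", "tool_name": "search"})
    yield ("event", {"type": "tool_call_end", "tool_name": "search"})
    yield ("response", {"summary": text.split(".")[0]})

events, result = [], None
for kind, payload in fake_summarize("Event streaming adds observability. Nothing else."):
    if kind == "event":
        events.append(payload)   # collect or render intermediate activity
    else:
        result = payload         # the final typed result arrives last

# result == {"summary": "Event streaming adds observability"}
```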
Return Shapes
- llm_chat default mode
- llm_chat event mode
- llm_function default mode
- llm_function event mode
Core Types
ReactOutput
The common event-stream output type:
ResponseYield | EventYield
ResponseYield
Carries model output plus the updated message list.
EventYield
Carries an intermediate event plus origin metadata.
EventOrigin
Describes where an event came from within the call tree.
EventOrigin
EventOrigin helps you understand where an event came from in nested or forked execution.
Useful fields include:
- session_id
- agent_call_id
- parent_agent_call_id
- event_seq
- fork_id
- fork_depth
- fork_seq
- selfref_instance_id
- memory_key / source_memory_key
- tool_name / tool_call_id
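As an illustration of how these fields can be used, here is a small routing sketch that buckets events into per-fork lanes. The origin dicts mirror the field names above; in real code you would read them from output.origin on each EventYield.

```python
# Sketch: bucket events into lanes keyed by fork_id so main-flow and forked
# events render separately. The origin dicts mirror the EventOrigin fields above.
from collections import defaultdict

lanes = defaultdict(list)

def route(event, origin):
    lane = origin.get("fork_id") or "main"          # no fork_id -> main flow
    seq = origin.get("fork_seq", origin["event_seq"])
    lanes[lane].append((seq, event["type"]))

route({"type": "llm_chunk"}, {"event_seq": 0})
route({"type": "tool_call_start"},
      {"event_seq": 1, "fork_id": "fork-a", "fork_depth": 1, "fork_seq": 0})
route({"type": "tool_call_end"},
      {"event_seq": 2, "fork_id": "fork-a", "fork_depth": 1, "fork_seq": 1})
```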
How to Read This Page
Start Here
If event streaming is new to you, start with enablement and return shapes.
Core Types
If you need to build on the API, understand
ReactOutput, ResponseYield, EventYield, and EventOrigin first.
Use Cases
If you care about UI, telemetry, or wrappers, jump to the practical patterns.
FAQ
If you are troubleshooting, jump to the end of the page.
Common Use Cases
Build a live UI
Use event streaming to update message panes, progress states, and tool activity in real time.
Track model and tool performance
Observe timing, tool counts, and token usage from event payloads.
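One way to do this is a small collector fed by event payloads. The event types and usage keys below are illustrative assumptions, not the exact SimpleLLMFunc event schema; adapt the keys to the payloads you actually observe.

```python
# Sketch of a metrics collector fed by event payloads. The keys used here
# (type, call_id, usage) are assumptions for illustration.
import time

class Metrics:
    def __init__(self):
        self.tool_calls = 0
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.started = {}      # call_id -> start time
        self.latencies = []    # completed call durations in seconds

    def on_event(self, event, now=None):
        now = time.monotonic() if now is None else now
        t = event.get("type")
        if t == "llm_call_start":
            self.started[event["call_id"]] = now
        elif t == "llm_call_end":
            self.latencies.append(now - self.started.pop(event["call_id"]))
            usage = event.get("usage", {})
            self.prompt_tokens += usage.get("prompt_tokens", 0)
            self.completion_tokens += usage.get("completion_tokens", 0)
        elif t == "tool_call_start":
            self.tool_calls += 1

m = Metrics()
m.on_event({"type": "llm_call_start", "call_id": "c1"}, now=0.0)
m.on_event({"type": "tool_call_start", "tool_name": "search"}, now=0.4)
m.on_event({"type": "llm_call_end", "call_id": "c1",
            "usage": {"prompt_tokens": 120, "completion_tokens": 45}}, now=1.2)
```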
Differentiate main flow vs forked flow
Use output.origin.fork_id and related fields to route events to the correct visual lane or collector.
Wrap stateless Agents with custom state logic
Use wrapper functions to compress history, persist state externally, or intercept and rewrite inputs and outputs.
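A minimal sketch of the wrapper pattern, assuming an @llm_chat-style function that takes and returns history explicitly (`echo_chat` is a hypothetical stand-in): the wrapper owns the state, trims it before each call, and persists the updated history afterwards.

```python
# Sketch: an explicit wrapper that adds state management around a stateless
# chat function. chat_fn stands in for an @llm_chat function that accepts and
# returns history explicitly.
def with_compressed_history(chat_fn, max_messages=6):
    history = []

    def wrapped(user_message):
        nonlocal history
        # keep only the most recent turns before handing history to the agent
        trimmed = history[-max_messages:]
        reply, new_history = chat_fn(user_message, trimmed)
        history = new_history    # persist updated state outside the agent
        return reply

    return wrapped

def echo_chat(message, history):  # stateless stand-in for @llm_chat
    new_history = history + [
        {"role": "user", "content": message},
        {"role": "assistant", "content": message.upper()},
    ]
    return message.upper(), new_history

chat = with_compressed_history(echo_chat, max_messages=4)
```

Because the decorated function stays stateless, the same wrapper shape also works for external persistence or input/output rewriting: only the code inside `wrapped` changes.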
Best Practices
Prefer event mode for debugging
Use enable_event=True when debugging Agent behavior, tool usage, or output generation.
Keep wrappers explicit
If you add stateful behavior, implement it in wrapper functions so state flow remains understandable.
Use origin metadata for routing
Treat EventOrigin as the canonical source of truth when rendering or collecting multi-branch events.
Do not assume only one event source
Model, tool, custom, and runtime events can all coexist in the same stream.
FAQ
Does event streaming work for both decorators?
Yes. Both @llm_chat and @llm_function support it.
Is the Agent stateful when event streaming is enabled?
No. Event streaming adds observability, not hidden internal state.
Can I modify history or state?
Yes, but do it in a wrapper around the decorated function instead of expecting the decorator itself to manage state for you.
How do I identify events from forked sub-tasks?
Inspect output.origin.fork_id, fork_depth, and related metadata.