llm_function is one of the core entry points in SimpleLLMFunc. It wraps an async Python function and lets the framework execute it through an LLM. You define the function signature, return type, and docstring, and the framework handles prompt construction, model invocation, parsing, and tool usage.
Quick Overview
Parameter Handling
Convert function arguments into model-friendly inputs automatically.
Type Safety
Parse model output into str, dict, Pydantic models, and other declared return types.
Tool Integration
Attach tools so the model can call external capabilities when needed.
Prompt Templates
Customize prompt structure and use dynamic template parameters at call time.
Event Stream
Enable enable_event=True to inspect intermediate execution events and token usage.
Important Note
llm_function only supports functions defined with async def. Call the decorated function with await, or use asyncio.run(...) at the top level.
Basic Syntax
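A minimal sketch of the basic shape. The import path is an assumption based on the package name; here a stand-in decorator is defined in its place so the snippet runs without the package or an API key:

```python
import asyncio

# In real use you would import the decorator from the package, e.g.
#   from SimpleLLMFunc import llm_function   (import path is an assumption).
# The stand-in below simply returns the docstring, which is the core prompt
# content the real framework would send to the model.
def llm_function(llm_interface=None, **kwargs):
    def decorator(fn):
        async def wrapper(*args, **kw):
            return fn.__doc__
        return wrapper
    return decorator

@llm_function(llm_interface=None)  # a real call needs a real LLM interface
async def summarize(text: str) -> str:
    """Summarize the input text in one sentence."""
    pass  # never executed: the docstring is the prompt

result = asyncio.run(summarize("A long article..."))
print(result)  # -> Summarize the input text in one sentence.
```

Note that the decorated function must be awaited (here via asyncio.run), matching the async-only rule above.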
The function body itself is not executed. The docstring is the prompt, so pass is usually the right implementation.
Parameters
Core parameters
- llm_interface: required, the LLM interface instance
- toolkit: optional, a list of tools or functions decorated with @tool
- max_tool_calls: optional, limit on tool invocations; defaults to None
Prompt templates
- system_prompt_template: custom system prompt wrapper
- user_prompt_template: custom user prompt wrapper
Event stream and model arguments
- enable_event=False: return the parsed result directly
- enable_event=True: return an async generator of ReactOutput
- Any extra llm_kwargs are passed to the model interface, such as temperature or top_p
- retry_times controls retries for empty responses
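The retry_times behavior can be pictured as a simple loop. This is an illustrative sketch of the described behavior, not the library's implementation; call_model is a hypothetical stand-in for the model call:

```python
import asyncio

async def call_model(prompt: str, attempt_log: list) -> str:
    # Hypothetical stand-in for the model call; it returns an empty
    # response twice, then a usable answer.
    attempt_log.append(prompt)
    return "" if len(attempt_log) < 3 else "parsed answer"

async def run_with_retries(prompt: str, retry_times: int = 3) -> str:
    attempts = []
    response = await call_model(prompt, attempts)
    # retry_times bounds how many extra attempts are made on empty output
    for _ in range(retry_times):
        if response.strip():
            break
        response = await call_model(prompt, attempts)
    return response

print(asyncio.run(run_with_retries("Summarize: ...")))  # -> parsed answer
```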
Abort During Execution
You can pass an AbortSignal through _abort_signal to stop the current run, interrupt streaming output, and cancel in-flight tool calls.
With enable_event=True, ReactEndEvent.extra includes aborted: true and, when available, an abort_reason.
See Abort and Cancellation.
Custom Prompt Templates
System template placeholders
- {function_description}
- {parameters_description}
- {return_type_description}
User template placeholders
- {parameters}
Dynamic Template Parameters
_template_params lets you fill placeholders in the docstring at call time so the same function can adapt to multiple scenarios.
Role switching
One function can take on different roles.
Task adaptation
Adjust behavior based on call-time context.
Transparent handling
_template_params affects template rendering only and is not sent to the model as an argument.
Fallback behavior
If placeholders are incomplete, the framework falls back to the original docstring.
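The substitution-with-fallback semantics can be demonstrated with plain str.format. This is an illustration of the described behavior, not the library's code:

```python
def render_docstring(doc: str, template_params: dict) -> str:
    # Fill {placeholders}; if any placeholder has no value, fall back
    # to the original docstring, as described above.
    try:
        return doc.format(**template_params)
    except (KeyError, IndexError):
        return doc

doc = "You are a {role}. Answer in a {tone} tone."
print(render_docstring(doc, {"role": "lawyer", "tone": "formal"}))
# -> You are a lawyer. Answer in a formal tone.
print(render_docstring(doc, {"role": "lawyer"}))  # missing {tone}
# -> You are a {role}. Answer in a {tone} tone.
```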
Execution Flow
Capture the function call
The decorator receives the real arguments and parses the function signature.
Build prompts
The docstring becomes the core prompt content, parameters are formatted into user-facing input, and any templates are applied.
Call the model and tools
The framework sends the prompt to the model and automatically handles tool invocations if the model requests them.
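The three steps above can be sketched end to end with a mock model. All names here are illustrative, not the library's internals:

```python
import asyncio, inspect, json

async def mock_model(prompt: str) -> str:
    # Stand-in for the LLM call; echoes a canned structured answer.
    return json.dumps({"answer": "4"})

async def run_llm_function(fn, *args, **kwargs):
    # 1. Capture the call: bind the real arguments to the signature.
    bound = inspect.signature(fn).bind(*args, **kwargs)
    bound.apply_defaults()
    # 2. Build prompts: the docstring is the core content, and the
    #    arguments are formatted into user-facing input.
    params = ", ".join(f"{k}={v!r}" for k, v in bound.arguments.items())
    prompt = f"{inspect.getdoc(fn)}\nInputs: {params}"
    # 3. Call the model (tool handling would loop here if requested).
    raw = await mock_model(prompt)
    return json.loads(raw)

async def add(a: int, b: int) -> dict:
    """Add the two numbers and answer as JSON."""
    pass

print(asyncio.run(run_llm_function(add, 2, 2)))  # -> {'answer': '4'}
```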
Built-in Capabilities
Type conversion support
- Primitive types such as str, int, float, bool
- Container types such as List and Dict
- Pydantic models
- Fallback to plain text for unsupported types
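A simplified picture of this kind of return-type dispatch (illustrative only; the library's actual parser is more thorough):

```python
import json
from typing import get_origin

def parse_output(raw: str, return_type):
    # Dispatch on the declared return annotation, falling back to text.
    if return_type is str:
        return raw
    if return_type in (int, float):
        return return_type(raw.strip())
    if get_origin(return_type) in (list, dict) or return_type in (list, dict):
        return json.loads(raw)  # containers are requested as JSON
    # Pydantic models would be validated from the JSON output here.
    return raw  # unsupported type: fall back to plain text

print(parse_output("42", int))         # -> 42
print(parse_output('{"k": 1}', dict))  # -> {'k': 1}
print(parse_output("[1, 2, 3]", list)) # -> [1, 2, 3]
```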
Output constraints
- Simple return types typically use plain text constraints
- Complex types such as Pydantic models, List, Dict, or Union use stricter structured constraints
Error handling
- Retry empty model outputs
- Capture and log invocation failures
- Provide fallback handling when output parsing fails
Logging
- Record parameters, prompt content, and responses
- Track execution with trace_id
- Use normal logging levels for debugging and production analysis
Examples
Basic text processing
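A sketch of a basic text-processing function. The real import would be from the package (assumed path shown in the comment); a stand-in decorator with a canned answer is used so the snippet runs without the package or an API key:

```python
import asyncio

CANNED = "Bonjour le monde"  # what a real model might answer

# Stand-in for `from SimpleLLMFunc import llm_function` (assumed path).
def llm_function(llm_interface=None, **kwargs):
    def decorator(fn):
        async def wrapper(*args, **kw):
            return CANNED
        return wrapper
    return decorator

@llm_function(llm_interface=None)  # use a configured interface in practice
async def translate(text: str, target_language: str) -> str:
    """Translate the text into the target language.

    Args:
        text: the text to translate
        target_language: the language to translate into
    """
    pass

print(asyncio.run(translate("Hello world", "French")))  # -> Bonjour le monde
```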
Structured output
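A sketch of a function with a dict return type. The stand-in decorator returns an already-parsed dict, standing in for the framework's structured-output handling; the real import path is an assumption:

```python
import asyncio
from typing import Any, Dict

CANNED: Dict[str, Any] = {"sentiment": "positive", "confidence": 0.9}

# Stand-in for `from SimpleLLMFunc import llm_function` (assumed path):
# returns a parsed dict, as the framework would after applying its
# structured-output constraints to the model's JSON answer.
def llm_function(llm_interface=None, **kwargs):
    def decorator(fn):
        async def wrapper(*args, **kw):
            return CANNED
        return wrapper
    return decorator

@llm_function(llm_interface=None)
async def analyze_sentiment(text: str) -> Dict[str, Any]:
    """Classify the sentiment of the text and give a confidence in [0, 1]."""
    pass

result = asyncio.run(analyze_sentiment("I love this library!"))
print(result["sentiment"], result["confidence"])  # -> positive 0.9
```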
Tool-augmented function
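A sketch of attaching a tool via toolkit. Both decorators are stand-ins for the assumed imports `from SimpleLLMFunc import llm_function, tool`, so the snippet runs without the package; in a real run the model, not the wrapper, decides when to call the tool:

```python
import asyncio

# Stand-in for the @tool decorator: just returns the function unchanged.
def tool(name=None, description=None):
    def decorator(fn):
        return fn
    return decorator

# Stand-in for @llm_function: calls the tool directly to show the round trip.
def llm_function(llm_interface=None, toolkit=None, max_tool_calls=None, **kw):
    def decorator(fn):
        async def wrapper(*args, **kwargs):
            return f"It is {get_weather(args[0])} in {args[0]}."
        return wrapper
    return decorator

@tool(name="get_weather", description="Look up the current weather for a city")
def get_weather(city: str) -> str:
    return {"Paris": "18°C and sunny"}.get(city, "unknown")

@llm_function(llm_interface=None, toolkit=[get_weather], max_tool_calls=5)
async def weather_report(city: str) -> str:
    """Report the current weather for the city, using tools if needed."""
    pass

print(asyncio.run(weather_report("Paris")))
# -> It is 18°C and sunny in Paris.
```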
Custom prompt templates
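A sketch of passing custom templates. The stand-in decorator renders and returns the system prompt instead of calling a model, so you can see how the placeholders are filled; the real import path and rendering details are assumptions:

```python
import asyncio, inspect

SYSTEM_TEMPLATE = (
    "You are a careful assistant.\n"
    "Task: {function_description}\n"
    "Inputs: {parameters_description}\n"
    "Return: {return_type_description}"
)
USER_TEMPLATE = "Here are the arguments:\n{parameters}"

# Stand-in for `from SimpleLLMFunc import llm_function` (assumed path):
# fills the system template from the function's metadata and returns it.
def llm_function(llm_interface=None, system_prompt_template=None,
                 user_prompt_template=None, **kw):
    def decorator(fn):
        async def wrapper(*args, **kwargs):
            return system_prompt_template.format(
                function_description=inspect.getdoc(fn),
                parameters_description=str(inspect.signature(fn)),
                return_type_description="str",
            )
        return wrapper
    return decorator

@llm_function(
    llm_interface=None,
    system_prompt_template=SYSTEM_TEMPLATE,
    user_prompt_template=USER_TEMPLATE,
)
async def shorten(text: str) -> str:
    """Rewrite the text in at most ten words."""
    pass

print(asyncio.run(shorten("some long text")))
```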
Pydantic return models
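A sketch of returning a Pydantic model (Pydantic v2 API). The stand-in decorator validates a canned JSON answer against the declared return model, mirroring what the framework does with the model's structured output; the real import path is an assumption:

```python
import asyncio
from pydantic import BaseModel

class Recipe(BaseModel):
    title: str
    servings: int
    steps: list[str]

CANNED_JSON = '{"title": "Pancakes", "servings": 2, "steps": ["Mix", "Fry"]}'

# Stand-in for `from SimpleLLMFunc import llm_function` (assumed path):
# parses the canned JSON into the declared return model.
def llm_function(llm_interface=None, **kwargs):
    def decorator(fn):
        async def wrapper(*args, **kw):
            return Recipe.model_validate_json(CANNED_JSON)
        return wrapper
    return decorator

@llm_function(llm_interface=None)
async def invent_recipe(ingredients: str) -> Recipe:
    """Invent a simple recipe using the given ingredients."""
    pass

recipe = asyncio.run(invent_recipe("flour, milk, eggs"))
print(recipe.title, recipe.servings)  # -> Pancakes 2
```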
When to use llm_function
Use llm_function when:
- you want stateless, typed LLM calls
- inputs and outputs map naturally to a Python function signature
- you need structured parsing without maintaining chat history manually
- you want to add tools to a single-purpose workflow
Use llm_chat instead when you need multi-turn history and Agent-style conversations.