llm_function is one of the core entry points in SimpleLLMFunc. It wraps an async Python function and lets the framework execute it through an LLM. You define the function signature, return type, and docstring, and the framework handles prompt construction, model invocation, parsing, and tool usage.

Quick Overview

  • Parameter Handling: convert function arguments into model-friendly inputs automatically.
  • Type Safety: parse model output into str, dict, Pydantic models, and other declared return types.
  • Tool Integration: attach tools so the model can call external capabilities when needed.
  • Prompt Templates: customize prompt structure and use dynamic template parameters at call time.
  • Event Stream: set enable_event=True to inspect intermediate execution events and token usage.

Important Note

llm_function only supports functions defined with async def. Call the decorated function with await, or use asyncio.run(...) at the top level.

Basic Syntax

from SimpleLLMFunc import llm_function


@llm_function(
    llm_interface=llm_interface,
    toolkit=None,
    max_tool_calls=None,
    system_prompt_template=None,
    user_prompt_template=None,
    **llm_kwargs,
)
async def your_function(param1: Type1, param2: Type2) -> ReturnType:
    """Describe the function behavior here."""
    pass
The function body itself is not executed. The docstring is the prompt, so pass is usually the right implementation.

Parameters

  • llm_interface: required, the LLM interface instance
  • toolkit: optional, a list of tools or functions decorated with @tool
  • max_tool_calls: optional, limit on tool invocations; defaults to None
  • system_prompt_template: custom system prompt wrapper
  • user_prompt_template: custom user prompt wrapper
  • enable_event: optional, defaults to False. With False, the call returns the parsed result directly; with True, it returns an async generator of ReactOutput events
  • retry_times: optional, controls the number of retries when the model returns an empty response
  • Any extra llm_kwargs are passed through to the model interface, such as temperature or top_p

Abort During Execution

You can pass an AbortSignal through _abort_signal to stop the current run, interrupt streaming output, and cancel in-flight tool calls.
import asyncio

from SimpleLLMFunc.hooks import ABORT_SIGNAL_PARAM, AbortSignal

abort_signal = AbortSignal()


async def run():
    async def abort_later():
        await asyncio.sleep(1.0)
        abort_signal.abort("timeout")

    asyncio.create_task(abort_later())

    # Assumes your_function was decorated with enable_event=True,
    # so the call yields ReactOutput events.
    async for output in your_function(
        param1,
        **{ABORT_SIGNAL_PARAM: abort_signal},
    ):
        ...


asyncio.run(run())
When enable_event=True, ReactEndEvent.extra includes aborted: true and, when available, an abort_reason. See Abort and Cancellation.
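Cooperative cancellation like this can be pictured with a plain asyncio.Event standing in for AbortSignal. The sketch below is an analogy only, not the library's implementation: the running task periodically checks a flag and stops at the next safe point.

```python
import asyncio

# Conceptual analogue only: AbortSignal works as a cooperative flag that the
# running task checks at safe points. asyncio.Event illustrates the idea;
# it is NOT the library's actual implementation.
async def worker(stop: asyncio.Event) -> str:
    for _ in range(100):
        if stop.is_set():
            return "aborted"       # observed the abort request
        await asyncio.sleep(0.01)  # simulated unit of work
    return "finished"

async def main() -> str:
    stop = asyncio.Event()
    task = asyncio.create_task(worker(stop))
    await asyncio.sleep(0.05)
    stop.set()  # request abort, like abort_signal.abort("timeout")
    return await task

result = asyncio.run(main())
print(result)
```

The key point carried over to the real API: aborting is cooperative, so streaming output and in-flight tool calls stop at the framework's checkpoints rather than being killed mid-instruction.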

Custom Prompt Templates

You can supply your own system_prompt_template and user_prompt_template. The following placeholders are substituted when the prompt is built:

  • {function_description}: the function's docstring
  • {parameters_description}: a formatted description of the parameter names and types
  • {return_type_description}: a description of the declared return type
  • {parameters}: the actual argument values for the current call
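Conceptually, these placeholders are filled the way str.format fills named fields. The following is a minimal stdlib-only sketch of that substitution, not the framework's actual prompt builder; the example values are made up:

```python
# Minimal sketch of placeholder substitution; the framework derives these
# values from the function signature and docstring for you.
system_template = (
    "Task:\n{function_description}\n\n"
    "Parameters:\n{parameters_description}\n\n"
    "Return type:\n{return_type_description}"
)

prompt = system_template.format(
    function_description="Generate a concise summary of the input text.",
    parameters_description="text (str): the text to summarize",
    return_type_description="str",
)
print(prompt)
```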

Dynamic Template Parameters

_template_params lets you fill placeholders in the docstring at call time so the same function can adapt to multiple scenarios. Example:
@llm_function(llm_interface=llm)
async def flexible_function(text: str) -> str:
    """As a {role}, please {action} the following text in a {style} style."""
    pass


result = await flexible_function(
    text,
    _template_params={
        "role": "professional editor",
        "action": "polish",
        "style": "academic",
    },
)

  • Role switching: one function can act as different roles.
  • Task adaptation: adjust behavior based on call-time context.
  • Transparent handling: _template_params affects template rendering only and is not sent to the model as an argument.
  • Fallback behavior: if placeholders are incomplete, the framework falls back to the original docstring.
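The fallback can be pictured as a format attempt that reverts when a placeholder is missing. This is a sketch of the observable behavior, not the framework's code:

```python
# Sketch of the fallback: if any placeholder is missing from
# _template_params, the original docstring is used unchanged.
docstring = "As a {role}, please {action} the following text."
template_params = {"role": "professional editor"}  # "action" is missing

try:
    prompt = docstring.format(**template_params)
except KeyError:
    prompt = docstring  # incomplete placeholders: fall back to the docstring

print(prompt)
```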

Execution Flow

1. Capture the function call: the decorator receives the real arguments and parses the function signature.
2. Build prompts: the docstring becomes the core prompt content, parameters are formatted into user-facing input, and any templates are applied.
3. Call the model and tools: the framework sends the prompt to the model and automatically handles tool invocations if the model requests them.
4. Parse the return value: the model output is parsed into the declared return type and returned to the caller.

Built-in Capabilities

Return type parsing:
  • Primitive types such as str, int, float, bool
  • Container types such as List and Dict
  • Pydantic models
  • Fallback to plain text for unsupported types

Output constraints:
  • Simple return types typically use plain text constraints
  • Complex types such as Pydantic models, List, Dict, or Union use stricter structured constraints

Error handling:
  • Retry empty model outputs
  • Capture and log invocation failures
  • Provide fallback handling when output parsing fails

Logging:
  • Record parameters, prompt content, and responses
  • Track execution with trace_id
  • Use normal logging levels for debugging and production analysis
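For complex return types, parsing amounts to decoding structured text emitted by the model. A simplified stdlib-only illustration of the Dict case follows; the framework's real parser also validates the result against the declared annotation, and the model output shown is made up:

```python
import json

# Simplified illustration of structured-output parsing for a Dict return
# type: the model is constrained to emit JSON, which is then decoded.
raw_model_output = (
    '{"sentiment": "positive", "confidence": 0.92, '
    '"keywords": ["great", "fast"]}'
)

parsed = json.loads(raw_model_output)
print(parsed["sentiment"], parsed["confidence"])
```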

Examples

Basic text processing

import asyncio

from SimpleLLMFunc import OpenAICompatible, llm_function

models = OpenAICompatible.load_from_json_file("provider.json")
llm = models["openai"]["gpt-3.5-turbo"]


@llm_function(llm_interface=llm)
async def summarize_text(text: str, max_words: int = 100) -> str:
    """Generate a concise summary of the input text within the requested word limit."""
    pass


async def main():
    long_text = "This is a long article about LLM application design..."
    summary = await summarize_text(long_text, max_words=50)
    print(summary)


asyncio.run(main())

Structured output

from typing import Any, Dict


@llm_function(llm_interface=llm)
async def analyze_sentiment(text: str) -> Dict[str, Any]:
    """Analyze sentiment and return sentiment label, confidence, and keywords."""
    pass

Tool-augmented function

from SimpleLLMFunc import OpenAICompatible, llm_function, tool


@tool
async def search_web(query: str) -> str:
    """Search the web for relevant information."""
    return f"Search results for: {query}"


@tool
async def calculate(expression: str) -> float:
    """Evaluate a mathematical expression."""
    # Demo only: eval on untrusted input is unsafe; use a proper expression
    # parser in production code.
    return eval(expression)


@llm_function(llm_interface=llm, toolkit=[search_web, calculate])
async def research_and_calculate(topic: str, calculation: str) -> str:
    """Search a topic, perform a calculation, and return a combined report."""
    pass

Custom prompt templates

custom_system_template = """
You are a professional data analyst.

Parameters:
{parameters_description}

Return type:
{return_type_description}

Task:
{function_description}
"""

custom_user_template = """
Please analyze the following data:
{parameters}
"""


@llm_function(
    llm_interface=llm,
    system_prompt_template=custom_system_template,
    user_prompt_template=custom_user_template,
)
async def analyze_data(data: str) -> str:
    """Analyze the provided data and report the key findings."""
    pass

Pydantic return models

from pydantic import BaseModel


class TaskResult(BaseModel):
    success: bool
    message: str
    tasks: list[str]
    estimated_time: int


@llm_function(llm_interface=llm)
async def create_project_plan(project_description: str, deadline_days: int) -> TaskResult:
    """Create a project plan with tasks, time estimate, and recommendations."""
    pass
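The decorated call returns an ordinary TaskResult instance, so fields are plain typed attributes. For illustration, here is a hand-built instance standing in for what create_project_plan would return; the field values are made up:

```python
from pydantic import BaseModel


class TaskResult(BaseModel):
    success: bool
    message: str
    tasks: list[str]
    estimated_time: int


# Hand-built instance standing in for the parsed model output.
plan = TaskResult(
    success=True,
    message="Plan created",
    tasks=["Design schema", "Build API", "Write tests"],
    estimated_time=14,
)
print(plan.tasks[0], plan.estimated_time)
```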

When to use llm_function

Use llm_function when:
  • you want stateless, typed LLM calls
  • inputs and outputs map naturally to a Python function signature
  • you need structured parsing without maintaining chat history manually
  • you want to add tools to a single-purpose workflow
Use llm_chat instead when you need multi-turn history and Agent-style conversations.