If you are new to SimpleLLMFunc, start with Quick Start and then use this page to decide what to read next.
Quick Navigation
Infrastructure
Configuration and Environment
Learn how .env, provider.json, logging, and runtime configuration work.
LLM Interface Layer
Understand OpenAICompatible, OpenAIResponsesCompatible, API key pools, and token bucket rate limiting.
Developer Experience
llm_function Decorator
Build stateless LLM-powered functions with structured output, templates, and tool usage.
llm_chat Decorator
Build multi-turn chat applications, Agents, and streaming interactions.
Agent Execution
Event Stream System
Observe the ReAct loop and consume model, tool, and runtime events.
Abort and Cancellation
Use AbortSignal to interrupt a running turn and shut it down cleanly.
Tools and Runtime
Runtime Primitives
Learn how runtime.*, PrimitivePack, and backend lifecycle management work.
Tool System
Define tools, return structured values, and inject tool usage guidance.
PyRepl Runtime
Execute Python code in a persistent session and expose runtime primitives.
UI and Interaction
Terminal TUI
Wrap @llm_chat with a Textual-based terminal interface.
Integrations and Examples
Langfuse Integration
Add observability for model calls, tool calls, and event streaming.
Examples
Browse runnable examples by scenario and recommended learning order.
Contributing
Learn the repository conventions for issues, pull requests, and local development.
Browse by Task
I want to get started quickly
Start with Quick Start, then come back here to choose the next topic.
I want to configure models and runtime settings
Read Configuration and Environment for .env, provider.json, and logging.
I want to create LLM functions
Read llm_function Decorator to learn about signatures, output types, templates, and tools.
I want to build a chat app or Agent
Read llm_chat Decorator to learn about history handling, streaming output, and runtime context.
I want to inspect execution in real time
Read Event Stream System to consume EventYield and ResponseYield in custom UI or telemetry code.
I want the model to call external capabilities
Read Tool System. If you also need persistent Python state, pair it with PyRepl Runtime.
I want to understand runtime primitives and fork workflows
Read Runtime Primitives and PyRepl Runtime.
I want to build a terminal UI
Read Terminal TUI to learn about @tui, interrupts, hotkeys, and custom event hooks.
I want complete runnable examples
Go to Examples and browse by scenario.
Recommended Learning Paths
Beginner path
- Read Quick Start
- Read llm_function Decorator
- Run a structured-output example from Examples
Intermediate path
- Read llm_chat Decorator
- Read Tool System
- Revisit Configuration and Environment to tune providers and limits
Advanced path
- Read LLM Interface Layer
- Read Event Stream System
- Read Runtime Primitives and PyRepl Runtime
Browse by Capability
| Capability | Documentation | What it covers |
|---|---|---|
| Basic configuration | Configuration and Environment | API keys, environment variables, and provider.json |
| Stateless LLM tasks | llm_function Decorator | Text processing, typed outputs, structured extraction |
| Chat applications | llm_chat Decorator | Multi-turn conversation, history handling, streaming |
| Event streaming | Event Stream System | Realtime observation, tool call telemetry, performance insight |
| Abort control | Abort and Cancellation | Interrupting model output and tool execution |
| Tool integration | Tool System | Tool definitions, invocation, multimodal returns |
| Runtime primitives | Runtime Primitives | CodeAct runtime capabilities and primitive design |
| Interface design | LLM Interface Layer | API abstraction, key pools, and rate limiting |
| Runnable examples | Examples | End-to-end examples for common scenarios |
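The token bucket rate limiting listed under Interface design can be sketched generically. This is a minimal illustrative implementation, not the SimpleLLMFunc one; class and method names here are invented for the example:

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter (illustrative stand-in)."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity        # maximum tokens the bucket can hold
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(capacity=2, refill_rate=1.0)
print(bucket.try_acquire())  # True
print(bucket.try_acquire())  # True
print(bucket.try_acquire())  # False: bucket drained, needs ~1 s to refill
```

A request that fails to acquire a token is typically delayed and retried; see LLM Interface Layer for how the framework actually applies this per key or per model.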
FAQ Quick Reference
How do I configure API keys?
Read Configuration and Environment. Pay special attention to the provider-to-model-list structure in provider.json.
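As a rough orientation, the provider-to-model-list structure maps a provider name to a list of model entries. The field names below are assumptions for illustration; the authoritative schema is in Configuration and Environment:

```json
{
  "my_provider": [
    {
      "model_name": "gpt-4o",
      "api_keys": ["sk-..."],
      "base_url": "https://api.example.com/v1"
    }
  ]
}
```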
Do the decorators support synchronous functions?
No. @llm_function, @llm_chat, and @tool all require async def functions.
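The async-only rule can be illustrated with a stand-in decorator (not the real SimpleLLMFunc API) that rejects plain functions at decoration time:

```python
import asyncio
import inspect

def require_async(func):
    """Stand-in decorator enforcing the async-only rule (illustrative only)."""
    if not inspect.iscoroutinefunction(func):
        raise TypeError(f"{func.__name__} must be declared with 'async def'")
    return func

@require_async
async def summarize(text: str) -> str:
    await asyncio.sleep(0)  # placeholder for the real model call
    return text[:10]

print(asyncio.run(summarize("hello world")))  # hello worl

try:
    @require_async
    def bad(text: str) -> str:  # not async: rejected immediately
        return text
except TypeError as e:
    print(e)
```

The practical consequence is that every decorated function must be awaited (or driven by asyncio.run) by the caller.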
How do I build multi-turn chat?
Start with llm_chat Decorator and learn how history, stream=True, and return modes work.
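The core idea behind multi-turn chat is accumulating a role-tagged history across turns. A minimal generic sketch, with a fake model call standing in for the real one (the actual decorator manages this bookkeeping for you):

```python
def fake_model(history: list[dict]) -> str:
    """Stand-in for a model call: echoes the latest user message."""
    return "echo: " + history[-1]["content"]

def chat_turn(history: list[dict], user_message: str):
    # Append the user message, call the model, append the reply.
    history.append({"role": "user", "content": user_message})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply, history

history = []
reply, history = chat_turn(history, "hello")
print(reply)         # echo: hello
print(len(history))  # 2: one user and one assistant message
```

With stream=True the reply arrives as incremental chunks instead of one string, but the history bookkeeping is the same.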
How do I interrupt the current response?
Read Abort and Cancellation and use AbortSignal.
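The mechanics of interrupting a streaming turn can be sketched with asyncio.Event as a stand-in for AbortSignal (the names and flow here are illustrative, not the real API):

```python
import asyncio

async def streaming_turn(abort: asyncio.Event) -> list[str]:
    chunks = []
    for i in range(100):
        if abort.is_set():
            break  # stop generating as soon as an abort is requested
        chunks.append(f"chunk-{i}")
        await asyncio.sleep(0)  # yield control so the aborter can run
    return chunks

async def main() -> int:
    abort = asyncio.Event()
    task = asyncio.create_task(streaming_turn(abort))
    await asyncio.sleep(0)  # let the turn start producing
    abort.set()             # request cancellation mid-stream
    chunks = await task
    return len(chunks)

print(asyncio.run(main()) < 100)  # True: the turn stopped early
```

The key design point is cooperative cancellation: the running turn checks the signal at safe points and shuts down cleanly, rather than being killed mid-operation.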
How do I let the model call functions or external APIs?
Read Tool System to learn @tool, return types, and tool guidance injection.
Which LLM providers are supported?
Read LLM Interface Layer. The framework supports both OpenAI-compatible chat/completions adapters and OpenAI Responses API adapters, and can connect to many compatible services.
How should I debug failures or retry issues?
Start with the troubleshooting section in LLM Interface Layer, then use logs and the event stream for deeper inspection.
Other Resources
Project Introduction
Learn the design philosophy, core features, and project layout.
Examples
Run example code directly and compare different patterns.
GitHub Repository
Browse source code, issues, and release history.
Most pages include complete code examples. When something goes wrong, first check the troubleshooting or FAQ section on the relevant page, then compare your code against the examples.