All decorators in SimpleLLMFunc, such as `@llm_function`, `@llm_chat`, and `@tool`, require functions defined with `async def`.

## Quick Access
### Basic Function Example
Start with structured output and basic prompt design.

### Event Stream Chat Example
Inspect the full ReAct loop, tool calls, and token statistics.

### Terminal TUI Agent
Explore the combination of `@tui` and `@llm_chat`.

### Provider Config Template
Reuse a provider configuration template with multiple vendors and keys.
## Highlighted Examples

### llm_function structured output (Pydantic)
File: `examples/llm_function_pydantic_example.py`
Useful for learning:
- nested Pydantic models
- structured parsing
- typed return value handling
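As a stdlib-only sketch of what typed return value handling buys you: the model's raw JSON reply becomes a nested, typed object instead of a dict. This is not SimpleLLMFunc's actual API — the real example uses `@llm_function` with nested Pydantic models and a live model call; the dataclasses and names below are illustrative stand-ins.

```python
import asyncio
import json
from dataclasses import dataclass

# Illustrative nested result types (the real example uses Pydantic models).
@dataclass
class Author:
    name: str

@dataclass
class BookSummary:
    title: str
    author: Author
    rating: int

async def summarize_book(raw_reply: str) -> BookSummary:
    # In the decorated version the LLM produces this JSON; here it is passed in.
    data = json.loads(raw_reply)
    return BookSummary(
        title=data["title"],
        author=Author(name=data["author"]["name"]),
        rating=data["rating"],
    )

reply = '{"title": "Dune", "author": {"name": "Frank Herbert"}, "rating": 5}'
summary = asyncio.run(summarize_book(reply))
print(summary.author.name)  # nested, typed access instead of raw dict lookups
```

Note the `async def` on the parsed function — per the note at the top of this page, that is required for every decorated function.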
### Dynamic template parameters
File: `examples/dynamic_template_demo.py`
Useful for learning:
- one function serving many scenarios
- switching prompts by role or style
- reducing repetitive function definitions
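The core idea — one function body whose prompt is switched by role or style parameters — can be sketched with plain `str.format` templates. The template store and names below are assumptions for illustration, not the demo's actual code.

```python
# One template store serves many scenarios instead of one function per scenario.
TEMPLATES = {
    "teacher": "Explain {topic} step by step for a beginner.",
    "reviewer": "Critique the following take on {topic} in a {style} tone.",
}

def build_prompt(role: str, **params: str) -> str:
    # Switch the prompt by role; unknown roles fail loudly.
    try:
        template = TEMPLATES[role]
    except KeyError:
        raise ValueError(f"no template for role {role!r}")
    return template.format(**params)

print(build_prompt("teacher", topic="recursion"))
print(build_prompt("reviewer", topic="recursion", style="strict"))
```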
### llm_function event stream + Pydantic

### Event stream chatbot
File: `examples/event_stream_chatbot.py`
Further reading: Event Stream System
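The consumer side of an event stream — text chunks, a tool call, and final token statistics arriving as typed events — can be sketched with an async generator. The event names and dict shapes below are assumptions for illustration; see the Event Stream System docs for SimpleLLMFunc's real event types.

```python
import asyncio
from typing import AsyncIterator

async def fake_stream() -> AsyncIterator[dict]:
    # Stand-in for a real model stream: chunks, one tool call, usage stats.
    yield {"type": "chunk", "text": "Looking that up"}
    yield {"type": "tool_call", "name": "search", "args": {"q": "weather"}}
    yield {"type": "chunk", "text": " -> sunny."}
    yield {"type": "usage", "prompt_tokens": 12, "completion_tokens": 7}

async def run_chat() -> tuple[str, dict]:
    text, usage = "", {}
    async for event in fake_stream():
        if event["type"] == "chunk":
            text += event["text"]          # accumulate streamed text
        elif event["type"] == "tool_call":
            print(f"tool call: {event['name']}({event['args']})")
        elif event["type"] == "usage":
            usage = event                  # final token statistics
    return text, usage

text, usage = asyncio.run(run_chat())
print(text)
```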
### Terminal TUI example
File: `examples/tui_chat_example.py`
Further reading: Terminal TUI

### Runtime memory primitives
### General TUI Agent
### Responses API TUI Agent
File: `examples/response_api_example.py`
Useful for learning:
- `OpenAIResponsesCompatible` with `reasoning={...}`
- system prompt to Responses `instructions` adaptation at runtime
- `selfref.fork` / `gather_all(...)` result parsing via `status`, `response`, and `result`
- workspace-scoped TUI agent workflows
### Agent as a tool
### Token usage monitoring
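The pattern behind token usage monitoring — accumulating per-call usage into running totals — can be sketched as follows. The `usage` dict shape and the `TokenMonitor` class are assumptions for illustration, not SimpleLLMFunc's reporting API.

```python
class TokenMonitor:
    """Accumulates per-call token usage into running totals."""

    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record(self, usage: dict) -> None:
        # Missing fields count as zero so partial reports do not crash.
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    @property
    def total(self) -> int:
        return self.prompt_tokens + self.completion_tokens

monitor = TokenMonitor()
for usage in ({"prompt_tokens": 12, "completion_tokens": 30},
              {"prompt_tokens": 8, "completion_tokens": 20}):
    monitor.record(usage)
print(monitor.total)  # 70
```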
### Custom tool events
### Parallel tool calling
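Why parallel tool calling helps: independent async tools can be awaited together with `asyncio.gather` instead of one after another. The tools below are stand-ins, not the example's actual `@tool` functions.

```python
import asyncio

async def get_weather(city: str) -> str:
    await asyncio.sleep(0.1)  # simulated I/O latency
    return f"{city}: sunny"

async def get_time(city: str) -> str:
    await asyncio.sleep(0.1)  # simulated I/O latency
    return f"{city}: 12:00"

async def main() -> list[str]:
    # Both calls run concurrently, so total latency is ~0.1s rather than ~0.2s.
    return list(await asyncio.gather(get_weather("Paris"), get_time("Paris")))

results = asyncio.run(main())
print(results)
```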
### Multimodal processing
## Provider Configuration Examples

### provider.json Example
See the full provider-to-model configuration structure.

### provider_template.json
Reuse a template with multiple providers, keys, and rate limits.
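The authoritative schema is the `provider.json` and `provider_template.json` files in the repository itself. Purely as a hypothetical sketch of what a multi-provider, multi-key layout with rate limits tends to look like (every field name below is an assumption — check the template before use):

```json
{
  "openai": {
    "gpt-4o": {
      "api_keys": ["sk-...", "sk-..."],
      "base_url": "https://api.openai.com/v1",
      "rate_limit": {"requests_per_minute": 60}
    }
  },
  "other-vendor": {
    "some-model": {
      "api_keys": ["..."],
      "base_url": "https://example.invalid/v1"
    }
  }
}
```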
## Browse by Capability

### Text processing

### Chat and Agents

### Multimodal workflows
## Run Examples Quickly

### Prepare the environment
- Install SimpleLLMFunc: `pip install SimpleLLMFunc`
- Configure your API keys as described in Quick Start
- Create or update `provider.json`

`event_stream_chatbot.py` depends on `rich`, so install it first if needed.

## Suggested Learning Path
### Beginner
- Read Quick Start
- Run `llm_function_pydantic_example.py`
- Modify the prompt and observe how the structured output changes

### Intermediate
- Read llm_chat Decorator
- Run `event_stream_chatbot.py`
- Try `parallel_toolcall_example.py`

### Advanced
- Read LLM Interface Layer
- Explore `multi_modality_toolcall.py`
- Study `event_stream_chatbot.py` and `tui_general_agent_example.py`
## FAQ

### Where is the example code?
All example code lives in the repository’s `examples/` directory.

### How do I modify an example?
Clone the repository, edit the files under `examples/`, and run them locally.

### Do examples support all providers?
Most examples rely on `provider.json` and therefore work with any OpenAI-compatible provider you configure.

### What should I do if an example fails?
Re-check your environment, API keys, and provider configuration, then compare against Quick Start.