What is SimpleLLMFunc?
SimpleLLMFunc is a lightweight framework for building LLM and Agent applications. Its design philosophy is LLM as Function, Prompt as Code: you describe behavior in Python function signatures and docstrings, and let the framework handle model calls, output parsing, and tool execution.

**LLM as Function**
Represent LLM capabilities as ordinary Python functions instead of hand-written request pipelines.

**Prompt as Code**
Keep prompts close to the function definition so behavior remains readable and maintainable.

**Typed + Tool-capable**
Combine structured outputs, tool calling, multimodal inputs, and event streaming in one workflow.
Why use it?
LLM application development often runs into the same problems:

- repetitive API call boilerplate
- prompts scattered across the codebase as plain strings
- orchestration constrained by heavy frameworks
- weak observability and difficult debugging

SimpleLLMFunc addresses these problems with:

- Decorator-driven development with `@llm_function` and `@llm_chat`
- Prompt-as-logic design through docstrings
- Type-safe outputs via Python type hints and Pydantic models
- Multimodal support for text, image URLs, and local images
- OpenAI-compatible provider abstraction plus OpenAI Responses API adaptation
- API key pooling and rate limiting
- Tool integration with structured and multimodal returns
- Logging, trace IDs, and event streaming for debugging and observability
If your main question is “How do I get running quickly?” go to Quick Start. If your question is “What is this framework good at?” keep reading this page.
Here is a quick comparison with other popular frameworks:
| Feature | SimpleLLMFunc | LangChain | Dify |
|---|---|---|---|
| Ease of use | ✅ | ❌ | ✅ |
| Clarity | ✅ | ❌ | ⭕️ |
| Flexibility | ✅ | ✅ | ⭕️ |
| Development speed | ✅ | ❌ | ✅ |
| Debuggability | ✅ | ❌ | ✅ |
| Async support | ✅ | ✅ | ⭕️ |
| Multimodal support | ✅ | ⭕️ | ⭕️ |
| Rate limiting | ✅ | ⭕️ | ⭕️ |
| Type safety | ✅ | ⭕️ | ❌ |
| Tool integration | ✅🌟 | ✅ | ✅ |
| Ecosystem maturity | ⭕️ | ✅ | ✅ |
Example
Here is a minimal example that shows the core idea.

Note that `@llm_function`, `@llm_chat`, `@tool`, and related decorators only support `async def` functions. Call them with `await`, or use `asyncio.run(...)` at the top level.

The OpenAI Responses API is supported through `OpenAIResponsesCompatible`; the decorator-level authoring model stays the same, and the adapter maps the selected system prompt to Responses instructions.
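SimpleLLMFunc's real decorator API is not reproduced here; as a rough, self-contained sketch of the async calling convention described above, the following uses a local stand-in decorator (this `llm_function` is a stub written for illustration, not the library's implementation) that renders a docstring and arguments into a prompt and awaits a fake model call:

```python
import asyncio
import inspect

def llm_function(func):
    # Stand-in for the real decorator: builds a prompt from the
    # function's signature and docstring, then "calls a model".
    async def wrapper(*args, **kwargs):
        bound = inspect.signature(func).bind(*args, **kwargs)
        bound.apply_defaults()
        prompt = f"{inspect.getdoc(func)}\nInputs: {dict(bound.arguments)}"
        # A real implementation would await an LLM API here.
        return await fake_llm_call(prompt)
    return wrapper

async def fake_llm_call(prompt: str) -> str:
    await asyncio.sleep(0)  # pretend network latency
    return f"[model answer for: {prompt.splitlines()[0]}]"

@llm_function
async def summarize(text: str) -> str:
    """Summarize the given text in one sentence."""

# Top-level callers use asyncio.run(...); async callers would `await`.
result = asyncio.run(summarize("SimpleLLMFunc is a lightweight framework."))
print(result)
```

The docstring becomes the prompt and the signature becomes the interface, which is the "LLM as Function, Prompt as Code" idea in miniature.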
Core Features
- Decorator-driven development with `@llm_function` and `@llm_chat`
- Docstring-as-prompt workflow
- Dynamic template parameters via `_template_params`
- Structured output with Python types and Pydantic models
- Native async support
- Multimodal inputs and tool returns
- Event streaming with `enable_event=True`
- Built-in PyRepl and self-reference runtime primitives
- A step-based internal pipeline for prompt building, ReAct execution, and parsing
- OpenAI-compatible provider abstraction plus OpenAI Responses API adaptation
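To make the structured-output idea concrete, here is a standard-library-only sketch (not SimpleLLMFunc's actual parser; the names below are illustrative) of how a framework can read a function's return annotation and coerce the model's JSON reply into that type:

```python
import json
from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class Review:
    rating: int
    summary: str

def parse_reply(func, raw_reply: str):
    # Read the declared return type, as a framework would,
    # then coerce the model's JSON reply into it.
    target = get_type_hints(func)["return"]
    return target(**json.loads(raw_reply))

def review_product(name: str) -> Review:
    """Return a short review of the product as JSON."""

# Pretend the model replied with this JSON:
reply = '{"rating": 4, "summary": "Solid and lightweight."}'
result = parse_reply(review_product, reply)
print(result.rating, result.summary)
```

A Pydantic model would play the same role as the dataclass here, with validation on top.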
Architecture Overview
This project is split into several main layers:

- `interface/`: provider abstraction, key pools, token buckets
- `llm_decorator/`: decorators and step-based orchestration
- `base/`: message building, tool-call execution, type resolution, post-processing
- `hooks/`: event types, stream wrappers, and event routing
- `runtime/` and `builtin/`: PyRepl, self-reference, runtime primitives
- `tool/`: tool definitions and serialization
- `type/`: message, multimodal, and hook-related types
- `logger/` and `observability/`: logging and Langfuse integration
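The `interface/` layer's key pools and token buckets are standard patterns. As a minimal sketch of the general technique (not the library's implementation; class and parameter names are made up for illustration), a pool can round-robin over API keys while each key honors its own token-bucket rate limit:

```python
import time
import itertools

class TokenBucket:
    """Token-bucket rate limiter: allow a burst of `capacity` requests,
    refilled at `rate` tokens per second."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

class KeyPool:
    """Round-robin over API keys, each with its own rate limit."""
    def __init__(self, keys, rate: float = 2.0, capacity: int = 2):
        self.buckets = {k: TokenBucket(rate, capacity) for k in keys}
        self.cycle = itertools.cycle(keys)

    def next_key(self):
        # Try each key at most once; return the first within its limit.
        for _ in range(len(self.buckets)):
            key = next(self.cycle)
            if self.buckets[key].try_acquire():
                return key
        return None  # every key is currently rate-limited

pool = KeyPool(["sk-aaa", "sk-bbb"])
print(pool.next_key(), pool.next_key())
```

A request dispatcher would call `next_key()` before each model call and back off when it returns `None`.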
Who is it for?
SimpleLLMFunc is especially useful for:

- developers building typed LLM features in Python
- teams who want less framework overhead and more control
- people prototyping Agent workflows quickly
- product-minded engineers who want prompts, types, and runtime behavior to stay in one place