
What is SimpleLLMFunc?

SimpleLLMFunc is a lightweight framework for building LLM and Agent applications. Its design philosophy is LLM as Function, Prompt as Code: you describe behavior in Python function signatures and docstrings, and let the framework handle model calls, output parsing, and tool execution.

LLM as Function

Represent LLM capabilities as ordinary Python functions instead of hand-written request pipelines.

Prompt as Code

Keep prompts close to the function definition so behavior remains readable and maintainable.

Typed + Tool-capable

Combine structured outputs, tool calling, multimodal inputs, and event streaming in one workflow.

Why use it?

LLM application development often runs into the same problems:
  • repetitive API call boilerplate
  • prompts scattered across the codebase as plain strings
  • orchestration constrained by heavy frameworks
  • weak observability and difficult debugging
SimpleLLMFunc is built to address those problems. It gives you:
  • Decorator-driven development with @llm_function and @llm_chat
  • Prompt-as-logic design through docstrings
  • Type-safe outputs via Python type hints and Pydantic models
  • Multimodal support for text, image URLs, and local images
  • OpenAI-compatible provider abstraction plus OpenAI Responses API adaptation
  • API key pooling and rate limiting
  • Tool integration with structured and multimodal returns
  • Logging, trace IDs, and event streaming for debugging and observability
If your main question is “How do I get running quickly?” go to Quick Start. If your question is “What is this framework good at?” keep reading this page.
The original page includes a feature comparison table rating SimpleLLMFunc against LangChain and Dify on: ease of use, clarity, flexibility, development speed, debuggability, async support, multimodal support, rate limiting, type safety, tool integration, and ecosystem maturity.

Example

Here is a minimal example that shows the core idea:
Note: @llm_function, @llm_chat, @tool, and related decorators only support async def functions. Call them with await, or use asyncio.run(...) at the top level.

import asyncio
from typing import List

from pydantic import BaseModel, Field

from SimpleLLMFunc import OpenAICompatible, llm_function


class ProductAnalysis(BaseModel):
    pros: List[str] = Field(..., description="Product strengths")
    cons: List[str] = Field(..., description="Product weaknesses")
    rating: int = Field(..., description="Rating from 1 to 5")


models = OpenAICompatible.load_from_json_file("provider.json")
llm_interface = models["openai"]["gpt-3.5-turbo"]


@llm_function(llm_interface=llm_interface)
async def analyze_product(product_name: str, review: str) -> ProductAnalysis:
    """Analyze a product review, extract pros and cons, and assign a score.

    Args:
        product_name: Product name
        review: User review text

    Returns:
        A structured analysis result
    """
    pass


async def main():
    result = await analyze_product("Wireless earbuds", "Good sound quality but unstable connection")
    print(f"Pros: {result.pros}")
    print(f"Cons: {result.cons}")
    print(f"Rating: {result.rating}/5")


asyncio.run(main())
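The example above loads its provider configuration from provider.json. The exact schema is defined by SimpleLLMFunc and is not shown on this page; the fragment below is only an illustrative sketch of the two-level provider → model layout implied by `models["openai"]["gpt-3.5-turbo"]`, with field names such as `api_keys` and `base_url` assumed rather than taken from the framework.

```json
{
  "openai": {
    "gpt-3.5-turbo": {
      "api_keys": ["sk-your-key-here"],
      "base_url": "https://api.openai.com/v1"
    }
  }
}
```

Check the framework's configuration reference for the authoritative field names before writing your own provider.json.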
If you need an OpenAI Responses API endpoint instead of a chat/completions-compatible endpoint, switch to OpenAIResponsesCompatible; the decorator-level authoring model stays the same, and the adapter maps the selected system prompt to Responses instructions.

Core Features

  • Decorator-driven development with @llm_function and @llm_chat
  • Docstring-as-prompt workflow
  • Dynamic template parameters via _template_params
  • Structured output with Python types and Pydantic models
  • Native async support
  • Multimodal inputs and tool returns
  • Event streaming with enable_event=True
  • Built-in PyRepl and self-reference runtime primitives
  • A step-based internal pipeline for prompt building, ReAct execution, and parsing
  • OpenAI-compatible provider abstraction plus OpenAI Responses API adaptation
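The structured-output idea behind the list above can be illustrated with a small stand-alone sketch. This is not the framework's implementation: it only shows the general technique of mapping a model's JSON reply onto a declared return type, using stdlib dataclasses in place of Pydantic models.

```python
import json
from dataclasses import dataclass, fields


@dataclass
class ProductAnalysis:
    pros: list
    cons: list
    rating: int


def parse_structured(reply: str, target):
    """Parse a model's JSON reply into the annotated return type."""
    data = json.loads(reply)
    # Keep only the fields declared on the target type, mirroring how a
    # type-hint-driven parser maps the raw reply onto the return annotation.
    names = {f.name for f in fields(target)}
    return target(**{k: v for k, v in data.items() if k in names})


reply = '{"pros": ["good sound"], "cons": ["unstable connection"], "rating": 4}'
result = parse_structured(reply, ProductAnalysis)
print(result.rating)  # 4
```

In the framework itself, the return type hint on the decorated function plays the role of `target` here, and validation is richer (Pydantic field constraints, descriptions, error reporting).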

Architecture Overview

This project is split into several main layers:
  • interface/: provider abstraction, key pools, token buckets
  • llm_decorator/: decorators and step-based orchestration
  • base/: message building, tool-call execution, type resolution, post-processing
  • hooks/: event types, stream wrappers, and event routing
  • runtime/ and builtin/: PyRepl, self-reference, runtime primitives
  • tool/: tool definitions and serialization
  • type/: message, multimodal, and hook-related types
  • logger/ and observability/: logging and Langfuse integration
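The interface/ layer above mentions token buckets for rate limiting. As background, here is a minimal stand-alone sketch of the classic token-bucket algorithm (not SimpleLLMFunc's own code): requests spend tokens, and tokens refill continuously at a fixed rate up to a capacity.

```python
import time


class TokenBucket:
    """Classic token bucket: `capacity` tokens, refilled at `rate` tokens/second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False


bucket = TokenBucket(rate=2.0, capacity=2.0)
print(bucket.try_acquire(), bucket.try_acquire(), bucket.try_acquire())
# With a full bucket of 2 tokens, the third immediate request is rejected.
```

A per-key variant of this pattern is what makes API key pooling useful: each key gets its own bucket, and the pool routes a call to whichever key currently has tokens available.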

Who is it for?

SimpleLLMFunc is especially useful for:
  • developers building typed LLM features in Python
  • teams who want less framework overhead and more control
  • people prototyping Agent workflows quickly
  • product-minded engineers who want prompts, types, and runtime behavior to stay in one place