Welcome to SimpleLLMFunc. This guide helps you get from zero to a running LLM application as quickly as possible.

Requirements

  • Python 3.12 or later
  • Windows, macOS, or Linux

1. Prepare the environment

Create a virtual environment

Use a virtual environment to isolate your dependencies:
# Create a virtual environment with venv
python -m venv simplellmfunc_env

# Activate it
# Windows
simplellmfunc_env\Scripts\activate
# macOS/Linux
source simplellmfunc_env/bin/activate

Install SimpleLLMFunc

You can install it in two ways:
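As a hedged sketch of the two usual paths, assuming the package is published on PyPI under the distribution name SimpleLLMFunc and hosted in a public Git repository (both are assumptions; the repository URL below is a placeholder):

```shell
# Option 1: install from PyPI (assumes the distribution name SimpleLLMFunc)
pip install SimpleLLMFunc

# Option 2: install from source (placeholder repository URL)
git clone https://github.com/your-org/SimpleLLMFunc.git
cd SimpleLLMFunc
pip install -e .
```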

2. Install the Agent skill pack (recommended)

If you plan to use SimpleLLMFunc together with a coding agent, install the packaged skill right after installing the library. The skill tells the agent how to use the framework correctly, where to look for references, and how to follow common workflows.

Export the usage skill

python -m SimpleLLMFunc.skills_cli usage /path/to/your/agent/skills
This creates:
/path/to/your/agent/skills/simplellmfunc/
The exported folder contains:
  • SKILL.md
  • helper reference files under reference/

Optional: export the developer skill

If your agent also needs to work on the framework itself, update low-level code, or add tests, export the developer-oriented skill pack as well:
python -m SimpleLLMFunc.skills_cli developer /path/to/your/agent/skills
Supported skill kinds are usage, developer, and dev. If the destination already exists, add --force to overwrite it.

3. Configure your model provider

Create .env

Create a .env file in the project root:
cp env_template .env
Optional logging settings:
LOG_LEVEL=INFO
LOG_DIR=logs
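The .env format itself is just KEY=VALUE lines with optional # comments. SimpleLLMFunc loads this file for you, so the following parser is only a standalone sketch of what the format means, not part of the framework:

```python
def load_env_file(path: str) -> dict:
    """Minimal .env parser: KEY=VALUE lines; blank lines and '#' comments are skipped."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, val = line.partition("=")
            values[key.strip()] = val.strip()
    return values
```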

Create provider.json

Create provider.json to define your models and API keys:
{
  "openai": [
    {
      "model_name": "gpt-3.5-turbo",
      "api_keys": ["sk-test-key-1", "sk-test-key-2"],
      "base_url": "https://api.openai.com/v1",
      "max_retries": 5,
      "retry_delay": 1.0,
      "rate_limit_capacity": 20,
      "rate_limit_refill_rate": 3.0
    },
    {
      "model_name": "gpt-4",
      "api_keys": ["sk-test-key-3"],
      "base_url": "https://api.openai.com/v1",
      "max_retries": 5,
      "retry_delay": 1.0,
      "rate_limit_capacity": 10,
      "rate_limit_refill_rate": 1.0
    }
  ],
  "zhipu": [
    {
      "model_name": "glm-4",
      "api_keys": ["zhipu-test-key-1", "zhipu-test-key-2"],
      "base_url": "https://open.bigmodel.cn/api/paas/v4/",
      "max_retries": 3,
      "retry_delay": 0.5,
      "rate_limit_capacity": 15,
      "rate_limit_refill_rate": 2.0
    }
  ]
}
provider.json uses a provider -> list of model configs structure. Each model_name becomes the lookup key under models[provider][model_name].
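To make the provider -> list-of-model-configs shape concrete, here is a small standalone sketch (plain json, not the framework's own loader) that builds the same models[provider][model_name] index described above:

```python
import json

def list_models(path: str) -> dict:
    """Load a provider.json-style file and index each config as
    models[provider][model_name], mirroring the lookup structure above."""
    with open(path) as f:
        raw = json.load(f)
    return {
        provider: {cfg["model_name"]: cfg for cfg in cfgs}
        for provider, cfgs in raw.items()
    }
```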

4. Build your first demo

Basic text analysis example

Create first_demo.py:
All SimpleLLMFunc decorators such as @llm_function, @llm_chat, and @tool must decorate async def functions. Call them with await, or use asyncio.run(...) at the top level.
import asyncio
from typing import List

from pydantic import BaseModel, Field

from SimpleLLMFunc import OpenAICompatible, llm_function


class TextAnalysis(BaseModel):
    sentiment: str = Field(..., description="Sentiment: positive, negative, or neutral")
    keywords: List[str] = Field(..., description="Extracted keywords")
    summary: str = Field(..., description="A short summary")


models = OpenAICompatible.load_from_json_file("provider.json")
gpt_interface = models["openai"]["gpt-3.5-turbo"]


@llm_function(llm_interface=gpt_interface)
async def analyze_text(text: str) -> TextAnalysis:
    """Analyze the text and return its sentiment, keywords, and summary.

    Args:
        text: The text to analyze

    Returns:
        A structured analysis result
    """
    pass


async def main():
    test_text = """
    Today the weather is excellent. The sun is out, the temperature is pleasant,
    and the park is full of people walking, playing, and relaxing.
    """

    print("=== Text Analysis Demo ===")
    result = await analyze_text(test_text)
    print(f"Sentiment: {result.sentiment}")
    print(f"Keywords: {', '.join(result.keywords)}")
    print(f"Summary: {result.summary}")


if __name__ == "__main__":
    asyncio.run(main())

Run the demo

python first_demo.py

5. Try a more advanced example

Dynamic template parameters

This example shows how _template_params lets one function adapt to different scenarios:
import asyncio

from SimpleLLMFunc import OpenAICompatible, llm_function

models = OpenAICompatible.load_from_json_file("provider.json")
gpt_interface = models["openai"]["gpt-3.5-turbo"]


@llm_function(llm_interface=gpt_interface)
async def analyze_code(code: str) -> str:
    """Analyze {language} code in a {style} style, focusing on {focus}."""
    pass


@llm_function(llm_interface=gpt_interface)
async def process_text(text: str) -> str:
    """As a {role}, please {action} the following text in a {style} style."""
    pass


async def main():
    python_code = """
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
"""

    performance_result = await analyze_code(
        python_code,
        _template_params={
            "style": "detailed",
            "language": "Python",
            "focus": "performance optimization",
        },
    )

    translated_text = await process_text(
        "Artificial intelligence is changing software development.",
        _template_params={
            "role": "translator",
            "action": "translate into Chinese",
            "style": "business",
        },
    )

    print(performance_result)
    print(translated_text)


if __name__ == "__main__":
    asyncio.run(main())

6. Inspect logs

SimpleLLMFunc includes a built-in logging system. After running a demo, you can inspect:
  1. Console output with trace_id
  2. Structured logs in LOG_DIR/application.log (default: logs/application.log)
  3. If PyRepl is enabled, execution audit logs in LOG_DIR/pyrepl/<instance_id>/executions.jsonl
Each function call generates a unique trace_id, which makes debugging much easier.
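Because each record carries a trace_id, you can correlate everything a single call did. As a sketch, a JSONL log such as executions.jsonl can be scanned for a given trace (the trace_id field name is an assumption about the log schema):

```python
import json

def find_by_trace_id(log_path: str, trace_id: str) -> list:
    """Scan a JSONL log file and return every record whose 'trace_id'
    field matches. Non-JSON lines (e.g. plain-text entries) are skipped."""
    matches = []
    with open(log_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip lines that are not JSON records
            if record.get("trace_id") == trace_id:
                matches.append(record)
    return matches
```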

FAQ

My API calls fail or return authentication errors.
Check the API keys in provider.json and make sure they are valid and still have quota.

The model's output does not match my return type.
For complex output types, the framework expects structured output. If the model often returns an invalid structure, try a stronger model or tighten the format constraints in the docstring.

How do I give a function access to tools?
Define them with @tool and pass them to the toolkit parameter of @llm_function or @llm_chat.

Can I decorate a regular (sync) function?
No. The decorators only support async def functions. Use them in an async context with await, or call them from a top-level asyncio.run(...) entry point.

What are template parameters for?
They let you fill placeholders in the docstring at call time through _template_params, so one function can serve multiple scenarios.

Next Steps

Browse Examples

Explore the examples/ directory for more patterns.

Read the User Guide

Use the task-oriented guide to decide what to read next.

Build with llm_function

Start building typed, structured LLM workflows.

Contribute

Report issues, contribute docs, or add new examples.