SimpleLLMFunc’s tool system lets LLMs call external functions and APIs for computation, search, file operations, or multimodal workflows. The framework turns Python functions into model-readable tool descriptions automatically.

Quick Overview

Type Inference

Extract parameter types and descriptions from function signatures automatically.

Docstring Parsing

Use docstrings to enrich tool descriptions and parameter guidance.

JSON Schema Generation

Generate schemas compatible with modern function-calling APIs.

Multiple Return Types

Support plain text, structured values, and multimodal results.

Two Creation Modes

Use either the decorator style or the class-based style.

Long Output Truncation

Persist oversized results to disk and return only a safe preview.

Supported Parameter Types

  • Primitive types: str, int, float, bool
  • Container types: List[T], Dict[K, V]
  • Multimodal types: Text, ImgPath, ImgUrl
  • Multimodal lists: List[Text], List[ImgPath], List[ImgUrl]
  • Pydantic models
  • Optional parameters with defaults
  • Nested container structures
Tool arguments must ultimately be representable as JSON-compatible values, so containers such as Tuple and Set are poor choices for tool parameters.
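The JSON-compatibility rule can be checked directly with the standard json module: tuples survive serialization only by degrading into JSON arrays, and sets are rejected outright, which is why neither is a reliable parameter type.

```python
import json

# Dicts, lists, and primitives round-trip cleanly.
print(json.dumps({"tags": ["ai", "tools"], "limit": 10}))

# A tuple serializes, but only by silently becoming a JSON array.
print(json.dumps((1, 2)))  # -> [1, 2]

# A set cannot be serialized at all.
try:
    json.dumps({1, 2})
except TypeError as exc:
    print(f"set rejected: {exc}")
```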

Supported Return Types

Tool functions can return several output formats, and the framework will convert them into a model-friendly representation.
  • str
  • JSON-serializable values such as dict, list, int, float, bool, None
  • Pydantic models
  • ImgUrl
  • ImgPath
  • Tuple[str, ImgUrl]
  • Tuple[str, ImgPath]

Return Type Examples

from typing import Any, Dict, List, Tuple

from SimpleLLMFunc import tool
from SimpleLLMFunc.type import ImgPath, ImgUrl


@tool(name="calculate", description="Evaluate a math expression")
async def calculate(expression: str) -> float:
    """Return the numeric result."""
    # Note: eval is unsafe on untrusted input; use a dedicated
    # math-expression parser in production.
    return eval(expression)


@tool(name="get_status", description="Get current system status")
async def get_status() -> str:
    """Return a short status string."""
    return "System is healthy"


@tool(name="get_user_info", description="Return user information")
async def get_user_info(user_id: int) -> Dict[str, Any]:
    """Return a structured JSON-style object."""
    return {
        "id": user_id,
        "name": "Alice",
        "age": 25,
        "skills": ["Python", "AI", "Data Analysis"],
    }


@tool(name="search_results", description="Return search results")
async def search_results(query: str) -> List[Dict[str, str]]:
    """Return a list of structured search items."""
    return [
        {"title": "Result 1", "url": "https://example1.com"},
        {"title": "Result 2", "url": "https://example2.com"},
    ]


@tool(name="get_chart", description="Generate a chart image")
async def get_chart(data: List[float]) -> ImgPath:
    """Return a local image path."""
    return ImgPath("/path/to/generated/chart.png")


@tool(name="fetch_image", description="Fetch a remote image")
async def fetch_image(image_url: str) -> ImgUrl:
    """Return a remote image URL."""
    return ImgUrl(image_url)


@tool(name="analyze_image", description="Analyze an image and generate a report")
async def analyze_image(image_path: str) -> Tuple[str, ImgPath]:
    """Return a text summary and an annotated image."""
    summary = "Detected 3 objects: 2 people and 1 car"
    annotated = ImgPath("/path/to/annotated_image.png")
    return summary, annotated

How Return Values Are Processed

  1. Primitive values are serialized directly
  2. ImgUrl is used as a web image reference
  3. ImgPath is converted into a base64 data URL
  4. Text + image pairs become multimodal message content
  5. Unsupported values fall back to strings
When a tool returns images or text-plus-image content, the framework uses a multimodal assistant + user message pair instead of a plain tool message, because OpenAI-style tool messages cannot directly carry multimodal payloads.
Constraints:
  • Local image paths must exist and be readable
  • Remote image URLs should be publicly accessible
  • Combination types must be (str, ImgPath) or (str, ImgUrl)
  • Avoid returning very large data structures
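Step 3 above, converting a local image into a base64 data URL, can be sketched with the standard library (an illustrative reconstruction, not the framework's actual code):

```python
import base64
import mimetypes

def image_path_to_data_url(path: str) -> str:
    """Read a local image file and embed it as a base64 data URL,
    mirroring how ImgPath return values are handed to the model."""
    mime, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime or 'application/octet-stream'};base64,{encoded}"
```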

Long Output Truncation

When a tool may return a very large string, such as a code execution result or a long search response, you can enable too_long_to_file.
  1. Estimate token size: When the tool returns a string, the framework estimates its token count.
  2. Persist oversized output: If the output exceeds 20,000 tokens, the full result is written to a temporary file.
  3. Return a safe preview: The model receives only the first 20,000 tokens plus a <system-reminder> note containing the path to the full file.
Example:
@tool(name="execute_code", description="Execute Python code", too_long_to_file=True)
async def execute_code(code: str) -> str:
    """Execute code and return output."""
    # run_python is a placeholder for your own sandboxed executor.
    return run_python(code)
The function wrapped by @tool must itself be defined as async def.
Use too_long_to_file only when a tool may return a lot of plain text, and only when the Agent also has a way to read files, such as FileToolset or PyRepl.
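The mechanism can be illustrated with a small stdlib-only sketch. The 4-characters-per-token estimate and the exact reminder wording are assumptions for illustration; the framework's real tokenizer and message format may differ.

```python
import tempfile

TOKEN_LIMIT = 20_000
CHARS_PER_TOKEN = 4  # rough heuristic, not the framework's actual tokenizer

def truncate_long_output(text: str) -> str:
    """Persist oversized tool output to a temp file and return a preview,
    mimicking the too_long_to_file behavior described above."""
    if len(text) // CHARS_PER_TOKEN <= TOKEN_LIMIT:
        return text  # small enough: pass through unchanged
    with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
        f.write(text)
        path = f.name
    preview = text[: TOKEN_LIMIT * CHARS_PER_TOKEN]
    return (
        f"{preview}\n<system-reminder>Output truncated; "
        f"full content saved to {path}</system-reminder>"
    )
```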

Built-in Tools

FileToolset

FileToolset provides safe file reading, searching, and editing with hash validation to avoid stale writes. By default, it only allows access to non-hidden files inside the workspace.
  • read_file: Read file content with optional line ranges and line numbers
  • grep: Run regex search in the workspace and require a path_pattern
  • sed: Replace content in a line range after reading the file first
  • echo_into: Overwrite the entire file after reading it first
Usage rules:
  • Always call read_file before modifying a file
  • grep must be constrained with path_pattern
  • Hidden files and hidden directories are blocked by default
Example:
from SimpleLLMFunc import llm_chat
from SimpleLLMFunc.builtin import FileToolset

file_tools = FileToolset("/path/to/workspace").toolset


@llm_chat(toolkit=file_tools)
async def agent(message: str, history=None):
    """A file operations assistant."""

Usage

Decorator style

from SimpleLLMFunc import tool


@tool(name="tool_name", description="Short tool description")
async def your_function(param1: str, param2: int = 10) -> str:
    """Detailed description.

    Args:
        param1: Description of the first argument
        param2: Description of the second argument

    Returns:
        Description of the return value
    """
    return "ok"

Class-based style

from SimpleLLMFunc import Tool


class YourTool(Tool):
    def __init__(self):
        super().__init__(name="tool_name", description="Tool description")

    async def run(self, *args, **kwargs):
        return "ok"

Tool Flow

  1. Register the tool: Create a tool with @tool or by subclassing Tool.
  2. Extract schema and metadata: The framework reads the signature and docstring and generates a tool schema.
  3. Model chooses the tool: The LLM decides whether to call the tool based on the generated tool description.
  4. Validate arguments and execute: The framework validates arguments, runs the function, and injects the result back into the conversation.
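The schema-extraction step can be sketched with the standard inspect module. This is an illustrative reconstruction of how a signature becomes an OpenAI-style function schema, not the framework's actual implementation, and it handles only primitive types.

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python annotations to JSON Schema types.
JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_tool_schema(func, name: str, description: str) -> dict:
    """Turn a function signature into an OpenAI-style tool schema."""
    hints = get_type_hints(func)
    properties, required = {}, []
    for pname, param in inspect.signature(func).parameters.items():
        json_type = JSON_TYPES.get(hints.get(pname), "string")
        properties[pname] = {"type": json_type}
        if param.default is inspect.Parameter.empty:
            required.append(pname)  # no default -> the model must supply it
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

async def your_function(param1: str, param2: int = 10) -> str:
    return "ok"

schema = build_tool_schema(your_function, "tool_name", "Short tool description")
```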

Best Practices

  • Keep tool names short, clear, and action-oriented
  • Add explicit type annotations and useful docstrings
  • Return structured values when possible
  • Avoid huge payloads unless you intentionally enable truncation
  • Keep tools focused on one clear responsibility