SimpleLLMFunc provides an out-of-the-box @tui decorator built on textual and event streams. Stack it directly on @llm_chat to give an Agent a complete terminal input loop, streaming rendering, and tool-call visualization.
@tui relies on the event stream: set enable_event=True in @llm_chat, and enabling stream=True as well is recommended for a smoother streaming experience.

Quick Start

1. Install Dependencies

textual is included as a framework dependency. If you are upgrading an existing environment, reinstall the dependencies:
```shell
poetry install
```
2. Stack @tui and @llm_chat

```python
from SimpleLLMFunc import llm_chat, tui

# llm is your configured LLM interface instance
@tui()
@llm_chat(
    llm_interface=llm,
    toolkit=[...],
    stream=True,
    enable_event=True,
)
async def agent(message: str, history=None):
    """Your agent prompt."""

if __name__ == "__main__":
    agent()
```
3. Run and enter the chat loop

Execute the script directly to start the TUI, with its input loop and event-stream rendering.

Parameter Identification Rules

@tui automatically recognizes the input parameters:
  • a parameter named history or chat_history is treated as the history parameter
  • the first remaining parameter is treated as the user input
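The rule above can be sketched in plain Python with inspect. This is an illustrative stand-in, not the framework's actual implementation; split_params and the agent signature are hypothetical:

```python
import inspect

def split_params(func):
    # Illustrative re-statement of the documented rule: a parameter named
    # "history" or "chat_history" becomes the history parameter; the first
    # remaining parameter is treated as the user input.
    names = list(inspect.signature(func).parameters)
    history = next((n for n in names if n in ("history", "chat_history")), None)
    message = next((n for n in names if n != history), None)
    return message, history

async def agent(message: str, chat_history=None):
    """Example agent signature."""

# split_params(agent) -> ("message", "chat_history")
```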

UI Capabilities

  • User and model messages render in alternating turns
  • Model streaming output refreshes in real time with Markdown rendering
  • The message area automatically scrolls to the bottom during streaming output
  • Reasoning deltas are shown in gray text when the model provides them
  • Tool invocations display their structured arguments at start, instead of a raw JSON string
  • CustomEvents emitted during tool execution are consumed and the output updated in real time
  • Results, elapsed time, and status are shown after a tool finishes
  • When a tool triggers input(), the input box switches to tool-input mode, and new input is routed back to that tool request first
  • Fork tasks are split into independent columns by origin.fork_id, so main-chain and sub-chain events are displayed separately and stably

Interrupt current reply

If you send another message while the Agent is still generating a response, the TUI automatically triggers an interrupt and starts a new round:
  • The current turn is terminated: streaming output stops and pending tool calls are canceled
  • The new message automatically carries an interruption note: "I want to interrupt your reply."
This is useful for quick corrections or for cutting off long responses. For finer-grained control, pass in an AbortSignal manually; see Interruption and Cancellation.

Interaction and Exit

  • Send a message: press Enter after typing
  • When a pending tool-input request exists, pressing Enter submits the input to that request first
  • Force a new round of chat: /chat <message>
  • Copy the full transcript: /copy or Ctrl+Y
  • Exit commands: /exit, /quit, /q
  • Exit shortcut: Ctrl+Q (Ctrl+C also works)

Custom Tool Event Hook

@tui supports injecting custom event-parsing logic via custom_event_hook:
```python
from SimpleLLMFunc.hooks.events import CustomEvent
from SimpleLLMFunc.utils.tui import ToolEventRenderUpdate, ToolRenderSnapshot


def my_hook(
    event: CustomEvent,
    snapshot: ToolRenderSnapshot,
) -> ToolEventRenderUpdate | None:
    if event.event_name != "batch_progress" or not isinstance(event.data, dict):
        return None

    return ToolEventRenderUpdate(
        append_output=f"progress={event.data['percent']}%\n"
    )


@tui(custom_event_hook=[my_hook])
@llm_chat(..., enable_event=True, stream=True)
async def agent(message: str, history=None):
    ...
```
The framework has built-in default parsing for some common tool events, such as PyRepl's kernel_stdout, kernel_stderr, and kernel_input_request.
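Because a hook is just a function of the event (plus a render snapshot), its logic can be unit-tested without starting the TUI. A minimal sketch, using hypothetical stand-in stubs for the two framework types (use the real imports in practice):

```python
from dataclasses import dataclass
from typing import Any, Optional

# Stand-in stubs for illustration only; in real code import CustomEvent and
# ToolEventRenderUpdate from the framework instead.
@dataclass
class CustomEvent:
    event_name: str
    data: Any

@dataclass
class ToolEventRenderUpdate:
    append_output: str

def my_hook(event: CustomEvent, snapshot: Any = None) -> Optional[ToolEventRenderUpdate]:
    # Returning None skips events this hook does not handle.
    if event.event_name != "batch_progress" or not isinstance(event.data, dict):
        return None
    return ToolEventRenderUpdate(append_output=f"progress={event.data['percent']}%\n")

# my_hook(CustomEvent("batch_progress", {"percent": 40})) yields an update;
# any other event falls through as None.
```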

Run the Example

Example: examples/tui_chat_example.py