The @tui decorator is built on Textual and event streams. Stack it directly on @llm_chat to give the Agent a complete terminal input loop, streaming rendering, and tool-call visualization.
@tui relies on the event stream: set `enable_event=True` in @llm_chat, and enabling `stream=True` is also recommended for a smoother streaming experience.
Quick Start
Install Dependencies
The `textual` package is provided as a framework dependency. If you are upgrading an existing environment, please reinstall the dependencies.
Parameter Identification Rules
@tui automatically recognizes the decorated function's parameters:
- a parameter named `history` or `chat_history` is treated as the conversation history
- the first remaining parameter is treated as the user input
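As an illustration of this rule, here is a minimal sketch of how such parameter detection could work with `inspect`; the `identify_params` helper is hypothetical, not part of the library:

```python
import inspect

def identify_params(fn):
    """Sketch of the rule above: a parameter named 'history' or
    'chat_history' is the history slot; the first remaining
    parameter carries the user input."""
    history_param = None
    input_param = None
    for name in inspect.signature(fn).parameters:
        if name in ("history", "chat_history") and history_param is None:
            history_param = name
        elif input_param is None:
            input_param = name
    return history_param, input_param

def chat(message, chat_history=None):
    """Example chat function to classify."""

print(identify_params(chat))  # → ('chat_history', 'message')
```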
UI Capabilities
Messages and Streaming Rendering
- Alternate rendering of user and model messages
- Model streaming output refreshes in real time and supports Markdown rendering
- Message area automatically scrolls to bottom during streaming output
- Reasoning deltas are shown in gray text when the model supports them
Tool call visualization
- Displays structured parameters at the start of a tool invocation instead of a raw JSON string
- Consumes `CustomEvent`s during tool execution and updates output in real time
- Displays the result, time spent, and status after the tool finishes
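A hedged sketch of what "structured parameters instead of a raw JSON string" can mean in practice; the `render_tool_call` helper is illustrative, not the library's actual renderer:

```python
import json

def render_tool_call(name, raw_args):
    """Pretty-print a tool call: parse the raw JSON argument string
    and lay out each argument on its own line."""
    args = json.loads(raw_args)
    lines = [f"tool: {name}"]
    lines += [f"  {key} = {value!r}" for key, value in args.items()]
    return "\n".join(lines)

print(render_tool_call("search", '{"query": "textual", "limit": 3}'))
```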
Input Mode Switch
- When a tool triggers `input()`, the input box switches to tool input mode
- New input is routed back to that tool request first
Fork Scenario Support
- Fork tasks are automatically split into independent columns based on `origin.fork_id`
- Main-chain and sub-chain events are displayed separately and stably
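The column split can be sketched as a simple grouping by `origin.fork_id`; the event dict shape used here is an assumption for illustration, not the library's documented schema:

```python
from collections import defaultdict

def split_columns(events):
    """Group events into display columns by origin.fork_id; the main
    chain (fork_id=None) keeps its own column, each fork gets another."""
    columns = defaultdict(list)
    for event in events:
        columns[event["origin"]["fork_id"]].append(event["text"])
    return dict(columns)

events = [
    {"origin": {"fork_id": None}, "text": "main: planning"},
    {"origin": {"fork_id": "f1"}, "text": "fork f1: searching"},
    {"origin": {"fork_id": None}, "text": "main: summarizing"},
]
print(split_columns(events))
```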
Interrupt current reply
When a new message is sent while the Agent is still generating a response, the TUI automatically triggers an interrupt and starts a new round:
- The current turn is terminated: streaming output stops and pending tool calls are canceled
- The new message is automatically prefixed with an interruption note: "I want to interrupt your reply."
- The interrupt is driven by `AbortSignal`; see Interruption and Cancellation.
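The interrupt flow can be sketched with plain asyncio; the `TurnManager` class and the `generate` callable are hypothetical stand-ins, the real TUI wires this through its event loop and `AbortSignal`:

```python
import asyncio

INTERRUPT_NOTE = "I want to interrupt your reply."

class TurnManager:
    """Sketch: sending a message while a turn is still running cancels
    the running task and prefixes the interruption note."""

    def __init__(self):
        self._task = None

    async def send(self, message, generate):
        if self._task is not None and not self._task.done():
            self._task.cancel()  # stop streaming, cancel tool calls
            message = f"{INTERRUPT_NOTE} {message}"
        self._task = asyncio.ensure_future(generate(message))
        return await self._task
```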
Interaction and Exit
Basic Interaction
- Send message: press Enter after typing
- When a pending tool-input request exists, pressing Enter submits input to that request first.
Commands and Shortcuts
- Force send a new round of chat: `/chat <message>`
- Copy full transcript: `/copy` or `Ctrl+Y`
- Exit commands: `/exit`, `/quit`, `/q`
- Exit shortcut: `Ctrl+Q`; `Ctrl+C` is also supported
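A sketch of how the command layer above could be dispatched; the `parse_command` helper is illustrative, and key bindings such as `Ctrl+Y` are handled by Textual and are not shown:

```python
def parse_command(line):
    """Classify one input line per the command list above."""
    if line.startswith("/chat "):
        return ("chat", line[len("/chat "):])  # force a new round
    if line == "/copy":
        return ("copy", None)                  # copy full transcript
    if line in ("/exit", "/quit", "/q"):
        return ("exit", None)
    return ("message", line)                   # plain chat message
```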
Custom Tool Event Hook
@tui supports injecting custom event parsing logic via `custom_event_hook`, for example to handle events such as `kernel_stdout`, `kernel_stderr`, and `kernel_input_request`.
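A hedged sketch of what such a hook might look like; the event dict shape (`name`/`data` keys) and the returned `(kind, text)` convention are assumptions for illustration, not the library's documented interface:

```python
def custom_event_hook(event):
    """Translate raw events into (kind, text) pairs the TUI can render;
    return None to fall back to default handling."""
    name = event.get("name")
    data = event.get("data", "")
    if name == "kernel_stdout":
        return ("stdout", data)
    if name == "kernel_stderr":
        return ("stderr", data)
    if name == "kernel_input_request":
        return ("input_request", data)
    return None
```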
Run the Example
Example: `examples/tui_chat_example.py`