AbortSignal is a lightweight cancellation mechanism for stopping streaming output, cancelling in-progress tool calls, and ending an Agent turn in a controlled way. Typical scenarios:
  • User-initiated interrupt: stop a long-running answer from the user interface and immediately switch to a new request
  • Server-side timeout: cap the runtime of a single turn so requests do not block forever
  • Context switching: abort the current turn cleanly before launching the next one with different context

Core Concepts

  • AbortSignal: a signal object that can be shared across coroutines
  • _abort_signal: the special call parameter used to pass the signal
  • abort(reason): trigger cancellation with an optional reason string
_abort_signal is runtime-only. It is not included in the prompt and is not passed to the model or tools as normal input.

Basic Usage

1. Create an AbortSignal: create a signal object before starting the call.
2. Pass it through _abort_signal: prefer the ABORT_SIGNAL_PARAM constant instead of hard-coding the parameter name.
3. Trigger abort from another coroutine: call abort(reason) when you want the current turn to stop as soon as possible.

llm_chat example

import asyncio

from SimpleLLMFunc import llm_chat
from SimpleLLMFunc.hooks import ABORT_SIGNAL_PARAM, AbortSignal


# `llm` is assumed to be an LLM interface instance configured elsewhere
@llm_chat(llm_interface=llm, stream=True, enable_event=True)
async def chat(message: str, history=None):
    """Your system prompt here."""
    pass


abort_signal = AbortSignal()


async def run():
    async def abort_later():
        await asyncio.sleep(1.5)
        abort_signal.abort("user_interrupt")

    asyncio.create_task(abort_later())

    async for output in chat(
        "Please explain transformers in detail.",
        history=[],
        **{ABORT_SIGNAL_PARAM: abort_signal},
    ):
        ...


asyncio.run(run())

llm_function example

import asyncio

from SimpleLLMFunc import llm_function
from SimpleLLMFunc.hooks import ABORT_SIGNAL_PARAM, AbortSignal


# `llm` is assumed to be an LLM interface instance configured elsewhere
@llm_function(llm_interface=llm, enable_event=True)
async def analyze(text: str) -> str:
    """Analyze a long piece of text."""
    pass


abort_signal = AbortSignal()


async def run():
    async def abort_later():
        await asyncio.sleep(1.0)
        abort_signal.abort("timeout")

    asyncio.create_task(abort_later())

    async for output in analyze(
        "A very long text block...",
        **{ABORT_SIGNAL_PARAM: abort_signal},
    ):
        ...


asyncio.run(run())

Runtime Behavior

  • Streaming output stops yielding new chunks after cancellation
  • In-flight tool calls are cancelled when possible
  • In event mode, ReactEndEvent.extra includes aborted: true
  • If you provide a reason, the event metadata may also include abort_reason
  • In non-event mode, the generator ends early without yielding a final event object
Abort is cooperative. If a tool is not cancellable or is blocked in a non-interruptible state, actual shutdown may be delayed.

Built-in TUI Behavior

When using @tui, sending a new message while the Agent is still responding will automatically trigger an abort:
  • the current turn is stopped
  • the new message enters the queue
  • an interruption prompt is added automatically so the model can wrap up quickly
If you need more control, such as a custom interruption message or external orchestration, bypass the built-in TUI behavior and pass your own AbortSignal explicitly.