{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "04b012ac-e0b5-483e-a645-d13d0e215aad", "metadata": {}, "source": [ "# How to stream data from within a tool\n", "\n", "
\n", "<div class=\"admonition tip\">\n", "    <p class=\"admonition-title\">Prerequisites</p>\n", "    <p>\n", "        This guide assumes familiarity with the following:\n", "    </p>\n", "</div>
\n", "\n", "If your graph involves tools that invoke LLMs (or any other LangChain `Runnable` objects, such as other graphs, LCEL chains, or retrievers), you might want to surface partial results during the execution of the tool, especially if the tool takes a long time to run.\n", "\n", "A common scenario is streaming LLM tokens generated by a tool that calls an LLM, though this applies to any use of Runnable objects.\n", "\n", "This guide shows how to stream data from within a tool using the `astream` API with `stream_mode=\"messages\"`, as well as the more granular `astream_events` API. The `astream` API should be sufficient for most use cases.\n", "\n", "## Setup\n", "\n", "First, let's install the required packages and set our API keys:" ] }, { "cell_type": "code", "execution_count": 3, "id": "47f79af8-58d8-4a48-8d9a-88823d88701f", "metadata": {}, "outputs": [], "source": [ "%%capture --no-stderr\n", "%pip install -U langgraph langchain-openai" ] }, { "cell_type": "code", "execution_count": 4, "id": "0cf6b41d-7fcb-40b6-9a72-229cdd00a094", "metadata": {}, "outputs": [], "source": [ "import getpass\n", "import os\n", "\n", "\n", "def _set_env(var: str):\n", "    if not os.environ.get(var):\n", "        os.environ[var] = getpass.getpass(f\"{var}: \")\n", "\n", "\n", "_set_env(\"OPENAI_API_KEY\")" ] }, { "cell_type": "markdown", "id": "767cd76a", "metadata": {}, "source": [ "
\n", "<div class=\"admonition tip\">\n", "    <p class=\"admonition-title\">Set up LangSmith for LangGraph development</p>\n", "    <p>\n", "        Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. See the LangSmith documentation for how to get started.\n", "    </p>\n", "</div>
" ] }, { "cell_type": "markdown", "id": "e3d02ebb-c2e1-4ef7-b187-810d55139317", "metadata": {}, "source": [ "## Define the graph\n", "\n", "We'll use a prebuilt ReAct agent for this guide" ] }, { "cell_type": "markdown", "id": "9378fd4a-69e4-49e2-b34c-a98a0505ea35", "metadata": {}, "source": [ "
\n", "<div class=\"admonition warning\">\n", "    <p class=\"admonition-title\">ASYNC IN PYTHON<=3.10</p>\n", "    <p>\n", "        Any LangChain `RunnableLambda`, `RunnableGenerator`, or `Tool` that invokes other runnables and runs async in Python<=3.10 has to propagate callbacks to child objects **manually**. This is because LangChain cannot automatically propagate callbacks to child objects in this case.\n", "    </p>\n", "    <p>\n", "        This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n", "    </p>\n", "</div>
" ] }, { "cell_type": "code", "execution_count": 5, "id": "f1975577-a485-42bd-b0f1-d3e987faf52b", "metadata": {}, "outputs": [], "source": [ "from langchain_core.callbacks import Callbacks\n", "from langchain_core.messages import HumanMessage\n", "from langchain_core.tools import tool\n", "\n", "from langgraph.prebuilt import create_react_agent\n", "from langchain_openai import ChatOpenAI\n", "\n", "\n", "@tool\n", "async def get_items(\n", "    place: str,\n", "    callbacks: Callbacks,  # <--- Manually accept callbacks (needed for Python <= 3.10)\n", ") -> str:\n", "    \"\"\"Use this tool to look up which items are in the given place.\"\"\"\n", "    # Attention: when using async, you should invoke the LLM using ainvoke!\n", "    # If you fail to do so, streaming will not work.\n", "    response = await llm.ainvoke(\n", "        [\n", "            {\n", "                \"role\": \"user\",\n", "                \"content\": f\"Can you tell me what kind of items I might find in the following place: '{place}'. \"\n", "                \"List at least 3 such items, separated by commas, and include a brief description of each item.\",\n", "            }\n", "        ],\n", "        {\"callbacks\": callbacks},\n", "    )\n", "    return response.content\n", "\n", "\n", "llm = ChatOpenAI(model=\"gpt-4o\")\n", "tools = [get_items]\n", "agent = create_react_agent(llm, tools=tools)" ] }, { "cell_type": "markdown", "id": "15cb55cc-b59d-4743-b6a3-13db75414d2c", "metadata": {}, "source": [ "## Using stream_mode=\"messages\"\n", "\n", "Using `stream_mode=\"messages\"` is a good option if you don't have any complex LCEL logic inside of nodes (or don't need super granular progress from within the LCEL chain)."
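, "\n", "\n", "To make the filtering concrete, here is a small, self-contained sketch of the same logic applied to a mocked stream of `(message_chunk, metadata)` pairs. The `Chunk` class and the hard-coded stream are illustrative assumptions, not LangGraph output; only the filtering condition mirrors the real agent example below:\n",
"\n",
"```python\n",
"from dataclasses import dataclass\n",
"\n",
"\n",
"@dataclass\n",
"class Chunk:\n",
"    # Stand-in for a streamed message chunk\n",
"    content: str\n",
"\n",
"\n",
"# Mocked (chunk, metadata) pairs imitating stream_mode='messages' output\n",
"stream = [\n",
"    (Chunk('Let me check the shelf.'), {'langgraph_node': 'agent'}),\n",
"    (Chunk('a book, '), {'langgraph_node': 'tools'}),\n",
"    (Chunk('a lamp'), {'langgraph_node': 'tools'}),\n",
"]\n",
"\n",
"# Keep only token chunks emitted while the tools node is running\n",
"tool_tokens = [\n",
"    chunk.content\n",
"    for chunk, metadata in stream\n",
"    if chunk.content and metadata.get('langgraph_node') == 'tools'\n",
"]\n",
"print(''.join(tool_tokens))  # a book, a lamp\n",
"```"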
] }, { "cell_type": "code", "execution_count": 6, "id": "4c9cdad3-3e9a-444f-9d9d-eae20b8d3486", "metadata": {}, "outputs": [], "source": [ "final_message = \"\"\n", "async for msg, metadata in agent.astream(\n", "    {\"messages\": [(\"human\", \"what items are on the shelf?\")]}, stream_mode=\"messages\"\n", "):\n", "    # Stream all messages from the tool node\n", "    if (\n", "        msg.content\n", "        and not isinstance(msg, HumanMessage)\n", "        and metadata[\"langgraph_node\"] == \"tools\"\n", "        and not msg.name\n", "    ):\n", "        print(msg.content, end=\"|\", flush=True)\n", "    # The final message should come from our agent\n", "    if msg.content and metadata[\"langgraph_node\"] == \"agent\":\n", "        final_message += msg.content" ] }, { "attachments": {}, "cell_type": "markdown", "id": "81656193-1cbf-4721-a8df-0e316fd510e5", "metadata": {}, "source": [ "## Using stream events API\n", "\n", "For simplicity, the `get_items` tool doesn't use any complex LCEL logic inside it; it only invokes an LLM.\n", "\n", "However, if the tool were more complex (e.g., using a RAG chain inside it) and you wanted to see more granular events from within the chain, you could use the astream events API.\n", "\n", "The example below only illustrates how to invoke the API.\n", "\n", "
\n", "<div class=\"admonition note\">\n", "    <p class=\"admonition-title\">Use async for the astream events API</p>\n", "    <p>\n", "        You should generally use `async` code (e.g., invoke the LLM with `ainvoke`) to be able to leverage the astream events API properly.\n", "    </p>\n", "</div>
" ] }, { "cell_type": "code", "execution_count": 7, "id": "c3acdec9-0a24-4348-921e-435c8ea6f9fe", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "|In| a| bedroom|,| you| might| find| the| following| items|:\n", "\n", "|1|.| **|Bed|**|:| The| central| piece| of| furniture| in| a| bedroom|,| typically| consisting| of| a| mattress| on| a| frame|,| where| people| sleep|.| It| often| includes| bedding| such| as| sheets|,| blankets|,| and| pillows| for| comfort|.\n", "\n", "|2|.| **|Ward|robe|**|:| A| large|,| tall| cupboard| or| fre|estanding| piece| of| furniture| used| for| storing| clothes|.| It| may| have| hanging| space|,| shelves|,| and| sometimes| drawers| for| organizing| garments| and| accessories|.\n", "\n", "|3|.| **|Night|stand|**|:| A| small| table| or| cabinet| placed| beside| the| bed|,| used| for| holding| items| like| a| lamp|,| alarm| clock|,| books|,| or| personal| belongings| that| might| be| needed| during| the| night| or| early| morning|.||" ] } ], "source": [ "from langchain_core.messages import HumanMessage\n", "\n", "async for event in agent.astream_events(\n", " {\"messages\": [{\"role\": \"user\", \"content\": \"what's in the bedroom.\"}]}, version=\"v2\"\n", "):\n", " if (\n", " event[\"event\"] == \"on_chat_model_stream\"\n", " and event[\"metadata\"].get(\"langgraph_node\") == \"tools\"\n", " ):\n", " print(event[\"data\"][\"chunk\"].content, end=\"|\", flush=True)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 5 }