DeepLearning.AI - LangChain - Tavily: Persistence and Streaming
{
"metadata": {
"kernelspec": {
"name": "python",
"display_name": "Python (Pyodide)",
"language": "python"
},
"language_info": {
"codemirror_mode": {
"name": "python",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8"
}
},
"nbformat_minor": 5,
"nbformat": 4,
"cells": [
{
"id": "5945bec2-a816-458c-8e96-a7775c6d86dd",
"cell_type": "markdown",
"source": "<img src=\"https://media.licdn.com/dms/image/sync/v2/D5627AQGTZahmqVia0w/articleshare-shrink_800/articleshare-shrink_800/0/1735447634970?e=2147483647&v=beta&t=f8-WnRWXXPIOJzOk74aASHT6dfSRE-syA_kxPxjWuSM\"/>",
"metadata": {}
},
{
"id": "0ab846d8-33d4-4585-b376-660f60f24308",
"cell_type": "markdown",
"source": "# Lesson 4: Persistence and Streaming",
"metadata": {}
},
{
"id": "aa401d1b-fc24-4d5a-9a1e-91b8ab0dffaa",
"cell_type": "code",
"source": "# Loads environment variables from a file called '.env'. \n# This function does not return data directly, \n# but loads the variables into the runtime environment.\nfrom dotenv import load_dotenv\n\n# load environment variables from a '.env' file into the \n# current directory or process's environment\n# This is our OpenAI API key\n_ = load_dotenv()",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
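{
"id": "2f7c1a9e-10aa-4b3d-8c61-7d0b4e2f9a10",
"cell_type": "markdown",
"source": "For reference, `load_dotenv()` reads plain `KEY=value` pairs from a `.env` file in the working directory. A minimal sketch (the key names below are the ones this lesson relies on; the values shown are hypothetical):\n\n```python\nimport os\n\n# Hypothetical '.env' contents (keep this file out of version control):\n#\n#   OPENAI_API_KEY=sk-...\n#   TAVILY_API_KEY=tvly-...\n#\n# After load_dotenv(), the keys are visible to the process:\nprint(\"OPENAI_API_KEY set:\", bool(os.getenv(\"OPENAI_API_KEY\")))\nprint(\"TAVILY_API_KEY set:\", bool(os.getenv(\"TAVILY_API_KEY\")))\n```",
"metadata": {}
},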
{
"id": "9cf6b5be-d117-4d55-afad-a85a35ef8753",
"cell_type": "code",
"source": "# 'StateGraph' and 'END' are used to construct graphs. \n# 'StateGraph' allows nodes to communicate, by reading and writing to a common state. \n# The 'END' node is used to signal the completion of a graph, \n# ensuring that cycles eventually conclude.\nfrom langgraph.graph import StateGraph, END\n\n# The typing module in Python, which includes 'TypedDict' and 'Annotated', \n# provides tools for creating advanced type annotations. \n# 'TypedDict allows you to define {dictionaries}={messages} with specific types for each 'key',\n# while 'Annotated' ADDS new data or messages values to LangChain types.\n# 'TypedDict' and 'Annotated' are used to construct the class AgentState()\nfrom typing import TypedDict, Annotated\n\n# 'operator' module provides efficient functions that correspond to the \n# language's intrinsic operators. It offers functions for mathematical, logical, relational, \n# bitwise, and other operations. For example, operator.add(x, y) is equivalent to x + y.\n# It's useful for situations where you need to treat 'operators' as 'functions()'.\n# 'operator' is used to construct the class AgentState()\nimport operator\n\n# Messages in LangChain are classified into different roles.\n# 'SystemMessage' <- 'system', 'HumanMessage' <- 'user', 'ToolMessage' <- 'assistant'\n# 'AnyMessage' <- 'any other'\nfrom langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage\n\n# To start using OpenAI chat models on Langchain, you need to install \n# the 'langchain-openai' library and set the 'OPENAI_API_KEY' environment variable\n# to your OpenAI API key.\n# This is a container/wrapper of OpenAI API in LangChain, exposing a standard\n# interface for ALL Language Models (LM). It means that even we'll use 'ChatOpenAI',\n# we can change it to any other different Language Model (LM) provider, that\n# LangChain supports, without changing any other lines of code.\nfrom langchain_openai import ChatOpenAI\n\n# Import 'Tavily' tool to be used as search engine.\n# The 'TavilySearchResults' tool allows you to perform queries in \n# the Tavily Search API, returning results in JSON / {message} format.\nfrom langchain_community.tools.tavily_search import TavilySearchResults",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
{
"id": "59e76714-1fee-4c5d-b9e2-05ac53d9eae7",
"cell_type": "code",
"source": "# Create the 'Tavily' tool to be used as search engine, by initializing\n# 'TavilySearchResults' with 'max_results=2', meaning we'll only\n# only get back (4) max responses from the search API.\ntool = TavilySearchResults(max_results=2)\n\n# Display Tavily 'tool' type -> \n# <class 'langchain_community.tools.tavily_search.tool.TavilySearchResults'>\nprint('Tavily tool type:',type(tool))\n\n# Display Tavily 'tool' name -> 'tavily_search_results_json' \nprint('Tavily tool name:',tool.name)",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
{
"id": "7db53396-f6aa-4e25-b2d2-5e0c9c5b352d",
"cell_type": "markdown",
"source": "```\nTavily tool type: <class 'langchain_community.tools.tavily_search.tool.TavilySearchResults'>\nTavily tool name: tavily_search_results_json\n```",
"metadata": {}
},
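{
"id": "5b8d3c2f-41e6-4f0a-9c7d-3a1e8b6f2c44",
"cell_type": "markdown",
"source": "Before wiring the tool into a graph, it can help to call it directly. A minimal sketch (not part of the original lesson): `TavilySearchResults` is a standard LangChain tool, so `.invoke()` accepts the same `{'query': ...}` arguments the agent passes to it later in `take_action()`. It assumes a valid `TAVILY_API_KEY` was loaded from `.env`.\n\n```python\n# Hedged sanity check: invoke the Tavily tool outside the graph,\n# with the same args shape the agent will use -> t['args'].\n# Requires TAVILY_API_KEY in the environment (loaded via load_dotenv()).\nresults = tool.invoke({\"query\": \"current weather in San Francisco\"})\nprint(results)  # typically a list of {'url': ..., 'content': ...} dicts\n```",
"metadata": {}
},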
{
"id": "cf741224-7c49-49b4-b728-269a8351b162",
"cell_type": "code",
"source": "# Simple Agent State\nclass AgentState(TypedDict):\n \n # Annotated list of messages [ {message1}, {message2}, ...] \n # that will be ADDED overtime with ’operator.add‘, \n # key:value -> messages:list of messages -> messages:[ {message1}, {message2}, …] \n messages: Annotated[list[AnyMessage], operator.add]",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
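{
"id": "8c4f7a1d-92b3-4e58-a6f0-5d2c9e7b3a18",
"cell_type": "markdown",
"source": "The `Annotated[..., operator.add]` reducer is what makes `AgentState` accumulate rather than overwrite. A minimal sketch of the merge semantics using plain lists (illustrative only; LangGraph applies the reducer internally when a node returns `{'messages': [...]}`):\n\n```python\nimport operator\n\n# Existing state['messages'] and a node's returned update:\nhistory = [\"msg1\", \"msg2\"]\nupdate = [\"msg3\"]\n\n# operator.add(a, b) is equivalent to a + b, so the update is\n# concatenated onto the history instead of replacing it.\nmerged = operator.add(history, update)\nprint(merged)  # ['msg1', 'msg2', 'msg3']\n```",
"metadata": {}
},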
{
"id": "bfba281b-109d-4d3c-ba8f-ad58a64bb6b9",
"cell_type": "code",
"source": "# 'SqliteSaver()' class in LangGraph is used for saving checkpoints \n# in a SQLite database.\nfrom langgraph.checkpoint.sqlite import SqliteSaver\n\n# Create a 'SqliteSaver()' instance (obj) that saves data in memory, \n# rather than to a file on disk. The \":memory:\" parameter\n# specifies that the built-in (under the hood) SQLite database will be \n# created and maintained entirely in system RAM -> checkpoint (obj)\n# If we refresh the notebook, this saved SQLite database will disappear.\nmemory = SqliteSaver.from_conn_string(\":memory:\")",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
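{
"id": "e1a6d4b2-73c8-4d19-b5e7-9f0a2c8d6b31",
"cell_type": "markdown",
"source": "To keep checkpoints across notebook restarts, the connection string can point at a file on disk instead of `\":memory:\"`. A hedged sketch, assuming the same `langgraph` version used in this lesson (more recent releases turn `from_conn_string()` into a context manager, so check your installed version):\n\n```python\n# Sketch: file-backed checkpoints survive a kernel restart.\n# 'checkpoints.db' is a hypothetical filename.\ndisk_memory = SqliteSaver.from_conn_string(\"checkpoints.db\")\n```",
"metadata": {}
},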
{
"id": "f181c2f3-9d4a-4965-a75b-ba8acb61a229",
"cell_type": "code",
"source": "# Create AI 'Agent' class (obj)\nclass Agent:\n \n # This Agent will be parametrized by (3) things: A 'model' to use,\n # a 'tool or function' to call and a 'system' msg.\n def __init__(self, model, tools, checkpointer, system=\"\"):\n \n # As before, let's save the 'system' msg as a class \n # attribute/variable -> self.system, so it can be used/modified\n # by ALL functions/instances that compound the class\n self.system = system\n \n ### Start creating the graph (obj) ###\n \n # 1st initialize the 'StateGraph' with the 'AgentState' class as input\n # without any nodes or edges attached to it\n graph = StateGraph(AgentState)\n \n # Add 'call_openai()' function called 'llm' node. \n # Use 'self.function_name()'\n graph.add_node(\"llm\", self.call_openai)\n \n # Add 'take_action()' function called 'action' node\n # Use 'self.function_name()'\n graph.add_node(\"action\", self.take_action)\n \n # Add 'exists_action()' function as conditional edge\n # Use 'self.function_name()'\n # Edge Input -> 'llm' node \n # Question -> is there a recommended action?\n # {Dictionary}: How to MAP the response of the function\n # to the next node to go to.\n # if 'exists_action()' returns True -> Executes 'action' node, \n # if 'exists_action()' returns False -> Goes to 'END' node and it finishes\n graph.add_conditional_edges(\n \"llm\",\n self.exists_action,\n {True: \"action\", False: END}\n \n ) # Finish 'add_conditional_edges’\n \n # Add a regular edge 1st arg: Start of edge (->) 2nd arg: End of edge \n # From 'action' node -> To 'llm' node\n graph.add_edge(\"action\", \"llm\")\n \n # Set the entry point of the graph as 'llm' node\n graph.set_entry_point(\"llm\")\n \n # 'obj.compile()' the graph and updates/overwrite at the same graph (obj)\n # Use 'self.obj' to save this as an attribute/variable over ALL the class\n # Do this after we've done all the setups,\n # and we'll turn it into a LangChain runnable/executable.\n # A LangChain runnable exposes a standard interface for calling\n # and invoking this graph (obj).\n self.graph = graph.compile(checkpointer = checkpointer)\n \n # We'll also save the tools (obj) that we passed\n # We'll pass in, the list of the tools passed into the 'Agent' class\n # Create a dictionary, Getting the 'name' of the tool\n # with 'tool.name' as key, and ’t’ tool as value. \n # Save that {dictionary} as an attribute/variable\n # used over ALL the class, updating/overwritting 'self.tools'\n self.tools = {t.name: t for t in tools}\n \n # We'll also save the model (obj) that we passed\n # This is letting the model (LLM) to bind /enlazar/ tools\n # to know that it has these tools available to call\n self.model = model.bind_tools(tools)\n \n ### Finish creating graph (obj) ###\n \n ### Finish initializing Agent class general attributes/variables ###\n \n ### Implement 'functions()' as methods on the 'Agent' class ###\n \n # Create 'function()' representing the 'conditional edge' node.\n # After this function is executed, then 'graph.add_conditional_edges()' \n # will return 'True' key, when the previous model 'llm' node, recommended an 'action' \n # to take on, so it executes the 'action' node. 
\n # Otherwise it returns 'False' key, so this will execute the 'END' node and finish \n # it also take in the 'AgentState' class, as input.\n def exists_action(self, state: AgentState):\n \n # At state:{AgentState dict} -> state:{'messages':list of messages}\n # so select key 'messages' obtaining\n # state['messages'] = list of messages = [ {message1}, {message2}, …]\n # Then select the LAST message from state with [-1] -> {message_N} \n # which is the most recent calling response, from the language model. \n result = state['messages'][-1]\n \n # We're going to return a 'True' boolean, when the lenght of 'result.tool_calls' > 0\n # This means, if there's any 'tool_calls()' attribute or a [list of tool_calls], \n # (so its len > 0), we're going to return 'True'.\n # If there's NOT then, we're going to return 'False'.\n return len(result.tool_calls) > 0\n \n # Create 'function()' representing the 'llm' node\n # Use 'AgentState' class as input argument, ALL the nodes and the edges. \n # 'AgentState' is a historic dict of messages -> messages:{list of messages}\n # key:value -> state:AgentState -> state:{AgentState dict}\n def call_openai(self, state: AgentState):\n \n # At state:{AgentState dict} -> state:{'messages':list of messages}\n # so select key 'messages' obtaining\n # state['messages'] = list of messages = [ {message1}, {message2}, …]\n messages = state['messages']\n \n # If 'self.system'='system' msg/prompt is NOT empty\n if self.system:\n \n # Add list of messages = [ {message1}, {message2}, …] to\n # 'system' msg and OVERWRITE 'messages' list of messages\n messages = [SystemMessage(content=self.system)] + messages\n \n # Then call the 'self.model' (LLM) using \n # 'self.model.invoke(list of messages)' method,\n # and return a new 'assistant' msg called 'message' -> {message}\n message = self.model.invoke(messages)\n \n # UPDATE the returned dictionary \n # (Use the same 'messages' key name, as we did at 'AgentState')\n # and include only new 'assistant' msg at 'messages' key -> {message} in a list \n # {'messages': list with assistant msg} -> {'messages': [ {message} ] }\n # Because we have ANNOTATED the 'messages' key attribute, on the\n # 'AgentState' with the 'operators.add' this ISN'T overwriting this.\n # It's ADDED to historic of 'messages' at 'AgentState'\n return {'messages': [message]}\n \n # Create 'function()' representing the 'action' node, \n # done to execute the recommended 'action' by Agent/Assistant\n def take_action(self, state: AgentState):\n \n # At state:{AgentState dict} -> state:{'messages':list of messages}\n # so select key 'messages' obtaining\n # state['messages'] = list of messages = [ {message1}, {message2}, …]\n # Then select the LAST message with [-1] -> {message_N}\n # If we have gotten into this state, the Language model must have\n # recommended an 'action' to be done, so we need to call some tools,\n # to execute that recommended 'action. 
That means there will be the\n # 'tool_calls()' attribute to execute this LAST 'action' msg \n # present in the 'AgentState' historic list of messages.\n # Then UPDATE/OVERWRITE the 'tool_calls()' attribute\n tool_calls = state['messages'][-1].tool_calls\n \n # Initialize ‘results‘ [] list as an empty list [], \n # to be filled with ’assistant’ responses\n results = []\n \n # 'tool_calls()' can also be a [list of tool_calls], so a lot of the\n # modern models support parallel tool or parallel function calling\n # so we can LOOP over these [list of tool_calls] and assign each\n # tool_call() to be executed as 't'.\n for t in tool_calls:\n print(f\"Calling: {t}\") # Display iterated tool\n \n # Find if recommended tool 't' name -> t['name'] is NOT included \n # in the dictionary of tools -> 'self.tools' = {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()} \n # that we created, so, we check for bad tool name from LLM.\n if not t['name'] in self.tools:\n \n # When tool t['name'] is NOT included in dict of tools -> \n # 'self.tools' = {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()} \n # then print \"bad tool name\" \n print(\"\\n ....bad tool name....\")\n \n # And instruct/prompt a ’user’ msg to LLM ‘assistant‘, \n # to retry when it's \"bad tool name\"\n result = \"bad tool name, retry\"\n \n # Otherwise, when recommended tool 't' name -> t['name'] is included in the dictionary of tools -> \n # 'self.tools'= {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()} that we created.\n else:\n \n # Get the 'name' -> t['name'] of each iterated tool t = 'tool()', \n # and select each tool at 'self.tools' = {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()}\n # dictionary with that 'name'.\n # Then call '.invoke()' method passing in the input arguments \n # of each t = 'tool()' function call -> t['args'] -> 'args': {'query': 'user msg/prompt'}\n # Execute t='tool()' or 'action', so get a \n # resultant assistant 'string' -> Observation\n result = self.tools[t['name']].invoke(t['args'])\n \n # Append the previous assistant 'string' observation, as a 'ToolMessage' containing\n # \"tool_id, tool_name, observation\" into a 'results' list -> \n # [{\"tool_id, tool_name, observation\"1}, {\"tool_id, tool_name, observation\"2}, ...]\n # each iteration\n results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))\n \n # Let's get back to graph beginning, at 'llm' node.\n print(\"Back to the model!\")\n \n # UPDATE the returned dictionary \n # (Use the same 'messages' key name, as we did at 'AgentState')\n # Add 'results' list of observations at 'messages' key -> \n # [{\"tool_id, tool_name, observation\"1}, {\"tool_id, tool_name, observation\"2}, ...,\n # {\"tool_id, tool_name, observation\"N}]\n # {'messages': results} -> {'messages': list of observations} -> \n # {'messages': [{\"tool_id, tool_name, observation\"1}, {\"tool_id, tool_name, observation\"2}, ...,\n # {\"tool_id, tool_name, observation\"N}]\n # Because we have ANNOTATED the 'messages' key attribute, on the\n # 'AgentState' with the 'operators.add' this ISN'T overwriting this.\n # It's ADDED to historic of 'messages' at 'AgentState'\n return {'messages': results}",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
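{
"id": "c7e2a9f4-1b6d-4c83-95a0-8e3d5f1b7c26",
"cell_type": "markdown",
"source": "Before running the graph, the routing logic can be exercised on its own. A minimal sketch (not from the lesson): since `exists_action()` never touches `self`, it can be called unbound with `None`, using hand-built `AIMessage` objects to stand in for model output.\n\n```python\nfrom langchain_core.messages import AIMessage\n\n# Hypothetical model outputs: one that requests a tool call, one that doesn't.\nwith_call = AIMessage(content=\"\", tool_calls=[\n    {\"name\": \"tavily_search_results_json\", \"args\": {\"query\": \"sf weather\"}, \"id\": \"call_0\"}\n])\nwithout_call = AIMessage(content=\"All done.\")\n\n# True -> the graph would route to the 'action' node; False -> END.\nprint(Agent.exists_action(None, {\"messages\": [with_call]}))     # True\nprint(Agent.exists_action(None, {\"messages\": [without_call]}))  # False\n```",
"metadata": {}
},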
{
"id": "5bcc8a10-4ff5-46c5-8170-bb338901e81b",
"cell_type": "code",
"source": "# 'system' msg / prompt\nprompt = \"\"\"You are a smart research assistant. Use the search engine to look up information. \\\nYou are allowed to make multiple calls (either together or in sequence). \\\nOnly look up information when you are sure of what you want. \\\nIf you need to look up some information before asking a follow up question, you are allowed to do that!\n\"\"\"\n# Use \"gpt-4o\" as model from OpenAI\nmodel = ChatOpenAI(model=\"gpt-4o\")\n\n# Call AI 'Agent' class with inputs: \n# model = ChatOpenAI(model= \"gpt-4o\") \n# tools=[tools(objs)]=[Search Engine(obj)]=[TavilySearchResults(obj)]=[tool(obj)] \n# system = 'system' msg/prompt\n# checkpointer=memory SQLite database for SAVING data -> checkpoint(obj)\nabot = Agent(model, [tool], system=prompt, checkpointer=memory)",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
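{
"id": "9d5b3e7c-28a4-4f61-b0c9-4a7e2d8f5b63",
"cell_type": "markdown",
"source": "Optionally, the compiled graph's topology can be inspected. A minimal sketch, assuming a `langgraph` version whose compiled graphs expose `get_graph()` with a Mermaid renderer:\n\n```python\n# Print the graph structure as a Mermaid diagram (plain text):\n# llm --(exists_action: True)--> action --> llm; False --> END.\nprint(abot.graph.get_graph().draw_mermaid())\n```",
"metadata": {}
},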
{
"id": "9de371ce-8dbe-4aaf-b31c-5090e78151de",
"cell_type": "code",
"source": "# 'user' msg = \"What is the weather in sf?\" \n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages]. \n# We have to do this, because the simple 'StateAgent' class expects \n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it \n# conform with that.\nmessages = [HumanMessage(content=\"What is the weather in sf?\")]",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
{
"id": "b66798e3-f866-4e67-a77f-fd4cfcc3e1ca",
"cell_type": "code",
"source": "# thread config /hilo configurable/ used to keep track \n# of different threads /hilos/ inside the persistent\n# checkpointer/checkpoint. Used for having MULTIPLE CONVERSATIONS \n# with MANY 'users’ going on at the same time.\n# dict {} <- with inner dict {} as value, of \"configurable\" key\nthread = {\"configurable\": {\"thread_id\": \"1\"}}",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
{
"id": "c0187e22-7a1f-48e7-b234-6e07446a5d41",
"cell_type": "code",
"source": "# Call 'events=agent(obj).graph.stream({\"messages\": messages}, thread)' \n# including list of messages dict -> {\"messages\": [messages]} and also \n# the 'thread config' dict -> {\"configurable\": {\"thread_id\": \"1\"}}\n# We're going to get back a 'stream of events', that represents UPDATES\n# to 'AgentState', over time. \nevents = abot.graph.stream({\"messages\": messages}, thread)\n\n# Extract each event per iteration. \nfor event in events:\n \n # Extract event dict 'values' -> {v} per iter, \n # with 'event.values()' method. \n for v in event.values():\n \n # From dict {v}, select 'messages' key to get\n # dict values each inner loop iteration.\n print(v['messages'])",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
{
"id": "9abcc982-9b82-43a1-aea3-2465621088c7",
"cell_type": "markdown",
"source": "```\n[AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_bmfLa92f6oAIKN9KvXtqbKDz', 'function': {'arguments': '{\"query\":\"current weather in San Francisco\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 151, 'total_tokens': 173, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_831e067d82', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-0ce2232f-abce-4934-9a91-12e7389c00d3-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_bmfLa92f6oAIKN9KvXtqbKDz'}])]\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_bmfLa92f6oAIKN9KvXtqbKDz'}\nBack to the model!\n[ToolMessage(content=\"[{'url': 'https://www.weather25.com/north-america/usa/california/san-francisco?page=month&month=July', 'content': 'weather25.com\\\\nSearch\\\\nweather in United States\\\\nRemove from your favorite locations\\\\nAdd to my locations\\\\nShare\\\\nweather in United States\\\\n\\\\n# San Francisco weather in July 2025\\\\n\\\\nMist\\\\nMist\\\\nCloudy\\\\nPatchy rain possible\\\\nClear\\\\nSunny\\\\nPartly cloudy\\\\nPartly cloudy\\\\nClear\\\\nPartly cloudy\\\\nSunny\\\\nPatchy rain possible\\\\nClear\\\\nPatchy rain possible\\\\n\\\\n## The average weather in San Francisco in July\\\\n\\\\nThe temperatures in San Francisco in July are comfortable with low of 14°C and and high up to 25°C. [...] | Sun | Mon | Tue | Wed | Thu | Fri | Sat |\\\\n| --- | --- | --- | --- | --- | --- | --- |\\\\n| | | 1 Cloudy 20° /12° | 2 Partly cloudy 20° /12° | 3 Sunny 20° /12° | 4 Sunny 20° /12° | 5 Sunny 20° /13° |\\\\n| 6 Sunny 19° /13° | 7 Mist 19° /12° | 8 Sunny 20° /12° | 9 Partly cloudy 20° /13° | 10 Partly cloudy 21° /13° | 11 Sunny 22° /13° | 12 Sunny 20° /13° | [...] | 13 Partly cloudy 19° /12° | 14 Partly cloudy 20° /13° | 15 Partly cloudy 20° /13° | 16 Sunny 20° /13° | 17 Partly cloudy 20° /12° | 18 Partly cloudy 20° /13° | 19 Partly cloudy 19° /13° |\\\\n| 20 Partly cloudy 19° /13° | 21 Partly cloudy 19° /12° | 22 Sunny 20° /13° | 23 Mist 16° /14° | 24 Mist 16° /14° | 25 Cloudy 16° /13° | 26 Patchy rain possible 17° /14° |\\\\n| 27 Sunny 18° /14° | 28 Sunny 18° /13° | 29 Partly cloudy 17° /14° | 30 Partly cloudy 18° /14° | 31 Sunny 19° /14° | | |'}, {'url': 'https://www.accuweather.com/en/us/san-francisco/94103/july-weather/347629', 'content': 'Get the monthly weather forecast for San Francisco, CA, including daily high/low, historical averages, to help you plan ahead.'}]\", name='tavily_search_results_json', tool_call_id='call_bmfLa92f6oAIKN9KvXtqbKDz')]\n[AIMessage(content=\"I couldn't find the current weather in San Francisco from the sources I checked. 
You might want to visit a weather website like [Weather.com](https://www.weather.com) or [AccuWeather](https://www.accuweather.com) for the most up-to-date information.\", response_metadata={'token_usage': {'completion_tokens': 57, 'prompt_tokens': 793, 'total_tokens': 850, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_07871e2ad8', 'finish_reason': 'stop', 'logprobs': None}, id='run-8a72079b-1612-4211-89bb-0aa2fdbaaf70-0')]\n```",
"metadata": {}
},
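{
"id": "6a3f8d1e-57c2-4b94-a8e6-2d9c4f7a1e58",
"cell_type": "markdown",
"source": "Because every step is checkpointed per `thread_id`, the accumulated `AgentState` can be read back at any time. A minimal sketch, assuming this `langgraph` version, where compiled graphs with a checkpointer expose `get_state()`:\n\n```python\n# Inspect the persisted state for thread \"1\":\n# snapshot.values['messages'] holds the full history so far\n# (HumanMessage, AIMessage with tool_calls, ToolMessage, ...).\nsnapshot = abot.graph.get_state(thread)\nfor m in snapshot.values[\"messages\"]:\n    print(type(m).__name__, \"->\", str(m.content)[:80])\n```",
"metadata": {}
},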
{
"id": "7853ab37-4901-4b7b-a27f-8707568479e7",
"cell_type": "code",
"source": "# 'user' msg = \"What about in la?\" -> ¿Y qué pasa en Los Ángeles?\n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages]. \n# We have to do this, because the simple 'StateAgent' class expects \n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it \n# conform with that.\nmessages = [HumanMessage(content=\"What about in la?\")]\n\n# thread config /hilo configurable/ used to keep track \n# of different threads /hilos/ inside the persistent\n# checkpointer/checkpoint. Used for having MULTIPLE CONVERSATIONS \n# with MANY 'users’ going on at the same time.\n# dict {} <- with inner dict {} as value, of \"configurable\" key\n# Pass the same {\"thread_id\" : “1”} to make sure we’re continuing from \n# that same point of conversation, so we would expect it to realize \n# we’re asking about weather, without being explicit.\nthread = {\"configurable\": {\"thread_id\": \"1\"}}\n\n# Call 'events=agent(obj).graph.stream({\"messages\": messages}, thread)' \n# including list of messages dict -> {\"messages\": [messages]} and also \n# the 'thread config' dict -> {\"configurable\": {\"thread_id\": \"1\"}}\n# We're going to get back a 'stream of events', that represents UPDATES\n# to 'AgentState', over time.\nevents = abot.graph.stream({\"messages\": messages}, thread)\n\n# Extract each event per iteration.\nfor event in events:\n \n # Extract event dict 'values' -> {v} per iter, \n # with 'event.values()' method.\n for v in event.values():\n \n # From dict {v}, select 'messages' key to get\n # dict values each inner loop iteration.\n print(v)",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
{
"id": "e26fed05-6f00-42a0-a26b-c2a8375780d7",
"cell_type": "markdown",
"source": "```\n{'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_OMC14omIDdYzaQgSqeHerus6', 'function': {'arguments': '{\"query\":\"current weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 862, 'total_tokens': 884, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_07871e2ad8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-da14193c-e35f-47e3-9320-c952d8cbdeed-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Los Angeles'}, 'id': 'call_OMC14omIDdYzaQgSqeHerus6'}])]}\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Los Angeles'}, 'id': 'call_OMC14omIDdYzaQgSqeHerus6'}\nBack to the model!\n{'messages': [ToolMessage(content=\"[{'url': 'https://world-weather.info/forecast/usa/los_angeles/july-2025/', 'content': 'Detailed ⚡ Los Angeles Weather Forecast for July 2025 – day/night 🌡 ... Wednesday, 23 July. +63°. Day. +79°. Clear sky. Thursday, 24 July. +63°. Day. +79'}, {'url': 'https://www.weather25.com/north-america/usa/california/los-angeles?page=month&month=July', 'content': 'weather25.com\\\\nSearch\\\\nweather in United States\\\\nRemove from your favorite locations\\\\nAdd to my locations\\\\nShare\\\\nweather in United States\\\\n\\\\n# Los Angeles weather in July 2025\\\\n\\\\nCloudy\\\\nPartly cloudy\\\\nCloudy\\\\nClear\\\\nClear\\\\nClear\\\\nClear\\\\nClear\\\\nClear\\\\nClear\\\\nClear\\\\nClear\\\\nClear\\\\nClear\\\\n\\\\n## The average weather in Los Angeles in July\\\\n\\\\nThe weather in Los Angeles in July is hot. The average temperatures are between 20°C and 30°C. [...] | 20 Sunny 29° /21° | 21 Partly cloudy 29° /21° | 22 Sunny 29° /20° | 23 Cloudy 26° /17° | 24 Partly cloudy 29° /17° | 25 Cloudy 25° /17° | 26 Sunny 27° /18° |\\\\n| 27 Sunny 28° /19° | 28 Sunny 31° /21° | 29 Sunny 32° /22° | 30 Sunny 32° /23° | 31 Sunny 34° /23° | | | [...] | Sun | Mon | Tue | Wed | Thu | Fri | Sat |\\\\n| --- | --- | --- | --- | --- | --- | --- |\\\\n| | | 1 Sunny 28° /19° | 2 Sunny 28° /19° | 3 Sunny 28° /19° | 4 Sunny 29° /19° | 5 Sunny 29° /19° |\\\\n| 6 Sunny 29° /20° | 7 Sunny 28° /19° | 8 Cloudy 29° /20° | 9 Sunny 30° /20° | 10 Partly cloudy 31° /21° | 11 Sunny 31° /20° | 12 Sunny 30° /21° |\\\\n| 13 Partly cloudy 29° /20° | 14 Sunny 29° /20° | 15 Sunny 29° /20° | 16 Sunny 29° /21° | 17 Sunny 29° /21° | 18 Partly cloudy 29° /20° | 19 Sunny 29° /21° |'}]\", name='tavily_search_results_json', tool_call_id='call_OMC14omIDdYzaQgSqeHerus6')]}\n{'messages': [AIMessage(content=\"I couldn't find the current weather in Los Angeles either from the sources I checked. 
For real-time weather updates, you can visit websites like [Weather.com](https://www.weather.com) or [AccuWeather](https://www.accuweather.com).\", response_metadata={'token_usage': {'completion_tokens': 52, 'prompt_tokens': 1490, 'total_tokens': 1542, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_07871e2ad8', 'finish_reason': 'stop', 'logprobs': None}, id='run-dbb8c553-bc35-4ac0-8e1c-1a139a739023-0')]}\n```",
"metadata": {}
},
{
"id": "6be55103-41ff-4fa0-8669-d59d046aff0f",
"cell_type": "code",
"source": "# 'user' msg = \"Which one is warmer?\" -> ¿Cuál es más cálido?\n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages]. \n# We have to do this, because the simple 'StateAgent' class expects \n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it \n# conform with that.\nmessages = [HumanMessage(content=\"Which one is warmer?\")]\n\n# thread config /hilo configurable/ used to keep track \n# of different threads /hilos/ inside the persistent\n# checkpointer/checkpoint. Used for having MULTIPLE CONVERSATIONS \n# with MANY 'users’ going on at the same time.\n# dict {} <- with inner dict {} as value, of \"configurable\" key\n# Pass the same {\"thread_id\" : “1”} to make sure we’re continuing from \n# that same point of conversation, so we would expect it to realize we’re asking about, \n# which of them (sf or la) has a warmer weather.\nthread = {\"configurable\": {\"thread_id\": \"1\"}}\n\n# Call 'events=agent(obj).graph.stream({\"messages\": messages}, thread)' \n# including list of messages dict -> {\"messages\": [messages]} and also \n# the 'thread config' dict -> {\"configurable\": {\"thread_id\": \"1\"}}\n# We're going to get back a 'stream of events', that represents UPDATES\n# to 'AgentState', over time.\nevents = abot.graph.stream({\"messages\": messages}, thread)\n\n# Extract each event per iteration.\nfor event in events:\n \n # Extract event dict 'values' -> {v} per iter, \n # with 'event.values()' method.\n for v in event.values():\n \n # From dict {v}, select 'messages' key to get\n # dict values each inner loop iteration.\n print(v)",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
{
"id": "8ab40a24-c147-4ca1-ad6c-56c5079cf06e",
"cell_type": "markdown",
"source": "```\n{'messages': [AIMessage(content=\"To determine which city is currently warmer, I need specific current temperature data for both San Francisco and Los Angeles. Since I couldn't find that information from the sources I checked, you might want to look at a weather service like [Weather.com](https://www.weather.com) to compare the temperatures directly. Generally, Los Angeles tends to be warmer than San Francisco due to its more southern location and less coastal influence.\", response_metadata={'token_usage': {'completion_tokens': 83, 'prompt_tokens': 1554, 'total_tokens': 1637, 'prompt_tokens_details': {'cached_tokens': 1536, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_07871e2ad8', 'finish_reason': 'stop', 'logprobs': None}, id='run-e2d69863-2562-41fc-9788-ad1858b18cfb-0')]}\n```",
"metadata": {}
},
{
"id": "c847e6cd-707a-40e0-833b-10e406c8cfb9",
"cell_type": "code",
"source": "# 'user' msg = \"Which one is warmer?\" -> ¿Cuál es más cálido?\n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages]. \n# We have to do this, because the simple 'StateAgent' class expects \n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it \n# conform with that.\nmessages = [HumanMessage(content=\"Which one is warmer?\")]\n\n# thread config /hilo configurable/ used to keep track \n# of different threads /hilos/ inside the persistent\n# checkpointer/checkpoint. Used for having MULTIPLE CONVERSATIONS \n# with MANY 'users’ going on at the same time.\n# dict {} <- with inner dict {} as value, of \"configurable\" key\n# Change {\"thread_id\" : “2”} so we’re NOT continuing with \n# that same point of conversation.\nthread = {\"configurable\": {\"thread_id\": \"2\"}}\n\n# Call 'events=agent(obj).graph.stream({\"messages\": messages}, thread)' \n# including list of messages dict -> {\"messages\": [messages]} and also \n# the 'thread config' dict -> {\"configurable\": {\"thread_id\": \"2\"}}\n# We're going to get back a 'stream of events', that represents UPDATES\n# to 'AgentState', over time.\nevents = abot.graph.stream({\"messages\": messages}, thread)\n\n# Extract each event per iteration.\nfor event in events:\n \n # Extract event dict 'values' -> {v} per iter, \n # with 'event.values()' method.\n for v in event.values():\n \n # From dict {v}, select 'messages' key to get\n # dict values each inner loop iteration.\n print(v)",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
{
"id": "f4e8516f-9d31-4ca2-964c-05704df5de53",
"cell_type": "markdown",
"source": "```\n{'messages': [AIMessage(content=\"Could you please clarify what you're comparing to determine which is warmer? Are you comparing two specific locations, types of clothing, materials, or something else? Let me know so I can provide the appropriate information.\", response_metadata={'token_usage': {'completion_tokens': 43, 'prompt_tokens': 149, 'total_tokens': 192, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_f9f4fb6dbf', 'finish_reason': 'stop', 'logprobs': None}, id='run-7276d754-43be-46c0-ba78-fd0cc73f46c8-0')]}\n```",
"metadata": {}
},
{
"id": "b8f29888-d659-405b-9983-4b8897c4594d",
"cell_type": "markdown",
"source": "## Streaming tokens",
"metadata": {}
},
{
"id": "9ec20911-d99e-4987-8550-139ca35eae97",
"cell_type": "code",
"source": "# The 'AsyncSqliteSaver' class allows you to save LangGraph checkpoints \n# asynchronously using a SQLite database.\n# This is useful for saving application state and being able to resume \n# processing in the event of interruptions or for saving history.\nfrom langgraph.checkpoint.aiosqlite import AsyncSqliteSaver\n\n# The \":memory:\" connection string in Async SQL Saver indicates \n# the use of an in-memory Async SQLite database. This means the database \n# is created in RAM and destroyed when the connection is closed. \n# This is useful for testing situations where a temporary database is needed\nmemory = AsyncSqliteSaver.from_conn_string(\":memory:\")\n\n# Call AI 'Agent' class with inputs: \n# model = ChatOpenAI(model= \"gpt-4o\") \n# tools=[tools(objs)]=[Search Engine(obj)]=[TavilySearchResults(obj)]=[tool(obj)] \n# system = 'system' msg/prompt\n# checkpointer=memory Async SQLite database for SAVING data -> checkpoint(obj)\nabot = Agent(model, [tool], system=prompt, checkpointer=memory)",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
{
"id": "a2d8b677-183e-4be9-ad31-ccf021f27602",
"cell_type": "code",
"source": "# 'user' msg = \"What is the weather in sf?\" \n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages]. \n# We have to do this, because the simple 'StateAgent' class expects \n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it \n# conform with that.\nmessages = [HumanMessage(content=\"What is the weather in SF?\")]\n\n# thread config /hilo configurable/ used to keep track \n# of different threads /hilos/ inside the persistent\n# checkpointer/checkpoint. Used for having MULTIPLE CONVERSATIONS \n# with MANY 'users’ going on at the same time.\n# dict {} <- with inner dict {} as value, of \"configurable\" key\n# Change {\"thread_id\" : “4”} so we’re NOT continuing with \n# a same point of conversation. The 'Agent’ will start a conversation \n# from fresh /zero/.\nthread = {\"configurable\": {\"thread_id\": \"4\"}}\n\n# Events related to data transmission (asynchronously) from a chat model.\n# '.astream_events()’ is an asynchronous method, which means we’re \n# gonna need to use an async checkpointer(obj) for doing checkpoint.\naevents = abot.graph.astream_events({\"messages\": messages}, thread, version=\"v1\") \n\n# The 'async for' loop iterates over events that are generated \n# asynchronously, allowing you to process events as they arrive.\n# Asynchronous events represent UPDATES of stream.\n# Extract one (1) async event -> {dict} each iteration\nasync for event in aevents:\n \n # Select \"event\" key from each {event} dict, \n # so extract its value -> \"string with kind of event\"\n kind = event[\"event\"]\n \n # When \"string with kind of event\" == \"on_chat_model_stream\"\n # which corrresponds to the arrive of fundamental units of text \n # NEW TOKENS(words, subwords, characters, punctuation marks)\n # that the model processes.\n if kind == \"on_chat_model_stream\":\n \n # Then get from {event} dict -> From all data -> \n # chunk /pedazos/ -> '.content' TOKENS\n content = event[\"data\"][\"chunk\"].content\n \n # If there exist TOKENS at each event (NOT EMPTY)\n if content:\n \n # Empty content in the context of 'OpenAI' LLM means\n # that the model is asking for a tool()/function()/action() \n # to be invoked / executed / taken.\n # Print TOKENS content or NON-EMPTY content, \n # and separate them per event using \"|\" character, \n # at the end of the line. This way print multiple items \n # on a single line instead of adding them at a newline \"\\n\". \n print(content, end=\"|\")",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
},
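{
"id": "4e8a2c6d-93f1-4b27-a5d8-6c0e9b3f7a42",
"cell_type": "markdown",
"source": "The `v1` event stream carries more than token chunks; tool activity arrives as its own event kinds. A hedged sketch of watching for those alongside tokens, using the standard `astream_events` v1 kinds (`on_tool_start`, `on_tool_end`); it reuses the same `thread`, so it continues the conversation above:\n\n```python\n# Sketch: surface tool activity in the same async event loop.\nmessages = [HumanMessage(content=\"What about in LA?\")]\n\nasync for event in abot.graph.astream_events({\"messages\": messages}, thread, version=\"v1\"):\n    kind = event[\"event\"]\n    if kind == \"on_tool_start\":\n        print(f\"\\n[tool start] {event['name']}\")\n    elif kind == \"on_tool_end\":\n        print(f\"\\n[tool end] {event['name']}\")\n    elif kind == \"on_chat_model_stream\":\n        content = event[\"data\"][\"chunk\"].content\n        if content:\n            print(content, end=\"|\")\n```",
"metadata": {}
},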
{
"id": "11d381d2-a86e-42e1-9419-51763f692b02",
"cell_type": "markdown",
"source": "",
"metadata": {}
}
]
}