DeepLearning.AI - LangChain - Tavily: Lesson 5: Human in the Loop
| { | |
| "metadata": { | |
| "kernelspec": { | |
| "name": "xpython", | |
| "display_name": "Python 3.13 (XPython)", | |
| "language": "python" | |
| }, | |
| "language_info": { | |
| "file_extension": ".py", | |
| "mimetype": "text/x-python", | |
| "name": "python", | |
| "version": "3.13.1" | |
| } | |
| }, | |
| "nbformat_minor": 5, | |
| "nbformat": 4, | |
| "cells": [ | |
| { | |
| "id": "f470a3ab-849e-4347-89f1-5246f3ca63bd", | |
| "cell_type": "markdown", | |
| "source": "<img src=\"https://media.licdn.com/dms/image/sync/v2/D5627AQGTZahmqVia0w/articleshare-shrink_800/articleshare-shrink_800/0/1735447634970?e=2147483647&v=beta&t=f8-WnRWXXPIOJzOk74aASHT6dfSRE-syA_kxPxjWuSM\"/>", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "b936b8ed-2e57-43c8-ba9e-da3ad7c972c1", | |
| "cell_type": "markdown", | |
| "source": "# Lesson 5: Human in the Loop", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "a6feb031-daf0-4b39-b36d-59803870fca0", | |
| "cell_type": "markdown", | |
| "source": "Note: This notebook is running in a later version of langgraph that it was filmed with. The later version has a couple of key additions:\n- Additional state information is stored to memory and displayed when using `get_state()` or `get_state_history()`.\n- State is additionally stored every state transition while previously it was stored at an interrupt or at the end.\nThese change the command output slightly, but are a useful addtion to the information available.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "ae79ec89-140a-4f27-8f36-a09a6cf4d64c", | |
| "cell_type": "code", | |
| "source": "# Loads environment variables from a file called '.env'. \n# This function does not return data directly, \n# but loads the variables into the runtime environment.\nfrom dotenv import load_dotenv\n\n# load environment variables from a '.env' file into the \n# current directory or process's environment\n# This is our OpenAI API key\n_ = load_dotenv()", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "8371f0d5-3f4c-4daf-878a-76f65d26784f", | |
| "cell_type": "code", | |
| "source": "# 'StateGraph' and 'END' are used to construct graphs. \n# 'StateGraph' allows nodes to communicate, by reading and writing to a common state. \n# The 'END' node is used to signal the completion of a graph, \n# ensuring that cycles eventually conclude.\nfrom langgraph.graph import StateGraph, END\n\n# The typing module in Python, which includes 'TypedDict' and 'Annotated', \n# provides tools for creating advanced type annotations. \n# 'TypedDict allows you to define {dictionaries}={messages} with specific types for each 'key',\n# while 'Annotated' ADDS new data or messages values to LangChain types.\n# 'TypedDict' and 'Annotated' are used to construct the class AgentState()\nfrom typing import TypedDict, Annotated\n\n# 'operator' module provides efficient functions that correspond to the \n# language's intrinsic operators. It offers functions for mathematical, logical, relational, \n# bitwise, and other operations. For example, operator.add(x, y) is equivalent to x + y.\n# It's useful for situations where you need to treat 'operators' as 'functions()'.\n# 'operator' is used to construct the class AgentState()\nimport operator\n\n# Messages in LangChain are classified into different roles/types: \n# 'SystemMessage' <- 'system', 'HumanMessage' <- 'user', 'ToolMessage' <- 'observation' \n# 'AnyMessage' <- 'assistant'\nfrom langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage\n\n# To start using OpenAI chat models on Langchain, you need to install \n# the 'langchain-openai' library and set the 'OPENAI_API_KEY' environment variable\n# to your OpenAI API key.\n# This is a container/wrapper of OpenAI API in LangChain, exposing a standard\n# interface for ALL Language Models (LM). It means that even we'll use 'ChatOpenAI',\n# we can change it to any other different Language Model (LM) provider, that\n# LangChain supports, without changing any other lines of code.\nfrom langchain_openai import ChatOpenAI\n\n# Import 'Tavily' tool to be used as search engine.\n# The 'TavilySearchResults' tool allows you to perform queries in \n# the Tavily Search API, returning results in JSON / {message} format.\nfrom langchain_community.tools.tavily_search import TavilySearchResults\n\n# 'SqliteSaver()' class in LangGraph is used for saving checkpoints \n# in a SQLite database.\nfrom langgraph.checkpoint.sqlite import SqliteSaver\n\n# Create a 'SqliteSaver()' instance (obj) that saves data in memory, \n# rather than to a file on disk. The \":memory:\" parameter\n# specifies that the built-in (under the hood) SQLite database will be \n# created and maintained entirely in system RAM -> checkpoint (obj)\n# If we refresh the notebook, this saved SQLite database will disappear.\nmemory = SqliteSaver.from_conn_string(\":memory:\")", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "b3acfe02-db9c-4017-bbff-c41fc8df5c06", | |
| "cell_type": "code", | |
| "source": "# The 'uuid' module in Python, specifically the 'uuid4()' function, \n# is used to generate version 4 Universally Unique Identifiers (UUIDs).\n# These UUIDs are randomly generated 128-bit numbers, \n# making them virtually unique. \n# The 'uuid4()' function requires no arguments and \n# returns a directly usable UUID (object)\nfrom uuid import uuid4\n\n# Messages in LangChain are classified into different roles/types: \n# 'SystemMessage' <- 'system', 'HumanMessage' <- 'user', 'AIMessage' <- 'observation' \n# 'AnyMessage' <- 'from history list of messages'\nfrom langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, AIMessage\n\n\"\"\"\nIn previous examples we've annotated the `messages` state key\nwith the default `operator.add` or `+` reducer, which always\nappends new messages to the end of the existing messages array.\n\nNow, to support replacing existing messages, we annotate the\n`messages` key with a customer reducer function, which replaces\nmessages with the same `id`, and appends them otherwise.\n\"\"\"\n\n# list[AnyMessage] = {message1}, {message2}, {message3}, ...] = left = right\n# The 'reduce_messages()' function takes as input two (2) lists of messages\n# left = {message1}, {message2}, {message3}, ...] and right = {message1}, {message2}, {message3}, ...] \n# but returns just one (1) new list of messages ->\n# merged = [{message1},{message2},{message3}..., {append NEW message}].\ndef reduce_messages(left: list[AnyMessage], right: list[AnyMessage]) -> list[AnyMessage]:\n \n # Extract messages from list -> {message1}, {message2}, {message3}, ...] \n # right = {message1}, {message2}, {message3}, ...] so get\n # message = {message}\n for message in right:\n \n # message.id = message['id'] picks 'id' key from each message = {message}\n # to get 'id' value as 'string'.\n # That occurs when THERE ISN'T a 'message.id' value or ’String id‘ -> True \n if not message.id:\n \n # Use 'uuid4()' function to return a UNIQUE id (obj)\n # Then, secures this is an 'string id'. \n # Assign message.id = message['id'] = 'new unique string id' value \n # to message WITHOUT ‘String id’.\n message.id = str(uuid4())\n \n # left = {message1}, {message2}, {message3}, ...]\n # left.copy = {message1}, {message2}, {message3}, ...]\n # Init merged = left.copy = {message1}, {message2}, {message3}, ...] \n merged = left.copy()\n \n # Extract messages from list -> {message1}, {message2}, {message3}, ...] \n # right = {message1}, {message2}, {message3}, ...] so get\n # message = {message}\n for message in right:\n \n # Extract {message1},{message2}... at merged list [{message1},{message2}...]\n # existing = {existing message} and enumerate each as i=0,1,2,...,N-1 messages \n for i, existing in enumerate(merged):\n \n # existing.id = existing['id'] = 'existing string id' value\n # message.id = message['id'] = 'new unique string id' value\n # Compare if 'existing string id' value == 'new unique string id' value \n if existing.id == message.id:\n \n # Replace at merged = [{message1}i=0,{message2}i=1...] the message={message} \n # when 'existing string id' value == 'new unique string id' value. 
\n # Pick the actual i-th position to overwrite {message} related to 'new unique string id' \n # at merged list of messages -> [{message1}[i=0],{message2}[i=1],{message3}[i=2]...]\n merged[i] = message\n\n # Stop comparison id’s loop\n break\n\n # Out of comparison id's loop but inside message loop\n # As 'new unique id' is practically imposible to be exactly the same as 'existing id'\n # else clause is what will happen most of the time.\n else:\n \n # Merge /unir/ the 'new unique id' message with the 'existing id' messages\n # Append the message={message} related to 'new unique id' at the end of list ->\n # merged = [{message1},{message2},{message3}..., {append NEW message}]\n merged.append(message)\n \n # Return merged list of messages -> [{message1},{message2},{message3}..., {NEW message}]\n return merged\n\n# Simple Agent State\nclass AgentState(TypedDict):\n \n # Annotated list of messages [ {message1}, {message2}, ..., {append NEW message} ] that will be APPENDED \n # at the end of the list, when 'new unique id' message is NOT equal to 'existing id' message.\n # When 'new unique id' message is strictly equal (==) to 'new unique id' message, then\n # {message} is OVERWRITTEN instead, but this is very unlikely to happen, \n # because the 'new id' is practically unique. \n # key:value -> messages:list of messages -> messages:[ {message1}, {message2}, …, {append NEW message}] \n messages: Annotated[list[AnyMessage], reduce_messages]", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
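| { | |
| "id": "c1a2b3d4-5e6f-4a7b-8c9d-0e1f2a3b4c5d", | |
| "cell_type": "markdown", | |
| "source": "The reducer can be checked on its own before wiring it into the graph. The sketch below is not part of the original lesson: it calls `reduce_messages` directly to show that a message with a new `id` is appended, a message with an existing `id` replaces that entry, and a message without an `id` gets a fresh `uuid4` before being appended.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "d2b3c4e5-6f7a-4b8c-9d0e-1f2a3b4c5d6e", | |
| "cell_type": "code", | |
| "source": "# Sketch (illustration only, not part of the original lesson):\n# show that reduce_messages appends messages with new ids\n# and replaces the existing entry when the id matches.\nm1 = HumanMessage(content='Whats the weather in SF?', id='msg-1')\nm2 = AIMessage(content='first draft answer', id='msg-2')\n\nstate = reduce_messages([], [m1, m2])\nprint([m.content for m in state])  # both messages appended\n\n# Same id as m2 -> the existing message is replaced in place\nm2_edit = AIMessage(content='edited answer', id='msg-2')\nstate = reduce_messages(state, [m2_edit])\nprint([m.content for m in state])  # 'first draft answer' replaced by 'edited answer'\n\n# No id -> reduce_messages assigns a fresh uuid4 and appends\nm3 = AIMessage(content='a brand new message')\nstate = reduce_messages(state, [m3])\nprint(len(state), state[-1].id)  # 3 messages, the last one has a generated id", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |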
| { | |
| "id": "23c34ff1-daaf-445a-b84d-42e8b2ecc7ce", | |
| "cell_type": "code", | |
| "source": "# Create the 'Tavily' tool to be used as search engine, by initializing\n# 'TavilySearchResults' with 'max_results=2', meaning we'll only\n# only get back (2) max responses from the search API.\ntool = TavilySearchResults(max_results=2)\n\n# Display Tavily 'tool' type -> \n# <class 'langchain_community.tools.tavily_search.tool.TavilySearchResults'>\nprint('Tavily tool type:',type(tool))\n\n# Display Tavily 'tool' name -> 'tavily_search_results_json' \nprint('Tavily tool name:',tool.name)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "e444809b-0610-4b7f-9329-ea15c216ff7e", | |
| "cell_type": "markdown", | |
| "source": "```\nTavily tool type: <class 'langchain_community.tools.tavily_search.tool.TavilySearchResults'>\nTavily tool name: tavily_search_results_json\n```", | |
| "metadata": {} | |
| }, | |
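| { | |
| "id": "e3c4d5f6-7a8b-4c9d-0e1f-2a3b4c5d6e7f", | |
| "cell_type": "markdown", | |
| "source": "The tool can also be invoked directly, outside the graph. A small sketch, not part of the original lesson, assuming a valid `TAVILY_API_KEY` is set in the environment:", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "f4d5e6a7-8b9c-4d0e-1f2a-3b4c5d6e7f80", | |
| "cell_type": "code", | |
| "source": "# Sketch (illustration only, assumes a valid TAVILY_API_KEY in the environment):\n# call the Tavily tool directly, with the same kind of arguments\n# the agent will later pass via t['args'].\nresults = tool.invoke({'query': 'weather in San Francisco'})\n\n# With 'max_results=2' we get back at most two result dicts,\n# each carrying 'url' and 'content' keys (as seen in the ToolMessages below).\nfor r in results:\n    print(r['url'])", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |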
| { | |
| "id": "dbbe0310-8166-4e07-af7d-69badd49ba32", | |
| "cell_type": "markdown", | |
| "source": "## Manual human approval", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "66e28926-61d5-44fe-92ca-2b711b22e7a2", | |
| "cell_type": "code", | |
| "source": "# Create AI 'Agent' class (obj)\nclass Agent:\n \n # This Agent will be parametrized by (3) things: A 'model' to use,\n # a 'tool or function' to call and a 'system' msg.\n def __init__(self, model, tools, checkpointer, system=\"\"):\n \n # As before, let's save the 'system' msg as a class \n # attribute/variable -> self.system, so it can be used/modified\n # by ALL functions/instances that compound the class\n self.system = system\n \n ### Start creating the graph (obj) ###\n \n # 1st initialize the 'StateGraph' with the 'AgentState' class as input\n # without any nodes or edges attached to it\n graph = StateGraph(AgentState)\n \n # Add 'call_openai()' function called 'llm' node. \n # Use 'self.function_name()'\n graph.add_node(\"llm\", self.call_openai)\n \n # Add 'take_action()' function called 'action' node\n # Use 'self.function_name()'\n graph.add_node(\"action\", self.take_action)\n \n # Add 'exists_action()' function as conditional edge\n # Use 'self.function_name()'\n # Edge Input -> 'llm' node \n # Question -> is there a recommended action?\n # {Dictionary}: How to MAP the response of the function\n # to the next node to go to.\n # if 'exists_action()' returns True -> Executes 'action' node, \n # if 'exists_action()' returns False -> Goes to 'END' node and it finishes\n graph.add_conditional_edges(\n \"llm\",\n self.exists_action,\n {True: \"action\", False: END}\n \n ) # Finish 'add_conditional_edges’\n \n # Add a regular edge 1st arg: Start of edge (->) 2nd arg: End of edge \n # From 'action' node -> To 'llm' node\n graph.add_edge(\"action\", \"llm\")\n \n # Set the entry point of the graph as 'llm' node\n graph.set_entry_point(\"llm\")\n \n # ‘obj.compile()' the graph and updates/overwrite at the same graph (obj) \n # Use 'self.obj' to save this as an attribute/variable over ALL the class. \n # Do this after we've done all the setups, \n # and we'll turn it into a LangChain runnable/executable. \n # A LangChain runnable exposes a standard interface for calling \n # and invoking this graph (obj). \n # ADD checkpointer = memory for saving data using Sync or Async SQLite database \n # (short term memory in notebook). \n # ADD and INTERRUPT before calling the “action“ node which executes (1) or more [tools()] / \n # [functions()] / [actions] in a list. \n # We'll ADD some manual approval before, to secure we're running [tools()] correctly. \n self.graph = graph.compile(checkpointer = checkpointer, interrupt_before = [\"action\"])\n \n # We'll also save the tools (obj) that we passed\n # We'll pass in, the list of the tools passed into the 'Agent' class\n # Create a dictionary, Getting the 'name' of the tool\n # with 'tool.name' as key, and ’t’ tool as value. 
\n # Save that {dictionary} as an attribute/variable\n # used over ALL the class, updating/overwritting 'self.tools'\n self.tools = {t.name: t for t in tools}\n \n # We'll also save the model (obj) that we passed\n # This is letting the model (LLM) to bind /enlazar/ tools\n # to know that it has these tools available to call\n self.model = model.bind_tools(tools)\n \n ### Finish creating graph (obj) ###\n \n ### Finish initializing Agent class general attributes/variables ###\n \n ### Implement 'functions()' as methods on the 'Agent' class ###\n \n # Create 'function()' representing the 'conditional edge' node.\n # After this function is executed, then 'graph.add_conditional_edges()' \n # will return 'True' key, when the previous model 'llm' node, recommended an 'action' \n # to take on, so it executes the 'action' node. \n # Otherwise it returns 'False' key, so this will execute the 'END' node and finish \n # it also take in the 'AgentState' class, as input.\n def exists_action(self, state: AgentState):\n \n # At state:{AgentState dict} -> state:{'messages':list of messages}\n # so select key 'messages' obtaining\n # state['messages'] = list of messages = [ {message1}, {message2}, …]\n # Then select the LAST message from state with [-1] -> {message_N} \n # which is the most recent calling response, from the language model. \n result = state['messages'][-1]\n \n # We're going to return a 'True' boolean, when the lenght of 'result.tool_calls' > 0\n # This means, if there's any 'tool_calls()' attribute or a [list of tool_calls], \n # (so its len > 0), we're going to return 'True'.\n # If there's NOT then, we're going to return 'False'.\n return len(result.tool_calls) > 0\n \n # Create 'function()' representing the 'llm' node\n # Use 'AgentState' class as input argument, ALL the nodes and the edges. 
\n # 'AgentState' is a historic dict of messages -> messages:{list of messages}\n # key:value -> state:AgentState -> state:{AgentState dict}\n def call_openai(self, state: AgentState):\n \n # At state:{AgentState dict} -> state:{'messages':list of messages}\n # so select key 'messages' obtaining\n # state['messages'] = list of messages = [ {message1}, {message2}, …]\n messages = state['messages']\n \n # If 'self.system'='system' msg/prompt is NOT empty\n if self.system:\n \n # Add list of messages = [ {message1}, {message2}, …] to\n # 'system' msg and OVERWRITE 'messages' list of messages\n messages = [SystemMessage(content=self.system)] + messages\n \n # Then call the 'self.model' (LLM) using \n # 'self.model.invoke(list of messages)' method,\n # and return a new 'assistant' msg called 'message' -> {message}\n message = self.model.invoke(messages)\n \n # UPDATE the returned dictionary \n # (Use the same 'messages' key name, as we did at 'AgentState')\n # and include only new 'assistant' msg at 'messages' key -> {message} in a list \n # {'messages': list with assistant msg} -> {'messages': [ {message} ] }\n # Because we have ANNOTATED the 'messages' key attribute, on the\n # 'AgentState' with the 'operators.add' this ISN'T overwriting this.\n # It's ADDED to historic of 'messages' at 'AgentState'\n return {'messages': [message]}\n \n # Create 'function()' representing the 'action' node, \n # done to execute the recommended 'action' by Agent/Assistant\n def take_action(self, state: AgentState):\n \n # At state:{AgentState dict} -> state:{'messages':list of messages}\n # so select key 'messages' obtaining\n # state['messages'] = list of messages = [ {message1}, {message2}, …]\n # Then select the LAST message with [-1] -> {message_N}\n # If we have gotten into this state, the Language model must have\n # recommended an 'action' to be done, so we need to call some tools,\n # to execute that recommended 'action. 
That means there will be the\n # 'tool_calls()' attribute to execute this LAST 'action' msg \n # present in the 'AgentState' historic list of messages.\n # Then UPDATE/OVERWRITE the 'tool_calls()' attribute\n tool_calls = state['messages'][-1].tool_calls\n \n # Initialize ‘results‘ [] list as an empty list [], \n # to be filled with ’assistant’ responses\n results = []\n \n # 'tool_calls()' can also be a [list of tool_calls], so a lot of the\n # modern models support parallel tool or parallel function calling\n # so we can LOOP over these [list of tool_calls] and assign each\n # tool_call() to be executed as 't'.\n for t in tool_calls:\n print(f\"Calling: {t}\") # Display iterated tool\n \n # Find if recommended tool 't' name -> t['name'] is NOT included \n # in the dictionary of tools -> 'self.tools' = {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()} \n # that we created, so, we check for bad tool name from LLM.\n if not t['name'] in self.tools:\n \n # When tool t['name'] is NOT included in dict of tools -> \n # 'self.tools' = {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()} \n # then print \"bad tool name\" \n print(\"\\n ....bad tool name....\")\n \n # And instruct/prompt a ’user’ msg to LLM ‘assistant‘, \n # to retry when it's \"bad tool name\"\n result = \"bad tool name, retry\"\n \n # Otherwise, when recommended tool 't' name -> t['name'] is included in the dictionary of tools -> \n # 'self.tools'= {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()} that we created.\n else:\n \n # Get the 'name' -> t['name'] of each iterated tool t = 'tool()', \n # and select each tool at 'self.tools' = {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()}\n # dictionary with that 'name'.\n # Then call '.invoke()' method passing in the input arguments \n # of each t = 'tool()' function call -> t['args'] -> 'args': {'query': 'user msg/prompt'}\n # Execute t='tool()' or 'action', so get a \n # resultant assistant 'string' -> Observation\n result = self.tools[t['name']].invoke(t['args'])\n \n # Append the previous assistant 'string' observation, as a 'ToolMessage' containing\n # \"tool_id, tool_name, observation\" into a 'results' list -> \n # [{\"tool_id, tool_name, observation\"1}, {\"tool_id, tool_name, observation\"2}, ...]\n # each iteration\n results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))\n \n # Let's get back to graph beginning, at 'llm' node.\n print(\"Back to the model!\")\n \n # UPDATE the returned dictionary \n # (Use the same 'messages' key name, as we did at 'AgentState')\n # Add 'results' list of observations at 'messages' key -> \n # [{\"tool_id, tool_name, observation\"1}, {\"tool_id, tool_name, observation\"2}, ...,\n # {\"tool_id, tool_name, observation\"N}]\n # {'messages': results} -> {'messages': list of observations} -> \n # {'messages': [{\"tool_id, tool_name, observation\"1}, {\"tool_id, tool_name, observation\"2}, ...,\n # {\"tool_id, tool_name, observation\"N}]\n # Because we have ANNOTATED the 'messages' key attribute, on the\n # 'AgentState' with the 'operators.add' this ISN'T overwriting this.\n # It's ADDED to historic of 'messages' at 'AgentState'\n return {'messages': results}", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
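| { | |
| "id": "a5e6f7b8-9c0d-4e1f-2a3b-4c5d6e7f8091", | |
| "cell_type": "markdown", | |
| "source": "Before running the real agent, the sketch below (not part of the original lesson, assuming the same langgraph version as the cells above) isolates the `interrupt_before` mechanics on a tiny graph with no LLM or API keys involved: compile with `interrupt_before`, stream until the interrupt, check `.next`, then resume with `stream(None, thread)`, which is exactly the pattern the following cells use with `abot`.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "b6f7a8c9-0d1e-4f2a-3b4c-5d6e7f809102", | |
| "cell_type": "code", | |
| "source": "# Minimal sketch (illustration only, not part of the lesson): the same\n# interrupt_before mechanics on a tiny two-node graph, with no LLM involved.\n# It reuses StateGraph, END, TypedDict and SqliteSaver imported above.\nclass TinyState(TypedDict):\n    value: int\n\ndef first(state: TinyState):\n    return {'value': state['value'] + 1}\n\ndef second(state: TinyState):\n    return {'value': state['value'] * 10}\n\ntiny = StateGraph(TinyState)\ntiny.add_node('first', first)\ntiny.add_node('second', second)\ntiny.add_edge('first', 'second')\ntiny.add_edge('second', END)\ntiny.set_entry_point('first')\n\n# Interrupt before the 'second' node, just like interrupt_before=['action'] above\ntiny_graph = tiny.compile(\n    checkpointer=SqliteSaver.from_conn_string(':memory:'),\n    interrupt_before=['second'])\n\ntiny_thread = {'configurable': {'thread_id': 'tiny-1'}}\n\nfor event in tiny_graph.stream({'value': 1}, tiny_thread):\n    print(event)  # runs 'first', then pauses\n\nprint(tiny_graph.get_state(tiny_thread).next)  # ('second',) -> waiting for approval\n\nfor event in tiny_graph.stream(None, tiny_thread):\n    print(event)  # resumes from the checkpoint and runs 'second'", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |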
| { | |
| "id": "49abef51-0db2-4e76-9873-cb314215ddda", | |
| "cell_type": "code", | |
| "source": "# 'system' msg / prompt\nprompt = \"\"\"You are a smart research assistant. Use the search engine to look up information. \\\nYou are allowed to make multiple calls (either together or in sequence). \\\nOnly look up information when you are sure of what you want. \\\nIf you need to look up some information before asking a follow up question, you are allowed to do that!\n\"\"\"\n\n# Use \"gpt-3.5-turbo\" as model from OpenAI\nmodel = ChatOpenAI(model=\"gpt-3.5-turbo\")\n\n# Call AI 'Agent' class with inputs: \n# model = ChatOpenAI(model= \"gpt-3.5-turbo\") \n# tools=[tools(objs)]=[Search Engine(obj)]=[TavilySearchResults(obj)]=[tool(obj)] \n# system = 'system' msg/prompt\n# checkpointer=memory SQLite database for SAVING data -> checkpoint(obj)\nabot = Agent(model, [tool], system=prompt, checkpointer=memory)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "b51531ea-2ed3-43e1-8007-b964a73a8198", | |
| "cell_type": "code", | |
| "source": "# 'user' msg = \"What is the weather in sf?\" \n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages]. \n# We have to do this, because the simple 'StateAgent' class expects \n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it \n# conform with that.\nmessages = [HumanMessage(content=\"Whats the weather in SF?\")]\n\n# thread config /hilo configurable/ used to keep track \n# of different threads /hilos/ inside the persistent\n# checkpointer/checkpoint. Used for having MULTIPLE CONVERSATIONS \n# with MANY 'users’ going on at the same time.\n# dict {} <- with inner dict {} as value, of \"configurable\" key\nthread = {\"configurable\": {\"thread_id\": \"1\"}}\n\n# Call 'events=agent(obj).graph.stream({\"messages\": messages}, thread)' \n# including list of messages dict -> {\"messages\": [messages]} and also \n# the 'thread config' dict -> {\"configurable\": {\"thread_id\": \"1\"}}\n# We're going to get back a 'stream of events', that represents UPDATES\n# to 'AgentState', over time.\nevents = abot.graph.stream({\"messages\": messages}, thread)\n\n# Extract each event per iteration.\nfor event in events:\n \n # Extract event dict 'values' -> {v} per iter, \n # with 'event.values()' method.\n for v in event.values():\n \n # Display {v} dict, each inner loop iteration.\n print(v)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "e8207aa4-d6be-409d-b9ab-d494ad6eebe9", | |
| "cell_type": "markdown", | |
| "source": "```\n{'messages': [HumanMessage(content='Whats the weather in SF?', id='1c9d26ab-56a4-4a43-b6bb-ddbe23130475'), AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0', 'function': {'arguments': '{\"query\":\"weather in San Francisco\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 152, 'total_tokens': 174, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-4e73aa0c-f8ac-4777-8f73-6ac06e92e08c-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0'}])]}\n\n{'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0', 'function': {'arguments': '{\"query\":\"weather in San Francisco\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 152, 'total_tokens': 174, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-8b126745-174a-4486-acfe-13b51e59a76f-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0'}])]}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "caa54365-29b8-458d-8506-48ebed1230e4", | |
| "cell_type": "code", | |
| "source": "# Get the CURRENT STATE of the graph for this 'thread config’ -> \n# thread = { \"configurable\": {\"thread_id\": \"1\"} }\nabot.graph.get_state(thread)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "63fcbdc4-82c0-4ec7-9048-0af3ea518a76", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in SF?', id='1c9d26ab-56a4-4a43-b6bb-ddbe23130475'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in San Francisco\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-4e73aa0c-f8ac-4777-8f73-6ac06e92e08c-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0'}])]}, next=('action',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f069a04-921f-65cc-8001-d4f37a559a34'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in San Francisco\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-4e73aa0c-f8ac-4777-8f73-6ac06e92e08c-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0'}])]}}}, created_at='2025-07-25T21:42:39.113141+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f069a04-91a4-60e2-8000-28f552e14649'}})\n```", | |
| "metadata": {} | |
| }, | |
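| { | |
| "id": "c7a8b9d0-1e2f-4a3b-4c5d-6e7f80910213", | |
| "cell_type": "markdown", | |
| "source": "The snapshot above is what a human reviewer would look at before approving the interrupted step. A short sketch, not part of the original lesson, of pulling the pending tool call out of it:", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "d8b9c0e1-2f3a-4b4c-5d6e-7f8091021324", | |
| "cell_type": "code", | |
| "source": "# Sketch (illustration only, not part of the original lesson):\n# pull the pending tool call out of the current state snapshot,\n# which is what a reviewer inspects before approving the 'action' node.\nsnapshot = abot.graph.get_state(thread)\n\nprint(snapshot.next)  # ('action',) -> paused before the tool node\n\nlast_msg = snapshot.values['messages'][-1]  # the AIMessage that requested the tool\nfor call in last_msg.tool_calls:\n    print(call['name'], call['args'])  # what the agent wants to run, and with which arguments", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |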
| { | |
| "id": "4bbd0870-2696-467d-9bf8-c614710ef15a", | |
| "cell_type": "code", | |
| "source": "# Get the NEXT STATE of the graph for this 'thread config’ -> \n# thread = { \"configurable\": {\"thread_id\": \"1\"} } \n# which is we’re about executing action / tool / function\nabot.graph.get_state(thread).next", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "1b968c70-51f5-4a81-b698-83c05276459d", | |
| "cell_type": "markdown", | |
| "source": "```\n('action',)\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "9045fe96-1dc5-4b08-a8c2-5b2a8cc430cb", | |
| "cell_type": "markdown", | |
| "source": "### continue after interrupt", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "a0327223-fd2a-41dc-9171-4f1da2538ca0", | |
| "cell_type": "code", | |
| "source": "# Call 'events=agent(obj).graph.stream(None, thread)' \n# Doesn't include list of messages dict -> {\"messages\": [messages]}\n# so 'HumanMessage' from ‘user’ is NOT passed in as historical msg here. \n# Include also the 'thread config' dict -> {\"configurable\": {\"thread_id\": \"1\"}}\n# We're going to get back a 'stream of events', that represents UPDATES\n# to 'AgentState', over time.\nevents = abot.graph.stream(None, thread)\n\n# Extract each event per iteration.\nfor event in events :\n \n # Extract event dict 'values' -> {v} per iter, \n # with 'event.values()' method.\n for v in event.values():\n \n # Display {v} dict, each inner loop iteration.\n print(v)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "0af2d894-840d-4e15-b63a-f68bc41e3ef9", | |
| "cell_type": "markdown", | |
| "source": "```\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0'}\nBack to the model!\n\n{'messages': [ToolMessage(content=\"[{'url': 'https://en.climate-data.org/north-america/united-states-of-america/california/san-francisco-385/t/july-7/', 'content': '| 25. July | Broken clouds | 17 °C 62.6 °F | 13 °C 55.4 °F | 0 % | 15 km/h 9 mph | 0mm 0 in | 87% |\\\\n| 26. July | Overcast clouds | 16 °C 60.8 °F | 13 °C 55.4 °F | 0 % | 15 km/h 9 mph | 0mm 0 in | 88% |\\\\n| 27. July | Broken clouds | 17 °C 62.6 °F | 13 °C 55.4 °F | 0 % | 14 km/h 9 mph | 0mm 0 in | 86% |\\\\n| 28. July | Few clouds | 16 °C 60.8 °F | 14 °C 57.2 °F | 0 % | 19 km/h 12 mph | 0mm 0 in | 81% | [...] | 24. July | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\\\\n| 25. July | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\\\\n| 26. July | 16 °C | 61 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\\\\n| 27. July | 16 °C | 61 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.1 mm | 0.0 inch. |\\\\n| 28. July | 16 °C | 61 °F | 22 °C | 71 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.1 mm | 0.0 inch. | [...] | Max. Temperature °C (°F) | 14 °C (57.3) °F | 14.9 °C (58.7) °F | 16.2 °C (61.2) °F | 17.4 °C (63.3) °F | 19.2 °C (66.5) °F | 21.5 °C (70.8) °F | 21.8 °C (71.2) °F | 22.2 °C (71.9) °F | 23.1 °C (73.6) °F | 21.3 °C (70.3) °F | 17.1 °C (62.8) °F | 13.9 °C (57.1) °F |\\\\n| Precipitation / Rainfall mm (in) | 113 (4) | 118 (4) | 83 (3) | 40 (1) | 21 (0) | 6 (0) | 2 (0) | 2 (0) | 3 (0) | 25 (0) | 57 (2) | 111 (4) |\\\\n| Humidity(%) | 79% | 80% | 78% | 72% | 70% | 69% | 74% | 74% | 71% | 70% | 76% | 78% |'}, {'url': 'https://www.weather25.com/north-america/usa/california/san-francisco?page=month&month=July', 'content': '| 27 Image 54: Mist 15°/13° | 28 Image 55: Fog 18°/12° | 29 Image 56: Overcast 18°/14° | 30 Image 57: Patchy rain possible 17°/14° | 31 Image 58: Partly cloudy 19°/15° | | | [...] Friday Jul 25 Image 8: Mist 0 mm 15°/14°Saturday Jul 26 Image 9: Overcast 0 mm 15°/14°Sunday Jul 27 Image 10: Mist 0 mm 15°/13°Monday Jul 28 Image 11: Fog 0 mm 18°/12°Tuesday Jul 29 Image 12: Overcast 0 mm 18°/14°Wednesday Jul 30 Image 13: Patchy rain possible 0 mm 17°/14°Thursday Jul 31 Image 14: Partly cloudy 0 mm 19°/15°Friday Aug 1 Image 15: Sunny 0 mm 20°/15°Saturday Aug 2 Image 16: Sunny 0 mm 18°/14°Sunday Aug 3 Image 17: Sunny 0 mm 18°/14°Monday Aug 4 Image 18: Sunny 0 mm 19°/14°Tuesday [...] 
| 13 Image 40: Partly cloudy 19°/12° | 14 Image 41: Partly cloudy 20°/13° | 15 Image 42: Partly cloudy 20°/13° | 16 Image 43: Sunny 20°/13° | 17 Image 44: Partly cloudy 20°/12° | 18 Image 45: Partly cloudy 20°/13° | 19 Image 46: Partly cloudy 19°/13° |\\\\n| 20 Image 47: Partly cloudy 19°/13° | 21 Image 48: Partly cloudy 19°/12° | 22 Image 49: Sunny 20°/13° | 23 Image 50: Sunny 20°/14° | 24 Image 51: Sunny 21°/13° | 25 Image 52: Mist 15°/14° | 26 Image 53: Overcast 15°/14° |'}]\", name='tavily_search_results_json', id='0017b74c-4a7b-4d65-b27c-f9cdad582c3a', tool_call_id='call_i7rGhnzgZf5hW3bsDcqdTrH0')]}\n\n{'messages': [HumanMessage(content='Whats the weather in SF?', id='f16764ce-a920-4494-a6f2-3b36a6db728f'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in San Francisco\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-0ffc33a2-34fa-414e-81b9-4c0e45663843-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0'}]), ToolMessage(content=\"[{'url': 'https://en.climate-data.org/north-america/united-states-of-america/california/san-francisco-385/t/july-7/', 'content': '| 25. July | Broken clouds | 17 °C 62.6 °F | 13 °C 55.4 °F | 0 % | 15 km/h 9 mph | 0mm 0 in | 87% |\\\\n| 26. July | Overcast clouds | 16 °C 60.8 °F | 13 °C 55.4 °F | 0 % | 15 km/h 9 mph | 0mm 0 in | 88% |\\\\n| 27. July | Broken clouds | 17 °C 62.6 °F | 13 °C 55.4 °F | 0 % | 14 km/h 9 mph | 0mm 0 in | 86% |\\\\n| 28. July | Few clouds | 16 °C 60.8 °F | 14 °C 57.2 °F | 0 % | 19 km/h 12 mph | 0mm 0 in | 81% | [...] | 24. July | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\\\\n| 25. July | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\\\\n| 26. July | 16 °C | 61 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\\\\n| 27. July | 16 °C | 61 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.1 mm | 0.0 inch. |\\\\n| 28. July | 16 °C | 61 °F | 22 °C | 71 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.1 mm | 0.0 inch. | [...] | Max. Temperature °C (°F) | 14 °C (57.3) °F | 14.9 °C (58.7) °F | 16.2 °C (61.2) °F | 17.4 °C (63.3) °F | 19.2 °C (66.5) °F | 21.5 °C (70.8) °F | 21.8 °C (71.2) °F | 22.2 °C (71.9) °F | 23.1 °C (73.6) °F | 21.3 °C (70.3) °F | 17.1 °C (62.8) °F | 13.9 °C (57.1) °F |\\\\n| Precipitation / Rainfall mm (in) | 113 (4) | 118 (4) | 83 (3) | 40 (1) | 21 (0) | 6 (0) | 2 (0) | 2 (0) | 3 (0) | 25 (0) | 57 (2) | 111 (4) |\\\\n| Humidity(%) | 79% | 80% | 78% | 72% | 70% | 69% | 74% | 74% | 71% | 70% | 76% | 78% |'}, {'url': 'https://www.weather25.com/north-america/usa/california/san-francisco?page=month&month=July', 'content': '| 27 Image 54: Mist 15°/13° | 28 Image 55: Fog 18°/12° | 29 Image 56: Overcast 18°/14° | 30 Image 57: Patchy rain possible 17°/14° | 31 Image 58: Partly cloudy 19°/15° | | | [...] 
Friday Jul 25 Image 8: Mist 0 mm 15°/14°Saturday Jul 26 Image 9: Overcast 0 mm 15°/14°Sunday Jul 27 Image 10: Mist 0 mm 15°/13°Monday Jul 28 Image 11: Fog 0 mm 18°/12°Tuesday Jul 29 Image 12: Overcast 0 mm 18°/14°Wednesday Jul 30 Image 13: Patchy rain possible 0 mm 17°/14°Thursday Jul 31 Image 14: Partly cloudy 0 mm 19°/15°Friday Aug 1 Image 15: Sunny 0 mm 20°/15°Saturday Aug 2 Image 16: Sunny 0 mm 18°/14°Sunday Aug 3 Image 17: Sunny 0 mm 18°/14°Monday Aug 4 Image 18: Sunny 0 mm 19°/14°Tuesday [...] | 13 Image 40: Partly cloudy 19°/12° | 14 Image 41: Partly cloudy 20°/13° | 15 Image 42: Partly cloudy 20°/13° | 16 Image 43: Sunny 20°/13° | 17 Image 44: Partly cloudy 20°/12° | 18 Image 45: Partly cloudy 20°/13° | 19 Image 46: Partly cloudy 19°/13° |\\\\n| 20 Image 47: Partly cloudy 19°/13° | 21 Image 48: Partly cloudy 19°/12° | 22 Image 49: Sunny 20°/13° | 23 Image 50: Sunny 20°/14° | 24 Image 51: Sunny 21°/13° | 25 Image 52: Mist 15°/14° | 26 Image 53: Overcast 15°/14° |'}]\", name='tavily_search_results_json', id='0017b74c-4a7b-4d65-b27c-f9cdad582c3a', tool_call_id='call_i7rGhnzgZf5hW3bsDcqdTrH0'), AIMessage(content='The weather in San Francisco for the upcoming days is as follows:\\n- July 25: Broken clouds, 17°C (62.6°F) to 13°C (55.4°F), 0% precipitation, 87% humidity\\n- July 26: Overcast clouds, 16°C (60.8°F) to 13°C (55.4°F), 0% precipitation, 88% humidity\\n- July 27: Broken clouds, 17°C (62.6°F) to 13°C (55.4°F), 0% precipitation, 86% humidity\\n- July 28: Few clouds, 16°C (60.8°F) to 14°C (57.2°F), 0% precipitation, 81% humidity\\n\\nIf you need more detailed information, feel free to ask!', response_metadata={'token_usage': {'completion_tokens': 175, 'prompt_tokens': 1608, 'total_tokens': 1783, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-42cb3f4b-6605-41c5-afd3-511af7331610-0')]}\n\n{'messages': [AIMessage(content='The weather in San Francisco for the upcoming days is as follows:\\n- July 25: Broken clouds, 17°C (62.6°F) to 13°C (55.4°F), 0% precipitation, 87% humidity\\n- July 26: Overcast clouds, 16°C (60.8°F) to 13°C (55.4°F), 0% precipitation, 88% humidity\\n- July 27: Broken clouds, 17°C (62.6°F) to 13°C (55.4°F), 0% precipitation, 86% humidity\\n- July 28: Few clouds, 16°C (60.8°F) to 14°C (57.2°F), 0% precipitation, 81% humidity\\n\\nIf you need more detailed information, feel free to ask!', response_metadata={'token_usage': {'completion_tokens': 175, 'prompt_tokens': 1608, 'total_tokens': 1783, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-42cb3f4b-6605-41c5-afd3-511af7331610-0')]}\n```", | |
| "metadata": {} | |
| }, | |
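| { | |
| "id": "e9c0d1f2-3a4b-4c5d-6e7f-809102132435", | |
| "cell_type": "markdown", | |
| "source": "Since this langgraph version also stores a checkpoint at every state transition, `get_state_history()` (mentioned in the note at the top of the notebook) can be used to walk everything that was saved for a thread. A short sketch, not part of the original lesson:", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "f0d1e2a3-4b5c-4d6e-7f80-910213243546", | |
| "cell_type": "code", | |
| "source": "# Sketch (illustration only, not part of the original lesson):\n# walk the checkpoints saved for this thread with get_state_history().\nfor snapshot in abot.graph.get_state_history(thread):\n    print(snapshot.metadata.get('step'),             # step counter, as in the StateSnapshot output above\n          snapshot.next,                             # node that would run next from this checkpoint\n          len(snapshot.values.get('messages', [])))  # messages stored at that point", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |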
| { | |
| "id": "80bb6882-e6a1-4b5c-9ac6-4e101598b9b8", | |
| "cell_type": "code", | |
| "source": "# Get the CURRENT STATE of the graph for this 'thread config’ -> \n# thread = { \"configurable\": {\"thread_id\": \"1\"} }\nabot.graph.get_state(thread)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "ca5fbc64-e4a5-4d64-8930-9a0c7b72d50b", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in SF?', id='5aa77e7e-249b-460e-a9f8-dcd94516a905'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in San Francisco\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-b8a8e29c-4744-4f67-8513-6f565af2a22c-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0'}]), ToolMessage(content='[{\\'url\\': \\'https://en.climate-data.org/north-america/united-states-of-america/california/san-francisco-385/t/july-7/\\', \\'content\\': \"| 25. July | Broken clouds | 17 °C 62.6 °F | 13 °C 55.4 °F | 0 % | 15 km/h 9 mph | 0mm 0 in | 87% |\\\\n| 26. July | Overcast clouds | 16 °C 60.8 °F | 13 °C 55.4 °F | 0 % | 15 km/h 9 mph | 0mm 0 in | 88% |\\\\n| 27. July | Broken clouds | 17 °C 62.6 °F | 13 °C 55.4 °F | 0 % | 14 km/h 9 mph | 0mm 0 in | 86% |\\\\n| 28. July | Few clouds | 16 °C 60.8 °F | 14 °C 57.2 °F | 0 % | 19 km/h 12 mph | 0mm 0 in | 81% | [...] | 24. July | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\\\\n| 25. July | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\\\\n| 26. July | 16 °C | 61 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\\\\n| 27. July | 16 °C | 61 °F | 22 °C | 72 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.1 mm | 0.0 inch. |\\\\n| 28. July | 16 °C | 61 °F | 22 °C | 71 °F | 13 °C | 55 °F | 14 °C | 57 °F | 0.1 mm | 0.0 inch. | [...] San Francisco experiences mild temperatures in July, reflecting its unique climate influenced by the Pacific Ocean. The average temperature during this month is around 61.3°F (16.3°C). Daytime highs can reach up to approximately 71.2°F (21.8°C), providing a comfortable warmth perfect for exploring the city\\'s outdoor attractions and vibrant neighborhoods. Nighttime lows, on the other hand, typically drop to about 54.9°F (12.7°C), which might require a light jacket if you\\'re out after sunset.\"}, {\\'url\\': \\'https://www.weather25.com/north-america/usa/california/san-francisco?page=month&month=July\\', \\'content\\': \\'Friday Jul 25 Image 8: Mist 0 mm 15°/14°Saturday Jul 26 Image 9: Overcast 0 mm 15°/14°Sunday Jul 27 Image 10: Mist 0 mm 15°/13°Monday Jul 28 Image 11: Fog 0 mm 18°/12°Tuesday Jul 29 Image 12: Overcast 0 mm 18°/14°Wednesday Jul 30 Image 13: Patchy rain possible 0 mm 17°/14°Thursday Jul 31 Image 14: Partly cloudy 0 mm 19°/15°Friday Aug 1 Image 15: Sunny 0 mm 20°/15°Saturday Aug 2 Image 16: Sunny 0 mm 18°/14°Sunday Aug 3 Image 17: Sunny 0 mm 18°/14°Monday Aug 4 Image 18: Sunny 0 mm 19°/14°Tuesday [...] | 27 Image 54: Mist 15°/13° | 28 Image 55: Fog 18°/12° | 29 Image 56: Overcast 18°/14° | 30 Image 57: Patchy rain possible 17°/14° | 31 Image 58: Partly cloudy 19°/15° | | | [...] 
| 13 Image 40: Partly cloudy 19°/12° | 14 Image 41: Partly cloudy 20°/13° | 15 Image 42: Partly cloudy 20°/13° | 16 Image 43: Sunny 20°/13° | 17 Image 44: Partly cloudy 20°/12° | 18 Image 45: Partly cloudy 20°/13° | 19 Image 46: Partly cloudy 19°/13° |\\\\n| 20 Image 47: Partly cloudy 19°/13° | 21 Image 48: Partly cloudy 19°/12° | 22 Image 49: Sunny 20°/13° | 23 Image 50: Sunny 20°/14° | 24 Image 51: Sunny 21°/13° | 25 Image 52: Mist 15°/14° | 26 Image 53: Overcast 15°/14° |\\'}]', name='tavily_search_results_json', id='4f0cdcd0-8348-42a1-83b9-afc732a66b1a', tool_call_id='call_i7rGhnzgZf5hW3bsDcqdTrH0'), AIMessage(content='The weather in San Francisco is currently around 16°C to 17°C with broken clouds. The forecast for the upcoming days includes overcast clouds and few clouds with temperatures ranging from 16°C to 17°C during the day and dropping to around 13°C at night.', response_metadata={'finish_reason': 'stop', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 57, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 1401, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 1458}}, id='run-064467f7-7b46-4a8b-b60f-3a98136ea221-0')]}, next=(), config={'configurable': {'thread_id': '1', 'thread_ts': '1f069b70-e1e4-65b1-8003-ad6451d38a04'}}, metadata={'source': 'loop', 'step': 3, 'writes': {'llm': {'messages': [AIMessage(content='The weather in San Francisco is currently around 16°C to 17°C with broken clouds. The forecast for the upcoming days includes overcast clouds and few clouds with temperatures ranging from 16°C to 17°C during the day and dropping to around 13°C at night.', response_metadata={'finish_reason': 'stop', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 57, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 1401, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 1458}}, id='run-064467f7-7b46-4a8b-b60f-3a98136ea221-0')]}}}, created_at='2025-07-26T00:25:38.528172+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f069b70-d7c0-6e2a-8002-73c6f03cda98'}})\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "e8f6d8da-67bf-4c84-8170-4feda7ea25b5", | |
| "cell_type": "code", | |
| "source": "# Get the NEXT STATE of the graph for this 'thread config’ -> \n# thread = { \"configurable\": {\"thread_id\": \"1\"} } which is we’re about taking action / tool / function\nabot.graph.get_state(thread).next", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "b71820e3-9041-4de6-bb72-d274698aeb51", | |
| "cell_type": "markdown", | |
| "source": "```\n('action',)\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "be272e66-8684-402e-9ebc-d0c7c87ac691", | |
| "cell_type": "code", | |
| "source": "# 'user' msg = \"What is the weather in LA?\" \n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages]. \n# We have to do this, because the simple 'StateAgent' class expects \n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it \n# conform with that.\nmessages = [HumanMessage(\"Whats the weather in LA?\")]\n\n# thread config /hilo configurable/ used to keep track \n# of different threads /hilos/ inside the persistent\n# checkpointer/checkpoint. Used for having MULTIPLE CONVERSATIONS \n# with MANY 'users’ going on at the same time.\n# dict {} <- with inner dict {} as value, of \"configurable\" key\n# New conversational point of view is assigned to new thread config,\n# so start a fresh -> {\"thread_id\": \"2\"}\nthread = {\"configurable\": {\"thread_id\": \"2\"}}\n\n# Call 'events=agent(obj).graph.stream({\"messages\": messages}, thread)' \n# including list of messages dict -> {\"messages\": [messages]} and also \n# the 'thread config' dict -> {\"configurable\": {\"thread_id\": \"2\"}}\n# We're going to get back a 'stream of events', that represents UPDATES\n# to 'AgentState', over time.\nevents = abot.graph.stream({\"messages\": messages}, thread)\n\n# Extract each event per iteration.\nfor event in events:\n \n # Extract event dict 'values' -> {v} per iter, \n # with 'event.values()' method.\n for v in event.values():\n \n # Display {v} dict, each inner loop iteration.\n print(v)\n\n# Get the NEXT STATE of the graph for this 'thread config’ -> \n# thread = { \"configurable\": {\"thread_id\": \"2\"} } which is we’re about \n# taking action / tool / function\nnext_state = abot.graph.get_state(thread).next \n\n# While NEXT STATE of graph is NOT empty -> True (Keep Executing while loop)\nwhile next_state:\n \n # Print out the CURRENT STATE of the graph for 'thread config’ -> \n # { \"thread_id\": \"2\"} }\n current_state = abot.graph.get_state(thread) \n print(\"\\n\", current_state,\"\\n\")\n \n # The Python code line _input = input(\"proceed?\") ask the question\n # \"proceed?\" and prompts the user for input, \n # storing it as 'string' in the '_input' variable.\n _input = input(\"proceed?\")\n \n # If the user's answer/input is different (!=) to Yes -> \"y\"\n if _input != \"y\":\n \n # Print out \"aborting\"\n print(\"aborting\")\n \n # And 'break' the while loop\n break\n \n # Otherwise continue the while loop execution\n # Call 'events=agent(obj).graph.stream(None, thread)' \n # Doesn't include list of messages dict -> {\"messages\": [messages]}\n # so 'HumanMessage' from ‘user’ is NOT passed in as historical msg here. \n # Include also the 'thread config' dict -> {\"configurable\": {\"thread_id\": \"2\"}}\n # We're going to get back a 'stream of events', that represents UPDATES\n # to 'AgentState', over time.\n events = abot.graph.stream(None, thread) \n \n # Extract each event per iteration.\n for event in events:\n \n # Extract event dict 'values' -> {v} per iter, \n # with 'event.values()' method.\n for v in event.values():\n \n # Display {v} dict, each inner loop iteration.\n print(v)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
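| { | |
| "id": "a1e2f3b4-5c6d-4e7f-8091-021324354657", | |
| "cell_type": "markdown", | |
| "source": "The manual loop above can be packaged into a small helper so each new question needs only one call. A sketch, not part of the original lesson, reusing `abot` and the same stream / approve / resume pattern (the thread id passed in is hypothetical):", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "b2f3a4c5-6d7e-4f80-9102-132435465768", | |
| "cell_type": "code", | |
| "source": "# Sketch (illustration only, not part of the original lesson):\n# factor the approve-and-resume loop above into a reusable helper.\ndef run_with_approval(question, thread_id):\n    thread_cfg = {'configurable': {'thread_id': thread_id}}\n    stream_input = {'messages': [HumanMessage(content=question)]}\n    while True:\n        for event in abot.graph.stream(stream_input, thread_cfg):\n            for v in event.values():\n                print(v)\n        if not abot.graph.get_state(thread_cfg).next:\n            return  # graph reached END, nothing left to approve\n        if input('proceed?') != 'y':\n            print('aborting')\n            return\n        stream_input = None  # resume from the interrupt on the next pass\n\n# Example usage (hypothetical thread id):\n# run_with_approval('Whats the weather in SF?', 'approval-demo-1')", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |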
| { | |
| "id": "e4511a53-4f30-4769-979a-0b33e90c757e", | |
| "cell_type": "markdown", | |
| "source": "```\n{'messages': [HumanMessage(content='Whats the weather in LA?', id='431264b4-db29-4548-9805-3245d3e30ee9'), AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 152, 'total_tokens': 174, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-b9678832-8d7a-446a-8485-77677c51f47a-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}\n\n{'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 152, 'total_tokens': 174, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-b9678832-8d7a-446a-8485-77677c51f47a-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}\n\n StateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='431264b4-db29-4548-9805-3245d3e30ee9'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-b9678832-8d7a-446a-8485-77677c51f47a-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}, next=('action',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f06bef9-d4aa-660b-8001-1c2a3de32d1d'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 
0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-b9678832-8d7a-446a-8485-77677c51f47a-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}}}, created_at='2025-07-28T20:15:32.875596+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f06bef9-d477-64ea-8000-b4a6963ae416'}})\n\nproceed?|y |\n\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}\nBack to the model!\n\n{'messages': [ToolMessage(content='[{\\'url\\': \\'https://weathershogun.com/weather/usa/ca/los-angeles/451/july/2025-07-28\\', \\'content\\': \"Monday, July 28, 2025. Los Angeles, CA - Weather Forecast \\\\n\\\\n===============\\\\n\\\\n☰\\\\n\\\\nLos Angeles, CA\\\\n\\\\nImage 1: WeatherShogun.com\\\\n\\\\nHomeContactBrowse StatesPrivacy PolicyTerms and Conditions\\\\n\\\\n°F)°C)\\\\n\\\\n❮\\\\n\\\\nTodayTomorrowHourly7 days30 daysJuly\\\\n\\\\n❯\\\\n\\\\nLos Angeles, California Weather: \\\\n\\\\nMonday, July 28, 2025\\\\n\\\\nDay 81°\\\\n\\\\nNight 64°\\\\n\\\\nPrecipitation 0 %\\\\n\\\\nWind 5 mph\\\\n\\\\nUV Index (0 - 11+)11\\\\n\\\\nTuesday\\\\n\\\\n Hourly\\\\n Today\\\\n Tomorrow\\\\n 7 days\\\\n 30 days\\\\n\\\\nWeather Forecast History\\\\n------------------------ [...] Last Year\\'s Weather on This Day (July 28, 2024)\\\\n\\\\n### Day\\\\n\\\\n82°\\\\n\\\\n### Night\\\\n\\\\n61°\\\\n\\\\nPlease note that while we strive for accuracy, the information provided may not always be correct. Use at your own risk.\\\\n\\\\n© Copyright by WeatherShogun.com\"}, {\\'url\\': \\'https://www.weather25.com/north-america/usa/california/los-angeles?page=month&month=July\\', \\'content\\': \\'| 27 Image 54: Cloudy 28°/16° | 28 Image 55: Sunny 30°/20° | 29 Image 56: Sunny 31°/21° | 30 Image 57: Sunny 32°/22° | 31 Image 58: Sunny 30°/22° | | | [...] 
Thursday Jul 24 Image 8: Partly cloudy 0 mm 29°/18°Friday Jul 25 Image 9: Cloudy 0 mm 24°/17°Saturday Jul 26 Image 10: Cloudy 0 mm 24°/16°Sunday Jul 27 Image 11: Cloudy 0 mm 28°/16°Monday Jul 28 Image 12: Sunny 0 mm 30°/20°Tuesday Jul 29 Image 13: Sunny 0 mm 31°/21°Wednesday Jul 30 Image 14: Sunny 0 mm 32°/22°Thursday Jul 31 Image 15: Sunny 0 mm 30°/22°Friday Aug 1 Image 16: Sunny 0 mm 31°/20°Saturday Aug 2 Image 17: Sunny 0 mm 32°/21°Sunday Aug 3 Image 18: Sunny 0 mm 33°/22°Monday Aug 4 Image\\'}]', name='tavily_search_results_json', id='23431937-bcc2-4f77-85b1-ef022b832e2b', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc')]}\n\n{'messages': [HumanMessage(content='Whats the weather in LA?', id='431264b4-db29-4548-9805-3245d3e30ee9'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-b9678832-8d7a-446a-8485-77677c51f47a-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}]), ToolMessage(content='[{\\'url\\': \\'https://weathershogun.com/weather/usa/ca/los-angeles/451/july/2025-07-28\\', \\'content\\': \"Monday, July 28, 2025. Los Angeles, CA - Weather Forecast \\\\n\\\\n===============\\\\n\\\\n☰\\\\n\\\\nLos Angeles, CA\\\\n\\\\nImage 1: WeatherShogun.com\\\\n\\\\nHomeContactBrowse StatesPrivacy PolicyTerms and Conditions\\\\n\\\\n°F)°C)\\\\n\\\\n❮\\\\n\\\\nTodayTomorrowHourly7 days30 daysJuly\\\\n\\\\n❯\\\\n\\\\nLos Angeles, California Weather: \\\\n\\\\nMonday, July 28, 2025\\\\n\\\\nDay 81°\\\\n\\\\nNight 64°\\\\n\\\\nPrecipitation 0 %\\\\n\\\\nWind 5 mph\\\\n\\\\nUV Index (0 - 11+)11\\\\n\\\\nTuesday\\\\n\\\\n Hourly\\\\n Today\\\\n Tomorrow\\\\n 7 days\\\\n 30 days\\\\n\\\\nWeather Forecast History\\\\n------------------------ [...] Last Year\\'s Weather on This Day (July 28, 2024)\\\\n\\\\n### Day\\\\n\\\\n82°\\\\n\\\\n### Night\\\\n\\\\n61°\\\\n\\\\nPlease note that while we strive for accuracy, the information provided may not always be correct. Use at your own risk.\\\\n\\\\n© Copyright by WeatherShogun.com\"}, {\\'url\\': \\'https://www.weather25.com/north-america/usa/california/los-angeles?page=month&month=July\\', \\'content\\': \\'| 27 Image 54: Cloudy 28°/16° | 28 Image 55: Sunny 30°/20° | 29 Image 56: Sunny 31°/21° | 30 Image 57: Sunny 32°/22° | 31 Image 58: Sunny 30°/22° | | | [...] 
Thursday Jul 24 Image 8: Partly cloudy 0 mm 29°/18°Friday Jul 25 Image 9: Cloudy 0 mm 24°/17°Saturday Jul 26 Image 10: Cloudy 0 mm 24°/16°Sunday Jul 27 Image 11: Cloudy 0 mm 28°/16°Monday Jul 28 Image 12: Sunny 0 mm 30°/20°Tuesday Jul 29 Image 13: Sunny 0 mm 31°/21°Wednesday Jul 30 Image 14: Sunny 0 mm 32°/22°Thursday Jul 31 Image 15: Sunny 0 mm 30°/22°Friday Aug 1 Image 16: Sunny 0 mm 31°/20°Saturday Aug 2 Image 17: Sunny 0 mm 32°/21°Sunday Aug 3 Image 18: Sunny 0 mm 33°/22°Monday Aug 4 Image\\'}]', name='tavily_search_results_json', id='23431937-bcc2-4f77-85b1-ef022b832e2b', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc'), AIMessage(content='The weather forecast for Los Angeles today is 81°F during the day and 64°F at night with 0% chance of precipitation and a wind speed of 5 mph. The UV Index is 11. It seems like a sunny day in Los Angeles.', response_metadata={'token_usage': {'completion_tokens': 54, 'prompt_tokens': 791, 'total_tokens': 845, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0255573e-7109-43fb-bd9f-bcf51a99b0ef-0')]}\n\n{'messages': [AIMessage(content='The weather forecast for Los Angeles today is 81°F during the day and 64°F at night with 0% chance of precipitation and a wind speed of 5 mph. The UV Index is 11. It seems like a sunny day in Los Angeles.', response_metadata={'token_usage': {'completion_tokens': 54, 'prompt_tokens': 791, 'total_tokens': 845, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0255573e-7109-43fb-bd9f-bcf51a99b0ef-0')]}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "41b7648c-f2e7-4f83-8954-f67551d76fbd", | |
| "cell_type": "markdown", | |
| "source": "## Modify State\nRun until the interrupt and then modify the state.", | |
| "metadata": {} | |
| }, | |
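| { | |
| "id": "7e2c1a40-9f3b-4d1e-8a2b-0c5d6e7f8a90", | |
| "cell_type": "markdown", | |
| "source": "The cells below walk through this step by step. As a compact, hedged sketch of the whole pattern (assuming the same `abot` graph defined earlier; the thread id `\"sketch-modify\"` is only an illustrative placeholder):\n\n```python\nfrom langchain_core.messages import HumanMessage\n\n# Hypothetical thread id, used only for this sketch.\nthread_sketch = {\"configurable\": {\"thread_id\": \"sketch-modify\"}}\n\n# 1. Run until the graph interrupts before the 'action' node.\nfor event in abot.graph.stream({\"messages\": [HumanMessage(\"Whats the weather in LA?\")]}, thread_sketch):\n    pass\n\n# 2. Inspect the paused state and edit the pending tool call in place.\nsnapshot = abot.graph.get_state(thread_sketch)\ntool_call = snapshot.values['messages'][-1].tool_calls[0]\ntool_call['args'] = {'query': 'current weather in Louisiana'}\n\n# 3. Write the edited values back as a new checkpoint, then resume with no new input.\nabot.graph.update_state(thread_sketch, snapshot.values)\nfor event in abot.graph.stream(None, thread_sketch):\n    pass\n```", | |
| "metadata": {} | |
| }, | |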
| { | |
| "id": "6a6d9cc0-f899-4a19-9abb-d995c2fdbc6f", | |
| "cell_type": "code", | |
| "source": "# 'user' msg = \"What is the weather in LA?\" \n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages]. \n# We have to do this, because the simple 'StateAgent' class expects \n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it \n# conform with that.\nmessages = [HumanMessage(\"Whats the weather in LA?\")]\n\n# thread config /hilo configurable/ used to keep track \n# of different threads /hilos/ inside the persistent\n# checkpointer/checkpoint. Used for having MULTIPLE CONVERSATIONS \n# with MANY 'users’ going on at the same time.\n# dict {} <- with inner dict {} as value, of \"configurable\" key\n# New conversational point of view is assigned to new thread config,\n# so start a fresh -> {\"thread_id\": \"3\"}\nthread = {\"configurable\": {\"thread_id\": \"3\"}}\n\n# Call 'events=agent(obj).graph.stream({\"messages\": messages}, thread)' \n# including list of messages dict -> {\"messages\": [messages]} and also \n# the 'thread config' dict -> {\"configurable\": {\"thread_id\": \"3\"}}\n# We're going to get back a 'stream of events/states', \n# that represents UPDATES to 'AgentState', over time.\nevents = abot.graph.stream({\"messages\": messages}, thread)\n\n# Extract each event/state per iteration\nfor event in events :\n \n # Extract event dict 'values' -> {v} per iter, \n # with 'event.values()' method.\n for v in event.values():\n \n # Display {v} dict, each inner loop iteration.\n print(v)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "869e4078-70fa-48a5-b12a-39b59a93071f", | |
| "cell_type": "markdown", | |
| "source": "```\n{'messages': [HumanMessage(content='Whats the weather in LA?', id='430d650c-5e26-45f4-9f53-3bcf7de8d1aa'), AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 152, 'total_tokens': 174, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-ac4a3b51-fd64-4c32-9b62-6942826b2bd6-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}\n\n{'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 152, 'total_tokens': 174, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-ac4a3b51-fd64-4c32-9b62-6942826b2bd6-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "2f92876a-4dcf-41d5-b91c-783a0217adf1", | |
| "cell_type": "code", | |
| "source": "# Get the CURRENT STATE / StateSnapshot of the graph for this \n# 'thread config’ -> thread = { \"configurable\": {\"thread_id\": \"3\"} }\nabot.graph.get_state(thread)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "24c62b5b-d8fa-4676-ae5c-b82a209668ba", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='430d650c-5e26-45f4-9f53-3bcf7de8d1aa'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-ac4a3b51-fd64-4c32-9b62-6942826b2bd6-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06ca21-1151-6cb7-8001-4d319d3914f9'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-ac4a3b51-fd64-4c32-9b62-6942826b2bd6-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}}}, created_at='2025-07-29T17:32:57.558324+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1f06ca21-10fd-6542-8000-ee3d7d8ee526'}})\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "f8093dde-169e-4993-839a-95cd5bc9f6fe", | |
| "cell_type": "code", | |
| "source": "# Get the CURRENT STATE / StateSnapshot of the graph for this \n# 'thread config’ -> thread = { \"configurable\": {\"thread_id\": \"3\"} }\n# Save the CURRENT STATE of the graph, into a new variable -> \n# 'current_values'\ncurrent_values = abot.graph.get_state(thread)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "2262c276-f4a6-4d18-b95c-2d47bb19832e", | |
| "cell_type": "code", | |
| "source": "# From CURRENT STATE (StateSnapshot) extract 'values' dictionary -> \n# values={ 'messages':[HumanMessage(...), AIMessage(...)] } \n# current_values.values['messages'][0] -> HumanMessage(...) \n# current_values.values['messages'][1] -> AIMessage(...)\n# From List of messages [HumanMessage(...)0, AIMessage(...)1] \n# pick the LAST message -> AIMessage(...)\ncurrent_values.values['messages'][-1]", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "a286556e-f79e-4c60-944d-982b3145d8e0", | |
| "cell_type": "markdown", | |
| "source": "```\nAIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-ac4a3b51-fd64-4c32-9b62-6942826b2bd6-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "e36194e2-89b8-48c7-a6c4-a032828d02ed", | |
| "cell_type": "code", | |
| "source": "# From CURRENT STATE (StateSnapshot) extract 'values' dictionary -> \n# values={ 'messages':[HumanMessage(...), AIMessage(...)] } \n# current_values.values['messages'][0] -> HumanMessage(...) \n# current_values.values['messages'][1] -> AIMessage(...) \n# From List of messages [HumanMessage(...)0, AIMessage(...)1] \n# Pick the LAST message -> AIMessage(...) and then select \n# the list 'tool_calls' inside that -> tool_calls=[]\ncurrent_values.values['messages'][-1].tool_calls", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "8de7419a-b604-4227-a668-e8d5cbbbeb79", | |
| "cell_type": "markdown", | |
| "source": "```\n[{'name': 'tavily_search_results_json',\n 'args': {'query': 'weather in Los Angeles'},\n 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}]\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "6cd42c9a-2467-4610-88f1-493d84ba8f8c", | |
| "cell_type": "code", | |
| "source": "# From 'tool_calls' list -> tool_calls=[] select the unique element -> \n# [0] inside that which is a dict {}, and then select the 'id' key\n# to get it's value. Save as '_id' variable.\n_id = current_values.values['messages'][-1].tool_calls[0]['id']\n\n# MODIFY tool_calls=[ {} ] list with dict inside. \n# tool_calls =['name':value,'args':{'query':'NEW QUERY'},'id':_id]\ncurrent_values.values['messages'][-1].tool_calls = [ # -> Open List\n {'name': 'tavily_search_results_json',\n \n # MODIFY Los Angeles by Louisiana, so\n # CURRENT STATE / StateSnapshot will be MODIFIED\n 'args': {'query': 'current weather in Louisiana'},\n 'id': _id}\n] # <- Close list", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "af64cbc5-2933-4324-bec3-d0e6bc0c4998", | |
| "cell_type": "code", | |
| "source": "# CURRENT STATE / StateSnapshot of graph for {\"thread_id\": \"3\"} ->\n# current_values -> 'current_values.values' -> values = {'messages':...}\n# UPDATE NEW CURRENT STATE for 'thread_config' -> {\"thread_id\": \"3\"}\n# including these NEW tool_calls =[ { } ] list at \n# 'current_values.values' -> values = {'messages':...} \nabot.graph.update_state(thread, current_values.values)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "58ee25fb-42e8-4824-bc0e-43882b822544", | |
| "cell_type": "markdown", | |
| "source": "```\n{'messages': [HumanMessage(content='Whats the weather in LA?', id='dcafe9a8-0867-4546-9d41-9e2b8a13546f'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-43864947-ce9f-4125-8698-074c78701b04-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}\n\n{'configurable': {'thread_id': '3',\n 'thread_ts': '1f06ccfb-2558-6364-8002-9e064c16cb5a'}}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "b82619c5-0f8e-485f-b0a9-c14048bd1ccc", | |
| "cell_type": "code", | |
| "source": "# Get the NEW CURRENT STATE / StateSnapshot of the graph\n# after being MODIFIED and UPDATED for this 'thread config’ ->\n# thread = { \"configurable\": {\"thread_id\": \"3\"} }\nabot.graph.get_state(thread)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "3a64cc1d-b747-462b-8f32-63df2b17c578", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='e9a897f2-395e-496c-bac8-9116f4b1d12f'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-544eb2c7-5be2-489a-a922-69a4fd71d2f9-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06cdc9-7ee4-63b2-8002-51761c4f29ec'}}, metadata={'source': 'update', 'step': 2, 'writes': {'llm': {'messages': [HumanMessage(content='Whats the weather in LA?', id='e9a897f2-395e-496c-bac8-9116f4b1d12f'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-544eb2c7-5be2-489a-a922-69a4fd71d2f9-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}}}, created_at='2025-07-30T00:31:54.606473+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1f06cdc9-7e9f-6824-8001-7551be4f6c7f'}})\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "444d90d6-1e44-4cc2-a6bb-01b97e25d489", | |
| "cell_type": "code", | |
| "source": "# Call 'events=agent(obj).graph.stream(None, thread)' \n# Doesn't include list of messages dict -> {\"messages\": [messages]}\n# so 'HumanMessage(...)' from ‘user’ is NOT passed in as historical msg here. \n# Include also the 'thread config' dict -> {\"configurable\": {\"thread_id\": \"3\"}}\n# We're going to get back a 'stream of events', that represents UPDATES\n# to 'AgentState', over time.\nevents = abot.graph.stream(None, thread)\n\n# Extract each event per iteration.\nfor event in events :\n \n # Extract event dict 'values' -> {v} per iter, \n # with 'event.values()' method.\n for v in event.values():\n \n # Display {v} dict, each inner loop iteration.\n print(v)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "2618d827-1ab7-4fa0-996c-57d2b09ff8d6", | |
| "cell_type": "markdown", | |
| "source": "```\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}\nBack to the model!\n\n{'messages': [ToolMessage(content=\"[{'url': 'https://www.weather25.com/north-america/usa/louisiana?page=month&month=July', 'content': 'weather25.com\\\\nSearch\\\\nweather in United States\\\\nRemove from your favorite locations\\\\nAdd to my locations\\\\nShare\\\\nweather in United States\\\\n\\\\n# Louisiana weather in July 2025\\\\n\\\\nPatchy light rain with thunder\\\\nThundery outbreaks possible\\\\nPartly cloudy\\\\nThundery outbreaks possible\\\\nPartly cloudy\\\\nPatchy rain possible\\\\nPatchy rain possible\\\\nPatchy light rain with thunder\\\\nPatchy rain possible\\\\nPatchy rain possible\\\\nLight rain shower\\\\nPatchy light rain\\\\nLight rain shower\\\\nPatchy rain possible [...] Light rain shower\\\\nModerate or heavy rain shower\\\\nPatchy rain possible\\\\nModerate or heavy rain shower\\\\nModerate or heavy rain shower\\\\nPatchy light rain with thunder\\\\nThundery outbreaks possible\\\\nPartly cloudy [...] | Sun | Mon | Tue | Wed | Thu | Fri | Sat |\\\\n| --- | --- | --- | --- | --- | --- | --- |\\\\n| | | 1 Moderate or heavy rain shower 33° /25° | 2 Moderate or heavy rain shower 33° /25° | 3 Light rain shower 32° /25° | 4 Patchy rain possible 33° /25° | 5 Sunny 34° /25° |'}, {'url': 'https://www.accuweather.com/en/us/new-orleans/70112/july-weather/348585', 'content': 'July. January February March April May June July August September October November December. 2025 ... 30. 90°. 76°. 1. 94°. 79°. 2. 90°. 82°. 3. 94°. 80°. 4. 95°.'}]\", name='tavily_search_results_json', id='198b56b3-0e0f-47a5-b335-1097086c1c7f', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc')]}\n\n{'messages': [HumanMessage(content='Whats the weather in LA?', id='e9a897f2-395e-496c-bac8-9116f4b1d12f'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-544eb2c7-5be2-489a-a922-69a4fd71d2f9-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}]), ToolMessage(content=\"[{'url': 'https://www.weather25.com/north-america/usa/louisiana?page=month&month=July', 'content': 'weather25.com\\\\nSearch\\\\nweather in United States\\\\nRemove from your favorite locations\\\\nAdd to my locations\\\\nShare\\\\nweather in United States\\\\n\\\\n# Louisiana weather in July 2025\\\\n\\\\nPatchy light rain with thunder\\\\nThundery outbreaks possible\\\\nPartly cloudy\\\\nThundery outbreaks possible\\\\nPartly cloudy\\\\nPatchy rain possible\\\\nPatchy rain possible\\\\nPatchy light rain with thunder\\\\nPatchy rain possible\\\\nPatchy rain possible\\\\nLight rain shower\\\\nPatchy light rain\\\\nLight rain shower\\\\nPatchy rain possible [...] 
Light rain shower\\\\nModerate or heavy rain shower\\\\nPatchy rain possible\\\\nModerate or heavy rain shower\\\\nModerate or heavy rain shower\\\\nPatchy light rain with thunder\\\\nThundery outbreaks possible\\\\nPartly cloudy [...] | Sun | Mon | Tue | Wed | Thu | Fri | Sat |\\\\n| --- | --- | --- | --- | --- | --- | --- |\\\\n| | | 1 Moderate or heavy rain shower 33° /25° | 2 Moderate or heavy rain shower 33° /25° | 3 Light rain shower 32° /25° | 4 Patchy rain possible 33° /25° | 5 Sunny 34° /25° |'}, {'url': 'https://www.accuweather.com/en/us/new-orleans/70112/july-weather/348585', 'content': 'July. January February March April May June July August September October November December. 2025 ... 30. 90°. 76°. 1. 94°. 79°. 2. 90°. 82°. 3. 94°. 80°. 4. 95°.'}]\", name='tavily_search_results_json', id='198b56b3-0e0f-47a5-b335-1097086c1c7f', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc'), AIMessage(content='I found information about the weather in Louisiana, but I believe you were asking about Los Angeles. Let me correct that and search for the current weather in Los Angeles.', additional_kwargs={'tool_calls': [{'id': 'call_yHnqSiycFmFc9kq3NGHuTwsc', 'function': {'arguments': '{\"query\":\"current weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 56, 'prompt_tokens': 580, 'total_tokens': 636, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-5bf4b6f4-547b-4cdd-b9ae-b44db339717e-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Los Angeles'}, 'id': 'call_yHnqSiycFmFc9kq3NGHuTwsc'}])]}\n\n{'messages': [AIMessage(content=\"I couldn't find the weather for LA specifically, but I found information about the weather in Louisiana. It mentions thundery outbreaks possible, heavy rain at times, sunny weather, and moderate rain at times. If you would like, I can search for the weather in Los Angeles specifically.\", additional_kwargs={'tool_calls': [{'id': 'call_yHnqSiycFmFc9kq3NGHuTwsc', 'function': {'arguments': '{\"query\":\"current weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 56, 'prompt_tokens': 580, 'total_tokens': 636, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-5bf4b6f4-547b-4cdd-b9ae-b44db339717e-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Los Angeles'}, 'id': 'call_yHnqSiycFmFc9kq3NGHuTwsc'}])]}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "f4f2bf73-a6b7-4629-a871-0c0e4f7044d5", | |
| "cell_type": "markdown", | |
| "source": "## Time Travel", | |
| "metadata": {} | |
| }, | |
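| { | |
| "id": "3b8d2f61-4c5a-4e9b-9d1f-2a6b7c8d9e01", | |
| "cell_type": "markdown", | |
| "source": "`get_state_history()` yields every checkpoint saved for a thread, newest first, and each `StateSnapshot.config` carries the checkpoint id (`thread_ts`) needed to return to it. A minimal, hedged sketch of the replay idea worked through in the next cells (assuming the `abot` graph and `thread` config from above):\n\n```python\n# Collect the saved checkpoints for thread \"3\" (newest snapshot first).\nhistory = list(abot.graph.get_state_history(thread))\n\n# Pick an earlier StateSnapshot; .next tells us which node would run next.\nsnapshot = history[-3]\nprint(snapshot.next)     # e.g. ('action',)\nprint(snapshot.config)   # {'configurable': {'thread_id': '3', 'thread_ts': '...'}}\n\n# Streaming with None and that snapshot's config resumes from that checkpoint.\nfor event in abot.graph.stream(None, snapshot.config):\n    for v in event.values():\n        print(v)\n```", | |
| "metadata": {} | |
| }, | |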
| { | |
| "id": "c214e40d-1426-462e-8551-1aeae9ebadb0", | |
| "cell_type": "code", | |
| "source": "# Init states as empty list []\nstates = []\n\n# Access the whole history of Agent States\n# It returns an iterator including ALL the StateSnapshots\n# for 'thread config' value -> {\"thread_id\": \"3\"} -> \n# Conversation topic \"3\" -> \"Weather in LA\" \n# MODIFIED to -> \"Weather in Louisiana\"\nstates_history = abot.graph.get_state_history(thread)\n\n# Extract each Agent state / StateSnapshot per iteration\nfor state in states_history:\n \n # Print each state / StateSnapshot per iteration\n print(state)\n \n # Print '--' between StateSnapshots for splitting them\n print('--')\n \n # Append each Agent state / StateSnapshot into 'states' list []\n states.append(state)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "4132e949-c02e-48a1-ba23-a6a4b378453f", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='699b2955-1692-47c0-8885-a6cac23ac681'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-7d51d8e2-e512-4ae1-a435-0db06683a196-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}]), ToolMessage(content=\"[{'url': 'https://www.weather25.com/north-america/usa/louisiana?page=month&month=July', 'content': 'weather25.com\\\\nSearch\\\\nweather in United States\\\\nRemove from your favorite locations\\\\nAdd to my locations\\\\nShare\\\\nweather in United States\\\\n\\\\n# Louisiana weather in July 2025\\\\n\\\\nThundery outbreaks possible\\\\nThundery outbreaks possible\\\\nHeavy rain at times\\\\nThundery outbreaks possible\\\\nSunny\\\\nModerate rain at times\\\\nLight rain shower\\\\nSunny\\\\nSunny\\\\nThundery outbreaks possible\\\\nPatchy rain possible\\\\nPatchy light drizzle\\\\nPartly cloudy\\\\nPatchy rain possible\\\\n\\\\n## The average weather in Louisiana in July [...] | Sun | Mon | Tue | Wed | Thu | Fri | Sat |\\\\n| --- | --- | --- | --- | --- | --- | --- |\\\\n| | | 1 Moderate or heavy rain shower 33° /25° | 2 Moderate or heavy rain shower 33° /25° | 3 Light rain shower 32° /25° | 4 Patchy rain possible 33° /25° | 5 Sunny 34° /25° | [...] Light rain shower\\\\nModerate or heavy rain shower\\\\nPatchy rain possible\\\\nModerate or heavy rain shower\\\\nModerate or heavy rain shower\\\\nModerate or heavy rain shower\\\\nThundery outbreaks possible\\\\nThundery outbreaks possible'}, {'url': 'https://www.wafb.com/2025/07/30/heat-early-then-rain-helps-cool-us-off/', 'content': 'We will be tracking high heat and humidity early today and then a threat for one or two strong...\\\\n\\\\n###### Two Weather Impacts Being Tracked Today\\\\n\\\\nWe should stay mainly dry this morning. Expect a very warm and humid morning as temperatures...\\\\n\\\\n###### Wednesday AM Day Planner\\\\n\\\\nHere are updated rain numbers for Baton Rouge Metro through yesterday.\\\\n\\\\n###### BTR Rain Stats Through Yesterday\\\\n\\\\nJared Silverman gives the 9 a.m. weather forecast on Wednesday, July 30. [...] As of Wednesday afternoon, the Weather Prediction Center is forecasting rainfall totals of 1 to 2 inches on average across south Louisiana. Locally higher amounts, possibly 3 inches or more, are possible.\\\\n\\\\nThe forecast rain chances show elevated rain chances continuing through the weekend.\\\\n\\\\nEXTENDED OUTLOOK\\\\n\\\\nElevated rain chances through the rest of the workweek and into the weekend, helping bring afternoon highs near normal in the low 90s. [...] # Rain returns today; a more active pattern settles in\\\\n\\\\nBATON ROUGE, La. 
(WAFB) - HEAT ALERTS: A Heat Advisory is in effect again today for most of our area for heat index values peaking between 105°–110°.\\\\n\\\\nA Heat Advisory remains in effect until this evening for heat index values peaking near or...\\\\n\\\\nTHE REST OF TODAY'}]\", name='tavily_search_results_json', id='64a02489-2037-4569-a464-7b3d7f32abfc', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc'), AIMessage(content=\"I couldn't find the weather for LA specifically, but I found information about the weather in Louisiana. It mentions thundery outbreaks possible, heavy rain at times, sunny weather, and moderate rain at times. If you would like, I can search for the weather in Los Angeles specifically.\", response_metadata={'finish_reason': 'stop', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 59, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 807, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 866}}, id='run-9b4de511-ecff-4614-b9e7-414926c36d1a-0')]}, next=(), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d767-de03-63ff-8004-230b051f7329'}}, metadata={'source': 'loop', 'step': 4, 'writes': {'llm': {'messages': [AIMessage(content=\"I couldn't find the weather for LA specifically, but I found information about the weather in Louisiana. It mentions thundery outbreaks possible, heavy rain at times, sunny weather, and moderate rain at times. If you would like, I can search for the weather in Los Angeles specifically.\", response_metadata={'finish_reason': 'stop', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 59, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 807, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 866}}, id='run-9b4de511-ecff-4614-b9e7-414926c36d1a-0')]}}}, created_at='2025-07-30T18:53:33.389906+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d767-d2ea-6a09-8003-ffaf36bf90ba'}})\n--\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='699b2955-1692-47c0-8885-a6cac23ac681'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-7d51d8e2-e512-4ae1-a435-0db06683a196-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}]), ToolMessage(content=\"[{'url': 'https://www.weather25.com/north-america/usa/louisiana?page=month&month=July', 'content': 'weather25.com\\\\nSearch\\\\nweather in United States\\\\nRemove from your favorite locations\\\\nAdd to my locations\\\\nShare\\\\nweather in United States\\\\n\\\\n# Louisiana weather in 
July 2025\\\\n\\\\nThundery outbreaks possible\\\\nThundery outbreaks possible\\\\nHeavy rain at times\\\\nThundery outbreaks possible\\\\nSunny\\\\nModerate rain at times\\\\nLight rain shower\\\\nSunny\\\\nSunny\\\\nThundery outbreaks possible\\\\nPatchy rain possible\\\\nPatchy light drizzle\\\\nPartly cloudy\\\\nPatchy rain possible\\\\n\\\\n## The average weather in Louisiana in July [...] | Sun | Mon | Tue | Wed | Thu | Fri | Sat |\\\\n| --- | --- | --- | --- | --- | --- | --- |\\\\n| | | 1 Moderate or heavy rain shower 33° /25° | 2 Moderate or heavy rain shower 33° /25° | 3 Light rain shower 32° /25° | 4 Patchy rain possible 33° /25° | 5 Sunny 34° /25° | [...] Light rain shower\\\\nModerate or heavy rain shower\\\\nPatchy rain possible\\\\nModerate or heavy rain shower\\\\nModerate or heavy rain shower\\\\nModerate or heavy rain shower\\\\nThundery outbreaks possible\\\\nThundery outbreaks possible'}, {'url': 'https://www.wafb.com/2025/07/30/heat-early-then-rain-helps-cool-us-off/', 'content': 'We will be tracking high heat and humidity early today and then a threat for one or two strong...\\\\n\\\\n###### Two Weather Impacts Being Tracked Today\\\\n\\\\nWe should stay mainly dry this morning. Expect a very warm and humid morning as temperatures...\\\\n\\\\n###### Wednesday AM Day Planner\\\\n\\\\nHere are updated rain numbers for Baton Rouge Metro through yesterday.\\\\n\\\\n###### BTR Rain Stats Through Yesterday\\\\n\\\\nJared Silverman gives the 9 a.m. weather forecast on Wednesday, July 30. [...] As of Wednesday afternoon, the Weather Prediction Center is forecasting rainfall totals of 1 to 2 inches on average across south Louisiana. Locally higher amounts, possibly 3 inches or more, are possible.\\\\n\\\\nThe forecast rain chances show elevated rain chances continuing through the weekend.\\\\n\\\\nEXTENDED OUTLOOK\\\\n\\\\nElevated rain chances through the rest of the workweek and into the weekend, helping bring afternoon highs near normal in the low 90s. [...] # Rain returns today; a more active pattern settles in\\\\n\\\\nBATON ROUGE, La. (WAFB) - HEAT ALERTS: A Heat Advisory is in effect again today for most of our area for heat index values peaking between 105°–110°.\\\\n\\\\nA Heat Advisory remains in effect until this evening for heat index values peaking near or...\\\\n\\\\nTHE REST OF TODAY'}]\", name='tavily_search_results_json', id='64a02489-2037-4569-a464-7b3d7f32abfc', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc')]}, next=('llm',), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d767-d2ea-6a09-8003-ffaf36bf90ba'}}, metadata={'source': 'loop', 'step': 3, 'writes': {'action': {'messages': [ToolMessage(content=\"[{'url': 'https://www.weather25.com/north-america/usa/louisiana?page=month&month=July', 'content': 'weather25.com\\\\nSearch\\\\nweather in United States\\\\nRemove from your favorite locations\\\\nAdd to my locations\\\\nShare\\\\nweather in United States\\\\n\\\\n# Louisiana weather in July 2025\\\\n\\\\nThundery outbreaks possible\\\\nThundery outbreaks possible\\\\nHeavy rain at times\\\\nThundery outbreaks possible\\\\nSunny\\\\nModerate rain at times\\\\nLight rain shower\\\\nSunny\\\\nSunny\\\\nThundery outbreaks possible\\\\nPatchy rain possible\\\\nPatchy light drizzle\\\\nPartly cloudy\\\\nPatchy rain possible\\\\n\\\\n## The average weather in Louisiana in July [...] 
| Sun | Mon | Tue | Wed | Thu | Fri | Sat |\\\\n| --- | --- | --- | --- | --- | --- | --- |\\\\n| | | 1 Moderate or heavy rain shower 33° /25° | 2 Moderate or heavy rain shower 33° /25° | 3 Light rain shower 32° /25° | 4 Patchy rain possible 33° /25° | 5 Sunny 34° /25° | [...] Light rain shower\\\\nModerate or heavy rain shower\\\\nPatchy rain possible\\\\nModerate or heavy rain shower\\\\nModerate or heavy rain shower\\\\nModerate or heavy rain shower\\\\nThundery outbreaks possible\\\\nThundery outbreaks possible'}, {'url': 'https://www.wafb.com/2025/07/30/heat-early-then-rain-helps-cool-us-off/', 'content': 'We will be tracking high heat and humidity early today and then a threat for one or two strong...\\\\n\\\\n###### Two Weather Impacts Being Tracked Today\\\\n\\\\nWe should stay mainly dry this morning. Expect a very warm and humid morning as temperatures...\\\\n\\\\n###### Wednesday AM Day Planner\\\\n\\\\nHere are updated rain numbers for Baton Rouge Metro through yesterday.\\\\n\\\\n###### BTR Rain Stats Through Yesterday\\\\n\\\\nJared Silverman gives the 9 a.m. weather forecast on Wednesday, July 30. [...] As of Wednesday afternoon, the Weather Prediction Center is forecasting rainfall totals of 1 to 2 inches on average across south Louisiana. Locally higher amounts, possibly 3 inches or more, are possible.\\\\n\\\\nThe forecast rain chances show elevated rain chances continuing through the weekend.\\\\n\\\\nEXTENDED OUTLOOK\\\\n\\\\nElevated rain chances through the rest of the workweek and into the weekend, helping bring afternoon highs near normal in the low 90s. [...] # Rain returns today; a more active pattern settles in\\\\n\\\\nBATON ROUGE, La. (WAFB) - HEAT ALERTS: A Heat Advisory is in effect again today for most of our area for heat index values peaking between 105°–110°.\\\\n\\\\nA Heat Advisory remains in effect until this evening for heat index values peaking near or...\\\\n\\\\nTHE REST OF TODAY'}]\", name='tavily_search_results_json', id='64a02489-2037-4569-a464-7b3d7f32abfc', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc')]}}}, created_at='2025-07-30T18:53:32.226379+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d767-b2e3-65db-8002-98ebb41b5b53'}})\n--\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='699b2955-1692-47c0-8885-a6cac23ac681'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-7d51d8e2-e512-4ae1-a435-0db06683a196-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d767-b2e3-65db-8002-98ebb41b5b53'}}, metadata={'source': 'update', 'step': 2, 'writes': {'llm': {'messages': [HumanMessage(content='Whats the weather in LA?', id='699b2955-1692-47c0-8885-a6cac23ac681'), AIMessage(content='', additional_kwargs={'tool_calls': 
[{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-7d51d8e2-e512-4ae1-a435-0db06683a196-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}}}, created_at='2025-07-30T18:53:28.867981+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d767-b292-6e94-8001-3d9437c8bbed'}})\n--\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='699b2955-1692-47c0-8885-a6cac23ac681'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-7d51d8e2-e512-4ae1-a435-0db06683a196-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d767-b292-6e94-8001-3d9437c8bbed'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-7d51d8e2-e512-4ae1-a435-0db06683a196-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}}}, created_at='2025-07-30T18:53:28.835021+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d767-b250-66ef-8000-a497d8225ab5'}})\n--\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='699b2955-1692-47c0-8885-a6cac23ac681')]}, next=('llm',), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d767-b250-66ef-8000-a497d8225ab5'}}, metadata={'source': 'loop', 'step': 0, 'writes': None}, created_at='2025-07-30T18:53:28.807784+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': 
'1f06d767-b249-6bbe-bfff-d499c91f3d0f'}})\n--\nStateSnapshot(values={'messages': []}, next=('__start__',), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d767-b249-6bbe-bfff-d499c91f3d0f'}}, metadata={'source': 'input', 'step': -1, 'writes': {'messages': [HumanMessage(content='Whats the weather in LA?')]}}, created_at='2025-07-30T18:53:28.805054+00:00', parent_config=None)\n--\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "30b077bb-b869-47aa-aaed-9b93b8c7c1ea", | |
| "cell_type": "markdown", | |
| "source": "To fetch the same state as was filmed, the offset below is changed to `-3` from `-1`. This accounts for the initial state `__start__` and the first state that are now stored to state memory with the latest version of software.", | |
| "metadata": {} | |
| }, | |
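| { | |
| "id": "5a1f9c33-7d2e-4b8a-a6c4-1e2f3a4b5c6d", | |
| "cell_type": "markdown", | |
| "source": "If a future langgraph release changes how many checkpoints are stored, a fixed offset like `-3` can silently pick the wrong snapshot. A more robust (but still hedged) alternative is to select the snapshot by its metadata instead of by position, assuming the one we want is the step-1 checkpoint that pauses before the `action` node:\n\n```python\n# Pick the checkpoint by metadata rather than by list offset.\nto_replay_alt = next(\n    s for s in abot.graph.get_state_history(thread)\n    if s.metadata.get('step') == 1 and s.next == ('action',)\n)\n\n# Should show the original tool call -> {'query': 'weather in Los Angeles'}\nprint(to_replay_alt.values['messages'][-1].tool_calls)\n```\n\n`to_replay_alt` is only an illustrative variable name; the cells below keep using the offset-based `to_replay`.", | |
| "metadata": {} | |
| }, | |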
| { | |
| "id": "ba986bc5-149e-489c-9ade-c2c183e6bccf", | |
| "cell_type": "code", | |
| "source": "# Print the len(states) = 6 elems\nprint('states list has {} elems -> StateSnapshots'.format(len(states)))\n\n# Select the 4th element / StateSnapshot at 'states' list, where we MODIFIED tool_calls = [ { } ] \n# {’query’ : ’current weather in Louisiana’} as input ’arg’ of ’tavily_search_results_json‘. \n# So, it returns to be {'query': 'weather in Los Angeles'} as input ’arg’ of ’tavily_search_results_json‘. \n# Then save that into 'to_replay' variable.\nto_replay = states[-3]", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "c501c049-5537-4e7b-b39d-daaa9ff3f576", | |
| "cell_type": "markdown", | |
| "source": "```\nstates list has 6 elems -> StateSnapshots\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "f49848ae-f06a-46ce-bbfc-96aac9dfb14d", | |
| "cell_type": "code", | |
| "source": "# Print out the 4th element / StateSnapshot at 'states' list \n# which is the one occurred after MODIFIYING tool_calls = [ { } ] \n# {’query’ : ’current weather in Louisiana’} as input ’arg’ of ’tavily_search_results_json‘. \n# So, it returns to be {'query': 'weather in Los Angeles'} as input ’arg’ of ’tavily_search_results_json‘. \n# Then save that into 'to_replay' variable. \nto_replay", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "1784d56d-f57f-4ddc-8e91-ef51e681984a", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='a63f0883-81a4-45e5-b0ca-d9b297642382'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-c6365724-c4fb-425d-affa-fdfa3eb95603-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d9ec-7a79-6a5f-8001-bd9b22fcb587'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-c6365724-c4fb-425d-affa-fdfa3eb95603-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}}}, created_at='2025-07-30T23:41:57.039558+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d9ec-7a41-686d-8000-bda93e201639'}})\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "679b00cc-9a7b-4c62-857b-829082e9b3ad", | |
| "cell_type": "code", | |
| "source": "# Call 'events=agent(obj).graph.stream({\"messages\": messages}, to_replay.config)' \n# Doesn't include list of messages dict -> {\"messages\": [messages]} \n# so 'HumanMessage' from ‘user’ is NOT passed in as historical msg here. \n# This time use ‘to_replay.config’ = ‘thread' config at 4th StateSnapshot -> \n# config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d9ec-7a79-6a5f-8001-bd9b22fcb587'}} \n# We're going to get back a 'stream of events/states', that represents UPDATES \n# to 'AgentState', over time.\nevents = abot.graph.stream(None, to_replay.config)\n\n# Extract each event/state per iteration.\nfor event in events:\n \n # Extract each event/state/node value -> {v dict} \n # and each event/state/node name -> 'k-name' \n # Get 'k-name', {v dict} per iter, with 'event.items()' method.\n for k, v in event.items():\n \n # Display event/state/node value -> {v dict}, \n # which has a 'k-name', each inner loop iteration.\n print(v)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "1e094ed6-4cb4-45fc-b8ea-86d64a4990be", | |
| "cell_type": "markdown", | |
| "source": "```\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}\nBack to the model!\n\n{'messages': [ToolMessage(content='[{\\'url\\': \\'https://weatherspark.com/h/m/1705/2025/7/Historical-Weather-in-July-2025-in-Los-Angeles-California-United-States\\', \\'content\\': \\'Today Yesterday Jul 2025 194019501960197019801990200020102020 2016201720182019202020212022202320242025 SpringSummerFall Winter JanFebMarAprMayJunJulAug Sep Oct Nov Dec 123456789101112131415161718192021222324252627282930 31 July 2025 Weather History in Los Angeles California, United States ================================================================== The data for this report comes from the Los Angeles International Airport. See all nearby weather stations Latest Report — 3:53 PM [...] 11:54 AM | W | 11:42 PM | E | 5:32 AM | S | 231,924 mi | | 17 | | 50% | - | 1:01 PM | WNW | - | 6:18 AM | S | 230,611 mi | | 18 | | 44% | 12:12 AM | ENE | 2:11 PM | WNW | - | 7:08 AM | S | 229,581 mi | | 19 | | 32% | 12:47 AM | ENE | 3:24 PM | WNW | - | 8:01 AM | S | 228,908 mi | | 20 | | 21% | 1:29 AM | ENE | 4:37 PM | WNW | - | 8:59 AM | S | 228,690 mi | | 21 | | 12% | 2:19 AM | NE | 5:46 PM | NW | - | 10:01 AM | S | 229,032 mi | | 22 | | 5% | 3:19 AM | NE | 6:49 PM | NW | - | 11:05 AM | S | [...] SE | - | - | | 10 | | 100% | - | 5:21 AM | WSW | 8:32 PM | ESE | 12:33 AM | S | 244,014 mi | | 11 | | 100% | - | 6:25 AM | WSW | 9:12 PM | ESE | 1:29 AM | S | 241,674 mi | | 12 | | 97% | - | 7:31 AM | WSW | 9:47 PM | ESE | 2:22 AM | S | 239,381 mi | | 13 | | 93% | - | 8:37 AM | WSW | 10:18 PM | ESE | 3:12 AM | S | 237,223 mi | | 14 | | 86% | - | 9:43 AM | W | 10:46 PM | E | 4:00 AM | S | 235,247 mi | | 15 | | 77% | - | 10:48 AM | W | 11:14 PM | E | 4:46 AM | S | 233,477 mi | | 16 | | 67% | - |\\'}, {\\'url\\': \\'https://www.weather25.com/north-america/usa/california/los-angeles?page=month&month=July\\', \\'content\\': \"| 27 Image 54: Sunny 29°/21° | 28 Image 55: Sunny 29°/20° | 29 Image 56: Sunny 30°/21° | 30 Image 57: Partly cloudy 32°/18° | 31 Image 58: Partly cloudy 32°/19° | | | [...] If you’re planning to visit Los Angeles in the near future, we highly recommend that you review the 14 day weather forecast for Los Angeles before you arrive.\\\\n\\\\nImage 22: Temperatures\\\\n\\\\nTemperatures\\\\n\\\\n30° /20° \\\\n\\\\nImage 23: Rainy Days\\\\n\\\\nRainy Days\\\\n\\\\n0\\\\n\\\\nImage 24: Snowy Days\\\\n\\\\nSnowy Days\\\\n\\\\n0\\\\n\\\\nImage 25: Dry Days\\\\n\\\\nDry Days\\\\n\\\\n31\\\\n\\\\nImage 26: Rainfall\\\\n\\\\nRainfall\\\\n\\\\n5\\\\n\\\\nmm\\\\n\\\\nImage 27: 11.9\\\\n\\\\nSun Hours\\\\n\\\\n11.9\\\\n\\\\nHrs\\\\n\\\\nHistoric average weather for July\\\\n\\\\n[](\\\\n\\\\nJuly\\\\n\\\\n[]( [...] The average weather in Los Angeles in July\\\\n------------------------------------------\\\\n\\\\nThe weather in Los Angeles in July is hot. The average temperatures are between 20°C and 30°C.\\\\n\\\\nThere shouldn\\'t be any rainy days in in Los Angeles during July. 
Having said that, the days are very hot in July so hanging out outside is more recomanded during the after noons and evening time.\\\\n\\\\nOur weather forecast can give you a great sense of what weather to expect in Los Angeles in July 2025.\"}]', name='tavily_search_results_json', id='9b74be76-8723-40c9-8eb3-436b52051e89', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc')]}\n\n{'messages': [HumanMessage(content='Whats the weather in LA?', id='8794f1c0-8bc8-48a7-9f31-bc48fb2f2e4f'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-8b6f9f27-842c-4d7c-af29-1b6227272bc7-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}]), ToolMessage(content='[{\\'url\\': \\'https://weatherspark.com/h/m/1705/2025/7/Historical-Weather-in-July-2025-in-Los-Angeles-California-United-States\\', \\'content\\': \\'Today Yesterday Jul 2025 194019501960197019801990200020102020 2016201720182019202020212022202320242025 SpringSummerFall Winter JanFebMarAprMayJunJulAug Sep Oct Nov Dec 123456789101112131415161718192021222324252627282930 31 July 2025 Weather History in Los Angeles California, United States ================================================================== The data for this report comes from the Los Angeles International Airport. See all nearby weather stations Latest Report — 3:53 PM [...] 11:54 AM | W | 11:42 PM | E | 5:32 AM | S | 231,924 mi | | 17 | | 50% | - | 1:01 PM | WNW | - | 6:18 AM | S | 230,611 mi | | 18 | | 44% | 12:12 AM | ENE | 2:11 PM | WNW | - | 7:08 AM | S | 229,581 mi | | 19 | | 32% | 12:47 AM | ENE | 3:24 PM | WNW | - | 8:01 AM | S | 228,908 mi | | 20 | | 21% | 1:29 AM | ENE | 4:37 PM | WNW | - | 8:59 AM | S | 228,690 mi | | 21 | | 12% | 2:19 AM | NE | 5:46 PM | NW | - | 10:01 AM | S | 229,032 mi | | 22 | | 5% | 3:19 AM | NE | 6:49 PM | NW | - | 11:05 AM | S | [...] SE | - | - | | 10 | | 100% | - | 5:21 AM | WSW | 8:32 PM | ESE | 12:33 AM | S | 244,014 mi | | 11 | | 100% | - | 6:25 AM | WSW | 9:12 PM | ESE | 1:29 AM | S | 241,674 mi | | 12 | | 97% | - | 7:31 AM | WSW | 9:47 PM | ESE | 2:22 AM | S | 239,381 mi | | 13 | | 93% | - | 8:37 AM | WSW | 10:18 PM | ESE | 3:12 AM | S | 237,223 mi | | 14 | | 86% | - | 9:43 AM | W | 10:46 PM | E | 4:00 AM | S | 235,247 mi | | 15 | | 77% | - | 10:48 AM | W | 11:14 PM | E | 4:46 AM | S | 233,477 mi | | 16 | | 67% | - |\\'}, {\\'url\\': \\'https://www.weather25.com/north-america/usa/california/los-angeles?page=month&month=July\\', \\'content\\': \"| 27 Image 54: Sunny 29°/21° | 28 Image 55: Sunny 29°/20° | 29 Image 56: Sunny 30°/21° | 30 Image 57: Partly cloudy 32°/18° | 31 Image 58: Partly cloudy 32°/19° | | | [...] 
If you’re planning to visit Los Angeles in the near future, we highly recommend that you review the 14 day weather forecast for Los Angeles before you arrive.\\\\n\\\\nImage 22: Temperatures\\\\n\\\\nTemperatures\\\\n\\\\n30° /20° \\\\n\\\\nImage 23: Rainy Days\\\\n\\\\nRainy Days\\\\n\\\\n0\\\\n\\\\nImage 24: Snowy Days\\\\n\\\\nSnowy Days\\\\n\\\\n0\\\\n\\\\nImage 25: Dry Days\\\\n\\\\nDry Days\\\\n\\\\n31\\\\n\\\\nImage 26: Rainfall\\\\n\\\\nRainfall\\\\n\\\\n5\\\\n\\\\nmm\\\\n\\\\nImage 27: 11.9\\\\n\\\\nSun Hours\\\\n\\\\n11.9\\\\n\\\\nHrs\\\\n\\\\nHistoric average weather for July\\\\n\\\\n[](\\\\n\\\\nJuly\\\\n\\\\n[]( [...] The average weather in Los Angeles in July\\\\n------------------------------------------\\\\n\\\\nThe weather in Los Angeles in July is hot. The average temperatures are between 20°C and 30°C.\\\\n\\\\nThere shouldn\\'t be any rainy days in in Los Angeles during July. Having said that, the days are very hot in July so hanging out outside is more recomanded during the after noons and evening time.\\\\n\\\\nOur weather forecast can give you a great sense of what weather to expect in Los Angeles in July 2025.\"}]', name='tavily_search_results_json', id='9b74be76-8723-40c9-8eb3-436b52051e89', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc'), AIMessage(content='The weather in Los Angeles today is sunny with a temperature of 29°C. There are no rainy days expected in Los Angeles during July. The average temperatures range between 20°C and 30°C.', response_metadata={'token_usage': {'completion_tokens': 42, 'prompt_tokens': 1277, 'total_tokens': 1319, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5e7b7685-8b76-4857-8532-52629c87a8ea-0')]}\n\n{'messages': [AIMessage(content='The weather in Los Angeles today is sunny with a temperature of 29°C. There are no rainy days expected in Los Angeles during July. The average temperatures range between 20°C and 30°C.', response_metadata={'token_usage': {'completion_tokens': 42, 'prompt_tokens': 1277, 'total_tokens': 1319, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5e7b7685-8b76-4857-8532-52629c87a8ea-0')]}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "c5aadcb4-a1fa-4393-ad16-bce2384385f3", | |
| "cell_type": "markdown", | |
| "source": "## Go back in time and edit", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "3cb01855-cca3-4ce7-81f3-28a7395f0a0c", | |
| "cell_type": "code", | |
| "source": "# Print out the 4th element / StateSnapshot at 'states' list \n# which is the one occurred after MODIFIYING tool_calls = [ { } ] \n# {’query’ : ’current weather in Louisiana’} as input ’arg’ of ’tavily_search_results_json‘. \n# So, it returns to be {'query': 'weather in Los Angeles'} as input ’arg’ of ’tavily_search_results_json‘. \n# Then save that into 'to_replay' variable. \nto_replay", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "b6317b53-4539-49c0-8ff6-28ef394d7352", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='a63f0883-81a4-45e5-b0ca-d9b297642382'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-c6365724-c4fb-425d-affa-fdfa3eb95603-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d9ec-7a79-6a5f-8001-bd9b22fcb587'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-c6365724-c4fb-425d-affa-fdfa3eb95603-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}}}, created_at='2025-07-30T23:41:57.039558+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d9ec-7a41-686d-8000-bda93e201639'}})\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "dfcdd7b8-b1c3-4021-a9e3-259ff01dc9f4", | |
| "cell_type": "code", | |
| "source": "# At 4th element / StateSnapshot -> 'to_replay' \n# From 'tool_calls' list -> tool_calls=[] select the unique element -> \n# [0] inside that which is a dict { }, and then select the 'id' key \n# to get it's value. Save as '_id' variable.\n_id = to_replay.values['messages'][-1].tool_calls[0]['id']\n\n# MODIFY tool_calls=[ {} ] list with dict inside. \n# tool_calls =[ { 'name':value,'args':{'query':'NEW QUERY'},'id':_id } ]\nto_replay.values['messages'][-1].tool_calls = [ # Open List\n \n {'name': 'tavily_search_results_json',\n \n # MODIFY for searching at accuweather, about \n # current weather in LA\n 'args': {'query': 'current weather in LA, accuweather'},\n 'id': _id}\n \n] # Close List", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "dda5f916-639f-4b10-8410-aa8c30b05c34", | |
| "cell_type": "code", | |
| "source": "# Select CURRENT STATE / StateSnapshot of graph for {\"thread_id\": \"3\"} -> \n# thread = to_replay.config -> config = {‘configurable’ : {\"thread_id\": \"3\", \n# 'thread_ts': '1f06d9ec-7a79-6a5f-8001-bd9b22fcb587' } } \n# {‘messages’: list of messages} = 'to_replay.values' -> values = {'messages': [list of messages]} \n# UPDATE, MODIFIED as NEW CURRENT STATE for 'thread’ config = ‘to_replay.config’ \n# Include MODIFIED or NEW tool_calls =[ { } ] list contained at full dict of messages -> \n# 'to_replay.values' -> values = {'messages':[list of messages]} \nbranch_state = abot.graph.update_state(to_replay.config, to_replay.values)\n\n# Display UPDATED as CURRENT STATE, previously MODIFIED with new config={} \n# and values={} messages \nbranch_state", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "986d48c4-6668-4303-92c5-7040a4dec482", | |
| "cell_type": "markdown", | |
| "source": "```\n{'messages': [HumanMessage(content='Whats the weather in LA?', id='de2a7b3d-d00d-4c35-a2c1-ce901130659e'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-d5ec3ed5-02b2-49b2-89ca-c71d977b211e-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in LA, accuweather'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "fc60fe04-7b80-4460-843e-cb6da2f60325", | |
| "cell_type": "code", | |
| "source": "# Call 'events=agent(obj).graph.stream(None, branch_state)'\n# branch_state = {\"messages\": [list of messages]} \n# Doesn't include ‘thread’ config ->\n# config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d9ec-7a79-6a5f-8001-bd9b22fcb587'}}\n# This time use ‘branch_state’ MODIFIED 4th elemenet / StateSnapshot -> \n# We're going to get back a 'stream of events / states', \n# that represents UPDATES to 'AgentState', over time.\nevents = abot.graph.stream(None, branch_state)\n\n# Extract each event/state per iteration.\nfor event in events:\n \n # Extract each event/state/node value -> {v dict} \n # and each event/state/node name -> 'k-name' \n # Get 'k-name', {v dict} per iter, with 'event.items()' method.\n for k, v in event.items():\n \n # If event/state/node 'k-name' != \"__end__\" \n if k != \"__end__\":\n \n # Then print each event/state/node value -> {v dict}\n print(v)\n \n # And print each event/state/node name -> 'k-name' \n print('-->',k)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "40b4626b-7b3f-4fb1-8eaa-f18db6d0c8cb", | |
| "cell_type": "markdown", | |
| "source": "```\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in LA, accuweather'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}\nBack to the model!\n{'messages': [ToolMessage(content=\"HTTPError('403 Client Error: Forbidden for url: http://jupyter-api-proxy.internal.dlai/rev-proxy/tavily_search/search')\", name='tavily_search_results_json', id='8202f55e-68c0-4c5c-aea8-3428536174ab', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc')]}\n--> action\n\n{'messages': [HumanMessage(content='Whats the weather in LA?', id='1a9ceefb-5dc9-483b-b0f2-46c9c9557f28'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-f1966673-37e1-42ef-8587-0ae3f50716ed-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in LA, accuweather'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}]), ToolMessage(content=\"HTTPError('403 Client Error: Forbidden for url: http://jupyter-api-proxy.internal.dlai/rev-proxy/tavily_search/search')\", name='tavily_search_results_json', id='8202f55e-68c0-4c5c-aea8-3428536174ab', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc'), AIMessage(content='I encountered an issue while trying to fetch the weather information for LA. Let me attempt the search again.', additional_kwargs={'tool_calls': [{'id': 'call_q5MGPo2kqOyIPDjkpup6BYzw', 'function': {'arguments': '{\"query\":\"current weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 219, 'total_tokens': 263, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-6d40504d-26dc-4d0c-8691-aff214a28d59-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Los Angeles'}, 'id': 'call_q5MGPo2kqOyIPDjkpup6BYzw'}])]}\n\n{'messages': [AIMessage(content='I encountered an issue while trying to fetch the weather information for LA. 
Let me attempt the search again.', additional_kwargs={'tool_calls': [{'id': 'call_q5MGPo2kqOyIPDjkpup6BYzw', 'function': {'arguments': '{\"query\":\"current weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 219, 'total_tokens': 263, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-6d40504d-26dc-4d0c-8691-aff214a28d59-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Los Angeles'}, 'id': 'call_q5MGPo2kqOyIPDjkpup6BYzw'}])]}\n--> llm\n```", | |
| "metadata": {} | |
| }, | |
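| { | |
| "id": "7a1c2e30-9b4d-4f6a-8e21-3c5d7f9a0b11", | |
| "cell_type": "markdown", | |
| "source": "The cells above walk through this step by step. As a compact recap (just a sketch, not an official helper: it reuses the `abot` agent and `thread` config defined earlier in this notebook, and the replacement query is only an illustrative value), the next cell strings the same calls together: pick a past `StateSnapshot` from `get_state_history()`, edit its pending `tool_calls`, write it back with `update_state()`, and resume with `stream(None, ...)`.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "7a1c2e30-9b4d-4f6a-8e21-3c5d7f9a0b12", | |
| "cell_type": "code", | |
| "source": "# Compact recap of the 'go back in time and edit' pattern shown above.\n# Assumes 'abot' and 'thread' already exist from earlier cells; the\n# replacement query below is only an illustrative value.\n\n# 1. Collect the snapshots for this thread (most recent first).\nhistory = list(abot.graph.get_state_history(thread))\n\n# 2. Pick the most recent snapshot whose NEXT step is the 'action' node,\n#    i.e. the point just before the tool call was executed.\nsnapshot = next(s for s in history if s.next == ('action',))\n\n# 3. Overwrite the pending tool call's arguments, keeping the same call id.\nlast_msg = snapshot.values['messages'][-1]\nlast_msg.tool_calls = [{'name': 'tavily_search_results_json',\n                        'args': {'query': 'weather in Los Angeles this weekend'},\n                        'id': last_msg.tool_calls[0]['id']}]\n\n# 4. Write the edited values back as a new branch and resume from there.\nbranch = abot.graph.update_state(snapshot.config, snapshot.values)\nfor event in abot.graph.stream(None, branch):\n    for node_name, node_output in event.items():\n        print(node_name, '->', node_output)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |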
| { | |
| "id": "69edaf54-5f62-4973-af18-55f21a9fac88", | |
| "cell_type": "markdown", | |
| "source": "## Add message to a state at a given time", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "d306311b-e8f5-41ed-a89d-c6d9613b5b56", | |
| "cell_type": "code", | |
| "source": "# Print out the 4th element / StateSnapshot at 'states' list \n# which is the one occurred after MODIFIYING tool_calls = [ { } ] \n# {'query': 'current weather in LA, accuweather'} as NEW input ’arg’ \n# of ’tavily_search_results_json‘. \n# After that, UPDATE this NEW STATE.\nto_replay", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "5407099b-12da-4d18-b1ec-7e6ff1b642c7", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='ea10c077-3dcd-4c64-9d6e-ae4c8410d75a'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-e17aab91-ede0-42c4-a55e-907b80a5060d-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in LA, accuweather'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1f06e5e3-9975-65a9-8001-044a9ac1eab8'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-e17aab91-ede0-42c4-a55e-907b80a5060d-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}])]}}}, created_at='2025-07-31T22:32:22.068555+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1f06e5e3-993e-60e5-8000-abc8b4f30acd'}})\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "51b7cca9-5af1-4cfd-a3cb-cddd3df8ebdd", | |
| "cell_type": "code", | |
| "source": "# At 4th element / StateSnapshot UPDATED -> 'to_replay' \n# From 'tool_calls' list -> tool_calls=[] select the unique element -> \n# [0] inside that which is a dict { }, and then select the 'id' key \n# to get it's value. SAVE as '_id' variable.\n_id = to_replay.values['messages'][-1].tool_calls[0]['id']", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "3845da23-1650-4cf3-b580-58102a171bea", | |
| "cell_type": "code", | |
| "source": "# Create NEW 'ToolMessage(...)' to be APPENDED at state ->\n# {\"messages\": [list of messages]} -> \n# {\"messages\": [ HumanMessage(...), ToolMessage(...), AIMessage(...) ] }\nstate_update = {\"messages\": [ToolMessage(tool_call_id=_id,\n name=\"tavily_search_results_json\",\n content=\"54 degree celcius\")]}", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "ba9ae593-cf8d-4c8e-89dd-5c0ebfceb4ce", | |
| "cell_type": "code", | |
| "source": "# UPDATE NEW STATE of graph including 'thread’ config -> \n# ‘to_replay.config’. Also includes NEW 'ToolMessage(...)' -> state_update \n# and use as_node= \"action\" as we were the ‘action’ node, \n# which executes the 'ToolMessage(...)'. \nbranch_and_add = abot.graph.update_state( to_replay.config, \n state_update, \n as_node=\"action\" )", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "eaa18978-1974-4f53-b58d-3738111754b6", | |
| "cell_type": "code", | |
| "source": "# Call 'events=agent(obj).graph.stream(None, branch_and_add)' \n# branch_and_add contains UPDATED STATE including NEW ToolMessage(...)' dict -> \n# state_update = {\"messages\": [ ToolMessage( id_value, tool_name, tool_content ) ] } \n# Doesn't include ‘thread’ config -> \n# config={'configurable': {'thread_id': '3', 'thread_ts': '1f06d9ec-7a79-6a5f-8001-bd9b22fcb587'}} \n# This time use ‘branch_and_add’ that includes NEW CREATED ‘ToolMessage’ -> \n# {“message”: [ToolMessage(...)]}, that is executed by ACTING as \n# we were the \"action\" node -> as_node = “action” parameter, \n# returning as “action” result / observation -> \"54 degree celcius\" content \n# We're going to get back a 'stream of events / states', \n# that represents UPDATES to 'AgentState', over time.\nevents = abot.graph.stream(None, branch_and_add)\n# Extract each event/state per iteration.\nfor event in events :\n \n # Extract each event/state/node value -> {v dict} \n # and each event/state/node name -> 'k-name' \n # Get 'k-name', {v dict} per iter, with 'event.items()' method.\n for k, v in event.items():\n \n # Print out each event/state/node value -> {v dict}\n print(v)\n \n # And print out each event/state/node name -> 'k-name'\n print('-->', k)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "09ccd669-c045-42e3-a816-54128462f121", | |
| "cell_type": "markdown", | |
| "source": "```\n{'messages': [HumanMessage(content='Whats the weather in LA?', id='e4f706bb-f4ca-4593-83ef-f536be03656c'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{\"query\":\"weather in Los Angeles\"}', 'name': 'tavily_search_results_json'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens': 152, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}, 'total_tokens': 174}}, id='run-7425db2d-501c-4bbe-9b10-a4cf101ac1de-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}]), ToolMessage(content='54 degree celcius', name='tavily_search_results_json', id='4e025859-4dbf-4f64-b872-47fa8b25f675', tool_call_id='call_6ED1ZQ8nrjYIOY14yqInLPZc'), AIMessage(content='The current weather in Los Angeles is 54 degrees Celsius.', response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 190, 'total_tokens': 203, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-641fc4f0-fc34-4159-b98a-eb5091ecfcdd-0')]}\n\n{'messages': [AIMessage(content='The current weather in Los Angeles is 54 degrees Celsius.', response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 190, 'total_tokens': 203, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-641fc4f0-fc34-4159-b98a-eb5091ecfcdd-0')]}\n--> llm\n```", | |
| "metadata": {} | |
| }, | |
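| { | |
| "id": "8b2d3f41-0c5e-4a7b-9f32-4d6e8a0b1c21", | |
| "cell_type": "markdown", | |
| "source": "The two cells above (building the `ToolMessage` and calling `update_state(..., as_node=\"action\")`) can be folded into a small convenience function if you inject manual tool results often. The helper below is hypothetical and not part of the lesson code; it assumes the `abot` agent from above and a snapshot whose next step is the `action` node.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "8b2d3f41-0c5e-4a7b-9f32-4d6e8a0b1c22", | |
| "cell_type": "code", | |
| "source": "# Hypothetical helper (a sketch, not part of the lesson code): answer a pending\n# tool call by hand and resume the graph from that point.\n# Assumes 'agent' has a compiled .graph and 'snapshot' is a StateSnapshot\n# whose next step is the 'action' node.\nfrom langchain_core.messages import ToolMessage\n\ndef inject_tool_result(agent, snapshot, content):\n    # Reuse the id of the pending tool call so the LLM can match the result.\n    call = snapshot.values['messages'][-1].tool_calls[0]\n    update = {\"messages\": [ToolMessage(tool_call_id=call['id'],\n                                       name=call['name'],\n                                       content=content)]}\n    # Pretend the 'action' node produced this message, then resume the run.\n    new_config = agent.graph.update_state(snapshot.config, update,\n                                          as_node=\"action\")\n    return agent.graph.stream(None, new_config)\n\n# Example usage (the value is made up, like the '54 degree celcius' cell above):\n# for event in inject_tool_result(abot, to_replay, \"21 degrees Celsius, sunny\"):\n#     print(event)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |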
| { | |
| "id": "05b698dd-1fe9-4881-a902-b459f357efc8", | |
| "cell_type": "markdown", | |
| "source": "# Extra Practice", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "dd294405-3224-46dd-9d6f-664d80fe00be", | |
| "cell_type": "markdown", | |
| "source": "## Build a small graph\nThis is a small simple graph you can tinker with if you want more insight into controlling state memory.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "6d83eb9e-8c9e-475f-9d10-c8d72c160640", | |
| "cell_type": "code", | |
| "source": "# Loads environment variables from a file called '.env'. \n# This function does not return data directly, \n# but loads the variables into the runtime environment.\nfrom dotenv import load_dotenv\n\n# load environment variables from a '.env' file into the \n# current directory or process's environment\n# This is our OpenAI API key\n_ = load_dotenv()", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "3c68f872-8b0c-478e-af0f-90f1cbe343ad", | |
| "cell_type": "code", | |
| "source": "# 'StateGraph' and 'END' are used to construct graphs. \n# 'StateGraph' allows nodes to communicate, by reading and writing to a common state. \n# The 'END' node is used to signal the completion of a graph, \n# ensuring that cycles eventually conclude.\nfrom langgraph.graph import StateGraph, END\n\n# The typing module in Python, which includes 'TypedDict' and 'Annotated', \n# provides tools for creating advanced type annotations. \n# 'TypedDict allows you to define {dictionaries}={messages} with specific types for each 'key',\n# while 'Annotated' ADDS new data or messages values to LangChain types.\n# 'TypedDict' and 'Annotated' are used to construct the class AgentState()\nfrom typing import TypedDict, Annotated\n\n# 'operator' module provides efficient functions that correspond to the \n# language's intrinsic operators. It offers functions for mathematical, logical, relational, \n# bitwise, and other operations. For example, operator.add(x, y) is equivalent to x + y.\n# It's useful for situations where you need to treat 'operators' as 'functions()'.\n# 'operator' is used to construct the class AgentState()\nimport operator\n\n# 'SqliteSaver()' class in LangGraph is used for saving checkpoints \n# in a SQLite database.\nfrom langgraph.checkpoint.sqlite import SqliteSaver", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "3c178e00-6c82-4b04-b901-94c25ebfc75b", | |
| "cell_type": "markdown", | |
| "source": "Define a simple 2 node graph with the following state:\n-`lnode`: last node\n-`scratch`: a scratchpad location\n-`count` : a counter that is incremented each step", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "56d3124f-da55-4dc4-b250-cf34dbc7b7a8", | |
| "cell_type": "code", | |
| "source": "# More Complex Agent State\nclass AgentState(TypedDict):\n \n # Last node /Último nodo/ returns a 'string'\n lnode: str\n \n # Node of a scratchpad location /Nodo de Ubicación de un Blog de Notas/\n # returns a 'string'\n scratch: str\n \n # Annotated accumulated integer -> int = int + new int, \n # that will be ADDED overtime with ’operator.add‘, \n # Returns a dict {’key‘: value} -> {’count’ : accumulated integer} \n # {key:value} -> {count : accumulated integer -> int = int + new int}\n count: Annotated[int, operator.add]", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
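| { | |
| "id": "9c3e4a52-1d6f-4b8c-a043-5e7f9b1c2d31", | |
| "cell_type": "markdown", | |
| "source": "Before wiring up the nodes, it may help to see what `Annotated[int, operator.add]` does: when a node returns a value for `count`, LangGraph combines it with the existing value using the reducer instead of overwriting it. The cell below is only a stand-alone illustration of that combining behaviour in plain Python; it is not how LangGraph invokes the reducer internally.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "9c3e4a52-1d6f-4b8c-a043-5e7f9b1c2d32", | |
| "cell_type": "code", | |
| "source": "import operator\n\n# The reducer attached to 'count' in AgentState is plain integer addition,\n# so each node's returned {\"count\": 1} is ADDED to the running total\n# rather than replacing it.\nrunning_count = 0\nfor node_return in [{\"count\": 1}, {\"count\": 1}, {\"count\": 1}]:\n    running_count = operator.add(running_count, node_return[\"count\"])\n\n# Three returns of {\"count\": 1} accumulate to 3\nprint(running_count)\n\n# By contrast, 'lnode' and 'scratch' have no reducer,\n# so a new value simply overwrites the old one.", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |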
| { | |
| "id": "6d1079d5-417a-4ca1-87e5-c98ddcc183d4", | |
| "cell_type": "code", | |
| "source": "# Create ‘node2()’ function \n# Inputs state = dict { } returned from ‘AgentState()’ class \ndef node1(state: AgentState):\n \n # Print out -> node1, count: accumulated integer \n # returned when execute ‘AgentState()’ class \n print(f\"node1, count:{state['count']}\")\n \n # Return a dict = { ’lnode’:'node_1', ’count’:1 }. At 'AgenState' class,\n # assign to ’lnode’ -> value 'node_1' and Assign to ’count’ -> value 1\n return {\"lnode\": \"node_1\",\n \"count\": 1}\n\n# Create ‘node2()’ function \n# Inputs state = dict { } returned from ‘AgentState()’ class \ndef node2(state: AgentState):\n \n # Print out -> node2, count: accumulated integer\n # returned when execute ‘AgentState()’ class \n print(f\"node2, count:{state['count']}\")\n \n # Return a dict = { ’lnode’:'node_2', ’count’:1 }. At 'AgenState' class,\n # assign to ’lnode’ -> value 'node_2' and Assign to ’count’ -> value 1\n return {\"lnode\": \"node_2\",\n \"count\": 1}", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "9d59b226-b0b7-4a60-909f-152e76646f0e", | |
| "cell_type": "markdown", | |
| "source": "The graph goes N1->N2->N1... but breaks after count reaches 3.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "cdea2fa3-a3d7-4100-9acd-c3decfbaf737", | |
| "cell_type": "code", | |
| "source": "# 'graph' goes node1 -> node2 -> node1... but breaks, \n# when state[‘count’] = count reaches 3 (count=3), \n# count=0 < 3? (True so continue to \"Node1\" executing 'node1()') \n# count = 1 < 3? (True so continue to \"Node1\" executing 'node1()')\n# count = 2 < 3? (True so continue to \"Node1\" executing 'node1()')\n# count = 3 < 3? (False so END)\ndef should_continue(state):\n return state[\"count\"] < 3", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "64c16c6b-76f3-4c3a-b511-d57e2d72e278", | |
| "cell_type": "code", | |
| "source": "### Start creating the graph (obj) ###\n# 1st initialize the 'StateGraph' with the 'AgentState' class as input\n# without any nodes or edges attached to it. \n# builder -> 'graph' Agent of states\nbuilder = StateGraph(AgentState)\n\n# Add 'node1()' function passing in its name, \n# being the 1st node of the 'graph' for Agent states. \n# 'graph -> builder' and name / call this 1st node as -> \"Node1\"\nbuilder.add_node(\"Node1\", node1)\n\n# Add 'node2()' function passing in its name, \n# being the 2nd node of the 'graph' for Agent states. \n# 'graph -> builder' and name / call this 2nd node as -> \"Node2\"\nbuilder.add_node(\"Node2\", node2)\n\n# Add a regular edge 1st arg: Start of edge (->) 2nd arg: End of edge \n# From 'Node1' node (->) to 'Node2' node\nbuilder.add_edge(\"Node1\", \"Node2\")\n\n# Add 'should_continue()' function to be executed \n# as our conditional edge <-/\\->\n# Conditional Edge <-/\\-> Input is -> \"Node2\" output \n# {\"lnode\": \"node_2\", \"count\": 1} response\n# Question -> is count=state[\"count\"] < 3?\n# We'll use a {Dictionary} to MAP the response of the 'should_continue()' \n# function to the next node to go to.\n# if 'should_continue()' returns True -> Executes \"Node1\" node after that, \n# if 'should_continue()' returns False -> Goes to 'END' node and it finishes \nbuilder.add_conditional_edges(\"Node2\", \n should_continue, \n {True: \"Node1\", False: END})\n\n# Set the entry point of the 'graph -> builder' as \"Node1\" node\nbuilder.set_entry_point(\"Node1\")", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "428f09f7-18be-4a25-972a-352f03ea1a14", | |
| "cell_type": "code", | |
| "source": "# Create a 'SqliteSaver()' instance (obj) that saves data in memory, rather than to a file on disk. \n# The \":memory:\" parameter specifies that the built-in (under the hood) Sync SQLite database will be \n# created and maintained entirely in system RAM -> checkpoint (obj). \n# If we refresh the notebook, this saved SQLite database will disappear.\nmemory = SqliteSaver.from_conn_string(\":memory:\")\n\n# ‘obj.compile()' the ’graph -> builder’ so updates / overwrite this \n# as a NEW compiled graph (obj). Do this after we've done all the setups, \n# and we'll turn it into a LangChain runnable/executable \n# A LangChain runnable exposes a standard interface for calling \n# and invoking this graph (obj). \n# ADD checkpointer = memory for saving data using Sync or Async SQLite database \n# (short term memory in notebook).\ngraph = builder.compile(checkpointer=memory)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "628fd647-2e3c-4351-a52d-3475aa30ebdd", | |
| "cell_type": "markdown", | |
| "source": "### Run it!\nNow, set the thread and run!", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "250c56c9-5b56-4bb9-8f5f-74a1e938464d", | |
| "cell_type": "code", | |
| "source": "# thread config /hilo configurable/ used to keep track \n# of different threads /hilos/ inside the persistent \n# checkpointer/checkpoint. Used for having MULTIPLE CONVERSATIONS \n# with MANY 'users’ going on at the same time. \n# dict {} <- with inner dict {} as value, of \"configurable\" key\n# str(1) = '1'\nthread = {\"configurable\": {\"thread_id\": str(1)}}\n\n# Init ’count' key value=0 for returned dict at 'StateAgent' class, \n# call 'graph(obj).invoke(dict)' -> with dict={\"count\":0, \"scratch\":'hi'} \n# thread = {\"configurable\": {\"thread_id\": str(1)}}\n# and get back a 'graph' state response.\nresponse = graph.invoke({\"count\":0, \"scratch\":\"hi\"},thread)\n\n# Display 'graph' state response\nresponse", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "d55f3a2d-21d1-4455-bcb5-b0a5d5b800d6", | |
| "cell_type": "markdown", | |
| "source": "```\nnode1, count:0\nnode2, count:1\nnode1, count:2\nnode2, count:3\n{'lnode': 'node_2', 'scratch': 'hi', 'count': 4}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "11aebf40-d6c1-4e68-8740-be31b26a6af8", | |
| "cell_type": "markdown", | |
| "source": "### Look at current state", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "3afb8fcc-ebd5-470a-8413-bcb901b26298", | |
| "cell_type": "markdown", | |
| "source": "Get the current state. Note the `values` which are the AgentState. Note the `config` and the `thread_ts`. You will be using those to refer to snapshots below.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "7f1176d4-9d7d-49a3-9362-c0a36e458acd", | |
| "cell_type": "code", | |
| "source": "# Get CURRENT STATE / StateSnapshot of 'graph' for -> \n# thread = {\"configurable\": {\"thread_id\": '1'}} \ncurrent_state = graph.get_state(thread)\n\n# Display CURRENT STATE / StateSnapshot \ncurrent_state", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "36c63d8c-38f6-49ef-8cfd-a73c6cd99cad", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hi', 'count': 4}, next=(), config={'configurable': {'thread_id': '1', 'thread_ts': '1f071929-8ef8-6e5e-8004-01f70daae739'}}, metadata={'source': 'loop', 'step': 4, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-05T00:24:49.348551+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f071929-8ef5-6a00-8003-43e89416ebe7'}})\n```", | |
| "metadata": {} | |
| }, | |
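| { | |
| "id": "0d4f5b63-2e70-4c9d-b154-6f80ac2d3e41", | |
| "cell_type": "markdown", | |
| "source": "Since the note above points at specific pieces of the snapshot, here is a small sketch that pulls just those fields out of `current_state` (the variable from the previous cell) instead of printing the whole object.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "0d4f5b63-2e70-4c9d-b154-6f80ac2d3e42", | |
| "cell_type": "code", | |
| "source": "# Pick the individual fields highlighted above out of the StateSnapshot.\n# Assumes 'current_state' was created in the previous cell.\nprint(\"values    :\", current_state.values)     # the AgentState dict\nprint(\"next      :\", current_state.next)       # empty tuple when the run finished\nprint(\"thread_ts :\", current_state.config['configurable']['thread_ts'])\nprint(\"step      :\", current_state.metadata['step'])\nprint(\"parent_ts :\", current_state.parent_config['configurable']['thread_ts'])", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |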
| { | |
| "id": "f646dfaf-790e-41ac-8e64-efe705274a77", | |
| "cell_type": "markdown", | |
| "source": "```\nView all the statesnapshots in memory. You can use the displayed `count` agentstate variable to help track what you see. Notice the most recent snapshots are returned by the iterator first. Also note that there is a handy `step` variable in the metadata that counts the number of steps in the graph execution. This is a bit detailed - but you can also notice that the *parent_config* is the *config* of the previous node. At initial startup, additional states are inserted into memory to create a parent. This is something to check when you branch or *time travel* below.\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "5b929d07-1d2c-42b9-b4a8-5bdaadff2782", | |
| "cell_type": "markdown", | |
| "source": "### Look at state history", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "d21236f4-77a3-4967-970e-80fe8c878500", | |
| "cell_type": "code", | |
| "source": "# Access the whole history of ‘graph’ States \n# It returns an iterator including ALL the states / StateSnapshots \n# for 'thread config' value -> {\"thread_id\": \"1\"}\nstates_history = graph.get_state_history(thread)\n\n# Extract each ‘graph’ state / StateSnapshot per iteration\nfor state in states_history:\n \n # Print each state / StateSnapshot per iteration, \n # and separates by an enter “\\n” character, from the next one\n print(state, \"\\n\")", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "786b09ab-992d-42b0-b052-9669e121f259", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hi', 'count': 4}, next=(), config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45c7-6343-8004-3d0100523b26'}}, metadata={'source': 'loop', 'step': 4, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-05T01:42:59.294065+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45c3-63e1-8003-29324ce315b6'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hi', 'count': 3}, next=('Node2',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45c3-63e1-8003-29324ce315b6'}}, metadata={'source': 'loop', 'step': 3, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-05T01:42:59.292444+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45c0-662d-8002-54e30c824b4c'}}) \n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hi', 'count': 2}, next=('Node1',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45c0-662d-8002-54e30c824b4c'}}, metadata={'source': 'loop', 'step': 2, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-05T01:42:59.291273+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45ba-63d6-8001-f15f1282a924'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hi', 'count': 1}, next=('Node2',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45ba-63d6-8001-f15f1282a924'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-05T01:42:59.288765+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45b0-6b2f-8000-9e3accf8d9c0'}}) \n\nStateSnapshot(values={'scratch': 'hi', 'count': 0}, next=('Node1',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45b0-6b2f-8000-9e3accf8d9c0'}}, metadata={'source': 'loop', 'step': 0, 'writes': None}, created_at='2025-08-05T01:42:59.284857+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45ad-66d5-bfff-367a1aee8d9b'}}) \n\nStateSnapshot(values={'count': 0}, next=('__start__',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f0719d8-45ad-66d5-bfff-367a1aee8d9b'}}, metadata={'source': 'input', 'step': -1, 'writes': {'count': 0, 'scratch': 'hi'}}, created_at='2025-08-05T01:42:59.283515+00:00', parent_config=None)\n```", | |
| "metadata": {} | |
| }, | |
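| { | |
| "id": "1e506c74-3f81-4dae-8265-7091bd3e4f51", | |
| "cell_type": "markdown", | |
| "source": "The full snapshots above are verbose. A more compact way to see the parent chain mentioned earlier is to print only each snapshot's step, count, own `thread_ts`, and its parent's `thread_ts`, as sketched below (it assumes the `graph` and `thread` from the cells above).", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "1e506c74-3f81-4dae-8265-7091bd3e4f52", | |
| "cell_type": "code", | |
| "source": "# Compact view of the snapshot chain: step, count, own thread_ts, parent thread_ts.\n# Most recent snapshots come first, exactly as in the full printout above.\nfor state in graph.get_state_history(thread):\n    own_ts = state.config['configurable']['thread_ts']\n    parent_ts = (state.parent_config['configurable']['thread_ts']\n                 if state.parent_config else None)\n    print(state.metadata['step'], state.values['count'], own_ts, '<-', parent_ts)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |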
| { | |
| "id": "e127252b-fe0f-4669-8b8d-2f373c3a9889", | |
| "cell_type": "markdown", | |
| "source": "Store just the `config` into an list. Note the sequence of counts on the right. `get_state_history` returns the most recent snapshots first.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "fe9e1280-1d6a-4027-9ca1-1147fac98d44", | |
| "cell_type": "code", | |
| "source": "# Init states as empty list [] \nstates = []\n\n# Access the whole history of ‘graph’ States \n# It returns an iterator including ALL the states / StateSnapshots\n# for 'thread config' value -> {\"thread_id\": \"1\"}\nstates_history = graph.get_state_history(thread)\n\n# Extract each ‘graph’ state / StateSnapshot per iteration\nfor state in states_history:\n \n # Append each ‘graph’ state / StateSnapshot thread config={} \n # into 'states' list []\n states.append(state.config)\n \n # Display config = { ’configurable’ : {’thread_id’:’1’, 'thread_ts':'...'} }, \n # values = {’lnode’ : ’node_No’, ‘scratch‘ : ‘hi‘, ‘count‘ : int_number}\n print(state.config, state.values['count'])", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "d4eefe7e-f7e2-4b80-94dc-49330440c2c7", | |
| "cell_type": "markdown", | |
| "source": "```\n{'configurable': {'thread_id': '1', 'thread_ts': '1f071a31-3dde-6246-8004-1012d46ff307'}} 4\n{'configurable': {'thread_id': '1', 'thread_ts': '1f071a31-3dda-6caa-8003-2c7feac3bc82'}} 3\n{'configurable': {'thread_id': '1', 'thread_ts': '1f071a31-3dd8-6705-8002-26f0bdc7ee93'}} 2\n{'configurable': {'thread_id': '1', 'thread_ts': '1f071a31-3dd4-68ed-8001-b3337a120c02'}} 1\n{'configurable': {'thread_id': '1', 'thread_ts': '1f071a31-3dcf-606f-8000-bedbeb4bd24a'}} 0\n{'configurable': {'thread_id': '1', 'thread_ts': '1f071a31-3dcb-6cd2-bfff-5f24a523ffe9'}} 0\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "76336b81-ae3a-4da5-a7b7-9ad223fa4f39", | |
| "cell_type": "markdown", | |
| "source": "Grab / Pick an early state.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "03e34b32-2c42-4d7b-86a1-2dc8083b5c8d", | |
| "cell_type": "code", | |
| "source": "# states[5]=states[-1] ->{'count': 0}\n# states[4]=states[-2] ->{'scratch': 'hi', 'count': 0} '__start__'\n# states[3]=states[-3] ->{'lnode': 'node_1', 'scratch': 'hi', 'count': 1}\n# states[2]=states[-4] ->{'lnode': 'node_2', 'scratch': 'hi', 'count': 2}\n# states[1]=states[-5] ->{'lnode': 'node_1', 'scratch': 'hi', 'count': 3}\n# states[0]=states[-6] ->{'lnode': 'node_2', 'scratch': 'hi', 'count': 4}\n# Pick 'thread config' for states[-3]=states[3]\nstates[-3]", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "7c67caa3-e6b4-4d59-94c4-aea173be3c15", | |
| "cell_type": "markdown", | |
| "source": "```\n{'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86ac-627c-8001-f522e26b14e3'}}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "037609d2-a525-4ed0-92be-0877764c6c10", | |
| "cell_type": "markdown", | |
| "source": "This is the state after Node1 completed for the first time. Note `next` is `Node2`and `count` is 1.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "06011197-ecb7-49e3-b05d-08b179d1506f", | |
| "cell_type": "code", | |
| "source": "# Pass in selected 'thread config' = states[-3] = {'configurable': \n# {'thread_id': '1','thread_ts': '1f07221d-86ac-627c-8001-f522e26b14e3'}}\n# and GET BACK into the 'step’:1 as NEW CURRENT STATE / StateSnapshot \n# based on this input, via graph(obj).get_state()\nnew_current_state = graph.get_state(states[-3])\n\n# Display NEW CURRENT STATE / StateSnapshot based on states[-3] \n# input with selected 'thread config' \nnew_current_state", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "e6ba61f8-0cf3-4351-a216-f5b849804940", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hi', 'count': 1}, next=('Node2',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86ac-627c-8001-f522e26b14e3'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-05T17:30:13.884763+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86a9-6009-8000-500c4e01e00e'}})\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "e6a5b217-be93-4a4e-839e-bb46f0553f6c", | |
| "cell_type": "markdown", | |
| "source": "### Go Back in Time\nUse that state in `invoke` to go back in time. Notice it uses states[-3] as *current_state* and continues to node2,", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "37e7bd2c-f8de-4a54-a074-1f3f785dafc3", | |
| "cell_type": "code", | |
| "source": "# Pass in selected 'thread config' at states[-3] = {'configurable': \n# {'thread_id': '1','thread_ts': '1f07221d-86ac-627c-8001-f522e26b14e3'}}\n# and get back a new 'graph' states response.\nresponse = graph.invoke(None, states[-3])\n\n# Display 'graph' new states response. \nresponse", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "d4c805f5-ab90-4aa1-aa21-e423ccaa266e", | |
| "cell_type": "markdown", | |
| "source": "```\nnode2, count:1\nnode1, count:2\nnode2, count:3\n{'lnode': 'node_2', 'scratch': 'hi', 'count': 4}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "cc3eacb1-6887-4123-a3c7-f0939b10972e", | |
| "cell_type": "markdown", | |
| "source": "Notice the new states are now in state history. Notice the counts on the far right.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "25d0ca7b-acde-4e58-a679-d39743915335", | |
| "cell_type": "code", | |
| "source": "# 'thread config' value -> {\"thread_id\": \"1\"}\nthread = {\"configurable\": {\"thread_id\": str(1)}}\n\n# Access the whole history of ‘graph’ States \n# It returns an iterator including ALL the states / StateSnapshots \n# for 'thread config' value -> {\"thread_id\": \"1\"} \nstates_history = graph.get_state_history(thread)\n\n# Extract each ‘graph’ state / StateSnapshot per iteration\nfor state in states_history:\n \n # Display config = { ’configurable’ : {’thread_id’:’1’, 'thread_ts':'...'} }, \n # values = {’lnode’ : ’node_No’, ‘scratch‘ : ‘hi‘, ‘count‘ : int_number} \n print(state.config, state.values['count'])", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "16b218c9-ecb3-463e-b98e-d50783371bce", | |
| "cell_type": "markdown", | |
| "source": "```\n{'configurable': {'thread_id': '1', 'thread_ts': '1f073147-bc2b-64f4-8004-ad4333a59b3f'}} 4\n{'configurable': {'thread_id': '1', 'thread_ts': '1f073147-bc28-632d-8003-ae05ad03896d'}} 3\n{'configurable': {'thread_id': '1', 'thread_ts': '1f073147-bc24-62d0-8002-bc73af997dfb'}} 2\n{'configurable': {'thread_id': '1', 'thread_ts': '1f07312c-d0d4-656e-8004-5ed5c0ed292d'}} 4\n{'configurable': {'thread_id': '1', 'thread_ts': '1f07312c-d0d0-6efa-8003-8090f031fc8f'}} 3\n{'configurable': {'thread_id': '1', 'thread_ts': '1f07312c-d0ce-65d4-8002-23470adfee96'}} 2\n{'configurable': {'thread_id': '1', 'thread_ts': '1f07312c-d0c7-6e67-8001-7e88c9619de7'}} 1\n{'configurable': {'thread_id': '1', 'thread_ts': '1f07312c-d0ba-6346-8000-a1350d38a4e4'}} 0\n{'configurable': {'thread_id': '1', 'thread_ts': '1f07312c-d0b6-6d1c-bfff-954fec719ef9'}} 0\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "6907cbab-a642-4cc0-988c-b78eb5e89ce2", | |
| "cell_type": "markdown", | |
| "source": "You can see the details below. Lots of text, but try to find the node that start the new branch. Notice the parent *config* is not the previous entry in the stack, but is the entry from state[-3].", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "fc4b24d6-4866-48fd-ba11-2df9f9127b2e", | |
| "cell_type": "code", | |
| "source": "# 'thread config' value -> {\"thread_id\": \"1\"} \nthread = {\"configurable\": {\"thread_id\": str(1)}}\n\n# Access the whole history of ‘graph’ States \n# It returns an iterator including ALL the states / StateSnapshots \n# for 'thread config' value -> {\"thread_id\": \"1\"} \nstates_history = graph.get_state_history(thread)\n\n# Extract each ‘graph’ state / StateSnapshot per iteration \nfor state in states_history:\n\n # Print each state / StateSnapshot per iteration, \n # and separates by an enter “\\n” character, from the next one\n print(state, \"\\n\")", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "ba87c226-2741-49d0-892c-3343a66cc0c1", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hi', 'count': 4}, next=(), config={'configurable': {'thread_id': '1', 'thread_ts': '1f0722a6-b765-6ce8-8004-cffce1a15ff5'}}, metadata={'source': 'loop', 'step': 4, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-05T18:31:36.559720+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f0722a6-b762-6f46-8003-398274b1fd08'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hi', 'count': 3}, next=('Node2',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f0722a6-b762-6f46-8003-398274b1fd08'}}, metadata={'source': 'loop', 'step': 3, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-05T18:31:36.558561+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f0722a6-b760-67f4-8002-096efec8836a'}}) \n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hi', 'count': 2}, next=('Node1',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f0722a6-b760-67f4-8002-096efec8836a'}}, metadata={'source': 'loop', 'step': 2, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-05T18:31:36.557551+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86ac-627c-8001-f522e26b14e3'}}) \n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hi', 'count': 4}, next=(), config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86b4-68fe-8004-fc2375fabe93'}}, metadata={'source': 'loop', 'step': 4, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-05T17:30:13.888218+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86b1-6f7a-8003-289c7f0e5866'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hi', 'count': 3}, next=('Node2',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86b1-6f7a-8003-289c7f0e5866'}}, metadata={'source': 'loop', 'step': 3, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-05T17:30:13.887158+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86af-6fe4-8002-afc29489bb82'}}) \n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hi', 'count': 2}, next=('Node1',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86af-6fe4-8002-afc29489bb82'}}, metadata={'source': 'loop', 'step': 2, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-05T17:30:13.886342+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86ac-627c-8001-f522e26b14e3'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hi', 'count': 1}, next=('Node2',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86ac-627c-8001-f522e26b14e3'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-05T17:30:13.884763+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86a9-6009-8000-500c4e01e00e'}}) \n\nStateSnapshot(values={'scratch': 'hi', 'count': 0}, next=('Node1',), config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86a9-6009-8000-500c4e01e00e'}}, metadata={'source': 'loop', 'step': 0, 'writes': None}, created_at='2025-08-05T17:30:13.883485+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1f07221d-86a6-6df2-bfff-a998761091ae'}}) \n\nStateSnapshot(values={'count': 0}, next=('__start__',), config={'configurable': {'thread_id': '1', 'thread_ts': 
'1f07221d-86a6-6df2-bfff-a998761091ae'}}, metadata={'source': 'input', 'step': -1, 'writes': {'count': 0, 'scratch': 'hi'}}, created_at='2025-08-05T17:30:13.882612+00:00', parent_config=None)\n```", | |
| "metadata": {} | |
| }, | |
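| { | |
| "id": "2f617d85-4092-4ebf-9376-81a2ce4f5a61", | |
| "cell_type": "markdown", | |
| "source": "Rather than scanning the printout above by eye, the sketch below walks the history and flags any snapshot whose `parent_config` is *not* the next (older) entry in the list; that snapshot is where the new branch was attached, and its parent should be the `states[-3]` config.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "2f617d85-4092-4ebf-9376-81a2ce4f5a62", | |
| "cell_type": "code", | |
| "source": "# Locate the snapshot that starts the new branch: its parent_config points at\n# an older snapshot that is NOT the immediately previous entry in the history.\nhistory = list(graph.get_state_history(thread))   # most recent first\n\nfor newer, older in zip(history, history[1:]):\n    if newer.parent_config is None:\n        continue\n    parent_ts = newer.parent_config['configurable']['thread_ts']\n    if parent_ts != older.config['configurable']['thread_ts']:\n        print(\"branch starts at step\", newer.metadata['step'],\n              \"with count\", newer.values['count'])\n        print(\"its parent thread_ts is\", parent_ts)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |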
| { | |
| "id": "c6ad3450-e21e-4ea9-aae6-9c7f10739c8e", | |
| "cell_type": "markdown", | |
| "source": "### Modify State\nLet's start by starting a fresh thread and running to clean out history.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "e3439f76-de93-4d81-ba2c-1ffb2a886355", | |
| "cell_type": "code", | |
| "source": "# thread config /hilo configurable/ used to keep track \n# of different threads /hilos/ inside the persistent \n# checkpointer/checkpoint. Used for having MULTIPLE CONVERSATIONS \n# with MANY 'users’ going on at the same time. \n# dict {} <- with inner dict {} as value, of \"configurable\" key\n# str(2) = '2'\nthread2 = {\"configurable\": {\"thread_id\": str(2)}}\n\n# Init ’count' key value=0 for returned dict at 'StateAgent' class, \n# call 'graph(obj).invoke(dict)' -> with dict={\"count\":0, \"scratch\":'hi'} \n# thread2 = {\"configurable\": {\"thread_id\": str(2)}}\n# and get back a 'graph' state response.\nresponse2 = graph.invoke({\"count\":0, \"scratch\":\"hi\"},thread2)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "19b9fba8-4987-4a0d-a3f3-94888df328c2", | |
| "cell_type": "markdown", | |
| "source": "```\nnode1, count:0\nnode2, count:1\nnode1, count:2\nnode2, count:3\n{'lnode': 'node_2', 'scratch': 'hi', 'count': 4}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "52623a6c-13b8-43d2-adc9-d2089635151d", | |
| "cell_type": "code", | |
| "source": "# Get 'Image' function for plotting a png image for 'graph'\nfrom IPython.display import Image \n\n# Let's apply '.get_graph().draw_png()' over 'graph (obj)'\n# for plotting png 'Image' for this 'graph (obj)'\nImage(graph.get_graph().draw_png())", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "4d577b62-198d-4b6d-95e7-ee0e9fa77ed7", | |
| "cell_type": "markdown", | |
| "source": "```\n__start__ -> \"Node1\" -> \"Node2\" -> __end__\n | <- <- <- |\n True\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "c44f3cfc-ea36-45fb-acad-5b2513e3503a", | |
| "cell_type": "code", | |
| "source": "\nfrom PIL import Image\nimg = Image.open(\"Extra_Practice_png_Image.png\")\nimg", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [ | |
| { | |
| "execution_count": 11, | |
| "output_type": "execute_result", | |
| "data": { | |
| "image/png": "iVBORw0KGgoAAAANSUhEUgAAAKQAAAH/CAYAAADZi3bBAABQHUlEQVR4Ae1dB3gUVdd+k91NICRAgACh9957VboUKYKAFFEU9VNsnwqKBbCABXvHX1H4FBWUolKkSO9Ik4703ntLm/+cu8xms9n02d3Z3XPzTKbduXPnve/eekqIRgESBAGTIBBqknxINgQBhYAQUohgKgSEkKYqDsmMVSBwj8D169dx9epVXLp0CZcvX0ZCQgKuXLmC+Ph4xwOJiYnqvuMCHYSFhSFPnjzOl5A3b15YLBZERUWpexEREYiOjk4RR07sCAQFIS9cuIDDhw/jyJEjOHPmDM6ePav2p06dwunTp3H6DJ+fVsS7dvUaLl68AG+M9XITMXPnjiDCRqFAgQIoHFMYMTGFULBgQRQqVIiOY9RWuHBhlCpVCrGxsbBaA7vIQgJhlM212O7du9W2d+9eRb5Dhw7j4KGDOEz7q1evOCogW1g48lHh540ugKjogrTZj/k8V55I5CKS5InKi7DcuRGeKzci6Dh3RB5YiAi28FwIy5XLkRYfRObNl+I87sYNxMXdTL5GkxhXL11U59eopo27eQM3qfa9cukC4m5cV8fXqOa9fOE8Lp8/q/ZXzp/DJdounD2Dq5cvOdLiWrZI0aIoXbo0ShNBS5YsqY4rVaqEihUrKtKGhvp3L8yvCMk12qZNm7B582ZFvp27dmPXrp04Tdc52GxhKFKiJArGFkOBIsVQqFhxFCpKezrnfYGisYiIjHIUsD8cJCbE4wLV6qePHcHZE8dx5sRRnD1+DGeOH8U5Oj959LAiL39LOP1YKlSogMqVK6MSEbRatWqoW7cuqlSp4jc1q2kJeejQIaxdu1YRcMOGjdi4aSNOHD+uOBRDBCtWpjyKlilHe9rKllf7mOIlqK8W2E2aux8R167HD+7Dsf17aaP9gX04eWg/Dv+7h/q8cYqo1atXR/169RRBmaT16Jj7u2YLpiAkDw527tyJFStWYPny5Vi6bBkOHjiAUG6iiGTFy1VCuRq1UL56LVSoVQf5C8aYDUdT5icxMUGRdO/WLTiydzeO/Lsbe7ZsVF0Bm82GmjVroUWL5rS1QOvWrVW/1dcf4jNC7tu3D3PnzsWcOXOxZMkSGlBcUv2xynUbgLcq9RuhQs06qh/na5AC7f1cg+7auA47/l6LPZv+xuG9exASEoKq1MR37tQJHTt2VCT1RQ3qNULytMlff/2F2bNnYzaRcM/uXdSfi0Stpi1Rq9ltioClKlRGiJ93yv2RvDyA2rVxPbatXYlNyxYpguahAV6bNm3QuXMndO3aFcWLF/fKp3mUkElJSVi5ciWmTp2KH3/6SQ0+YkuWRr1W7dCgdQdUbdAYNhP2Y7yCvIlfcooGSptXLME/K5ep/TWapWjSpCn69OmNvn37oiiN9D0VPELI7du3Y/z48fj55yk4efIEylergaYdu6F5524oXKKUp75F0vUAAvE3b2Ij1Zor5szE+kXzweetW7fBAw8MRq9evQwfGBlGSG6SZ8yYgc8++0z1CWNLlcFt3XoRCburUbAHsJIkvYzATZo3Xf/XfKyYNQPrlyygCfxCePihIXjkkUdQokQJQ3KTY0LyEtsXX3yBd997DydPnED929vijv73o07z26U/aEgRmTMRnhOd9/P/8Ncvk9U8aI+77sKokSNRo0aNHGU424S8SVX3V199hbFj38QFWmrrcM996DTgfmmSc1Qc/vdwAq3tr543GzO//gwHdm1H7z59MHrUKDUZn52vyRYhf/31Vzz19H9pHfgUEXEQ7nr4cZkbzA76AfQMr/2vnjcLUz99D0f2/YvBgwfjnXfeybIQSZYIeYKa5KGPP47p06ahTc++uOfJ4bRE57kRVwCVV9B8ikYzK8uoj/m/ca/DRlN4n3/2KXr27Jnp7880IblWHDLkIeQmUapHXh2Hmk1bZPolEjH4ELhCAiUT3x6NRdOm0Gj8bkyY8I0Sv8sIiUyJhowdOxa9e/dGk07d8N5vfwUdGT8a9jh6VSmG9YvnZ4Rnju576z05ymQmH2YpqKFjPsDIb37CwsWL0ZyWJ1kEMKOQLiF5Kue+++7DSOqkPjRyLB4a9WbQLeVdOHsaK+f+nhGOOb7vrffkOKNZTKBWs5Z4c8osXL4Zj4aNGmPjxo3pppBmk82d1PupYzqFVlmGf/INatM0TiAFXir7bcKX2Lf9H5w/fRJ58uZH6cpV0eXeB1GfVpI4jLy3F7atW5Xqs0d8ORENWrVX1/fv2IYZNMLcTWvC58+cQnShwmotvt9Tw1GEVqX08O5TD2PVn3/ASkINk9btxKcj/ouNS/8Cx1szf06G79HT8df9dVrtGffEEJzYtwfr161Nc94yzRryo48+wg8//IDnPvq/gCPj6vmz8VL/7uA9r+PGFCuB+Lgbapls7H8GYc4P36py5wEbC+7qITqmCIrShH8ukvLmsGfzBrzUrxuWUyf+IgnTsigck3LZH9Px7F0dcOLQARWP/4XfeoanSaZ+/gFWzvkNXEi8ZfQeRyJ+fJCb1saHffI1clFT3q17D/D8tbvglpD//vsvnn/hBfR9/FnUbdna3XN+fe23b75QKgrla9TGxDXb8cnc5fhu1Ta1spSPVh/WLpyr7j/97mfoQxjo4T+vv4PP5q1EjcbN1aUpn70PXr3g8ObPf+D9mQvx2qRf1Pn1K5fx+3dfqWP+Z7FaHMfzf/4e3R98DM+8/6XCN6P3OB708wMm5fOffYu9+/djxIgRbr/GrTTrSy+/DF766/HQULcP+ftFXS2AVQtY8pprPZa9fOqdT7L0aYNffE0RltUSuLnnKY9y1WopIWGWRTy0Z6fb9G7r2hODhr3s9l6gX+RuzIBnXsRnr43AU089hbJly6b45FQ1JKsJTKN5xh4PPR6w0te1SdyNAzepQzs0U9snLzyFpb9PU01oCoTSOYkmpay/lyzE16+9hP51yuPuaiXQt2ZpMBk5xMfFuX26UbuObq8Hy8XWPfsgmoSsv/7661SfnKqGZJlFaEDj9p1SRQ6UCwOffQnXr13Fkhm/KPIwMXlbPGMqDW7y4tkPxmfYb+ba8NXBfZUENuPCA6Eylaspcbqpn3/oIKU7zLgvGsyB1UwadeiMOSSgPWbMmBRQpKohWZWgRNlyjk54itgBcsKag0PHvI9vVmxW/TgeWZepUk193VXSYHxr6GBcOnc23a/l0TerA3Bo0aUHXvxyEvr/9wX0fOQJJCUlpvssj7SDPZQjkcQdO3akgiEVIVkZPpxUQYMhROWPVjKaD7z0Ot6bsQAPvvyG+mxWZT24KzVYWhI1HbfCqaNH9EOUqlTFcfzP6hU51ul2fo8j4QA7yEWqxTcJZ57rdg6pCMnK6ax2GajhAk3L8JTPA81rYfb3E1J8ZryTPjX3DzmwbrYedm1arx+ioNMa/j+rloPVVU8ePohvxrziELu7SMYHMhvSek9mn/e3eOdJMCc/Tam5Gj5IRciGDRsqXV/WAw7EkJ8mrgsULqrmDb9542U82Lw2nr6zNR5
sUQeT3nldfTJLtpeoUEkdl61awwHD9K8+xf1NqmPu5O9QtX5jNX/JN/9ZvRwD6lfCY+2bqrg9hjym9qwKwGnv3bpZnaf3L633pPeMP9/bsX4NGhHXXEMqQrJKZEzhIlj4y4+ucQPm/L/vfY57n3sZPA/J84hH9/+rtO6qN2yqpn6eHveZ41vLUl+HB0FMZO778VxaodjiyoLFy//3gxr8sHULnixv1aM3xvwwA90feBT1SFCZn9Hoz5oJvaG03uPISAAd8CLCur/+JPmIu1N9ldulQx75jH3rLXw8ZxmCfUSYCjG5kGMEvhr9AjYvmY+9tADDhrecQ6oakm8+88wzKFqkCD594Wk12ev8gBwLAjlB4O/FC5Tqw/uk8uJKRk7XbQ3JN9atW4cWLVuiTa9+StKHr3krsEmQz156JlOv47hsSoSNC2QmDBr2SqbjZiY9T8YJNBxYEGXUoF7o1fMuTPzuO7fQpUlIjs0rNn1IR6L1XX3w8Oi3aD1W5s/coigXM0SA523fJWmfhvXrY9asPxAeHu72GbdNth6TRc9nzZqFNX/OwpiHBpJZuUv6LdkLAplGYPmsmXhjSH+0a9sGv/02M00ycoLp1pD6G9n8XZc7uyIp1IL/vD7OIe2i35e9IOAOAa7AJo17Tc3YvEDSYzxYZhtC6YVMEZITYAWv/zz6KH6bORPt7u6He4eNVOu+6SUu94IXgbUL5uLr119EKAl6s6LX3XennuJxh06mCak//Msvv5Dm4RNIpBfd9ciTaN9noJqf0+/LPrgR2L99K376+B0lBTWI1F94NM3mqjMbskxITvjcuXN47bXX8CXZ78lXoBB6/ucpUovtI4OezKIegPEO/7sLP3/yrjIaUK9+A7zz9lvKelpWPzVbhNRfwkbkuV/wzYQJtBxXRFmvaNPrnhRi/3pc2QceAqx3tWXlUvw5eSLWLZqH6mRG5Y3XX1fm+zLqK6aFRo4IqSd6gKzdfvDBB/juu4m4QdLTzUldlu37VKxVV48i+wBCgHWuF037GfN/moSjZPz0tttuJ+nvJ3EX2ffJLhF1eAwhpJ7YDRInmjJlCt57/31soZF50ZKlyAxfV5rH7EtmmSvo0WTvhwiwSN6WVcuwilSC15BynIVmXPr374ehQ4eiVq1ahn2RoYR0ztWqVavw448/krHSX2iEftxhI7Jh2ztQonxF56hybFIEWCNyMzXJTEJlG5JUMtqQbch+/e5Ro2Z2BGV08Bgh9YyyQfulS5fiJ7Kg++uv08hp0RkyZF8StVu0Uhp3bJKFJWgk+B4B7hMe2LldmXXevGIx2SBfp6Tfmzdv4SAhy8t6MnickM6ZZ3Kyqw82ds92xjf8vR6hVPVXrlOPbIw3RpV6DWmduaHMbzqD5sHjJCqPQ3t2EfHWKCP429euwtlTJ0n8sDA6keF7Nn7fvn17r3pn8CohXbFlN2/z5s1TxvCXLV+hDOFzp7gUCcdWrtcIlYioZarUQMkKFWVKyRW8bJyzhQ4WcGCB4d0k/b5rw3pcJf3xvKS837x5M7QkYZo77rhD+bLJ6eAkG9lTj/iUkK6ZZoKykfxl5Kdm+YqVZAdmg9K7YA9dpUlvpRQpYrFkNWv3cT80b4GCrknIOSHA6rfsSOnwnt1EwH+oGd6GA0TE87dUKkqRa7qWJIjdrJmdhOxUySwu6UxFSFc2sQIQa0GyOzne2FDRxo2bcJ7Mn3BgC1vFSUOyKHn1Kk7evGLJlElsqbLKtZyzCRTXdAPhnEl39sQxnDpymDx3kQcvmn45Tvvj7MWLFNC4OWZ9lcrkVq5e3bqoU6eO2tiLl5k90ZqakGkR5+jRo+TjcJfyd8j7nTt3YRf5vTl08CDpQ9tVUNnvX2Gy2cP+DaMLx4LdzrHiFnsBszve1B1wFsjx3Fla+czO9ZvXryl7Qyzmz3aHLtPGimls0/v0saM4f5L9HR5z1Hb8jmhSlqpYqSKqEvnYEae+sc/DtMS8spM3bzzjl4RMC5g4qjV4kl53RXyQCMqrSYcPH1GeYY8fO44LJMzrHLipykcFyl5h2RNs7kjSj6F9GGkbsqom68uEk2fYMPIEyyGCpjqcmzeOw55i9XCd1Iid9bJvXLsGNjBFurHk2fUi+Jz1eG6QL+7rV8k7LB3fpGtMvoukC86qoc4hjOQGeWRbqhR7gC2prIbpXmDZ8wF7hmVXxoESAoqQmSkUdsDOfVXdb7buM5vP2WH7xYsXlaP2K3R8lch1nnxtXyPC8KQ/O4K6dDGlTOglWrXg63qIIOftYdTn1UN4rnDyiW1XpeWmMoIIHElx2Ec2O3ZnMX52+M6kct50X9memOvT82bGfdAR0hOFwKYL3377bRw7dswTyQdVmulKjAcVEvKxpkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkBACGmKYpBM6AgIIXUkZG8KBISQpigGyYSOgBBSR0L2pkDAaopc+FEmdu/ejQkTJqTI8T///IOQkBC88MILKa6XL18eDz30UIprcpI+AiEahfSjyF1nBK5cuYKYmBjEx8fDanX/e2ZI4+LiMGbMGLz44ovOj8txBghIk50BQK63IyMj0bVrV4SGhuLmzZtuNyYj15j9+vVzfVzOM0BACJkBQO5uDxgwAAkJCe5uqWtMxvr166Ns2bJpxpEb7hEQQrrHJd2rnTp1AteUaQWLxYJBgwaldVuup4OAEDIdcNK6FRYWhj59+sBms7mNkpSUpO67vSkX00VACJkuPGnf7N
+/vxrYuMbg2rFVq1YoUqSI6y05zwQCQshMgOQuCpOOR9vuwr333uvuslzLBAJCyEyA5C4Kj7J5cOPabPP1u+66y90jci0TCAghMwFSWlF4WofnI/XA85JdunRBvnz59EuyzyICQsgsAuYcvVGjRihTpozjUmJiIgYOHOg4l4OsIyCEzDpmKZ7g/qLebOfOnRudO3dOcV9OsoaAEDJreKWKrTfb3He8++67waSUkH0EhJDZx049WbVqVVSvXh0898iDHAk5Q0CEK27hd/r0aRw7dgyHDx9W+6NHj+LcuXO4cOECbWdx/vwZ2p/H9evXcenSFXB/kcONG3F0LU4d85Jh/vx51DH/i4zMg7AwG6Kjo+l6QdrH0D6/2ooWLYpixYqhRIkSaouNjUV4eLjj2WA9CCpCMrl27NihNhYj27VrB3bu/AcHDhxVxNJJkC+flUhiQYECGpEngbYk2qC2PMQ33mixRgVerCEZC/z2G8DTj5cv66kAFy+CRuEgIidv58/biNyhOHFCw8mT8bQmnixsVbRoAVSuXAWVKlWnfWVUqVJF7cuVK6eEOZJTDtyjgCUkk2/9+vX4+++/aVuPDRvWYO/ew6okc+e2UEFbaYujTUOFCkDx4qAaCyhVCoiIyHqBUwVLE+VZe44r2ZMnQbUycPw46IcB+pEAu3fb6IcSQjW1veaNisqNevXqksBGEyW0wYIblSpVUhJFWXuj+WMHDCEvXbqEpUuXYvHixVi06E9s2rSN+nUaES2MCjGBtiTagBo17KSj1tX0gWvbnTtBPybQj4q3MGzdmkCylkkoXDgarVq1RevWbWnfStWmpv+gTGTQrwn577//YsaMGZg58xesXr1e9euqVw9DmzZxVEhA06YAddUCKpCoJf3YQD8+0A/PgmXLQqibkEA/vBjceWdP9OjRg0ja2m/7o35HSO77/e9//8P06VOwbdtuFCxoo4JIpBWSJEXCrDab/s5WFsukngnmzwf9MK1UmyYgKioCHTt2xj339FcrRyyd5DeBVRjMHi5fvqx98803WosWjTUayWolS9q0p5+GtmgRNCoQyr5sOgaHDkH79FNobdtatdDQEC0mJj9h9bS2ZcsWsxezyh+XpGnDgQMHtKeeekqjTr2WK5dF69s3VJs7FxoNBijPsmWEAZPz9dehVagQxkN5rVmzhtr06dMJv0TTlrkpCblp0yatX7++mtVq0UqVsmnvvw+NpgSFhNn8EdKcvWpNuncPVbVmpUpltPHjx2ukE2Q6YpqKkEeOHNEefniIZrGEarVr27SJE6FRJ16ImE0iuqtBaRyoPflkiEZTX/RjL0YYT9Rolck0xDQFIa9du6a9/PLLWkREuFauXJg2ZQoIJCGiO0IZdY2b84EDQ6lPDtU337BhgylI6XNCrlu3TqtatYJGqyPau+9Cu3FDiGgU6TKTzrp10Fq2tGo2m0V79dVXNZLv9CkxfUZI7liPHj1a9RN5RHjwoBAxMwTyRBxujT7+GNRCWbSGDetoNLXmM1L6hJDcRPfq1UMLDw9VQARi81ywINTIlgSBqHD9Y6NVIa1BA6tWoEBebcmSJT4hpdcJeerUKa1JkwYaTWhry5Z5pqD69rWTgac6SKDbbTfgzTeT49A6suGkMYqQJLSh0XKnIjd/zxdfGJ9X5x/MtWugyiKUKgub9sMPP3idlF6Vh7x69SpJVLfHqVObsXJlPFq08Pz6AQssfPSR599j9BvWroXCp1s30Pq10amnnR7LF0+ZkoShQ+PJ2MG9amk27djG3/EaIVl+cMCAe0iiZRvmzo0naRXjPyatFMeOBVgax5/C558DK1YAJCaJXr28m3MSfsd77wGPPqqhf/++WLVqldcy4DVCvk5LBvPnz8Hs2QmoWNE738eiZCyvyHKJo0dn7p0svPDBB0DDhqA1YSBXLijxtMcfB44cSZ0GryO3aWOXkaRmmgxM2UXKaDrFbTh7Fnj2WXuaLI9boABovRlYsyZldCYi52PfPtCadMp73jr78EMNbdsmkmpGd8KQQPRG8EYnYc+ePbT0Z9MIYMP7as79H/1Y70PygOKJJ+z9L9JQ1bZvT36/uz4kCYNrt92W3F8jIxQ08kw+L1QIGpmCdHwDp0cmfhz9O9J+pekT0KQ+aGBgv+48qKFaWiOTkSo+yTtopLSokQym43zhwuS09W/h/dSpye/wdB/S+b18TD8grXBhG02mP+ENqtAbvRDuvLOjVqeOjea43APuCkJOz3VCUg2pnTkDjaS9VaFTTeQgkztCjhiRXPCDB0OjLq9aN//6a6gJZKogNJKpdKRB5n0cZBw50h6XZBg1kv5yXHcm5AMP2K/zj4Pn//g7ed61XTv7dRISd6TtjIEvCcn54BUzXj0jw6yUP88GRsCj4dChQ2r9lMQW6T3e2XRCkhS4euc77yQThMS01DVXQrLAhl6r5c0LjeySpshvx47JaWzZYicfqTIo4jHhSY3BEZ+EaVMRkn+Mem3bvHlyXMZk2rTk+Bs3przH931NSJ6Wq1IlTAm60Ed6NHi8D8myi9HRFpAFO5+FJ58EKfTbX8/9NwI4VeC+GglwqFCnjr1P6BypcePkMxaQJX0wUA2qAtWCDh0bvlC3bspzvrZ3L0BTKirwYIX7mPrWs6f9Ov/35og6+a3pH3E+Bw6Mw+TJE1NY6kj/qezddW+TOHtpuX1q8eIFZHE2IUWBuY3owYs8eKAaUQ04qHbDt9+CzDGnfKFzn50HJ67B2RwkaUuAalBHcLWcwgXI15xH9s7KX2wYjUnvLrim5S6OL6717g28/PIFpaPUpEkTj2XBpViMf8+hQ/tBAwWfBx6pfvihfTT7yivAM8+kzBJrFepBryn1c96fOpV8RlqtKRTBWKvQOXANfP688xWAugGOUKsWaOrLceoXBzwzQkuLpIS2C54kZKin0Thy5ITS6PP0ezKTPs+tcWANv0mT7Mf6f7a+rNeM1I9zNK/6fW5m9cBTQqylqBup2LbNrgqr31+9GqTeqp/Z96TJqtRn+YybZeqz+lXgWj9/fgv90Fx+aQZ/hccJGR2dN1VtYfA3ZDo5Gkw4Jplp+iZF4Mnghx+2X+ImmfuaPCfJxCGVAFpZst/r0AFqHpWmhNC+vf0aN/evvWaPy+U1bFiKpNUJdxG42ePAP4hx4+zHnP6DD9rnI2m6yNGPpdG36hZw14CP9cA64HyNN86fN8P160meNxXj0SETJd68eUOaw0o9cvTkiNt1lO38LhZQ5blCKkjHpq9l8zxky5bJ12lS3DEy5vhUi2qkQ01fZd9oUlzj+UQ9LZ6T5LlL6mJpNLGtrpOuvyP+iRPQSpdOjs/r3dT8O55n8Ts97bZtk6/r6bvun3oqOb7+nKf2PIfK7587dy7l0XPB4zVk7doNsXhxGH2LOQJNTNM6rfu88KrMggUAEYMU80HWIuwjcjLfA5qjVLrRZPnEEVjPe/ZsoEEDkNqpvV95333ArFnJzb8+EueHeDDDa9Q0Wa9G/TzQ4ZqW5iHx++/2WtmRuMkOuM8bFmalFayGHs2Zx9VgV1Jb15zaS
p4q4SZJgn8icMcdVuoDd8a0aTM9+gEeryGbkrZ+pUpl8f77Hn+VR4EK5sQ3bwYWLkwk6Z/BHofB4zUkfwGpXtJgoqdqDlkQQYL/IMBTWKTiQIO72jSwW+t5o1ee656mTPnOOztplSvbNJqzoxuy+QsGPNBifRtvGRpgZnglsIpr8eKFtTZtrCnWff2lYIIxn9SwKTmEcePGeYUj/BKvEZJfxgYA8uaNICMAoaJvbfJWYvFiu9LXo48+wkXnteBVQvJXLViwgGQIc2vt21ul+TYpKX/4AUoBr0+fXmQ7KcFrZOQXeZ2Q/FIyIkqCqYW1mjVtpHLJWZDNDBiw4S6W66RlQm3YsOd8YgPIJ4RkUrKcZP36tbQ8eSzaZ5+JpQpfE5IrhqZNrUqyn+3++Cr4jJD8weToXJlQYaNSHTpYpbb0QUvBtpNYtYQrhnr1ampbt271FRfVe31KSP3LV69eTbrHlWl6IVQZQmK1A1/XGMHwfh5FV6pkU7XiqFGjVAWhl4mv9qYgJH88d565qShatCAJHFg1kp5R+jDBQAxvfiOrI7CNTbbnw8Zf2ewh2+E0SzANIXVA2FruyJEjlTmPPHmspDUYopF6Ad2WLScYcNNM2iRarVpWJbXToUMbbc2aNTrsptmbjpA6MleuXFE1ZsWKpdXkbLt2FqX9xpqAOSmYYHuW7fWMGsVib2EKxzvv7KytXbtWh9l0e9MSUkeKzcNNmTJF69Spg1LFZJtArGtNXgjEvngarQY5IdPI8oUaNdOqORkmLaq98sorpmqa9fJ13ZuekM4Z5uXHsWPHkj3J8qrZiYmxaQ8+GKKxQSY2khRstZ/z97LRAlbtbdzYpuYRefFhwIB+2vz5830yn+hcblk59oq0jydkW/ZRx/J3kmqdOnUy2Z5Zp4Rpa9e2kLBrghJ4ZUNWLHAbqIHVcFnPZ8GCEMybF0Y2k26STlA+MubVlbQ8uyk3yXnYB56fBb8lpDPOx0kHgWoCciT0F0mnz6fCOUZkDCWpbwt574pXEt0s3U3qBEpC2/lZfzhme0DsyYvtCP39dyhJnVvJzlAcaQGGo1mzpmjVqh3ZF2qDxqQ8zm6S/TkEBCFdC4CmMYiYi6ng1lIhrsSWLdvJQWY8eWe1oGpV9nMYp8jJFth4s6t4uqaStXNWvsqJM1eWO6S+H/k5tPs7ZJ+Hu3ZZybVcKA4etGtzlS4dSz+wxvQDa0Sm+looAvqVU6RMQBqQhHT9bhoYkerpVuWMk73B7ty5jQp+u6pJExOJCRRo7pNUWy3kBTaBnHAmomRJuyag7gVW37PBAO4K6Cqw7CSLtf/YfN7zzyd7g6X+nfICy+qwujdY1kjkY96TwhcRkB1s2kAG6Ok82TNsgQJR9EOpSD+aWrSvREYF6hAJG5BzzxiV10D+FxSETKsAaekS7C+R+6M0YFJ+smmNndRUD9P5QSLOBZw5c5HMhxCrDAi6P+3o6HzkgzGWiF+afgQliPwlle/sUmQdiwkYDMRLC86gJmRaoOjXedhelnRf+5HRR/IoRrXbBbLnc1VtTGYON0hpevjw4coH95s0zOWaTA/5qVq1kkI27/VNvyd79whY3V+Wq4wA90EPknuIvqToXZTcyvLmGo7RcJfNi3DtR/bTaYRPOq0Sso2Afw/Jsv3ZmXvw119/RTmygcJ9uLTCzz//rEa2XJuypTc2XS0h+wgIIdPBbhoZbuyt2z9JI94kMhKkk/AMiSmRO400YsrlzCAghEwDJXK1RjYd95L6bq80YrDNx73YTErLXDtysJGNlsmTJ6cZX25kjIAQMg2MuLnm0a/zIMU1KvlxUYMW/TpPL9G6O8150qSkhGwhIIRMAzY2bnD33XerwUoaUVSfkUnoHEhKiWw/znW+JMdZQEAI6Qas/fv3q2mcrl27urlrv7SRjEjyHKZrsJD1KK45JWQPASGkG9xmkfmyvGTylo1kpRV+/PFHsgZGyzQugSTf8RuJH3FNKSHrCAgh3WA2m2zstSdrpO4Ix9F5EPP999/TkqF9ctw1CW7GZ870rJUw13cGyrkQ0qUkr5PVUp666dy5s8ud5NNl5DWUJYzSCjxJznOSErKOgBDSBbOF5E6LSdmRHNOkFdJqrvX4PC9JFjrUyo1+TfaZQ0AI6YLTnDlzyM9MXSXs4HJLnXJzzIRMq7nWn2FS8sS6hKwhIGvZLnhx/3HgwIEuV5NPd5LWVH2S9tVXZ5LvpDziZvswGSSXkDUERNrHCS+exqlI0rrcR2QB2MwGJjEPYkivPLOPSLw0EJAm2wmYRYsWkVpABBqRm9asBPJ2q/R7svKMxHWPgBDSCRceXTdr1izN6R6nqHLoIQSkyXYCtjQ5kXnooYfIp9/LTlfl0JsISA15C23uP7L6QqtWrbyJv7zLBQEh5C1AuLnm/qOnHQO54C+nLggIIZ0Iyf3H8JzosrqAK6dZR0AI6UTI22+/PesIyhOGIiCEJDhZBZb7j1mZezS0FCQxBwJCSIJiPdkoYRMk9djjpgSfIiCEJPjJK4RS0GcZSAm+RUAIeYuQvD6d3fDRRx+lKYyR3TSD9TkhJJU815A5IWSwkscT3x30hOTBDFucyAkhWSAjPf0bTxRcoKYZ9EuHunYh2+2JiooK1HL2m+8K+hqSm+vKlSsLGU1C2aAnJHmoVRLiJimPoM9G0BOSLZdVYVvPEkyBQFATkvVi2PwzGwmVYA4EgpqQbDmXFfuFkOYgI+ciqAm5myzMszJWhQoVzFMiQZ6ToCdkbGxsjkfYolNj3K8oqAnJRDKiuWatw0ceecS4UgnilIKakNxkG0HIIOaP4Z8e1IYCmJBdunTJMahsB4htkUvIOQJBS0ie8mGDUez2I6eB17J5k5BzBIK2yWZ3HmxWr3jx4jlHUVIwDIGgJaRuTq9YsWKGgSkJ5RyBoCUk15A8B+nOGVLOYZUUsotA0BLy5MmT5F+6oJhNyS5zPPRc0BLy3LlzipAewlWSzSYCQUvI8+QjmB1iSjAXAkFLSJYQj46ONqQ0ZKXGEBhVIkFNSKNqSF6CZCP3PEhKb/vyyy+NK7kATSloJ8avXbtmWA3J3GB9nL/++stBE3baWatWLbzyyiuOa7Ka44AizYOgJSQ7Xs+VK1eawGTlBjt35805sNGqmJgYNGnSxPmyHGeAQNA22UYSMgOMU9wuUKAAPvnkE3Tv3l2Z/7t06RIiIyPx7rvvpog3ZMiQFI4/2cj+qFGjUKZMGWWhjWU42UBBoIWgriF9YXqPvYNNmDBB2aFkn4i5c+fOFKeYjEzAb775Bi1btgT702Frv5zeo48+mqk0/CFS0BKSVRfYUaa3g9VqVS5Fvvrqq0y/mv0mfvjhhxg2bBj69OmjnmPXJStXrsR7770XUIQM2iabrZ0lJSVlmhRGRsyq2T8exV+9ehWtXMxNc03JTuRPnz5tZPZ8mlbQ1pBcO2bk/MhTJZPV+U9dEMSVkHr+mJA8gAqEIIT0QSnyXKVzcD3ne87ujXUzgfPmzXNr1CCrBHd+t9mOg5aQNpsN7LfQDIEn6HkpUw9cc69bt84xT1q7
dm01smYJd3abHMghaPuQPJHtXAvlpJBzah+SPT+wQ0+2M8S+FHnUzH1cPXBe+dqrr76KGTNm4MSJE2ATMKx+cf/99+vRAmKf/NUB8TmZ/whuBnkO0Axh3LhxYKdNt912G9q0aaP0xPv166eMGOj543nKxx57DE8//TRKliyp3CfznObYsWP1KAGxD1pzfA8++CBYSJfdEec0sHDF559/nq4X2Zy+I6Pn27Vrh0KFCmUUzfT3g7YPyTXkjh07DCkg1jr86aef8MYbbxiSXnYSYZOCQsjsIGeSZ4oUKaL6YkZlZ9KkSUYlFdTpBG0fkrUNdc3DoGaAyT4+qAl58+ZNnD171mRFEtzZCVpC6uqvR48eDW4GmOzrg5aQPM3Cc31sI1KCeRAIWkKy2BfP57FJZwnmQSBoCclFwLbFjSCk2Ic0jtBBTUieuzOCkKJ1KIQ0BAEmpFGT44ZkSBJB0K7UcNkzIVk/m13LFS5cONt0EPuQ2YYu1YNBTUjdPw1L2OSEkGIfMhWvsn0hqPuQPBfJa9pG9COzXQLyYAoEgpqQLKnNNsa5hpRgDgSCmpBcBDVr1lTCruYoDslF0BOyXr162LBhgzLvLHTwPQJCSCIkj7T379/v+9KQHAS3azku/zp16oCV97mWlOB7BIK+hoyIiFDzkTkhpKzUGEfkoCckQ8n9SNb4y26QtezsIpf6OSEkYZJTQqaGVa5kF4Gg1Tp0Bmzp0qW4/fbbcfDgQZQqVcr5lhx7GQGpIQnwunXrKmHdnDTbXi63gH2dEJKKli1DVK1aFatWrQrYgvaXDxNC3iqp5s2bY8WKFf5SbgGbTyGkEyG5yWZTzxJ8h4AQ0omQrBYr/UjfkZHfLIS8hX/58uURGxsrzbZv+SiEdMa/adOmQkhnQHxwLDWkE+g8sGFD8uzYPSshp/Yhs/KuQI8rhHQqYSbkmTNnwJZqJfgGASGkE+48Qc7CFsuXL3e6mvEh69R07do144gSI0MEZOnQBSI2/Fm0aFF8//33Lnfk1BsISA3pgjKbVGYnmlntR7okI6fZREAI6QIcE5L9wojilwswXjoVQroA3aBBA+TLly+Fq2GXKHLqQQSEkC7gsjoDe0Nw9n3tEkVOPYiAENINuK1bt1aE9JXrOTdZCppLQkg3Rc39SNZE3Lx5sxrcsJMi9gfDzi4PHDjg5gm5ZBQCMu3jBkl2qMQWdtmqBROQjVHpzjrZ0aWr+w3WqeFBkMxFugEzi5ekhrwFGJt2Zh/WbMmMCXfx4kVs3LhRkZGj6M237gjTGWfROnRGI2fHQW39TIdu5syZ6NGjh9LPZuLpc5CuzjnDwsLAmwTPISA1JGHbvXt39O3bF2x8SiejO8jz5Mnj7rKqVcePH+/2nlzMGgLSh7yFFzfR1atXV9699ObZFUrWSGTNRAmeQ0BqyFvY8mQ4+ytMr4Z013/0XNEEZ8pCSKdyb9GiBUaMGKFG1E6XHYfsaF2CZxEQQrrgO3r0aNSuXRs2m83lDhAdHZ3qmlwwFgEhpAuevHQ4ZcoUNeJ2vsVev6SGdEbEM8dCSDe4ssLXxx9/rEbd+m2eGOd+pgTPIiCETAPfIUOGoFevXo6mm2tIGdSkAZaBl4WQ6YDJKze8asO1Iwc2ueIuyEqNO1Syd00ImQ5uPIj53//+h6SkJLARgbRqSLEPmQ6IWbwlS4duADt37hyOHTsG3l+9ehV33HEH5s6di4ULF+LQoUNqnZtJyuH69ev4559/cP78efTp0yfFSJxXdsLDw9VgiMnNGw+MeGO9Hd7cjebdZCloLgXlSs3ly5eVqis7TGJfhyzRc/jwPlJdOEr740SyuBQEyJUrlIQrNBQpYqUtFJGRSUQke5SwsCTkyZOoThITQ3DpUvJv/NKlEMTFhZAoG2hLpC0hRbqhoSGUXgEUL14MxYqVJgmjskrCiF3esaQRrwzxcmYwhYAnJEvxsL0e+7aGCLgNR4+eVmUcFhaK8uVtqFAhHiVKJJEpFRAJQOQAkQQoUABUmwG5coGeA4jHaNQoZ/SgilQRlNR2qBYG5QU4csR+fOBAGHkV03D2bLx6SUREONk/L0/zoo1Qv359tbGRfvb1HaghoAiZkJCA9evXY9GiRViyZCHWrl1LTellGpSEkP3HMCrQm6hRg/1ks+NNoGxZ0Hyj+Yr27FkQMUEylvZtwwb2EgH6lgTKr4W+pQIJC7cFS7az5d+YmBjzfUQ2c+T3hGRhBxYfmzdvDtg08+XL16imC0erVvFo2jSJSMiuP0AGALKJkIkeo8qeanpg3TrQD85GJE2grgToR1YBbdp0VgLCTFCe3PfX4JeE3LJlC6ZPn05EnEpCtNuoWbWiffskqjGSiIigGsRfiyNr+SbBdvoRAosXA/Pnh2HLljgaOEWhS5duJFLXQ4nFsSUOvwok3eIXgUaxGskcavXq1WRLUFrhwlbt3nuh/fYbNJqRoW+QjcZmhBG0O++0aTZbqJY3bwRhNFCbP3++RrMCflHOXIqmDmRnR+vdu5cWFmbVoqKs2pAhoRpZXiaAhYDp/QhJDUj78ENotWrZ1A+4SpVy2gcffKDRDIOpy9uUhCQBWW3atGlas2YNFZhNm4ZpEydCu3JFSJgeCdO6R/1ObehQaJGRFi06OlIjETuNrHOYkpimI+Tvv/+u0ShSozk6rXt3i0aGyAg42YzAgOb5tTFjoBUpYtPCw23aM888o9Hkv6mIaRpCkq9BrU2b2zVWa+nd26LRlIcQ0UM/RLLrr336qb0fXrBgXmraP9Ti4uJMQUyfE5KBGDVqpGaxhGoNG1o1GjUKET1ERNdalrtAo0ZBy53bolWvXkmjxQOfk9KnhCS9Z61GjSpqsPL11zJQcSWMt85pflO7/XarGjiOGTNGowUGnxHTZ4QkqWwtVy4bDVysGhl+CLhasWBBqAEZKTL6xbfxrAVPGUVEWLS2bVtpFy5c8AkpfULIN954g/qKIdqwYSHaLb18QwuNVKwVGXi+skwZaNxncq1t3nwzOQ6tK6e67xo/q+dGEHLbNmj9+kErVw5Ue0Ej0UytUydoJHRkeH717+MRebFiVmq5Kmu0Ckbv8W7wOiGfe+5ZzWoN1b74wnOgOhOSSfn226nfZXZC0pI8jYSTfzT8HfrGA79bGrvEltTfltNrJGGn1axp00qUKKKRuJ1XGelVQpL7DDWKnjzZeBCdC8GVkKQKo/FEsXMcsxOS7KY6CEiG1zT2VvLee1D4MTHJFlaK73H+NiOOSSpJkZJrSm82314j5Jw5c9TcorvayggAndPQCUmiZLSEZi/Yxx5LWYBpEZKXId9/HxoTIjLSXkuRzpeaWD58OGUa/E4SdNBI6Ib6XtBIXE275x5oJ07Ym1cmjmsfkryO0PwfNE6Tm2GS2dXIvpW2enVy2iQ/qXHe8+aF1rFj8nV+H8lOOIjKaTl/t9HHXFMWL27T7rijrdeWHvmLPB5I6pr6csWpsELpXZ4FkdPXCclkeOIJewGSAIy2fXvyu90RkoS/NTKe6yhwUqVRRNObSu7DkXC44xs4PSatfp9rYv4BkFq3Iqc
rIcmSnyIiX2cykmwl9dfsz/N5ZvqGjRvb43OzTcLsjrx4Ctc1a0BTciG0UjbR4zzhF/AXeTy8+OKLGknk0HKV5wF0JiTXMlyLkJCtIk2XLsnvd0dIMlrhINfgwfYC50EXT0kxAZhIJM5GeNk30lhwxB85EmqARkK8qsbUSepcQz7wgD0+/zi4ZuV0eMBFnkhUOiSj6Uhbf4fz/o8/kt/HtbLzPU8eP/FECAmzRHtlVYe/yqOBa0eWOhk3znsA6jUkSX3Tt0F7553kgpw/337NlZBMPG5ymUjcVLqum3PTqZOMpN8U+UhlRl1jwjtLHPFIVY+rEzI+Prm2JYdhKl86iaZNS45PJilT3NPjLFgATX8fD3a2bnUfT49v5J7scKkK5f333/coVzhxj2sdzpgxg3RUbuC++6iIfBSefBKg6R8Vnn0WpEWYOiMs/EprvSqwQK+r5T1qKh2BLDwr9QNqMlUg0pHdSMdtkEOwFOd8Z+9e4No1exz2E081rmPr2TP5WSJaqvDddwBN95DCmT1dGhSSpbZU0Tx2gX6gpMCWSBqY33jsHXrCHhctnjbtF3ToEEJi9vorvb+nGgVUI4Lm9EiIFfj229SqC1QLOALNIaYK1Fd0BBaMpRrUEVwNWjDZ+Br1GR2B9XH0UKSIXYpdP3feu6Y1ejTw6qv2GKzj88svIEFk5ye8czxwoEYWhreBVX7ZlZ6ngscJuW/fLnTsSHL2Pg40+gXJB4I66XjlFYBGuikCK3PpQa8p9XPe07SRI9DIOIVKBGsVOgeugVmZyzlwLaOHWrVAarX6Wdr7l18GSDpHBdYDmjULoElynwRWbrNaQ8AOADxJyFBPf93Bg4eVJp+n35OZ9GkeTwXW+Js0KeUTrPCl14zUj3M0r3osbmb10LChXStRV/6jFRUyJKDfBWgKB6RvliIwkfRuADfLrAuTXqCJbwcZ+X38fl+RkfPJrUzBgjacPHkyvWzn+J7HCZnjHBqYAA0myF6PPUGavkkRyHQPHn7YfombZO5rxsXZiUOiWuRH236vQwdQDQHSZATp8divcXP/2mv2uFwzDhuWIml1wnpXvXvbr/MPggZ5KjAxH3zQrnJL00WqH8vpPf64/T7/52PuaixenHJz7hIkx/bcEY+2PK4n7ulhU9261UhC2XsjQh5duo6ynUec//6bPFlORafxpk9H8TwkuaJR1/g66WOnmIekWlRznhzn5T2eP9TT4TlJnrts0gQa6Xir69TUcjmqjSfMeYVFj8/r3Twxrp+/+649Ho0DHdf0e+72U6cmp62/w1N7xoaabG3q1KkepQzVC54N5ctXoV83VScmCbRCAhLndxvYIABNr4CIgXr1AK41uT/IWoz0o1IqqCVKJD/KKrazZwO0qqOaNFbw49kE7uvpzb8+EueneDBDquKgyXo16ueBDte0NA+J33+318ocj0hlusD5Jqk0Uimu49G8eVwN9scff8SgQQPIOgObIvHot0jiHkTg4YdDyAhDNdIF3+rBt1Al4NHUKXH2/xIRkTvVIMLT75X0jUOA+7RTp1qoYhliXKJppORxQrIdmiee+C+NGK1kzCmNXMhlUyPw0kshNNEfRd0R6o94Oni0h3or8WvXrpGQaUmtTx+L6tx7quMt6Ro/yFm1CkpKi+xkeoMqavjnlRfNmzdPKXKxbJ8Qxz8wILNJJI1kIyn19oElfqYz/pNPPlFSM99/7x8FEsw/HBbQrVHDSpYvqmrk5UwvQo/vmRleDcOGDVPydZ99JqQ0K+G5ZmQVhpIli9K862Gv8sPrhOSv05W8nn3WM0peZi1of8gXT/bHxtprRm/r0zA3fEJIfvHkyZOVOY+WLa0aiWbdyorsfUValgdludHw8FBSWWjn1Waa+aAHnxGSM7B582bqo1RThgLIA4dYNFP1g/d/lLycyhVDeLhVe+utt4LTUID+i3A2pdKggVVbssT7BeKrWsnX72V1CzalkiuXRelhs30lXwef1pDOH09ydqRb0loJFfTqZdFYSd7XBRao7yfJdY0855HhAStt+TSe/RBjU85sdDqeNWuWMnzESlVdu1rE+JSBzTgrvJGYHCls2ZQZG57xYMvEZgqmqSGdQWGDpaSLozVv3kTVmI0bh2kTJoCsv0qtmZ1amzUcH32UlcTsBktfeukl0h0/4Qy5aY5NSUhndFasWKFMOrOBzchIqzZ4cKiqNXlUmJ3CCZZnWPaSLV3UqCEmnZ35ZNixbvS+eXO7meeYmGSj9+6MSQUL8Zy/c/9+u13xdu1syn5Svnx5/M7ovcflIT0hHLKVlFJYvXbGjKkkn/cPeWm1KLcgrVqxaxCgWjW7iqkn3m2mNFm5jN2CkJ8o5RZk27Y4EgzOizvv7KHcgnQkZXJ/8/rll4R0JgUtbTk5TlpCjjGvonDhMPJwlYBmzeyOk1hP2lmN1fl5fzmmmhA0X0hCsnbHSUuX2kgDMIH6LUDNmpUcjpNakg6GOE4ySanSYIhqzA2kDLWYNnYttxpnzlxUruUqV7aRqkE8eb3SlFs5VitlTUPdiaZJPkFlg5W32LWc7l6OXcuxB6+LFxMovxb6hirKtVyrVq1wGxkjKqjrS5jpI7KZF7+vITP6bnY9x/4PnZ1vHj5sV+Uk50JESna+mUDONhOVw82SJe3ON3nP+tesr22UMyzWz+FmlvW+WZuUnW6y0DJZGSNLGCE4eNBGJExSPg35u/LkyaWcb9ap01g53mxAyju1STWRXR4Hagh4QrorOPaBza6Jd+/erfbsMfbo0YNEisNEjmNksuRGisfYa2z+/BYiKLsmZoUujUjKM1JcwybRtUR1HB8fQhYtrOqYyXfxYijIpg+RUKMtiVwXJ6h7+j8y9K/cE5ck9hcrVsrhnphdE/PG14MtBCUhMypkkv+j2usI1VTniUgXHHs+ZjKTBDwZBripkuE9mxfhvmyzZs3IhEo+R/LssN1CaoX5qZp1dt7Ox+y8vQhpvflzf8/xoQYeCCENAJMsA+NtssR6jB1gS8gRAh5X8spR7vzk4Vyk0O1cM/pJtk2ZTakhTVkswZspqSGDt+xN+eVCSFMWS/BmSggZvGVvyi8XQpqyWII3U0LI4C17U365ENKUxRK8mRJCGlD2vFrDqzsSco6AEDLnGOLLL78ko6ZVDUhJkhBCCgdMhYAQ0oDiqFmzJgYOHGhASpKELB0KB0yFgNSQpioOyYwQUjhgKgSEkKYqDsmMEFI4YCoEhJCmKg7JjBBSOGAqBISQBhQHW9L4niz5S8g5AkLInGOIhQsXYvjw4QakJEkIIYUDpkJAVmoMKA7WyT5w4ACZN2lpQGrBnYQQMrjL33RfL0226YokuDMkhAzu8jfd1wshTVckwZ0hIWRwl7/pvl4IaboiCe4MCSGDu/xN9/VCSAOKZPr06ejbt68BKUkSQkgDOEBufLFs2TIDUpIkhJAGcEDsQxoA4q0kZKXGOCwlJQMQkBrSABAlCeMQEEIah6WkZAACQkgDQJQkjENACGkclpKSAQgIIQ0AUZIwDgEhpHFYSkoGICCENABEsQ9pAIi3kh
BCGoCl2Ic0AEQhpHEgSkrGISA1pAFYin1IA0C8lYQsHRqHpaRkAAJSQxoAoiRhHAJCSOOwlJQMQEAIaQCIkoRxCAghjcNSUjIAASGkASBKEsYhIIQ0DktJyQAEhJAGgCj2IQ0A8VYSQkgDsBT7kAaAKIQ0DkRJyTgEZKXGACzFPqQBIN5KQghpHJaSkgEISB/SABAlCeMQEEIah6WkZAACQkgDQJQkjENACGkclpKSAQgIIQ0AUZIwDgEhpHFYSkoGICCENABEsQ9pAIi3khBCGoCl2Ic0AEQhpHEgin1I47CUlRrjsJSUDEBAmmwDQJQkjENACGkclpKSAQgIIQ0AUZIwDgEhpHFYSkoGICCENABEScI4BISQxmEpKRmAgBDSABDFPqQBIN5KQghpAJZiH9IAEIWQxoEoKRmHgNSQBmAp9iENAPFWErJ0aByWkpIBCEgNaQCIkoRxCAghjcNSUjIAASGkASBKEsYhIIQ0DktJyQAErAakEVRJbNq0CT169EBCQkK63x0SEoIuXbqA5yglZB4BIWTmsVIxa9WqBV6ZOXHiRIZPtmjRIsM4EiElAtJkp8Qjw7PQ0FDce++9CAsLSzdueHi4qknTjSQ3UyEghEwFScYX+vXrh7i4uDQjWq1WdOvWDZGRkWnGkRvuERBCuscl3at169ZFhQoV0oyTmJiIAQMGpHlfbqSNgBAybWzSvcPNts1mcxuHa8aOHTu6vScX00dACJk+Pmne5RrQ3UibSdqnTx9wH1JC1hEQQmYdM/VE+fLlUbt2bfD0jnOIj49H//79nS/JcRYQEEJmASzXqIMGDYLFYklxuVChQrj99ttTXJOTzCMghMw8Vqli3nPPPUhKSnJc5+aa+5auJHVEkIMMERBCZghR2hFiY2PRvHlz8NwkB26ueUpIQvYREEJmHzv1JNeIej+yZMmSaNCgQQ5TDO7HhZA5LP+7777bQcj777/fcZzDZIP2cVnLTqfob9y4gcuXL6vtwoULqr/Ie+dw/vx58Pr2hg0bEB0djT/++ANsDU0PfJw7d25ERESolZuoqCjkz59fvy17FwSCSoXh+vXr2L9/P9jR0cmTJ3H69GklJHHq1CmcPnUCJ44fARPuwoVLuHz1Os0zJrrAZdxpVGRuREXmQV4iaEzhwigSWwJFi8YiJiZGbdw/LVq0KMqUKaP2xr3Z3CkFHCG5xtqxYwe2bduGffv24cCB/Tiwbw8R8QBOnj7nKI1wWyhi8llRNH8IiuRNQExkIorkA6Lz2LcoquQib215cwP5IoBQmnLU93pCfO8mSaJNXAo82g64coMGN048vnoTiKP7vOd7l2m7eA24dN1+zsenLwMnLobg5CUrHYfi5MVEnL9MD90KucLDULZMCZQpW4G28oqkVapUQfXq1dVxII3q/ZaQPKJlL6xr165V+x3b/sH27Vtx/ORZVYyRuS0oX9SKMgXjUaZQEsrGAGVubaUK2kmnF7gR+yTNTlgj0uI0mMTHzgMHztB22r7t5/1ZG/afDsHh03bhDiZrlcrlUbV6bSJoDfA6e8OGDVUta1RevJmO3xCSa7uVK1di3bp1WLdmBTZu+gc3bsYhKsKK6iVCUb1YHKoUA2qUgNqXLgQaYHgTSu++i2vanceAbUeAHUeB7ccs2Ebb/hN2opYpVQyNmjRHo0aNaWukNn9YzjQtIVkAdtmyZViwYD7m/zkb+w8ehdUSgkrFrKhfOh71ywItKgN1SgMWmStw/Bq4K7DlEPD3fmDFHguW7QrFifPxsFotqF2rJtq1vwPt2rVDy5YtTbnebhpCssgW14AzZ87EbzN+wZ69B5ErzIImFUPRqko82lQHGpUDwt0L2DgKRA5SI8BN/uIdwF/bQ7Bohw1HzsQhMk9utG3bFt179ETXrl3BS55mCD4lJPcD582bh+nTp+G3mdNx+sx5VCkRhu5149ChJtC0IpA7fcFsM2Dod3nYc4LIuQ34Y5MFC7ZqahDWvGkj9OjZBzyvyhP8vgo+IeT27dsxadIkfDfh/9TIt1oJK3o3SkCfJkC14r6CIjjfe526nAu2Ejk3hmDmRhtOX4xH0yaNMei+wUrIOE+ePF4FxmuE5DlAJuH4Lz7Fxs1bUT42DPc1j8N9twE86pXgewR4ZD97E/Dt0lDM2QxE5M6Fvv364+mnn0HVqlW9kkGPE/Ls2bP4/PPP8enHH+DSpUvo1zQJg2/X0KJSYI+CvVJ6HnzJqUvADyuA8Yts2HM8AV06d8Kw4S+owZAHXwuPEZJXPMaOHYvPP/sE4ZYEPNo2AU90gJp89uQHSdrGIsDzq79vAMbNtmHFzng0a9IQb73znseIaTgheaAyfvx4vDr6FSDhKl7qFo8hreyrHsZCJal5G4GVu4FXp1swb0sievXsgbffeRcsOW9kMJSQa9aswf2DBqj14ic6JOGlHkD+CCOzK2mZAQHuZz73ow17T2oY/vwLGDlyZJoKb1nNryGE5DnEt956C6NHjVLzhV8MTkS5wlnNSvDFL/QIcPYKaKUJ2Pq2f30/y518vgAY8bMFNWrWwveTf0bFijRPl8OQ4zWOM2fOoE3r2/D6qyMxrn8i5g4XMuawTPzicVr4wZN30IrQG4lIOPcP6tWthZ9//jnHebfmJIWjR4+iQ7vWuH7+ANa+loRapXKSmjzrjwiw/MCqUQkYNjmBtC374dy5c3j00Uez/SnZJiQLO7SlmjEPTmH5K/EoFp3tPMiDfo5AGLHoo0EgUT4NQ4c+pqb3nn/++Wx9VbYIyZLUve7qhmjLSSx4IQEFIrP1blM9xH25sTOBmX8Dh88CecLtS5cj7wIaV0jOau+PgF/WAlwINycCXy8C3psN7DsFlCgAPNsZeKx9cnw+Wr8PGP4jsOZf0Po81LLoh/cG3jzsiG4Ay5E+OWKEmkhn+0ZZDdki5H//+zQJve7ChjcCg4xnLgNNRoFGjXaisQTRkXP2VQteVpszHGqwxuAyUTnwqsbHfwJPTbKf838m5dDviJi08tStnv06i4a1HmMXxuUrNkL8VyI0X+c5vkALj3cANh8Kwf33DcTfGzajbNmyWfrELA9qFi5cSEY4x2PCQwlK6DVLbzNp5Oep9mIyckd9BRFzzWtErg+AdjXsxHvs2+SMcxw9jJkB/PQECc1SrfnfTvpV4AOqMfUweloyGUf2BM59Zd+4VTl3RY8VWPuPByWhRL4bePQ/D2X5w7JMyNEjX0KnOhbc1SDL7zLlAzx98dMqe9YalwcalLMfs5ib3vTuOg5sOpg6+0PbA31JIISFgd+6J1kKneNz4Bpw1kb7Mc/HvtTdLlXOqhHvDrBfD8T/LKH14YB4/DlvIVavXp2lT6QGJPNh7969WLFqLeY+T0gHSNhLzey1OPvHrNhN/bo0iLL1sF0Y2Pmz29dMPuM+ZYUiwLp9dh0ZvsMqCKxLw4HnGjmOHuqWtp9z0x+IgeVXa5ex4bvvvkWTJvSrzWRwgijjJ/766y9EhIeiTTWqVgIkXL6e/CGs5MX9R3eBlbtcQ0zelFciwu3nmmbfs1KXHlyfZ/UKvnb6k
h4j8Pbd6sbjpwXU0c5CyBIh9+zZg6oku2i1BA4hWWtQDzyPOvd5/Szne52gnNKFaynT4+b8/NWU1wLtrGZJYMzMQ+CVvMxqRmapD8kyjRFht37+AYIeL3HqI2dulhOTjPuw4tHJEu+sjHUzPjnt1XtI9iRwftfJH+Z0xLgm0S+PeZPZkCVCFihQgPSGqa0JoMCj5t6N7R90/AKJWf1hP2ZiPvgVUOBh6guNyN6ImJXP2tNInQPrX7823U54rhmH/Wi/Hsj/T16keVdS082KrfUsEZJ1fncdjQu4poZHyDxS5jDiZ4CFHmL+A0xYYm9WB7UkYtI0TXYCT/XogxmeeM//kD1ttuIXm9+eopG1cnby6KlnVv8L1K1TK0vJZ4mQrKXGjP95dZbeYfrIPJhZ+zrwxB12YwKs88y1G89D/v6cffUlux/B6rqzh9unk3gqKSIMuI8IPmsYUPAWyfWReHbfYcbnblD35Nf1NnTr0StL2cuy+NnDDz+EBb9NxI6340UlNUtQB1fkD+fSvOsv4di770CWbBNlqYZkSEeOHIWz12zUBwoJLoTlazONAC+LvvKLBc88OyxLZOQXZLmG5IemTp2qPA1MHgr0a8ZXPBt45YMHGJkJHJcFJZpVzExs4J1+FLdS5uJ6MlagfCM31U1G25ArphaWLl+ZocczV0yzRUhO5PHHh+K7b8Zj2tOJSnrFNWE5Dz4EuC9814dWbDiSR9leyo7BgSw32TrMH330Mfr2vxdd3wvF1DX6VdkHKwI88d/hbSs2H4vCgoWLs239wppdAHnm/euvJ9AcUxT6ffopWeLS8GJ3MfyUXTz9+bmNB4D+X9hwDYWomV6EypUrZ/tzsl1D8hvZ2DvXlB98+BHG/m7D7WOsZLsw23mRB/0MAV7+5IWEJqNDEVu+MVauXpcjMvLnZ7sP6YodGw8d0K+PslY78i67UQB9Qtg1rpz7PwIsBf/0D1as2wu8/sYYPPfccw73KDn5uhzVkM4vrlGjBtau34innh2BkdPCUe2FMCUZ7RxHjv0fAZakH/RlKBqPCkFowQZYs3Ydhg8fbggZGR3DakhnqNmo/IsjXsAPk39EowpWDO8Sjx4NjDV57Pw+OfY8AtwV+2AO8M0SC80tFsPb495XpvuMfrNHCKlncv369Rjzxuv47fffyd63Dc90JGtntGwmNh91hMy/Z0u842aFUmunoXixWDzz3PN45JFHPGZ916OE1OFmSfOPP/4I//fVeIRZktC3cQLubWE3yazHkb15EGDJJJZXmLQijAxMxaF2zWqKiOzl1mrN9sRMpj7QK4TUc8K+YSZOnKgMle7Y9S+qlQzD/S3ilPgXe0iQ4DsEeIWFNSy/XxGqVIFDLTb06nU3HnhwCFq1auW1jHmVkM5fxYapWN/ipx9/wIWLV1CnrN2Uc/f6JLJUxjmmHHsKAZbLZCW0mRtCMXdLCK7eSFLm9gY/+DB69+6NvHnzeurVaabrM0LqOYqLi8PixYtvGbv/FUeOnUTpwmFoXz0OrauRTjNtutyg/ozss4cAO3RiYwVsX3zRThuW70yg0bGFasDbyPh9L7Bif4kSJbKXuEFP+ZyQzt+hkXbU33//jd9pEPQXKQetWbse8STnX7VkLrSucgMtaQGgYTmgfBHnp+Q4LQRYrpMHJawusXinBct3kRbkjUSUKlEErdvegU6dOqNjx47Ily9fWkl4/bqpCOn69VevXsXy5cuxaNEiLFr4p1qwZ4IWiLIRMTU0KJugCMrKWWUK8cqRawrBc87+abYftZttWbcvBOsPhGEnSfezTkuxooXQqk17tG7dhrbWhhsZNRJlUxPS9UNZWWjTpk12b17r1mL92pXYtecAuGbNk8tCGpEWVIuNU54cWA+a9aR5sJTL5pqS/56z3s/+U3by7ThGnryOWrHjeCgOnYpTH5U/X6RyLdewUVO1Z//dvm6Gs4K2XxHS3YexIX12tMmbcrq5dTN2bN+GQ0dOOKLHFghTxCxdgPwexmgoWQDKWhvrVcdE2Y/ZmoQvA/fvWEebHXGygQE+Pkp7dnp08KyVfBxacPBUPG7EJals5onIhapVKqFajTqoVq2a2tgZJ9vS0R3K+/J7svtuvydkWh/ORLV7gz1AHmFvbfvJO+z+PeSe+CjOXaCSdwq5wy3kHZa2qBDkzZ2EyPBE2pKUNS82g8KE5bV53nS1WX6cJ/mda2BnXWtW3uKmlAPP7V0heUE2HsDb+es22ofgEm2nLibhDPmHcQ7hYTYUi41B6dJlUaZcReX1lclWunRpdVyqVCm/Jp7ztzofBywhnT/S3TGP7tlfNs+N8qZ8ZtM5WwRmMrPj9iuXL9F2EefPn1XnCQkJpGN8AzduELNuhSvkV5v7tXrIGxVBCmKht05DyKk7VcMU2HF7VFReREblQ1Te/MrZO6uH8nX2kc3+sQuT32z92BdTLrcy7dNd0BLSp6jLy9NEQP8ppxlBbggC3kRACOlNtOVdGSIghMwQIongTQRozIip3nyhvEsQSA+B/wcKzVP17cgC4QAAAABJRU5ErkJggg==", | |
| "text/plain": "<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=164x511>" | |
| }, | |
| "metadata": {} | |
| } | |
| ], | |
| "execution_count": 11 | |
| }, | |
| { | |
| "id": "f3b6f7ed-49df-4c8e-ae11-3de2a1b26fb3", | |
| "cell_type": "code", | |
| "source": "# Init 'states2' as empty list []\nstates2 = []\n\n# Access the whole history of ‘graph’ States \n# It returns an iterator including ALL the states / StateSnapshots\n# for a fresh 'thread config' value -> thread2={'configurable':{\"thread_id\": \"2\"}}\nstates_history = graph.get_state_history(thread2)\n\n# Extract each ‘graph’ state / StateSnapshot per iteration\nfor state in states_history:\n \n # Append each ‘graph’ state / StateSnapshot thread config={} \n # into 'states2' list []\n states2.append(state.config)\n \n # Display config = { ’configurable’ : {’thread_id’:’2’, 'thread_ts':'...'} }, \n # values = {’lnode’ : ’node_No’, ‘scratch‘ : ‘hi‘, ‘count‘ : int_number}\n print(state.config, state.values['count'])", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "67e962c2-a59e-4bb1-ba5d-08fef45cc5a6", | |
| "cell_type": "markdown", | |
| "source": "```\n{'configurable': {'thread_id': '2', 'thread_ts': '1f0724ce-41af-648a-8004-5ad6f6774865'}} 4\n{'configurable': {'thread_id': '2', 'thread_ts': '1f0724ce-41ac-65c2-8003-0c3296a25121'}} 3\n{'configurable': {'thread_id': '2', 'thread_ts': '1f0724ce-41a9-6b05-8002-f61a533e424b'}} 2\n{'configurable': {'thread_id': '2', 'thread_ts': '1f0724ce-41a6-6066-8001-7f275b14773e'}} 1\n{'configurable': {'thread_id': '2', 'thread_ts': '1f0724ce-41a1-6701-8000-70850948f4c9'}} 0\n{'configurable': {'thread_id': '2', 'thread_ts': '1f0724ce-419f-667d-bfff-8b5302d065b9'}} 0\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "82c7c468-83c0-4f3e-ba62-cdc9896585bd", | |
| "cell_type": "markdown", | |
| "source": "Start by grabbing / picking a state.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "fadb688f-6076-4d45-9ec2-48c3c816162c", | |
| "cell_type": "code", | |
| "source": "# Pass in selected 'thread config' = states2[-3] = {'configurable': \n# {'thread_id': '2','thread_ts': '1f0724ce-41a6-6066-8001-7f275b14773e'}}\n# and GET BACK into the 'step’:1 as NEW CURRENT STATE / StateSnapshot \n# based on this input, via graph(obj).get_state()\nsave_state = graph.get_state(states2[-3])\n\n# Display NEW CURRENT STATE / StateSnapshot based on states2[-3] \n# input with selected 'thread config'\nsave_state", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "e35bfcd1-c08d-4417-92ac-2cfdd474509c", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hi', 'count': 1}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f0724ce-41a6-6066-8001-7f275b14773e'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-05T22:38:21.850005+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f0724ce-41a1-6701-8000-70850948f4c9'}})\n```", | |
| "metadata": {} | |
| }, | |
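| { | |
| "id": "added-statesnapshot-fields", | |
| "cell_type": "markdown", | |
| "source": "The `StateSnapshot` shown above is a named-tuple-like object. A quick, illustrative way to inspect the pieces used in the rest of this lesson (this snippet is not part of the original notebook; the printed values are the ones visible in the output above):\n```python\n# Inspect the snapshot we just grabbed with graph.get_state(states2[-3])\nprint(save_state.values)                               # {'lnode': 'node_1', 'scratch': 'hi', 'count': 1}\nprint(save_state.next)                                 # ('Node2',) -> node that would run next\nprint(save_state.config['configurable']['thread_ts'])  # checkpoint id of this snapshot\nprint(save_state.metadata['step'])                     # 1\n```", | |
| "metadata": {} | |
| }, | |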
| { | |
| "id": "18c4dddc-6588-4536-b39f-d87a4d427171", | |
| "cell_type": "markdown", | |
| "source": "Now modify the values. One subtle item to note: Recall when agent state was defined, `count` used `operator.add` to indicate that values are *added* to the current value. Here, `-3` will be added to the current count value rather than replace it.", | |
| "metadata": {} | |
| }, | |
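| { | |
| "id": "added-reducer-sketch", | |
| "cell_type": "markdown", | |
| "source": "A minimal sketch (not part of the original lesson) of why `-3` is *added* rather than written: the `count` channel is annotated with `operator.add`, so any value written to it is combined with the existing value by that reducer. The class below mirrors the `AgentState` pattern defined earlier and is shown only to illustrate the annotation.\n```python\nfrom typing import TypedDict, Annotated\nimport operator\n\nclass AgentState(TypedDict):\n    lnode: str                           # plain channel: a new write replaces the old value\n    scratch: str                         # plain channel: a new write replaces the old value\n    count: Annotated[int, operator.add]  # reducer channel: a new write is ADDED to the old value\n\n# With count currently 1, writing -3 is combined as operator.add(1, -3) == -2,\n# which is why the update a few cells below ends up with 'count': -2.\nassert operator.add(1, -3) == -2\n```", | |
| "metadata": {} | |
| }, | |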
| { | |
| "id": "80c3258f-3bab-4d8f-aa93-a01a154abe67", | |
| "cell_type": "code", | |
| "source": "# values={'lnode': 'node_1', 'scratch': 'hi', 'count': 1}\n# MODIFY 'count': 1 -> 'count': -3\nsave_state.values[\"count\"] = -3\n\n# MODIFY 'scratch':'hi' -> 'scratch':'hello' \nsave_state.values[\"scratch\"] = \"hello\"\n\n# Display MODIFIED CURRENT STATE\nsave_state", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "a532e6e0-1709-4d55-b8bb-47d2f20cc003", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': -3}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072575-95c6-67de-8001-1b2f9aaaaebf'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-05T23:53:13.543461+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072575-95c3-6ae5-8000-d8160841c98b'}})\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "c4be47a2-cb70-4706-a173-9ca2dfcb54b0", | |
| "cell_type": "markdown", | |
| "source": "Now update the state. This creates a new entry at the *top*, or *latest* entry in memory. This will become the current state.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "1e536170-6782-4be6-a957-396288181dc0", | |
| "cell_type": "code", | |
| "source": "# Select CURRENT STATE / StateSnapshot of graph for {\"thread_id\": \"2\"} -> \n# thread2 = config = {‘configurable’ : {\"thread_id\": \"2\", \n#'thread_ts': '1f072575-95c6-67de-8001-1b2f9aaaaebf' } } \n# 'save_state.values' -> values = {'lnode': 'node_1', 'scratch': 'hello', 'count': -3} \n# UPDATE, previously MODIFIED and NEW CURRENT STATE for 'thread2’ config \n# Include MODIFIED or NEW values = {'lnode': 'node_1', 'scratch': 'hello', 'count': -3} \n# SAVE this as ‘new_state_entry’ variable. \n# This creates a new entry ’step’:5 at the top, or latest entry in memory history.\nnew_state_entry = graph.update_state(thread2,save_state.values)\n\n# Display UPDATED and previously MODIFIED state / StateSnapshot\n# related to 'step':1 -> This creates a new entry at the top, \n# or latest entry in memory history.\nnew_state_entry", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "759e6628-1e65-43ac-babb-9dd06a2ef804", | |
| "cell_type": "markdown", | |
| "source": "```\n{'configurable': {'thread_id': '2', 'thread_ts': '1f072612-4f8e-6216-8006-72ec5f0f4e52'}}\n```", | |
| "metadata": {} | |
| }, | |
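| { | |
| "id": "added-verify-update-config", | |
| "cell_type": "markdown", | |
| "source": "The value returned by `update_state` above is itself a config that points at the newly written checkpoint. As a quick check (an illustrative snippet, not part of the original notebook), it can be passed straight back to `get_state`:\n```python\n# Fetch the checkpoint that update_state just wrote; it should show the\n# modified values and metadata with 'source': 'update'.\nprint(graph.get_state(new_state_entry))\n```", | |
| "metadata": {} | |
| }, | |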
| { | |
| "id": "095a962a-1b96-472b-891f-48bbf1b4d5b0", | |
| "cell_type": "markdown", | |
| "source": "Current state is at the top. You can match the `thread_ts`.\nNotice the `parent_config`, `thread_ts` of the new node - it is the previous node.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "e13ade64-cd73-4086-93e8-0b9ca09c87eb", | |
| "cell_type": "code", | |
| "source": "# Access the whole history of ‘graph’ States \n# It returns an iterator including ALL the states / StateSnapshots \n# for 'thread config' value -> thread2={'configurable':{\"thread_id\": \"2\"}}\nstates_history = graph.get_state_history(thread2)\n\n# Extract each ‘graph’ state / StateSnapshot per iteration\n# Enumerate each one as i=0,1,2,3,4,5\nfor i, state in enumerate(states_history):\n \n # When i=3,4,5 -> break for loop so DOESN'T print\n # these FIRST (3) ‘graph’ states / StateSnapshots\n if i >= 3: \n break\n \n # When i=0,1,2 print the LATEST (3) ‘graph’ states / StateSnapshots,\n # separated one each other by ENTER character '\\n'\n print(state, '\\n')", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "aa9a6aed-3115-469b-934c-d08565e1e118", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': -2}, next=('Node1',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072612-4f8e-6216-8006-72ec5f0f4e52'}}, metadata={'source': 'update', 'step': 6, 'writes': {'Node2': {'count': -3, 'lnode': 'node_1', 'scratch': 'hello'}}}, created_at='2025-08-06T01:03:20.617005+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f07260d-080c-6b75-8005-2dbe19ddab00'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': 1}, next=('Node1',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f07260d-080c-6b75-8005-2dbe19ddab00'}}, metadata={'source': 'update', 'step': 5, 'writes': {'Node2': {'count': -3, 'lnode': 'node_1', 'scratch': 'hello'}}}, created_at='2025-08-06T01:00:58.901379+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072575-95cf-6b76-8004-3860bb081756'}}) \n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hi', 'count': 4}, next=(), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072575-95cf-6b76-8004-3860bb081756'}}, metadata={'source': 'loop', 'step': 4, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-05T23:53:13.547240+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072575-95cc-6c21-8003-79a9c443815d'}})\n```", | |
| "metadata": {} | |
| }, | |
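| { | |
| "id": "added-parent-config-walk", | |
| "cell_type": "markdown", | |
| "source": "Since every `StateSnapshot` carries a `parent_config`, the same lineage can also be walked explicitly. A small sketch (not part of the original notebook), assuming the `graph` and `thread2` objects defined above:\n```python\n# Start from the current (latest) checkpoint and follow parent_config links backwards.\nsnapshot = graph.get_state(thread2)\nwhile snapshot is not None:\n    print(snapshot.config['configurable']['thread_ts'],\n          snapshot.metadata.get('step'),\n          snapshot.values.get('count'))\n    if snapshot.parent_config is None:\n        break  # reached the initial 'input' checkpoint\n    snapshot = graph.get_state(snapshot.parent_config)\n```", | |
| "metadata": {} | |
| }, | |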
| { | |
| "id": "1139c7a1-cc9c-4598-9bb7-e18e17d8f7b7", | |
| "cell_type": "markdown", | |
| "source": "### Try again with `as_node`\nWhen writing using `update_state()`, you want to define to the graph logic which node should be assumed as the writer. What this does is allow th graph logic to find the node on the graph. After writing the values, the `next()` value is computed by travesing the graph using the new state. In this case, the state we have was written by `Node1`. The graph can then compute the next state as being `Node2`. Note that in some graphs, this may involve going through conditional edges! Let's try this out.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "31456cf8-bb2e-4032-9ec4-bbfa32b59372", | |
| "cell_type": "code", | |
| "source": "# Simulate we're now the \"Node1\" -> as_node=\"Node1\"\n# And the values we'll MODIFY manually to be passed in, as\n# they would be modified by \"Node1\", are the same we changed previously ->\n# save_state.values[\"count\"] = -3\n# save_state.values[\"scratch\"] = \"hello\" \nsim_node1_new_state= graph.update_state(thread2,save_state.values, as_node=\"Node1\")\n\n# Diplay UPDATED NEW CURRENT STATE, creating a new entry with MODIFIED values={}\n# at NEW state / StateSnapshot\nsim_node1_new_state", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "4173fd49-3056-4c12-a3d6-fda3cb2f36ef", | |
| "cell_type": "markdown", | |
| "source": "```\n{'configurable': {'thread_id': '2', 'thread_ts': '1f072ed8-667e-6fe9-8007-de0e5d60026f'}}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "f3935bfa-30b6-422a-871a-3df328665b76", | |
| "cell_type": "code", | |
| "source": "# Access the whole history of ‘graph’ States \n# It returns an iterator including ALL the states / StateSnapshots \n# for 'thread config' value -> {\"thread_id\": \"2\"}\nstates_history = graph.get_state_history(thread2)\n\n# Extract each ‘graph’ state / StateSnapshot per iteration\n# Enumerate each one as i=0,1,2,3,4,5\nfor i, state in enumerate(states_history):\n \n # When i=3,4,5 -> break for loop so DOESN'T print\n # these FIRST (3) ‘graph’ states / StateSnapshots\n if i >= 3: \n break\n \n # When i=0,1,2 print the LATEST (3) ‘graph’ states / StateSnapshots,\n # separated one each other by ENTER character '\\n'\n print(state, '\\n')", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "7c970c6b-2792-48d2-ba90-8d150fa66fcd", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': -5}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072ed8-667e-6fe9-8007-de0e5d60026f'}}, metadata={'source': 'update', 'step': 7, 'writes': {'Node1': {'count': -3, 'lnode': 'node_1', 'scratch': 'hello'}}}, created_at='2025-08-06T17:48:13.625938+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072ea4-92f1-6906-8006-3aa22890c2a8'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': -2}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072ea4-92f1-6906-8006-3aa22890c2a8'}}, metadata={'source': 'update', 'step': 6, 'writes': {'Node1': {'count': -3, 'lnode': 'node_1', 'scratch': 'hello'}}}, created_at='2025-08-06T17:25:02.422227+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e78-6b58-6a5a-8005-51f91a94fabc'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': 1}, next=('Node1',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e78-6b58-6a5a-8005-51f91a94fabc'}}, metadata={'source': 'update', 'step': 5, 'writes': {'Node2': {'count': -3, 'lnode': 'node_1', 'scratch': 'hello'}}}, created_at='2025-08-06T17:05:17.154150+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b111-6fdb-8004-f629da67dfcc'}})\n```", | |
| "metadata": {} | |
| }, | |
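| { | |
| "id": "added-check-next-node", | |
| "cell_type": "markdown", | |
| "source": "Because the write above was attributed to `Node1`, the graph resolves the node to run next as `Node2`. This can be confirmed directly (an illustrative check, not in the original notebook):\n```python\n# After update_state(..., as_node=\"Node1\"), the computed 'next' node should be Node2.\nprint(graph.get_state(thread2).next)   # expected: ('Node2',)\n```", | |
| "metadata": {} | |
| }, | |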
| { | |
| "id": "afabd899-6a66-4644-a9b5-8e458e16fad0", | |
| "cell_type": "markdown", | |
| "source": "`invoke` will run from the current state if not given a particular `thread_ts`. This is now the entry that was just added.", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "28a07a03-7fff-4bd8-b75e-a0b1fcb004b8", | |
| "cell_type": "code", | |
| "source": "# RUN FROM the CURRENT STATE for ‘thread_id’ : ‘2’ \n# if not given a particular ‘thread_ts’ \n# thread = {\"configurable\": {\"thread_id\": str(2)}} \n# which is the MOST RECENT entry ‘step’:7 we just ADDED, \n# and get back the 'graph' state response.\nresponse3 = graph.invoke(None,thread2)\n\n# Display 'graph' state response\nresponse3", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "d0a988e5-0971-47c5-ab9b-35fdf5137ff7", | |
| "cell_type": "markdown", | |
| "source": "```\nnode2, count:-5\nnode1, count:-4\nnode2, count:-3\nnode1, count:-2\nnode2, count:-1\nnode1, count:0\nnode2, count:1\nnode1, count:2\nnode2, count:3\n{'lnode': 'node_2', 'scratch': 'hello', 'count': 4\n```", | |
| "metadata": {} | |
| }, | |
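| { | |
| "id": "added-invoke-from-checkpoint", | |
| "cell_type": "markdown", | |
| "source": "`invoke` can also be resumed from an *earlier* checkpoint by passing a config that includes a specific `thread_ts`, instead of just the `thread_id`. A sketch (not part of the original notebook), reusing the history of `thread2`:\n```python\n# Collect the checkpoint configs (newest first) and resume execution from an\n# earlier one instead of from the latest state. The index is only illustrative.\npast_configs = [st.config for st in graph.get_state_history(thread2)]\ngraph.invoke(None, past_configs[2])\n```", | |
| "metadata": {} | |
| }, | |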
| { | |
| "id": "5544f4d9-1c36-4b14-8506-93beccabd90a", | |
| "cell_type": "code", | |
| "source": "# Access the whole history of ‘graph’ States \n# It returns an iterator including ALL the states / StateSnapshots \n# for 'thread config' value -> thread2={'configurable':{\"thread_id\": \"2\"}} \nstates_history = graph.get_state_history(thread2)\n\n# Extract each ‘graph’ state / StateSnapshot per iteration \nfor state in states_history:\n\n # Print each state / StateSnapshot per iteration, \n # and separates by an enter “\\n” character, from the next one\n print(state, \"\\n\")", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "383bc8f4-eea8-4102-884d-6f825854dbd6", | |
| "cell_type": "markdown", | |
| "source": "```\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hello', 'count': 4}, next=(), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f634-6dc8-8010-3df20957d50c'}}, metadata={'source': 'loop', 'step': 16, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-06T18:26:30.396450+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f631-69bc-800f-add54b647c2b'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': 3}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f631-69bc-800f-add54b647c2b'}}, metadata={'source': 'loop', 'step': 15, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-06T18:26:30.395120+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f62f-6a70-800e-4cdca6aa5855'}}) \n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hello', 'count': 2}, next=('Node1',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f62f-6a70-800e-4cdca6aa5855'}}, metadata={'source': 'loop', 'step': 14, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-06T18:26:30.394319+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f62c-6c92-800d-6c917a571518'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': 1}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f62c-6c92-800d-6c917a571518'}}, metadata={'source': 'loop', 'step': 13, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-06T18:26:30.393144+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f62a-663c-800c-9464567ae2ef'}}) \n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hello', 'count': 0}, next=('Node1',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f62a-663c-800c-9464567ae2ef'}}, metadata={'source': 'loop', 'step': 12, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-06T18:26:30.392162+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f626-6a99-800b-ecfe4428fa55'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': -1}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f626-6a99-800b-ecfe4428fa55'}}, metadata={'source': 'loop', 'step': 11, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-06T18:26:30.390634+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f623-60df-800a-1525ddd10387'}}) \n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hello', 'count': -2}, next=('Node1',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f623-60df-800a-1525ddd10387'}}, metadata={'source': 'loop', 'step': 10, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-06T18:26:30.389155+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f61f-6c9a-8009-5e34fd9654a3'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': -3}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f61f-6c9a-8009-5e34fd9654a3'}}, metadata={'source': 'loop', 'step': 9, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-06T18:26:30.387817+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f61c-6c6b-8008-c8bc4f1e30f2'}}) 
\n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hello', 'count': -4}, next=('Node1',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072f2d-f61c-6c6b-8008-c8bc4f1e30f2'}}, metadata={'source': 'loop', 'step': 8, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-06T18:26:30.386576+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072ed8-667e-6fe9-8007-de0e5d60026f'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': -5}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072ed8-667e-6fe9-8007-de0e5d60026f'}}, metadata={'source': 'update', 'step': 7, 'writes': {'Node1': {'count': -3, 'lnode': 'node_1', 'scratch': 'hello'}}}, created_at='2025-08-06T17:48:13.625938+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072ea4-92f1-6906-8006-3aa22890c2a8'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': -2}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072ea4-92f1-6906-8006-3aa22890c2a8'}}, metadata={'source': 'update', 'step': 6, 'writes': {'Node1': {'count': -3, 'lnode': 'node_1', 'scratch': 'hello'}}}, created_at='2025-08-06T17:25:02.422227+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e78-6b58-6a5a-8005-51f91a94fabc'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hello', 'count': 1}, next=('Node1',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e78-6b58-6a5a-8005-51f91a94fabc'}}, metadata={'source': 'update', 'step': 5, 'writes': {'Node2': {'count': -3, 'lnode': 'node_1', 'scratch': 'hello'}}}, created_at='2025-08-06T17:05:17.154150+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b111-6fdb-8004-f629da67dfcc'}}) \n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hi', 'count': 4}, next=(), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b111-6fdb-8004-f629da67dfcc'}}, metadata={'source': 'loop', 'step': 4, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-06T17:04:57.621694+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b10f-64e0-8003-4818b0a98ff9'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hi', 'count': 3}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b10f-64e0-8003-4818b0a98ff9'}}, metadata={'source': 'loop', 'step': 3, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-06T17:04:57.620593+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b10c-6f49-8002-32e0e1b597ff'}}) \n\nStateSnapshot(values={'lnode': 'node_2', 'scratch': 'hi', 'count': 2}, next=('Node1',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b10c-6f49-8002-32e0e1b597ff'}}, metadata={'source': 'loop', 'step': 2, 'writes': {'Node2': {'count': 1, 'lnode': 'node_2'}}}, created_at='2025-08-06T17:04:57.619623+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b109-6717-8001-92fdbc76dc04'}}) \n\nStateSnapshot(values={'lnode': 'node_1', 'scratch': 'hi', 'count': 1}, next=('Node2',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b109-6717-8001-92fdbc76dc04'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'Node1': {'count': 1, 'lnode': 'node_1'}}}, created_at='2025-08-06T17:04:57.618190+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': 
'1f072e77-b104-6ed8-8000-95e7087a97b5'}}) \n\nStateSnapshot(values={'scratch': 'hi', 'count': 0}, next=('Node1',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b104-6ed8-8000-95e7087a97b5'}}, metadata={'source': 'loop', 'step': 0, 'writes': None}, created_at='2025-08-06T17:04:57.616337+00:00', parent_config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b102-633e-bfff-810e809b9a21'}}) \n\nStateSnapshot(values={'count': 0}, next=('__start__',), config={'configurable': {'thread_id': '2', 'thread_ts': '1f072e77-b102-633e-bfff-810e809b9a21'}}, metadata={'source': 'input', 'step': -1, 'writes': {'count': 0, 'scratch': 'hi'}}, created_at='2025-08-06T17:04:57.615224+00:00', parent_config=None)\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "249f0e87-023d-4d8d-a380-9a2a5dbefffc", | |
| "cell_type": "code", | |
| "source": "", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| } | |
| ] | |
| } |