Last active: July 15, 2025 19:22
DeepLearning.AI - LangChain - Tavily: LangGraph Components
| { | |
| "metadata": { | |
| "kernelspec": { | |
| "name": "python", | |
| "display_name": "Python (Pyodide)", | |
| "language": "python" | |
| }, | |
| "language_info": { | |
| "codemirror_mode": { | |
| "name": "python", | |
| "version": 3 | |
| }, | |
| "file_extension": ".py", | |
| "mimetype": "text/x-python", | |
| "name": "python", | |
| "nbconvert_exporter": "python", | |
| "pygments_lexer": "ipython3", | |
| "version": "3.8" | |
| } | |
| }, | |
| "nbformat_minor": 5, | |
| "nbformat": 4, | |
| "cells": [ | |
| { | |
| "id": "3073527b-c5ea-4df6-9ba8-1978e3221f7a", | |
| "cell_type": "markdown", | |
| "source": "<img src=\"https://media.licdn.com/dms/image/sync/v2/D5627AQGTZahmqVia0w/articleshare-shrink_800/articleshare-shrink_800/0/1735447634970?e=2147483647&v=beta&t=f8-WnRWXXPIOJzOk74aASHT6dfSRE-syA_kxPxjWuSM\"/>", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "c79c1ec8-a3cb-43ec-88a7-8997d0c6cc71", | |
| "cell_type": "markdown", | |
| "source": "# Lesson 2 : LangGraph Components", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "4d96fefc-adda-4f33-8c90-50683b1988f1", | |
| "cell_type": "code", | |
| "source": "# Loads environment variables from a file called '.env'. \n# This function does not return data directly, \n# but loads the variables into the runtime environment.\nfrom dotenv import load_dotenv\n\n# Load environment variables from a '.env' file in the \n# current directory into the process's environment.\n# This is where our OpenAI API key comes from.\n_ = load_dotenv()", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
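To make the cell above concrete, here is a rough stdlib-only sketch of what `load_dotenv` does (the function name `load_dotenv_sketch` is hypothetical; the real python-dotenv library also handles quoting, `export` prefixes, and interpolation, which this sketch does not):

```python
import os

def load_dotenv_sketch(path=".env"):
    """Illustrative stand-in for python-dotenv's load_dotenv():
    parse simple KEY=VALUE lines into os.environ, skipping blank
    lines and '#' comments. Returns False if the file is missing."""
    try:
        with open(path) as f:
            lines = f.readlines()
    except FileNotFoundError:
        return False
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: never clobber a variable already in the environment
        os.environ.setdefault(key.strip(), value.strip())
    return True
```

This is why, after `load_dotenv()`, libraries like `langchain_openai` can pick up `OPENAI_API_KEY` from `os.environ` without it ever appearing in the notebook.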
| { | |
| "id": "b020e8cd-b195-49b8-ae48-44c5409421b7", | |
| "cell_type": "code", | |
| "source": "# 'StateGraph' and 'END' are used to construct graphs. \n# 'StateGraph' allows nodes to communicate, by reading and writing to a common state. \n# The 'END' node is used to signal the completion of a graph, \n# ensuring that cycles eventually conclude.\nfrom langgraph.graph import StateGraph, END\n\n# The typing module in Python, which includes 'TypedDict' and 'Annotated', \n# provides tools for creating advanced type annotations. \n# 'TypedDict' allows you to define {dictionaries}={messages} with specific types for each 'key',\n# while 'Annotated' ADDS new data or messages values to LangChain types.\n# 'TypedDict' and 'Annotated' are used to construct the class AgentState()\nfrom typing import TypedDict, Annotated\n\n# 'operator' module provides efficient functions that correspond to the \n# language's intrinsic operators. It offers functions for mathematical, logical, relational, \n# bitwise, and other operations. For example, operator.add(x, y) is equivalent to x + y.\n# It's useful for situations where you need to treat 'operators' as 'functions()'.\n# 'operator' is used to construct the class AgentState()\nimport operator\n\n# Messages in LangChain are classified into different roles.\n# 'SystemMessage' <- 'system', 'HumanMessage' <- 'user', 'ToolMessage' <- 'tool'\n# 'AnyMessage' <- 'any other'\nfrom langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage\n\n# To start using OpenAI chat models on LangChain, you need to install \n# the 'langchain-openai' library and set the 'OPENAI_API_KEY' environment variable\n# to your OpenAI API key.\n# This is a container/wrapper of OpenAI API in LangChain, exposing a standard\n# interface for ALL Language Models (LM). It means that even though we'll use 'ChatOpenAI',\n# we can change it to any other different Language Model (LM) provider, that\n# LangChain supports, without changing any other lines of code.\nfrom langchain_openai import ChatOpenAI\n\n# Import 'Tavily' tool to be used as search engine.\n# The 'TavilySearchResults' tool allows you to perform queries in \n# the Tavily Search API, returning results in JSON / {message} format.\nfrom langchain_community.tools.tavily_search import TavilySearchResults", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
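Since `operator.add` is central to how the agent state accumulates messages below, it is worth seeing that it is nothing more than the `+` operator packaged as a plain function, which on lists means concatenation:

```python
import operator

# operator.add(x, y) is exactly x + y, exposed as a callable,
# so it can be passed around like any other function (e.g. as a reducer).
assert operator.add(2, 3) == 5

# On lists, + concatenates, which is why operator.add behaves as an
# "append the new messages" rule rather than an overwrite.
old = [{"role": "user", "content": "hi"}]
new = [{"role": "assistant", "content": "hello"}]
combined = operator.add(old, new)
print(len(combined))  # 2
```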
| { | |
| "id": "b0d92119-3125-418f-8005-715eb474782a", | |
| "cell_type": "code", | |
| "source": "# Create the 'Tavily' tool to be used as search engine, by initializing\n# 'TavilySearchResults' with 'max_results=4', meaning we'll\n# only get back at most (4) responses from the search API.\ntool = TavilySearchResults(max_results=4) # increased number of results\n\n# Display Tavily 'tool' type -> \n# <class 'langchain_community.tools.tavily_search.tool.TavilySearchResults'>\nprint('Tavily tool type:',type(tool))\n\n# Display Tavily 'tool' name -> 'tavily_search_results_json' \nprint('Tavily tool name:',tool.name)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "03d47a27-0843-4c37-9551-6a5bd97e8d78", | |
| "cell_type": "markdown", | |
| "source": "```\nTavily tool type: <class 'langchain_community.tools.tavily_search.tool.TavilySearchResults'>\nTavily tool name: tavily_search_results_json\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "39e384ad-0e55-4696-8233-a9ad77b4cd5b", | |
| "cell_type": "markdown", | |
| "source": "```\n> If you are not familiar with Python typing annotations, you can refer to the [Python documentation](https://docs.python.org/3/library/typing.html).\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "42faa939-3264-44e1-8c98-5a9666f5c9b9", | |
| "cell_type": "code", | |
| "source": "# Simple Agent State\nclass AgentState(TypedDict):\n \n # Annotated list of messages [ {message1}, {message2}, ...] \n # that will be ADDED to over time with 'operator.add', \n # key:value -> messages:list of messages -> messages:[ {message1}, {message2}, …] \n messages: Annotated[list[AnyMessage], operator.add]", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
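The effect of annotating `messages` with `operator.add` can be simulated in plain Python. This is a rough sketch of the reducer semantics only, not LangGraph's actual implementation, and the `apply_update` helper is hypothetical:

```python
import operator
from typing import Annotated, TypedDict, get_type_hints

class AgentState(TypedDict):
    # The second argument of Annotated is the reducer used to merge updates.
    messages: Annotated[list, operator.add]

def apply_update(state, update):
    """Hypothetical illustration: for each updated key, look up the reducer
    stored in the Annotated metadata and combine old and new values with it,
    instead of overwriting the old value."""
    hints = get_type_hints(AgentState, include_extras=True)
    out = dict(state)
    for key, value in update.items():
        reducer = hints[key].__metadata__[0]  # here: operator.add
        out[key] = reducer(out.get(key, []), value)
    return out

state = {"messages": ["user: hi"]}
state = apply_update(state, {"messages": ["assistant: hello"]})
print(state["messages"])  # both messages survive: the update was appended
```

This is why each node below returns only its *new* messages: the annotation, not the node, is responsible for accumulating the history.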
| { | |
| "id": "9d043860-688a-4f62-9a6e-728f1872dd32", | |
| "cell_type": "markdown", | |
| "source": "```\n> Note: in `take_action` below, some logic was added to cover the case that the LLM returned a non-existent tool name. Even with function calling, LLMs can still occasionally hallucinate. Note that all that is done is instructing the LLM to try again! An advantage of an agentic organization.\n```", | |
| "metadata": {} | |
| }, | |
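The retry behavior the note describes can be sketched without LangChain. The tool registry and message shapes here are simplified stand-ins, not the real `ToolMessage` objects used below:

```python
def run_tool_calls(tool_calls, tools):
    """For each requested call, execute the named tool if it exists in the
    registry; otherwise feed back a 'bad tool name, retry' message so the
    model can correct itself on the next loop iteration."""
    results = []
    for t in tool_calls:
        if t["name"] not in tools:
            result = "bad tool name, retry"  # instruct the LLM to try again
        else:
            result = tools[t["name"]](t["args"])
        results.append({"tool_call_id": t["id"], "name": t["name"],
                        "content": str(result)})
    return results

tools = {"search": lambda args: f"results for {args['query']}"}
calls = [
    {"id": "1", "name": "search", "args": {"query": "weather"}},
    {"id": "2", "name": "serach", "args": {}},  # hallucinated tool name
]
print(run_tool_calls(calls, tools))
```

Note that the hallucinated call is not dropped: it still produces a tool result, just one whose content tells the model to retry.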
| { | |
| "id": "555a54a2-d54c-4e6d-8ca2-207ea6d0e1e5", | |
| "cell_type": "code", | |
| "source": "# Create AI 'Agent' class (obj)\nclass Agent:\n \n # This Agent will be parametrized by (3) things: A 'model' to use,\n # a 'tool or function' to call and a 'system' msg.\n def __init__(self, model, tools, system=\"\"):\n \n # As before, let's save the 'system' msg as a class \n # attribute/variable -> self.system, so it can be used/modified\n # by ALL functions/instances that compound the class\n self.system = system\n \n ### Start creating the graph (obj) ###\n \n # 1st initialize the 'StateGraph' with the 'AgentState' class as input\n # without any nodes or edges attached to it\n graph = StateGraph(AgentState)\n \n # Add 'call_openai()' function called 'llm' node. \n # Use 'self.function_name()'\n graph.add_node(\"llm\", self.call_openai)\n \n # Add 'take_action()' function called 'action' node\n # Use 'self.function_name()'\n graph.add_node(\"action\", self.take_action)\n \n # Add 'exists_action()' function as conditional edge\n # Use 'self.function_name()'\n # Edge Input -> 'llm' node \n # Question -> is there a recommended action?\n # {Dictionary}: How to MAP the response of the function\n # to the next node to go to.\n # if 'exists_action()' returns True -> Executes 'action' node, \n # if 'exists_action()' returns False -> Goes to 'END' node and it finishes\n graph.add_conditional_edges(\n \"llm\",\n self.exists_action,\n {True: \"action\", False: END}\n \n ) # Finish 'add_conditional_edges'\n \n # Add a regular edge 1st arg: Start of edge (->) 2nd arg: End of edge \n # From 'action' node -> To 'llm' node\n graph.add_edge(\"action\", \"llm\")\n \n # Set the entry point of the graph as 'llm' node\n graph.set_entry_point(\"llm\")\n \n # 'obj.compile()' the graph and update/overwrite the same graph (obj)\n # Use 'self.obj' to save this as an attribute/variable over ALL the class\n # Do this after we've done all the setups,\n # and we'll turn it into a LangChain runnable/executable.\n # A LangChain runnable exposes a standard interface for calling\n # and invoking this graph (obj).\n self.graph = graph.compile()\n \n # We'll also save the tools (obj) that we passed\n # We'll pass in, the list of the tools passed into the 'Agent' class\n # Create a dictionary, getting the 'name' of the tool\n # with 'tool.name' as key, and 't' tool as value. \n # Save that {dictionary} as an attribute/variable\n # used over ALL the class, updating/overwriting 'self.tools'\n self.tools = {t.name: t for t in tools}\n \n # We'll also save the model (obj) that we passed\n # This lets the model (LLM) bind (link) tools,\n # so it knows that it has these tools available to call\n self.model = model.bind_tools(tools)\n \n ### Finish creating graph (obj) ###\n \n ### Finish initializing Agent class general attributes/variables ###\n \n ### Implement 'functions()' as methods on the 'Agent' class ###\n \n # Create 'function()' representing the 'conditional edge' node.\n # After this function is executed, then 'graph.add_conditional_edges()' \n # will return 'True' key, when the previous model 'llm' node, recommended an 'action' \n # to take on, so it executes the 'action' node. \n # Otherwise it returns 'False' key, so this will execute the 'END' node and finish.\n # It also takes in the 'AgentState' class as input.\n def exists_action(self, state: AgentState):\n \n # At state:{AgentState dict} -> state:{'messages':list of messages}\n # so select key 'messages' obtaining\n # state['messages'] = list of messages = [ {message1}, {message2}, …]\n # Then select the LAST message from state with [-1] -> {message_N} \n # which is the most recent calling response, from the language model.\n result = state['messages'][-1]\n \n # We're going to return a 'True' boolean, when the length of 'result.tool_calls' > 0\n # This means, if there's any 'tool_calls()' attribute or a [list of tool_calls], \n # (so its len > 0), we're going to return 'True'.\n # If there's NOT then, we're going to return 'False'.\n return len(result.tool_calls) > 0\n \n # Create 'function()' representing the 'llm' node\n # Use the 'AgentState' class as input argument, as ALL the nodes and edges do. \n # 'AgentState' is a historic dict of messages -> messages:{list of messages}\n # key:value -> state:AgentState -> state:{AgentState dict}\n def call_openai(self, state: AgentState):\n \n # At state:{AgentState dict} -> state:{'messages':list of messages}\n # so select key 'messages' obtaining\n # state['messages'] = list of messages = [ {message1}, {message2}, …]\n messages = state['messages']\n \n # If 'self.system'='system' msg/prompt is NOT empty\n if self.system:\n \n # Add list of messages = [ {message1}, {message2}, …] to\n # 'system' msg and OVERWRITE 'messages' list of messages\n messages = [SystemMessage(content=self.system)] + messages\n \n # Then call the 'self.model' (LLM) using \n # 'self.model.invoke(list of messages)' method,\n # and return a new 'assistant' msg called 'message' -> {message}\n message = self.model.invoke(messages)\n \n # UPDATE the returned dictionary \n # (Use the same 'messages' key name, as we did at 'AgentState')\n # and include only new 'assistant' msg at 'messages' key -> {message} in a list \n # {'messages': list with assistant msg} -> {'messages': [ {message} ] }\n # Because we have ANNOTATED the 'messages' key attribute, on the\n # 'AgentState' with 'operator.add', this ISN'T overwriting it.\n # It's ADDED to the historic of 'messages' at 'AgentState'\n return {'messages': [message]}\n \n # Create 'function()' representing the 'action' node, \n # done to execute the recommended 'action' by Agent/Assistant\n def take_action(self, state: AgentState):\n \n # At state:{AgentState dict} -> state:{'messages':list of messages}\n # so select key 'messages' obtaining\n # state['messages'] = list of messages = [ {message1}, {message2}, …]\n # Then select the LAST message with [-1] -> {message_N}\n # If we have gotten into this state, the Language model must have\n # recommended an 'action' to be done, so we need to call some tools,\n # to execute that recommended 'action'. That means there will be the\n # 'tool_calls()' attribute to execute this LAST 'action' msg \n # present in the 'AgentState' historic list of messages.\n # Then UPDATE/OVERWRITE the 'tool_calls()' attribute\n tool_calls = state['messages'][-1].tool_calls\n \n # Initialize 'results' as an empty list [], \n # to be filled with 'assistant' responses\n results = []\n \n # 'tool_calls()' can also be a [list of tool_calls], so a lot of the\n # modern models support parallel tool or parallel function calling\n # so we can LOOP over these [list of tool_calls] and assign each\n # tool_call() to be executed as 't'.\n for t in tool_calls:\n print(f\"Calling: {t}\") # Display iterated tool\n \n # Find if recommended tool 't' name -> t['name'] is NOT included \n # in the dictionary of tools -> 'self.tools' = {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()} \n # that we created, so, we check for bad tool name from LLM.\n if not t['name'] in self.tools:\n \n # When tool t['name'] is NOT included in dict of tools -> \n # 'self.tools' = {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()} \n # then print \"bad tool name\" \n print(\"\\n ....bad tool name....\")\n \n # And instruct/prompt a 'user' msg to LLM 'assistant', \n # to retry when it's \"bad tool name\"\n result = \"bad tool name, retry\"\n \n # Otherwise, when recommended tool 't' name -> t['name'] is included in the dictionary of tools -> \n # 'self.tools'= {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()} that we created.\n else:\n \n # Get the 'name' -> t['name'] of each iterated tool t = 'tool()', \n # and select each tool at 'self.tools' = {t0.name: t0(), t1.name: t1(),..., tN-1().name: tN-1()}\n # dictionary with that 'name'.\n # Then call '.invoke()' method passing in the input arguments \n # of each t = 'tool()' function call -> t['args'] -> 'args': {'query': 'user msg/prompt'}\n # Execute t='tool()' or 'action', so get a \n # resultant assistant 'string' -> Observation\n result = self.tools[t['name']].invoke(t['args'])\n \n # Append the previous assistant 'string' observation, as a 'ToolMessage' containing\n # \"tool_id, tool_name, observation\" into a 'results' list -> \n # [{\"tool_id, tool_name, observation\"1}, {\"tool_id, tool_name, observation\"2}, ...]\n # each iteration\n results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))\n \n # Let's get back to graph beginning, at 'llm' node.\n print(\"Back to the model!\")\n \n # UPDATE the returned dictionary \n # (Use the same 'messages' key name, as we did at 'AgentState')\n # Add 'results' list of observations at 'messages' key -> \n # [{\"tool_id, tool_name, observation\"1}, {\"tool_id, tool_name, observation\"2}, ...,\n # {\"tool_id, tool_name, observation\"N}]\n # {'messages': results} -> {'messages': list of observations} -> \n # {'messages': [{\"tool_id, tool_name, observation\"1}, {\"tool_id, tool_name, observation\"2}, ...,\n # {\"tool_id, tool_name, observation\"N}]\n # Because we have ANNOTATED the 'messages' key attribute, on the\n # 'AgentState' with 'operator.add', this ISN'T overwriting it.\n # It's ADDED to the historic of 'messages' at 'AgentState'\n return {'messages': results}", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
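The control flow the graph above encodes (llm → action while tool calls exist, otherwise END) can be simulated with a stub model in plain Python. `StubModel` and `run` are hypothetical stand-ins for the real LLM and for LangGraph's compiled runnable:

```python
class StubModel:
    """Pretends to be an LLM: requests one tool call, then gives a final answer."""
    def __init__(self):
        self.turn = 0
    def invoke(self, messages):
        self.turn += 1
        if self.turn == 1:
            # First turn: the "model" asks for a tool call
            return {"content": "", "tool_calls": [{"name": "search"}]}
        # Later turns: no more tool calls, so the graph will reach END
        return {"content": "final answer", "tool_calls": []}

def exists_action(state):
    # Same test as Agent.exists_action: are there pending tool calls?
    return len(state["messages"][-1]["tool_calls"]) > 0

def run(model, state):
    # 'llm' node first, then loop through 'action' while tools are requested
    state["messages"] = state["messages"] + [model.invoke(state["messages"])]
    while exists_action(state):
        # 'action' node: append a tool observation, then back to 'llm'
        state["messages"] = state["messages"] + [{"content": "tool result",
                                                  "tool_calls": []}]
        state["messages"] = state["messages"] + [model.invoke(state["messages"])]
    return state

final = run(StubModel(), {"messages": [{"content": "hi", "tool_calls": []}]})
print(final["messages"][-1]["content"])  # final answer
```

The loop terminates because the model eventually returns a message with no tool calls, which is exactly the role of the `{True: "action", False: END}` mapping in `add_conditional_edges`.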
| { | |
| "id": "63e57c77-cbb9-48f9-b119-cef84ea70bd3", | |
| "cell_type": "code", | |
| "source": "# 'system' msg / prompt\nprompt = \"\"\"You are a smart research assistant. Use the search engine to look up information. \\\nYou are allowed to make multiple calls (either together or in sequence). \\\nOnly look up information when you are sure of what you want. \\\nIf you need to look up some information before asking a follow up question, you are allowed to do that!\n\"\"\"\n# Use \"gpt-3.5-turbo\" as model from OpenAI\n# reduce inference cost\nmodel = ChatOpenAI(model=\"gpt-3.5-turbo\")\n\n# Call AI 'Agent' class with inputs:\n# model = ChatOpenAI(model=\"gpt-3.5-turbo\")\n# tools = [tools(objs)] = [Search Engine(obj)]\n# system = 'system' msg/prompt\nabot = Agent(model, [tool], system=prompt)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "8f15af07-816a-42d7-b364-47a4c373a1b5", | |
| "cell_type": "code", | |
| "source": "# Get 'Image' function for plotting a png Agent's 'graph'\nfrom IPython.display import Image \n\n# Getting 'abot' Agent class (obj), let's apply\n# '.graph.get_graph().draw_png()' over that, to plot 'Image' of graph\nImage(abot.graph.get_graph().draw_png())", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "0b563126-3345-4a41-a2d6-ed549a60e8b1", | |
| "cell_type": "markdown", | |
| "source": "<img src=\"https://miro.medium.com/v2/resize:fit:1400/1*eJ3paG6HiT7dGBwuilrchA.png\"/>", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "d014976c-d54f-4ce8-94a4-3199dd67a36b", | |
| "cell_type": "code", | |
| "source": "# 'user' msg = \"What is the weather in sf?\"\n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages].\n# We have to do this, because the simple 'AgentState' class expects\n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it\n# conform with that.\nmessages = [HumanMessage(content=\"What is the weather in sf?\")]\n\n# Having this [list of messages] input for the dict at the 'AgentState' class,\n# call 'agent(obj).graph.invoke(dict)' -> \n# with dict={\"messages\":[list of messages]} ={\"messages\":messages}\n# and get back an 'Agent' response.\n# (Recall we added some print statements in the 'Agent' class)\n# ADD ['user' msg/prompt] into 'messages' key which contains historic list of messages\nresult = abot.graph.invoke({\"messages\": messages})", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "5b4d5c2b-4bd6-4277-b0b5-3caffe91150f", | |
| "cell_type": "markdown", | |
| "source": "```\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_PvPN1v7bHUxOdyn4J2xJhYOX'}\nBack to the model!\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "d7a50071-a661-4012-af3a-154131c9a542", | |
| "cell_type": "code", | |
| "source": "# Print out the model or 'Agent' response = dict {‘messages’: [list of messages] }\nresult", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "e04ad37f-cbe4-4adf-8dc7-73d7e7ef18f4", | |
| "cell_type": "markdown", | |
| "source": "```\n{'messages': [HumanMessage(content='What is the weather in sf?'),\n AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_PvPN1v7bHUxOdyn4J2xJhYOX', 'function': {'arguments': '{\"query\":\"weather in San Francisco\"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 153, 'total_tokens': 174, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-587da8d7-a096-4872-b88f-380eb27a6256-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_PvPN1v7bHUxOdyn4J2xJhYOX'}]),\n ToolMessage(content='[{\\'url\\': \\'https://weathershogun.com/weather/usa/ca/san-francisco/480/july/2025-07-15\\', \\'content\\': \"Tuesday, July 15, 2025. San Francisco, CA - Weather Forecast \\\\n\\\\n===============\\\\n\\\\n☰\\\\n\\\\nSan Francisco, CA\\\\n\\\\nImage 1: WeatherShogun.com\\\\n\\\\nHomeContactBrowse StatesPrivacy PolicyTerms and Conditions\\\\n\\\\n°F)°C)\\\\n\\\\n❮\\\\n\\\\nTodayTomorrowHourly7 days30 daysJuly\\\\n\\\\n❯\\\\n\\\\nSan Francisco, California Weather: \\\\n\\\\nTuesday, July 15, 2025\\\\n\\\\nDay 64°\\\\n\\\\nNight 55°\\\\n\\\\nPrecipitation 0 %\\\\n\\\\nWind 15 mph\\\\n\\\\nUV Index (0 - 11+)10\\\\n\\\\nWednesday\\\\n\\\\n Hourly\\\\n Today\\\\n Tomorrow\\\\n 7 days\\\\n 30 days\\\\n\\\\nWeather Forecast History\\\\n------------------------ [...] Last Year\\'s Weather on This Day (July 15, 2024)\\\\n\\\\n### Day\\\\n\\\\n64°\\\\n\\\\n### Night\\\\n\\\\n55°\\\\n\\\\nPlease note that while we strive for accuracy, the information provided may not always be correct. Use at your own risk.\\\\n\\\\n© Copyright by WeatherShogun.com\"}, {\\'url\\': \\'https://www.weather25.com/north-america/usa/california/san-francisco?page=month&month=July\\', \\'content\\': \\'weather25.com\\\\nSearch\\\\nweather in United States\\\\nRemove from your favorite locations\\\\nAdd to my locations\\\\nShare\\\\nweather in United States\\\\n\\\\n# San Francisco weather in July 2025\\\\n\\\\nThe temperatures in San Francisco in July are comfortable with low of 14°C and and high up to 25°C.\\\\n\\\\nThere is little to no rain in San Francisco during July, so it’s a lot easier to explore the city. Just remember to dress in warm layers, as it can still get pretty chilly. [...] | 13 Partly cloudy 20° /13° | 14 Partly cloudy 21° /13° | 15 Partly cloudy 21° /13° | 16 Sunny 21° /13° | 17 Partly cloudy 21° /12° | 18 Partly cloudy 21° /13° | 19 Partly cloudy 20° /13° |\\\\n| 20 Partly cloudy 20° /13° | 21 Partly cloudy 20° /12° | 22 Sunny 22° /13° | 23 Sunny 21° /14° | 24 Sunny 22° /14° | 25 Partly cloudy 21° /13° | 26 Sunny 20° /13° |\\\\n| 27 Sunny 22° /14° | 28 Sunny 21° /15° | 29 Partly cloudy 20° /13° | 30 Partly cloudy 21° /14° | 31 Partly cloudy 21° /14° | | | [...] | Sun | Mon | Tue | Wed | Thu | Fri | Sat |\\\\n| --- | --- | --- | --- | --- | --- | --- |\\\\n| | | 1 Cloudy 21° /12° | 2 Partly cloudy 21° /13° | 3 Partly cloudy 21° /12° | 4 Sunny 21° /13° | 5 Sunny 21° /14° |\\\\n| 6 Sunny 20° /13° | 7 Mist 20° /12° | 8 Sunny 21° /12° | 9 Partly cloudy 21° /13° | 10 Sunny 22° /14° | 11 Sunny 23° /14° | 12 Sunny 22° /14° |\\'}, {\\'url\\': \\'https://en.climate-data.org/north-america/united-states-of-america/california/san-francisco-385/t/july-7/\\', \\'content\\': \"| 14. July | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 55 °F | 13 °C | 56 °F | 0.0 mm | 0.0 inch. |\\\\n| 15. July | 16 °C | 62 °F | 22 °C | 72 °F | 13 °C | 55 °F | 13 °C | 56 °F | 0.0 mm | 0.0 inch. |\\\\n| 16. July | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 55 °F | 13 °C | 56 °F | 0.1 mm | 0.0 inch. |\\\\n| 17. July | 16 °C | 62 °F | 22 °C | 71 °F | 13 °C | 55 °F | 13 °C | 56 °F | 0.0 mm | 0.0 inch. |\\\\n| 18. July | 16 °C | 62 °F | 22 °C | 71 °F | 13 °C | 55 °F | 13 °C | 56 °F | 0.1 mm | 0.0 inch. | [...] San Francisco experiences mild temperatures in July, reflecting its unique climate influenced by the Pacific Ocean. The average temperature during this month is around 61.3°F (16.3°C). Daytime highs can reach up to approximately 71.2°F (21.8°C), providing a comfortable warmth perfect for exploring the city\\'s outdoor attractions and vibrant neighborhoods. Nighttime lows, on the other hand, typically drop to about 54.9°F (12.7°C), which might require a light jacket if you\\'re out after sunset. [...] | Humidity(%) | 79% | 80% | 78% | 72% | 70% | 69% | 74% | 74% | 71% | 70% | 76% | 78% |\\\\n| Rainy days (d) | 8 | 7 | 6 | 4 | 2 | 1 | 0 | 0 | 0 | 2 | 5 | 7 |\\\\n| avg. Sun hours (hours) | 5.9 | 6.5 | 7.8 | 9.1 | 9.1 | 9.3 | 7.4 | 6.8 | 7.6 | 7.3 | 6.8 | 5.8 |\"}, {\\'url\\': \\'https://world-weather.info/forecast/usa/san_francisco/july-2025/\\', \\'content\\': \"South San Francisco+55°\\\\n\\\\nVallejo+63°\\\\n\\\\nPalo Alto+68°\\\\n\\\\nPacifica+54°\\\\n\\\\nBerkeley+63°\\\\n\\\\nCastro Valley+59°\\\\n\\\\nConcord+66°\\\\n\\\\nDaly City+55°\\\\n\\\\nFairfax+61°\\\\n\\\\nShoreview+61°\\\\n\\\\nMinimum and maximum\\\\n\\\\nworld\\'s temperature today\\\\n\\\\n_Bolivia_\\\\n\\\\nColchani day+48°F night+18°F\\\\n\\\\n_UAE_\\\\n\\\\nKhawr Fakkān day+120°F night+97°F\\\\n\\\\nWeather forecast on your site Install _San Francisco_ +57°\\\\n\\\\nServices\\\\n\\\\nSupport\\\\n\\\\n User agreement\\\\n Feedback\\\\n About Us\\\\n\\\\nSearch\\\\n\\\\nCity or place… [...] JanFebMarAprMayJunJulAugSepOctNovDec\\\\n\\\\nJuly\\\\n----\\\\n\\\\nStart Week On\\\\n\\\\nSunday\\\\n\\\\nMonday\\\\n\\\\n Sun\\\\n Mon\\\\n Tue\\\\n Wed\\\\n Thu\\\\n Fri\\\\n Sat\\\\n\\\\n 1 +70°\\\\n+59°\\\\n\\\\n 2 +72°\\\\n+61°\\\\n\\\\n 3 +70°\\\\n+61°\\\\n\\\\n 4 +70°\\\\n+59°\\\\n\\\\n 5 +70°\\\\n+59°\\\\n\\\\n 6 +70°\\\\n+59°\\\\n\\\\n 7 +70°\\\\n+59°\\\\n\\\\n 8 +70°\\\\n+59°\\\\n\\\\n 9 +70°\\\\n+59°\\\\n\\\\n 10 +72°\\\\n+61°\\\\n\\\\n 11 +72°\\\\n+61°\\\\n\\\\n 12 +70°\\\\n+61°\\\\n\\\\n 13 +70°\\\\n+59°\\\\n\\\\n 14 +70°\\\\n+61°\\\\n\\\\n 15 +72°\\\\n+59°\\\\n\\\\n 16 +72°\\\\n+59°\\\\n\\\\n 17 +70°\\\\n+61°\\\\n\\\\n 18 +72°\\\\n+61°\\\\n\\\\n 19 +70°\\\\n+61°\\\\n\\\\n 20 +72°\\\\n+61°\\\\n\\\\n 21 +70°\\\\n+61°\\\\n\\\\n 22 +72°\\\\n+61° [...] Weather in San Francisco in July 2025 (California) - Detailed Weather Forecast for a Month\\\\n\\\\n===============\\\\n\\\\n[](\\\\n\\\\nAdd the current city\\\\n\\\\nSearch \\\\n\\\\n Weather\\\\n Archive\\\\n Weather Widget\\\\n\\\\n°F\\\\n\\\\n World\\\\n United States\\\\n California\\\\n Weather in San Francisco\\\\n\\\\nWeather in San Francisco in July 2025\\\\n=====================================\\\\n\\\\nSan Francisco Weather Forecast for July 2025, is based on previous years\\' statistical data.\\\\n\\\\n201520162017201820192020202120222023202420252026\"}]', name='tavily_search_results_json', tool_call_id='call_PvPN1v7bHUxOdyn4J2xJhYOX'),\n AIMessage(content='The weather in San Francisco on Tuesday, July 15, 2025, is as follows:\\n- Day: 64°F\\n- Night: 55°F\\n- Precipitation: 0%\\n- Wind: 15 mph\\n- UV Index: 10\\n\\nIt seems to be a comfortable day with mild temperatures.', response_metadata={'token_usage': {'completion_tokens': 68, 'prompt_tokens': 2318, 'total_tokens': 2386, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-473b112e-d094-46ee-8ada-f0d9bf6a40a4-0')]}\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "2eb370f8-3e70-4394-89d9-93bbc781f7c4", | |
| "cell_type": "code", | |
| "source": "# Get the Final / LAST message from ‘Agent’, at this list of messages,\n# using '.content' attribute\nresult['messages'][-1].content", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "dc6bc9c2-bf1f-411d-960b-47ae9e8b9a46", | |
| "cell_type": "markdown", | |
| "source": "```\n'The weather in San Francisco on Tuesday, July 15, 2025, is as follows:\\n- Day: 64°F\\n- Night: 55°F\\n- Precipitation: 0%\\n- Wind: 15 mph\\n- UV Index: 10\\n\\nIt seems to be a comfortable day with mild temperatures.'\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "0b0bc39a-4768-4d09-a3a3-acecb15685d4", | |
| "cell_type": "code", | |
| "source": "# 'user' msg = \"What is the weather in SF and LA?\"\n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages].\n# We have to do this, because the simple 'AgentState' class expects\n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it\n# conform with that.\nmessages = [HumanMessage(content=\"What is the weather in SF and LA?\")]\n\n# Having this [list of messages] input for the dict at the 'AgentState' class,\n# call 'agent(obj).graph.invoke(dict)' -> \n# with dict={\"messages\":[list of messages]} ={\"messages\":messages}\n# and get back an 'Agent' response.\n# (Recall we added some print statements in the 'Agent' class)\n# ADD ['user' msg/prompt] into 'messages' key which contains historic list of messages\nresult = abot.graph.invoke({\"messages\": messages})", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "5c361bd7-e1cd-43bb-adc0-3e24238e5de2", | |
| "cell_type": "markdown", | |
| "source": "```\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_1SqGYuEtOOFN1yiIHSQTPnvE'}\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_8RiM72Y7G8V7c3HEEAML1SKP'}\nBack to the model!\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "2ec14733-2c86-4b67-8e56-ff3730822180", | |
| "cell_type": "code", | |
| "source": "# Get the Final / LAST message from ‘Agent’, at this list of messages,\n# using '.content' attribute\nresult['messages'][-1].content", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "f47c41d3-daa2-4f6c-b657-ee872f9511f7", | |
| "cell_type": "markdown", | |
| "source": "```\n'The weather in San Francisco for today, July 15, 2025, is as follows:\\n- Day: 64°F, Night: 55°F\\n- Precipitation: 0%\\n- Wind: 15 mph\\n- UV Index: 10\\n\\nThe weather in Los Angeles for July 2025 is expected to be hot with average temperatures ranging from 20°C to 30°C. There are no rainy days expected in Los Angeles during July.\\n\\nIf you need more specific details or forecasts for the upcoming days, feel free to ask!'\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "ec374514-2a6d-47ab-9fed-185fb21492cc", | |
| "cell_type": "code", | |
| "source": "# Results may vary per run and over time as search information and models change.\n# query = 'user' msg / prompt\nquery = \"Who won the super bowl in 2024? In what state is the winning team headquarters located? \\\nWhat is the GDP of that state? Answer each question.\"\n\n# Create a 'HumanMessage' representing the 'user' msg, \n# and put it as a [list of messages].\n# We have to do this, because the simple 'AgentState' class expects\n# as input, a 'messages' key attribute to work, \n# with a [list of messages] as dict value, so we need to make it\n# conform with that.\nmessages = [HumanMessage(content=query)]\n\n# Use \"gpt-4o\" as model from OpenAI\n# (requires more advanced model to produce more consistent results)\nmodel = ChatOpenAI(model=\"gpt-4o\")\n\n# Call AI 'Agent' class with inputs:\n# model = ChatOpenAI(model=\"gpt-4o\")\n# tools = [tools(objs)] = [Search Engine(obj)]\n# system = 'system' msg/prompt\nabot = Agent(model, [tool], system=prompt)\n\n# Having this [list of messages] input for the dict at the 'AgentState' class,\n# call 'agent(obj).graph.invoke(dict)' -> \n# with dict={\"messages\":[list of messages]} ={\"messages\":messages}\n# and get back an 'Agent' response.\n# (Recall we added some print statements in the 'Agent' class)\n# ADD ['user' msg/prompt/query] into 'messages' key which contains historic list of messages\nresult = abot.graph.invoke({\"messages\": messages})", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "a60a865b-9d65-49cb-9984-efcc0f167d7e", | |
| "cell_type": "markdown", | |
| "source": "```\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': '2024 Super Bowl winner'}, 'id': 'call_HBUU1Lo9WSgKCPKYCAStSb7g'}\nBack to the model!\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'Kansas City Chiefs team headquarters'}, 'id': 'call_niTRnrKss6s7ah9QPmAgrEyt'}\nCalling: {'name': 'tavily_search_results_json', 'args': {'query': 'GDP of Missouri 2023'}, 'id': 'call_Lxt9qYVdwDxQi3axRrXjuQUR'}\nBack to the model!\n```", | |
| "metadata": {} | |
| }, | |
| { | |
| "id": "71a3aaf4-4c98-41ac-b94a-02e3a497e79b", | |
| "cell_type": "code", | |
| "source": "# Get the Final / LAST message from ‘Agent’, at this list of messages,\n# using '.content' attribute\nprint(result['messages'][-1].content)", | |
| "metadata": { | |
| "trusted": true | |
| }, | |
| "outputs": [], | |
| "execution_count": null | |
| }, | |
| { | |
| "id": "f362c61c-22ff-4f6e-a3ec-1d7004e61e53", | |
| "cell_type": "markdown", | |
| "source": "```\n1. **Who won the Super Bowl in 2024?**\n - The Kansas City Chiefs won the Super Bowl in 2024, defeating the San Francisco 49ers with a score of 25-22 in overtime.\n\n2. **In what state is the winning team's headquarters located?**\n - The Kansas City Chiefs' headquarters is located in Kansas City, Missouri.\n\n3. **What is the GDP of that state?**\n - In 2023, Missouri's Gross Domestic Product (GDP) was approximately $348.49 billion in inflation-adjusted (chained 2017) dollars.\n```", | |
| "metadata": {} | |
| } | |
| ] | |
| } |