{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "-ETtu9CvVMDR"
},
"source": [
"<h1>Chapter 7 - Advanced Text Generation Techniques and Tools</h1>\n",
"<i>Going beyond prompt engineering.</i>\n",
"\n",
"<a href=\"https://www.amazon.com/Hands-Large-Language-Models-Understanding/dp/1098150961\"><img src=\"https://img.shields.io/badge/Buy%20the%20Book!-grey?logo=amazon\"></a>\n",
"<a href=\"https://www.oreilly.com/library/view/hands-on-large-language/9781098150952/\"><img src=\"https://img.shields.io/badge/O'Reilly-white.svg?logo=data:image/svg%2bxml;base64,PHN2ZyB3aWR0aD0iMzQiIGhlaWdodD0iMjciIHZpZXdCb3g9IjAgMCAzNCAyNyIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KPGNpcmNsZSBjeD0iMTMiIGN5PSIxNCIgcj0iMTEiIHN0cm9rZT0iI0Q0MDEwMSIgc3Ryb2tlLXdpZHRoPSI0Ii8+CjxjaXJjbGUgY3g9IjMwLjUiIGN5PSIzLjUiIHI9IjMuNSIgZmlsbD0iI0Q0MDEwMSIvPgo8L3N2Zz4K\"></a>\n",
"<a href=\"https://github.com/HandsOnLLM/Hands-On-Large-Language-Models\"><img src=\"https://img.shields.io/badge/GitHub%20Repository-black?logo=github\"></a>\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/HandsOnLLM/Hands-On-Large-Language-Models/blob/main/chapter07/Chapter%207%20-%20Advanced%20Text%20Generation%20Techniques%20and%20Tools.ipynb)\n",
"\n",
"---\n",
"\n",
"This notebook is for Chapter 7 of the [Hands-On Large Language Models](https://www.amazon.com/Hands-Large-Language-Models-Understanding/dp/1098150961) book by [Jay Alammar](https://www.linkedin.com/in/jalammar) and [Maarten Grootendorst](https://www.linkedin.com/in/mgrootendorst/).\n",
"\n",
"---\n",
"\n",
"<a href=\"https://www.amazon.com/Hands-Large-Language-Models-Understanding/dp/1098150961\">\n",
"<img src=\"https://raw.githubusercontent.com/HandsOnLLM/Hands-On-Large-Language-Models/main/images/book_cover.png\" width=\"350\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [OPTIONAL] - Installing Packages on <img src=\"https://colab.google/static/images/icons/colab.png\" width=100>\n",
"\n",
"If you are viewing this notebook on Google Colab (or any other cloud vendor), you need to **uncomment and run** the following codeblock to install the dependencies for this chapter:\n",
"\n",
"---\n",
"\n",
"💡 **NOTE**: We will want to use a GPU to run the examples in this notebook. In Google Colab, go to\n",
"**Runtime > Change runtime type > Hardware accelerator > GPU > GPU type > T4**.\n",
"\n",
"---\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# %%capture\n",
"# !pip install langchain>=0.1.17 openai>=1.13.3 langchain_openai>=0.1.6 transformers>=4.40.1 datasets>=2.18.0 accelerate>=0.27.2 sentence-transformers>=2.5.1 duckduckgo-search>=5.2.2 langchain_community\n",
"# !CMAKE_ARGS=\"-DLLAMA_CUDA=on\" pip install llama-cpp-python==0.2.69"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rerbJgwAigbK"
},
"source": [
"# Loading an LLM"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!wget https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/Phi-3-mini-4k-instruct-fp16.gguf\n",
"\n",
"# If this command does not work for you, you can use the link directly to download the model\n",
"# https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/Phi-3-mini-4k-instruct-fp16.gguf"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "LQcht_ZFijW7"
},
"outputs": [],
"source": [
"from langchain import LlamaCpp\n",
"\n",
"# Make sure the model path is correct for your system!\n",
"llm = LlamaCpp(\n",
" model_path=\"Phi-3-mini-4k-instruct-fp16.gguf\",\n",
" n_gpu_layers=-1,\n",
" max_tokens=500,\n",
" n_ctx=2048,\n",
" seed=42,\n",
" verbose=False\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
"executionInfo": {
"elapsed": 854,
"status": "ok",
"timestamp": 1724338298709,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "3SNhQF9WthzV",
"outputId": "fd062b8a-4643-43a3-afc1-cce9dc338708"
},
"outputs": [
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"''"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.invoke(\"Hi! My name is Maarten. What is 1 + 1?\")"
]
},
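{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model returns an empty string here because Phi-3 expects its special chat tokens (`<s><|user|>`, `<|end|>`, `<|assistant|>`); without them it has no cue to start generating. The prompt template in the next section adds them for us."
]
},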
{
"cell_type": "markdown",
"metadata": {
"id": "Wwx2AIuGfCoP"
},
"source": [
"### Chains"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "kF--Q5me_-X1"
},
"outputs": [],
"source": [
"from langchain import PromptTemplate\n",
"\n",
"# Create a prompt template with the \"input_prompt\" variable\n",
"template = \"\"\"<s><|user|>\n",
"{input_prompt}<|end|>\n",
"<|assistant|>\"\"\"\n",
"prompt = PromptTemplate(\n",
" template=template,\n",
" input_variables=[\"input_prompt\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ogWsGeg6hElt"
},
"outputs": [],
"source": [
"basic_chain = prompt | llm"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
"executionInfo": {
"elapsed": 894,
"status": "ok",
"timestamp": 1724338313078,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "KINQxKAINXgG",
"outputId": "682d6b12-a4aa-4992-8abe-b23334b2a524"
},
"outputs": [
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"' Hello Maarten, the answer to 1 + 1 is 2.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Use the chain\n",
"basic_chain.invoke(\n",
" {\n",
" \"input_prompt\": \"Hi! My name is Maarten. What is 1 + 1?\",\n",
" }\n",
")"
]
},
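{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since `basic_chain` is an LCEL runnable, it also exposes `.stream()` for token-by-token output, which is handy with a local model. A minimal sketch (the prompt is just an arbitrary example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Stream the response token by token instead of waiting for the full string\n",
"for chunk in basic_chain.stream({\"input_prompt\": \"Write a haiku about llamas.\"}):\n",
"    print(chunk, end=\"\", flush=True)"
]
},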
{
"cell_type": "markdown",
"metadata": {
"id": "sSMBMRxB8gFW"
},
"source": [
"### Multiple Chains"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 292,
"status": "ok",
"timestamp": 1724338320681,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "wrUKuHt_OLpe",
"outputId": "844043b3-51ad-4de8-dada-a7b290c2e5c0"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:141: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 1.0. Use RunnableSequence, e.g., `prompt | llm` instead.\n",
" warn_deprecated(\n"
]
}
],
"source": [
"from langchain import LLMChain\n",
"\n",
"# Create a chain for the title of our story\n",
"template = \"\"\"<s><|user|>\n",
"Create a title for a story about {summary}. Only return the title.<|end|>\n",
"<|assistant|>\"\"\"\n",
"title_prompt = PromptTemplate(template=template, input_variables=[\"summary\"])\n",
"title = LLMChain(llm=llm, prompt=title_prompt, output_key=\"title\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 805,
"status": "ok",
"timestamp": 1724338321745,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "igFIyg73OtaL",
"outputId": "9d7610f3-8fc1-429d-bf5f-f1a4697fad40"
},
"outputs": [
{
"data": {
"text/plain": [
"{'summary': 'a girl that lost her mother',\n",
" 'title': ' \"Whispers of Love: A Journey Through Grief\"'}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"title.invoke({\"summary\": \"a girl that lost her mother\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zTtFEmANOhyE"
},
"outputs": [],
"source": [
"# Create a chain for the character description using the summary and title\n",
"template = \"\"\"<s><|user|>\n",
"Describe the main character of a story about {summary} with the title {title}. Use only two sentences.<|end|>\n",
"<|assistant|>\"\"\"\n",
"character_prompt = PromptTemplate(\n",
" template=template, input_variables=[\"summary\", \"title\"]\n",
")\n",
"character = LLMChain(llm=llm, prompt=character_prompt, output_key=\"character\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Xjf-avW8NAqZ"
},
"outputs": [],
"source": [
"# Create a chain for the story using the summary, title, and character description\n",
"template = \"\"\"<s><|user|>\n",
"Create a story about {summary} with the title {title}. The main charachter is: {character}. Only return the story and it cannot be longer than one paragraph<|end|>\n",
"<|assistant|>\"\"\"\n",
"story_prompt = PromptTemplate(\n",
" template=template, input_variables=[\"summary\", \"title\", \"character\"]\n",
")\n",
"story = LLMChain(llm=llm, prompt=story_prompt, output_key=\"story\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "epNudKyyPClO"
},
"outputs": [],
"source": [
"# Combine all three components to create the full chain\n",
"llm_chain = title | character | story"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 6677,
"status": "ok",
"timestamp": 1715331735693,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "b44ZR0vXRaAo",
"outputId": "f73d9bbc-d126-4aed-8f1c-7ff14d5afa59"
},
"outputs": [
{
"data": {
"text/plain": [
"{'summary': 'a girl that lost her mother',\n",
" 'title': ' \"In Loving Memory: A Journey Through Grief\"',\n",
" 'character': ' The protagonist, Emily, is a resilient young girl who struggles to cope with her overwhelming grief after losing her beloved and caring mother at an early age. As she embarks on a journey of self-discovery and healing, she learns valuable life lessons from the memories and wisdom shared by those around her.',\n",
" 'story': \" In Loving Memory: A Journey Through Grief revolves around Emily, a resilient young girl who loses her beloved mother at an early age. Struggling to cope with overwhelming grief, she embarks on a journey of self-discovery and healing, drawing strength from the cherished memories and wisdom shared by those around her. Through this transformative process, Emily learns valuable life lessons about resilience, love, and the power of human connection, ultimately finding solace in honoring her mother's legacy while embracing a newfound sense of inner peace amidst the painful loss.\"}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.invoke(\"a girl that lost her mother\")"
]
},
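{
"cell_type": "markdown",
"metadata": {},
"source": [
"The deprecation warning above already names the replacement for `LLMChain`: LCEL runnables. As a sketch, here is the same three-step pipeline built with `RunnablePassthrough.assign`, where each step adds one key to the running dictionary (mirroring the `output_key` arguments above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"# Each assign() adds one generated field to the dictionary flowing through the chain\n",
"lcel_chain = (\n",
"    RunnablePassthrough.assign(title=title_prompt | llm | StrOutputParser())\n",
"    | RunnablePassthrough.assign(character=character_prompt | llm | StrOutputParser())\n",
"    | RunnablePassthrough.assign(story=story_prompt | llm | StrOutputParser())\n",
")\n",
"\n",
"# Note: LCEL chains take a dictionary rather than a bare string\n",
"lcel_chain.invoke({\"summary\": \"a girl that lost her mother\"})"
]
},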
{
"cell_type": "markdown",
"metadata": {
"id": "7UQ-DZ71P-D-"
},
"source": [
"# Memory"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
"executionInfo": {
"elapsed": 841,
"status": "ok",
"timestamp": 1715331767433,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "-15Eoey5EJUO",
"outputId": "e475493c-ede1-4932-b954-ade7be05c79a"
},
"outputs": [
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"' Hello Maarten! The answer to 1 + 1 is 2.'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Let's give the LLM our name\n",
"basic_chain.invoke({\"input_prompt\": \"Hi! My name is Maarten. What is 1 + 1?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 54
},
"executionInfo": {
"elapsed": 1385,
"status": "ok",
"timestamp": 1715331769763,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "N42wQRl-Lykt",
"outputId": "d1019050-017d-4dcb-a008-7029e9ebd9fe"
},
"outputs": [
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"\" I'm unable to determine your name as I don't have the capability to access personal data. If you provide context or information where this question might be relevant, I could assist you further within appropriate guidelines!\""
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Next, we ask the LLM to reproduce the name\n",
"basic_chain.invoke({\"input_prompt\": \"What is my name?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PfqATEZjMgET"
},
"source": [
"## ConversationBuffer"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Zoo0PA1fUs70"
},
"outputs": [],
"source": [
"# Create an updated prompt template to include a chat history\n",
"template = \"\"\"<s><|user|>Current conversation:{chat_history}\n",
"\n",
"{input_prompt}<|end|>\n",
"<|assistant|>\"\"\"\n",
"\n",
"prompt = PromptTemplate(\n",
" template=template,\n",
" input_variables=[\"input_prompt\", \"chat_history\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bgGMS1S9saLi"
},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory\n",
"\n",
"# Define the type of Memory we will use\n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\")\n",
"\n",
"# Chain the LLM, Prompt, and Memory together\n",
"llm_chain = LLMChain(\n",
" prompt=prompt,\n",
" llm=llm,\n",
" memory=memory\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 887,
"status": "ok",
"timestamp": 1715331790905,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "mltR_GtkiqDZ",
"outputId": "15161d8e-2520-4ffc-e104-6147e52bd5f2"
},
"outputs": [
{
"data": {
"text/plain": [
"{'input_prompt': 'Hi! My name is Maarten. What is 1 + 1?',\n",
" 'chat_history': '',\n",
" 'text': \" Hello Maarten! The answer to 1 + 1 is 2. Hope you're having a great day!\"}"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Generate a conversation and ask a basic question\n",
"llm_chain.invoke({\"input_prompt\": \"Hi! My name is Maarten. What is 1 + 1?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 821,
"status": "ok",
"timestamp": 1715331794689,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "h-je1rmy3dx4",
"outputId": "5a25e2dd-5fb4-4cd8-8a5d-5da423bd3d33"
},
"outputs": [
{
"data": {
"text/plain": [
"{'input_prompt': 'What is my name?',\n",
" 'chat_history': \"Human: Hi! My name is Maarten. What is 1 + 1?\\nAI: Hello Maarten! The answer to 1 + 1 is 2. Hope you're having a great day!\",\n",
" 'text': ' Your name is Maarten.'}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Does the LLM remember the name we gave it?\n",
"llm_chain.invoke({\"input_prompt\": \"What is my name?\"})"
]
},
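{
"cell_type": "markdown",
"metadata": {},
"source": [
"Under the hood, `ConversationBufferMemory` simply appends each Human/AI exchange to a transcript. A minimal sketch with a throwaway memory object to make that visible:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Throwaway memory object; save_context() appends one Human/AI exchange\n",
"demo_memory = ConversationBufferMemory(memory_key=\"chat_history\")\n",
"demo_memory.save_context({\"input\": \"Hello!\"}, {\"output\": \"Hi there!\"})\n",
"demo_memory.save_context({\"input\": \"How are you?\"}, {\"output\": \"Doing well!\"})\n",
"\n",
"# load_memory_variables() returns the full transcript under memory_key\n",
"print(demo_memory.load_memory_variables({})[\"chat_history\"])"
]
},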
{
"cell_type": "markdown",
"metadata": {
"id": "Sw3ELCg6Rpsk"
},
"source": [
"## ConversationBufferMemoryWindow"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "G0DRT7kjRtiC"
},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferWindowMemory\n",
"\n",
"# Retain only the last 2 conversations in memory\n",
"memory = ConversationBufferWindowMemory(k=2, memory_key=\"chat_history\")\n",
"\n",
"# Chain the LLM, Prompt, and Memory together\n",
"llm_chain = LLMChain(\n",
" prompt=prompt,\n",
" llm=llm,\n",
" memory=memory\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 2270,
"status": "ok",
"timestamp": 1715331894039,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "CBY69vvcR1Qq",
"outputId": "76dbe9cc-3161-486c-b5b1-c8ce713ab1ab"
},
"outputs": [
{
"data": {
"text/plain": [
"{'input_prompt': 'What is 3 + 3?',\n",
" 'chat_history': \"Human: Hi! My name is Maarten and I am 33 years old. What is 1 + 1?\\nAI: Hello Maarten, it's nice to meet you! The answer to 1 + 1 is 2.\\n\\nHowever, if you have any other questions or need further assistance, feel free to ask!\",\n",
" 'text': \" Hello again! 3 + 3 equals 6. If there's anything else I can help you with, just let me know!\"}"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Ask two questions and generate two conversations in its memory\n",
"llm_chain.invoke({\"input_prompt\":\"Hi! My name is Maarten and I am 33 years old. What is 1 + 1?\"})\n",
"llm_chain.invoke({\"input_prompt\":\"What is 3 + 3?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 455,
"status": "ok",
"timestamp": 1715331894493,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "nvSLfKWpR5h5",
"outputId": "6ce15789-1eae-4817-c676-0282f22b5d40"
},
"outputs": [
{
"data": {
"text/plain": [
"{'input_prompt': 'What is my name?',\n",
" 'chat_history': \"Human: Hi! My name is Maarten and I am 33 years old. What is 1 + 1?\\nAI: Hello Maarten, it's nice to meet you! The answer to 1 + 1 is 2.\\n\\nHowever, if you have any other questions or need further assistance, feel free to ask!\\nHuman: What is 3 + 3?\\nAI: Hello again! 3 + 3 equals 6. If there's anything else I can help you with, just let me know!\",\n",
" 'text': ' Your name is Maarten.'}"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Check whether it knows the name we gave it\n",
"llm_chain.invoke({\"input_prompt\":\"What is my name?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 1455,
"status": "ok",
"timestamp": 1715331896303,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "YW7qEyctcqeJ",
"outputId": "54e196e3-b1de-4269-c31d-2ec369efce4b"
},
"outputs": [
{
"data": {
"text/plain": [
"{'input_prompt': 'What is my age?',\n",
" 'chat_history': \"Human: What is 3 + 3?\\nAI: Hello again! 3 + 3 equals 6. If there's anything else I can help you with, just let me know!\\nHuman: What is my name?\\nAI: Your name is Maarten.\",\n",
" 'text': \" I'm unable to determine your age as I don't have access to personal information. Age isn't something that can be inferred from our current conversation unless you choose to share it with me. How else may I assist you today?\"}"
]
},
"execution_count": 29,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Check whether it knows the age we gave it\n",
"llm_chain.invoke({\"input_prompt\":\"What is my age?\"})"
]
},
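{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see the windowing behavior in isolation, here is a small sketch that fills a `k=2` window with four exchanges; only the last two should survive:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Throwaway memory that keeps only the last 2 exchanges\n",
"demo_memory = ConversationBufferWindowMemory(k=2, memory_key=\"chat_history\")\n",
"for i in range(4):\n",
"    demo_memory.save_context({\"input\": f\"question {i}\"}, {\"output\": f\"answer {i}\"})\n",
"\n",
"# Expect only questions 2 and 3 to remain in the buffer\n",
"print(demo_memory.load_memory_variables({})[\"chat_history\"])"
]
},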
{
"cell_type": "markdown",
"metadata": {
"id": "tSb5OnANMhu2"
},
"source": [
"## ConversationSummary"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lWHZlJUbwpqE"
},
"outputs": [],
"source": [
"# Create a summary prompt template\n",
"summary_prompt_template = \"\"\"<s><|user|>Summarize the conversations and update with the new lines.\n",
"\n",
"Current summary:\n",
"{summary}\n",
"\n",
"new lines of conversation:\n",
"{new_lines}\n",
"\n",
"New summary:<|end|>\n",
"<|assistant|>\"\"\"\n",
"summary_prompt = PromptTemplate(\n",
" input_variables=[\"new_lines\", \"summary\"],\n",
" template=summary_prompt_template\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qg1HAgxZMkbO"
},
"outputs": [],
"source": [
"from langchain.memory import ConversationSummaryMemory\n",
"\n",
"# Define the type of memory we will use\n",
"memory = ConversationSummaryMemory(\n",
" llm=llm,\n",
" memory_key=\"chat_history\",\n",
" prompt=summary_prompt\n",
")\n",
"\n",
"# Chain the LLM, prompt, and memory together\n",
"llm_chain = LLMChain(\n",
" prompt=prompt,\n",
" llm=llm,\n",
" memory=memory\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 6468,
"status": "ok",
"timestamp": 1715332131212,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "2klIk9CpVSH0",
"outputId": "1edddb31-7703-4a54-c758-03a6d459e783"
},
"outputs": [
{
"data": {
"text/plain": [
"{'input_prompt': 'What is my name?',\n",
" 'chat_history': ' Summary: Human, identified as Maarten, asked the AI about the sum of 1 + 1, which was correctly answered by the AI as 2 and offered additional assistance if needed.',\n",
" 'text': ' Your name in this context was referred to as \"Maarten\". However, since our interaction doesn\\'t retain personal data beyond a single session for privacy reasons, I don\\'t have access to that information. How can I assist you further today?'}"
]
},
"execution_count": 50,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Generate a conversation and ask for the name\n",
"llm_chain.invoke({\"input_prompt\": \"Hi! My name is Maarten. What is 1 + 1?\"})\n",
"llm_chain.invoke({\"input_prompt\": \"What is my name?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 4499,
"status": "ok",
"timestamp": 1715332139542,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "_VdOH_I-V-Fy",
"outputId": "b20dc2a4-1be8-40dd-d683-955b32900875"
},
"outputs": [
{
"data": {
"text/plain": [
"{'input_prompt': 'What was the first question I asked?',\n",
" 'chat_history': ' Summary: Human, identified as Maarten in the context of this conversation, first asked about the sum of 1 + 1 and received an answer of 2 from the AI. Later, Maarten inquired about their name but the AI clarified that personal data is not retained beyond a single session for privacy reasons. The AI offered further assistance if needed.',\n",
" 'text': ' The first question you asked was \"what\\'s 1 + 1?\"'}"
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Check whether it has summarized everything thus far\n",
"llm_chain.invoke({\"input_prompt\": \"What was the first question I asked?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 211,
"status": "ok",
"timestamp": 1715332142602,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "n1_LlvrVX9HL",
"outputId": "f1c989a2-76e2-4348-cbe5-8a4f5a551dc0"
},
"outputs": [
{
"data": {
"text/plain": [
"{'chat_history': ' Maarten, identified in this conversation, initially asked about the sum of 1+1 which resulted in an answer from the AI being 2. Subsequently, he sought clarification on his name but the AI informed him that no personal data is retained beyond a single session due to privacy reasons. The AI then offered further assistance if required. Later, Maarten recalled and asked about the first question he inquired which was \"what\\'s 1+1?\"'}"
]
},
"execution_count": 52,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Check what the summary is thus far\n",
"memory.load_memory_variables({})"
]
},
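{
"cell_type": "markdown",
"metadata": {},
"source": [
"The payoff of summary memory is a bounded prompt size: every prompt carries the running summary instead of the full transcript. A quick sketch to check that cost (assuming `get_num_tokens`, which LangChain LLMs generally expose):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The summary replaces the full transcript in every prompt,\n",
"# so its token count is the recurring context cost\n",
"summary = memory.load_memory_variables({})[\"chat_history\"]\n",
"print(f\"Tokens in summary: {llm.get_num_tokens(summary)}\")"
]
},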
{
"cell_type": "markdown",
"metadata": {
"id": "BG5sJa1qvS4N"
},
"source": [
"# Agents"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rcBt8bZM56dM"
},
"outputs": [],
"source": [
"import os\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Load OpenAI's LLMs with LangChain\n",
"os.environ[\"OPENAI_API_KEY\"] = \"MY_KEY\"\n",
"openai_llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", temperature=0)"
]
},
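{
"cell_type": "markdown",
"metadata": {},
"source": [
"Rather than hardcoding the key as in the previous cell, you could read it interactively. A sketch using only the standard library:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"# Prompt for the key only if it is not already set in the environment\n",
"if not os.environ.get(\"OPENAI_API_KEY\"):\n",
"    os.environ[\"OPENAI_API_KEY\"] = getpass(\"OpenAI API key: \")"
]
},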
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lmRZu8DO2p6k"
},
"outputs": [],
"source": [
"# Create the ReAct template\n",
"react_template = \"\"\"Answer the following questions as best you can. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Begin!\n",
"\n",
"Question: {input}\n",
"Thought:{agent_scratchpad}\"\"\"\n",
"\n",
"prompt = PromptTemplate(\n",
" template=react_template,\n",
" input_variables=[\"tools\", \"tool_names\", \"input\", \"agent_scratchpad\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NV-ssNa-4zOK"
},
"outputs": [],
"source": [
"from langchain.agents import load_tools, Tool\n",
"from langchain.tools import DuckDuckGoSearchResults\n",
"\n",
"# You can create the tool to pass to an agent\n",
"search = DuckDuckGoSearchResults()\n",
"search_tool = Tool(\n",
" name=\"duckduck\",\n",
" description=\"A web search engine. Use this to as a search engine for general queries.\",\n",
" func=search.run,\n",
")\n",
"\n",
"# Prepare tools\n",
"tools = load_tools([\"llm-math\"], llm=openai_llm)\n",
"tools.append(search_tool)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6tAr1962vS4T"
},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_react_agent\n",
"\n",
"# Construct the ReAct agent\n",
"agent = create_react_agent(openai_llm, tools, prompt)\n",
"agent_executor = AgentExecutor(\n",
" agent=agent, tools=tools, verbose=True, handle_parsing_errors=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 5841,
"status": "ok",
"timestamp": 1712135814912,
"user": {
"displayName": "Maarten Grootendorst",
"userId": "11015108362723620659"
},
"user_tz": -120
},
"id": "QSU6ECdYBOOm",
"outputId": "b6cf304c-c0f2-4939-b682-2f72d7e5a078"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to find the current price of a MacBook Pro in USD first before converting it to EUR.\n",
"Action: duckduck\n",
"Action Input: \"current price of MacBook Pro in USD\"\u001b[0m\u001b[33;1m\u001b[1;3m[snippet: View at Best Buy. The best MacBook Pro overall The MacBook Pro 14-inch with the latest M3-series chips offers outstanding, best-in-class performance while getting fantastic battery life and ..., title: The best MacBook Pro in 2024: our picks for the top Pro models, link: https://www.techradar.com/best/best-macbook-pro], [snippet: Starts at $1,299. Upgradable to 24 GB of memory and 2 TB of storage. 67W USB-C charger included. The M2-powered MacBook Pro is available now for a starting price of $1,299 on Apple's website ..., title: MacBook Pro 13-inch (M2, 2022) review | Tom's Guide, link: https://www.tomsguide.com/reviews/macbook-pro-13-inch-m2-2022], [snippet: The late-2023 MacBook Pro update also marks the demise of the 13-inch MacBook Pro, which has been replaced by a 14-inch model with the standard M3 chip, unfortunately at a higher price than the ..., title: Best MacBook Pro Deals: March 2024 | Macworld, link: https://www.macworld.com/article/672811/best-macbook-pro-deals.html], [snippet: For the M3 Pro models, prices start at $2,249.00 for the 512GB/18GB RAM 16-inch MacBook Pro and increase to $2,649.00 for the 512GB/36GB RAM model, both of which are all-time low prices., title: Best Buy Introduces All-Time Low Prices on Apple's M3 MacBook Pro for ..., link: https://www.macrumors.com/2024/04/01/best-buy-m3-macbook-pro/]\u001b[0m\u001b[32;1m\u001b[1;3mI found the current price of a MacBook Pro in USD, now I need to convert it to EUR using the exchange rate.\n",
"Action: Calculator\n",
"Action Input: $2,249.00 * 0.85\u001b[0m\u001b[36;1m\u001b[1;3mAnswer: 1911.6499999999999\u001b[0m\u001b[32;1m\u001b[1;3mI now know the final answer\n",
"Final Answer: The current price of a MacBook Pro in USD is $2,249.00. It would cost approximately 1911.65 EUR with an exchange rate of 0.85 EUR for 1 USD.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'What is the current price of a MacBook Pro in USD? How much would it cost in EUR if the exchange rate is 0.85 EUR for 1 USD?',\n",
" 'output': 'The current price of a MacBook Pro in USD is $2,249.00. It would cost approximately 1911.65 EUR with an exchange rate of 0.85 EUR for 1 USD.'}"
]
},
"execution_count": 84,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# What is the Price of a MacBook Pro?\n",
"agent_executor.invoke(\n",
" {\n",
" \"input\": \"What is the current price of a MacBook Pro in USD? How much would it cost in EUR if the exchange rate is 0.85 EUR for 1 USD?\"\n",
" }\n",
")"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 4
}