{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"[Open in Colab](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/langchain/rag-chatbot.ipynb) [Open in nbviewer](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/langchain/rag-chatbot.ipynb)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Building RAG Chatbots with LangChain"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we'll work on building an AI chatbot from start to finish. We will be using LangChain, OpenAI, and the Pinecone vector database to build a chatbot capable of learning from the external world using **R**etrieval **A**ugmented **G**eneration (RAG).\n",
"\n",
"We will be using a dataset sourced from the Llama 2 ArXiv paper and other related papers to help our chatbot answer questions about the latest and greatest in the world of GenAI.\n",
"\n",
"By the end of the example we'll have a functioning chatbot and RAG pipeline that can hold a conversation and provide informative responses based on a knowledge base.\n",
"\n",
"### Before you begin\n",
"\n",
"You'll need to get an [OpenAI API key](https://platform.openai.com/account/api-keys) and a [Pinecone API key](https://app.pinecone.io)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisites"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Before we start building our chatbot, we need to install some Python libraries. Here's a brief overview of what each library does:\n",
"\n",
"- **langchain**: This is a framework for building applications powered by LLMs. We'll use it to chain together different language models and components for our chatbot.\n",
"- **openai**: This is the official OpenAI Python client. We'll use it to interact with the OpenAI API and generate responses for our chatbot.\n",
"- **datasets**: This library provides a vast array of datasets for machine learning. We'll use it to load our knowledge base for the chatbot.\n",
"- **pinecone-client**: This is the official Pinecone Python client. We'll use it to interact with the Pinecone API and store our chatbot's knowledge base in a vector database.\n",
"- **tiktoken**: This is OpenAI's tokenizer. LangChain uses it to count tokens when working with OpenAI models.\n",
"\n",
"You can install these libraries using pip like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install -qU \\\n",
"    langchain==0.0.292 \\\n",
"    openai==0.28.0 \\\n",
"    datasets==2.10.1 \\\n",
"    pinecone-client==2.2.4 \\\n",
"    tiktoken==0.5.1"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Building a Chatbot (no RAG)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We will be relying heavily on the LangChain library to bring together the different components needed for our chatbot. To begin, we'll create a simple chatbot without any retrieval augmentation. We do this by initializing a `ChatOpenAI` object. For this we do need an [OpenAI API key](https://platform.openai.com/account/api-keys)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = os.getenv(\"OPENAI_API_KEY\") or \"YOUR_API_KEY\"\n",
"\n",
"chat = ChatOpenAI(\n",
"    openai_api_key=os.environ[\"OPENAI_API_KEY\"],\n",
"    model='gpt-3.5-turbo'\n",
")"
]
},
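{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick aside: chat model outputs are sampled, so responses will vary between runs of this notebook. If you want more repeatable outputs while following along, `ChatOpenAI` also accepts a `temperature` parameter. The cell below is an optional sketch; `temperature=0.0` is our assumption for near-deterministic behavior, and nothing later depends on it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# optional: a near-deterministic chat model for more repeatable outputs\n",
"deterministic_chat = ChatOpenAI(\n",
"    openai_api_key=os.environ[\"OPENAI_API_KEY\"],\n",
"    model='gpt-3.5-turbo',\n",
"    temperature=0.0  # minimize sampling randomness (assumed setting)\n",
")"
]
},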
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Chats with OpenAI's `gpt-3.5-turbo` and `gpt-4` chat models are typically structured (in plain text) like this:\n",
"\n",
"```\n",
"System: You are a helpful assistant.\n",
"\n",
"User: Hi AI, how are you today?\n",
"\n",
"Assistant: I'm great thank you. How can I help you?\n",
"\n",
"User: I'd like to understand string theory.\n",
"\n",
"Assistant:\n",
"```\n",
"\n",
"The final `\"Assistant:\"` without a response is what would prompt the model to continue the conversation. In the official OpenAI `ChatCompletion` endpoint these would be passed to the model in a format like:\n",
"\n",
"```python\n",
"[\n",
"    {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
"    {\"role\": \"user\", \"content\": \"Hi AI, how are you today?\"},\n",
"    {\"role\": \"assistant\", \"content\": \"I'm great thank you. How can I help you?\"},\n",
"    {\"role\": \"user\", \"content\": \"I'd like to understand string theory.\"}\n",
"]\n",
"```\n",
"\n",
"In LangChain there is a slightly different format. We use three _message_ types, `SystemMessage`, `HumanMessage`, and `AIMessage`, like so:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import (\n",
"    SystemMessage,\n",
"    HumanMessage,\n",
"    AIMessage\n",
")\n",
"\n",
"messages = [\n",
"    SystemMessage(content=\"You are a helpful assistant.\"),\n",
"    HumanMessage(content=\"Hi AI, how are you today?\"),\n",
"    AIMessage(content=\"I'm great thank you. How can I help you?\"),\n",
"    HumanMessage(content=\"I'd like to understand string theory.\")\n",
"]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The format is very similar; we've just swapped the role of `\"user\"` for `HumanMessage` and the role of `\"assistant\"` for `AIMessage`.\n",
"\n",
"We generate the next response from the AI by passing these messages to the `ChatOpenAI` object."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"String theory is a theoretical framework in physics that aims to provide a unified description of the fundamental particles and forces in the universe. It suggests that at the most fundamental level, particles are not point-like entities but instead tiny, vibrating strings.\\n\\nHere are some key points to understand about string theory:\\n\\n1. Building blocks: According to string theory, the basic building blocks of the universe are not particles but rather tiny, one-dimensional strings. These strings can vibrate in different ways, and the different modes of vibration give rise to different particle properties, such as mass and charge.\\n\\n2. Extra dimensions: Unlike traditional theories, string theory requires more than the usual three dimensions of space and one dimension of time. It suggests the presence of additional, compactified dimensions that are too small for us to detect directly. These extra dimensions play a crucial role in shaping the behavior of the strings and can help explain certain fundamental aspects of the universe.\\n\\n3. Quantum mechanics and gravity: String theory combines quantum mechanics and general relativity, the theory of gravity. It provides a framework for understanding how gravity can be described in terms of microscopic vibrating strings, resolving some of the conceptual conflicts between quantum mechanics and general relativity.\\n\\n4. Multiple versions: There are various versions of string theory, including Type I, Type IIA, Type IIB, heterotic SO(32), and heterotic E8×E8. These different versions arise from different assumptions about the properties of the strings and their interactions.\\n\\n5. Unification of forces: One of the significant achievements of string theory is its potential to unify the fundamental forces of nature – gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. In string theory, these forces emerge as different vibrational modes of the strings, providing a unified description of all interactions.\\n\\nIt's important to note that string theory remains a highly theoretical and mathematically complex field. It is still being actively researched, and many aspects of the theory are not yet fully understood or experimentally confirmed. Nonetheless, it offers intriguing possibilities for understanding the fundamental nature of the universe.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res = chat(messages)\n",
"res"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In response we get another `AIMessage` object. We can print it more clearly like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"String theory is a theoretical framework in physics that aims to provide a unified description of the fundamental particles and forces in the universe. It suggests that at the most fundamental level, particles are not point-like entities but instead tiny, vibrating strings.\n",
"\n",
"Here are some key points to understand about string theory:\n",
"\n",
"1. Building blocks: According to string theory, the basic building blocks of the universe are not particles but rather tiny, one-dimensional strings. These strings can vibrate in different ways, and the different modes of vibration give rise to different particle properties, such as mass and charge.\n",
"\n",
"2. Extra dimensions: Unlike traditional theories, string theory requires more than the usual three dimensions of space and one dimension of time. It suggests the presence of additional, compactified dimensions that are too small for us to detect directly. These extra dimensions play a crucial role in shaping the behavior of the strings and can help explain certain fundamental aspects of the universe.\n",
"\n",
"3. Quantum mechanics and gravity: String theory combines quantum mechanics and general relativity, the theory of gravity. It provides a framework for understanding how gravity can be described in terms of microscopic vibrating strings, resolving some of the conceptual conflicts between quantum mechanics and general relativity.\n",
"\n",
"4. Multiple versions: There are various versions of string theory, including Type I, Type IIA, Type IIB, heterotic SO(32), and heterotic E8×E8. These different versions arise from different assumptions about the properties of the strings and their interactions.\n",
"\n",
"5. Unification of forces: One of the significant achievements of string theory is its potential to unify the fundamental forces of nature – gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. In string theory, these forces emerge as different vibrational modes of the strings, providing a unified description of all interactions.\n",
"\n",
"It's important to note that string theory remains a highly theoretical and mathematically complex field. It is still being actively researched, and many aspects of the theory are not yet fully understood or experimentally confirmed. Nonetheless, it offers intriguing possibilities for understanding the fundamental nature of the universe.\n"
]
}
],
"source": [
"print(res.content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Because `res` is just another `AIMessage` object, we can append it to `messages`, add another `HumanMessage`, and generate the next response in the conversation."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Physicists believe that string theory has the potential to produce a unified theory because of several reasons:\n",
"\n",
"1. Consistency with known physics: String theory incorporates and extends the principles of quantum mechanics and general relativity, two highly successful theories in their respective domains. It provides a framework that can naturally accommodate both quantum mechanics and gravity, which have been challenging to reconcile in other approaches.\n",
"\n",
"2. Explanation of particle properties: String theory offers a way to explain the properties of particles, such as their masses and charges, in terms of the vibrations and interactions of the tiny strings. It provides a more fundamental and elegant description compared to the ad hoc assignments of properties in other theories.\n",
"\n",
"3. Grand unification of forces: In the standard model of particle physics, the three fundamental forces (electromagnetism, weak nuclear force, and strong nuclear force) are described independently and have different strengths. String theory suggests that these forces can be unified into a single framework. The different vibrational modes of the strings correspond to the different forces, allowing for a unified description of all interactions.\n",
"\n",
"4. Extra dimensions and symmetry: String theory requires the existence of extra dimensions beyond the usual four dimensions of space and time. These extra dimensions can help explain certain features of the universe, such as the hierarchy problem (why the gravitational force is so much weaker than the other forces) and the existence of different particle generations. The mathematical symmetries inherent in string theory also provide a basis for unification.\n",
"\n",
"5. Mathematical elegance: String theory is highly mathematically consistent and elegant. It relies on sophisticated mathematical techniques, such as conformal field theory and algebraic geometry, which have been extensively studied and developed over many decades. The mathematical beauty and internal consistency of the theory give physicists confidence in its potential for a unified description.\n",
"\n",
"It's worth noting that while string theory shows promise, it has not yet been experimentally confirmed. Further research and experimental data are needed to verify its predictions and determine its applicability to the physical world.\n"
]
}
],
"source": [
"# add latest AI response to messages\n",
"messages.append(res)\n",
"\n",
"# now create a new user prompt\n",
"prompt = HumanMessage(\n",
"    content=\"Why do physicists believe it can produce a 'unified theory'?\"\n",
")\n",
"# add to messages\n",
"messages.append(prompt)\n",
"\n",
"# send to chat-gpt\n",
"res = chat(messages)\n",
"\n",
"print(res.content)"
]
},
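{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This append-a-prompt, generate, append-the-response pattern repeats throughout the notebook, so it can help to see it wrapped in a small helper. The `ask` function below is a hypothetical convenience wrapper of ours, not part of the original pipeline; the cells that follow keep writing the steps out explicitly so each one stays visible."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# hypothetical helper wrapping the append -> generate -> append pattern\n",
"def ask(question: str) -> str:\n",
"    # add the user's question to the running conversation\n",
"    messages.append(HumanMessage(content=question))\n",
"    # generate the next AI response using the full history\n",
"    res = chat(messages)\n",
"    # keep the response in the history for future turns\n",
"    messages.append(res)\n",
"    return res.content"
]
},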
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Dealing with Hallucinations"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We have our chatbot, but as mentioned — the knowledge of LLMs can be limited. The reason for this is that LLMs learn all they know during training. An LLM essentially compresses the \"world\" as seen in the training data into the internal parameters of the model. We call this knowledge the _parametric knowledge_ of the model.\n",
"\n",
"By default, LLMs have no access to the external world.\n",
"\n",
"The result of this is very clear when we ask LLMs about more recent information, such as the new (and very popular) Llama 2 LLM."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# add latest AI response to messages\n",
"messages.append(res)\n",
"\n",
"# now create a new user prompt\n",
"prompt = HumanMessage(\n",
"    content=\"What is so special about Llama 2?\"\n",
")\n",
"# add to messages\n",
"messages.append(prompt)\n",
"\n",
"# send to OpenAI\n",
"res = chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I apologize, but I'm not familiar with Llama 2. Could you please provide more context or clarify what you are referring to? That way, I can try to assist you better.\n"
]
}
],
"source": [
"print(res.content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Our chatbot can no longer help us; it doesn't contain the information we need to answer the question. It was very clear from this answer that the LLM doesn't know the information, but sometimes an LLM may respond as though it _does_ know the answer — and this can be very hard to detect.\n",
"\n",
"OpenAI has since adjusted the behavior for this particular example, as we can see below:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"# add latest AI response to messages\n",
"messages.append(res)\n",
"\n",
"# now create a new user prompt\n",
"prompt = HumanMessage(\n",
"    content=\"Can you tell me about the LLMChain in LangChain?\"\n",
")\n",
"# add to messages\n",
"messages.append(prompt)\n",
"\n",
"# send to OpenAI\n",
"res = chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I'm sorry, but I couldn't find any specific information about LLMChain in LangChain. It's possible that it may be a specific term or concept related to a particular project or technology that is not widely known or documented. Without more context or information, it's difficult for me to provide a detailed explanation. \n",
"\n",
"If you can provide more details or clarify the context in which LLMChain is mentioned, I'll do my best to assist you further.\n"
]
}
],
"source": [
"print(res.content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"There is another way of feeding knowledge into LLMs. It is called _source knowledge_ and it refers to any information fed into the LLM via the prompt. We can try that with the LLMChain question. We can take a description of this object from the LangChain documentation."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"llmchain_information = [\n",
" \"A LLMChain is the most common type of chain. It consists of a PromptTemplate, a model (either an LLM or a ChatModel), and an optional output parser. This chain takes multiple input variables, uses the PromptTemplate to format them into a prompt. It then passes that to the model. Finally, it uses the OutputParser (if provided) to parse the output of the LLM into a final format.\",\n",
"    \"Chains is an incredibly generic concept which refers to a sequence of modular components (or other chains) combined in a particular way to accomplish a common use case.\",\n",
" \"LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an api, but will also: (1) Be data-aware: connect a language model to other sources of data, (2) Be agentic: Allow a language model to interact with its environment. As such, the LangChain framework is designed with the objective in mind to enable those types of applications.\"\n",
"]\n",
"\n",
"source_knowledge = \"\\n\".join(llmchain_information)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can feed this additional knowledge into our prompt with some instructions telling the LLM how we'd like it to use this information alongside our original query."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"query = \"Can you tell me about the LLMChain in LangChain?\"\n",
"\n",
"augmented_prompt = f\"\"\"Using the contexts below, answer the query.\n",
"\n",
"Contexts:\n",
"{source_knowledge}\n",
"\n",
"Query: {query}\"\"\""
]
},
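{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Prompts augmented like this can grow quickly, so it's worth checking that they still fit the model's context window. Below is a minimal sketch using the `tiktoken` library we installed earlier; the 4096-token figure is our assumption based on `gpt-3.5-turbo`'s documented context window at the time of writing, not something the notebook verifies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"\n",
"# load the tokenizer used by gpt-3.5-turbo\n",
"encoding = tiktoken.encoding_for_model(\"gpt-3.5-turbo\")\n",
"\n",
"# count the tokens in the augmented prompt\n",
"num_tokens = len(encoding.encode(augmented_prompt))\n",
"print(f\"{num_tokens} tokens (assumed 4096-token context window)\")"
]
},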
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we feed this into our chatbot as before."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"# create a new user prompt\n",
"prompt = HumanMessage(\n",
"    content=augmented_prompt\n",
")\n",
"# add to messages\n",
"messages.append(prompt)\n",
"\n",
"# send to OpenAI\n",
"res = chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"LLMChain, as described in the provided context, is a type of chain within the LangChain framework. Chains in LangChain refer to sequences of modular components or other chains that are combined in a specific way to achieve a common purpose.\n",
"\n",
"The LLMChain, specifically, is a common type of chain within LangChain. It consists of three main components: a PromptTemplate, a model (either an LLM or a ChatModel), and an optional output parser.\n",
"\n",
"When using an LLMChain, multiple input variables are taken and formatted into a prompt using the PromptTemplate. This formatted prompt is then passed to the language model (either an LLM or a ChatModel) within the chain. Finally, the output of the language model is processed by the OutputParser (if provided) to transform it into a final format.\n",
"\n",
"The LangChain framework itself aims to go beyond simply calling a language model via an API. It emphasizes two key aspects: being data-aware and being agentic. Being data-aware means connecting the language model to other sources of data, allowing for a richer understanding and integration of information. Being agentic refers to enabling the language model to interact with its environment, making it more capable of dynamic and interactive applications.\n",
"\n",
"In summary, the LLMChain is a specific type of chain within the LangChain framework that facilitates the use of language models by combining prompts, models, and output parsers to process and format inputs and outputs in a desired manner.\n"
]
}
],
"source": [
"print(res.content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The quality of this answer is phenomenal. This is made possible by augmenting our query with external knowledge (source knowledge). There's just one problem — how do we get this information in the first place?\n",
"\n",
"We learned in the previous chapters about Pinecone and vector databases. Well, they can help us here too. But first, we'll need a dataset."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Importing the Data"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In this task, we will be importing our data. We will be using the Hugging Face Datasets library to load our data. Specifically, we will be using the `\"jamescalam/llama-2-arxiv-papers-chunked\"` dataset. This dataset contains a collection of ArXiv papers which will serve as the external knowledge base for our chatbot."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Dataset({\n",
"    features: ['doi', 'chunk-id', 'chunk', 'id', 'title', 'summary', 'source', 'authors', 'categories', 'comment', 'journal_ref', 'primary_category', 'published', 'updated', 'references'],\n",
"    num_rows: 4838\n",
"})"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from datasets import load_dataset\n",
"\n",
"dataset = load_dataset(\n",
"    \"jamescalam/llama-2-arxiv-papers-chunked\",\n",
"    split=\"train\"\n",
")\n",
"\n",
"dataset"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'doi': '1102.0183',\n",
" 'chunk-id': '0',\n",
" 'chunk': 'High-Performance Neural Networks\\nfor Visual Object Classi\\x0ccation\\nDan C. Cire\\x18 san, Ueli Meier, Jonathan Masci,\\nLuca M. Gambardella and J\\x7f urgen Schmidhuber\\nTechnical Report No. IDSIA-01-11\\nJanuary 2011\\nIDSIA / USI-SUPSI\\nDalle Molle Institute for Arti\\x0ccial Intelligence\\nGalleria 2, 6928 Manno, Switzerland\\nIDSIA is a joint institute of both University of Lugano (USI) and University of Applied Sciences of Southern Switzerland (SUPSI),\\nand was founded in 1988 by the Dalle Molle Foundation which promoted quality of life.\\nThis work was partially supported by the Swiss Commission for Technology and Innovation (CTI), Project n. 9688.1 IFF:\\nIntelligent Fill in Form.arXiv:1102.0183v1 [cs.AI] 1 Feb 2011\\nTechnical Report No. IDSIA-01-11 1\\nHigh-Performance Neural Networks\\nfor Visual Object Classi\\x0ccation\\nDan C. Cire\\x18 san, Ueli Meier, Jonathan Masci,\\nLuca M. Gambardella and J\\x7f urgen Schmidhuber\\nJanuary 2011\\nAbstract\\nWe present a fast, fully parameterizable GPU implementation of Convolutional Neural\\nNetwork variants. Our feature extractors are neither carefully designed nor pre-wired, but',\n",
" 'id': '1102.0183',\n",
" 'title': 'High-Performance Neural Networks for Visual Object Classification',\n",
" 'summary': 'We present a fast, fully parameterizable GPU implementation of Convolutional\\nNeural Network variants. Our feature extractors are neither carefully designed\\nnor pre-wired, but rather learned in a supervised way. Our deep hierarchical\\narchitectures achieve the best published results on benchmarks for object\\nclassification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with\\nerror rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple\\nback-propagation perform better than more shallow ones. Learning is\\nsurprisingly rapid. NORB is completely trained within five epochs. Test error\\nrates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs,\\nrespectively.',\n",
" 'source': 'http://arxiv.org/pdf/1102.0183',\n",
" 'authors': ['Dan C. Cireşan',\n",
"  'Ueli Meier',\n",
"  'Jonathan Masci',\n",
"  'Luca M. Gambardella',\n",
"  'Jürgen Schmidhuber'],\n",
" 'categories': ['cs.AI', 'cs.NE'],\n",
" 'comment': '12 pages, 2 figures, 5 tables',\n",
" 'journal_ref': None,\n",
" 'primary_category': 'cs.AI',\n",
" 'published': '20110201',\n",
" 'updated': '20110201',\n",
" 'references': []}"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dataset[0]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Dataset Overview\n",
"\n",
"The dataset we are using is sourced from the Llama 2 ArXiv papers. It is a collection of academic papers from ArXiv, a repository of electronic preprints approved for posting after moderation. Each entry in the dataset represents a \"chunk\" of text from these papers.\n",
"\n",
"Because most **L**arge **L**anguage **M**odels (LLMs) only contain knowledge of the world as it was during training, they cannot answer our questions about Llama 2 — at least not without this data."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Building the Knowledge Base"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We now have a dataset that can serve as our chatbot knowledge base. Our next task is to transform that dataset into the knowledge base that our chatbot can use. To do this we must use an embedding model and vector database.\n",
"\n",
"We begin by initializing our connection to Pinecone. This requires a [free API key](https://app.pinecone.io)."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"import pinecone\n",
"\n",
"# get API key from app.pinecone.io and environment from console\n",
"pinecone.init(\n",
"    api_key=os.environ.get('PINECONE_API_KEY') or 'YOUR_API_KEY',\n",
"    environment=os.environ.get('PINECONE_ENVIRONMENT') or 'YOUR_ENV'\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we initialize the index. We will be using OpenAI's `text-embedding-ada-002` model for creating the embeddings, so we set the `dimension` to `1536`."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"index_name = 'llama-2-rag'\n",
"\n",
"if index_name not in pinecone.list_indexes():\n",
"    pinecone.create_index(\n",
"        index_name,\n",
"        dimension=1536,\n",
"        metric='cosine'\n",
"    )\n",
"    # wait for index to finish initialization\n",
"    while not pinecone.describe_index(index_name).status['ready']:\n",
"        time.sleep(1)\n",
"\n",
"index = pinecone.Index(index_name)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"With the index created and connected, we can view its stats:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'dimension': 1536,\n",
" 'index_fullness': 0.0,\n",
" 'namespaces': {},\n",
" 'total_vector_count': 0}"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"index.describe_index_stats()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Our index is now ready but it's empty. It is a vector index, so it needs vectors. As mentioned, to create these vector embeddings we will use OpenAI's `text-embedding-ada-002` model — we can access it via LangChain like so:"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"\n",
"embed_model = OpenAIEmbeddings(model=\"text-embedding-ada-002\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Using this model we can create embeddings like so:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(2, 1536)"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"texts = [\n",
"    'this is the first chunk of text',\n",
"    'then another second chunk of text is here'\n",
"]\n",
"\n",
"res = embed_model.embed_documents(texts)\n",
"len(res), len(res[0])"
]
},
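{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `embed_documents` embeds a batch of document chunks. For a single search query we use the `embed_query` method instead, which takes one string and returns a single vector of the same dimensionality (we pass this same method to the vector store shortly). For example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# embed a single query string; returns one 1536-dimensional vector\n",
"query_vec = embed_model.embed_query(\"an example search query\")\n",
"len(query_vec)"
]
},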
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"From this we get two (aligning to our two chunks of text) 1536-dimensional embeddings.\n",
"\n",
"We're now ready to embed and index all of our data! We do this by looping over our dataset, embedding and inserting everything in batches."
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "eef2d2eafbd34b7e8c2104857310a2b2",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"  0%|          | 0/49 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from tqdm.auto import tqdm  # for progress bar\n",
"\n",
"data = dataset.to_pandas()  # this makes it easier to iterate over the dataset\n",
"\n",
"batch_size = 100\n",
"\n",
"for i in tqdm(range(0, len(data), batch_size)):\n",
"    i_end = min(len(data), i+batch_size)\n",
"    # get batch of data\n",
"    batch = data.iloc[i:i_end]\n",
"    # generate unique ids for each chunk\n",
"    ids = [f\"{x['doi']}-{x['chunk-id']}\" for _, x in batch.iterrows()]\n",
"    # get text to embed\n",
"    texts = [x['chunk'] for _, x in batch.iterrows()]\n",
"    # embed text\n",
"    embeds = embed_model.embed_documents(texts)\n",
"    # get metadata to store in Pinecone\n",
"    metadata = [\n",
"        {'text': x['chunk'],\n",
"         'source': x['source'],\n",
"         'title': x['title']} for _, x in batch.iterrows()\n",
"    ]\n",
"    # add to Pinecone\n",
"    index.upsert(vectors=zip(ids, embeds, metadata))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can check that the vector index has been populated using `describe_index_stats` like before:"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'dimension': 1536,\n",
" 'index_fullness': 0.0,\n",
" 'namespaces': {'': {'vector_count': 4838}},\n",
" 'total_vector_count': 4838}"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"index.describe_index_stats()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieval Augmented Generation"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We've built a fully-fledged knowledge base. Now it's time to connect that knowledge base to our chatbot. To do that we'll be diving back into LangChain and reusing our template prompt from earlier."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To use LangChain here we need to load the LangChain abstraction for a vector index, called a `vectorstore`. We pass in our vector `index` to initialize the object."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"from langchain.vectorstores import Pinecone\n",
"\n",
"text_field = \"text\"  # the metadata field that contains our text\n",
"\n",
"# initialize the vector store object\n",
"vectorstore = Pinecone(\n",
"    index, embed_model.embed_query, text_field\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Using this `vectorstore` we can already query the index and see if we have any relevant information given our question about Llama 2."
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang\\nRoss Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang\\nAngela Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic\\nSergey Edunov Thomas Scialom\\x03\\nGenAI, Meta\\nAbstract\\nIn this work, we develop and release Llama 2, a collection of pretrained and fine-tuned\\nlarge language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.\\nOur fine-tuned LLMs, called L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc , are optimized for dialogue use cases. Our\\nmodels outperform open-source chat models on most benchmarks we tested, and based on\\nourhumanevaluationsforhelpfulnessandsafety,maybeasuitablesubstituteforclosedsource models. We provide a detailed description of our approach to fine-tuning and safety', metadata={'source': 'http://arxiv.org/pdf/2307.09288', 'title': 'Llama 2: Open Foundation and Fine-Tuned Chat Models'}),\n",
" Document(page_content='asChatGPT,BARD,andClaude. TheseclosedproductLLMsareheavilyfine-tunedtoalignwithhuman\\npreferences, which greatly enhances their usability and safety. This step can require significant costs in\\ncomputeandhumanannotation,andisoftennottransparentoreasilyreproducible,limitingprogresswithin\\nthe community to advance AI alignment research.\\nIn this work, we develop and release Llama 2, a family of pretrained and fine-tuned LLMs, L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle and\\nL/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc , at scales up to 70B parameters. On the series of helpfulness and safety benchmarks we tested,\\nL/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc models generally perform better than existing open-source models. They also appear to\\nbe on par with some of the closed-source models, at least on the human evaluations we performed (see', metadata={'source': 'http://arxiv.org/pdf/2307.09288', 'title': 'Llama 2: Open Foundation and Fine-Tuned Chat Models'}),\n",
" Document(page_content='Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aur’elien Rodriguez, Armand Joulin, Edouard\\nGrave, and Guillaume Lample. Llama: Open and efficient foundation language models. arXiv preprint\\narXiv:2302.13971 , 2023.\\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser,\\nand Illia Polosukhin. Attention is all you need, 2017.\\nOriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung,\\nDavid H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using\\nmulti-agent reinforcement learning. Nature, 575(7782):350–354, 2019.\\nYizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and HannanehHajishirzi. Self-instruct: Aligninglanguagemodel withselfgeneratedinstructions. arXivpreprint', metadata={'source': 'http://arxiv.org/pdf/2307.09288', 'title': 'Llama 2: Open Foundation and Fine-Tuned Chat Models'})]"
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What is so special about Llama 2?\"\n",
"\n",
"vectorstore.similarity_search(query, k=3)"
]
},
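{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on retrieval quality, the vectorstore also exposes `similarity_search_with_score`, which returns each document alongside its similarity score (assuming the method is available in your LangChain version; it is in recent releases). A minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# inspect similarity scores alongside the retrieved documents\n",
"for doc, score in vectorstore.similarity_search_with_score(query, k=3):\n",
"    print(f\"{score:.3f} :: {doc.metadata['title']}\")"
]
},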
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We return a lot of text here and it's not that clear what we need or what is relevant. Fortunately, our LLM will be able to parse this information much faster than us. All we need to do is connect the output from our `vectorstore` to our `chat` chatbot. To do that we can use the same logic as we used earlier."
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"def augment_prompt(query: str):\n",
"    # get top 3 results from knowledge base\n",
"    results = vectorstore.similarity_search(query, k=3)\n",
"    # get the text from the results\n",
"    source_knowledge = \"\\n\".join([x.page_content for x in results])\n",
"    # feed into an augmented prompt\n",
"    augmented_prompt = f\"\"\"Using the contexts below, answer the query.\n",
"\n",
"    Contexts:\n",
"    {source_knowledge}\n",
"\n",
"    Query: {query}\"\"\"\n",
"    return augmented_prompt"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Using this we produce an augmented prompt:"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using the contexts below, answer the query.\n",
"\n",
" Contexts:\n",
" Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang\n",
"Ross Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang\n",
"Angela Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic\n",
"Sergey Edunov Thomas Scialom\u0003\n",
"GenAI, Meta\n",
"Abstract\n",
"In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned\n",
"large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.\n",
"Our fine-tuned LLMs, called L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc , are optimized for dialogue use cases. Our\n",
"models outperform open-source chat models on most benchmarks we tested, and based on\n",
"ourhumanevaluationsforhelpfulnessandsafety,maybeasuitablesubstituteforclosedsource models. We provide a detailed description of our approach to fine-tuning and safety\n",
"asChatGPT,BARD,andClaude. TheseclosedproductLLMsareheavilyfine-tunedtoalignwithhuman\n",
"preferences, which greatly enhances their usability and safety. This step can require significant costs in\n",
"computeandhumanannotation,andisoftennottransparentoreasilyreproducible,limitingprogresswithin\n",
"the community to advance AI alignment research.\n",
"In this work, we develop and release Llama 2, a family of pretrained and fine-tuned LLMs, L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle and\n",
"L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc , at scales up to 70B parameters. On the series of helpfulness and safety benchmarks we tested,\n",
"L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc models generally perform better than existing open-source models. They also appear to\n",
"be on par with some of the closed-source models, at least on the human evaluations we performed (see\n",
"Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aur’elien Rodriguez, Armand Joulin, Edouard\n",
"Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. arXiv preprint\n",
"arXiv:2302.13971 , 2023.\n",
"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser,\n",
"and Illia Polosukhin. Attention is all you need, 2017.\n",
"Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung,\n",
"David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using\n",
"multi-agent reinforcement learning. Nature, 575(7782):350–354, 2019.\n",
"Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and HannanehHajishirzi. Self-instruct: Aligninglanguagemodel withselfgeneratedinstructions. arXivpreprint\n",
"\n",
" Query: What is so special about Llama 2?\n"
]
}
],
"source": [
"print(augment_prompt(query))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"There is still a lot of text here, so let's pass it on to our chat model to see how it performs."
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Llama 2 is a collection of pretrained and fine-tuned large language models (LLMs) that range in scale from 7 billion to 70 billion parameters. These LLMs, such as L/l.sc/a.sc/m.sc/a.sc/t.sc and L/l.sc/a.sc/m.sc/a.sc/t.sc-C/h.sc/a.sc/t.sc, are specifically optimized for dialogue use cases.\n",
"\n",
"The special aspect of Llama 2 is that its fine-tuned LLMs outperform open-source chat models on various benchmarks, demonstrating superior performance. In fact, based on humane evaluations for helpfulness and safety, Llama 2 models are considered as potential substitutes for closed-source models. Closed-source models like ChatGPT, BARD, and Claude are heavily fine-tuned to align with human preferences, enhancing usability and safety.\n",
"\n",
"The development and release of Llama 2 contribute to the progress of AI alignment research, as it provides transparent and reproducible approaches to fine-tuning and safety. This is in contrast to closed-source models, which often lack transparency and hinder community advancements in the field.\n",
"\n",
"Overall, the special features of Llama 2 lie in its large-scale pretrained and fine-tuned LLMs that excel in dialogue applications and its commitment to transparency and reproducibility in research.\n"
]
}
],
"source": [
"# create a new user prompt\n",
"prompt = HumanMessage(\n",
"    content=augment_prompt(query)\n",
")\n",
"# add to messages\n",
"messages.append(prompt)\n",
"\n",
"res = chat(messages)\n",
"\n",
"print(res.content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can continue with more Llama 2 questions. Let's try _without_ RAG first:"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"According to the provided context, the paper mentions that they provide a detailed description of their approach to fine-tuning and safety, similar to other closed-source models like ChatGPT, BARD, and Claude. However, the specific safety measures used in the development of Llama 2 are not mentioned in the given context. To obtain more detailed information about the safety measures employed in Llama 2, it would be necessary to refer to the original paper or additional sources related to Llama 2.\n"
]
}
],
"source": [
"prompt = HumanMessage(\n",
"    content=\"what safety measures were used in the development of llama 2?\"\n",
")\n",
"\n",
"res = chat(messages + [prompt])\n",
"print(res.content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The chatbot is able to respond about Llama 2 thanks to its conversational history stored in `messages`. However, it doesn't know anything about the safety measures themselves, as we have not provided it with that information via the RAG pipeline. Let's try again, but with RAG."
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The safety measures used in the development of Llama 2 include safety-specific data annotation and tuning, conducting red-teaming, and employing iterative evaluations. These measures were taken to increase the safety of the models and ensure responsible development. The paper also provides a thorough description of the fine-tuning methodology and approach to improving LLM safety. By sharing these details and being open about the process, the aim is to enable the community to reproduce fine-tuned LLMs and continue to improve their safety, promoting responsible development in the field.\n"
]
}
],
"source": [
"prompt = HumanMessage(\n",
"    content=augment_prompt(\n",
"        \"what safety measures were used in the development of llama 2?\"\n",
"    )\n",
")\n",
"\n",
"res = chat(messages + [prompt])\n",
"print(res.content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We get a much more informed response that includes several items missing in the previous non-RAG response, such as \"red-teaming\", \"iterative evaluations\", and the intention of the researchers to share this research to help \"improve their safety, promoting responsible development in the field\"."
]
},
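{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you're done experimenting, it's good practice to delete the index so it doesn't keep consuming resources. This is a suggested cleanup step rather than part of the walkthrough above; skip it if you want to keep querying your knowledge base."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# delete the index to free up resources once you're finished\n",
"pinecone.delete_index(index_name)"
]
}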
],
"metadata": {
"kernelspec": {
"display_name": "redacre",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}