{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Hypothetical Prompt Embeddings (HyPE)\n",
"\n",
"## Overview\n",
"\n",
"This code implements a Retrieval-Augmented Generation (RAG) system enhanced by Hypothetical Prompt Embeddings (HyPE). Unlike traditional RAG pipelines that struggle with query-document style mismatch, HyPE precomputes hypothetical questions during the indexing phase. This transforms retrieval into a question-question matching problem, eliminating the need for expensive runtime query expansion techniques.\n",
"\n",
"## Key Components of notebook\n",
"\n",
"1. PDF processing and text extraction\n",
"2. Text chunking to maintain coherent information units\n",
"3. **Hypothetical Prompt Embedding Generation** using an LLM to create multiple proxy questions per chunk\n",
"4. Vector store creation using [FAISS](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/) and OpenAI embeddings\n",
"5. Retriever setup for querying the processed documents\n",
"6. Evaluation of the RAG system\n",
"\n",
"## Method Details\n",
"\n",
"### Document Preprocessing\n",
"\n",
"1. The PDF is loaded using `PyPDFLoader`.\n",
"2. The text is split into chunks using `RecursiveCharacterTextSplitter` with specified chunk size and overlap.\n",
"\n",
"### Hypothetical Question Generation\n",
"\n",
"Instead of embedding raw text chunks, HyPE **generates multiple hypothetical prompts** for each chunk. These **precomputed questions** simulate user queries, improving alignment with real-world searches. This removes the need for runtime synthetic answer generation needed in techniques like HyDE.\n",
"\n",
"### Vector Store Creation\n",
"\n",
"1. Each hypothetical question is embedded using OpenAI embeddings.\n",
"2. A FAISS vector store is built, associating **each question embedding with its original chunk**.\n",
"3. This approach **stores multiple representations per chunk**, increasing retrieval flexibility.\n",
"\n",
"### Retriever Setup\n",
"\n",
"1. The retriever is optimized for **question-question matching** rather than direct document retrieval.\n",
"2. The FAISS index enables **efficient nearest-neighbor** search over the hypothetical prompt embeddings.\n",
"3. Retrieved chunks provide a **richer and more precise context** for downstream LLM generation.\n",
"\n",
"## Key Features\n",
"\n",
"1. **Precomputed Hypothetical Prompts** Improves query alignment without runtime overhead.\n",
"2. **Multi-Vector Representation** Each chunk is indexed multiple times for broader semantic coverage.\n",
"3. **Efficient Retrieval** FAISS ensures fast similarity search over the enhanced embeddings.\n",
"4. **Modular Design** The pipeline is easy to adapt for different datasets and retrieval settings. Additionally it's compatible with most optimizations like reranking etc.\n",
"\n",
"## Evaluation\n",
"\n",
"HyPE's effectiveness is evaluated across multiple datasets, showing:\n",
"\n",
"- Up to 42 percentage points improvement in retrieval precision\n",
"- Up to 45 percentage points improvement in claim recall\n",
" (See full evaluation results in [preprint](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5139335))\n",
"\n",
"## Benefits of this Approach\n",
"\n",
"1. **Eliminates Query-Time Overhead** All hypothetical generation is done offline at indexing.\n",
"2. **Enhanced Retrieval Precision** Better alignment between queries and stored content.\n",
"3. **Scalable & Efficient** No addinal per-query computational cost; retrieval is as fast as standard RAG.\n",
"4. **Flexible & Extensible** Can be combined with advanced RAG techniques like reranking.\n",
"\n",
"## Conclusion\n",
"\n",
"HyPE provides a scalable and efficient alternative to traditional RAG systems, overcoming query-document style mismatch while avoiding the computational cost of runtime query expansion. By moving hypothetical prompt generation to indexing, it significantly enhances retrieval precision and efficiency, making it a practical solution for real-world applications.\n",
"\n",
"For further details, refer to the full paper: [preprint](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5139335)\n",
"\n",
"\n",
"<div style=\"text-align: center;\">\n",
"\n",
"<img src=\"../images/hype.svg\" alt=\"HyPE\" style=\"width:70%; height:auto;\">\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import libraries and environment variables"
]
},
{
"cell_type": "code",
"execution_count": 63,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import sys\n",
"import faiss\n",
"from tqdm import tqdm\n",
"from dotenv import load_dotenv\n",
"from concurrent.futures import ThreadPoolExecutor, as_completed\n",
"from langchain_community.docstore.in_memory import InMemoryDocstore\n",
"\n",
"\n",
"# Load environment variables from a .env file\n",
"load_dotenv()\n",
"\n",
"# Set the OpenAI API key environment variable (comment out if not using OpenAI)\n",
"if not os.getenv('OPENAI_API_KEY'):\n",
" os.environ[\"OPENAI_API_KEY\"] = input(\"Please enter your OpenAI API key: \")\n",
"else:\n",
" os.environ[\"OPENAI_API_KEY\"] = os.getenv('OPENAI_API_KEY')\n",
"\n",
"sys.path.append(os.path.abspath(os.path.join(os.getcwd(), '..'))) # Add the parent directory to the path since we work with notebooks\n",
"from helper_functions import *\n",
"from evaluation.evalute_rag import *\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define constants\n",
"\n",
"- `PATH`: path to the data, to be embedded into the RAG pipeline\n",
"\n",
"This tutorial uses OpenAI endpoint ([avalible models](https://platform.openai.com/docs/pricing)). \n",
"- `LANGUAGE_MODEL_NAME`: The name of the language model to be used. \n",
"- `EMBEDDING_MODEL_NAME`: The name of the embedding model to be used.\n",
"\n",
"The tutroial uses a `RecursiveCharacterTextSplitter` chunking approach where the chunking length function used is python `len` function. The chunking varables to be tweaked here are:\n",
"- `CHUNK_SIZE`: The minimum length of one chunk\n",
"- `CHUNK_OVERLAP`: The overlap of two consecutive chunks."
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {},
"outputs": [],
"source": [
"PATH = \"../data/Understanding_Climate_Change.pdf\"\n",
"LANGUAGE_MODEL_NAME = \"gpt-4o-mini\"\n",
"EMBEDDING_MODEL_NAME = \"text-embedding-3-small\"\n",
"CHUNK_SIZE = 1000\n",
"CHUNK_OVERLAP = 200"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define generation of Hypothetical Prompt Embeddings\n",
"\n",
"The code block below generates hypothetical questions for each text chunk and embeds them for retrieval.\n",
"\n",
"- An LLM extracts key questions from the input chunk.\n",
"- These questions are embedded using OpenAI's model.\n",
"- The function returns the original chunk and its prompt embeddings later used for retrieval.\n",
"\n",
"To ensure clean output, extra newlines are removed, and regex parsing can improve list formatting when needed."
]
},
{
"cell_type": "code",
"execution_count": 65,
"metadata": {},
"outputs": [],
"source": [
"def generate_hypothetical_prompt_embeddings(chunk_text: str):\n",
" \"\"\"\n",
" Uses the LLM to generate multiple hypothetical questions for a single chunk.\n",
" These questions will be used as 'proxies' for the chunk during retrieval.\n",
"\n",
" Parameters:\n",
" chunk_text (str): Text contents of the chunk\n",
"\n",
" Returns:\n",
" chunk_text (str): Text contents of the chunk. This is done to make the \n",
" multithreading easier\n",
" hypothetical prompt embeddings (List[float]): A list of embedding vectors\n",
" generated from the questions\n",
" \"\"\"\n",
" llm = ChatOpenAI(temperature=0, model_name=LANGUAGE_MODEL_NAME)\n",
" embedding_model = OpenAIEmbeddings(model=EMBEDDING_MODEL_NAME)\n",
"\n",
" question_gen_prompt = PromptTemplate.from_template(\n",
" \"Analyze the input text and generate essential questions that, when answered, \\\n",
" capture the main points of the text. Each question should be one line, \\\n",
" without numbering or prefixes.\\n\\n \\\n",
" Text:\\n{chunk_text}\\n\\nQuestions:\\n\"\n",
" )\n",
" question_chain = question_gen_prompt | llm | StrOutputParser()\n",
"\n",
" # parse questions from response\n",
" # Notes: \n",
" # - gpt4o likes to split questions by \\n\\n so we remove one \\n\n",
" # - for production or if using smaller models from ollama, it's beneficial to use regex to parse \n",
" # things like (un)ordeed lists\n",
" # r\"^\\s*[\\-\\*\\•]|\\s*\\d+\\.\\s*|\\s*[a-zA-Z]\\)\\s*|\\s*\\(\\d+\\)\\s*|\\s*\\([a-zA-Z]\\)\\s*|\\s*\\([ivxlcdm]+\\)\\s*\"\n",
" questions = question_chain.invoke({\"chunk_text\": chunk_text}).replace(\"\\n\\n\", \"\\n\").split(\"\\n\")\n",
" \n",
" return chunk_text, embedding_model.embed_documents(questions)\n"
]
},
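{
"cell_type": "markdown",
"metadata": {},
"source": [
"In production, or when using smaller local models, the generated question list may come back with bullet or number prefixes. The comments in the code above reference a regex for this case; below is a minimal sketch of such a cleanup step (the pattern and helper name are illustrative, not part of the pipeline itself):\n",
"\n",
"```python\n",
"import re\n",
"\n",
"# Strip common (un)ordered list prefixes: -, *, •, 1., a), (1), (a), (iv), ...\n",
"LIST_PREFIX_RE = re.compile(r\"^\\s*(?:[-*•]\\s*|\\d+\\.\\s*|[a-zA-Z]\\)\\s*|\\(\\d+\\)\\s*|\\([a-zA-Z]\\)\\s*|\\([ivxlcdm]+\\)\\s*)\")\n",
"\n",
"def parse_questions(raw: str) -> list[str]:\n",
"    \"\"\"Split raw LLM output into clean, one-question-per-line strings.\"\"\"\n",
"    lines = (LIST_PREFIX_RE.sub(\"\", line).strip() for line in raw.split(\"\\n\"))\n",
"    return [line for line in lines if line]\n",
"```"
]
},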
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define creation and population of FAISS Vector Store\n",
"\n",
"The code block below builds a FAISS vector store by embedding text chunks in parallel.\n",
"\n",
"What happens?\n",
"- Parallel processing Uses threading to generate embeddings faster.\n",
"- FAISS initialization Sets up an L2 index for efficient similarity search.\n",
"- Chunk embedding Each chunk is stored multiple times, once for each generated question embedding.\n",
"- In-memory storage Uses InMemoryDocstore for fast lookup.\n",
"\n",
"This ensures efficient retrieval, improving query alignment with precomputed question embeddings."
]
},
{
"cell_type": "code",
"execution_count": 66,
"metadata": {},
"outputs": [],
"source": [
"def prepare_vector_store(chunks: List[str]):\n",
" \"\"\"\n",
" Creates and populates a FAISS vector store from a list of text chunks.\n",
"\n",
" This function processes a list of text chunks in parallel, generating \n",
" hypothetical prompt embeddings for each chunk.\n",
" The embeddings are stored in a FAISS index for efficient similarity search.\n",
"\n",
" Parameters:\n",
" chunks (List[str]): A list of text chunks to be embedded and stored.\n",
"\n",
" Returns:\n",
" FAISS: A FAISS vector store containing the embedded text chunks.\n",
" \"\"\"\n",
"\n",
" # Wait with initialization to see vector lengths\n",
" vector_store = None \n",
"\n",
" with ThreadPoolExecutor() as pool: \n",
" # Use threading to speed up generation of prompt embeddings\n",
" futures = [pool.submit(generate_hypothetical_prompt_embeddings, c) for c in chunks]\n",
" \n",
" # Process embeddings as they complete\n",
" for f in tqdm(as_completed(futures), total=len(chunks)): \n",
" \n",
" chunk, vectors = f.result() # Retrieve the processed chunk and its embeddings\n",
" \n",
" # Initialize the FAISS vector store on the first chunk\n",
" if vector_store == None: \n",
" vector_store = FAISS(\n",
" embedding_function=OpenAIEmbeddings(model=EMBEDDING_MODEL_NAME), # Define embedding model\n",
" index=faiss.IndexFlatL2(len(vectors[0])) # Define an L2 index for similarity search\n",
" docstore=InMemoryDocstore(), # Use in-memory document storage\n",
" index_to_docstore_id={} # Maintain index-to-document mapping\n",
" )\n",
" \n",
" # Pair the chunk's content with each generated embedding vector.\n",
" # Each chunk is inserted multiple times, once for each prompt vector\n",
" chunks_with_embedding_vectors = [(chunk.page_content, vec) for vec in vectors]\n",
" \n",
" # Add embeddings to the store\n",
" vector_store.add_embeddings(chunks_with_embedding_vectors) \n",
"\n",
" return vector_store # Return the populated vector store\n"
]
},
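{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note: an unbounded `ThreadPoolExecutor` can trigger OpenAI rate limits on large corpora. A simple mitigation is to cap the pool size (the worker count below is illustrative and should be tuned to your rate limits):\n",
"\n",
"```python\n",
"with ThreadPoolExecutor(max_workers=8) as pool:\n",
"    ...  # submit chunks as above\n",
"```"
]
},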
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Encode PDF into a FAISS Vector Store\n",
"\n",
"The code block below processes a PDF file and stores its content as embeddings for retrieval.\n",
"\n",
"What happens?\n",
"- PDF loading Extracts text from the document.\n",
"- Chunking Splits text into overlapping segments for better context retention.\n",
"- Preprocessing Cleans text to improve embedding quality.\n",
"- Vector store creation Generates embeddings and stores them in FAISS for retrieval."
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [],
"source": [
"def encode_pdf(path, chunk_size=1000, chunk_overlap=200):\n",
" \"\"\"\n",
" Encodes a PDF book into a vector store using OpenAI embeddings.\n",
"\n",
" Args:\n",
" path: The path to the PDF file.\n",
" chunk_size: The desired size of each text chunk.\n",
" chunk_overlap: The amount of overlap between consecutive chunks.\n",
"\n",
" Returns:\n",
" A FAISS vector store containing the encoded book content.\n",
" \"\"\"\n",
"\n",
" # Load PDF documents\n",
" loader = PyPDFLoader(path)\n",
" documents = loader.load()\n",
"\n",
" # Split documents into chunks\n",
" text_splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size=chunk_size, chunk_overlap=chunk_overlap, length_function=len\n",
" )\n",
" texts = text_splitter.split_documents(documents)\n",
" cleaned_texts = replace_t_with_space(texts)\n",
"\n",
" vectorstore = prepare_vector_store(cleaned_texts)\n",
"\n",
" return vectorstore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create HyPE vector store\n",
"\n",
"Now we process the PDF and store its embeddings.\n",
"This step initializes the FAISS vector store with the encoded document."
]
},
{
"cell_type": "code",
"execution_count": 71,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 97/97 [00:22<00:00, 4.40it/s]\n"
]
}
],
"source": [
"# Chunk size can be quite large with HyPE as we are not loosing percision with more\n",
"# information. For production, test how exhaustive your model is in generating sufficient \n",
"# amount of questions per chunk. This will mostly depend on your information density.\n",
"chunks_vector_store = encode_pdf(PATH, chunk_size=CHUNK_SIZE, chunk_overlap=CHUNK_OVERLAP)"
]
},
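{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since all of HyPE's extra cost is paid at indexing time, it can be worth persisting the populated store so the hypothetical prompts are not regenerated on every run. A minimal sketch (the path is illustrative; recent LangChain versions require `allow_dangerous_deserialization=True` to reload the pickled docstore):\n",
"\n",
"```python\n",
"# Save the populated store to disk\n",
"chunks_vector_store.save_local(\"hype_faiss_index\")\n",
"\n",
"# Reload later without re-generating hypothetical prompt embeddings\n",
"reloaded_store = FAISS.load_local(\n",
"    \"hype_faiss_index\",\n",
"    OpenAIEmbeddings(model=EMBEDDING_MODEL_NAME),\n",
"    allow_dangerous_deserialization=True,\n",
")\n",
"```"
]
},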
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create retriever\n",
"\n",
"Now we set up the retriever to fetch relevant chunks from the vector store.\n",
"\n",
"Retrieves the top `k=3` most relevant chunks based on query similarity."
]
},
{
"cell_type": "code",
"execution_count": 79,
"metadata": {},
"outputs": [],
"source": [
"chunks_query_retriever = chunks_vector_store.as_retriever(search_kwargs={\"k\": 3})"
]
},
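{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because each chunk is indexed once per hypothetical prompt, the top-k hits can contain duplicates of the same chunk. Besides the set-based deduplication used in the test below, maximal marginal relevance (MMR) search is an optional way to diversify results (a sketch; the parameters are illustrative):\n",
"\n",
"```python\n",
"mmr_retriever = chunks_vector_store.as_retriever(\n",
"    search_type=\"mmr\",\n",
"    search_kwargs={\"k\": 3, \"fetch_k\": 12},\n",
")\n",
"```"
]
},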
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test retriever\n",
"\n",
"Now we test retrieval using a sample query.\n",
"\n",
"- Queries the vector store to find the most relevant chunks.\n",
"- Deduplicates results to remove potentially repeated chunks.\n",
"- Displays the retrieved context for inspection.\n",
"\n",
"This step verifies that the retriever returns meaningful and diverse information for the given question."
]
},
{
"cell_type": "code",
"execution_count": 80,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Context 1:\n",
"Most of these climate changes are attributed to very small variations in Earth's orbit that \n",
"change the amount of solar energy our planet receives. During the Holocene epoch, which \n",
"began at the end of the last ice age, human societies f lourished, but the industrial era has seen \n",
"unprecedented changes. \n",
"Modern Observations \n",
"Modern scientific observations indicate a rapid increase in global temperatures, sea levels, \n",
"and extreme weather events. The Intergovernmental Panel on Climate Change (IPCC) has \n",
"documented these changes extensively. Ice core samples, tree rings, and ocean sediments \n",
"provide a historical record that scientists use to understand past climate conditions and \n",
"predict future trends. The evidence overwhelmingly shows that recent changes are primarily \n",
"driven by human activities, particularly the emission of greenhou se gases. \n",
"Chapter 2: Causes of Climate Change \n",
"Greenhouse Gases\n",
"\n",
"\n",
"Context 2:\n",
"driven by human activities, particularly the emission of greenhou se gases. \n",
"Chapter 2: Causes of Climate Change \n",
"Greenhouse Gases \n",
"The primary cause of recent climate change is the increase in greenhouse gases in the \n",
"atmosphere. Greenhouse gases, such as carbon dioxide (CO2), methane (CH4), and nitrous \n",
"oxide (N2O), trap heat from the sun, creating a \"greenhouse effect.\" This effect is essential \n",
"for life on Earth, as it keeps the planet warm enough to support life. However, human \n",
"activities have intensified this natural process, leading to a warmer climate. \n",
"Fossil Fuels \n",
"Burning fossil fuels for energy releases large amounts of CO2. This includes coal, oil, and \n",
"natural gas used for electricity, heating, and transportation. The industrial revolution marked \n",
"the beginning of a significant increase in fossil fuel consumption, which continues to rise \n",
"today. \n",
"Coal\n",
"\n",
"\n",
"Context 3:\n",
"Understanding Climate Change \n",
"Chapter 1: Introduction to Climate Change \n",
"Climate change refers to significant, long -term changes in the global climate. The term \n",
"\"global climate\" encompasses the planet's overall weather patterns, including temperature, \n",
"precipitation, and wind patterns, over an extended period. Over the past cent ury, human \n",
"activities, particularly the burning of fossil fuels and deforestation, have significantly \n",
"contributed to climate change. \n",
"Historical Context \n",
"The Earth's climate has changed throughout history. Over the past 650,000 years, there have \n",
"been seven cycles of glacial advance and retreat, with the abrupt end of the last ice age about \n",
"11,700 years ago marking the beginning of the modern climate era and human civilization. \n",
"Most of these climate changes are attributed to very small variations in Earth's orbit that \n",
"change the amount of solar energy our planet receives. During the Holocene epoch, which\n",
"\n",
"\n"
]
}
],
"source": [
"test_query = \"What is the main cause of climate change?\"\n",
"context = retrieve_context_per_question(test_query, chunks_query_retriever)\n",
"context = list(set(context))\n",
"show_context(context)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Evaluate results"
]
},
{
"cell_type": "code",
"execution_count": 76,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'questions': ['1. **Multiple Choice: Causes of Climate Change**',\n",
" ' - What is the primary cause of the current climate change trend?',\n",
" ' A) Solar radiation variations',\n",
" ' B) Natural cycles of the Earth',\n",
" ' C) Human activities, such as burning fossil fuels',\n",
" ' D) Volcanic eruptions',\n",
" '',\n",
" '2. **True or False: Impact on Biodiversity**',\n",
" ' - True or False: Climate change does not have any significant impact on the migration patterns and extinction rates of various species.',\n",
" '',\n",
" '3. **Short Answer: Mitigation Strategies**',\n",
" ' - What are two effective strategies that can be implemented at a community level to mitigate the effects of climate change?',\n",
" '',\n",
" '4. **Matching: Climate Change Effects**',\n",
" ' - Match the following effects of climate change (numbered) with their likely consequences (lettered).',\n",
" ' 1. Rising sea levels',\n",
" ' 2. Increased frequency of extreme weather events',\n",
" ' 3. Melting polar ice caps',\n",
" ' 4. Ocean acidification',\n",
" ' ',\n",
" ' A) Displacement of coastal communities',\n",
" ' B) Loss of marine biodiversity',\n",
" ' C) Increased global temperatures',\n",
" ' D) More frequent and severe hurricanes and floods',\n",
" '',\n",
" '5. **Essay: International Cooperation**',\n",
" ' - Discuss the importance of international cooperation in combating climate change. Include examples of successful global agreements or initiatives and explain how they have contributed to addressing climate change.'],\n",
" 'results': ['```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 2,\\n \"Completeness\": 1,\\n \"Conciseness\": 2\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 4,\\n \"Completeness\": 3,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 1,\\n \"Completeness\": 1,\\n \"Conciseness\": 2\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 1,\\n \"Completeness\": 1,\\n \"Conciseness\": 2\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 2,\\n \"Completeness\": 1,\\n \"Conciseness\": 2\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 2,\\n \"Completeness\": 1,\\n \"Conciseness\": 2\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 4,\\n \"Completeness\": 3,\\n \"Conciseness\": 2\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 2,\\n \"Completeness\": 1,\\n \"Conciseness\": 2\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 4,\\n \"Completeness\": 3,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 4,\\n \"Completeness\": 2,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 4,\\n \"Completeness\": 3,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 4,\\n \"Completeness\": 3,\\n \"Conciseness\": 2\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 4,\\n \"Completeness\": 3,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 5,\\n \"Completeness\": 4,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 2,\\n \"Completeness\": 1,\\n \"Conciseness\": 2\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 4,\\n \"Completeness\": 3,\\n \"Conciseness\": 3\\n}\\n```',\n",
" '```json\\n{\\n \"Relevance\": 4,\\n \"Completeness\": 2,\\n \"Conciseness\": 3\\n}\\n```'],\n",
" 'average_scores': None}"
]
},
"execution_count": 76,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluate_rag(chunks_query_retriever)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}