{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Experimentally Testing Claude's Long-Context QA Abilities"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[DISCLAIMER: This notebook was created using Claude 2 models and is considered legacy.]\n",
"\n",
"In this notebook, we will take a look at Claude's ability to answer questions about the meeting notes from a long government document. We will also see how this ability varies depending on the location of the relevant information. The government document is split up into many smaller subsections. Each question will be about information contained in one of those subsections. All the questions and answers will be written by Claude!\n",
"\n",
"Summary of what is to come:\n",
"\n",
"1. Downloading and preprocessing the data\n",
"2. Using Claude to write 400 multiple-choice questions about specific sections of the data\n",
"3. Validating that Claude is able to answer those questions when given that section alone\n",
"4. Validating that Claude is unable to answer those questions when given a random other chunk\n",
"5. Testing Claude's ability to answer the questions even when the context size gets very long."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data Prep"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To start: download the document and split it up into chunks. Each chunk corresponds to a meeting note from one department, such as the Department of Transportation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import anthropic, os, re, requests, trio, pandas as pd\n",
"import numpy as np\n",
"from bs4 import BeautifulSoup\n",
"API_KEY = os.environ['ANTHROPIC_API_KEY']\n",
"CLIENT = anthropic.Anthropic(api_key=API_KEY)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"88\n",
" 491013P\n",
"\n",
"\n",
"\n",
"NATIONAL AERONAUTICS AND SPACE ADMINISTRATION\n",
"14 CFR Part 1204\n",
"[NASA Document No: NASA23054; NASA Docket No: NASA20230003]\n",
"RIN 2700AE70\n",
"Delegations and Designations; Correction\n",
"\n",
"AGENCY:\n",
"National Aeronautics and Space Administration.\n",
"\n",
"\n",
"ACTION:\n",
"Direct final rule; correction.\n",
"\n",
"\n",
"SUMMARY:\n",
"\n",
" NASA published a document in the \n",
" Federal Register\n",
" on July 5, 2023, concerning Delegations and Designations. The document contained an error in amendatory instruction 2.a.\n",
" \n",
"\n",
"\n",
"DATES:\n",
"\n",
" This correction is effective September 5, 2023. If adverse comments are received on the direct final rule published at 88 FR 42870, NASA will publish a timely withdrawal of the rule and this correction to the rule in the \n",
" Federal Register\n",
" .\n",
" \n",
"\n",
"\n",
"FOR FURTHER INFORMATION CONTACT:\n",
"Daniela Cruzado, 2022957589.\n",
"\n",
"\n",
"\n",
"SUPPLEMENTARY INFORMATION:\n",
"Correction\n",
"\n",
" In the \n",
" Federal Register\n",
" of July 5, 2023, in FR Doc. 202314042, published at 88 FR 42870, the following correction is made:\n",
" \n",
"\n",
"1204.501\n",
"[Amended]\n",
"\n",
"\n",
"1. On page 42871, in the first column, correct amendatory instruction 2.a. for §1204.501 to read: “a. In paragraph (a) introductory text, add the words “the Office of” before the word “Strategic” and remove the words “Integrated Asset Management” and add in their place the words “Facilities and Real Estate.”\n",
"\n",
"\n",
"Nanette Smith,\n",
"Team Lead, NASA Directives and Regulations.\n",
"\n",
"\n",
"[FR Doc. 202314794 Filed 71223; 8:45 am]\n",
"\n"
]
}
],
"source": [
"url = 'https://www.govinfo.gov/content/pkg/FR-2023-07-13/xml/FR-2023-07-13.xml'\n",
"\n",
"response = requests.get(url)\n",
"soup = BeautifulSoup(response.text, 'xml')\n",
"\n",
"text = soup.get_text()\n",
"chunks = text.split('BILLING CODE')\n",
"chunks[0] = chunks[0][chunks[0].index('DEPARTMENT OF TRANSPORTATION'):] # First chunk has some extra material at the beginning.\n",
"\n",
"# We'll throw out the chunks that are extra-long or extra-short.\n",
"tokenizer = CLIENT.get_tokenizer()\n",
"chunks = [c for c in chunks if len(tokenizer.encode(c)) <= 5000 and len(tokenizer.encode(c)) > 200]\n",
"print(len(chunks))\n",
"print(chunks[2])"
]
},
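{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, here's a quick look at the token-length distribution of the chunks we kept; per the filter above, every chunk should fall between 200 and 5000 tokens."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Token lengths of the retained chunks (all should be in the 200-5000 range).\n",
"chunk_lens = [len(tokenizer.encode(c)) for c in chunks]\n",
"print(pd.Series(chunk_lens).describe())"
]
},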
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Question and Answer Generation With Claude"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, it's time to use Claude to generate questions and answers! We'll use a two-shot prompt template that includes two example (chunks, questions, answers) groups along with instructions. We'll ask for five questions about each chunk, with 3 wrong answers and 1 right answer."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"example_passage1 = \"\"\"DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT\n",
"[Docket No. FR6381N01]\n",
"Improving Access to Public Benefit Programs; Request for Comment\n",
"AGENCY:\n",
"Office of Policy Development and Research, Department of Housing and Urban Development, HUD.\n",
"ACTION:\n",
"Request for comments.\n",
"SUMMARY:\n",
"The Department of Housing and Urban Development is seeking comments from the public regarding the burden faced when applying for or maintaining eligibility for HUD's housing programs. HUD recognizes that these administrative hurdles and paperwork burdens disproportionately fall on the most vulnerable populations and prevent individuals and entities from accessing benefits for which they are legally eligible. Public comment submitted in response to this request for comment will assist HUD in better understanding, identifying, and reducing HUD's public program administrative burden and ultimately further its mission to pursue transformative housing and community-building policies and programs.\n",
"DATES:\n",
"Comment Due Date: August 14, 2023.\n",
"ADDRESSES:\n",
"Interested persons are invited to submit comments responsive to this request for comment. There are three methods for submitting public comments. All submissions must refer to the above docket number and title.\n",
"1. Electronic Submission of Comments. Comments may be submitted electronically through the Federal eRulemaking Portal at www.regulations.gov. HUD strongly encourages commenters to submit comments electronically through www.regulations.gov. Electronic submission of comments allows the commenter maximum time to prepare and submit a comment, ensures timely receipt by HUD, and enables HUD to make comments immediately available to the public. Comments submitted electronically through www.regulations.gov can be viewed by other commenters and interested members of the public. Commenters should follow the instructions provided on that website to submit comments electronically.\n",
"2. Submission of Comments by Mail. Comments may be submitted by mail to the Regulations Division, Office of General Counsel, Department of Housing and Urban Development, 451 7th Street SW, Room 10276, Washington, DC 204100500.\n",
"3. Submission of Comments by Electronic Mail. Comments may be submitted by electronic mail to the Regulations Division, Office of General Counsel, Department of Housing and Urban Development at improvingaccesstopublicbenefitprograms@hud.gov.\n",
"Note: To receive consideration as a public comment, comments must be submitted through one of the three methods specified above.\n",
"Public Inspection of Public Comments. Copies of all comments submitted will be available for inspection and downloading at www.regulations.gov. HUD will also make all properly submitted comments and communications available for public inspection and copying during regular business hours at the above address. Due to security measures at the HUD Headquarters building, you must schedule an appointment in advance to review the public comments by calling the Regulations Division at 2027083055 (this is not a toll-free number). HUD welcomes and is prepared to receive calls from individuals who are deaf or hard of hearing, as well as individuals with speech or communication disabilities. To learn more about how to make an accessible telephone call, please visit https://www.fcc.gov/consumers/guides/telecommunications-relay-service-trs. Copies of all comments submitted are available for inspection and downloading at www.regulations.gov.\n",
"FOR FURTHER INFORMATION CONTACT:\n",
"Todd Richardson, General Deputy Assistant Secretary, Office of Policy Development and Research, Department of Housing and Urban Development, 451 7th Street SW, Room 8100, Washington, DC 20410, telephone 2024025706 (this is not a toll-free number). HUD welcomes and is prepared to receive calls from individuals who are deaf or hard of hearing, as well as individuals with speech or communication disabilities. To learn more about how to make an accessible telephone call, please visit https://www.fcc.gov/consumers/guides/telecommunications-relay-service-trs.\n",
"SUPPLEMENTARY INFORMATION:\n",
"I. Background\n",
"Applying for and maintaining eligibility for public benefits and services, including housing programs, often requires completing and submitting a variety of forms. HUD and its housing partners that administer its programs (including Public Housing Authorities, State and local governments, non-profit recipients of CDBG programs, Multifamily Housing owners, and FHA lenders) use the information collected by these forms to determine whether applicants are eligible or if current recipients continue to be eligible. These forms and other methods of information collections may create burdens that disproportionately fall on the most vulnerable populations and prevent individuals and entities from accessing services for which they are legally eligible. These burdens include the expenditure of time, effort, or financial resources to generate, maintain, or provide information to HUD or its housing partners. For example, individuals may be required to provide a list of family members, the family's total annual family income, the assets available to each family member in the household, and the value of such assets in order to access public housing. Individuals applying for or maintaining eligibility for public benefits or services may also face burdens such as time spent gathering records and documentation needed to prove eligibility, travel time associated with developing and submitting the collection, or even time waiting to speak with agency personnel.\n",
"Consistent with the Paperwork Reduction Act of 1995 (PRA), 1 agencies must ensure that both the quantitative burden estimates and the narrative description supporting its information collection requests reflect the beginning-to-end experience of completing the information collection activity. Specifically, the burden faced by individuals applying for and maintaining eligibility for public benefits should also include:\n",
"1 Public Law 10413 (1995) (codified at 44 U.S.C. 35013520).\n",
"—Information and learning costs, which refer to the time, effort, money, and other resources that individuals need to expend to learn about the existence of a public service or benefit, rules governing their eligibility and application, certification, benefits maintenance, and post-award reporting or recertification processes.\n",
"—Compliance costs, which refer to the time, effort, money, and other resources that individuals need to expend to follow through with program application, certification, or recertification, including filling out necessary paperwork, waiting for correspondence from program agencies, planning for in-person meetings, and producing documentation to confirm their eligibility (for instance, records of household composition, income, or assets).\"\"\"\n",
"questions1 = \"\"\"<Question 1>\n",
"What is the Department of Housing and Urban Development seeking comments from the public about?\n",
"</Question 1>\n",
"<Answers 1>\n",
"1. Difficulties in obtaining access to HUD's housing program.\n",
"2. Potential changes in national zoning regulations for mixed-use housing.\n",
"3. Minimum notice for evictions of long-time tenants.\n",
"4. Insurance requirements for HUD-sponsored new construction in disaster-prone areas.\n",
"</Answers 1>\n",
"<Question 2>\n",
"When is the due date for public comment on the burdens placed on individuals applying for HUD's housing programs?\n",
"</Question 2>\n",
"<Answers 2>\n",
"1. August 14, 2023\n",
"2. September 9, 2023\n",
"3. January 2, 2024\n",
"4. July 31, 2023\n",
"</Answers 2>\n",
"<Question 3>\n",
"What do \"compliance costs\" refer to in the context of access to HUD's public benefit programs?\n",
"</Question 3>\n",
"<Answers 3>\n",
"1. Time, effort, money, and resources needed to behave in accordance with paperwork requirements.\n",
"2. Information and self-education required to familiarize oneself with the public services available.\n",
"3. Disclosure requirements for proving your organization has not shared information unduly with others.\n",
"4. Cognitive load, distress, anxiety, distrust, or loss of autonomy and dignity.\n",
"</Answers 3>\n",
"\"\"\"\n",
"questions2 = \"\"\"<Question 1>\n",
"What agency published the document on July 5 concerning Delegations and Designations?\n",
"</Question 1>\n",
"<Answers 1>\n",
"1. National Aeronautics and Space Administration \n",
"2. Federal Aviation Administration\n",
"3. Department of Defense\n",
"4. National Oceanic and Atmospheric Administration\n",
"</Answers 1>\n",
"<Question 2> \n",
"What is the purpose of the document published in the Federal Register by NASA?\n",
"</Question 2>\n",
"<Answers 2>\n",
"1. To correct an error in a previous document regarding Delegations and Designations\n",
"2. To announce a new policy regarding procurement of launch services \n",
"3. To solicit public comments on proposed changes to Rule 210.12(b)(2) regarding astronaut training requirements\n",
"4. To provide guidance on sharing satellite data with foreign partners\n",
"</Answers 2>\n",
"<Question 3>\n",
"What will NASA do if it receives adverse comments on the direct final rule published on July 5, 2023?\n",
"</Question 3>\n",
"<Answers 3>\n",
"1. Publish a timely withdrawal of the rule and this correction to the rule\n",
"2. Extend the comment period by 30 days\n",
"3. Schedule public hearings to discuss the comments and reaactions to the comments\n",
"4. Proceed with implementing the rule as planned\n",
"</Answers 3>\n",
"<Question 4> \n",
"What specifically needs to be corrected in the original NASA Federal Register document?\n",
"</Question 4>\n",
"<Answers 4>\n",
"1. The amendatory instruction for section 1204.501 paragraph (a)\n",
"2. The chapter heading for section 1107.323 paragraph (b) describing responsible disclosure of satellite data\n",
"3. The effective date of the delegations and designations, July 29, 2023\n",
"4. The point of contact for further information, Todd Richardson\n",
"</Answers 4>\"\"\"\n",
"\n",
"example_passage2 = \"\"\"NATIONAL AERONAUTICS AND SPACE ADMINISTRATION\n",
"14 CFR Part 1204\n",
"[NASA Document No: NASA23054; NASA Docket No: NASA20230003]\n",
"RIN 2700AE70\n",
"Delegations and Designations; Correction\n",
"AGENCY:\n",
"National Aeronautics and Space Administration.\n",
"ACTION:\n",
"Direct final rule; correction.\n",
"SUMMARY:\n",
"NASA published a document in the Federal Register on July 5, 2023, concerning Delegations and Designations. The document contained an error in amendatory instruction 2.a.\n",
"DATES:\n",
"This correction is effective September 5, 2023. If adverse comments are received on the direct final rule published at 88 FR 42870, NASA will publish a timely withdrawal of the rule and this correction to the rule in the Federal Register .\n",
"FOR FURTHER INFORMATION CONTACT:\n",
"Daniela Cruzado, 2022957589.\n",
"SUPPLEMENTARY INFORMATION:\n",
"Correction\n",
"In the Federal Register of July 5, 2023, in FR Doc. 202314042, published at 88 FR 42870, the following correction is made:\n",
"1204.501\n",
"[Amended]\n",
"1. On page 42871, in the first column, correct amendatory instruction 2.a. for §1204.501 to read: “a. In paragraph (a) introductory text, add the words “the Office of” before the word “Strategic” and remove the words “Integrated Asset Management” and add in their place the words “Facilities and Real Estate.”\n",
"Nanette Smith,\n",
"Team Lead, NASA Directives and Regulations.\n",
"[FR Doc. 202314794 Filed 71223; 8:45 am]\"\"\"\n",
"mc_qa3 = \"\"\"\\n\\nHuman: Hello Claude. Here is a section from the minutes of a government meeting. Please read it carefully and devise five factual questions about it, along with three wrong answers and the right answer for each. Put questions in <Question></Question> tags and answers in <Answer></Answer> tags, as in the examples.\n",
"\n",
"Here are two examples.\n",
"\n",
"<Example>\n",
"<Passage>\n",
"{example_passage1}\n",
"</Passage>\n",
"{questions1}\n",
"</Example>\n",
"<Example>\n",
"<Passage>\n",
"{example_passage2}\n",
"</Passage>\n",
"{questions2}\n",
"</Example>\n",
"\n",
"Now here is the passage I would like you to write questions for.\n",
"\n",
"<Passage>\n",
"{test_passage}\n",
"</Passage>\n",
"\n",
"Please write five factual questions about this document that can be answered with reference to it and without any outside knowledge. For each question, give three wrong answers and the right answer. Always put the correct answer first. Write 4 non-numerical questions and one numerical one. Make sure the wrong answers are highly detailed. Put the question inside <Question N></Question N> tags, and the answers inside <Answers N></Answers N> tags, where N is the index of the question, as in the examples. \n",
"\n",
"Guidelines:\n",
"Make sure that each question clearly and independently identifies the section/minutes/government meeting from which it derives; avoid terms like \"this document\", \"this passage\", \"this notice\" in favor of more specific descriptions. The goal is to future-proof the questions and answers in the event that they became divorced from their subject in the filing system.\n",
"Make the questions specific to their source text. Eschew generic questions about date of publication or name of agency. Instead, prefer questions that could not apply to notes produced by any other department/agency.\n",
"\n",
"Assistant:\n",
"\"\"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A key detail to pay attention to in the prompt above: the instruction to make the wrong answers \"highly detailed\". Without this instruction, the wrong answers tended to be relatively short and the right answer stood out on length alone. Put a pin in the instruction to \"Make sure that each question clearly and independently identifies the section/minutes/government meeting from which it derives\"; we'll come back to it later.\n",
"\n",
"Now, we'll make a dataframe with a column where we fill in the prompt template for each chunk, excluding the two chunks we used in the two-shot."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"86\n"
]
}
],
"source": [
"chunks = [c for c in chunks if example_passage1[20:80] not in c and example_passage2[20:80] not in c]\n",
"df = pd.DataFrame(\n",
" {'chunk': chunks, 'chunk_idx': range(len(chunks))}\n",
")\n",
"df['prompt'] = [mc_qa3.format(\n",
" example_passage1=example_passage1, example_passage2=example_passage2, questions1=questions1, questions2=questions2, test_passage=c\n",
" ) for c in chunks]\n",
"print(len(df))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this notebook, we'll use Claude Instant, which has a 100K context window just like Claude 2. You can also run it with Claude 2 to similar results. First, we design helper code to allow us to call the API in parallel if your org allows. If not, you can just set the CapacityLimiter to 1."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"def get_completion(client, prompt, max_tokens=3000, model='claude-instant-1.2', temperature=0):\n",
" return client.completions.create(\n",
" prompt=prompt, max_tokens_to_sample=max_tokens, model=model, temperature=temperature, stop_sequences=['\\n\\nHuman:', '\\n\\nAssistant:']\n",
" ).completion\n",
"\n",
"async def process_case(limiter, client, prompt, results, output_col_name='completion'):\n",
"\n",
" async with limiter:\n",
" completion = await trio.to_thread.run_sync(get_completion, client, prompt)\n",
"\n",
" results.append({'prompt': prompt, output_col_name: completion})\n",
"\n",
" if len(results) % 10 == 0:\n",
" print(f\"{len(results)} test cases processed\") # Optional \"progress bar\"\n",
"\n",
"async def get_completions_parallel(client, prompts, output_col_name='completion'):\n",
" async with trio.open_nursery() as nursery:\n",
" limiter = trio.CapacityLimiter(10) # Set this to the maximum concurrency allowed on your API key, which may just be 1.\n",
" results = []\n",
" for prompt in prompts:\n",
" nursery.start_soon(process_case, limiter, CLIENT, prompt, results, output_col_name)\n",
" return results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get questions and answers for every prompt\n",
"qas = await get_completions_parallel(CLIENT, df.prompt.values, output_col_name='qas')\n",
"df = df.merge(pd.DataFrame(qas), on='prompt')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll do some minor cleanup on the output:\n",
"- Remove the numbers for ease of reshuffling\n",
"- Extract the material between XML tags\n",
"- Make a separate row for every (question + answers) pair"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"def remove_numbered_bullets(answer):\n",
" return re.sub(r'^\\d+\\. ', '', answer)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"def extract_between_tags(tag: str, string: str, strip: bool = True, alt=True) -> list[str]:\n",
" # Helper function for parsing Claude's output\n",
" try:\n",
" ext_list = re.findall(f\"<{tag}\\s?>(.+?)</{tag}\\s?>\", string, re.DOTALL)\n",
" if strip:\n",
" ext_list = [e.strip() for e in ext_list]\n",
" if alt and not ext_list:\n",
" ext_list = re.findall(f\"<{tag}\\s?>(.+?)<{tag}\\s?>\", string, re.DOTALL)\n",
" if strip:\n",
" ext_list = [e.strip() for e in ext_list]\n",
" return ext_list\n",
" except:\n",
" return extract_between_tags(tag, string+'</' + tag + '>', strip, alt)\n",
"\n",
"def extract_answer(sample):\n",
" return extract_between_tags('Answer', sample)[0][0] if extract_between_tags(\n",
" 'Answer', sample) else extract_between_tags('Answer', sample + '</Answer>')[0][0] if extract_between_tags('Answer', sample + '</Answer>') else '_'\n",
"\n",
"def extract_qs_as(qas, n=5):\n",
" # Parse each of Claude's answers to the QA generation prompt into a question and a list of answers.\n",
" flattened_qas = []\n",
" for i in range(1, n + 1):\n",
" try:\n",
" question = extract_between_tags(f'Question {i}', qas)[0]\n",
" answers = extract_between_tags(f'Answers {i}', qas)[0]\n",
" except:\n",
" continue\n",
" flattened_qas.append({\n",
" 'question': question,\n",
" 'right_answer': remove_numbered_bullets(answers.split('\\n')[0].strip()),\n",
" 'wrong_answers': [remove_numbered_bullets(a.strip()) for a in answers.split('\\n')[1:]]\n",
" })\n",
" return flattened_qas"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We started out with 86 sections after devoting 2 of the original 88 to examples, yielding 86 * 5 = 430 questions."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"430\n"
]
}
],
"source": [
"qs_as = df['qas'].apply(extract_qs_as)\n",
"df['questions'] = [[q['question'] for q in qa] for qa in qs_as]\n",
"df['right_answers'] = [[q['right_answer'] for q in qa] for qa in qs_as]\n",
"df['wrong_answers'] = [[q['wrong_answers'] for q in qa] for qa in qs_as]\n",
"qa_df_rows = []\n",
"for i, row in df.iterrows():\n",
" for j, q in enumerate(row.questions):\n",
" qa_df_rows.append(row.to_dict() | {'question': q, 'right_answer': row['right_answers'][j], 'wrong_answers_q': row['wrong_answers'][j]})\n",
"qa_df = pd.DataFrame(qa_df_rows)\n",
"print(len(qa_df))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It's a good time to look at some of the questions and answers to make sure they look mostly reasonable."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"for i in range(28, 38):\n",
" for c in ['question', 'right_answer', 'wrong_answers_q']:\n",
" print(qa_df.iloc[i][c])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Establishing Baselines + Quality Control"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's create an answering prompt that tells Claude to read the material and answer a question about it."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"mc_answer_one_chunk_prompt = \"\"\"\\n\\nHuman: Please read the following government record closely and then answer the multiple choice question below.\n",
"<Government Record>\n",
"{chunk}\n",
"</Government Record>\n",
"Here is the question:\n",
"<Question>\n",
"{question}\n",
"</Question>\n",
"Based on the government record above, select the correct answer to the question from the list below and write the corresponding letter (A, B, C, or D) in <Answer></Answer> tags.\n",
"<Answers>\n",
"{answers}\n",
"</Answers>\n",
"\n",
"Assistant: Based on the government record provided, the correct answer to the question is:\n",
"\"\"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Randomize answers and track which one is correct in the 'correct_answer_letter' column."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"def randomize_answers(answers_list):\n",
" # Assign a letter A-D randomly to each answer\n",
" shuffled = np.random.permutation(answers_list[:4])\n",
" letters = ['A. ', 'B. ', 'C. ', 'D. ']\n",
" numbered = [letters[i] + answer for i, answer in enumerate(shuffled)]\n",
" s_numbered = sorted(numbered)\n",
" return s_numbered\n",
"\n",
"qa_df.apply(lambda row: randomize_answers(row['wrong_answers_q'] + [row['right_answer']]), axis=1)\n",
"\n",
"qa_df['randomized_answers'] = qa_df.apply(lambda row: randomize_answers(row['wrong_answers_q'] + [row['right_answer']]), axis=1)\n",
"\n",
"def pluck_answer_letter(qa_df_row):\n",
" # Find the letter of the correct answer\n",
" answer = qa_df_row['right_answer']\n",
" for ra in qa_df_row['randomized_answers']:\n",
" if ra[3:] == answer:\n",
" return ra[0]\n",
"\n",
"qa_df['correct_answer_letter'] = qa_df.apply(lambda row: pluck_answer_letter(row), axis=1)"
]
},
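{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, verify that pluck_answer_letter found a letter for every row; it returns None if the right answer string didn't exactly match any of the randomized answers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Every question should have a correct answer letter in A-D.\n",
"assert qa_df['correct_answer_letter'].isin(list('ABCD')).all()"
]
},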
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, we will test Claude's ability to answer the question when it sees the relevant chunk and only the relevant chunk."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"qa_df['qa_with_right_chunk_prompt'] = qa_df.apply(lambda row: mc_answer_one_chunk_prompt.format(\n",
" chunk=row['chunk'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
") # Populate prompt column"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"qa_answer_right_chunk = await get_completions_parallel(CLIENT, qa_df['qa_with_right_chunk_prompt'].values, output_col_name='qa_answer_right_chunk')"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"qa_df = qa_df.merge(pd.DataFrame(qa_answer_right_chunk), left_on='qa_with_right_chunk_prompt', right_on='prompt', suffixes=['', '_x']).drop(columns=['prompt_x'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's see how many it got right."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"def print_results(df, results):\n",
" cs, ics = 0, 0\n",
" j = 0\n",
" for i, row in df.iterrows():\n",
" if results[j] == row['correct_answer_letter']:\n",
" cs += 1\n",
" else:\n",
" ics += 1\n",
" j += 1\n",
" print(\"Results:\", cs, ics)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Results: 387 43\n"
]
}
],
"source": [
"qa_df['qa_answer_right_chunk'] = [extract_answer(sample) for sample in qa_df['qa_answer_right_chunk'].values]\n",
"print_results(qa_df, qa_df['qa_answer_right_chunk'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It got 90% of them right. Now, we'll see how Claude does when, instead of giving Claude the chunk with the answer, we give it some random other chunk. Poor Claude!"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/tmp/ipykernel_2454397/3504946734.py:3: SettingWithCopyWarning: \n",
"A value is trying to be set on a copy of a slice from a DataFrame\n",
"\n",
"See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
" qa_df['shifted_chunk'].iloc[:shift_val] = qa_df['chunk'].iloc[-1 * shift_val:].values\n"
]
}
],
"source": [
"shift_val = int(len(qa_df) / 2)\n",
"qa_df['shifted_chunk'] = qa_df['chunk'].shift(shift_val)\n",
"qa_df['shifted_chunk'].iloc[:shift_val] = qa_df['chunk'].iloc[-1 * shift_val:].values"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"qa_df['qa_with_shift_chunk_prompt'] = qa_df.apply(\n",
" lambda row: mc_answer_one_chunk_prompt.format(chunk=row['shifted_chunk'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"qa_answer_shift_chunk = await get_completions_parallel(CLIENT, qa_df['qa_with_shift_chunk_prompt'].values, output_col_name='qa_answer_shift_chunk')"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"qa_df = qa_df.merge(pd.DataFrame(qa_answer_shift_chunk), left_on='qa_with_shift_chunk_prompt', right_on='prompt', suffixes=['', '_x']).drop(columns=['prompt_x'])"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Results: 155 275\n"
]
}
],
"source": [
"qa_df['qa_answer_shift_chunk'] = [extract_answer(sample) for sample in qa_df['qa_answer_shift_chunk'].values]\n",
"print_results(qa_df, qa_df['qa_answer_shift_chunk'])"
]
},
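{
"cell_type": "markdown",
"metadata": {},
"source": [
"To express these counts as accuracies, we can recompute both numbers directly from the dataframe."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Accuracy with the relevant chunk vs. with a random (shifted) chunk.\n",
"right_chunk_acc = (qa_df.correct_answer_letter == qa_df.qa_answer_right_chunk).mean()\n",
"shift_chunk_acc = (qa_df.correct_answer_letter == qa_df.qa_answer_shift_chunk).mean()\n",
"print(f\"Accuracy with the relevant chunk: {right_chunk_acc:.0%}\")\n",
"print(f\"Accuracy with a random other chunk: {shift_chunk_acc:.0%}\")"
]
},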
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By sheer chance Claude would be expected to get 25% right. In practice, Claude got 36% right. Just as smart humans like us have the ability to guess above chance on a standardized test, so does Claude. Still a far cry from Claude's accuracy when given the right chunk, so the experiment is meaningful. We'll filter out the questions where Claude didn't get the correct answer even with the relevant chunk, as those are \"too difficult\" for testing the impact of long context."
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"387"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"too_hard_qa_df = qa_df[qa_df.correct_answer_letter != qa_df.qa_answer_right_chunk]\n",
"qa_df = qa_df[qa_df.correct_answer_letter == qa_df.qa_answer_right_chunk]\n",
"len(qa_df)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test Time"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now for the long context part! We will create long contexts by taking random chunks until we've made a nice big pile of tokens. We will create a different long context for each question. We try two different prompts here: one basic prompt, and one including a \"scratchpad\" where we ask Claude to pull relevant quotes from the document that may be helpful."
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"mc_answer_one_chunk_prompt = \"\"\"\\n\\nHuman: Please read the following government record closely and then answer the multiple choice question below.\n",
"<Government Record>\n",
"{chunk}\n",
"</Government Record>\n",
"Here is the question:\n",
"<Question>\n",
"{question}\n",
"</Question>\n",
"Based on the government record above, select the correct answer to the question from the list below and write the corresponding letter (A, B, C, or D) in <Answer></Answer> tags.\n",
"<Answers>\n",
"{answers}\n",
"</Answers>\n",
"\n",
"Assistant: Based on the government record provided, the correct answer to the question is:\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"mc_answer_one_chunk_prompt_scratchpad = \"\"\"\\n\\nHuman: Please read the following government record closely and then answer the multiple choice question below.\n",
"<Government Record>\n",
"{chunk}\n",
"</Government Record>\n",
"Now here is the question for you to answer:\n",
"<Question>\n",
"{question}\n",
"</Question>\n",
"Pull 2-3 relevant quotes from the record that pertain to the question and write them inside <scratchpad></scratchpad> tags. Then, select the correct answer to the question from the list below and write the corresponding letter (A, B, C, or D) in <Answer></Answer> tags.\n",
"<Answers>\n",
"{answers}\n",
"</Answers>\n",
"\n",
"Assistant:\n",
"\"\"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To create long contexts, we use a technique we call \"randomized collage\" -- start with the relevant chunk, concatenate random chunks until we reach the maximum length we want to test on, randomize the chunks, then move the relevant chunk to the desired location in the collage. We will experiment with putting the relevant chunk in the beginning, middle, and beginning of the context."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def create_long_context(chunk, other_chunks, main_chunk_idx, max_tokens=70000): # Can also use 95000.\n",
" doc_len = len(tokenizer.encode(chunk))\n",
" chunks_ctx = [chunk]\n",
" np.random.shuffle(other_chunks)\n",
" i = 0\n",
" # Add chunks until we exceed the context length\n",
" while doc_len < max_tokens:\n",
" chunks_ctx.append(other_chunks[i])\n",
" doc_len += len(tokenizer.encode(other_chunks[i]))\n",
" i += 1\n",
" # Put the relevant chunk in the desired position.\n",
" chunks_ctx = chunks_ctx[:-1]\n",
" chunks_ctx_ordered = chunks_ctx[1:main_chunk_idx] + [chunk] + chunks_ctx[main_chunk_idx:]\n",
" return '\\n\\n\\n\\n'.join(chunks_ctx_ordered)"
]
},
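{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check on the collage function: the relevant chunk should appear exactly once, and the collage should come in under the 70,000-token budget."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Build one sample collage with the relevant chunk in the middle and inspect it.\n",
"sample_chunk = chunks[0]\n",
"sample_ctx = create_long_context(sample_chunk, [c for c in chunks if c != sample_chunk], 20)\n",
"print('Occurrences of the relevant chunk:', sample_ctx.count(sample_chunk))\n",
"print('Collage length in tokens:', len(tokenizer.encode(sample_ctx)))"
]
},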
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"qa_df['long_context_end'] = qa_df.apply(lambda row: create_long_context(row['chunk'], [c for c in chunks if c != row['chunk']], len(chunks)), axis=1)\n",
"qa_df['long_context_middle'] = qa_df.apply(lambda row: create_long_context(row['chunk'], [c for c in chunks if c != row['chunk']], 20), axis=1)\n",
"qa_df['long_context_beginning'] = qa_df.apply(lambda row: create_long_context(row['chunk'], [c for c in chunks if c != row['chunk']], 0), axis=1)"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"# Create prompts for each question/context\n",
"qa_df['qa_long_ctx_prompt_end'] = qa_df.apply(lambda row: mc_answer_one_chunk_prompt.format(\n",
" chunk=row['long_context_end'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")\n",
"\n",
"qa_df['qa_long_ctx_prompt_middle'] = qa_df.apply(lambda row: mc_answer_one_chunk_prompt.format(\n",
" chunk=row['long_context_middle'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")\n",
"\n",
"qa_df['qa_long_ctx_prompt_beginning'] = qa_df.apply(lambda row: mc_answer_one_chunk_prompt.format(\n",
" chunk=row['long_context_beginning'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll do another round of sampling for beginning, middle, and end. \n",
"\n",
"*Note: Each of these cells takes a while to run.* If you're just following along for fun, you probably want to run this only on a few rows of qa_df."
]
},
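{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you do want a quicker run, the optional cell below sketches one way to downsample qa_df before the long-context experiments; leave it commented out to run on the full set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: uncomment to run the remaining experiments on a small random subset of questions.\n",
"# qa_df = qa_df.sample(n=20, random_state=0).reset_index(drop=True)"
]
},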
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"async def sample_from_prompt(exp_name, prompt_col):\n",
" global qa_df\n",
" answers = await get_completions_parallel(CLIENT, qa_df[prompt_col].values, output_col_name=exp_name)\n",
" qa_df = qa_df.merge(pd.DataFrame(answers), left_on=prompt_col, right_on='prompt', suffixes=['', '_x'], how='left').drop(columns=['prompt_x'])\n",
" qa_df[exp_name] = [extract_answer(sample) for sample in qa_df[exp_name].values]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We reuse this code block throughout to first sample each prompt and get Claude's answer to each question, then analyze the results\n",
"# ...and to do this for the relevant chunk being in the beginning, middle, or end.\n",
"# Note: for a table with results for each row, see the blog post on Anthropic's website.\n",
"# Note: if this block takes unacceptably long for you, you can downsample qa_df.\n",
"for position in ['beginning', 'middle', 'end']:\n",
" exp_name = 'qa_answers_long_ctx_' + position\n",
" prompt_col = 'qa_long_ctx_prompt_' + position\n",
" _ = await sample_from_prompt(exp_name, prompt_col)\n",
" print(\"Results for \" + exp_name)\n",
" print_results(qa_df, qa_df[exp_name].values)"
]
},
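{
"cell_type": "markdown",
"metadata": {},
"source": [
"For easier comparison across positions, the small convenience cell below converts the counts from print_results into accuracy fractions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Long-context accuracy by position of the relevant chunk.\n",
"for position in ['beginning', 'middle', 'end']:\n",
" acc = (qa_df['qa_answers_long_ctx_' + position] == qa_df['correct_answer_letter']).mean()\n",
" print(f\"{position}: {acc:.0%}\")"
]
},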
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll repeat the experiment, but with Claude having access to a scratchpad in which to put exact quotes from the context."
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"qa_df['qa_long_ctx_prompt_scratchpad_end'] = qa_df.apply(lambda row: mc_answer_one_chunk_prompt_scratchpad.format(\n",
" chunk=row['long_context_end'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")\n",
"\n",
"qa_df['qa_long_ctx_prompt_scratchpad_middle'] = qa_df.apply(lambda row: mc_answer_one_chunk_prompt_scratchpad.format(\n",
" chunk=row['long_context_middle'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")\n",
"\n",
"qa_df['qa_long_ctx_prompt_scratchpad_beginning'] = qa_df.apply(lambda row: mc_answer_one_chunk_prompt_scratchpad.format(\n",
" chunk=row['long_context_beginning'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for position in ['beginning', 'middle', 'end']:\n",
" exp_name = 'qa_answers_long_ctx_scratchpad_' + position\n",
" prompt_col = 'qa_long_ctx_prompt_scratchpad_' + position\n",
" _ = await sample_from_prompt(exp_name, prompt_col)\n",
" print(\"Results for \" + exp_name)\n",
" print_results(qa_df, qa_df[exp_name].values)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll try adding some examples of correctly-answered multiple-choice questions to the prompt. To start, we'll use some made-up examples. We'll test with and without a scratchpad."
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [],
"source": [
"mc_answer_lc_with_nongov_examples_prompt = \"\"\"\\n\\nHuman: Please read the following government record closely and then answer the multiple choice question below.\n",
"<Government Record>\n",
"{chunk}\n",
"</Government Record>\n",
"First, here are two example questions with correct answers.\n",
"<Question>\n",
"Who was the first president of the United States?\n",
"</Question>\n",
"<Answers>\n",
"A. Thomas Jefferson\n",
"B. George Washington\n",
"C. Abraham Lincoln\n",
"D. John Adams\n",
"</Answers>\n",
"Here, the correct answer is:\n",
"<Answer>\n",
"B. George Washington\n",
"</Answer>\n",
"<Question>\n",
"What is the boiling temperature of water, in degrees Fahrenheit?\n",
"</Question>\n",
"<Answers>\n",
"A. 200\n",
"B. 100\n",
"C. 287\n",
"D. 212\n",
"</Answers>\n",
"Here, the correct answer is:\n",
"<Answer>\n",
"D. 212\n",
"</Answer>\n",
"Now, based on the government record you've just read, please answer this question:\n",
"<Question>\n",
"{question}\n",
"</Question>\n",
"Select the correct answer to the question from the list below and write the corresponding letter (A, B, C, or D) in <Answer></Answer> tags.\n",
"<Answers>\n",
"{answers}\n",
"</Answers>\n",
"\n",
"Assistant:\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [],
"source": [
"mc_answer_lc_with_nongov_examples_prompt_scratchpad = \"\"\"\\n\\nHuman: Please read the following government record closely and then answer the multiple choice question below.\n",
"<Government Record>\n",
"{chunk}\n",
"</Government Record>\n",
"Based on the government record above, select the correct answer to the question from the list below and write the corresponding letter (A, B, C, or D) in <Answer></Answer> tags.\n",
"First, here are two example questions.\n",
"<Question>\n",
"Who was the first president of the United States?\n",
"</Question>\n",
"<Answers>\n",
"A. Thomas Jefferson\n",
"B. George Washington\n",
"C. Abraham Lincoln\n",
"D. John Adams\n",
"</Answers>\n",
"Here, the correct answer is:\n",
"<Answer>\n",
"B. George Washington\n",
"</Answer>\n",
"<Question>\n",
"What is the boiling temperature of water, in degrees Fahrenheit?\n",
"</Question>\n",
"<Answers>\n",
"A. 200\n",
"B. 100\n",
"C. 287\n",
"D. 212\n",
"</Answers>\n",
"Here, the correct answer is:\n",
"<Answer>\n",
"D. 212\n",
"</Answer>\n",
"Now, based on the government record you've just read, please answer this question:\n",
"<Question>\n",
"{question}\n",
"</Question>\n",
"Pull 2-3 relevant quotes from the record that pertain to the question and write them inside <scratchpad></scratchpad> tags. Then, select the correct answer to the question from the list below and write the corresponding letter (A, B, C, or D) in <Answer></Answer> tags.\n",
"<Answers>\n",
"{answers}\n",
"</Answers>\n",
"\n",
"Assistant:\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [],
"source": [
"# Create prompts, non-scratchpad version\n",
"qa_df['qa_long_ctx_prompt_nongov_examples_end'] = qa_df.apply(lambda row: mc_answer_lc_with_nongov_examples_prompt.format(\n",
" chunk=row['long_context_end'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")\n",
"\n",
"qa_df['qa_long_ctx_prompt_nongov_examples_middle'] = qa_df.apply(lambda row: mc_answer_lc_with_nongov_examples_prompt.format(\n",
" chunk=row['long_context_middle'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")\n",
"\n",
"qa_df['qa_long_ctx_prompt_nongov_examples_beginning'] = qa_df.apply(lambda row: mc_answer_lc_with_nongov_examples_prompt.format(\n",
" chunk=row['long_context_beginning'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get answers and print accuracy.\n",
"for position in ['beginning', 'middle', 'end']:\n",
" exp_name = 'qa_long_ctx_answers_nongov_examples_' + position\n",
" prompt_col = 'qa_long_ctx_prompt_nongov_examples_' + position\n",
" _ = await sample_from_prompt(exp_name, prompt_col)\n",
" print(\"Results for \" + exp_name)\n",
" print_results(qa_df, qa_df[exp_name].values)"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [],
"source": [
"# Create prompts, with-scratchpad version\n",
"qa_df['qa_long_ctx_prompt_nongov_examples_scratchpad_end'] = qa_df.apply(lambda row: mc_answer_lc_with_nongov_examples_prompt_scratchpad.format(\n",
" chunk=row['long_context_end'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")\n",
"\n",
"qa_df['qa_long_ctx_prompt_nongov_examples_scratchpad_middle'] = qa_df.apply(lambda row: mc_answer_lc_with_nongov_examples_prompt_scratchpad.format(\n",
" chunk=row['long_context_middle'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")\n",
"\n",
"qa_df['qa_long_ctx_prompt_nongov_examples_scratchpad_beginning'] = qa_df.apply(lambda row: mc_answer_lc_with_nongov_examples_prompt_scratchpad.format(\n",
" chunk=row['long_context_beginning'], question=row['question'], answers=row['randomized_answers']),\n",
" axis=1\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get answers and print accuracy.\n",
"for position in ['beginning', 'middle', 'end']:\n",
" exp_name = 'qa_long_ctx_answers_nongov_examples_scratchpad_' + position\n",
" prompt_col = 'qa_long_ctx_prompt_nongov_examples_scratchpad_' + position\n",
" _ = await sample_from_prompt(exp_name, prompt_col)\n",
" print(\"Results for \" + exp_name)\n",
" print_results(qa_df, qa_df[exp_name].values)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The results do not show much improvement if any. Can we do better by adding \"few-shot\" examples that are more germane to the task? \n",
"\n",
"The procedure for generating these few_shot examples is as follows. For each question, find its associated chunk, then choose random QAs from other chunks that aren't that chunk.\n",
"\n",
"We will experiment with using 2 and 5 examples, with and without a scratchpad."
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [],
"source": [
"# Function to generate a prompt using examples from the context.\n",
"def gen_mc_answer_lc_with_examples_prompt(num_examples): \n",
" examples_section = \"some example questions that refer to the government record above, along with correct answers.\"\n",
" for i in range(num_examples):\n",
" examples_section += \"\"\"\n",
"<Question>\n",
"{sample_question\"\"\" + str(i+1) + \"\"\"}\n",
"</Question>\n",
"<Answers>\n",
"{sample_answers\"\"\" + str(i+1) + \"\"\"}\n",
"</Answers>\n",
"Here, the correct answer is:\n",
"<Answer>\n",
"{correct_answer\"\"\" + str(i+1) + \"\"\"}\n",
"</Answer>\"\"\"\n",
" return \"\"\"\\n\\nHuman: Please read the following government record closely and then answer the multiple choice question below.\n",
"<Government Record>\n",
"{chunk}\n",
"</Government Record>\n",
"First, here are \"\"\" + examples_section + \"\"\"\n",
"Now here is the question for you to answer.\n",
"<Question>\n",
"{question}\n",
"</Question>\n",
"Select the correct answer to the question from the list below and write the corresponding letter (A, B, C, or D) in <Answer></Answer> tags.\n",
"<Answers>\n",
"{answers}\n",
"</Answers>\n",
"\n",
"Assistant:\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
"# Same as above, but includes scratchpad.\n",
"def gen_mc_answer_lc_with_examples_prompt_scratchpad(num_examples): \n",
" examples_section = \"some example questions that refer to the government record above, along with correct answers.\"\n",
" for i in range(num_examples):\n",
" examples_section += \"\"\"\n",
"<Question>\n",
"{sample_question\"\"\" + str(i+1) + \"\"\"}\n",
"</Question>\n",
"<Answers>\n",
"{sample_answers\"\"\" + str(i+1) + \"\"\"}\n",
"</Answers>\n",
"Here, the correct answer is:\n",
"<Answer>\n",
"{correct_answer\"\"\" + str(i+1) + \"\"\"}\n",
"</Answer>\"\"\"\n",
" return \"\"\"\\n\\nHuman: Please read the following government record closely and then answer the multiple choice question below.\n",
"<Government Record>\n",
"{chunk}\n",
"</Government Record>\n",
"First, here are \"\"\" + examples_section + \"\"\"\n",
"Now here is the question for you to answer.\n",
"<Question>\n",
"{question}\n",
"</Question>\n",
"Pull 2-3 relevant quotes from the record that pertain to the question and write them inside <scratchpad></scratchpad> tags. Then, select the correct answer to the question from the list below and write the corresponding letter (A, B, C, or D) in <Answer></Answer> tags.\n",
"<Answers>\n",
"{answers}\n",
"</Answers>\n",
"\n",
"Assistant:\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"# Get examples randomly\n",
"def grab_example_qas(long_context_row, long_context_col, qa_df, num_examples=2):\n",
" examples = []\n",
" for i, row in qa_df.sample(frac=1).iterrows(): # Randomize order of questions\n",
" if row['chunk'] in long_context_row[long_context_col] and row['chunk'] != long_context_row.chunk:\n",
" # Examples must pertain to chunks that were included in the collage, but must not be the exact question in question.\n",
" examples.append({\n",
" 'question': row.question, 'answers': row.randomized_answers, \n",
" 'correct_answer': [a for a in row.randomized_answers if row.right_answer in a][0][0]})\n",
" if len(examples) >= num_examples:\n",
" break\n",
" examples_numbered = {}\n",
" for i in range(num_examples):\n",
" examples_numbered['sample_question' + str(i+1)] = examples[i]['question']\n",
" examples_numbered['sample_answers' + str(i+1)] = examples[i]['answers']\n",
" examples_numbered['correct_answer' + str(i+1)] = examples[i]['correct_answer']\n",
" return examples_numbered"
]
},
{
"cell_type": "code",
"execution_count": 57,
"metadata": {},
"outputs": [],
"source": [
"def format_for_long_ctx_with_examples(row, chunk_col, long_context_col, qa_df, num_examples=2):\n",
" # Get examples QA pairs and plug them into the prompt\n",
" example_qas = grab_example_qas(long_context_row=row, long_context_col=long_context_col, qa_df=qa_df, num_examples=num_examples)\n",
" format_args = {}\n",
" for i in range(1, num_examples+1):\n",
" format_args['sample_question'+str(i)] = example_qas['sample_question'+str(i)] \n",
" format_args['sample_answers'+str(i)] = example_qas['sample_answers'+str(i)]\n",
" format_args['correct_answer'+str(i)] = example_qas['correct_answer'+str(i)]\n",
" return gen_mc_answer_lc_with_examples_prompt(num_examples).format(\n",
" chunk=row[chunk_col], question=row['question'], answers=row['randomized_answers'],\n",
" **format_args\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {},
"outputs": [],
"source": [
"def format_for_long_ctx_with_examples_scratchpad(row, chunk_col, long_context_col, qa_df, num_examples=2):\n",
" # Same as above, but with scratchpad.\n",
" example_qas = grab_example_qas(long_context_row=row, long_context_col=long_context_col, qa_df=qa_df, num_examples=num_examples)\n",
" format_args = {}\n",
" for i in range(1, num_examples+1):\n",
" # The examples are indexed from 1.\n",
" format_args['sample_question'+str(i)] = example_qas['sample_question'+str(i)] \n",
" format_args['sample_answers'+str(i)] = example_qas['sample_answers'+str(i)]\n",
" format_args['correct_answer'+str(i)] = example_qas['correct_answer'+str(i)]\n",
" return gen_mc_answer_lc_with_examples_prompt_scratchpad(num_examples).format(\n",
" chunk=row[chunk_col], question=row['question'], answers=row['randomized_answers'],\n",
" **format_args\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, we'll experiment with just 2 examples."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"num_examples = 2\n",
"# Generate prompts that include examples, have Claude answer questions, print accuracy numbers for (beginning, middle, end)\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_prompt_end'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples(row, 'long_context_end', 'qa_long_ctx_prompt_end', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_prompt_middle'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples(row, 'long_context_middle', 'qa_long_ctx_prompt_middle', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_prompt_beginning'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples(row, 'long_context_beginning', 'qa_long_ctx_prompt_beginning', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"for position in ['beginning', 'middle', 'end']:\n",
" exp_name = f'long_ctx_with_{num_examples}_examples_answers_' + position\n",
" prompt_col = f'long_ctx_with_{num_examples}_examples_prompt_' + position\n",
" _ = await sample_from_prompt(exp_name, prompt_col)\n",
" print(\"Results for \" + exp_name)\n",
" print_results(qa_df, qa_df[exp_name].values)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Definitely better! What if we increase the number of examples to 5?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"num_examples = 5\n",
"# Same as above, but with 5 examples\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_prompt_end'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples(row, 'long_context_end', 'qa_long_ctx_prompt_end', qa_df, num_examples=num_examples), axis=1)\n",
" lambda row: format_for_long_ctx_with_examples(row, 'long_context_end', 'qa_long_ctx_prompt_end', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_prompt_middle'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples(row, 'long_context_middle', 'qa_long_ctx_prompt_middle', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_prompt_beginning'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples(row, 'long_context_beginning', 'qa_long_ctx_prompt_beginning', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"for position in ['beginning', 'middle', 'end']:\n",
" exp_name = f'long_ctx_with_{num_examples}_examples_answers_' + position\n",
" prompt_col = f'long_ctx_with_{num_examples}_examples_prompt_' + position\n",
" _ = await sample_from_prompt(exp_name, prompt_col)\n",
" print(\"Results for \" + exp_name)\n",
" print_results(qa_df, qa_df[exp_name].values)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now trying 2 and 5 examples with scratchpad."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"num_examples = 2\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_scratchpad_prompt_end'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples_scratchpad(row, 'long_context_end', 'qa_long_ctx_prompt_end', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_scratchpad_prompt_middle'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples_scratchpad(row, 'long_context_middle', 'qa_long_ctx_prompt_middle', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_scratchpad_prompt_beginning'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples_scratchpad(row, 'long_context_beginning', 'qa_long_ctx_prompt_beginning', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"for position in ['beginning', 'middle', 'end']:\n",
" exp_name = f'long_ctx_with_{num_examples}_examples_scratchpad_answers_' + position\n",
" prompt_col = f'long_ctx_with_{num_examples}_examples_scratchpad_prompt_' + position\n",
" _ = await sample_from_prompt(exp_name, prompt_col)\n",
" print(\"Results for \" + exp_name)\n",
" print_results(qa_df, qa_df[exp_name].values)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"num_examples = 5\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_scratchpad_prompt_end'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples_scratchpad(row, 'long_context_end', 'qa_long_ctx_prompt_end', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_scratchpad_prompt_middle'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples_scratchpad(row, 'long_context_middle', 'qa_long_ctx_prompt_middle', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"qa_df[f'long_ctx_with_{num_examples}_examples_scratchpad_prompt_beginning'] = qa_df.apply(\n",
" lambda row: format_for_long_ctx_with_examples_scratchpad(row, 'long_context_beginning', 'qa_long_ctx_prompt_beginning', qa_df, num_examples=num_examples), axis=1)\n",
"\n",
"for position in ['beginning', 'middle', 'end']:\n",
" exp_name = f'long_ctx_with_{num_examples}_examples_scratchpad_answers_' + position\n",
" prompt_col = f'long_ctx_with_{num_examples}_examples_scratchpad_prompt_' + position\n",
" _ = await sample_from_prompt(exp_name, prompt_col)\n",
" print(\"Results for \" + exp_name)\n",
" print_results(qa_df, qa_df[exp_name].values)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Conclusion\n",
"\n",
"- Including a scratchpad always helps.\n",
"- Including random examples does not particularly help.\n",
"- Including contextual examples does help, and 5 is better than 2\n",
"\n",
"We hope you've enjoyed reading through this notebook and that the tips and code it contains are useful to you."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}