{
"cells": [
{
"cell_type": "markdown",
"id": "1972a352-a0e3-41b7-81dc-dd4ae2b890c3",
"metadata": {},
"source": [
"## Online Meeting"
]
},
{
"cell_type": "markdown",
"id": "fe3ed1ce-d38d-4048-9db6-9707b55dc642",
"metadata": {},
"source": [
"Using generative AI like ChatGPT in online meetings can greatly improve work efficiency (e.g., **Teams**). However, the context in such applications tends to be more conversational, with a high degree of redundancy and a large number of tokens(more than **40k**). By utilizing LLMLingua to compress prompts, we can significantly reduce the length of prompts, which in turn helps to reduce latency. This makes the AI more efficient and responsive in real-time communication scenarios like online meetings, enabling smoother interactions and better overall performance. We use meeting transcripts from the [**MeetingBank** dataset](https://huggingface.co/datasets/lytang/MeetingBank-transcript) as an example to demonstrate the capabilities of LLMLingua."
]
},
{
"cell_type": "markdown",
"id": "18422597-687a-43aa-a6ed-ce6244d0eb55",
"metadata": {},
"source": [
"### MeetingBank Dataset"
]
},
{
"cell_type": "markdown",
"id": "51a7accd-5ec2-4ed2-9582-1afdb441a998",
"metadata": {},
"source": [
"Next, we will demonstrate the use of LongLLMLingua on the **MeetingBank** dataset, which can achieve similar or even better performance with significantly fewer tokens. The online meeting scenario is quite similar to RAG, as it also suffers from the \"lost in the middle\" issue, where noise data at the beginning or end of the prompt interferes with LLMs extracting key information. This dataset closely resembles real-world online meeting scenarios, with prompt lengths exceeding **60k tokens at their longest. \n",
" \n",
"The original dataset can be found at https://huggingface.co/datasets/lytang/MeetingBank-transcript"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a970a901-11bd-43af-a8bc-7fb2fc6a1a07",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Defaulting to user installation because normal site-packages is not writeable\n",
"Requirement already satisfied: llmlingua in /home/hjiang/Code/github/LLMLingua (0.1.2)\n",
"Requirement already satisfied: datasets in /home/hjiang/.local/lib/python3.9/site-packages (2.14.4)\n",
"Requirement already satisfied: nltk in /home/hjiang/.local/lib/python3.9/site-packages (from llmlingua) (3.8.1)\n",
"Requirement already satisfied: numpy in /home/hjiang/.local/lib/python3.9/site-packages (from llmlingua) (1.23.5)\n",
"Requirement already satisfied: tiktoken in /home/hjiang/.local/lib/python3.9/site-packages (from llmlingua) (0.4.0)\n",
"Requirement already satisfied: torch in /home/hjiang/.local/lib/python3.9/site-packages (from llmlingua) (1.13.1+cu116)\n",
"Requirement already satisfied: transformers>=4.26.0 in /home/hjiang/.local/lib/python3.9/site-packages (from llmlingua) (4.34.1)\n",
"Requirement already satisfied: pyarrow>=8.0.0 in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (11.0.0)\n",
"Requirement already satisfied: dill<0.3.8,>=0.3.0 in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (0.3.7)\n",
"Requirement already satisfied: pandas in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (2.0.3)\n",
"Requirement already satisfied: requests>=2.19.0 in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (2.29.0)\n",
"Requirement already satisfied: tqdm>=4.62.1 in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (4.65.0)\n",
"Requirement already satisfied: xxhash in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (3.3.0)\n",
"Requirement already satisfied: multiprocess in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (0.70.15)\n",
"Requirement already satisfied: fsspec[http]>=2021.11.1 in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (2023.6.0)\n",
"Requirement already satisfied: aiohttp in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (3.8.5)\n",
"Requirement already satisfied: huggingface-hub<1.0.0,>=0.14.0 in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (0.16.4)\n",
"Requirement already satisfied: packaging in /home/hjiang/.local/lib/python3.9/site-packages (from datasets) (23.0)\n",
"Requirement already satisfied: pyyaml>=5.1 in /usr/lib/python3/dist-packages (from datasets) (5.3.1)\n",
"Requirement already satisfied: attrs>=17.3.0 in /home/hjiang/.local/lib/python3.9/site-packages (from aiohttp->datasets) (23.1.0)\n",
"Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /home/hjiang/.local/lib/python3.9/site-packages (from aiohttp->datasets) (3.2.0)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /home/hjiang/.local/lib/python3.9/site-packages (from aiohttp->datasets) (6.0.4)\n",
"Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /home/hjiang/.local/lib/python3.9/site-packages (from aiohttp->datasets) (4.0.2)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /home/hjiang/.local/lib/python3.9/site-packages (from aiohttp->datasets) (1.9.2)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /home/hjiang/.local/lib/python3.9/site-packages (from aiohttp->datasets) (1.4.0)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /home/hjiang/.local/lib/python3.9/site-packages (from aiohttp->datasets) (1.3.1)\n",
"Requirement already satisfied: filelock in /home/hjiang/.local/lib/python3.9/site-packages (from huggingface-hub<1.0.0,>=0.14.0->datasets) (3.12.2)\n",
"Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/hjiang/.local/lib/python3.9/site-packages (from huggingface-hub<1.0.0,>=0.14.0->datasets) (4.7.1)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3/dist-packages (from requests>=2.19.0->datasets) (2.8)\n",
"Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/hjiang/.local/lib/python3.9/site-packages (from requests>=2.19.0->datasets) (1.26.16)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests>=2.19.0->datasets) (2019.11.28)\n",
"Requirement already satisfied: regex!=2019.12.17 in /home/hjiang/.local/lib/python3.9/site-packages (from transformers>=4.26.0->llmlingua) (2023.6.3)\n",
"Requirement already satisfied: tokenizers<0.15,>=0.14 in /home/hjiang/.local/lib/python3.9/site-packages (from transformers>=4.26.0->llmlingua) (0.14.1)\n",
"Requirement already satisfied: safetensors>=0.3.1 in /home/hjiang/.local/lib/python3.9/site-packages (from transformers>=4.26.0->llmlingua) (0.3.1)\n",
"Requirement already satisfied: click in /home/hjiang/.local/lib/python3.9/site-packages (from nltk->llmlingua) (8.1.6)\n",
"Requirement already satisfied: joblib in /home/hjiang/.local/lib/python3.9/site-packages (from nltk->llmlingua) (1.3.1)\n",
"Requirement already satisfied: python-dateutil>=2.8.2 in /home/hjiang/.local/lib/python3.9/site-packages (from pandas->datasets) (2.8.2)\n",
"Requirement already satisfied: pytz>=2020.1 in /home/hjiang/.local/lib/python3.9/site-packages (from pandas->datasets) (2023.3)\n",
"Requirement already satisfied: tzdata>=2022.1 in /home/hjiang/.local/lib/python3.9/site-packages (from pandas->datasets) (2023.3)\n",
"Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil>=2.8.2->pandas->datasets) (1.14.0)\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpython3.9 -m pip install --upgrade pip\u001b[0m\n"
]
}
],
"source": [
"# Install dependency.\n",
"!pip install llmlingua datasets"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7bbb89f7-9f0e-4998-97a6-da033351ef1a",
"metadata": {},
"outputs": [],
"source": [
"# Download the original prompt and dataset\n",
"from datasets import load_dataset\n",
"dataset = load_dataset(\"lytang/MeetingBank-transcript\")[\"train\"]"
]
},
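{
"cell_type": "markdown",
"id": "3a9d2c41-56f1-4a2e-9c20-8f1d2b7c5e10",
"metadata": {},
"source": [
"A quick look at the downloaded split confirms its features and the number of transcripts before we pick an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4e8f7d2-1c3a-4e5f-9a6b-7c8d9e0f1a2b",
"metadata": {},
"outputs": [],
"source": [
"# Inspect the loaded split: its features and row count\n",
"print(dataset)"
]
},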
{
"cell_type": "code",
"execution_count": 8,
"id": "cbbbf3de-a9d6-46cf-afab-dcb72a6154ec",
"metadata": {},
"outputs": [],
"source": [
"# Using the OAI\n",
"import openai\n",
"openai.api_key = \"\""
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "46506810-8565-43da-984b-d862c56b49c2",
"metadata": {},
"outputs": [],
"source": [
"# or Using the AOAI\n",
"import openai\n",
"openai.api_key = \"\"\n",
"openai.api_base = \"https://xxxx.openai.azure.com/\"\n",
"openai.api_type = 'azure'\n",
"openai.api_version = '2023-05-15'"
]
},
{
"cell_type": "markdown",
"id": "f8676ffa-5117-44dc-9742-bb9ab1d56e0c",
"metadata": {},
"source": [
"### Setup Data"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "cc17bbc5-86cb-4d15-a730-955af85a10b2",
"metadata": {},
"outputs": [],
"source": [
"# select an example from MeetingBank\n",
"contexts = dataset[1][\"source\"]"
]
},
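{
"cell_type": "markdown",
"id": "e5f1a2b3-c4d5-46e7-88f9-0a1b2c3d4e5f",
"metadata": {},
"source": [
"Before compressing anything, it is worth checking how long this transcript actually is. The cell below is a small sketch that counts tokens with `tiktoken` (already installed as an LLMLingua dependency); using the `gpt-4` encoding here is an assumption, and any `cl100k_base`-compatible encoding gives the same count."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f6a7b8c9-d0e1-42f3-a4b5-6c7d8e9f0a1b",
"metadata": {},
"outputs": [],
"source": [
"# Count tokens in the selected transcript (sketch; the encoding choice is an assumption)\n",
"import tiktoken\n",
"\n",
"encoding = tiktoken.encoding_for_model(\"gpt-4\")\n",
"print(f\"The selected transcript contains {len(encoding.encode(contexts)):,} tokens.\")"
]
},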
{
"cell_type": "markdown",
"id": "ba1c6d52-dc87-434c-a41c-0bbc8a286504",
"metadata": {},
"source": [
"### Q1"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "f8a54b7f-5bd4-4d4f-9249-b900bd703884",
"metadata": {},
"outputs": [],
"source": [
"question = \"Question: How much did the crime rate increase last year?\\nAnswer:\"\n",
"reference = \"5.4%\""
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "3d441f10-c5c7-4d45-b09a-717e536b36bf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"id\": \"chatcmpl-8FNC3cZSVtzUCxOVhB04RxnEUVrf8\",\n",
" \"object\": \"chat.completion\",\n",
" \"created\": 1698674767,\n",
" \"model\": \"gpt-4-32k\",\n",
" \"choices\": [\n",
" {\n",
" \"index\": 0,\n",
" \"finish_reason\": \"stop\",\n",
" \"message\": {\n",
" \"role\": \"assistant\",\n",
" \"content\": \"The crime rate increased by 5.4% year to date.\"\n",
" }\n",
" }\n",
" ],\n",
" \"usage\": {\n",
" \"prompt_tokens\": 30096,\n",
" \"completion_tokens\": 14,\n",
" \"total_tokens\": 30110\n",
" }\n",
"}\n"
]
}
],
"source": [
"# The response from original prompt, using GPT-4-32k\n",
"import json\n",
"prompt = \"\\n\\n\".join([contexts, question])\n",
"\n",
"message = [\n",
" {\"role\": \"user\", \"content\": prompt},\n",
"]\n",
"\n",
"request_data = {\n",
" \"messages\": message,\n",
" \"max_tokens\": 100,\n",
" \"temperature\": 0,\n",
" \"top_p\": 1,\n",
" \"n\": 1,\n",
" \"stream\": False,\n",
"}\n",
"response = openai.ChatCompletion.create(\n",
" \"gpt-4-32k\",\n",
" **request_data,\n",
")\n",
"print(json.dumps(response, indent=4))"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "7859d7d7-a6cd-499a-a780-643ba8e0b832",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "ba191aa3d6554337a49e9b0896fc73e6",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/hjiang/.local/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.\n",
" warnings.warn(\n",
"/home/hjiang/.local/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.\n",
" warnings.warn(\n"
]
}
],
"source": [
"# Setup LLMLingua\n",
"from llmlingua import PromptCompressor\n",
"llm_lingua = PromptCompressor()"
]
},
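{
"cell_type": "markdown",
"id": "0b1c2d3e-4f5a-46b7-98c9-d0e1f2a3b4c5",
"metadata": {},
"source": [
"Calling `PromptCompressor()` with no arguments loads the default small language model that scores token importance. If you need to pin the scoring model or the device explicitly, the constructor also accepts `model_name` and `device_map`; the commented sketch below is based on the llmlingua 0.1.x API, and the values shown are its documented defaults."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1c2d3e4f-5a6b-47c8-a9d0-e1f2a3b4c5d6",
"metadata": {},
"outputs": [],
"source": [
"# Optional (sketch, llmlingua 0.1.x API): configure the scoring model and device explicitly\n",
"# llm_lingua = PromptCompressor(\n",
"#     model_name=\"NousResearch/Llama-2-7b-hf\",  # default scoring model\n",
"#     device_map=\"cuda\",  # use \"cpu\" if no GPU is available (much slower)\n",
"# )"
]
},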
{
"cell_type": "code",
"execution_count": 17,
"id": "0f850fc9-2f7f-42f8-8d8f-5a64e39c1a8b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"compressed_prompt\": \"aker3., the.\\n\\naker : Thank you Counciloman Yes,'s. 5.4% increase to date That after this a 1.4 increase in crime in. 1 From Police. Let the police. : day. Our department will continue to evolve and move forward, building on our existing strengths and taking advantage of opportunities for growth and renewal. Our priorities around crime and homelessness, employee and community wellness and open communication will help guide us further into 21st century policing, while also supporting the shared responsibility of public safety in the city of Long Beach. Thank you. Myself and Bureau Chief Josie Murray stand ready to answer any questions they can.\\n\\nQuestion: How much did the crime rate increase last year?\\nAnswer:\",\n",
" \"origin_tokens\": 30089,\n",
" \"compressed_tokens\": 149,\n",
" \"ratio\": \"201.9x\",\n",
" \"saving\": \", Saving $1.8 in GPT-4.\"\n",
"}\n",
"Response: {\n",
" \"id\": \"chatcmpl-8FNIg6iVYBfI1354r72xYE9X4tDDE\",\n",
" \"object\": \"chat.completion\",\n",
" \"created\": 1698675178,\n",
" \"model\": \"gpt-4-32k\",\n",
" \"choices\": [\n",
" {\n",
" \"index\": 0,\n",
" \"finish_reason\": \"stop\",\n",
" \"message\": {\n",
" \"role\": \"assistant\",\n",
" \"content\": \"The crime rate increased by 5.4% last year.\"\n",
" }\n",
" }\n",
" ],\n",
" \"usage\": {\n",
" \"prompt_tokens\": 156,\n",
" \"completion_tokens\": 13,\n",
" \"total_tokens\": 169\n",
" }\n",
"}\n"
]
}
],
"source": [
"# 200 Compression\n",
"compressed_prompt = llm_lingua.compress_prompt(\n",
" contexts.split(\"\\n\"),\n",
" instruction=\"\",\n",
" question=question,\n",
" target_token=200,\n",
" condition_compare=True,\n",
" condition_in_question='after',\n",
" rank_method='longllmlingua',\n",
" use_sentence_level_filter=False,\n",
" context_budget=\"+100\",\n",
" dynamic_context_compression_ratio=0.4, # enable dynamic_context_compression_ratio\n",
" reorder_context=\"sort\"\n",
")\n",
"message = [\n",
" {\"role\": \"user\", \"content\": compressed_prompt[\"compressed_prompt\"]},\n",
"]\n",
"\n",
"request_data = {\n",
" \"messages\": message,\n",
" \"max_tokens\": 100,\n",
" \"temperature\": 0,\n",
" \"top_p\": 1,\n",
" \"n\": 1,\n",
" \"stream\": False,\n",
"}\n",
"response = openai.ChatCompletion.create(\n",
" \"gpt-4-32k\",\n",
" **request_data,\n",
")\n",
"\n",
"print(json.dumps(compressed_prompt, indent=4))\n",
"print(\"Response:\", response)"
]
},
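{
"cell_type": "markdown",
"id": "2d3e4f5a-6b7c-48d9-b0e1-f2a3b4c5d6e7",
"metadata": {},
"source": [
"As a quick sanity check, the reference answer defined above should appear in the response generated from the compressed prompt; the sketch below does a simple substring match."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3e4f5a6b-7c8d-49e0-81f2-a3b4c5d6e7f8",
"metadata": {},
"outputs": [],
"source": [
"# Sanity check (sketch): the reference answer should appear in the model's reply\n",
"answer = response[\"choices\"][0][\"message\"][\"content\"]\n",
"print(\"Reference found in response:\", reference in answer)"
]
},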
{
"cell_type": "markdown",
"id": "9aa90492-8ad1-4a89-85c5-26b8472f1ff0",
"metadata": {},
"source": [
"### Q2"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "fa638dec-c9ec-4dce-9dac-d768145de714",
"metadata": {},
"outputs": [],
"source": [
"question = \"Question: What is the homicide clearance rate?\\nAnswer:\"\n",
"reference = \"77%\""
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "5f61a186-6641-4118-ad04-5245a53b6d79",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"id\": \"chatcmpl-8FNJi0fTohhSuLHTF13uWBBcslAtx\",\n",
" \"object\": \"chat.completion\",\n",
" \"created\": 1698675242,\n",
" \"model\": \"gpt-4-32k\",\n",
" \"choices\": [\n",
" {\n",
" \"index\": 0,\n",
" \"finish_reason\": \"stop\",\n",
" \"message\": {\n",
" \"role\": \"assistant\",\n",
" \"content\": \"The homicide clearance rate for the Long Beach Fire Department is 77%.\"\n",
" }\n",
" }\n",
" ],\n",
" \"usage\": {\n",
" \"prompt_tokens\": 30093,\n",
" \"completion_tokens\": 14,\n",
" \"total_tokens\": 30107\n",
" }\n",
"}\n"
]
}
],
"source": [
"# The response from original prompt, using GPT-4-32k\n",
"import json\n",
"prompt = \"\\n\\n\".join([contexts, question])\n",
"\n",
"message = [\n",
" {\"role\": \"user\", \"content\": prompt},\n",
"]\n",
"\n",
"request_data = {\n",
" \"messages\": message,\n",
" \"max_tokens\": 100,\n",
" \"temperature\": 0,\n",
" \"top_p\": 1,\n",
" \"n\": 1,\n",
" \"stream\": False,\n",
"}\n",
"response = openai.ChatCompletion.create(\n",
" \"gpt-4-32k\",\n",
" **request_data,\n",
")\n",
"print(json.dumps(response, indent=4))"
]
},
{
"cell_type": "code",
"execution_count": 56,
"id": "4328e6c4-63f5-4a24-a459-baaa309f9825",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"compressed_prompt\": \"\\n\\nEvery we discuss a variety of public we provide, emergency response and calls for service criminal investig, and advoc, safarding while protect infrastr and and threats.\\n you see we experiencing, exempl how our are working.\\n51% these arrests forb by law from possessing firear.\\n this alone have seized firear includes a 23% increase in the recovery manufactured firearms knownimps or ghost guns.And while every homic tragic, we not dissuaded and continue to toward bringing justice to the families and loved ones of victimsAmong accomplish,'ll see we have a homicide clearance rate of 77%.\\nThere are many factors that contribute to our effectiveness in this area, including a rapid reaction and response by patrol officers, immediate follow up by our Special Investigations Division and the excellent investigative efforts of our homicide detectives.\\nTo help increase our communication, transparency and engagement, we've developed a community advisory committee to help inform and shape department policies, and we engage our neighborhoods through division, specific events and commander forums.\\n\\nQuestion: What is the homicide clearance rate?\\nAnswer:\",\n",
" \"origin_tokens\": 30086,\n",
" \"compressed_tokens\": 211,\n",
" \"ratio\": \"142.6x\",\n",
" \"saving\": \", Saving $1.8 in GPT-4.\"\n",
"}\n",
"Response: {\n",
" \"id\": \"chatcmpl-8FNxaQUPnfByyAmNtRld4FSIVUMtW\",\n",
" \"object\": \"chat.completion\",\n",
" \"created\": 1698677714,\n",
" \"model\": \"gpt-4-32k\",\n",
" \"choices\": [\n",
" {\n",
" \"index\": 0,\n",
" \"finish_reason\": \"stop\",\n",
" \"message\": {\n",
" \"role\": \"assistant\",\n",
" \"content\": \"The homicide clearance rate is 77%.\"\n",
" }\n",
" }\n",
" ],\n",
" \"usage\": {\n",
" \"prompt_tokens\": 218,\n",
" \"completion_tokens\": 8,\n",
" \"total_tokens\": 226\n",
" }\n",
"}\n"
]
}
],
"source": [
"# 200 Compression\n",
"compressed_prompt = llm_lingua.compress_prompt(\n",
" contexts.split(\"\\n\"),\n",
" instruction=\"\",\n",
" question=question,\n",
" target_token=200,\n",
" condition_compare=True,\n",
" condition_in_question='after',\n",
" rank_method='longllmlingua',\n",
" use_sentence_level_filter=True,\n",
" context_budget=\"+100\",\n",
" reorder_context=\"sort\"\n",
")\n",
"message = [\n",
" {\"role\": \"user\", \"content\": compressed_prompt[\"compressed_prompt\"]},\n",
"]\n",
"\n",
"request_data = {\n",
" \"messages\": message,\n",
" \"max_tokens\": 100,\n",
" \"temperature\": 0,\n",
" \"top_p\": 1,\n",
" \"n\": 1,\n",
" \"stream\": False,\n",
"}\n",
"response = openai.ChatCompletion.create(\n",
" \"gpt-4-32k\",\n",
" **request_data,\n",
")\n",
"\n",
"print(json.dumps(compressed_prompt, indent=4))\n",
"print(\"Response:\", response)"
]
},
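{
"cell_type": "markdown",
"id": "4f5a6b7c-8d9e-40f1-a2b3-c4d5e6f7a8b9",
"metadata": {},
"source": [
"The `saving` field reported above is LLMLingua's own estimate. You can reproduce a rough figure from the token counts; the sketch below assumes gpt-4-32k's published prompt price of $0.06 per 1K tokens, which may change."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5a6b7c8d-9e0f-41a2-b3c4-d5e6f7a8b9c0",
"metadata": {},
"outputs": [],
"source": [
"# Rough prompt-cost saving (sketch; $0.06 per 1K gpt-4-32k prompt tokens is an assumption)\n",
"PRICE_PER_1K_PROMPT_TOKENS = 0.06\n",
"saved_tokens = compressed_prompt[\"origin_tokens\"] - compressed_prompt[\"compressed_tokens\"]\n",
"print(f\"Approximate saving: ${saved_tokens / 1000 * PRICE_PER_1K_PROMPT_TOKENS:.2f} per call\")"
]
},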
{
"cell_type": "markdown",
"id": "a085d6b2-2642-4ed6-a92f-1a1dc104b954",
"metadata": {},
"source": [
"### Q3"
]
},
{
"cell_type": "code",
"execution_count": 57,
"id": "5217d1f8-009c-4665-aed9-ae4889358070",
"metadata": {},
"outputs": [],
"source": [
"question = \"Question: what are the arrangements the Police Department will make this year?\"\n",
"reference = \"enhancing community engagement and internal communication models, building a culture of accountability and transparency, and prioritizing recruitment and retention.\""
]
},
{
"cell_type": "code",
"execution_count": 58,
"id": "9a78c641-b102-4cd9-bdec-e0fccdd8e19e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"id\": \"chatcmpl-8FNz2YdueWIGpFTnRAM0ZbWKPNWIY\",\n",
" \"object\": \"chat.completion\",\n",
" \"created\": 1698677804,\n",
" \"model\": \"gpt-4-32k\",\n",
" \"choices\": [\n",
" {\n",
" \"index\": 0,\n",
" \"finish_reason\": \"stop\",\n",
" \"message\": {\n",
" \"role\": \"assistant\",\n",
" \"content\": \"The Police Department plans to focus on addressing the steady increase in call volume and maintaining or improving response times to fires, emergency medical and other emergency responses. They will also prioritize firefighter safety and behavioral health, increase diversity in all ranks of the department through recruitment and training opportunities, and maintain staffing and resources to meet service demands of citywide growth. The department will also begin preparing for the upcoming emergency service demands brought on by the 2028 Summer Olympic Games. They plan to replace front line vehicles and improve compliance with mandated fire prevention inspections. The department also plans to implement alternate destination and telemedicine pilot programs.\"\n",
" }\n",
" }\n",
" ],\n",
" \"usage\": {\n",
" \"prompt_tokens\": 30096,\n",
" \"completion_tokens\": 121,\n",
" \"total_tokens\": 30217\n",
" }\n",
"}\n"
]
}
],
"source": [
"# The response from original prompt, using GPT-4-32k\n",
"import json\n",
"prompt = \"\\n\\n\".join([contexts, question])\n",
"\n",
"message = [\n",
" {\"role\": \"user\", \"content\": prompt},\n",
"]\n",
"\n",
"request_data = {\n",
" \"messages\": message,\n",
" \"max_tokens\": 500,\n",
" \"temperature\": 0,\n",
" \"top_p\": 1,\n",
" \"n\": 1,\n",
" \"stream\": False,\n",
"}\n",
"response = openai.ChatCompletion.create(\n",
" \"gpt-4-32k\",\n",
" **request_data,\n",
")\n",
"print(json.dumps(response, indent=4))"
]
},
{
"cell_type": "code",
"execution_count": 61,
"id": "26065113-7c23-4118-812e-8fff506ba749",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"compressed_prompt\": \"Speaker3: Thank., the\\n\\nSpe Thank. Next keep the\\n Thank. Councilwoman Yes,'s5% year date. is after this year with a74%.: Mr. Mods,able Mayor and of the' very be presenting the Polices3 budget. for.ented. police and have experienced increased and de, they. their work purpose they are needed. to leave or, vast majority have toers with Department I believe because not typicalre worldized to maintain andation mental, qualityach programs as are, and to' mistakesre, is. of the or, the officers to here each a Theyageance In of, rising crime un police, theirment andre everyone our Every year we a of safety services,gency and, victim and ouringucture and resource, should also we ourhips and like with the Cityation,uma Program to toaborating with the Department Communic to responses. joining department as part the many other reason're we. Here volumere which. Year10 calls nearly60. Although' had to make modifications through the years to one or the of about 5 Like of remain andies adapted changes Our resulted notableend in And2,, with74% seen that numberink.% date accessms tocing have worked to illegal they this.ve had andve3ests forarms% over. peopleidden. And this, officers have which3% in the recovery of personally p guns Mov second of this'll, violence and other crime. we isic, we dis we to justice onesims .,ll we%. is There our in area, rapid by, up our excellent efforts ofives. In trust, more to becomees We understand this, these factors achieve the seeing' highlight work', racing exhib, City supported efforts and street to assist this extremely and improve we conductedcement we pending street haveve also continued supporting city Our andcement of quality mental evaluation Angeles our,. Year,50 found. our priority supported'reousides employee over the in new ouration and employee programsuma. intersection between community, such as communityison officers on critical, andating and to provide support tra It Officerging that biggestes facing is, theing in, been critical this constantly to futureed in like the focus increasing representation to the our next continued with, new P F T is and at par increase our shape specific of through, Project the for our media betterage public bying and effectively .ve increased a daily crimeter our as our health and we that Department onities .Y3 budget proposal department and communication. Bu're manyative the police budget our res the and to current and more offill this vision. are. our CRC Chief thatance the police. Kerry the divisions and Communityryize To we propose0 per in toness. to specific single officer life large to. The Departmentes Services we to fund. youngian while oursre departmentational. Will the while the will on or. and department, proposed re and the., O to nine F transferred other This inCP to with were works. communication the which The division willations In ourre currentlying.6 weate class2 new support and our the new part4 camera We our for additional forre to receive An new and Theorous and mostatic uned anti andperforce our work relationship and equitable systems in all areas of the department. We'll continue exploring ways to leverage new technology and improve operational efficiencies to help modernize our services or seek new grant opportunities that enhance employee and community training and develop partnerships with research and educational institutions. 
In closing, I'd like to again express how honored I am to lead the officers and professional staff of the Long Beach Police Department and how appreciative I am for the dedicated service they provide each day. Our department will continue to evolve and move forward, building on our existing strengths and taking advantage of opportunities for growth and renewal. Our priorities around crime and homelessness, employee and community wellness and open communication will help guide us further into 21st century policing, while also supporting the shared responsibility of public safety in the city of Long Beach. Thank you. Myself and Bureau Chief Josie Murray stand ready to answer any questions they can.\\n\\nQuestion: what are the arrangements the Police Department will make this year?\",\n",
" \"origin_tokens\": 30089,\n",
" \"compressed_tokens\": 864,\n",
" \"ratio\": \"34.8x\",\n",
" \"saving\": \", Saving $1.8 in GPT-4.\"\n",
"}\n",
"Response: {\n",
" \"id\": \"chatcmpl-8FO8w5VmNG5ujiTnL8gqVpQKbqzl8\",\n",
" \"object\": \"chat.completion\",\n",
" \"created\": 1698678418,\n",
" \"model\": \"gpt-4-32k\",\n",
" \"choices\": [\n",
" {\n",
" \"index\": 0,\n",
" \"finish_reason\": \"stop\",\n",
" \"message\": {\n",
" \"role\": \"assistant\",\n",
" \"content\": \"The Police Department plans to present a budget that addresses increased demands and challenges. They will focus on maintaining and improving mental health programs, crime prevention, and community engagement. They will also work on improving their response to rising crime rates and ensuring the safety of the community. They plan to collaborate with the City Trauma Program and the Department of Communication. They will also focus on the recovery of illegal firearms and addressing violence and other crimes. They will continue supporting city-wide mental health evaluation programs and increase representation within the department. They will also focus on leveraging new technology and improving operational efficiencies. They will seek new grant opportunities that enhance employee and community training and develop partnerships with research and educational institutions. They also plan to address issues around crime and homelessness, employee and community wellness, and open communication.\"\n",
" }\n",
" }\n",
" ],\n",
" \"usage\": {\n",
" \"prompt_tokens\": 871,\n",
" \"completion_tokens\": 157,\n",
" \"total_tokens\": 1028\n",
" }\n",
"}\n"
]
}
],
"source": [
"# 2000 Compression\n",
"compressed_prompt = llm_lingua.compress_prompt(\n",
" contexts.split(\"\\n\"),\n",
" instruction=\"\",\n",
" question=question,\n",
" target_token=2000,\n",
" condition_compare=True,\n",
" condition_in_question='after',\n",
" rank_method='longllmlingua',\n",
" use_sentence_level_filter=False,\n",
" context_budget=\"+100\",\n",
" dynamic_context_compression_ratio=0.4, # enable dynamic_context_compression_ratio\n",
" reorder_context=\"sort\"\n",
")\n",
"message = [\n",
" {\"role\": \"user\", \"content\": compressed_prompt[\"compressed_prompt\"]},\n",
"]\n",
"\n",
"request_data = {\n",
" \"messages\": message,\n",
" \"max_tokens\": 500,\n",
" \"temperature\": 0,\n",
" \"top_p\": 1,\n",
" \"n\": 1,\n",
" \"stream\": False,\n",
"}\n",
"response = openai.ChatCompletion.create(\n",
" \"gpt-4-32k\",\n",
" **request_data,\n",
")\n",
"\n",
"print(json.dumps(compressed_prompt, indent=4))\n",
"print(\"Response:\", response)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 5
}