{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Workflows in LlamaIndex\n",
"\n",
"This notebook is part of the [Hugging Face Agents Course](https://www.hf.co/learn/agents-course), a free course from beginner to expert, where you learn to build agents.\n",
"\n",
"## Let's install the dependencies\n",
"\n",
"We will install the dependencies for this unit."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"!pip install llama-index datasets llama-index-callbacks-arize-phoenix llama-index-vector-stores-chroma llama-index-utils-workflow llama-index-llms-huggingface-api pyvis -U -q"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And let's log in to Hugging Face to use the serverless Inference API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from huggingface_hub import login\n",
"\n",
"login()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Workflow Creation\n",
"\n",
"We can start by creating a simple workflow. We use the `StartEvent` and `StopEvent` classes to define the start and stop of the workflow."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello, world!'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step\n",
"\n",
"\n",
"class MyWorkflow(Workflow):\n",
"    @step\n",
"    async def my_step(self, ev: StartEvent) -> StopEvent:\n",
"        # do something here\n",
"        return StopEvent(result=\"Hello, world!\")\n",
"\n",
"\n",
"w = MyWorkflow(timeout=10, verbose=False)\n",
"result = await w.run()\n",
"result"
]
},
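{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick aside: keyword arguments passed to `run()` are attached to the `StartEvent`, so a step can read them as attributes. Here is a minimal sketch of that pattern; the `message` field name is our own choice for illustration, not part of the API:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class GreetingWorkflow(Workflow):\n",
"    @step\n",
"    async def greet(self, ev: StartEvent) -> StopEvent:\n",
"        # `message` comes from the keyword argument passed to run()\n",
"        return StopEvent(result=f\"Hello, {ev.message}!\")\n",
"\n",
"\n",
"w = GreetingWorkflow(timeout=10, verbose=False)\n",
"result = await w.run(message=\"world\")\n",
"result"
]
},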
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Connecting Multiple Steps\n",
"\n",
"We can also create multi-step workflows. Here we pass event information between steps using a custom event. Note that the type hints on each step define which events it consumes and produces, and hence the flow of the workflow."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Finished processing: Step 1 complete'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from llama_index.core.workflow import Event\n",
"\n",
"\n",
"class ProcessingEvent(Event):\n",
"    intermediate_result: str\n",
"\n",
"\n",
"class MultiStepWorkflow(Workflow):\n",
"    @step\n",
"    async def step_one(self, ev: StartEvent) -> ProcessingEvent:\n",
"        # Process initial data\n",
"        return ProcessingEvent(intermediate_result=\"Step 1 complete\")\n",
"\n",
"    @step\n",
"    async def step_two(self, ev: ProcessingEvent) -> StopEvent:\n",
"        # Use the intermediate result\n",
"        final_result = f\"Finished processing: {ev.intermediate_result}\"\n",
"        return StopEvent(result=final_result)\n",
"\n",
"\n",
"w = MultiStepWorkflow(timeout=10, verbose=False)\n",
"result = await w.run()\n",
"result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loops and Branches\n",
"\n",
"We can also use type hints to create branches and loops. The `|` operator in an annotation specifies that a step can accept or return multiple event types; here, returning a `LoopEvent` sends execution back to `step_one`."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Good thing happened\n"
]
},
{
"data": {
"text/plain": [
"'Finished processing: First step complete.'"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import random\n",
"\n",
"from llama_index.core.workflow import Event\n",
"\n",
"\n",
"class ProcessingEvent(Event):\n",
"    intermediate_result: str\n",
"\n",
"\n",
"class LoopEvent(Event):\n",
"    loop_output: str\n",
"\n",
"\n",
"class MultiStepWorkflow(Workflow):\n",
"    @step\n",
"    async def step_one(self, ev: StartEvent | LoopEvent) -> ProcessingEvent | LoopEvent:\n",
"        if random.randint(0, 1) == 0:\n",
"            print(\"Bad thing happened\")\n",
"            # Loop back to step_one by emitting a LoopEvent\n",
"            return LoopEvent(loop_output=\"Back to step one.\")\n",
"        else:\n",
"            print(\"Good thing happened\")\n",
"            return ProcessingEvent(intermediate_result=\"First step complete.\")\n",
"\n",
"    @step\n",
"    async def step_two(self, ev: ProcessingEvent) -> StopEvent:\n",
"        # Use the intermediate result\n",
"        final_result = f\"Finished processing: {ev.intermediate_result}\"\n",
"        return StopEvent(result=final_result)\n",
"\n",
"\n",
"w = MultiStepWorkflow(verbose=False)\n",
"result = await w.run()\n",
"result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Drawing Workflows\n",
"\n",
"We can also draw workflows using the `draw_all_possible_flows` function, which writes an interactive HTML visualization to disk.\n"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'NoneType'>\n",
"<class '__main__.ProcessingEvent'>\n",
"<class '__main__.LoopEvent'>\n",
"<class 'llama_index.core.workflow.events.StopEvent'>\n",
"workflow_all_flows.html\n"
]
}
],
"source": [
"from llama_index.utils.workflow import draw_all_possible_flows\n",
"\n",
"draw_all_possible_flows(w)"
]
},
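{
"cell_type": "markdown",
"metadata": {},
"source": [
"The output above shows the default file name, `workflow_all_flows.html`. To control where the visualization is written, `draw_all_possible_flows` also accepts a `filename` argument -- treat the exact keyword as an assumption and check your installed `llama-index-utils-workflow` version:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Assumed `filename` keyword -- verify against your installed version\n",
"draw_all_possible_flows(w, filename=\"multi_step_workflow.html\")"
]
},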
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## State Management\n",
"\n",
"Instead of passing event information between steps, we can use the `Context` type hint to share state.\n",
"This is useful for long-running workflows, where you want to store information across steps."
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Query: What is the capital of France?\n"
]
},
{
"data": {
"text/plain": [
"'Finished processing: Step 1 complete'"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from llama_index.core.workflow import Context, Event\n",
"\n",
"\n",
"class ProcessingEvent(Event):\n",
"    intermediate_result: str\n",
"\n",
"\n",
"class MultiStepWorkflow(Workflow):\n",
"    @step\n",
"    async def step_one(self, ev: StartEvent, ctx: Context) -> ProcessingEvent:\n",
"        # Store data in the shared context\n",
"        await ctx.set(\"query\", \"What is the capital of France?\")\n",
"        return ProcessingEvent(intermediate_result=\"Step 1 complete\")\n",
"\n",
"    @step\n",
"    async def step_two(self, ev: ProcessingEvent, ctx: Context) -> StopEvent:\n",
"        # Retrieve data from the shared context\n",
"        query = await ctx.get(\"query\")\n",
"        print(f\"Query: {query}\")\n",
"        final_result = f\"Finished processing: {ev.intermediate_result}\"\n",
"        return StopEvent(result=final_result)\n",
"\n",
"\n",
"w = MultiStepWorkflow(timeout=10, verbose=False)\n",
"result = await w.run()\n",
"result"
]
},
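{
"cell_type": "markdown",
"metadata": {},
"source": [
"A side note on `Context`: if a key might not have been set yet, `ctx.get` can fall back to a default value instead of raising an error. A minimal sketch of that pattern -- the `default` keyword is our assumption about the current API, so verify it against your llama-index version:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class SafeReadWorkflow(Workflow):\n",
"    @step\n",
"    async def read_step(self, ev: StartEvent, ctx: Context) -> StopEvent:\n",
"        # \"query\" was never set in this workflow, so fall back to a default\n",
"        # (assumed `default` keyword -- check your installed version)\n",
"        query = await ctx.get(\"query\", default=\"<no query stored>\")\n",
"        return StopEvent(result=query)\n",
"\n",
"\n",
"w = SafeReadWorkflow(timeout=10, verbose=False)\n",
"result = await w.run()\n",
"result"
]
},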
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Multi-Agent Workflows\n",
"\n",
"We can also create multi-agent workflows with `AgentWorkflow`. Here we define two agents: one that multiplies two integers and one that adds two integers."
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AgentOutput(response=ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='I have handed off the request to an agent who can help you with adding 5 and 3. Please wait for their response.')]), tool_calls=[ToolCallResult(tool_name='handoff', tool_kwargs={'to_agent': 'addition_agent', 'reason': 'Add 5 and 3'}, tool_id='call_F97vcIcsvZjfAAOBzzIifW3y', tool_output=ToolOutput(content='Agent addition_agent is now handling the request due to the following reason: Add 5 and 3.\\nPlease continue with the current request.', tool_name='handoff', raw_input={'args': (), 'kwargs': {'to_agent': 'addition_agent', 'reason': 'Add 5 and 3'}}, raw_output='Agent addition_agent is now handling the request due to the following reason: Add 5 and 3.\\nPlease continue with the current request.', is_error=False), return_direct=True), ToolCallResult(tool_name='handoff', tool_kwargs={'to_agent': 'addition_agent', 'reason': 'Add 5 and 3'}, tool_id='call_jf49ktFRs09xYdOsnApAk2zz', tool_output=ToolOutput(content='Agent addition_agent is now handling the request due to the following reason: Add 5 and 3.\\nPlease continue with the current request.', tool_name='handoff', raw_input={'args': (), 'kwargs': {'to_agent': 'addition_agent', 'reason': 'Add 5 and 3'}}, raw_output='Agent addition_agent is now handling the request due to the following reason: Add 5 and 3.\\nPlease continue with the current request.', is_error=False), return_direct=True)], raw={'id': 'chatcmpl-B6Cy54VQkvlG3VOrmdzCzgwcJmVOc', 'choices': [{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None}, 'finish_reason': 'stop', 'index': 0, 'logprobs': None}], 'created': 1740819517, 'model': 'gpt-3.5-turbo-0125', 'object': 'chat.completion.chunk', 'service_tier': 'default', 'system_fingerprint': None, 'usage': None}, current_agent_name='addition_agent')"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from llama_index.core.agent.workflow import AgentWorkflow, ReActAgent\n",
"from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI\n",
"\n",
"\n",
"# Define some tools\n",
"def add(a: int, b: int) -> int:\n",
"    \"\"\"Add two numbers.\"\"\"\n",
"    return a + b\n",
"\n",
"\n",
"def multiply(a: int, b: int) -> int:\n",
"    \"\"\"Multiply two numbers.\"\"\"\n",
"    return a * b\n",
"\n",
"\n",
"llm = HuggingFaceInferenceAPI(model_name=\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n",
"\n",
"# We can pass functions directly, without FunctionTool -- the function and its docstring are parsed for the name and description\n",
"multiply_agent = ReActAgent(\n",
"    name=\"multiply_agent\",\n",
"    description=\"Is able to multiply two integers\",\n",
"    system_prompt=\"A helpful assistant that can use a tool to multiply numbers.\",\n",
"    tools=[multiply],\n",
"    llm=llm,\n",
")\n",
"\n",
"addition_agent = ReActAgent(\n",
"    name=\"addition_agent\",\n",
"    description=\"Is able to add two integers\",\n",
"    system_prompt=\"A helpful assistant that can use a tool to add numbers.\",\n",
"    tools=[add],\n",
"    llm=llm,\n",
")\n",
"\n",
"# Create the workflow\n",
"workflow = AgentWorkflow(\n",
"    agents=[multiply_agent, addition_agent],\n",
"    root_agent=\"multiply_agent\",\n",
")\n",
"\n",
"# Run the system\n",
"response = await workflow.run(user_msg=\"Can you add 5 and 3?\")\n",
"response"
]
},
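{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because `workflow.run()` returns a handler before the run finishes, we can also stream events as the agents work instead of only awaiting the final result. A minimal sketch, assuming the `AgentStream` event type with a `delta` attribute is available in your llama-index version:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from llama_index.core.agent.workflow import AgentStream\n",
"\n",
"# Start the run without awaiting it, then consume events as they arrive\n",
"handler = workflow.run(user_msg=\"Can you multiply 4 and 6?\")\n",
"\n",
"async for event in handler.stream_events():\n",
"    # AgentStream events carry incremental LLM output in `delta`\n",
"    if isinstance(event, AgentStream):\n",
"        print(event.delta, end=\"\", flush=True)\n",
"\n",
"# Await the handler for the final result\n",
"response = await handler\n",
"response"
]
}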
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 2
}