Refine LlamaIndex unit documentation with updates to agents, components, and README

davidberenstein1957
2025-02-27 08:07:33 +01:00
parent 962845d7a3
commit 7006f3f445
3 changed files with 12 additions and 7 deletions


@@ -5,7 +5,11 @@ This LlamaIndex frame outline is part of unit 2 of the course. You can access th
| Title | Description |
| --- | --- |
| [Introduction](introduction.mdx) | Introduction to LlamaIndex |
| [LlamaHub](llama-hub.mdx) | LlamaHub: a registry of integrations, agents and tools |
| [Components](components.mdx) | Components: the building blocks of workflows |
| [Tools](tools.mdx) | Tools: how to build tools in LlamaIndex |
| [Quiz 1](quiz1.mdx) | Quiz 1 |
| [Agents](agents.mdx) | Agents: how to build agents in LlamaIndex |
| [Workflows](workflows.mdx) | Workflows: a sequence of steps, events made of components that are executed in order |
| [Quiz 2](quiz2.mdx) | Quiz 2 |
| [Conclusion](conclusion.mdx) | Conclusion |


@@ -13,9 +13,9 @@ LlamaIndex supports **three main types of reasoning agents:**
1. `Function Calling Agents` - These work with AI models that can call specific functions.
2. `ReAct Agents` - These can work with any AI that has a chat or text endpoint and deal with complex reasoning tasks.
- 3. `Advanced Agents` - These use more complex methods like LLMCompiler or Chain-of-Abstraction.
+ 3. `Advanced Custom Agents` - These use more complex methods to deal with more complex tasks and workflows.
- <Tip>Find more information on advanced agents on <a href="https://github.com/run-llama/llama_index/tree/main/llama-index-packs">LlamaIndex GitHub</a></Tip>
+ <Tip>Find more information on advanced agents on <a href="https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/agent/workflow/base_agent.py">BaseWorkflowAgent</a></Tip>
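The ReAct style listed above can be caricatured without any LlamaIndex dependency. In this sketch the "LLM" is a scripted stub (an assumption for illustration, not a real chat endpoint): it alternates thought/action steps, the loop executes the requested tool, feeds the observation back, and stops at a final answer.

```python
# Toy ReAct-style loop: a scripted stub stands in for the LLM.
def scripted_llm(history):
    # Stand-in for a chat endpoint: pick the next step from the transcript.
    if "Observation: 42" in history:
        return "Answer: 6 times 7 is 42."
    return "Action: multiply(6, 7)"

def multiply(a, b):
    return a * b

history = "Question: what is 6 times 7?"
for _ in range(5):  # cap the loop so a confused "model" cannot spin forever
    step = scripted_llm(history)
    history += "\n" + step
    if step.startswith("Answer:"):
        break
    if step.startswith("Action: multiply"):
        # Parse the tool call, execute it, and append the observation.
        args = step[step.index("(") + 1 : step.index(")")].split(",")
        history += f"\nObservation: {multiply(int(args[0]), int(args[1]))}"
print(step)
```

A real ReAct agent replaces `scripted_llm` with an actual model call, but the control flow is the same.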
## Initialising Agents
@@ -41,7 +41,7 @@ llm = HuggingFaceInferenceAPI(model_name="meta-llama/Meta-Llama-3-8B-Instruct")
# initialize agent
agent = AgentWorkflow.from_tools_or_functions(
[FunctionTool.from_defaults(multiply_tool)],
llm=llm
)
```
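What `FunctionTool.from_defaults` does for the agent above can be sketched in plain Python (the class below is illustrative only, not the llama_index API): wrap a function together with a name and description derived from its signature and docstring, so an agent can choose the tool by description and invoke it.

```python
# Plain-Python sketch of a function tool: metadata comes from the
# wrapped function's name and docstring unless overridden.
import inspect

class SketchFunctionTool:
    def __init__(self, fn, name=None, description=None):
        self.fn = fn
        self.name = name or fn.__name__
        self.description = description or inspect.getdoc(fn) or ""

    def __call__(self, *args, **kwargs):
        # Calling the tool just delegates to the wrapped function.
        return self.fn(*args, **kwargs)

def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the product."""
    return a * b

multiply_tool = SketchFunctionTool(multiply)
print(multiply_tool.name, multiply_tool(6, 7))
```

The description is what an LLM sees when deciding which tool to call, which is why docstrings on tool functions matter.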


@@ -82,8 +82,9 @@ pipeline = IngestionPipeline(
]
)
- # run the pipeline
+ # run the pipeline sync or async
nodes = pipeline.run(documents=[Document.example()])
+ nodes = await pipeline.arun(documents=[Document.example()])
```
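The sync/async pair shown in the diff can be illustrated with a stdlib-only toy (names here are hypothetical, not the llama_index API): a pipeline applies its transformations in order, and `arun` is simply an awaitable counterpart of `run` that offloads the blocking work.

```python
# Toy pipeline with a sync run() and an async arun() counterpart.
import asyncio

class ToyPipeline:
    def __init__(self, transformations):
        self.transformations = transformations

    def run(self, documents):
        # Apply each transformation to every document, in order.
        nodes = documents
        for transform in self.transformations:
            nodes = [transform(n) for n in nodes]
        return nodes

    async def arun(self, documents):
        # Run the blocking pipeline in a worker thread so callers can await it.
        return await asyncio.to_thread(self.run, documents)

pipeline = ToyPipeline([str.strip, str.lower])
print(pipeline.run(["  Hello World  "]))          # sync
print(asyncio.run(pipeline.arun(["  ASYNC Too  "])))  # async
```

The async form matters when ingestion happens inside a server or workflow that must not block its event loop.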
@@ -91,7 +92,7 @@ nodes = pipeline.run(documents=[Document.example()])
After creating our `Node` objects we need to index them to make them searchable, but before we can do that, we need a place to store our data.
Since we are using an ingestion pipeline, we can directly attach a vector store to the pipeline to populate it. In this case, we will use `Chroma` to store our documents.
First we install the integration:
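Before wiring up a real store like `Chroma`, the core idea of "index the nodes, then search them" can be sketched with a toy in-memory store (bag-of-words counts stand in for a real embedding model; nothing here is the Chroma API):

```python
# Toy vector store: embed texts, rank by cosine similarity on query.
import math
from collections import Counter

def embed(text):
    # Stand-in embedding: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    def __init__(self):
        self.rows = []  # (embedding, original text) pairs

    def add(self, texts):
        self.rows.extend((embed(t), t) for t in texts)

    def query(self, text, top_k=1):
        q = embed(text)
        ranked = sorted(self.rows, key=lambda r: cosine(q, r[0]), reverse=True)
        return [t for _, t in ranked[:top_k]]

store = ToyVectorStore()
store.add(["the cat sat on the mat", "llamas live in the Andes"])
print(store.query("where do llamas live"))
```

A real vector store swaps the toy embedding for a learned one and adds persistence, which is exactly what attaching `Chroma` to the pipeline provides.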
@@ -209,7 +210,7 @@ This is especially useful when we are building more complex workflows and want t
<details>
<summary>Install LlamaTrace</summary>
- As introduced in the [section on components](what-are-components-in-llama-index.mdx), we can install the LlamaTrace integration with the following command:
+ As introduced in the [section on components](components.mdx), we can install the LlamaTrace callback from Arize Phoenix with the following command:
```bash
pip install -U llama-index-callbacks-arize-phoenix