Merge pull request #388 from jannikmaierhoefer/link-fixes

[fix] link fixes in bonus unit 2
Sergio Paniego Blanco
2025-03-31 19:06:21 +02:00
committed by GitHub
2 changed files with 3 additions and 3 deletions


@@ -109,7 +109,7 @@ agent = CodeAgent(
agent.run("1+1=")
```
-Check your [Langfuse Traces Dashboard](https://cloud.langfuse.com/traces) (or your chosen observability tool) to confirm that the spans and logs have been recorded.
+Check your [Langfuse Traces Dashboard](https://cloud.langfuse.com) (or your chosen observability tool) to confirm that the spans and logs have been recorded.
Example screenshot from Langfuse:
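
For context, the spans in that screenshot come from instrumentation that is set up before the agent is created. A minimal sketch of such a setup, assuming Langfuse's OTLP endpoint and the `openinference-instrumentation-smolagents` package (replace the placeholder keys with your own project credentials):

```python
# Minimal sketch: route OpenTelemetry spans from smolagents to Langfuse.
# The endpoint, keys, and instrumentor package are assumptions; adapt them
# to your own project and observability tool.
import base64
import os

LANGFUSE_PUBLIC_KEY = "pk-lf-..."  # placeholder
LANGFUSE_SECRET_KEY = "sk-lf-..."  # placeholder
auth = base64.b64encode(
    f"{LANGFUSE_PUBLIC_KEY}:{LANGFUSE_SECRET_KEY}".encode()
).decode()

# Point the generic OTLP exporter at Langfuse (assumed endpoint).
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://cloud.langfuse.com/api/public/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {auth}"

from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Export every finished span over OTLP/HTTP and instrument smolagents,
# so each agent.run(...) call is recorded as a trace.
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter()))
SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)
```

With this in place, the `agent.run("1+1=")` call above should appear as a trace in the dashboard shortly after it finishes.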


@@ -23,7 +23,7 @@ Common observability tools for AI agents include platforms like [Langfuse](https
Observability tools vary widely in their features and capabilities. Some tools are open source, benefiting from large communities that shape their roadmaps and extensive integrations. Additionally, certain tools specialize in specific aspects of LLMOps—such as observability, evaluations, or prompt management—while others are designed to cover the entire LLMOps workflow. We encourage you to explore the documentation of different options to pick a solution that works well for you.
-Many agent frameworks such as [smolagents](https://smolagents.com) use the [OpenTelemetry](https://opentelemetry.io/docs/) standard to expose metadata to observability tools. In addition, observability tools build custom instrumentations to allow for more flexibility in the fast-moving world of LLMs. You should check the documentation of the tool you are using to see what is supported.
+Many agent frameworks such as [smolagents](https://huggingface.co/docs/smolagents/v1.12.0/en/index) use the [OpenTelemetry](https://opentelemetry.io/docs/) standard to expose metadata to observability tools. In addition, observability tools build custom instrumentations to allow for more flexibility in the fast-moving world of LLMs. You should check the documentation of the tool you are using to see what is supported.
## 🔬Traces and Spans
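
The heading above maps directly onto the OpenTelemetry model: a trace covers the handling of one request, and spans are its individual steps (LLM calls, tool calls, and so on). Below is a small framework-agnostic sketch using only the OpenTelemetry API; the span and attribute names are illustrative, not what smolagents actually emits:

```python
# Illustrative only: hand-rolled spans via the OpenTelemetry API.
# Instrumented frameworks create similar structures automatically, and the
# spans go to whichever tracer provider/exporter you have configured.
from opentelemetry import trace

tracer = trace.get_tracer("agent-demo")

# A trace groups everything that happens while handling one user request.
with tracer.start_as_current_span("agent-run") as run_span:
    run_span.set_attribute("user.query", "1+1=")

    # Each step becomes a nested span with its own metadata.
    with tracer.start_as_current_span("llm-call") as llm_span:
        llm_span.set_attribute("llm.model", "example-model")  # illustrative
        llm_span.set_attribute("llm.total_tokens", 42)        # illustrative

    with tracer.start_as_current_span("tool-call") as tool_span:
        tool_span.set_attribute("tool.name", "python_interpreter")
```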
@@ -52,7 +52,7 @@ Here are some of the most common metrics that observability tools monitor:
**Automated Evaluation Metrics:** You can also set up automated evals. For instance, you can use an LLM to score the output of the agent, e.g. whether it is helpful and accurate. There are also several open-source libraries that help you score different aspects of the agent, e.g. [RAGAS](https://docs.ragas.io/) for RAG agents or [LLM Guard](https://llm-guard.com/) to detect harmful language or prompt injection.
-In practice, a combination of these metrics gives the best coverage of an AI agent's health. In this chapter's [example notebook](notebooks/bonus-unit2/monitoring-and-evaluating-agents.ipynb), we'll show you how these metrics look in real examples, but first, we'll learn what a typical evaluation workflow looks like.
+In practice, a combination of these metrics gives the best coverage of an AI agent's health. In this chapter's [example notebook](https://huggingface.co/learn/agents-course/en/bonus-unit2/monitoring-and-evaluating-agents-notebook), we'll show you how these metrics look in real examples, but first, we'll learn what a typical evaluation workflow looks like.
## 👍 Evaluating AI Agents
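
To make the automated evaluation metrics above more concrete, here is a rough LLM-as-a-judge sketch. The judge model, rubric, and `judge_helpfulness` helper are hypothetical, not taken from the course notebook:

```python
# Hypothetical LLM-as-a-judge helper; the model choice and rubric are assumptions.
from huggingface_hub import InferenceClient

client = InferenceClient(model="Qwen/Qwen2.5-72B-Instruct")  # assumed judge model

def judge_helpfulness(question: str, answer: str) -> int:
    """Ask a judge LLM to rate an agent answer from 1 (poor) to 5 (excellent)."""
    prompt = (
        "Rate how helpful and accurate the answer below is on a scale of 1 to 5. "
        "Reply with a single digit only.\n\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    response = client.chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=5,
    )
    return int(response.choices[0].message.content.strip())

# The resulting score can be attached to the corresponding trace in your
# observability tool, alongside latency, cost, and user-feedback metrics.
print(judge_helpfulness("1+1=", "1+1 equals 2."))
```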