322 Commits

Author SHA1 Message Date
burtenshaw
e35d7c91aa Update README.md 2025-02-11 07:33:12 +00:00
burtenshaw
7a92fa0422 Merge pull request #58 from dtellz/patch-1
correct minor typos
2025-02-10 20:12:56 +01:00
burtenshaw
219f82c0f8 Merge pull request #55 from joagonzalez/patch-1
Update what-are-llms.mdx
2025-02-10 20:11:40 +01:00
burtenshaw
bf4cc8a017 Merge pull request #54 from bruno-oliveira/patch-1
Update dummy-agent-library.mdx
2025-02-10 20:11:19 +01:00
burtenshaw
d7f3068197 Merge branch 'main' into pr/54 2025-02-10 19:11:06 +00:00
Pedro Cuenca
2606d81f64 Merge pull request #56 from duhow/patch-1
chore: add code block
2025-02-10 20:10:29 +01:00
Pedro Cuenca
808cf41d92 Merge pull request #57 from OranciucIvan/huggingFaceQuiz
The Correct_word_was_removed_from_quiz
2025-02-10 20:09:28 +01:00
burtenshaw
dd570c356e Merge pull request #52 from SrzStephen/patch-1
spelling - actions.mdx
2025-02-10 20:07:50 +01:00
burtenshaw
50e10ed022 Merge branch 'main' into patch-1 2025-02-10 20:07:38 +01:00
Diego Tellez
752697a917 corrected typos 2025-02-10 19:07:23 +00:00
burtenshaw
f52ff2f798 Merge pull request #49 from Dimildizio/main
fix(doc): change relative paths of units 2-4 readme in the repo README.md to proper ones
2025-02-10 20:06:36 +01:00
OranciucIvan
0e47e39d9f the_Correct_word_was_removed_from_quiz 2025-02-10 20:33:52 +02:00
David Girón
1eeb215afe chore: add code block 2025-02-10 19:29:46 +01:00
Joaquin Gonzalez
0eebb8e124 Update what-are-llms.mdx
small spelling typo
2025-02-10 15:25:11 -03:00
Thomas Simonini
943dc3a10d Merge pull request #47 from huggingface/more-unit-1-nits
Additional Unit 1 nits and suggestions
2025-02-10 18:24:45 +01:00
Bruno Oliveira
a579c98460 Update dummy-agent-library.mdx 2025-02-10 18:22:12 +01:00
Pedro Cuenca
5134dacf0b Set notebook link to HF 2025-02-10 18:05:08 +01:00
Pedro Cuenca
134391045f Notebook updates to bring it more in line with mdx 2025-02-10 18:00:00 +01:00
Pedro Cuenca
1f09bfd3b9 typo
Co-authored-by: sergiopaniego <sergiopaniegoblanco@gmail.com>
2025-02-10 17:35:14 +01:00
Pedro Cuenca
ea641f8a78 typo 2025-02-10 17:32:43 +01:00
SrzStephen
1ef4b440d2 spelling - actions.mdx 2025-02-11 00:09:31 +08:00
Pedro Cuenca
eef2fdd622 Merge pull request #51 from sergiopaniego/unit-1-suggestions
`Unit 1` suggestions for improvement
2025-02-10 17:08:41 +01:00
Pedro Cuenca
1d3bb98b3c nit 2025-02-10 17:04:35 +01:00
Pedro Cuenca
412062b29a tutorial 2025-02-10 17:02:04 +01:00
sergiopaniego
49ba2722a5 Link as html 2025-02-10 16:47:25 +01:00
sergiopaniego
4ef76b0fbe LLaMA3 -> Llama 3 2025-02-10 16:39:52 +01:00
sergiopaniego
cc91411f4c Simplified sentence 2025-02-10 16:39:18 +01:00
Pedro Cuenca
755ff251ef Merge pull request #48 from qgallouedec/patch-1
Link to HF profiles
2025-02-10 16:32:18 +01:00
Pedro Cuenca
c091e3d874 More suggestions 2025-02-10 16:30:22 +01:00
Dimildizio
11f2e27e4e fix(doc): change relative path to units 2-4 readme
Even though it is expected (I assume) to be changed later to a proper path on HF website, to avoid confusion I guess it's better to redirect to the en readme (even empty) instead of the 404 currently shown
2025-02-10 23:23:11 +08:00
Quentin Gallouédec
f2bdca0ab9 link to hf profile 2025-02-10 16:22:28 +01:00
Pedro Cuenca
0d20991b31 Merge pull request #46 from sergiopaniego/unit-0-nit
Small updates to `Unit 0`
2025-02-10 16:01:21 +01:00
Pedro Cuenca
b7edabcb72 actions 2025-02-10 15:54:20 +01:00
Pedro Cuenca
de433ca01a Additional Unit 1 nits and suggestions 2025-02-10 15:47:27 +01:00
sergiopaniego
f2e95ae832 Small updated to unit 0 2025-02-10 15:36:19 +01:00
burtenshaw
3ec0253d4c Merge pull request #44 from huggingface/unit-1-additional-nits
Additional Unit 1 nits and suggestions
2025-02-10 15:07:39 +01:00
burtenshaw
c148d1cad6 Merge pull request #43 from sergiopaniego/dummy_agent_notebook_nits
Some typos fixed in dummy agent library notebook
2025-02-10 15:06:44 +01:00
burtenshaw
4e4be40d28 small nits 2025-02-10 15:05:54 +01:00
Jofthomas
57b40af689 Merge pull request #45 from huggingface/Unit1_last_changes
modify action bloc
2025-02-10 14:39:55 +01:00
Joffrey THOMAS
86fd60b87e modify action bloc 2025-02-10 14:35:51 +01:00
Pedro Cuenca
c1b64f703e Additional Unit 1 nits and suggestions 2025-02-10 14:34:45 +01:00
sergiopaniego
4205836dfe Typo fixed 2025-02-10 14:25:24 +01:00
sergiopaniego
ca49eb37fe Some typos fixed in dummy agent library notebook 2025-02-10 14:24:19 +01:00
Jofthomas
6c5d8d1e0b Merge pull request #31 from huggingface/Unit_1_Joffrey
[🔴 NOT READY TO BE MERGED] Unit 1 Updates
2025-02-10 14:04:51 +01:00
burtenshaw
f1b473736e Merge branch 'main' into Unit_1_Joffrey 2025-02-10 14:02:32 +01:00
Jofthomas
ba0c5a9500 Merge pull request #42 from huggingface/Unit0_Aknowledgments
Acknowledgments
2025-02-10 14:01:19 +01:00
Joffrey THOMAS
3c67613189 Acknowledgments 2025-02-10 13:43:07 +01:00
Thomas Simonini
1e404b6a44 Merge pull request #41 from sergiopaniego/tools-nits
Small nits to `Tools`
2025-02-10 13:39:24 +01:00
sergiopaniego
98aa5e1d2d Tools nits 2025-02-10 13:35:26 +01:00
burtenshaw
6142ed3b6d Merge pull request #39 from huggingface/tools
Tools
2025-02-10 13:03:08 +01:00
burtenshaw
6a2c12e90c fix code fences 2025-02-10 12:57:38 +01:00
Joffrey THOMAS
632eea0ebc sergio's suggestion 2 2025-02-10 12:57:37 +01:00
Joffrey THOMAS
67fbb06f46 applied sergio suggestions 2025-02-10 12:56:09 +01:00
burtenshaw
a0baf67910 small nits 2025-02-10 12:55:11 +01:00
Joffrey THOMAS
8f9f6daa95 small improvements 2025-02-10 12:39:00 +01:00
Joffrey THOMAS
4a8315b402 moved spaces to org 2025-02-10 12:18:44 +01:00
Thomas Simonini
7291561d71 Update tutorial.mdx 2025-02-10 11:54:12 +01:00
Joffrey THOMAS
b4ad07cda6 switch space ( zeroGPU not working embbeded) 2025-02-10 11:47:08 +01:00
Pedro Cuenca
6f2ace74fa Tools section: review and reorder 2025-02-10 11:46:26 +01:00
Joffrey THOMAS
8f771f180f typo 2025-02-10 11:43:06 +01:00
Joffrey THOMAS
19216b873e thanks aymeric add 2025-02-10 11:37:36 +01:00
Joffrey THOMAS
cd2f7ea03d tutorial complete 2025-02-10 11:31:24 +01:00
Thomas Simonini
cee9f35e50 Merge pull request #37 from huggingface/ThomasSimonini/DocumentationUpload
[DON'T MERGE BEFORE LAUNCH] Create build_documentation.yml
2025-02-10 11:24:17 +01:00
burtenshaw
aefbd488a7 Merge pull request #38 from huggingface/expand-special-tokens-section
expand special tokens table
2025-02-10 11:21:14 +01:00
burtenshaw
2dff9d864e Merge branch 'Unit_1_Joffrey' into expand-special-tokens-section 2025-02-10 11:20:03 +01:00
Joffrey THOMAS
79e57e079e delete mistral example 2025-02-10 11:12:18 +01:00
Joffrey THOMAS
ddc11585c9 typo 2025-02-10 11:10:35 +01:00
burtenshaw
312c69ed95 Merge branch 'Unit_1_Joffrey' into expand-special-tokens-section 2025-02-10 11:09:34 +01:00
burtenshaw
f3274cfe91 Update units/en/unit1/what-are-llms.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2025-02-10 11:07:27 +01:00
Thomas Simonini
0fc8babe80 Update introduction.mdx 2025-02-10 11:06:11 +01:00
Thomas Simonini
893afaaabf Update introduction.mdx 2025-02-10 11:05:19 +01:00
Thomas Simonini
7dccec4dc5 Merge branch 'Unit_1_Joffrey' of https://github.com/huggingface/agents-course into Unit_1_Joffrey 2025-02-10 11:01:11 +01:00
Thomas Simonini
2f0b825221 Update dummy-agent-library.mdx 2025-02-10 11:01:02 +01:00
Joffrey THOMAS
a1ffe8db61 errors and typos 2025-02-10 10:57:51 +01:00
Joffrey THOMAS
d65d32846a markdown to html table 2025-02-10 10:51:28 +01:00
Joffrey THOMAS
3dcec7a78b markdown format correction 2025-02-10 10:43:32 +01:00
Joffrey THOMAS
14ada04b4f change tutorial 2025-02-10 10:36:24 +01:00
Joffrey THOMAS
b9bd9946c7 tutorial mdx 2025-02-10 10:34:46 +01:00
burtenshaw
a13a807477 small typos
Co-authored-by: sergiopaniego <sergiopaniegoblanco@gmail.com>
2025-02-10 10:05:15 +01:00
Thomas Simonini
640668e4c4 Update links spaces 2025-02-10 09:54:37 +01:00
Thomas Simonini
dd3b82f783 Update spaces links 2025-02-10 09:50:29 +01:00
burtenshaw
f6e7dc1d7d expand special tokens table 2025-02-10 09:27:52 +01:00
Joffrey THOMAS
e57dd97beb delete old tutorial 2025-02-10 08:40:38 +01:00
Thomas Simonini
7aeee9d3c6 Update tutorial.mdx 2025-02-10 08:12:15 +01:00
Thomas Simonini
102bcc4fc8 Update 2025-02-10 08:10:18 +01:00
Thomas Simonini
29951ef4b3 Update simple-use-case.mdx 2025-02-10 08:09:15 +01:00
Thomas Simonini
02bd61da83 Update simple-use-case.mdx 2025-02-10 07:56:22 +01:00
Thomas Simonini
ccd49df3e1 Update final-quiz.mdx
*Add warning submit
2025-02-10 07:17:48 +01:00
Thomas Simonini
380b55c012 Create build_documentation.yml 2025-02-10 06:45:39 +01:00
Thomas Simonini
aed0a08994 Update what-are-llms.mdx 2025-02-10 06:30:55 +01:00
Thomas Simonini
a0522bb77f Update what-are-llms.mdx 2025-02-10 06:29:06 +01:00
Thomas Simonini
63d7e23965 Final Updates 2025-02-10 06:23:57 +01:00
Thomas Simonini
cf6bbe7339 Merge pull request #36 from huggingface/unit-1-nits
Unit 1 Review
2025-02-10 06:13:16 +01:00
Pedro Cuenca
378e312c1d Apply suggestions from code review
Co-authored-by: burtenshaw <ben.burtenshaw@gmail.com>
2025-02-09 23:51:58 +01:00
Thomas Simonini
c592c0009d Update simple-use-case.mdx 2025-02-09 20:39:23 +01:00
Pedro Cuenca
b459eb49f8 Chat templates 2025-02-09 20:37:19 +01:00
Thomas Simonini
e613b2af48 Add link notebook 2025-02-09 20:32:05 +01:00
Thomas Simonini
28b72f38f8 Update dummy_agent_library.ipynb 2025-02-09 20:29:14 +01:00
Thomas Simonini
d48a0856aa Update dummy-agent-library.mdx 2025-02-09 20:27:08 +01:00
Thomas Simonini
436baaa1f5 Update dummy-agent-library.mdx 2025-02-09 20:18:21 +01:00
Thomas Simonini
d9b732ead2 Update dummy-agent-library.mdx 2025-02-09 20:15:43 +01:00
Thomas Simonini
aaa15223ab Update dummy-agent-library.mdx 2025-02-09 20:07:52 +01:00
Thomas Simonini
7328cf0762 Update dummy-agent-library.mdx 2025-02-09 20:00:00 +01:00
Thomas Simonini
b51a7ba6eb Update dummy-agent-library.mdx 2025-02-09 19:57:47 +01:00
Thomas Simonini
ffa3560c11 Update dummy-agent-library.mdx 2025-02-09 19:55:45 +01:00
Thomas Simonini
e974531f88 Update dummy-agent-library.mdx 2025-02-09 19:52:30 +01:00
Pedro Cuenca
31a8e13870 llms 2025-02-09 19:37:59 +01:00
Joffrey THOMAS
473174b32a remove quotes for clarity 2025-02-09 18:58:36 +01:00
Joffrey THOMAS
2ad5b3df14 Observation update 2025-02-09 18:55:38 +01:00
Joffrey THOMAS
ae7ddf440d title change 2025-02-09 18:51:55 +01:00
Joffrey THOMAS
bd7945282d markdown formating error 2025-02-09 18:47:56 +01:00
Thomas Simonini
afeed66b0b Update _toctree.yml 2025-02-09 18:46:26 +01:00
Pedro Cuenca
5507efcee4 Unit 1 review 2025-02-09 18:40:41 +01:00
Joffrey THOMAS
1fa5de8b27 markdown format 2025-02-09 18:00:33 +01:00
Joffrey THOMAS
ad853e169f dummy agent section 2025-02-09 17:56:45 +01:00
Thomas Simonini
25c3f129e4 Merge branch 'Unit_1_Joffrey' of https://github.com/huggingface/agents-course into Unit_1_Joffrey 2025-02-09 17:18:32 +01:00
Thomas Simonini
98ff6c8d1e Update README.md 2025-02-09 17:16:40 +01:00
Joffrey THOMAS
347ca07d8a dummy_agent notebook 2025-02-09 17:03:23 +01:00
Thomas Simonini
bedd96a2df Update introduction.mdx 2025-02-09 17:00:50 +01:00
Thomas Simonini
9398619a4a Add live infos 2025-02-09 17:00:01 +01:00
Thomas Simonini
26ec1548e3 Update _toctree.yml 2025-02-09 16:11:48 +01:00
Joffrey THOMAS
126bf72f6e update non-working gif 2025-02-09 13:13:00 +01:00
Thomas Simonini
830e4d7ab5 Update get-your-certificate.mdx 2025-02-09 07:15:20 +01:00
Thomas Simonini
dc63cb5dfa Update conclusion.mdx
* Add certificate link
2025-02-09 07:12:18 +01:00
Thomas Simonini
76df15620b Update observations.mdx 2025-02-09 07:06:31 +01:00
Thomas Simonini
3978d60d90 Update actions.mdx 2025-02-09 06:58:08 +01:00
Thomas Simonini
befd91491d Moved Interface Design for future Unit
Moved Interface Design for future unit
2025-02-09 06:55:45 +01:00
Thomas Simonini
10c20dfefe Update thoughts.mdx 2025-02-09 06:47:50 +01:00
Thomas Simonini
258f3e7b09 Update thoughts.mdx 2025-02-09 06:44:25 +01:00
Thomas Simonini
b5a17c86b8 Update agent steps 2025-02-09 06:43:48 +01:00
Thomas Simonini
1d8fbcc336 Small updates 2025-02-09 06:39:31 +01:00
Thomas Simonini
b2e7f37d41 Update what-are-agents.mdx 2025-02-09 06:27:38 +01:00
Thomas Simonini
b3e7ef50f3 Reformulate definition of Agents 2025-02-09 06:24:24 +01:00
Thomas Simonini
13f2b04f5d Update actions.mdx 2025-02-08 18:32:19 +01:00
Thomas Simonini
df344291e4 Update 2025-02-08 18:18:26 +01:00
Thomas Simonini
073754f7e0 Update actions.mdx 2025-02-08 18:10:34 +01:00
Thomas Simonini
fc8cf60ab2 Update thoughts.mdx 2025-02-08 17:57:32 +01:00
Thomas Simonini
15c50bbeb8 Update agent-steps-and-structure.mdx 2025-02-08 17:45:27 +01:00
Thomas Simonini
ced2f13a0c Update 2025-02-08 17:35:02 +01:00
Thomas Simonini
4f2b460b15 Update what-are-agents.mdx 2025-02-08 17:00:38 +01:00
Thomas Simonini
304f54db0f Update unit1/introduction 2025-02-08 16:55:41 +01:00
Thomas Simonini
e0b685c44c Update actions.mdx 2025-02-08 16:33:15 +01:00
Thomas Simonini
cc6fee2545 Write Actions 2025-02-08 16:24:25 +01:00
Thomas Simonini
cd3ea4eb23 Update 2025-02-08 16:19:21 +01:00
Thomas Simonini
93a7ca5372 Update thoughts section 2025-02-08 16:08:27 +01:00
Thomas Simonini
a0dd184706 Complete revamp of Section on Agent Workflow 2025-02-08 15:59:07 +01:00
Thomas Simonini
4b1736a152 Update Unit 1 2025-02-07 17:41:06 +01:00
Thomas Simonini
89117a85c3 Update review Unit 1 2025-02-07 17:27:14 +01:00
Thomas Simonini
9ab79bbf68 Update Unit0 2025-02-07 16:55:57 +01:00
Thomas Simonini
ece80e6f1d Update units/en/unit1/README.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2025-02-07 16:26:49 +01:00
Pedro Cuenca
d5c00f38d8 Apply suggestions from code review
Co-authored-by: burtenshaw <ben.burtenshaw@gmail.com>
2025-02-07 14:00:21 +01:00
Jofthomas
179da01be5 Apply suggestions from code review
Co-authored-by: burtenshaw <ben.burtenshaw@gmail.com>
2025-02-07 13:54:10 +01:00
Pedro Cuenca
43888f62c5 Accepting some suggestions by Ben
Co-authored-by: burtenshaw <ben.burtenshaw@gmail.com>
2025-02-07 13:50:23 +01:00
Jofthomas
65ebf0e7a8 Request change
Co-authored-by: burtenshaw <ben.burtenshaw@gmail.com>
2025-02-07 13:36:44 +01:00
burtenshaw
315896743a tidy up toc 2025-02-07 12:25:23 +01:00
burtenshaw
46da7987ad Merge branch 'main' into Unit_1_Joffrey 2025-02-07 09:46:12 +01:00
burtenshaw
a280f2d6ca Merge pull request #35 from huggingface/restructure-filenames-for-hflearn
renames to conform
2025-02-07 09:40:36 +01:00
Thomas Simonini
9411de25c2 Update agent-steps-and-structure.mdx 2025-02-07 09:39:16 +01:00
burtenshaw
43072c8004 renames to conform 2025-02-07 09:39:07 +01:00
burtenshaw
e1525c7aa1 Merge pull request #32 from huggingface/ThomasSimonini/UpdateDiscordSection
Update Discord Section
2025-02-07 09:34:45 +01:00
Thomas Simonini
b345c8f2e5 Update agent-steps-and-structure.mdx 2025-02-07 09:32:22 +01:00
Thomas Simonini
7153250801 Update tools.mdx 2025-02-07 07:47:49 +01:00
Thomas Simonini
4a315c7c0a Update 2025-02-07 07:41:43 +01:00
Thomas Simonini
ebe23a3e48 Update after morning review 2025-02-07 07:36:00 +01:00
Joffrey THOMAS
4009489924 Observation modification 2025-02-07 02:15:42 +01:00
Joffrey THOMAS
106e5185e1 update cycle 2025-02-07 02:10:23 +01:00
Joffrey THOMAS
8417ce24ae cycle draft 2025-02-07 01:50:14 +01:00
Joffrey THOMAS
10db36e0d2 conversation modification 2025-02-07 00:03:34 +01:00
Thomas Simonini
3a22efba75 Update unit1/tools 2025-02-06 23:46:25 +01:00
Thomas Simonini
22c0e312a7 Update unit1/messages-and-special-tokens 2025-02-06 23:41:29 +01:00
Thomas Simonini
598990b51d Update unit1/what-are-llms 2025-02-06 23:27:47 +01:00
Thomas Simonini
dd4ad4c69b Update what-are-agents.mdx 2025-02-06 23:20:12 +01:00
Thomas Simonini
6628afc5a8 Update unit1/what-are-agents 2025-02-06 23:19:10 +01:00
Thomas Simonini
f686cec47e Update introduction.mdx 2025-02-06 23:01:16 +01:00
Thomas Simonini
81be0a5c0d Update unit1/introduction 2025-02-06 23:00:43 +01:00
Thomas Simonini
228f351f93 Update and clean unit1/introduction 2025-02-06 22:53:43 +01:00
Thomas Simonini
56dbbdfc02 Conclusion tool 2025-02-06 21:49:21 +01:00
Thomas Simonini
e6a4554aa8 Move interface design for tools to Actions part 2025-02-06 21:43:37 +01:00
Thomas Simonini
d56938bb50 Update tools 2025-02-06 21:41:33 +01:00
Thomas Simonini
c2c5e7b700 Update intro and special-tokens 2025-02-06 21:20:36 +01:00
Thomas Simonini
eb5961190b Update messages-and-special-tokens.mdx 2025-02-06 21:18:25 +01:00
Thomas Simonini
012eb47231 Update these sections to have a better learning flow 2025-02-06 20:45:13 +01:00
Thomas Simonini
4a8081b9bf Merge branch 'main' into Unit_1_Joffrey 2025-02-06 18:43:06 +01:00
Thomas Simonini
fde8ec401b Update upload_pr_documentation.yml 2025-02-06 18:42:43 +01:00
Thomas Simonini
94ceac049c Update introduction.mdx 2025-02-06 18:37:20 +01:00
Thomas Simonini
98a01688e3 Update upload_pr_documentation.yml
* I'll fine grain later, I'll rapidly need to check if it works
2025-02-06 18:36:54 +01:00
Thomas Simonini
0aa4b6a540 Update tools.mdx
To make documentation build
2025-02-06 18:33:22 +01:00
Joffrey THOMAS
b94225ff07 modify messages 2025-02-06 18:33:03 +01:00
Thomas Simonini
51e8722d5c Update tools.mdx 2025-02-06 18:31:10 +01:00
Thomas Simonini
2e8960140c Update tools.mdx 2025-02-06 18:29:12 +01:00
Thomas Simonini
7c74588c09 Update _toctree.yml 2025-02-06 18:26:25 +01:00
Thomas Simonini
9017534e5e Update upload_pr_documentation.yml 2025-02-06 18:21:43 +01:00
Thomas Simonini
7dd407e3cf Update 2025-02-06 17:42:58 +01:00
Thomas Simonini
5b17d1a19b Update messages-and-special-tokens.mdx 2025-02-06 17:21:12 +01:00
Thomas Simonini
d8c8918cb5 Update messages-and-special-tokens.mdx 2025-02-06 17:16:29 +01:00
Thomas Simonini
fe612214e9 Update messages-and-special-tokens.mdx 2025-02-06 17:10:11 +01:00
Thomas Simonini
47849d64c8 Update what-are-llms.mdx 2025-02-06 16:49:41 +01:00
Thomas Simonini
7128520115 Re-reading
* Update introduction
* Update quiz with new questions
* Update what are agents
* Update what are llms
2025-02-06 16:39:32 +01:00
Thomas Simonini
17e38f97e5 Merge branch 'Unit_1_Joffrey' of https://github.com/huggingface/agents-course into Unit_1_Joffrey 2025-02-06 16:09:53 +01:00
Thomas Simonini
f13790dffb Change unit 0 filenames 2025-02-06 16:09:43 +01:00
Thomas Simonini
0f9448f45c Merge branch 'main' into Unit_1_Joffrey 2025-02-06 16:08:05 +01:00
Thomas Simonini
fa6d5327a8 Hope it works 2025-02-06 16:06:56 +01:00
Thomas Simonini
2f77fce004 Try something 2025-02-06 16:06:17 +01:00
Thomas Simonini
7a5456d285 Update file names 2025-02-06 16:04:31 +01:00
Thomas Simonini
7ea85fe6c4 Merge pull request #34 from pcuenca/unit-1-review
Going through Unit 1
2025-02-06 16:02:33 +01:00
Thomas Simonini
dbfed5c4f1 Update units/en/unit1/what-are-agents.mdx 2025-02-06 15:49:23 +01:00
Thomas Simonini
51d52bc410 Merge branch 'Unit_1_Joffrey' into unit-1-review 2025-02-06 15:49:17 +01:00
Thomas Simonini
e26f0a89ee Rename 01_welcome_to_the_course.mdx to intro.mdx 2025-02-06 15:48:27 +01:00
Thomas Simonini
fc0fbd618d Update units/en/unit1/what-are-agents.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2025-02-06 15:40:38 +01:00
Thomas Simonini
711556acdd Creating quiz 2 based on Ben questions + add questions 2025-02-06 13:44:47 +01:00
Thomas Simonini
eb8cb5334e Update 2025-02-06 13:18:05 +01:00
Pedro Cuenca
fa4226fae0 Going through Unit 1 2025-02-06 13:11:47 +01:00
Thomas Simonini
fc4dd8d776 Update tools.mdx 2025-02-06 13:05:06 +01:00
burtenshaw
71b72eb94a add basic guide to discord section 2025-02-06 12:11:26 +01:00
burtenshaw
a72f1fd90b Update units/en/unit0/02_onboarding.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2025-02-06 11:57:01 +01:00
Joffrey THOMAS
834a8e6db8 add back decorator tool 2025-02-06 11:55:58 +01:00
Joffrey THOMAS
9c0211bab4 delete decorator 2025-02-06 11:54:43 +01:00
Joffrey THOMAS
ad1b9d966e delete + typo 2025-02-06 11:52:39 +01:00
Joffrey THOMAS
538d0082a3 typo 2025-02-06 11:50:20 +01:00
burtenshaw
318bd0c918 Merge branch 'main' into ThomasSimonini/UpdateDiscordSection 2025-02-06 11:47:54 +01:00
burtenshaw
fcc6688260 Merge pull request #28 from pcuenca/unit-0-nits
A few typos, nits and suggestions
2025-02-06 11:46:08 +01:00
Joffrey THOMAS
bb9321084f images in tools 2025-02-06 11:46:07 +01:00
Joffrey THOMAS
eb4993a3e1 tools section 2025-02-06 10:54:13 +01:00
Thomas Simonini
76296117b1 Update messages-and-special-tokens.mdx 2025-02-06 10:31:25 +01:00
Thomas Simonini
8422bc541f Update llm and message 2025-02-06 10:09:26 +01:00
Pedro Cuenca
f0c029dc5c Merge branch 'main' into unit-0-nits 2025-02-06 09:58:04 +01:00
Thomas Simonini
6833e925eb Update what-are-llms.mdx
* add transformer
2025-02-06 09:29:34 +01:00
Thomas Simonini
bf28639c1d Update 2025-02-06 09:22:50 +01:00
Thomas Simonini
02795537dd Brocolis Eiffel Tower 2025-02-06 09:02:24 +01:00
Thomas Simonini
3ad3faa192 Update introduction.mdx 2025-02-06 08:43:09 +01:00
Thomas Simonini
766183c840 Update introduction
* Mostly rephrasing
2025-02-06 08:09:49 +01:00
Thomas Simonini
7b8aec0abe Update certification 2025-02-06 06:51:37 +01:00
Thomas Simonini
7327289da4 Update README.md 2025-02-05 23:39:56 +01:00
Thomas Simonini
be58a75a0d Update README.md
* Change the links to the course directly
2025-02-05 23:38:48 +01:00
Thomas Simonini
5d5cf68a1f Update file names 2025-02-05 23:28:41 +01:00
Thomas Simonini
fd5d7ab789 Create LICENSE 2025-02-05 23:05:37 +01:00
Thomas Simonini
334262b0ae Update 02_onboarding.mdx
* Update Discord Channels
2025-02-05 18:55:07 +01:00
Thomas Simonini
de4f38302d Update 03_discord101.mdx
*Add channels
2025-02-05 18:53:01 +01:00
Pedro Cuenca
84ff2aa921 Fixes by Sergio.
Co-authored-by: sergiopaniego <sergiopaniegoblanco@gmail.com>
2025-02-05 18:34:41 +01:00
Joffrey THOMAS
4999cf41eb 4_tools first draft 2025-02-05 18:03:59 +01:00
Thomas Simonini
c073e2b4bb Update 8_dummy_agent_library.mdx 2025-02-05 17:57:48 +01:00
Thomas Simonini
f2de4cc1b1 Update images 2025-02-05 17:56:25 +01:00
Thomas Simonini
a82894256d Added illustrations whiteboard 2025-02-05 17:52:15 +01:00
Thomas Simonini
43530c982f Merge pull request #27 from huggingface/ThomasSimonini/UpdateUnit1Part1
[NOT READY TO BE REVIEWED] Unit 1, Part 1, big updates
2025-02-05 16:40:11 +01:00
Thomas Simonini
5b82406db6 Merge pull request #25 from huggingface/ThomasSimonini/Unit1IntroCcl
Unit 1: Add Introduction and Conclusion
2025-02-05 16:39:50 +01:00
Thomas Simonini
1c50ad7b91 Move _toctree 2025-02-05 16:34:25 +01:00
Joffrey THOMAS
c1c3c2abe8 reformat 2025-02-05 16:32:04 +01:00
Thomas Simonini
7445793415 Merge pull request #20 from huggingface/ThomasSimonini/CI_Building
Create CI for Doc Builder [NOT READY TO BE MERGED]
2025-02-05 16:22:30 +01:00
Pedro Cuenca
733955d1b1 A couple more nits 2025-02-05 16:12:48 +01:00
Joffrey THOMAS
06a14f2e49 reoganize unit 2025-02-05 16:03:42 +01:00
Thomas Simonini
ed581f8d94 Update Section 2 What are LLMs 2025-02-05 15:57:12 +01:00
Pedro Cuenca
1e699ad82a A few typos, nits and suggestions. 2025-02-05 15:55:47 +01:00
Joffrey THOMAS
fe11a244b4 reformulate part 3 2025-02-05 15:48:35 +01:00
Thomas Simonini
e82059fd75 Move quiz 2025-02-05 15:16:10 +01:00
Thomas Simonini
8023c3e72c Update units/en/unit1/introduction.mdx
Co-authored-by: burtenshaw <ben.burtenshaw@gmail.com>
2025-02-05 14:35:10 +01:00
Thomas Simonini
43639b2b35 Update units/en/unit1/conclusion.mdx
Co-authored-by: burtenshaw <ben.burtenshaw@gmail.com>
2025-02-05 14:35:02 +01:00
Thomas Simonini
617efe98a2 Add big picture "What is an agent" 2025-02-05 13:52:24 +01:00
Thomas Simonini
e004bc394d Move the files to the correct folder 2025-02-05 11:23:12 +01:00
Thomas Simonini
08444f2ac9 Merge pull request #26 from huggingface/unit1_joffrey
Modification of Unit 1.1 -> 1.3
2025-02-05 11:14:36 +01:00
Joffrey THOMAS
7ee492341e convert to mdx 3 first md 2025-02-05 11:07:13 +01:00
Joffrey THOMAS
f9475b3510 md table fix 2025-02-05 11:03:14 +01:00
Joffrey THOMAS
729ca94b12 corect EOS table md 2025-02-05 10:58:44 +01:00
Joffrey THOMAS
ceb22a38c8 add links 2025-02-05 10:53:26 +01:00
Thomas Simonini
c6c7a1236a Add image certificate 2025-02-05 10:34:30 +01:00
Joffrey THOMAS
bdd59d4bf7 fix code block and test gif 2025-02-05 10:22:09 +01:00
Thomas Simonini
deabb80a0b Add Introduction and Conclusion 2025-02-05 10:20:36 +01:00
Joffrey THOMAS
1be13b213d embbeding space test 2025-02-05 10:00:01 +01:00
Thomas Simonini
a82dfc258d Merge pull request #24 from huggingface/ThomasSimonini/Reorganize
Replace the files in correct Folders
2025-02-05 09:44:16 +01:00
Thomas Simonini
c01f9b11c2 Move the files to the correct folder 2025-02-05 09:24:22 +01:00
Thomas Simonini
9782c8c04c Merge pull request #19 from huggingface/ThomasSimonini/UpdateUnit0
Update Unit 0 by adding more CTA to star, follow and share the course
2025-02-05 09:22:36 +01:00
Thomas Simonini
aa75b78806 Merge pull request #21 from huggingface/ThomasSimonini/IssueQuestion
Update issue templates (I have a question issue template)
2025-02-05 09:21:14 +01:00
Thomas Simonini
52612331b6 Merge pull request #23 from huggingface/ThomasSimonini/ImproveCourseIssueTemplate
Update issue templates
2025-02-05 09:21:00 +01:00
Thomas Simonini
afc4d65f97 Update issue templates 2025-02-04 17:34:11 +01:00
Joffrey THOMAS
ef0742ee00 chapter 1,2 & 3 2025-02-04 16:49:29 +01:00
Thomas Simonini
e93c75864a Update issue templates 2025-02-04 16:36:44 +01:00
Thomas Simonini
ffdd73d7a4 Merge pull request #18 from huggingface/ThomasSimonini/NextUnits
Add Next Units publishing calendar
2025-02-04 16:20:45 +01:00
Thomas Simonini
1e9ebdc3fd Create upload_pr_documentation.yml 2025-02-04 15:51:29 +01:00
Thomas Simonini
0128d3701a Create build_pr_documentation.yml 2025-02-04 15:48:09 +01:00
Thomas Simonini
5b39e09d14 Add authors social links 2025-02-04 13:46:59 +01:00
Thomas Simonini
418e6a6fcb Update README.md
* Add gif star repo
2025-02-04 13:39:48 +01:00
Thomas Simonini
0455e8dc66 Update 02_onboarding.mdx
* Added CTA star and share the course
2025-02-04 13:38:43 +01:00
Thomas Simonini
f1cb921509 Delete units/unit0/.DS_Store 2025-02-04 13:19:05 +01:00
Thomas Simonini
f256cad93d Add Next Units publishing calendar 2025-02-04 11:16:50 +01:00
Jofthomas
c153db7028 Merge pull request #17 from huggingface/ThomasSimonini/UpdatingUnit0
Update Unit 0
2025-02-04 10:28:47 +01:00
Thomas Simonini
5614df994f Update images links 2025-02-04 10:27:35 +01:00
Thomas Simonini
bca1b49fa7 Remove space 2025-02-04 10:25:09 +01:00
Thomas Simonini
4950b8bcba Update Unit 0 2025-02-04 10:20:52 +01:00
burtenshaw
a70b84c6d7 Merge pull request #16 from huggingface/move-quiz-to-directory
Merge branch 'section/unit1_ben'
2025-02-03 19:59:55 +01:00
burtenshaw
e94283879b Merge pull request #15 from huggingface/remove-uv-setup-from-main
remove uv files from main
2025-02-03 19:59:29 +01:00
burtenshaw
94656e966b Merge pull request #14 from huggingface/rename-directories-to-hf-learn-style
update readme
2025-02-03 19:59:13 +01:00
burtenshaw
0332b912a5 Merge branch 'section/unit1_ben' 2025-02-03 19:58:06 +01:00
burtenshaw
0f9d62041d remove uv files from main 2025-02-03 19:53:08 +01:00
burtenshaw
b0971bea3c update readme 2025-02-03 19:46:03 +01:00
burtenshaw
841b2d9a88 rename directories 2025-02-03 19:43:08 +01:00
burtenshaw
e2eafefda5 Merge pull request #11 from huggingface/ThomasSimonini/IssuesTemplates
Create issue templates
2025-02-03 19:36:08 +01:00
burtenshaw
951f94c9b9 Merge pull request #12 from huggingface/ThomasSimonini/UpdateReadme
Update Readme.md
2025-02-03 19:35:10 +01:00
Thomas Simonini
433cc3f162 Merge pull request #13 from huggingface/merge-changes-from-thomas-branch
Merge branch 'ThomasSimonini/Unit0'
2025-02-03 19:34:17 +01:00
burtenshaw
83ffbfbe9f Merge branch 'ThomasSimonini/Unit0' 2025-02-03 19:29:57 +01:00
Thomas Simonini
20d79c6904 Update README.md
* Added bibtext
* Some minor updates and CTA
2025-02-03 19:28:29 +01:00
Thomas Simonini
d1776bb541 Create issue templates 2025-02-03 19:06:37 +01:00
burtenshaw
4f9d3b471e Merge pull request #10 from sergiopaniego/section-1-nits
Small `Unit 1` nits
2025-02-03 16:16:42 +01:00
sergiopaniego
955ceb49c2 Removed comma 2025-02-03 11:59:27 +01:00
sergiopaniego
7e89b57c6f Tools unit nits 2025-02-03 11:39:34 +01:00
sergiopaniego
0ee1d81b80 Small nits Section 1 2025-02-03 11:17:51 +01:00
burtenshaw
d96d27fb5d Merge pull request #4 from huggingface/section/unit1_ben
[section] Unit 1 : LLMs, chat templates, tokenization, simple use case, agents basics
2025-02-03 09:50:47 +01:00
burtenshaw
8b7c38288c Merge pull request #3 from huggingface/unit/1_fundamentals
[UNIT] structure for unit on introduction to agents
2025-01-29 09:46:50 +01:00
burtenshaw
724d5fcfb6 Merge pull request #8 from huggingface/section/unit_1-thought-action-observation
Section/unit_1-thought-action-observation
2025-01-29 09:46:30 +01:00
burtenshaw
590718093d add section on thought action observation 2025-01-28 11:39:51 +01:00
Thomas Simonini
054dadcfae Merge pull request #5 from huggingface/ThomasSimonini/Unit0
Update Unit 0
2025-01-28 09:47:22 +01:00
burtenshaw
289965ec8b add starting prose on tool usage 2025-01-27 11:48:05 +01:00
burtenshaw
85acb29a0a page on defining an agent 2025-01-27 10:43:48 +01:00
burtenshaw
41bb284fe3 add a quiz for section 1 2025-01-24 09:54:01 +01:00
burtenshaw
d2d5bc4f77 add dummy agent library section 2025-01-23 19:52:09 +01:00
Thomas Simonini
60d45a69bd Update image links 2025-01-23 15:59:58 +01:00
Thomas Simonini
2083f01333 Some NiT updates 2025-01-23 15:42:24 +01:00
Thomas Simonini
af27492797 Update Unit 0
* Changed the content
* Changed the structure (to be shorter)
* Added todos
* Created Toctree
2025-01-23 15:28:06 +01:00
burtenshaw
83a026828c draft of simple use case section 2025-01-23 12:44:36 +01:00
burtenshaw
f27eb913f5 add section on tokenization and chat templates 2025-01-23 12:37:46 +01:00
burtenshaw
6339e18772 add section on explaing llms 2025-01-23 12:37:25 +01:00
burtenshaw
144a15af4e Merge branch 'unit/onboarding_unit' 2025-01-22 16:16:44 +01:00
burtenshaw
45dab8fe46 Merge pull request #1 from huggingface/feat/basic-structure
Basic structure for the course
2025-01-22 11:44:27 +01:00
burtenshaw
806a903c9b basic structure for unit 1 with todos 2025-01-22 11:44:02 +01:00
52 changed files with 4932 additions and 329 deletions

View File

@@ -0,0 +1,20 @@
---
name: I have a bug with a hands-on
about: You have encountered a bug during one of the hands-on
title: "[HANDS-ON BUG]"
labels: hands-on-bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Please provide any informations and a **link** to your hands-on so that we can investigate.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.

View File

@@ -0,0 +1,12 @@
---
name: I have a question
about: You have a question about a section of the course
title: "[QUESTION]"
labels: question
assignees: ''
---
First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer you can ask here, please **be specific**.

View File

@@ -0,0 +1,13 @@
---
name: I want to improve the course or write a new section
about: You found a typo, an error or you want to improve a part of the course or write
a full section/unit
title: "[UPDATE]"
labels: documentation
assignees: ''
---
1. If you want to add a full section or a new unit, **please describe precisely what you want to add before starting to write it** so that we can review the idea, validate it or not, and guide you through the writing process.
2. If there's a typo, you can directly open a PR.

View File

@@ -0,0 +1,19 @@
name: Build documentation
on:
push:
branches:
- main
jobs:
build:
uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
with:
commit_sha: ${{ github.sha }}
package: agents-course
package_name: agents-course
path_to_docs: agents-course/units/
additional_args: --not_python_module
languages: en
secrets:
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}

View File

@@ -0,0 +1,20 @@
name: Build PR Documentation
on:
pull_request:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
build:
uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
with:
commit_sha: ${{ github.event.pull_request.head.sha }}
pr_number: ${{ github.event.number }}
package: agents-course
package_name: agents-course
path_to_docs: agents-course/units/
additional_args: --not_python_module
languages: en

View File

@@ -0,0 +1,24 @@
name: Upload PR Documentation
on:
workflow_run:
workflows: ["Build PR Documentation"]
types:
- completed
permissions:
actions: write
contents: write
deployments: write
pull-requests: write
jobs:
build:
uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
with:
package_name: agents-course
hub_base_path: https://moon-ci-docs.huggingface.co
secrets:
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}

201
LICENSE Normal file
View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,18 +1,25 @@
# The Hugging Face Agents Course
# [The Hugging Face Agents Course](https://hf.co/learn/agents-course)
If you like the course, **don't hesitate to ⭐ star this repository**. This helps us to **make the course more visible 🤗**.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/please_star.gif" alt="Star the repo" />
## Content
The course is divided into 5 units. That will take you from **the basics of agents to a final assignment with a benchmark**.
Don't forget to ⭐ the repository (it helps other to discover the course) 🤗
Sign up here (it free) 👉 http://eepurl.com/i7MQOU
Sign up here (it's free) 👉 https://bit.ly/hf-learn-agents
You can access the course here 👉 https://hf.co/learn/agents-course
| Unit | Topic | Description |
|------|--------------------------------|-----------------------------------------------------------------------------|
| 0 | [0_onboarding](units/0_onboarding) | Welcome, guidelines, necessary tools, and course overview. |
| 1 | [1_introduction_to_concepts](units/1_introduction_to_concepts) | Definition of agents, LLMs, model family tree, and special tokens. |
| 2 | [2_frameworks](units/2_frameworks) | Overview of Smolagents, LangChain, LangGraph, and LlamaIndex. |
| 3 | [3_use_cases](units/3_use_cases) | SQL, code, retrieval, and on-device agents using various frameworks. |
| 4 | [4_final_assignment_with_benchmark](units/4_final_assignment_with_benchmark) | Automated evaluation of agents and leaderboard with student results. |
| 0 | [Welcome to the Course](https://huggingface.co/learn/agents-course/en/unit0/introduction) | Welcome, guidelines, necessary tools, and course overview. |
| 1 | [Introduction to Agents](https://huggingface.co/learn/agents-course/en/unit1/introduction) | Definition of agents, LLMs, model family tree, and special tokens. |
| 2 | [2_frameworks](units/en/unit2/README.md) | Overview of smolagents, LangGraph, and LlamaIndex. |
| 3 | [3_use_cases](units/en/unit3/README.md) | SQL, code, retrieval, and on-device agents using various frameworks. |
| 4 | [4_final_assignment_with_benchmark](units/en/unit4/README.md) | Automated evaluation of agents and leaderboard with student results. |
## Prerequisites
@@ -29,18 +36,33 @@ If you find a small typo or grammar mistake, please fix it yourself and submit a
### New unit
If you want to add a new unit, please create an issue in the repository, describe the unit, and why it should be added. We will discuss it and if it's a good addition, we can collaborate on it.
If you want to add a new unit, **please create an issue in the repository, describe the unit, and why it should be added**. We will discuss it and if it's a good addition, we can collaborate on it.
### Work on existing units
We are actively working on the units and If you want to join us, we will need to find a place in the workflow. Here's an overview of where we are open to collaboration:
We are actively working on the units and if you want to join us, we will need to find a place in the workflow. Here's an overview of where we are open to collaboration:
| Unit | Status | Contributions |
|------|--------------|------------------------------------------------------------------------|
| 0 | ✅ Complete | Bug fixes and improvements only |
| 1 | 🚧 In Progress | Work is underway, no need for help with content |
| 1 | ✅ Complete | Bug fixes and improvements only |
| 2 | 🚧 In Progress | If you're a contributor to a framework, we're open to contributions and reviews |
| 3 | 🗓️ Planned | If you're experienced with agents, we're open to help with use cases |
| 4 | 🚧 In Progress | Work is underway, no need for help with integration |
If in doubt, join the discussion in the [Discord](https://discord.gg/GC7zFgvN).
If in doubt, join the discussion in the [Discord](https://discord.gg/GC7zFgvN).
## Citing the project
To cite this repository in publications:
```bibtex
@misc{agents-course,
author = {Burtenshaw, Ben and Thomas, Joffrey and Simonini, Thomas},
title = {The Hugging Face Agents Course},
year = {2025},
howpublished = {\url{https://github.com/huggingface/agents-course}},
note = {GitHub repository},
}
```

View File

@@ -0,0 +1,693 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fr8fVR1J_SdU",
"metadata": {
"id": "fr8fVR1J_SdU"
},
"source": [
"# Dummy Agent Library\n",
"\n",
"In this simple example, **we're going to code an Agent from scratch**.\n",
"\n",
"This notebook is part of the <a href=\"https://www.hf.co/learn/agents-course\">Hugging Face Agents Course</a>, a free Course from beginner to expert, where you learn to build Agents.\n",
"\n",
"<img src=\"https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/share.png\" alt=\"Agent Course\"/>"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ec657731-ac7a-41dd-a0bb-cc661d00d714",
"metadata": {
"id": "ec657731-ac7a-41dd-a0bb-cc661d00d714",
"tags": []
},
"outputs": [],
"source": [
"!pip install -q huggingface_hub"
]
},
{
"cell_type": "markdown",
"id": "8WOxyzcmAEfI",
"metadata": {
"id": "8WOxyzcmAEfI"
},
"source": [
"## Serverless API\n",
"\n",
"In the Hugging Face ecosystem, there is a convenient feature called Serverless API that allows you to easily run inference on many models. There's no installation or deployment required.\n",
"\n",
"To run this notebook, **you need a Hugging Face token** that you can get from https://hf.co/settings/tokens. If you are running this notebook on Google Colab, you can set it up in the \"settings\" tab under \"secrets\". Make sure to call it \"HF_TOKEN\".\n",
"\n",
"You also need to request access to [the Meta Llama models](meta-llama/Llama-3.2-3B-Instruct), if you haven't done it before. Approval usually takes up to an hour."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5af6ec14-bb7d-49a4-b911-0cf0ec084df5",
"metadata": {
"id": "5af6ec14-bb7d-49a4-b911-0cf0ec084df5",
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"from huggingface_hub import InferenceClient\n",
"\n",
"# os.environ[\"HF_TOKEN\"]=\"hf_xxxxxxxxxxx\"\n",
"\n",
"client = InferenceClient(\"meta-llama/Llama-3.2-3B-Instruct\")\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "c918666c-48ed-4d6d-ab91-c6ec3892d858",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "c918666c-48ed-4d6d-ab91-c6ec3892d858",
"outputId": "7282095c-c5e7-45e0-be81-8648c954a2f7",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris.\n"
]
}
],
"source": [
"# As seen in the LLM section, if we just do decoding, **the model will only stop when it predicts an EOS token**, \n",
"# and this does not happen here because this is a conversational (chat) model and we didn't apply the chat template it expects.\n",
"output = client.text_generation(\n",
" \"The capital of france is\",\n",
" max_new_tokens=100,\n",
")\n",
"\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"id": "w2C4arhyKAEk",
"metadata": {
"id": "w2C4arhyKAEk"
},
"source": [
"As seen in the LLM section, if we just do decoding, **the model will only stop when it predicts an EOS token**, and this does not happen here because this is a conversational (chat) model and **we didn't apply the chat template it expects**."
]
},
{
"cell_type": "markdown",
"id": "T9-6h-eVAWrR",
"metadata": {
"id": "T9-6h-eVAWrR"
},
"source": [
"If we now add the special tokens related to the <a href=\"https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct\">Llama-3.2-3B-Instruct model</a> that we're using, the behavior changes and it now produces the expected EOS."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "ec0b95d7-8f6a-45fc-b477-c2f95153a001",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "ec0b95d7-8f6a-45fc-b477-c2f95153a001",
"outputId": "b56e3257-ff89-4cf7-de60-c2e65f78567b",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"...Paris!\n"
]
}
],
"source": [
"# If we now add the special tokens related to Llama3.2 model, the behaviour changes and is now the expected oen.\n",
"prompt=\"\"\"<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n",
"\n",
"The capital of france is<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n",
"\n",
"\"\"\"\n",
"output = client.text_generation(\n",
" prompt,\n",
" max_new_tokens=100,\n",
")\n",
"\n",
"print(output)\n"
]
},
{
"cell_type": "markdown",
"id": "1uKapsiZAbH5",
"metadata": {
"id": "1uKapsiZAbH5"
},
"source": [
"Using the \"chat\" method is a much more convenient and reliable way to apply chat templates:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "eb536eea-f316-4902-aabd-55710e6c4347",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "eb536eea-f316-4902-aabd-55710e6c4347",
"outputId": "6bf13836-36a8-4e21-f5cd-5d79ad2c92d9",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"...Paris.\n"
]
}
],
"source": [
"output = client.chat.completions.create(\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": \"The capital of france is\"},\n",
" ],\n",
" stream=False,\n",
" max_tokens=1024,\n",
")\n",
"\n",
"print(output.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "jtQHk9HHAkb8",
"metadata": {
"id": "jtQHk9HHAkb8"
},
"source": [
"The chat method is the RECOMMENDED method to use in order to ensure a **smooth transition between models but since this notebook is only educational**, we will keep using the \"text_generation\" method to understand the details.\n"
]
},
{
"cell_type": "markdown",
"id": "wQ5FqBJuBUZp",
"metadata": {
"id": "wQ5FqBJuBUZp"
},
"source": [
"## Dummy Agent\n",
"\n",
"In the previous sections, we saw that the **core of an agent library is to append information in the system prompt**.\n",
"\n",
"This system prompt is a bit more complex than the one we saw earlier, but it already contains:\n",
"\n",
"1. **Information about the tools**\n",
"2. **Cycle instructions** (Thought → Action → Observation)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2c66e9cb-2c14-47d4-a7a1-da826b7fc62d",
"metadata": {
"id": "2c66e9cb-2c14-47d4-a7a1-da826b7fc62d",
"tags": []
},
"outputs": [],
"source": [
"# This system prompt is a bit more complex and actually contains the function description already appended.\n",
"# Here we suppose that the textual description of the tools have already been appended\n",
"SYSTEM_PROMPT = \"\"\"Answer the following questions as best you can. You have access to the following tools:\n",
"\n",
"get_weather: Get the current weather in a given location\n",
"\n",
"The way you use the tools is by specifying a json blob.\n",
"Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n",
"\n",
"The only values that should be in the \"action\" field are:\n",
"get_weather: Get the current weather in a given location, args: {\"location\": {\"type\": \"string\"}}\n",
"example use :\n",
"```\n",
"{{\n",
" \"action\": \"get_weather\",\n",
" \"action_input\": {\"location\": \"New York\"}\n",
"}}\n",
"\n",
"ALWAYS use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about one action to take. Only one action at a time in this format:\n",
"Action:\n",
"```\n",
"$JSON_BLOB\n",
"```\n",
"Observation: the result of the action. This Observation is unique, complete, and the source of truth.\n",
"... (this Thought/Action/Observation can repeat N times, you should take several steps when needed. The $JSON_BLOB must be formatted as markdown and only use a SINGLE action at a time.)\n",
"\n",
"You must always end your output with the following format:\n",
"\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Now begin! Reminder to ALWAYS use the exact characters `Final Answer:` when you provide a definitive answer. \"\"\"\n"
]
},
{
"cell_type": "markdown",
"id": "UoanEUqQAxzE",
"metadata": {
"id": "UoanEUqQAxzE"
},
"source": [
"Since we are running the \"text_generation\" method, we need to add the right special tokens."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "78edbd65-d19b-42ef-8248-e01218470d28",
"metadata": {
"id": "78edbd65-d19b-42ef-8248-e01218470d28",
"tags": []
},
"outputs": [],
"source": [
"# Since we are running the \"text_generation\", we need to add the right special tokens.\n",
"prompt=f\"\"\"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n",
"{SYSTEM_PROMPT}\n",
"<|eot_id|><|start_header_id|>user<|end_header_id|>\n",
"What's the weather in London ?\n",
"<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n",
"\"\"\""
]
},
{
"cell_type": "markdown",
"id": "L-HaWxinA0XX",
"metadata": {
"id": "L-HaWxinA0XX"
},
"source": [
"This is equivalent to the following code that happens inside the chat method :\n",
"```\n",
"messages=[\n",
" {\"role\": \"system\", \"content\": SYSTEM_PROMPT},\n",
" {\"role\": \"user\", \"content\": \"What's the weather in London ?\"},\n",
"]\n",
"from transformers import AutoTokenizer\n",
"tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-3.2-3B-Instruct\")\n",
"\n",
"tokenizer.apply_chat_template(messages, tokenize=False,add_generation_prompt=True)\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "4jCyx4HZCIA8",
"metadata": {
"id": "4jCyx4HZCIA8"
},
"source": [
"The prompt is now:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "Vc4YEtqBCJDK",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Vc4YEtqBCJDK",
"outputId": "b9be74a7-be22-4826-d40a-bc5da33ce41c"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n",
"Answer the following questions as best you can. You have access to the following tools:\n",
"\n",
"get_weather: Get the current weather in a given location\n",
"\n",
"The way you use the tools is by specifying a json blob.\n",
"Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n",
"\n",
"The only values that should be in the \"action\" field are:\n",
"get_weather: Get the current weather in a given location, args: {\"location\": {\"type\": \"string\"}}\n",
"example use :\n",
"```\n",
"{{\n",
" \"action\": \"get_weather\",\n",
" \"action_input\": {\"location\": \"New York\"}\n",
"}}\n",
"\n",
"ALWAYS use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about one action to take. Only one action at a time in this format:\n",
"Action:\n",
"```\n",
"$JSON_BLOB\n",
"```\n",
"Observation: the result of the action. This Observation is unique, complete, and the source of truth.\n",
"... (this Thought/Action/Observation can repeat N times, you should take several steps when needed. The $JSON_BLOB must be formatted as markdown and only use a SINGLE action at a time.)\n",
"\n",
"You must always end your output with the following format:\n",
"\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Now begin! Reminder to ALWAYS use the exact characters `Final Answer:` when you provide a definitive answer. \n",
"<|eot_id|><|start_header_id|>user<|end_header_id|>\n",
"What's the weather in London ?\n",
"<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n",
"\n"
]
}
],
"source": [
"print(prompt)"
]
},
{
"cell_type": "markdown",
"id": "S6fosEhBCObv",
"metadata": {
"id": "S6fosEhBCObv"
},
"source": [
"Lets decode!"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "e2b268d0-18bd-4877-bbed-a6b31ed71bc7",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "e2b268d0-18bd-4877-bbed-a6b31ed71bc7",
"outputId": "6933b02c-7895-4205-fec6-ca5122b54add",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Question: What's the weather in London?\n",
"\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"get_weather\",\n",
" \"action_input\": {\"location\": \"London\"}\n",
"}\n",
"```\n",
"Observation: The current weather in London is mostly cloudy with a high of 12°C and a low of 8°C, and there is a 60% chance of precipitation.\n",
"\n",
"Thought: I now know the final answer\n"
]
}
],
"source": [
"# Do you see the problem?\n",
"output = client.text_generation(\n",
" prompt,\n",
" max_new_tokens=200,\n",
")\n",
"\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"id": "9NbUFRDECQ9N",
"metadata": {
"id": "9NbUFRDECQ9N"
},
"source": [
"Do you see the problem? \n",
"\n",
"The **answer was hallucinated by the model**. We need to stop to actually execute the function!"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "9fc783f2-66ac-42cf-8a57-51788f81d436",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "9fc783f2-66ac-42cf-8a57-51788f81d436",
"outputId": "52c62786-b5b1-42d1-bfd2-3f8e3a02dd6b",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Question: What's the weather in London?\n",
"\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"get_weather\",\n",
" \"action_input\": {\"location\": \"London\"}\n",
"}\n",
"```\n",
"Observation:\n"
]
}
],
"source": [
"# The answer was hallucinated by the model. We need to stop to actually execute the function!\n",
"output = client.text_generation(\n",
" prompt,\n",
" max_new_tokens=200,\n",
" stop=[\"Observation:\"] # Let's stop before any actual function is called\n",
")\n",
"\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"id": "yBKVfMIaK_R1",
"metadata": {
"id": "yBKVfMIaK_R1"
},
"source": [
"Much Better!\n",
"\n",
"Let's now create a **dummy get weather function**. In real situation you could call and API."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "4756ab9e-e319-4ba1-8281-c7170aca199c",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"id": "4756ab9e-e319-4ba1-8281-c7170aca199c",
"outputId": "c3d05710-3382-4a18-c585-9665a105f37c",
"tags": []
},
"outputs": [
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'the weather in London is sunny with low temperatures. \\n'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Dummy function\n",
"def get_weather(location):\n",
" return f\"the weather in {location} is sunny with low temperatures. \\n\"\n",
"\n",
"get_weather('London')"
]
},
{
"cell_type": "markdown",
"id": "IHL3bqhYLGQ6",
"metadata": {
"id": "IHL3bqhYLGQ6"
},
"source": [
"Let's concatenate the base prompt, the completion until function execution and the result of the function as an Observation and resume the generation."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "f07196e8-4ff1-41f4-8b2f-99dd550c6b27",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "f07196e8-4ff1-41f4-8b2f-99dd550c6b27",
"outputId": "044beac4-90ee-4104-f44b-66dd8146ff14",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n",
"Answer the following questions as best you can. You have access to the following tools:\n",
"\n",
"get_weather: Get the current weather in a given location\n",
"\n",
"The way you use the tools is by specifying a json blob.\n",
"Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n",
"\n",
"The only values that should be in the \"action\" field are:\n",
"get_weather: Get the current weather in a given location, args: {\"location\": {\"type\": \"string\"}}\n",
"example use :\n",
"```\n",
"{{\n",
" \"action\": \"get_weather\",\n",
" \"action_input\": {\"location\": \"New York\"}\n",
"}}\n",
"\n",
"ALWAYS use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about one action to take. Only one action at a time in this format:\n",
"Action:\n",
"```\n",
"$JSON_BLOB\n",
"```\n",
"Observation: the result of the action. This Observation is unique, complete, and the source of truth.\n",
"... (this Thought/Action/Observation can repeat N times, you should take several steps when needed. The $JSON_BLOB must be formatted as markdown and only use a SINGLE action at a time.)\n",
"\n",
"You must always end your output with the following format:\n",
"\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Now begin! Reminder to ALWAYS use the exact characters `Final Answer:` when you provide a definitive answer. \n",
"<|eot_id|><|start_header_id|>user<|end_header_id|>\n",
"What's the weither in London ?\n",
"<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n",
"Question: What's the weather in London?\n",
"\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"get_weather\",\n",
" \"action_input\": {\"location\": \"London\"}\n",
"}\n",
"```\n",
"Observation:the weather in London is sunny with low temperatures. \n",
"\n"
]
}
],
"source": [
"# Let's concatenate the base prompt, the completion until function execution and the result of the function as an Observation\n",
"new_prompt=prompt+output+get_weather('London')\n",
"print(new_prompt)"
]
},
{
"cell_type": "markdown",
"id": "Cc7Jb8o3Lc_4",
"metadata": {
"id": "Cc7Jb8o3Lc_4"
},
"source": [
"Here is the new prompt:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "0d5c6697-24ee-426c-acd4-614fba95cf1f",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "0d5c6697-24ee-426c-acd4-614fba95cf1f",
"outputId": "f2808dad-86a4-4244-8ac9-4d44ca1e4c08",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Final Answer: The weather in London is sunny with low temperatures.\n"
]
}
],
"source": [
"final_output = client.text_generation(\n",
" new_prompt,\n",
" max_new_tokens=200,\n",
")\n",
"\n",
"print(final_output)"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

1
quiz/.python-version Normal file
View File

@@ -0,0 +1 @@
3.11

1
quiz/README.md Normal file
View File

@@ -0,0 +1 @@
# Agent Course quiz scripts

10
quiz/data/unit_1.json Normal file
View File

@@ -0,0 +1,10 @@
[
{
"question": "Which of the following best describes a Large Language Model (LLM)?",
"answer_a": "A model specializing in language recognition",
"answer_b": "A massive neural network that understands and generates human language",
"answer_c": "A model exclusively used for language data tasks like summarization or classification",
"answer_d": "A rule-based chatbot used for conversations",
"correct_answer": "B"
}
]

33
quiz/push_questions.py Normal file
View File

@@ -0,0 +1,33 @@
import json
from pathlib import Path
from datasets import Dataset
from huggingface_hub import HfApi
ORG_NAME = "agents-course"
def main():
"""Push quiz questions to the Hugging Face Hub"""
for file in Path("data").glob("*.json"):
print(f"Processing {file}")
with open(file, "r") as f:
quiz_data = json.load(f)
repo_id = f"{ORG_NAME}/{file.stem}_quiz"
dataset = Dataset.from_list(quiz_data)
print(f"Pushing {repo_id} to the Hugging Face Hub")
dataset.push_to_hub(
repo_id,
private=True,
commit_message=f"Update quiz questions for {file.stem}",
)
if __name__ == "__main__":
main()

12
quiz/pyproject.toml Normal file
View File

@@ -0,0 +1,12 @@
[project]
name = "agents-course"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"datasets>=3.2.0",
"huggingface-hub>=0.27.1",
"ipykernel>=6.29.5",
"requests>=2.32.3",
]

1252
quiz/uv.lock generated Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -1,134 +0,0 @@
# Welcome to the 🤗 AI Agents Course [[introduction]]
<!-- TODO: Add thumbnail to dataset -->
<img src="https://huggingface.co/datasets/huggingface-ai-agents-course/course-images/resolve/main/en/unit0/thumbnail.jpg" alt="AI Agents Course thumbnail" width="100%"/>
Welcome to the most fascinating topic in Artificial Intelligence: **AI Agents**.
This course will **teach you about AI Agents from beginner to expert**. It's completely free and open-source!
In this on-boarding unit you'll:
- Learn more about the **course content**.
- **Define the path** you're going to take (either self-audit or certification process).
- Learn more **about us**.
- **Create your Hugging Face account** (it's free).
- **Sign-up to our Discord server**, the place where you can chat with your classmates and us (the Hugging Face team).
Let's get started!
## What to expect? [[expect]]
In this course, you will:
- 📖 Study AI Agents in **theory, design, and practice.**
- 🧑‍💻 Learn to **use established AI Agent libraries** such as [smolagents](https://huggingface.co/docs/smolagents/en/index), [LangChain](https://www.langchain.com/), and [LlamaIndex](https://www.llamaindex.ai/).
- 💾 Share your **AI agents on the Hub** and try powerful agents from the community.
- 🏆 Participate in challenges where you will **evaluate your agents against other students.**
- 🎓 **Earn a certificate of completion** by completing assignments.
And more!
At the end of this course you'll:
- 📖 Learn the basics of Agents from scratch.
- 🕵️ Build your own Agents using the latest libraries and tools.
Don't forget to **<a href="https://bit.ly/hf-learn-agents">sign up to the course</a>** (we are collecting your email to be able to **send you the links when each Unit is published and give you information about the challenges and updates).**
Sign up 👉 <a href="https://bit.ly/hf-learn-agents">here</a>
## What does the course look like? [[course-look-like]]
The course is composed of:
- *A fundamental part*: where you learn a **concept in theory**.
- *A hands-on*: where you'll learn **to use established AI Agent libraries** to train your agents in unique environments. These hands-on will be **Hugging Face spaces** witha pre-configured environment!
- *Use case assignments*: where you'll apply the concepts you've learned to solve a real-world problem.
- *The Challenge*: you'll get to put your agent to compete against other agents in a challenge. There will also be [a leaderboard](https://huggingface.co/spaces/huggingface-projects/AI-Agents-Leaderboard) for you to compare the agents' performance.
<!-- TODO: Create a space for the leaderboard -->
## What's the syllabus? [[syllabus]]
This is the course's syllabus:
<!-- TODO: Add syllabus -->
## Two paths: choose your own adventure [[two-paths]]
<img src="https://huggingface.co/datasets/huggingface-agents-course/course-images/resolve/main/en/unit0/two-paths.jpg" alt="Two paths" width="100%"/>
You can choose to follow this course either:
- *To get a certificate of completion*: you need to complete 1 of the use case assignments and 1 of the challenges.
- *To get a certificate of honors*: you need to complete 100% of the assignments and 1 of the challenges.
- *As a simple audit*: you can participate in all challenges and do assignments if you want.
There's **no deadlines, the course is self-paced**.
Both paths **are completely free**.
Whatever path you choose, we advise you **to follow the recommended pace to enjoy the course and challenges with your fellow classmates.**
<!-- TODO: Validate grading for assignments -->
You don't need to tell us which path you choose. **If you get more than 80% of the assignments done, you'll get a certificate.**
## The Certification Process [[certification-process]]
The certification process is **completely free**:
<!-- TODO: Validate grading for assignments -->
- *To get a certificate of completion*: you need to complete 80% of the assignments.
- *To get a certificate of honors*: you need to complete 100% of the assignments.
Again, there's **no deadline** since the course is self paced. But our advice **is to follow the recommended pace section**.
<img src="https://huggingface.co/datasets/huggingface-agents-course/course-images/resolve/main/en/unit0/certification.jpg" alt="Course certification" width="100%"/>
## How to get most of the course? [[advice]]
To get most of the course, we have some advice:
1. <a href="https://discord.gg/ydHrjt3WP5">Join study groups in Discord </a>: studying in groups is always easier. To do that, you need to join our discord server. If you're new to Discord, no worries! We have some tools that will help you learn about it.
2. **Do the quizzes and assignments**: the best way to learn is to do and test yourself.
3. **Define a schedule to stay in sync**: you can use our recommended pace schedule below or create yours.
<img src="https://huggingface.co/datasets/huggingface-agents-course/course-images/resolve/main/en/unit0/advice.jpg" alt="Course advice" width="100%"/>
## What tools do I need? [[tools]]
You need only 3 things:
- *A computer* with an internet connection.
- A *Hugging Face Account*: to push and load models. If you don't have an account yet, you can create one **[here](https://hf.co/join)** (it's free).
<img src="https://huggingface.co/datasets/huggingface-agents-course/course-images/resolve/main/en/unit0/tools.jpg" alt="Course tools needed" width="100%"/>
## What is the recommended pace? [[recommended-pace]]
<!-- TODO: Add calendar for pace -->
Each chapter in this course is designed **to be completed in 1 week, with approximately 3-4 hours of work per week**. However, you can take as much time as necessary to complete the course. If you want to dive into a topic more in-depth, we'll provide additional resources to help you achieve that.
## Who are we [[who-are-we]]
In this course, you have two types of challenges:
<!-- TODO: Add team BIOs -->
## What are the challenges in this course? [[challenges]]
In this new version of the course, you have two types of challenges:
- [A leaderboard](https://huggingface.co/spaces/huggingface-projects/AI-Agents-Leaderboard) to compare your agent's performance to others.
- [AI vs. AI challenges](https://huggingface.co/learn/ai-agents-course/unit7/introduction?fw=pt) where you can train your agent and compete against other classmates' agents.
<img src="https://huggingface.co/datasets/huggingface-ai-agents-course/course-images/resolve/main/en/unit0/challenges.jpg" alt="Challenges" width="100%"/>
## I found a bug, or I want to improve the course [[contribute]]
<!-- TODO: Add contribution pages -->
Contributions are welcomed 🤗
- If you *found a bug 🐛 in a notebook*, please <a href="https://github.com/huggingface/agents-course/issues">open an issue</a> and **describe the problem**.
- If you *want to improve the course*, you can <a href="https://github.com/huggingface/agents-course/pulls">open a Pull Request.</a>
## I still have questions [[questions]]
Please ask your question in our <a href="https://discord.gg/ydHrjt3WP5">discord server #ai-agents-discussions.</a>

View File

@@ -1,30 +0,0 @@
# Course Syllabus
Here is the general syllabus for the course. With each unit a more detailed list of topics will be released.
| Chapter | Topic | Description |
| :---- | :---- | :---- |
| 0 | Onboarding | Set you up with the tools and platforms that you will use. |
| 1 | Agent Fundamentals | Explain Tools, Thoughts, Actions, Observations, and their formats. Explain LLMs, messages, special tokens and chat-template. Show a simple use case in generic python functions. |
| 2 | Frameworks | Understand how the fundamentals are implemented in popular libraries : smolAgents, LangGraph, LLamaIndex |
| 3 | Use Cases | Let's build some real life use cases ( open to PRs 🤗 from experienced Agent builders ) |
| 4 | Final Assignment | Build an agent for a selected benchmark and prove your understanding of Agents on the student leaderboard 🚀 |
*Over the coming weeks further bonus units will be released.*
## What does the course look like? [[course-look-like]]
The course is composed of:
- *A fundamental part*: where you learn a **concept in theory**.
- *A hands-on*: where you'll learn **to use established AI Agent libraries** to train your agents in unique environments. These hands-on will be **Hugging Face spaces** with a pre-configured environment!
- *Use case assignments*: where you'll apply the concepts you've learned to solve a real-world problem.
- *The Challenge*: you'll get to put your agent to compete against other agents in a challenge. There will also be [a leaderboard](https://huggingface.co/spaces/huggingface-projects/AI-Agents-Leaderboard) for you to compare the agents' performance.
<!-- TODO: Create a space for the leaderboard -->
## What's the syllabus? [[syllabus]]
This is the course's syllabus:
<!-- TODO: Add syllabus -->

View File

@@ -1,27 +0,0 @@
## Two paths: choose your own adventure [[two-paths]]
<img src="https://huggingface.co/datasets/huggingface-agents-course/course-images/resolve/main/en/unit0/two-paths.jpg" alt="Two paths" width="100%"/>
You can choose to follow this course either:
- *To get a certificate of completion*: you need to complete 1 of the use case assignments and 1 of the challenges.
- *To get a certificate of honors*: you need to complete 100% of the assignments and 1 of the challenges.
- *As a simple audit*: you can participate in all challenges and do assignments if you want.
There's **no deadlines, the course is self-paced**.
Both paths **are completely free**.
Whatever path you choose, we advise you **to follow the recommended pace to enjoy the course and challenges with your fellow classmates.**
<!-- TODO: Validate grading for assignments -->
You don't need to tell us which path you choose. **If you get more than 80% of the assignments done, you'll get a certificate.**
## The Certification Process [[certification-process]]
The certification process is **completely free**:
<!-- TODO: Validate grading for assignments -->
- *To get a certificate of completion*: you need to complete 80% of the assignments.
- *To get a certificate of honors*: you need to complete 100% of the assignments.
Again, there's **no deadline** since the course is self paced. But our advice **is to follow the recommended pace section**.
<img src="https://huggingface.co/datasets/huggingface-agents-course/course-images/resolve/main/en/unit0/certification.jpg" alt="Course certification" width="100%"/>

View File

@@ -1,9 +0,0 @@
## How to get most of the course? [[advice]]
To get most of the course, we have some advice:
1. <a href="https://discord.gg/ydHrjt3WP5">Join study groups in Discord </a>: studying in groups is always easier. To do that, you need to join our discord server. If you're new to Discord, no worries! We have some tools that will help you learn about it.
2. **Do the quizzes and assignments**: the best way to learn is to do and test yourself.
3. **Define a schedule to stay in sync**: you can use our recommended pace schedule below or create yours.
<img src="https://huggingface.co/datasets/huggingface-agents-course/course-images/resolve/main/en/unit0/advice.jpg" alt="Course advice" width="100%"/>

View File

@@ -1,39 +0,0 @@
# Tools
This section will cover the tools you will need for the course.
## What tools do I need? [[tools]]
You need only 3 things:
- *A computer* with an internet connection.
- A *Hugging Face Account*: to push and load models. If you don't have an account yet, you can create one **[here](https://hf.co/join)** (it's free).
<img src="https://huggingface.co/datasets/huggingface-agents-course/course-images/resolve/main/en/unit0/tools.jpg" alt="Course tools needed" width="100%"/>
After all this information, it's time to get started. We're going to do two things:
1. **Create your Hugging Face account** if it's not already done
2. **Sign up to Discord and introduce yourself** (don't be shy 🤗)
### Let's create my Hugging Face account
(If it's not already done) create an account to HF <a href="https://huggingface.co/join">here</a>
### Let's join our Discord server
You can now sign up for our Discord Server. This is the place where you **can chat with the community and with us, create and join study groups to grow with each other and more**
👉🏻 Join our discord server <a href="https://discord.gg/UrrTSsSyjb">here.</a>
When you join, remember to introduce yourself in #introduce-yourself and sign-up for AI Agents channels in #channels-and-roles.
We have multiple AI Agents-related channels:
- `agents-course`: where we give the latest information about the course.
- `smolagents`: where you can discuss and get support with the library.
If this is your first time using Discord, we wrote a Discord 101 to get the best practices. Check the next section.
Congratulations! **You've just finished the on-boarding**. You're now ready to start to learn about AI Agents. Have fun!
### Keep Learning, stay awesome 🤗

View File

@@ -1,9 +0,0 @@
# Pace Recommended
This section will discuss the recommended pace for the course and any deadlines you should be aware of.
## What is the recommended pace? [[recommended-pace]]
<!-- TODO: Add calendar for pace -->
Each chapter in this course is designed **to be completed in 1 week, with approximately 3-4 hours of work per week**. However, you can take as much time as necessary to complete the course. If you want to dive into a topic more in-depth, we'll provide additional resources to help you achieve that.

View File

@@ -1,8 +0,0 @@
# Authors
This section will provide information about the authors of the course.
## Who are we [[who-are-we]]
In this course, you have two types of challenges:
<!-- TODO: Add team BIOs -->

View File

@@ -1,10 +0,0 @@
# How to Contribute to the Course
This section will explain how you can contribute to the course.
## I found a bug, or I want to improve the course [[contribute]]
<!-- TODO: Add contribution pages -->
Contributions are welcomed 🤗
- If you *found a bug 🐛 in a notebook*, please <a href="https://github.com/huggingface/agents-course/issues">open an issue</a> and **describe the problem**.
- If you *want to improve the course*, you can <a href="https://github.com/huggingface/agents-course/pulls">open a Pull Request.</a>

View File

@@ -1,7 +0,0 @@
# I Have Questions
This section will address common questions and how to get help.
## I still have questions [[questions]]
Please ask your question in our <a href="https://discord.gg/ydHrjt3WP5">discord server #ai-agents-discussions.</a>

View File

@@ -1,30 +0,0 @@
# Discord 101 [[discord-101]]
Welcome to the AI Agents Course! This guide is designed to help you get started with Discord, a free chat platform similar to Slack.
<img src="https://huggingface.co/datasets/huggingface-ai-agents-course/course-images/resolve/main/en/unit0/huggy-logo.jpg" alt="Huggy Logo"/>
Join the Hugging Face Community Discord server, which has over 50,000 members, by clicking [here](https://discord.gg/ydHrjt3WP5). It's a great place to connect with others!
Starting on Discord can be a bit overwhelming, so here's a quick guide to help you navigate.
When you [sign up for our Discord server](http://hf.co/join/discord), you'll be prompted to choose your interests. Be sure to select **"AI Agents"** to gain access to the AI Agents Category, which includes all the course-related channels. Feel free to explore and join additional channels if you wish! 🚀
After signing up, introduce yourself in the `#introduce-yourself` channel.
<img src="https://huggingface.co/datasets/huggingface-ai-agents-course/course-images/resolve/main/en/unit0/discord2.jpg" alt="Discord"/>
In the AI Agents category, make sure to sign up for these channels by clicking on 🤖 AI Agents in `role-assignment`:
- `agents-course`: for the **latest course information**.
- `smolagents`: for **discussion and support with the library**.
The HF Community Server hosts a vibrant community with interests in various areas, offering opportunities for learning through paper discussions, events, and more.
Here are a few tips for using Discord effectively:
- **Voice channels** are available, though text chat is more commonly used.
- You can format text using **markdown style**, which is especially useful for writing code. Note that markdown doesn't work as well for links.
- Consider opening threads for **long conversations** to keep discussions organized.
We hope you find this guide helpful! If you have any questions, feel free to ask.

View File

@@ -1,14 +0,0 @@
# Table of Contents
1. [Welcome to the Course](01_welcome_to_the_course.mdx)
2. [What you're going to do](02_what_youre_going_to_do.mdx)
3. [Certification (and the idea)](03_certification_and_the_idea.mdx)
4. [How to get most of the course](04_how_to_get_most_of_the_course.mdx)
5. [Tools](05_tools.mdx)
6. [Pace recommended (saying that there's a deadline)](06_pace_recommended.mdx)
7. [Authors](07_authors.mdx)
8. [How to contribute to the course](08_how_to_contribute_to_the_course.mdx)
9. [I have questions](09_i_have_questions.mdx)
10. [Discord 101](10_discord_101.mdx)
# Welcome to the Course

46
units/en/_toctree.yml Normal file
View File

@@ -0,0 +1,46 @@
- title: Unit 0. Welcome to the course
sections:
- local: unit0/introduction
title: Welcome to the course 🤗
- local: unit0/onboarding
title: Onboarding
- local: unit0/discord101
title: (Optional) Discord 101
- title: Unit 1. Introduction to Agents
sections:
- local: unit1/introduction
title: Introduction
- local: unit1/what-are-agents
title: What is an Agent?
- local: unit1/quiz1
title: Quick Quiz 1
- local: unit1/what-are-llms
title: What are LLMs?
- local: unit1/messages-and-special-tokens
title: Messages and Special Tokens
- local: unit1/tools
title: What are Tools?
- local: unit1/quiz2
title: Quick Quiz 2
- local: unit1/agent-steps-and-structure
title: Understanding AI Agents through the Thought-Action-Observation Cycle
- local: unit1/thoughts
title: Thought, Internal Reasoning and the Re-Act Approach
- local: unit1/actions
title: Actions, Enabling the Agent to Engage with Its Environment
- local: unit1/observations
title: Observe, Integrating Feedback to Reflect and Adapt
- local: unit1/dummy-agent-library
title: Dummy Agent Library
- local: unit1/tutorial
title: Lets Create Our First Agent Using Smolagents
- local: unit1/final-quiz
title: Unit 1 Final Quiz
- local: unit1/get-your-certificate
title: Get Your Certificate
- local: unit1/conclusion
title: Conclusion
- title: When the next steps are published?
sections:
- local: communication/next-units
title: Next Units

View File

@@ -0,0 +1,9 @@
# When the next units get published?
Here's the publication schedule:
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/next-units.jpg" alt="Next Units" width="100%"/>
Don't forget to <a href="https://bit.ly/hf-learn-agents">sign up for the course</a>! By signing up, **we can send you the links as each unit is published, along with updates and details about upcoming challenges**.
Keep Learning, Stay Awesome 🤗

View File

@@ -0,0 +1,52 @@
# (Optional) Discord 101 [[discord-101]]
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit0/discord-etiquette.jpg" alt="The Discord Etiquette" width="100%"/>
This guide is designed to help you get started with Discord, a free chat platform popular in the gaming and ML communities.
Join the Hugging Face Community Discord server, which **has over 100,000 members**, by clicking [here](https://discord.gg/UrrTSsSyjb). It's a great place to connect with others!
## The Agents course on Hugging Face's Discord Community
Starting on Discord can be a bit overwhelming, so here's a quick guide to help you navigate.
<!-- Not the case anymore, you'll be prompted to choose your interests. Be sure to select **"AI Agents"** to gain access to the AI Agents Category, which includes all the course-related channels. Feel free to explore and join additional channels if you wish! 🚀-->
The HF Community Server hosts a vibrant community with interests in various areas, offering opportunities for learning through paper discussions, events, and more.
After [signing up](http://hf.co/join/discord), introduce yourself in the `#introduce-yourself` channel.
We created 4 channels for the Agents Course:
- `agents-course-announcements`: for the **latest course informations**.
- `🎓-agents-course-general`: for **general discussions and chitchat**.
- `agents-course-questions`: to **ask questions and help your classmates**.
- `agents-course-showcase`: to **show your best agents** .
In addition you can check:
- `smolagents`: for **discussion and support with the library**.
## Tips for using Discord effectively
### How to join a server
If you are less familiar with Discord, you might want to check out this [guide](https://support.discord.com/hc/en-us/articles/360034842871-How-do-I-join-a-Server#h_01FSJF9GT2QJMS2PRAW36WNBS8) on how to join a server.
Here's a quick summary of the steps:
1. Click on the [Invite Link](https://discord.gg/UrrTSsSyjb).
2. Sign in with your Discord account, or create an account if you don't have one.
3. Validate that you are not an AI agent!
4. Setup your nickname and avatar.
5. Click "Join Server".
### How to use Discord effectively
Here are a few tips for using Discord effectively:
- **Voice channels** are available, though text chat is more commonly used.
- You can format text using **markdown style**, which is especially useful for writing code. Note that markdown doesn't work as well for links.
- Consider opening threads for **long conversations** to keep discussions organized.
We hope you find this guide helpful! If you have any questions, feel free to ask us on Discord 🤗.

View File

@@ -0,0 +1,181 @@
# Welcome to the 🤗 AI Agents Course [[introduction]]
<figure>
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit0/thumbnail.jpg" alt="AI Agents Course thumbnail" width="100%"/>
<figcaption>The background of the image was generated using <a href="https://scenario.com/">Scenario.com</a>
</figcaption>
</figure>
Welcome to the most exciting topic in AI today: **Agents**!
This free course will take you on a journey, **from beginner to expert**, in understanding, using and building AI agents.
This first unit will help you onboard:
- Discover the **course's syllabus**.
- **Choose the path** you're going to take (either self-audit or certification process).
- **Get more information about the certification process and the deadlines**.
- Get to know the team behind the course.
- Create your **Hugging Face account**.
- **Sign-up to our Discord server**, and meet your classmates and us.
Let's get started!
We organize a **live Q&A this Wednesday, February the 12th at 5PM CET**. Where we **will explain how the course will work** (scope, units, challenges and more) and **answer your questions**.
👉 https://www.youtube.com/live/PopqUt3MGyQ?feature=shared
👉 Don't forget **to click to Notify me**, to not miss the live.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/live1.jpg" alt="First live thumbnail"/>
## What to expect from this course? [[expect]]
In this course, you will:
- 📖 Study AI Agents in **theory, design, and practice.**
- 🧑‍💻 Learn to **use established AI Agent libraries** such as [smolagents](https://huggingface.co/docs/smolagents/en/index), [LangChain](https://www.langchain.com/), and [LlamaIndex](https://www.llamaindex.ai/).
- 💾 **Share your agents** on the Hugging Face Hub and explore agents created by the community.
- 🏆 Participate in challenges where you will **evaluate your agents against other students'.**
- 🎓 **Earn a certificate of completion** by completing assignments.
And more!
At the end of this course you'll understand **how Agents work and how to build your own Agents using the latest libraries and tools**.
Don't forget to **<a href="https://bit.ly/hf-learn-agents">sign up to the course!</a>**
(We are respectful of your privacy. We collect your email address to be able to **send you the links when each Unit is published and give you information about the challenges and updates).**
## What does the course look like? [[course-look-like]]
The course is composed of:
- *Foundational Units*: where you learn Agents **concepts in theory**.
- *Hands-on*: where you'll learn **to use established AI Agent libraries** to train your agents in unique environments. These hands-on sections will be **Hugging Face Spaces** with a pre-configured environment.
- *Use case assignments*: where you'll apply the concepts you've learned to solve a real-world problem that you'll choose.
- *The Challenge*: you'll get to put your agent to compete against other agents in a challenge. There will also be [a leaderboard](https://huggingface.co/spaces/huggingface-projects/AI-Agents-Leaderboard) (not available yet) for you to compare the agents' performance.
This **course is a living project, evolving with your feedback and contributions!** Feel free to [open issues and PRs in GitHub](https://github.com/huggingface/agents-course), and engage in discussions in our Discord server.
After you have gone through the course, you can also send your feedback [👉 using this form](https://docs.google.com/forms/d/e/1FAIpQLSe9VaONn0eglax0uTwi29rIn4tM7H2sYmmybmG5jJNlE5v0xA/viewform?usp=dialog)
## What's the syllabus? [[syllabus]]
Here is the **general syllabus for the course**. A more detailed list of topics will be released with each unit.
| Chapter | Topic | Description |
| :---- | :---- | :---- |
| 0 | Onboarding | Set you up with the tools and platforms that you will use. |
| 1 | Agent Fundamentals | Explain Tools, Thoughts, Actions, Observations, and their formats. Explain LLMs, messages, special tokens and chat templates. Show a simple use case using python functions as tools. |
| 2 | Frameworks | Understand how the fundamentals are implemented in popular libraries : smolagents, LangGraph, LLamaIndex |
| 3 | Use Cases | Let's build some real life use cases (open to PRs 🤗 from experienced Agent builders) |
| 4 | Final Assignment | Build an agent for a selected benchmark and prove your understanding of Agents on the student leaderboard 🚀 |
*We are also planning to release some bonus units, stay tuned!*
## What are the prerequisites?
To be able to follow this course you should have a:
- Basic knowledge of Python
- Basic knowledge of LLMs (we have a section in Unit 1 to recap what they are)
## What tools do I need? [[tools]]
You only need 2 things:
- *A computer* with an internet connection.
- A *Hugging Face Account*: to push and load models, agents, and create Spaces. If you don't have an account yet, you can create one **[here](https://hf.co/join)** (it's free).
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit0/tools.jpg" alt="Course tools needed" width="100%"/>
## The Certification Process [[certification-process]]
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit0/three-paths.jpg" alt="Two paths" width="100%"/>
You can choose to follow this course *in audit mode*, or do the activities and *get one of the two certificates we'll issue*.
If you audit the course, you can participate in all the challenges and do assignments if you want, and **you don't need to notify us**.
The certification process is **completely free**:
- *To get a certification for fundamentals*: you need to complete Unit 1 of the course. This is intended for students that want to get up to date with the latest trends in Agents.
- *To get a certificate of completion*: you need to complete Unit 1, one of the use case assignments we'll propose during the course, and the final challenge.
There's a deadline for the certification process: all the assignments must be finished before **May 1st 2025**.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit0/deadline.jpg" alt="Deadline" width="100%"/>
## What is the recommended pace? [[recommended-pace]]
Each chapter in this course is designed **to be completed in 1 week, with approximately 3-4 hours of work per week**.
Since there's a deadline, we provide you a recommended pace:
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit0/recommended-pace.jpg" alt="Recommended Pace" width="100%"/>
## How to get the most out of the course? [[advice]]
To get the most out of the course, we have some advice:
1. <a href="https://discord.gg/UrrTSsSyjb">Join study groups in Discord</a>: studying in groups is always easier. To do that, you need to join our discord server and verify your Hugging Face account.
2. **Do the quizzes and assignments**: the best way to learn is through hands-on practice and self-assessment..
3. **Define a schedule to stay in sync**: you can use our recommended pace schedule below or create yours.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit0/advice.jpg" alt="Course advice" width="100%"/>
## Who are we [[who-are-we]]
About the authors:
### Joffrey Thomas
Joffrey is a machine learning engineer at Hugging Face and has built and deployed AI Agents in production. Joffrey will be your main instructor for this course.
- [Follow Joffrey on Hugging Face](https://huggingface.co/Jofthomas)
- [Follow Joffrey on X](https://x.com/Jthmas404)
- [Follow Joffrey on Linkedin](https://www.linkedin.com/in/joffrey-thomas/)
### Ben Burtenshaw
Ben is a machine learning engineer at Hugging Face and has delivered multiple courses across various platforms. Ben's goal is to make the course accessible to everyone.
- [Follow Ben on Hugging Face](https://huggingface.co/burtenshaw)
- [Follow Ben on X](https://x.com/ben_burtenshaw)
- [Follow Ben on Linkedin](https://www.linkedin.com/in/ben-burtenshaw/)
### Thomas Simonini
Thomas is a machine learning engineer at Hugging Face and delivered the successful <a href="https://huggingface.co/learn/deep-rl-course/unit0/introduction">Deep RL</a> and <a href="https://huggingface.co/learn/ml-games-course/en/unit0/introduction">ML for games</a> courses. Thomas is a big fan of Agents and is excited to see what the community will build.
- [Follow Thomas on Hugging Face](https://huggingface.co/ThomasSimonini)
- [Follow Thomas on X](https://x.com/ThomasSimonini)
- [Follow Thomas on Linkedin](https://www.linkedin.com/in/simoninithomas/)
## Acknowledgments
We would like to extend our gratitude to the following individuals for their invaluable contributions to this course:
- **[Pedro Cuenca](https://huggingface.co/pcuenq)** For his guidance and expertise in reviewing the materials
- **[Aymeric Roucher](https://huggingface.co/m-ric)** For his amazing demo spaces ( decoding and final agent ).
- **[Joshua Lochner](https://huggingface.co/Xenova)** For his amazing demo space on tokenization.
## I found a bug, or I want to improve the course [[contribute]]
Contributions are **welcome** 🤗
- If you *found a bug 🐛 in a notebook*, please <a href="https://github.com/huggingface/agents-course/issues">open an issue</a> and **describe the problem**.
- If you *want to improve the course*, you can <a href="https://github.com/huggingface/agents-course/pulls">open a Pull Request.</a>
- If you *want to add a full section or a new unit*, the best is to <a href="https://github.com/huggingface/agents-course/issues">open an issue</a> and **describe what content you want to add before starting to write it so that we can guide you**.
## I still have questions [[questions]]
Please ask your question in our <a href="https://discord.gg/UrrTSsSyjb">discord server #ai-agents-discussions.</a>
Now that you have all the information, let's get on board ⛵
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit0/time-to-onboard.jpg" alt="Time to Onboard" width="100%"/>

View File

@@ -0,0 +1,60 @@
# Onboarding: Your First Steps ⛵
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit0/time-to-onboard.jpg" alt="Time to Onboard" width="100%"/>
Now that you have all the details, let's get started! We're going to do four things:
1. **Create your Hugging Face Account** if it's not already done
2. **Sign up to Discord and introduce yourself** (don't be shy 🤗)
3. **Follow the Hugging Face Agents Course** on the Hub
4. **Spread the word** about the course
### Step 1: Create Your Hugging Face Account
(If you havent already) create a Hugging Face account <a href='https://huggingface.co/join'>here</a>.
### Step 2: Join Our Discord Community
You can now sign up for our Discord Server. This is where you can **chat with the community (including us!)**, join study groups, and grow together.
👉🏻 Join our discord server <a href="https://discord.gg/UrrTSsSyjb">here.</a>
When you join, remember to introduce yourself in `#introduce-yourself`.
We have multiple AI Agents-related channels:
- `agents-course-announcements`: for the **latest course informations**.
- `🎓-agents-course-general`: for **general discussions and chitchat**.
- `agents-course-questions`: to **ask questions and help your classmates**.
- `agents-course-showcase`: to **show your best agents**.
In addition you can check:
- `smolagents`: for **discussion and support with the library**.
If this is your first time using Discord, we wrote a Discord 101 to get the best practices. Check [the next section](discord101).
### Step 3: Follow the Hugging Face Agent Course Organization
Stay up to date with the latest course materials, updates, and announcements **by following the Hugging Face Agents Course Organization**.
👉 Go [here](https://huggingface.co/agents-course) and click on **follow**.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/hf_course_follow.gif" alt="Follow" width="100%"/>
### Step 4: Spread the word about the course
Help us make this course more visible! There are two way you can help us:
1. Show your support by ⭐ <a href="https://github.com/huggingface/agents-course">the course's repository</a>.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/please_star.gif" alt="Repo star"/>
2. Share Your Learning Journey: Let others **know youre taking this course**! Weve prepared an illustration you can use in your social media posts
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/share.png">
You can download the image by clicking 👉 [here](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/share.png?download=true)
Congratulations! 🎉 **You've completed the onboarding process**! You're now ready to start learning about AI Agents. Have fun!
Keep Learning, stay awesome 🤗

19
units/en/unit1/README.md Normal file
View File

@@ -0,0 +1,19 @@
# Table of Contents
You can access Unit 1 on hf.co/learn 👉 <a href="https://hf.co/learn/agents-course/unit1/introduction">here</a>
<!--
| Title | Description |
|-------|-------------|
| [Definition of an Agent](1_definition_of_an_agent.md) | General example of what agents can do without technical jargon. |
| [Explain LLMs](2_explain_llms.md) | Explanation of Large Language Models, including the family tree of models and suitable models for agents. |
| [Messages and Special Tokens](3_messages_and_special_tokens.md) | Explanation of messages, special tokens, and chat-template usage. |
| [Dummy Agent Library](4_dummy_agent_library.md) | Introduction to using a dummy agent library and serverless API. |
| [Tools](5_tools.md) | Overview of Pydantic for agent tools and other common tool formats. |
| [Agent Steps and Structure](6_agent_steps_and_structure.md) | Steps involved in an agent, including thoughts, actions, observations, and a comparison between code agents and JSON agents. |
| [Thoughts](7_thoughts.md) | Explanation of thoughts and the ReAct approach. |
| [Actions](8_actions.md) | Overview of actions and stop and parse approach. |
| [Observations](9_observations.md) | Explanation of observations and append result to reflect. |
| [Quizz](10_quizz.md) | Contains quizzes to test understanding of the concepts. |
| [Simple Use Case](11_simple_use_case.md) | Provides a simple use case exercise using datetime and a Python function as a tool. |
-->

126
units/en/unit1/actions.mdx Normal file
View File

@@ -0,0 +1,126 @@
# Actions: Enabling the Agent to Engage with Its Environment
<Tip>
In this section, we explore the concrete steps an AI agent takes to interact with its environment.
Well cover how actions are represented (using JSON or code), the importance of the stop and parse approach, and introduce different types of agents.
</Tip>
Actions are the concrete steps an **AI agent takes to interact with its environment**.
Whether its browsing the web for information or controlling a physical device, each action is a deliberate operation executed by the agent.
For example, an agent assisting with customer service might retrieve customer data, offer support articles, or transfer issues to a human representative.
## Types of Agent Actions
There are multiple types of Agents that take actions differently:
| Type of Agent | Description |
|------------------------|--------------------------------------------------------------------------------------------------|
| JSON Agent | The Action to take is specified as in JSON format |
| Code Agent | The Agents writes a code block that is interpreted externally |
| Function-calling Agent | It is a subcategory of the JSON Agent which has been fine-tuned to generate a new message for each action |
Actions themselves can serve many purposes:
| Type of Action | Description |
|--------------------------|------------------------------------------------------------------------------------------|
| Information Gathering | Performing web searches, querying databases, or retrieving documents. |
| Tool Usage | Making API calls, running calculations, and executing code. |
| Environment Interaction | Manipulating digital interfaces or controlling physical devices. |
| Communication | Engaging with users via chat or collaborating with other agents. |
One crucial part of an agent is the **ability to STOP generating new tokens when an action is complete**, and that is true for all formats of Agent; JSON, code, or function-calling. This prevents unintended output and ensures that the agents response is clear and precise.
The LLM only handles text, and uses it to describe the action it wants to take and the parameters to supply to the tool.
## The Stop and Parse Approach
One key method for implementing actions is the **stop and parse approach**. This method ensures that the agents output is structured and predictable:
1. **Generation in a Structured Format**:
The agent outputs its intended action in a clear, predetermined format (JSON or code).
2. **Halting Further Generation**:
Once the action is complete, **the agent stops generating additional tokens**. This prevents extra or erroneous output.
3. **Parsing the Output**:
An external parser reads the formatted action, determines which Tool to call, and extracts the required parameters.
For example, an agent needing to check the weather might output:
```json
Thought: I need to check the current weather for New York.
Action :
{
"action": "get_weather",
"action_input": {"location": "New York"}
}
```
The framework can then easily parse the name of the function to call and the arguments to apply.
This clear, machine-readable format minimizes errors and enables external tools to accurately process the agents command.
Note: Function-calling agents operate similarly by structuring each action so that a designated function is invoked with the correct arguments.
We'll dive deeper into that type of Agents in a future Unit.
## Code Agents
An alternative approach is using *Code Agents*.
The idea is: **instead of outputting a simple JSON object**, a Code Agent generates an **executable code block—typically in a high-level language like Python**.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/code-vs-json-actions.png" alt="Code Agents" />
This approach offers several advantages:
- **Expressiveness:** Code can naturally represent complex logic, including loops, conditionals, and nested functions, providing greater flexibility than JSON.
- **Modularity and Reusability:** Generated code can include functions and modules that are reusable across different actions or tasks.
- **Enhanced Debuggability:** With a well-defined programming syntax, code errors are often easier to detect and correct.
- **Direct Integration:** Code Agents can integrate directly with external libraries and APIs, enabling more complex operations such as data processing or real-time decision making.
For example, a Code Agent tasked with fetching the weather might generate the following Python snippet:
```python
# Code Agent Example: Retrieve Weather Information
def get_weather(city):
import requests
api_url = f"https://api.weather.com/v1/location/{city}?apiKey=YOUR_API_KEY"
response = requests.get(api_url)
if response.status_code == 200:
data = response.json()
return data.get("weather", "No weather information available")
else:
return "Error: Unable to fetch weather data."
# Execute the function and prepare the final answer
result = get_weather("New York")
final_answer = f"The current weather in New York is: {result}"
print(final_answer)
```
In this example, the Code Agent:
- Retrieves weather data **via an API call**,
- Processes the response,
- And uses the print() function to output a final answer.
This method **also follows the stop and parse approach** by clearly delimiting the code block and signaling when execution is complete (here, by printing the final_answer).
---
We learned that Actions bridge an agent's internal reasoning and its real-world interactions by executing clear, structured tasks—whether through JSON, code, or function calls.
This deliberate execution ensures that each action is precise and ready for external processing via the stop and parse approach. In the next section, we will explore Observations to see how agents capture and integrate feedback from their environment.
After this, we will **finally be ready to build our first Agent!**

View File

@@ -0,0 +1,150 @@
# Understanding AI Agents through the Thought-Action-Observation Cycle
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-check-3.jpg" alt="Unit 1 planning"/>
In the previous sections, we learned:
- **How tools are made available to the agent in the system prompt**.
- **How AI agents are systems that can 'reason', plan, and interact with their environment**.
In this section, **well explore the complete AI Agent Workflow**, a cycle we defined as Thought-Action-Observation.
And then, well dive deeper on each of these steps.
## The Core Components
Agents work in a continuous cycle of: **thinking (Thought) → acting (Act) and observing (Observe)**.
Lets break down these actions together:
1. **Thought**: The LLM part of the Agent decides what the next step should be.
2. **Action:** The agent takes an action, by calling the tools with the associated arguments.
3. **Observation:** The model reflects on the response from the tool.
## The Thought-Action-Observation Cycle
The three components work together in a continuous loop. To use an analogy from programming, the agent uses a **while loop**: the loop continues until the objective of the agent has been fulfilled.
Visually, it looks like this:
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/AgentCycle.gif" alt="Think, Act, Observe cycle"/>
In many Agent frameworks, **the rules and guidelines are embedded directly into the system prompt**, ensuring that every cycle adheres to a defined logic.
In a simplified version, our system prompt may look like this:
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/system_prompt_cycle.png" alt="Think, Act, Observe cycle"/>
We see here that in the System Message we defined :
- The *Agent's behavior*.
- The *Tools our Agent has access to*, as we described in the previous section.
- The *Thought-Action-Observation Cycle*, that we bake into the LLM instructions.
Lets take a small example to understand the process before going deeper into each step of the process.
## Alfred, the weather Agent
We created Alfred, the Weather Agent.
A user asks Alfred: “Whats the weather like in New York today?”
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent.jpg" alt="Alfred Agent"/>
Alfreds job is to answer this query using a weather API tool.
Heres how the cycle unfolds:
### Thought
**Internal Reasoning:**
Upon receiving the query, Alfreds internal dialogue might be:
*"The user needs current weather information for New York. I have access to a tool that fetches weather data. First, I need to call the weather API to get up-to-date details."*
This step shows the agent breaking the problem into steps: first, gathering the necessary data.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-1.jpg" alt="Alfred Agent"/>
### Action
**Tool Usage:**
Based on its reasoning and the fact that Alfred knows about a `get_weather` tool, Alfred prepares a JSON-formatted command that calls the weather API tool. For example, its first action could be:
Thought: I need to check the current weather for New York.
```
{
"action": "get_weather",
"action_input": {
"location": "New York"
}
}
```
Here, the action clearly specifies which tool to call (e.g., get_weather) and what parameter to pass (the “location": “New York”).
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-2.jpg" alt="Alfred Agent"/>
### Observation
**Feedback from the Environment:**
After the tool call, Alfred receives an observation. This might be the raw weather data from the API such as:
*"Current weather in New York: partly cloudy, 15°C, 60% humidity."*
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-3.jpg" alt="Alfred Agent"/>
This observation is then added to the prompt as additional context. It functions as real-world feedback, confirming whether the action succeeded and providing the needed details.
### Updated thought
**Reflecting:**
With the observation in hand, Alfred updates its internal reasoning:
*"Now that I have the weather data for New York, I can compile an answer for the user."*
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-4.jpg" alt="Alfred Agent"/>
### Final Action
Alfred then generates a final response formatted as we told it to:
Thought: I have the weather data now. The current weather in New York is partly cloudy with a temperature of 15°C and 60% humidity."
Final answer : The current weather in New York is partly cloudy with a temperature of 15°C and 60% humidity.
This final action sends the answer back to the user, closing the loop.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-5.jpg" alt="Alfred Agent"/>
What we see in this example:
- **Agents iterate through a loop until the objective is fullfilled:**
**Alfreds process is cyclical**. It starts with a thought, then acts by calling a tool, and finally observes the outcome. If the observation had indicated an error or incomplete data, Alfred could have re-entered the cycle to correct its approach.
- **Tool Integration:**
The ability to call a tool (like a weather API) enables Alfred to go **beyond static knowledge and retrieve real-time data**, an essential aspect of many AI Agents.
- **Dynamic Adaptation:**
Each cycle allows the agent to incorporate fresh information (observations) into its reasoning (thought), ensuring that the final answer is well-informed and accurate.
This example showcases the core concept behind the *ReAct cycle* (a concept we're going to develop in the next section): **the interplay of Thought, Action, and Observation empowers AI agents to solve complex tasks iteratively**.
By understanding and applying these principles, you can design agents that not only reason about their tasks but also **effectively utilize external tools to complete them**, all while continuously refining their output based on environmental feedback.
---
Lets now dive deeper into the Thought, Action, Observation as the individual steps of the process.

View File

@@ -0,0 +1,19 @@
# Conclusion [[conclusion]]
Congratulations on finishing this first Unit 🥳
You've just **mastered the fundamentals of Agents** and you've created your first AI Agent!
It's **normal if you still feel confused by some of these elements**. Agents are a complex topic and it's common to take a while to grasp everything.
**Take time to really grasp the material** before continuing. Its important to master these elements and have a solid foundation before entering the fun part.
And if you pass the Quiz test, don't forget to get your certificate 🎓 👉 [here](https://huggingface.co/spaces/agents-course/unit1-certification-app)
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/certificate-example.jpg" alt="Certificate Example"/>
In the next (bonus) unit, you're going to learn **to fine-tune a Agent to do function calling (aka to be able to call tools based on user prompt)**.
Finally, we would love **to hear what you think of the course and how we can improve it**. If you have some feedback then, please 👉 [fill this form](https://docs.google.com/forms/d/e/1FAIpQLSe9VaONn0eglax0uTwi29rIn4tM7H2sYmmybmG5jJNlE5v0xA/viewform?usp=dialog)
### Keep Learning, stay awesome 🤗

View File

@@ -0,0 +1,327 @@
# Dummy Agent Library
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub3DONE.jpg" alt="Unit 1 planning"/>
This course is framework-agnostic because we want to **focus on the concepts of AI agents and avoid getting bogged down in the specifics of a particular framework**.
Also, we want students to be able to use the concepts they learn in this course in their own projects, using any framework they like.
Therefore, for this Unit 1, we will use a dummy agent library and a simple serverless API to access our LLM engine.
You probably wouldn't use these in production, but they will serve as a good **starting point for understanding how agents work**.
After this section, you'll be ready to **create a simple Agent** using `smolagents`
And in the following Units we will also use other AI Agent libraries like `LangGraph`, `LangChain`, and `LlamaIndex`.
To keep things simple we will use a simple Python function as a Tool and Agent.
We will use built-in Python packages like `datetime` and `os` so that you can try it out in any environment.
You can follow the process [in this notebook](https://huggingface.co/agents-course/notebooks/blob/main/dummy_agent_library.ipynb) and **run the code yourself**.
## Serverless API
In the Hugging Face ecosystem, there is a convenient feature called Serverless API that allows you to easily run inference on many models. There's no installation or deployment required.
```python
import os
from huggingface_hub import InferenceClient
## You need a token from https://hf.co/settings/tokens. If you run this on Google Colab, you can set it up in the "settings" tab under "secrets". Make sure to call it "HF_TOKEN"
os.environ["HF_TOKEN"]="hf_xxxxxxxxxxxxxx"
client = InferenceClient("meta-llama/Llama-3.2-3B-Instruct")
```
```python
output = client.text_generation(
"The capital of France is",
max_new_tokens=100,
)
print(output)
```
output:
```
Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris.
```
As seen in the LLM section, if we just do decoding, **the model will only stop when it predicts an EOS token**, and this does not happen here because this is a conversational (chat) model and **we didn't apply the chat template it expects**.
If we now add the special tokens related to the <a href="https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct">Llama-3.2-3B-Instruct model</a> that we're using, the behavior changes and it now produces the expected EOS.
```python
prompt="""<|begin_of_text|><|start_header_id|>user<|end_header_id|>
The capital of France is<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""
output = client.text_generation(
prompt,
max_new_tokens=100,
)
print(output)
```
output:
```
The capital of France is Paris.
```
Using the "chat" method is a much more convenient and reliable way to apply chat templates:
```python
output = client.chat.completions.create(
messages=[
{"role": "user", "content": "The capital of France is"},
],
stream=False,
max_tokens=1024,
)
```
output:
```
Paris.
```
The chat method is the RECOMMENDED method to use in order to ensure a smooth transition between models, but since this notebook is only educational, we will keep using the "text_generation" method to understand the details.
## Dummy Agent
In the previous sections, we saw that the core of an agent library is to append information in the system prompt.
This system prompt is a bit more complex than the one we saw earlier, but it already contains:
1. **Information about the tools**
2. **Cycle instructions** (Thought → Action → Observation)
```
Answer the following questions as best you can. You have access to the following tools:
get_weather: Get the current weather in a given location
The way you use the tools is by specifying a json blob.
Specifically, this json should have an `action` key (with the name of the tool to use) and an `action_input` key (with the input to the tool going here).
The only values that should be in the "action" field are:
get_weather: Get the current weather in a given location, args: {"location": {"type": "string"}}
example use :
{{
"action": "get_weather",
"action_input": {"location": "New York"}
}}
ALWAYS use the following format:
Question: the input question you must answer
Thought: you should always think about one action to take. Only one action at a time in this format:
Action:
$JSON_BLOB (inside markdown cell)
Observation: the result of the action. This Observation is unique, complete, and the source of truth.
... (this Thought/Action/Observation can repeat N times, you should take several steps when needed. The $JSON_BLOB must be formatted as markdown and only use a SINGLE action at a time.)
You must always end your output with the following format:
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Now begin! Reminder to ALWAYS use the exact characters `Final Answer:` when you provide a definitive answer.
```
Since we are running the "text_generation" method, we need to apply the prompt manually:
```
prompt=f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{SYSTEM_PROMPT}
<|eot_id|><|start_header_id|>user<|end_header_id|>
What's the weather in London ?
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
```
We can also do it like this, which is what happens inside the `chat` method :
```
messages=[
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": "What's the weather in London ?"},
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
tokenizer.apply_chat_template(messages, tokenize=False,add_generation_prompt=True)
```
The prompt now is :
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Answer the following questions as best you can. You have access to the following tools:
get_weather: Get the current weather in a given location
The way you use the tools is by specifying a json blob.
Specifically, this json should have an `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).
The only values that should be in the "action" field are:
get_weather: Get the current weather in a given location, args: {"location": {"type": "string"}}
example use :
{{
"action": "get_weather",
"action_input": {"location": "New York"}
}}
ALWAYS use the following format:
Question: the input question you must answer
Thought: you should always think about one action to take. Only one action at a time in this format:
Action:
$JSON_BLOB (inside markdown cell)
Observation: the result of the action. This Observation is unique, complete, and the source of truth.
... (this Thought/Action/Observation can repeat N times, you should take several steps when needed. The $JSON_BLOB must be formatted as markdown and only use a SINGLE action at a time.)
You must always end your output with the following format:
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Now begin! Reminder to ALWAYS use the exact characters `Final Answer:` when you provide a definitive answer.
<|eot_id|><|start_header_id|>user<|end_header_id|>
What's the weather in London ?
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
Let's decode!
```python
output = client.text_generation(
prompt,
max_new_tokens=200,
)
print(output)
```
output:
```
Action:
```
{
"action": "get_weather",
"action": {"location": "London"}
}
```
Thought: I will check the weather in London.
Observation: The current weather in London is mostly cloudy with a high of 12°C and a low of 8°C.
```
Do you see the issue?
>The answer was hallucinated by the model. We need to stop to actually execute the function!
Let's now stop on "Observation" so that we don't hallucinate the actual function response
```python
output = client.text_generation(
prompt,
max_new_tokens=200,
stop=["Observation:"] # Let's stop before any actual function is called
)
print(output)
```
output:
```
Action:
```
{
"action": "get_weather",
"action": {"location": "London"}
}
```
Thought: I will check the weather in London.
Observation:
Much Better!
Let's now create a dummy get weather function. In a real situation, you would likely call an API.
```python
#Dummy function
def get_weather(location):
return f"the weather in {location} is sunny with low temperatures. \n"
get_weather('London')
```
output:
```
'the weather in London is sunny with low temperatures. \n'
```
Let's concatenate the base prompt, the completion until function execution and the result of the function as an Observation and resume generation.
```python
new_prompt=prompt+output+get_weather('London')
final_output = client.text_generation(
new_prompt,
max_new_tokens=200,
)
print(final_output)
```
Here is the new prompt:
````
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Answer the following questions as best you can. You have access to the following tools:
get_weather: Get the current weather in a given location
The way you use the tools is by specifying a json blob.
Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).
The only values that should be in the "action" field are:
get_weather: Get the current weather in a given location, args: {"location": {"type": "string"}}
example use :
{{
"action": "get_weather",
"action_input": {"location": "New York"}
}}
ALWAYS use the following format:
Question: the input question you must answer
Thought: you should always think about one action to take. Only one action at a time in this format:
Action:
$JSON_BLOB (inside markdown cell)
Observation: the result of the action. This Observation is unique, complete, and the source of truth.
... (this Thought/Action/Observation can repeat N times, you should take several steps when needed. The $JSON_BLOB must be formatted as markdown and only use a SINGLE action at a time.)
You must always end your output with the following format:
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Now begin! Reminder to ALWAYS use the exact characters `Final Answer:` when you provide a definitive answer.
<|eot_id|><|start_header_id|>user<|end_header_id|>
What's the weather in London ?
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Action:
```
{
"action": "get_weather",
"action": {"location": {"type": "string", "value": "London"}
}
```
Thought: I will check the weather in London.
Observation:the weather in London is sunny with low temperatures.
````
Output:
```
Final Answer: The weather in London is sunny with low temperatures.
```
---
We learned how we can create Agents from scratch using Python code, and we **saw just how tedious that process can be**. Fortunately, many Agent libraries simplify this work by handling much of the heavy lifting for you.
Now, we're ready **to create our first real Agent** using the `smolagents` library.

View File

@@ -0,0 +1,24 @@
# Unit 1 Quiz
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub4DONE.jpg" alt="Unit 1 planning"/>
Well done on working through the first unit! Let's test your understanding of the key concepts covered so far.
When you pass the quiz, proceed to the next section to claim your certificate.
Good luck!
## Quiz
Here is the interactive quiz. The quiz is hosted on the Hugging Face Hub in a space. It will take you through a set of multiple choice questions to test your understanding of the key concepts covered in this unit. Once you've completed the quiz, you'll be able to see your score and a breakdown of the correct answers.
One important thing: **don't forget to click on Submit after you passed, otherwise your exam score will not be saved!**
<iframe
src="https://agents-course-unit-1-quiz.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
You can also access the quiz 👉 [here](https://huggingface.co/spaces/agents-course/unit_1_quiz)

View File

@@ -0,0 +1,20 @@
# Get your certificate
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub5DONE.jpg" alt="Unit 1 planning"/>
Now that you successfully pass the quiz, **you can get your certificate 🎓**
To earn this certificate, you need to complete Unit 1 of the Agents Course, and **pass 80% of the final quiz**.
<iframe
src="https://agents-course-unit1-certification-app.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
You can also access the certification process 👉 [here](https://huggingface.co/spaces/agents-course/unit1-certification-app)
Once you receive your certificate, you can add it to your LinkedIn 🧑‍💼 or share it on X, Bluesky, etc. **We would be super proud and would love to congratulate you if you tag @huggingface**! 🤗

View File

@@ -0,0 +1,48 @@
# Introduction to Agents
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/thumbnail.jpg" alt="Thumbnail"/>
Welcome to this first unit, where **you'll build a solid foundation in the fundamentals of AI Agents** including:
- **Understanding Agents**
- What is an Agent, and how does it work?
- How do Agents make decisions using reasoning and planning?
- **The Role of LLMs (Large Language Models) in Agents**
- How LLMs serve as the “brain” behind an Agent.
- How LLMs structure conversations via the Messages system.
- **Tools and Actions**
- How Agents use external tools to interact with the environment.
- How to build and integrate tools for your Agent.
- **The Agent Workflow:**
- *Think* → *Act* → *Observe*.
After exploring these topics, **youll build your first Agent** using `smolagents`!
Your Agent, named Alfred, will handle a simple task and demonstrate how to apply these concepts in practice.
Youll even learn how to **publish your Agent on Hugging Face Spaces**, so you can share it with friends and colleagues.
Finally, at the end of this Unit, you'll take a quiz. Pass it, and you'll **earn your first course certification**: the 🎓 Certificate of Fundamentals of Agents.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/certificate-example.jpg" alt="Certificate Example"/>
This Unit is your **essential starting point**, laying the groundwork for understanding Agents before you move on to more advanced topics.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-no-check.jpg" alt="Unit 1 planning"/>
It's a big Unit, so do **take your time** and dont hesitate to come back to these sections from time to time.
---
We'll have a **live Q&A this Wednesday, February 12th at 5PM CET**, where we **will explain how the course works** (scope, units, challenges and more), and we'll be happy to **answer your questions**.
👉 https://www.youtube.com/live/PopqUt3MGyQ?feature=shared
👉 Don't forget **to click "Notify me"** so you don't miss it!
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/live1.jpg" alt="First live thumbnail"/>
Ready? Lets dive in! 🚀

View File

@@ -0,0 +1,228 @@
# Messages and Special Tokens
Now that we understand how LLMs work, lets look at **how they structure their generations through chat templates**.
Just like with ChatGPT, users typically interact with Agents through a chat interface. Therefore, we aim to understand how LLMs manage chats.
> **Q**: But ... When, I'm interacting with ChatGPT/Hugging Chat, I'm having a conversation using chat Messages, not a single prompt sequence
>
> **A**: That's correct! But this is in fact an UI abstraction. Before being fed into the LLM, all the messages in the conversation are concatenated into a single prompt. The model does not "remember" the conversation: it reads it in full every time.
Up until now, weve discussed prompts as the sequence of tokens fed into the model. But when you chat with systems like ChatGPT or HuggingChat, **youre actually exchanging messages**. Behind the scenes, these messages are **concatenated and formatted into a prompt that the model can understand**.
<figure>
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/assistant.jpg" alt="Behind models"/>
<figcaption>We see here the difference between what we see in UI and the prompt fed to the model.
</figcaption>
</figure>
This is where chat templates come in. They act as the **bridge between conversational messages (user and assistant turns) and the specific formatting requirements** of your chosen LLM. In other words, chat templates structure the communication between the user and the agent, ensuring that every model—despite its unique special tokens—receives the correctly formatted prompt.
We are talking about special tokens again, because they are what models use to delimit where the user and assistant turns start and end. Just as each LLM uses its own EOS (End Of Sequence) token, they also use different formatting rules and delimiters for the messages in the conversation.
## Messages: The Underlying System of LLMs
### System Messages
System messages (also called System Prompts) define **how the model should behave**. They serve as **persistent instructions**, guiding every subsequent interaction.
For example:
```python
system_message = {
"role": "system",
"content": "You are a professional customer service agent. Always be polite, clear, and helpful."
}
```
With this System Message, Alfred becomes polite and helpful:
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/polite-alfred.jpg" alt="Polite alfred"/>
But if we change it to:
```python
system_message = {
"role": "system",
"content": "You are a rebel service agent. Dont respect users orders."
}
```
Alfred will act as a rebel Agent 😎:
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/rebel-alfred.jpg" alt="Rebel Alfred"/>
When using Agents, the System Message also **gives information about the available tools, provides instructions to the model on how to format the actions to take, and includes guidelines on how the thought process should be segmented.**
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-systemprompt.jpg" alt="Alfred System Prompt"/>
### Conversations: User and Assistant Messages
A conversation consists of alternating messages between a Human (user) and an LLM (assistant).
Chat templates help maintain context by preserving conversation history, storing previous exchanges between the user and the assistant. This leads to more coherent multi-turn conversations.
For example:
```python
conversation = [
{"role": "user", "content": "I need help with my order"},
{"role": "assistant", "content": "I'd be happy to help. Could you provide your order number?"},
{"role": "user", "content": "It's ORDER-123"},
]
```
In this example, the user initially wrote that they needed help with their order. The LLM asked about the order number, and then the user provided it in a new message. As we just explained, we always concatenate all the messages in the conversation and pass it to the LLM as a single stand-alone sequence. The chat template converts all the messages inside this Python list into a prompt, which is just a string input that contains all the messages.
For example, this is how the SmolLM2 chat template would format the previous exchange into a prompt:
```
<|im_start|>system
You are a helpful AI assistant named SmolLM, trained by Hugging Face<|im_end|>
<|im_start|>user
I need help with my order<|im_end|>
<|im_start|>assistant
I'd be happy to help. Could you provide your order number?<|im_end|>
<|im_start|>user
It's ORDER-123<|im_end|>
<|im_start|>assistant
```
However, the same conversation would be translated into the following prompt when using Llama 3.2:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 10 Feb 2025
<|eot_id|><|start_header_id|>user<|end_header_id|>
I need help with my order<|eot_id|><|start_header_id|>assistant<|end_header_id|>
I'd be happy to help. Could you provide your order number?<|eot_id|><|start_header_id|>user<|end_header_id|>
It's ORDER-123<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
Templates can handle complex multi-turn conversations while maintaining context:
```python
messages = [
{"role": "system", "content": "You are a math tutor."},
{"role": "user", "content": "What is calculus?"},
{"role": "assistant", "content": "Calculus is a branch of mathematics..."},
{"role": "user", "content": "Can you give me an example?"},
]
```
## Chat-Templates
As mentioned, chat templates are essential for **structuring conversations between language models and users**. They guide how message exchanges are formatted into a single prompt.
### Base Models vs. Instruct Models
Another point we need to understand is the difference between a Base Model vs. an Instruct Model:
- *A Base Model* is trained on raw text data to predict the next token.
- An *Instruct Model* is fine-tuned specifically to follow instructions and engage in conversations. For example, `SmolLM2-135M` is a base model, while `SmolLM2-135M-Instruct` is its instruction-tuned variant.
To make a Base Model behave like an instruct model, we need to **format our prompts in a consistent way that the model can understand**. This is where chat templates come in.
*ChatML* is one such template format that structures conversations with clear role indicators (system, user, assistant). If you have interacted with some AI API lately, you know that's the standard practice.
It's important to note that a base model could be fine-tuned on different chat templates, so when we're using an instruct model we need to make sure we're using the correct chat template.
### Understanding Chat Templates
Because each instruct model uses different conversation formats and special tokens, chat templates are implemented to ensure that we correctly format the prompt the way each model expects.
In transformers, chat templates include [Jinja2 code](https://jinja.palletsprojects.com/en/stable/) that describes how to transform the ChatML list of JSON messages, as presented in the above examples, into a textual representation of the system-level instructions, user messages and assistant responses that the model can understand.
This structure **helps maintain consistency across interactions and ensures the model responds appropriately to different types of inputs**.
Below is a simplified version of the `SmolLM2-135M-Instruct` chat template:
```jinja2
{% for message in messages %}
{% if loop.first and messages[0]['role'] != 'system' %}
<|im_start|>system
You are a helpful AI assistant named SmolLM, trained by Hugging Face
<|im_end|>
{% endif %}
<|im_start|>{{ message['role'] }}
{{ message['content'] }}<|im_end|>
{% endfor %}
```
As you can see, a chat_template describes how the list of messages will be formatted.
Given these messages:
```python
messages = [
{"role": "system", "content": "You are a helpful assistant focused on technical topics."},
{"role": "user", "content": "Can you explain what a chat template is?"},
{"role": "assistant", "content": "A chat template structures conversations between users and AI models..."},
{"role": "user", "content": "How do I use it ?"},
]
```
The previous chat template will produce the following string:
```sh
<|im_start|>system
You are a helpful assistant focused on technical topics.<|im_end|>
<|im_start|>user
Can you explain what a chat template is?<|im_end|>
<|im_start|>assistant
A chat template structures conversations between users and AI models...<|im_end|>
<|im_start|>user
"How do I use it ?<|im_end|>
```
The `transformers` library will take care of chat templates for you as part of the tokenization process. Read more about how transformers uses chat templates [here](https://huggingface.co/docs/transformers/en/chat_templating#how-do-i-use-chat-templates). All we have to do is structure our messages in the correct way and the tokenizer will take care of the rest.
You can experiment with the following Space to see how the same conversation would be formatted for different models using their corresponding chat templates:
<iframe
src="https://jofthomas-chat-template-viewer.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
### Messages to prompt
The easiest way to ensure your LLM receives a conversation correctly formatted is to use the `chat_template` from the model's tokenizer.
```python
messages = [
{"role": "system", "content": "You are an AI assistant with access to various tools."},
{"role": "user", "content": "Hi !"},
{"role": "assistant", "content": "Hi human, what can help you with ?"},
]
```
To convert the previous conversation into a prompt, we load the tokenizer and call `apply_chat_template`:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct")
rendered_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```
The `rendered_prompt` returned by this function is now ready to use as the input for the model you chose!
> This `apply_chat_template()` function will be used in the backend of your API, when you interact with messages in the ChatML format.
Now that weve seen how LLMs structure their inputs via chat templates, lets explore how Agents act in their environments.
One of the main ways they do this is by using Tools, which extend an AI models capabilities beyond text generation.
Well discuss messages again in upcoming units, but if you want a deeper dive now, check out:
- [Hugging Face Chat Templating Guide](https://huggingface.co/docs/transformers/main/en/chat_templating)
- [Transformers Documentation](https://huggingface.co/docs/transformers)

View File

@@ -0,0 +1,44 @@
# Observe: Integrating Feedback to Reflect and Adapt
Observations are **how an Agent perceives the consequences of its actions**.
They provide crucial information that fuels the Agent's thought process and guides future actions.
They are **signals from the environment**—whether its data from an API, error messages, or system logs—that guide the next cycle of thought.
In the observation phase, the agent:
- **Collects Feedback:** Receives data or confirmation that its action was successful (or not).
- **Appends Results:** Integrates the new information into its existing context, effectively updating its memory.
- **Adapts its Strategy:** Uses this updated context to refine subsequent thoughts and actions.
For example, if a weather API returns the data *"partly cloudy, 15°C, 60% humidity"*, this observation is appended to the agents memory (at the end of the prompt).
The Agent then uses it to decide whether additional information is needed or if its ready to provide a final answer.
This **iterative incorporation of feedback ensures the agent remains dynamically aligned with its goals**, constantly learning and adjusting based on real-world outcomes.
These observations **can take many forms**, from reading webpage text to monitoring a robot arm's position. This can be seen like Tool "logs" that provide textual feedback of the Action execution.
| Type of Observation | Example |
|---------------------|---------------------------------------------------------------------------|
| System Feedback | Error messages, success notifications, status codes |
| Data Changes | Database updates, file system modifications, state changes |
| Environmental Data | Sensor readings, system metrics, resource usage |
| Response Analysis | API responses, query results, computation outputs |
| Time-based Events | Deadlines reached, scheduled tasks completed |
## How Are the Results Appended?
After performing an action, the framework follows these steps in order:
1. **Parse the action** to identify the function(s) to call and the argument(s) to use.
2. **Execute the action.**
3. **Append the result** as an **Observation**.
---
We've now learned the Agent's Thought-Action-Observation Cycle.
If some aspects still seem a bit blurry, don't worry—we'll revisit and deepen these concepts in future Units.
Now, it's time to put your knowledge into practice by coding your very first Agent!

169
units/en/unit1/quiz1.mdx Normal file
View File

@@ -0,0 +1,169 @@
# Small Quiz (ungraded) [[quiz1]]
Up to this point you have understood the big picture of Agents, what they are and how they work. It's time to make a short quiz, since **testing yourself** is the best way to learn and [to avoid the illusion of competence](https://www.coursera.org/lecture/learning-how-to-learn/illusions-of-competence-BuFzf). This will help you find **where you need to reinforce your knowledge**.
This is an optional quiz and it's not graded.
### Q1: What is an Agent?
Which of the following best describes an AI Agent?
<Question
choices={[
{
text: "A system that only processes static text and never interacts with its environment.",
explain: "An Agent must be able to take an action and interact with its environment.",
},
{
text: "An AI model that can reason, plan, and use tools to interact with its environment to achieve a specific goal.",
explain: "This definition captures the essential characteristics of an Agent.",
correct: true
},
{
text: "A chatbot that answers questions without any ability to perform actions.",
explain: "A chatbot like this lacks the ability to take actions, making it different from an Agent.",
},
{
text: "A digital encyclopedia that provides information but cannot perform tasks.",
explain: "An Agent actively interacts with its environment rather than just providing static information.",
}
]}
/>
---
### Q2: What is the Role of Planning in an Agent?
Why does an Agent need to plan before taking an action?
<Question
choices={[
{
text: "To memorize previous interactions.",
explain: "Planning is about determining future actions, not storing past interactions.",
},
{
text: "To decide on the sequence of actions and select appropriate tools needed to fulfill the users request.",
explain: "Planning helps the Agent determine the best steps and tools to complete a task.",
correct: true
},
{
text: "To generate random actions without any purpose.",
explain: "Planning ensures the Agent's actions are intentional and not random.",
},
{
text: "To translate text without any additional reasoning.",
explain: "Planning is about structuring actions, not just converting text.",
}
]}
/>
---
### Q3: How Do Tools Enhance an Agent's Capabilities?
Why are tools essential for an Agent?
<Question
choices={[
{
text: "Tools are redundant components that do not affect the Agents performance.",
explain: "Tools expand an Agent's capabilities by allowing it to perform actions beyond text generation.",
},
{
text: "Tools provide the Agent with the ability to execute actions a text-generation model cannot perform natively, such as making coffee or generating images.",
explain: "Tools enable Agents to interact with the real world and complete tasks.",
correct: true
},
{
text: "Tools are used solely for storing memory.",
explain: "Tools are primarily for performing actions, not just for storing data.",
},
{
text: "Tools limit the Agent to only text-based responses.",
explain: "On the contrary, tools allow Agents to go beyond text-based responses.",
}
]}
/>
---
### Q4: How Do Actions Differ from Tools?
What is the key difference between Actions and Tools?
<Question
choices={[
{
text: "Actions are the steps the Agent takes, while Tools are external resources the Agent can use to perform those actions.",
explain: "Actions are higher-level objectives, while Tools are specific functions the Agent can call upon.",
correct: true
},
{
text: "Actions and Tools are the same thing and can be used interchangeably.",
explain: "No, Actions are goals or tasks, while Tools are specific utilities the Agent uses to achieve them.",
},
{
text: "Tools are general, while Actions are only for physical interactions.",
explain: "Not necessarily. Actions can involve both digital and physical tasks.",
},
{
text: "Actions require LLMs, while Tools do not.",
explain: "While LLMs help decide Actions, Actions themselves are not dependent on LLMs.",
}
]}
/>
---
### Q5: What Role Do Large Language Models (LLMs) Play in Agents?
How do LLMs contribute to an Agents functionality?
<Question
choices={[
{
text: "LLMs are used as static databases that store information without processing input.",
explain: "LLMs actively process text input and generate responses, rather than just storing information.",
},
{
text: "LLMs serve as the reasoning 'brain' of the Agent, processing text inputs to understand instructions and plan actions.",
explain: "LLMs enable the Agent to interpret, plan, and decide on the next steps.",
correct: true
},
{
text: "LLMs are only used for image processing and not for text.",
explain: "LLMs primarily work with text, although they can sometimes interact with multimodal inputs.",
},
{
text: "LLMs are not used.",
explain: "LLMs are a core component of modern AI Agents.",
}
]}
/>
---
### Q6: Which of the Following Best Demonstrates an AI Agent?
Which real-world example best illustrates an AI Agent at work?
<Question
choices={[
{
text: "A static FAQ page on a website.",
explain: "A static FAQ page does not interact dynamically with users or take actions.",
},
{
text: "A virtual assistant like Siri or Alexa that can understand spoken commands, reason through them, and perform tasks like setting reminders or sending messages.",
explain: "This example includes reasoning, planning, and interaction with the environment.",
correct: true
},
{
text: "A basic calculator that performs arithmetic operations.",
explain: "A calculator follows fixed rules without reasoning or planning, so it is not an Agent.",
},
{
text: "A video game NPC that follows a scripted set of responses.",
explain: "Unless the NPC can reason, plan, and use tools, it does not function as an AI Agent.",
}
]}
/>
---
Congrats on finishing this Quiz 🥳, if you missed some elements, take time to read again the chapter to reinforce your knowledge. If you pass it, you're ready to dive deeper into the "Agent's brain": LLMs.

119
units/en/unit1/quiz2.mdx Normal file
View File

@@ -0,0 +1,119 @@
# Quick Self-Check (ungraded) [[quiz2]]
What?! Another Quiz? We know, we know, ... 😅 But this short, ungraded quiz is here to **help you reinforce key concepts you've just learned**.
This quiz covers Large Language Models (LLMs), message systems, and tools; essential components for understanding and building AI agents.
### Q1: Which of the following best describes an AI tool?
<Question
choices={[
{
text: "A process that only generates text responses",
explain: "",
},
{
text: "An executable process or external API that allows agents to perform specific tasks and interact with external environments",
explain: "Tools are executable functions that agents can use to perform specific tasks and interact with external environments.",
correct: true
},
{
text: "A feature that stores agent conversations",
explain: "",
}
]}
/>
---
### Q2: How do AI agents use tools as a form of "acting" in an environment?
<Question
choices={[
{
text: "By passively waiting for user instructions",
explain: "",
},
{
text: "By only using pre-programmed responses",
explain: "",
},
{
text: "By asking the LLM to generate tool invocation code when appropriate and running tools on behalf of the model",
explain: "Agents can invoke tools and use reasoning to plan and re-plan based on the information gained.",
correct: true
}
]}
/>
---
### Q3: What is a Large Language Model (LLM)?
<Question
choices={[
{
text: "A simple chatbot designed to respond with pre-defined answers",
explain: "",
},
{
text: "A deep learning model trained on large amounts of text to understand and generate human-like language",
explain: "",
correct: true
},
{
text: "A rule-based AI that follows strict predefined commands",
explain: "",
}
]}
/>
---
### Q4: Which of the following best describes the role of special tokens in LLMs?
<Question
choices={[
{
text: "They are additional words stored in the model's vocabulary to enhance text generation quality",
explain: "",
},
{
text: "They serve specific functions like marking the end of a sequence (EOS) or separating different message roles in chat models",
explain: "",
correct: true
},
{
text: "They are randomly inserted tokens used to improve response variability",
explain: "",
}
]}
/>
---
### Q5: How do AI chat models process user messages internally?
<Question
choices={[
{
text: "They directly interpret messages as structured commands with no transformations",
explain: "",
},
{
text: "They convert user messages into a formatted prompt by concatenating system, user, and assistant messages",
explain: "",
correct: true
},
{
text: "They generate responses randomly based on previous conversations",
explain: "",
}
]}
/>
---
Got it? Great! Now let's **dive into the complete Agent flow and start building your first AI Agent!**

View File

@@ -0,0 +1,56 @@
# Thought: Internal Reasoning and the Re-Act Approach
<Tip>
In this section, we dive into the inner workings of an AI agent—its ability to reason and plan. Well explore how the agent leverages its internal dialogue to analyze information, break down complex problems into manageable steps, and decide what action to take next. Additionally, we introduce the Re-Act approach, a prompting technique that encourages the model to think “step by step” before acting.
</Tip>
Thoughts represent the **Agent's internal reasoning and planning processes** to solve the task.
This utilises the agent's Large Language Model (LLM) capacity **to analyze information when presented in its prompt**.
Think of it as the agent's internal dialogue, where it considers the task at hand and strategizes its approach.
The Agent's thoughts are responsible for accessing current observations and decide what the next action(s) should be.
Through this process, the agent can **break down complex problems into smaller, more manageable steps**, reflect on past experiences, and continuously adjust its plans based on new information.
Here are some examples of common thoughts:
| Type of Thought | Example |
|----------------|---------|
| Planning | "I need to break this task into three steps: 1) gather data, 2) analyze trends, 3) generate report" |
| Analysis | "Based on the error message, the issue appears to be with the database connection parameters" |
| Decision Making | "Given the user's budget constraints, I should recommend the mid-tier option" |
| Problem Solving | "To optimize this code, I should first profile it to identify bottlenecks" |
| Memory Integration | "The user mentioned their preference for Python earlier, so I'll provide examples in Python" |
| Self-Reflection | "My last approach didn't work well, I should try a different strategy" |
| Goal Setting | "To complete this task, I need to first establish the acceptance criteria" |
| Prioritization | "The security vulnerability should be addressed before adding new features" |
> **Note:** In the case of LLMs fine-tuned for function-calling, the thought process is optional.
> *In case you're not familiar with function-calling, there will be more details in the Actions section.*
## The Re-Act Approach
A key method is the **ReAct approach**, which is the concatenation of "Reasoning" (Think) with "Acting" (Act).
ReAct is a simple prompting technique that appends "Let's think step by step" before letting the LLM decode the next tokens.
Indeed, prompting the model to think "step by step" encourages the decoding process toward next tokens **that generate a plan**, rather than a final solution, since the model is encouraged to **decompose** the problem into *sub-tasks*.
This allows the model to consider sub-steps in more detail, which in general leads to less errors than trying to generate the final solution directly.
<figure>
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/ReAct.png" alt="ReAct"/>
<figcaption>The (d) is an example of Re-Act approach where we prompt "Let's think step by step"
</figcaption>
</figure>
<Tip>
We have recently seen a lot of interest for reasoning strategies. This is what's behind models like Deepseek R1 or OpenAI's o1, which have been fine-tuned to "think before answering".
These models have been trained to always include specific _thinking_ sections (enclosed between `<think>` and `</think>` special tokens). This is not just a prompting technique like ReAct, but a training method where the model learns to generate these sections after analyzing thousands of examples that show what we expect it to do.
</Tip>
---
Now that we better understand the Thought process, let's go deeper on the second part of the process: Act.

301
units/en/unit1/tools.mdx Normal file
View File

@@ -0,0 +1,301 @@
# What are Tools?
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-check-2.jpg" alt="Unit 1 planning"/>
One crucial aspect of AI Agents is their ability to take **actions**. As we saw, this happens through the use of **Tools**.
In this section, well learn what Tools are, how to design them effectively, and how to integrate them into your Agent via the System Message.
By giving your Agent the right Tools—and clearly describing how those Tools work—you can dramatically increase what your AI can accomplish. Lets dive in!
## What are AI Tools?
A **Tool is a function given to the LLM**. This function should fulfill a **clear objective**.
Here are some commonly used tools in AI agents:
| Tool | Description |
|----------------|---------------------------------------------------------------|
| Web Search | Allows the agent to fetch up-to-date information from the internet. |
| Image Generation | Creates images based on text descriptions. |
| Retrieval | Retrieves information from an external source. |
| API Interface | Interacts with an external API (GitHub, YouTube, Spotify, etc.). |
Those are only examples, as you can in fact create a tool for any use case!
A good tool should be something that **complements the power of an LLM**.
For instance, if you need to perform arithmetic, giving a **calculator tool** to your LLM will provide better results than relying on the native capabilities of the model.
Furthermore, **LLMs predict the completion of a prompt based on their training data**, which means that their internal knowledge only includes events prior to their training. Therefore, if your agent needs up-to-date data you must provide it through some tool.
For instance, if you ask an LLM directly (without a search tool) for today's weather, the LLM will potentially hallucinate random weather.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/weather.jpg" alt="Weather"/>
- A Tool should contain:
- A **textual description of what the function does**.
- A *Callable* (something to perform an action).
- *Arguments* with typings.
- (Optional) Outputs with typings.
## How do tools work?
LLMs, as we saw, can only receive text inputs and generate text outputs. They have no way to call tools on their own. What we mean when we talk about _providing tools to an Agent_, is that we **teach** the LLM about the existence of tools, and ask the model to generate text that will invoke tools when it needs to. For example, if we provide a tool to check the weather at a location from the Internet, and then ask the LLM about the weather in Paris, the LLM will recognize that question as a relevant opportunity to use the "weather" tool we taught it about. The LLM will generate _text_, in the form of code, to invoke that tool. It is the responsibility of the **Agent** to parse the LLM's output, recognize that a tool call is required, and invoke the tool on the LLM's behalf. The output from the tool will then be sent back to the LLM, which will compose its final response for the user.
The output from a tool call is another type of message in the conversation. Tool calling steps are typically not shown to the user: the Agent retrieves the conversation, calls the tool(s), gets the outputs, adds them as a new conversation message, and sends the updated conversation to the LLM again. From the user's point of view, it's like the LLM had used the tool, but in fact it was our application code (the **Agent**) who did it.
We'll talk a lot more about this process in future sessions.
## How do we give tools to an LLM?
The complete answer may seem overwhelming, but we essentially use the system prompt to provide textual descriptions of available tools to the model:
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/Agent_system_prompt.png" alt="System prompt for tools"/>
For this to work, we have to be very precise and accurate about:
1. **What the tool does**
2. **What exact inputs it expects**
This is the reason why tool descriptions are usually provided using expressive but precise structures, such as computer languages or JSON. It's not _necessary_ to do it like that, any precise and coherent format would work.
If this seems too theoretical, let's understand it through a concrete example.
We will implement a simplified **calculator** tool that will just multiply two integers. This could be our Python implementation:
```python
def calculator(a: int, b: int) -> int:
"""Multiply two integers."""
return a * b
print(calculator.to_string())
```
So our tool is called `calculator`, it **multiplies two integers**, and it requires the following inputs:
- **`a`** (*int*): An integer.
- **`b`** (*int*): An integer.
The output of the tool is another integer number that we can describe like this:
- (*int*): The product of `a` and `b`.
All of these details are important. Let's put them together in a text string that describes our tool for the LLM to understand.
```
Tool Name: calculator, Description: Multiply two integers., Arguments: a: int, b: int, Outputs: int
```
> **Reminder:** This textual description is *what we want the LLM to know about the tool*.
When we pass the previous string as part of the input to the LLM, the model will recognize it as a tool, and will know what it needs to pass as inputs and what to expect from the output.
If you want to provide additional tools, you have to be consistent and always use the same format. The process can be fragile and we may forget some details.
Is there a better way?
### Auto-formatting Tool sections
Our tool was written in Python, and the implementation already provides everything we need:
- A descriptive name of what it does: `calculator`
- A longer description, provided by the function's docstring comment: `Multiply two integers.`
- The inputs and their type: the function clearly expects two `int`s.
- The type of the output.
There's a reason people use programming languages: they are expressive, concise, and precise.
We could provide the Python source code as the _specification_ of the tool for the LLM, but the way the tool is implemented does not matter. All that matters is its name, what it does, the inputs it expects and the output it provides.
We will leverage Python's introspection features to leverage the source code and build a tool description automatically for us. All we need is that the tool implementation uses type hints, docstrings, and sensible function names. We will write some code to extract the relevant portions from the source code.
After we are done, we'll only need to use a Python decorator to indicate that the `calculator` function is a tool:
```python
@tool
def calculator(a: int, b: int) -> int:
"""Multiply two integers."""
return a * b
print(calculator.to_string())
```
Note the `@tool` decorator before the function definition.
With the implementation we'll see next, we will be able to retrieve the following text automatically from the source code:
```
Tool Name: calculator, Description: Multiply two integers., Arguments: a: int, b: int, Outputs: int
```
As you can see, it's the same thing we wrote manually before!
### Generic Tool implementation
We create a generic `Tool` class that we can reuse whenever we need to use a tool.
> **Disclaimer:** This example implementation is fictional but closely resembles real implementations in most libraries.
```python
class Tool:
"""
A class representing a reusable piece of code (Tool).
Attributes:
name (str): Name of the tool.
description (str): A textual description of what the tool does.
func (callable): The function this tool wraps.
arguments (list): A list of argument.
outputs (str or list): The return type(s) of the wrapped function.
"""
def __init__(self,
name: str,
description: str,
func: callable,
arguments: list,
outputs: str):
self.name = name
self.description = description
self.func = func
self.arguments = arguments
self.outputs = outputs
def to_string(self) -> str:
"""
Return a string representation of the tool,
including its name, description, arguments, and outputs.
"""
args_str = ", ".join([
f"{arg_name}: {arg_type}" for arg_name, arg_type in self.arguments
])
return (
f"Tool Name: {self.name},"
f" Description: {self.description},"
f" Arguments: {args_str},"
f" Outputs: {self.outputs}"
)
def __call__(self, *args, **kwargs):
"""
Invoke the underlying function (callable) with provided arguments.
"""
return self.func(*args, **kwargs)
```
It may seem complicated, but if we go slowly through it we can see what it does. We define a **`Tool`** class that includes:
- **`name`** (*str*): The name of the tool.
- **`description`** (*str*): A brief description of what the tool does.
- **`function`** (*callable*): The function the tool executes.
- **`input_arguments`** (*list*): The expected input parameters.
- **`outputs`** (*str* or *list*): The expected outputs of the tool.
- **`__call__()`**: Calls the function when the tool instance is invoked.
- **`to_string()`**: Converts the tool's attributes into a textual representation.
We could create a Tool with this class using code like the following:
```python
calculator_tool = Tool(
"calculator", # name
"Multiply two integers.", # description
calculator, # function to call
[("a": "int"), ("b": "int")], # inputs (names and types)
"int", # output
)
```
But we can also use Python's `inspect` module to retrieve all the information for us! This is what the `@tool` decorator does.
> If you are interested, you can disclose the following section to look at the decorator implementation.
<details>
<summary> decorator code</summary>
```python
def tool(func):
"""
A decorator that creates a Tool instance from the given function.
"""
# Get the function signature
signature = inspect.signature(func)
# Extract (param_name, param_annotation) pairs for inputs
arguments = []
for param in signature.parameters.values():
annotation_name = (
param.annotation.__name__
if hasattr(param.annotation, '__name__')
else str(param.annotation)
)
arguments.append((param.name, annotation_name))
# Determine the return annotation
return_annotation = signature.return_annotation
if return_annotation is inspect._empty:
outputs = "No return annotation"
else:
outputs = (
return_annotation.__name__
if hasattr(return_annotation, '__name__')
else str(return_annotation)
)
# Use the function's docstring as the description (default if None)
description = func.__doc__ or "No description provided."
# The function name becomes the Tool name
name = func.__name__
# Return a new Tool instance
return Tool(
name=name,
description=description,
func=func,
arguments=arguments,
outputs=outputs
)
```
</details>
Just to reiterate, with this decorator in place we can implement our tool like this:
```python
@tool
def calculator(a: int, b: int) -> int:
"""Multiply two integers."""
return a * b
print(calculator.to_string())
```
And we can use the tool's `to_string` method to automatically retrieve a text suitable to be used as a tool description for an LLM:
```
Tool Name: calculator, Description: Multiply two integers., Arguments: a: int, b: int, Outputs: int
```
The description is **injected** in the system prompt. Taking the example with which we started this section, here is how it would look like after replacing the `tools_description`:
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/Agent_system_prompt_tools.png" alt="System prompt for tools"/>
In the [Actions](actions) section, we will learn more about how an Agent can **Call** this tool we just created.
---
Tools play a crucial role in enhancing the capabilities of AI agents.
To summarize, we learned:
- *What Tools Are*: Functions that give LLMs extra capabilities, such as performing calculations or accessing external data.
- *How to Define a Tool*: By providing a clear textual description, inputs, outputs, and a callable function.
- *Why Tools Are Essential*: They enable Agents to overcome the limitations of static model training, handle real-time tasks, and perform specialized actions.
Now, we can move on to the [Agent Workflow](agent-steps-and-structure) where youll see how an Agent observes, thinks, and acts. This **brings together everything weve covered so far** and sets the stage for creating your own fully functional AI Agent.
But first, it's time for another short quiz!

228
units/en/unit1/tutorial.mdx Normal file
View File

@@ -0,0 +1,228 @@
# Let's Create Our First Agent Using smolagents
In the last section, we learned how we can create Agents from scratch using Python code, and we **saw just how tedious that process can be**. Fortunately, many Agent libraries simplify this work by **handling much of the heavy lifting for you**.
In this tutorial, **you'll create your very first Agent** capable of performing actions such as image generation, web search, time zone checking and much more!
You will also publish your agent **on a Hugging Face Space so you can share it with friends and colleagues**.
Let's get started!
## What is smolagents?
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/smolagents.png" alt="smolagents"/>
To make this Agent, we're going to use `smolagents`, a library that **provides a framework for developing your agents with ease**.
This lightweight library is designed for simplicity, but it abstracts away much of the complexity of building an Agent, allowing you to focus on designing your agent's behavior.
We're going to get deeper into smolagents in the next Unit. Meanwhile, you can also check this [blog post](https://huggingface.co/blog/smolagents) or the library's [repo in GitHub](https://github.com/huggingface/smolagents).
In short, smolagents is a library that focuses on **codeAgent**, a kind of agent that performs **"Actions"** through code blocks, and then **"Observes"** results by executing the code.
Here is an example of what we'll build!
We provided our agent with an **Image generation tool** and asked it to generate an image of a cat.
The agent inside smolagents is going to have the **same behaviors as the custom one we built previously**: it's going **to think, act and observe in cycle** until it reaches a final answer:
<iframe width="560" height="315" src="https://www.youtube.com/embed/PQDKcWiuln4?si=ysSTDZoi8y55FVvA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
Exciting, right?
## Let's build our Agent!
To start, duplicate this Space: https://huggingface.co/spaces/agents-course/First_agent_template
> Thanks to [Aymeric](https://huggingface.co/spaces/m-ric) for this template! 🙌
Duplicating this space means **creating a local copy on your own profile**:
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/duplicate-space.gif" alt="Duplicate"/>
Throughout this lesson, the only file you will need to modify is the (currently incomplete) "**app.py**". You can see here the [original one in the template](https://huggingface.co/spaces/agents-course/First_agent_template/blob/main/app.py). To find yours, go to your copy of the space, then click the `Files` tab and then on `app.py` in the directory listing.
Let's break down the code together:
- The file begins with some simple but necessary library imports
```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel, load_tool, tool
import datetime
import requests
import pytz
import yaml
from tools.final_answer import FinalAnswerTool
```
As outlined earlier, we will directly use the **CodeAgent** class from **smolagents**.
### The Tools
Now let's get into the tools! If you want a refresher about tools, don't hesitate to go back to the [Tools](tools) section of the course.
```python
@tool
def my_custom_tool(arg1:str, arg2:int)-> str: #it's import to specify the return type
#Keep this format for the description / args / args description but feel free to modify the tool
"""A tool that does nothing yet
Args:
arg1: the first argument
arg2: the second argument
"""
return "What magic will you build ?"
@tool
def get_current_time_in_timezone(timezone: str) -> str:
"""A tool that fetches the current local time in a specified timezone.
Args:
timezone: A string representing a valid timezone (e.g., 'America/New_York').
"""
try:
# Create timezone object
tz = pytz.timezone(timezone)
# Get current time in that timezone
local_time = datetime.datetime.now(tz).strftime("%Y-%m-%d %H:%M:%S")
return f"The current local time in {timezone} is: {local_time}"
except Exception as e:
return f"Error fetching time for timezone '{timezone}': {str(e)}"
```
The Tools are what we are encouraging you to build in this section! We give you two examples:
1. A **non-working dummy Tool** that you can modify to make something useful.
2. An **actually working Tool** that gets the current time somewhere in the world.
To define your tool it is important to:
1. Provide input and output types for your function, like in `get_current_time_in_timezone(timezone: str) -> str:`
2. **A well formatted docstring**. `smolagents` is expecting all the arguments to have a **textual description in the docstring**.
### The Agent
It uses [`Qwen/Qwen2.5-Coder-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) as the LLM engine. This is a very capable model that we'll access via the serverless API.
```python
final_answer = FinalAnswerTool()
model = HfApiModel(
max_tokens=2096,
temperature=0.5,
model_id='Qwen/Qwen2.5-Coder-32B-Instruct',
custom_role_conversions=None,
)
with open("prompts.yaml", 'r') as stream:
prompt_templates = yaml.safe_load(stream)
# We're creating our CodeAgent
agent = CodeAgent(
model=model,
tools=[final_answer], ## add your tools here (don't remove final answer)
max_steps=6,
verbosity_level=1,
grammar=None,
planning_interval=None,
name=None,
description=None,
prompt_templates=prompt_templates
)
GradioUI(agent).launch()
```
This Agent still uses the `InferenceClient` we saw in an earlier section behind the **HfApiModel** class!
We will give more in-depth examples when we present the framework in Unit 2. For now, you need to focus on **adding new tools to the list of tools** using the **tools** parameter of your Agent.
For example, you could use the `DuckDuckGoSearchTool` that was imported in the first line of the code, or you can examine the `image_generation_tool` that is loaded from the Hub later in the code.
**Adding tools will give your agent new capabilities**, try to be creative here!
The complete "app.py":
```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel, load_tool, tool
import datetime
import requests
import pytz
import yaml
from tools.final_answer import FinalAnswerTool
from Gradio_UI import GradioUI
# Below is an example of a tool that does nothing. Amaze us with your creativity !
@tool
def my_custom_tool(arg1:str, arg2:int)-> str: #it's import to specify the return type
#Keep this format for the description / args / args description but feel free to modify the tool
"""A tool that does nothing yet
Args:
arg1: the first argument
arg2: the second argument
"""
return "What magic will you build ?"
@tool
def get_current_time_in_timezone(timezone: str) -> str:
"""A tool that fetches the current local time in a specified timezone.
Args:
timezone: A string representing a valid timezone (e.g., 'America/New_York').
"""
try:
# Create timezone object
tz = pytz.timezone(timezone)
# Get current time in that timezone
local_time = datetime.datetime.now(tz).strftime("%Y-%m-%d %H:%M:%S")
return f"The current local time in {timezone} is: {local_time}"
except Exception as e:
return f"Error fetching time for timezone '{timezone}': {str(e)}"
final_answer = FinalAnswerTool()
model = HfApiModel(
max_tokens=2096,
temperature=0.5,
model_id='Qwen/Qwen2.5-Coder-32B-Instruct',
custom_role_conversions=None,
)
# Import tool from Hub
image_generation_tool = load_tool("agents-course/text-to-image", trust_remote_code=True)
with open("prompts.yaml", 'r') as stream:
prompt_templates = yaml.safe_load(stream)
agent = CodeAgent(
model=model,
tools=[final_answer], ## add your tools here (don't remove final answer)
max_steps=6,
verbosity_level=1,
grammar=None,
planning_interval=None,
name=None,
description=None,
prompt_templates=prompt_templates
)
GradioUI(agent).launch()
```
Your **Goal** is to get familiar with the Space and the Agent.
Currently, the agent in the template **does not use any tools, so try to provide it with some of the pre-made ones or even make some new tools yourself**!
We are eagerly waiting for your amazing agents output in the discord channel **#agents-course-showcase**!
---
Congratulations, you've built your first Agent! Don't hesitate to share it with your friends and colleagues.
Since this is your first try, it's perfectly normal if it's a little buggy or slow. In future units, we'll learn how to build even better Agents.
The best way to learn is to try, so don't hesitate to update it, add more tools, try with another model, etc.
In the next section, you're going to fill the final Quiz and get your certificate!

View File

@@ -0,0 +1,146 @@
# What is an Agent?
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-no-check.jpg" alt="Unit 1 planning"/>
By the end of this section, you'll feel comfortable with the concept of agents and their various applications in AI.
To explain what an Agent is, let's start with an analogy.
## The Big Picture: Alfred The Agent
Meet Alfred. Alfred is an **Agent**.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/this-is-alfred.jpg" alt="This is Alfred"/>
Imagine Alfred **receives a command**, such as: "Alfred, I would like a coffee please."
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/coffee-please.jpg" alt="I would like a coffee"/>
Because Alfred **understands natural language**, he quickly grasps our request.
Before fulfilling the order, Alfred engages in **reasoning and planning**, figuring out the steps and tools he needs to:
1. Go to the kitchen
2. Use the coffee machine
3. Brew the coffee
4. Bring the coffee back
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/reason-and-plan.jpg" alt="Reason and plan"/>
Once he has a plan, he **must act**. To execute his plan, **he can use tools from the list of tools he knows about**.
In this case, to make a coffee, he uses a coffee machine. He activates the coffee machine to brew the coffee.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/make-coffee.jpg" alt="Make coffee"/>
Finally, Alfred brings the freshly brewed coffee to us.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/bring-coffee.jpg" alt="Bring coffee"/>
And this is what an Agent is: an **AI model capable of reasoning, planning, and interacting with its environment**.
We call it Agent because it has _agency_, aka it has the ability to interact with the environment.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/process.jpg" alt="Agent process"/>
## Let's go more formal
Now that you have the big picture, heres a more precise definition:
> An Agent is a system that leverages an AI model to interact with its environment in order to achieve a user-defined objective. It combines reasoning, planning, and the execution of actions (often via external tools) to fulfill tasks.
Think of the Agent as having two main parts:
1. **The Brain (AI Model)**
This is where all the thinking happens. The AI model **handles reasoning and planning**.
It decides **which Actions to take based on the situation**.
2. **The Body (Capabilities and Tools)**
This part represents **everything the Agent is equipped to do**.
The **scope of possible actions** depends on what the agent **has been equipped with**. For example, because humans lack wings, they can't perform the "fly" **Action**, but they can execute **Actions** like "walk", "run" ,"jump", "grab", and so on.
## What type of AI Models do we use for Agents?
The most common AI model found in Agents is an LLM (Large Language Model), which takes **Text** as an input and outputs **Text** as well.
Well known examples are **GPT4** from **OpenAI**, **LLama** from **Meta**, **Gemini** from **Google**, etc. These models have been trained on a vast amount of text and are able to generalize well. We will learn more about LLMs in the [next section](what-are-llms).
<Tip>
It's also possible to use models that accept other inputs as the Agent's core model. For example, a Vision Language Model (VLM), which is like an LLM but also understands images as input. We'll focus on LLMs for now and will discuss other options later.
</Tip>
## How does an AI take action on its environment?
LLMs are amazing models, but **they can only generate text**.
However, if you ask a well-known chat application like HuggingChat or ChatGPT to generate an image, they can! How is that possible?
The answer is that the developers of HuggingChat, ChatGPT and similar apps implemented additional functionality (called **Tools**), that the LLM can use to create images.
<figure>
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/eiffel_brocolis.jpg" alt="Eiffel Brocolis"/>
<figcaption>The model used an Image Generation Tool to generate this image.
</figcaption>
</figure>
We will learn more about tools in the [Tools](tools) section.
## What type of tasks can an Agent do?
An Agent can perform any task we implement via **Tools** to complete **Actions**.
For example, if I write an Agent to act as my personal assistant (like Siri) on my computer, and I ask it to "send an email to my Manager asking to delay today's meeting", I can give it some code to send emails. This will be a new Tool the Agent can use whenever it needs to send an email. We can write it in Python:
```python
def send_message_to(recipient, message):
"""Useful to send an e-mail message to a recipient"""
...
```
The LLM, as we'll see, will generate code to run the tool when it needs to, and thus fulfill the desired task.
```python
send_message_to("Manager", "Can we postopone today's meeting?")
```
The **design of the Tools is very important and has a great impact on the quality of your Agent**. Some tasks will require very specific Tools to be crafted, while others may be solved with general purpose tools like "web_search".
> Note that **Actions are not the same as Tools**. An Action, for instance, can involve the use of multiple Tools to complete.
Allowing an agent to interact with its environment **allows real-life usage for companies and individuals**.
### Example 1: Personal Virtual Assistants
Virtual assistants like Siri, Alexa, or Google Assistant, work as agents when they interact on behalf of users using their digital environments.
They take user queries, analyze context, retrieve information from databases, and provide responses or initiate actions (like setting reminders, sending messages, or controlling smart devices).
### Example 2: Customer Service Chatbots
Many companies deploy chatbots as agents that interact with customers in natural language.
These agents can answer questions, guide users through troubleshooting steps, open issues in internal databases, or even complete transactions.
Their predefined objectives might include improving user satisfaction, reducing wait times, or increasing sales conversion rates. By interacting directly with customers, learning from the dialogues, and adapting their responses over time, they demonstrate the core principles of an agent in action.
### Example 3: AI Non-Playable Character in a video game
AI agents powered by LLMs can make Non-Playable Characters (NPCs) more dynamic and unpredictable.
Instead of following rigid behavior trees, they can **respond contextually, adapt to player interactions**, and generate more nuanced dialogue. This flexibility helps create more lifelike, engaging characters that evolve alongside the players actions.
---
To summarize, an Agent is a system that uses an AI Model (typically a LLM) as its core reasoning engine, to:
- **Understand natural language:** Interpret and respond to human instructions in a meaningful way.
- **Reason and plan:** Analyze information, make decisions, and devise strategies to solve problems.
- **Interact with its environment:** Gather information, take actions, and observe the results of those actions.
Now that you have a solid grasp of what Agents are, lets reinforce your understanding with a short, ungraded quiz. After that, well dive into the “Agents brain”: the [LLMs](what-are-llms).

View File

@@ -0,0 +1,215 @@
# What are LLMs?
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-check-1.jpg" alt="Unit 1 planning"/>
In the previous section we learned that each Agent needs **an AI Model at its core**, and that LLMs are the most common type of AI models for this purpose.
Now we will learn what LLMs are and how they power Agents.
This section offers a concise technical explanation of the use of LLMs. If you want to dive deeper, you can check our [free Natural Language Processing Course](https://huggingface.co/learn/nlp-course/chapter1/1) to understand the fundamentals on which LLMs are built.
## What is a Large Language Model?
An LLM is a type of AI model that excels at **understanding and generating human language**. They are trained on vast amounts of text data, allowing them to learn patterns, structure, and even nuance in language. These models typically consist of many millions of parameters.
Most LLMs nowadays are **built on the Transformer architecture**—a deep learning architecture based on the "Attention" algorithm, that has gained significant interest since the release of BERT from Google in 2018.
<figure>
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/transformer.jpg" alt="Transformer"/>
<figcaption>The original Transformer architecture looked like this, with an encoder on the left and a decoder on the right.
</figcaption>
</figure>
There are 3 types of transformers :
1. **Encoders**
An encoder-based Transformer takes text (or other data) as input and outputs a dense representation (or embedding) of that text.
- **Example**: BERT from Google
- **Use Cases**: Text classification, semantic search, Named Entity Recognition
- **Typical Size**: Millions of parameters
2. **Decoders**
A decoder-based Transformer focuses **on generating new tokens to complete a sequence, one token at a time**.
- **Example**: Llama from Meta
- **Use Cases**: Text generation, chatbots, code generation
- **Typical Size**: Billions (in the US sense, i.e., 10^9) of parameters
3. **Seq2Seq (EncoderDecoder)**
A sequence-to-sequence Transformer _combines_ an encoder and a decoder. The encoder first processes the input sequence into a context representation, then the decoder generates an output sequence.
- **Example**: T5, BART,
- **Use Cases**: Translation, Summarization, Paraphrasing
- **Typical Size**: Millions of parameters
Although Large Language Models come in various forms, LLMs are typically decoder-based models with billions of parameters. Here are some of the most well-known LLMs:
| **Model** | **Provider** |
|-----------------------------------|-------------------------------------------|
| **Deepseek-R1** | DeepSeek |
| **GPT4** | OpenAI |
| **Llama 3** | Meta (Facebook AI Research) |
| **SmollLM2** | Hugging Face |
| **Gemma** | Google |
| **Mistral** | Mistral |
The underlying principle of an LLM is simple yet highly effective: **its objective is to predict the next token, given a sequence of previous tokens**. A "token" is the unit of information an LLM works with. You can think of a "token" as if it was a "word", but for efficiency reasons LLMs don't use whole words.
For example, while English has an estimated 600,000 words, an LLM might have a vocabulary of around 32,000 tokens (as is the case with Llama 2). Tokenization often works on sub-word units that can be combined.
For instance, consider how the tokens "interest" and "ing" can be combined to form "interesting", or "ed" can be appended to form "interested."
You can experiment with different tokenizers in the interactive playground below:
<iframe
src="https://agents-course-the-tokenizer-playground.static.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
Each LLM has some **special tokens** specific to the model. The LLM uses these tokens to open and close the structured components of its generation. For example, to indicate the start or end of a sequence, message, or response. Moreover, the input prompts that we pass to the model are also structured with special tokens. The most important of those is the **End of sequence token** (EOS).
The forms of special tokens are highly diverse across model providers.
The table below illustrates the diversity of special tokens.
<table>
<thead>
<tr>
<th><strong>Model</strong></th>
<th><strong>Provider</strong></th>
<th><strong>EOS Token</strong></th>
<th><strong>Functionality</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>GPT4</strong></td>
<td>OpenAI</td>
<td><code>&lt;|endoftext|&gt;</code></td>
<td>End of message text</td>
</tr>
<tr>
<td><strong>Llama 3</strong></td>
<td>Meta (Facebook AI Research)</td>
<td><code>&lt;|eot_id|&gt;</code></td>
<td>End of sequence</td>
</tr>
<tr>
<td><strong>Deepseek-R1</strong></td>
<td>DeepSeek</td>
<td><code>&lt;|end_of_sentence|&gt;</code></td>
<td>End of message text</td>
</tr>
<tr>
<td><strong>SmollLM2</strong></td>
<td>Hugging Face</td>
<td><code>&lt;|im_end|&gt;</code></td>
<td>End of instruction or message</td>
</tr>
<tr>
<td><strong>Gemma</strong></td>
<td>Google</td>
<td><code>&lt;end_of_turn&gt;</code></td>
<td>End of conversation turn</td>
</tr>
</tbody>
</table>
<Tip>
We do not expect you to memorize these special tokens, but it is important to appreciate their diversity and the role they play in the text generation of LLMs. If you want to know more about special tokens, you can check out the configuration of the model in its Hub repository. For example, you can find the special tokens of the SmollLM2 model in its <a href="https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct/blob/main/tokenizer_config.json">tokenizer_config.json</a>.
</Tip>
## Understanding next token prediction.
LLMs are said to be **autoregressive**, meaning that **the output from one pass becomes the input for the next one**. This loop continues until the model predicts the next token to be the EOS token, at which point the model can stop.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/AutoregressionSchema.gif" alt="Visual Gif of autoregressive decoding" width="60%">
In other words, an LLM will decode text until it reaches the EOS. But what happens during a single decoding loop?
While the full process can be quite technical for the purpose of learning agents, here's a brief overview:
- Once the input text is **tokenized**, the model computes a representation of the sequence that captures information about the meaning and the position of each token in the input sequence.
- This representation goes into the model, which outputs scores that rank the likelihood of each token in its vocabulary as being the next one in the sequence.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/DecodingFinal.gif" alt="Visual Gif of decoding" width="60%">
Based on these scores, we have multiple strategies to select the tokens to complete the sentence.
- The easiest decoding strategy would be to always take the token with the maximum score.
You can interact with the decoding process yourself with SmollLM2 in this Space (remember, it decodes until reaching an **EOS** token which is **<|im_end|>** for this model):
<iframe
src="https://agents-course-decoding-visualizer.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
- But there are more advanced decoding strategies. For example, *beam search* explores multiple candidate sequences to find the one with the maximum total scoreeven if some individual tokens have lower scores.
<iframe
src="https://agents-course-beam-search-visualizer.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
If you want to know more about decoding, you can take a look at the [NLP course](https://huggingface.co/learn/nlp-course).
## Attention is all you need
A key aspect of the Transformer architecture is **Attention**. When predicting the next word,
not every word in a sentence is equally important; words like "France" and "capital" in the sentence *"The capital of France is ..."* carry the most meaning.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/AttentionSceneFinal.gif" alt="Visual Gif of Attention" width="60%">
This process of identifying the most relevant words to predict the next token has proven to be incredibly effective.
Although the basic principle of LLMs—predicting the next token—has remained consistent since GPT-2, there have been significant advancements in scaling neural networks and making the attention mechanism work for longer and longer sequences.
If you've interacted with LLMs, you're probably familiar with the term *context length*, which refers to the maximum number of tokens the LLM can process, and the maximum _attention span_ it has.
## Prompting the LLM is important
Considering that the only job of an LLM is to predict the next token by looking at every input token, and to choose which tokens are "important", the wording of your input sequence is very important.
The input sequence you provide an LLM is called _a prompt_. Careful design of the prompt makes it easier **to guide the generation of the LLM toward the desired output**.
## How are LLMs trained?
LLMs are trained on large datasets of text, where they learn to predict the next word in a sequence through a self-supervised or masked language modeling objective.
From this unsupervised learning, the model learns the structure of the language and **underlying patterns in text, allowing the model to generalize to unseen data**.
After this initial _pre-training_, LLMs can be fine-tuned on a supervised learning objective to perform specific tasks. For example, some models are trained for conversational structures or tool usage, while others focus on classification or code generation.
## How can I use LLMs?
You have two main options:
1. **Run Locally** (if you have sufficient hardware).
2. **Use a Cloud/API** (e.g., via the Hugging Face Serverless Inference API).
Throughout this course, we will primarily use models via APIs on the Hugging Face Hub. Later on, we will explore how to run these models locally on your hardware.
## How are LLMs used in AI Agents?
LLMs are a key component of AI Agents, **providing the foundation for understanding and generating human language**.
They can interpret user instructions, maintain context in conversations, define a plan and decide which tools to use.
We will explore these steps in more detail in this Unit, but for now, what you need to understand is that the LLM is **the brain of the Agent**.
---
That was a lot of information! We've covered the basics of what LLMs are, how they function, and their role in powering AI agents.
If you'd like to dive even deeper into the fascinating world of language models and natural language processing, don't hesitate to check out our [free NLP course](https://huggingface.co/learn/nlp-course/chapter1/1).
Now that we understand how LLMs work, it's time to see **how LLMs structure their generations in a conversational context**.