Overview
This chapter introduces the overall framework of Lagent and provides links to detailed tutorials.
What is Lagent
Lagent is an open-source LLM agent framework that enables people to efficiently turn a large language model into an agent. It also provides some typical tools to augment the capabilities of LLMs. The overall framework is shown below:
Lagent consists of three main parts: agents, llms, and actions.
- agents provides agent implementations, such as ReAct and AutoGPT.
- llms supports various large language models, including open-source models (Llama-2, InternLM) loaded through HuggingFace, as well as closed-source models like GPT-3.5/4.
- actions contains a series of actions, as well as an action executor to manage all actions.
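To make the three-part design concrete, here is a minimal, self-contained sketch of how an agent, an LLM backend, and an action executor fit together. All class names and the `Action: ..., Input: ...` reply format are illustrative assumptions for this toy example, not the real Lagent API:

```python
class EchoLLM:
    """Illustrative stand-in for an llms backend; a real one wraps a
    HuggingFace model or a remote API and implements text generation."""

    def generate(self, prompt: str) -> str:
        # Pretend the model always decides to call the calculator tool.
        return f"Action: calculator, Input: {prompt}"


class Calculator:
    """Illustrative stand-in for a tool in the actions module."""

    def run(self, expression: str) -> str:
        # Evaluate a plain arithmetic expression with builtins disabled.
        return str(eval(expression, {"__builtins__": {}}))


class ToyActionExecutor:
    """Routes a named action to the matching tool instance."""

    def __init__(self, actions: dict):
        self.actions = actions

    def execute(self, name: str, argument: str) -> str:
        return self.actions[name].run(argument)


class ToyAgent:
    """Illustrative stand-in for an agents implementation such as ReAct:
    query the LLM, parse its reply, and dispatch the requested tool call."""

    def __init__(self, llm, executor):
        self.llm = llm
        self.executor = executor

    def chat(self, query: str) -> str:
        reply = self.llm.generate(query)
        # Parse the assumed "Action: <name>, Input: <argument>" format.
        action_part, argument = reply.split(", Input: ", 1)
        name = action_part.removeprefix("Action: ")
        return self.executor.execute(name, argument)


agent = ToyAgent(EchoLLM(), ToyActionExecutor({"calculator": Calculator()}))
print(agent.chat("2 + 3"))  # prints "5"
```

The design choice this mirrors is the separation of concerns in Lagent: the agent owns the reasoning loop, the llms module hides the model backend, and the action executor manages tool lookup and invocation.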
How to Use
Here is a detailed step-by-step guide to learn more about Lagent: