👋 Join us on 𝕏 (Twitter), Discord, and WeChat
What's Lagent?
Lagent is a lightweight open-source framework that allows users to efficiently build large language model (LLM)-based agents. It also provides some typical tools to augment LLMs. An overview of the framework is shown below:
💻 Tech Stack
Major Features
0.1.2 was released on 24/10/2023:
- Support an efficient inference engine. Lagent now supports the efficient inference engine lmdeploy turbomind.
- Support multiple kinds of agents out of the box. Lagent now supports ReAct, AutoGPT and ReWOO, which can drive large language models (LLMs) through multiple rounds of reasoning and function calling.
- Extremely simple and easy to extend. The framework is quite simple with a clear structure. With only 20 lines of code, you are able to construct your own agent, as the quick-start examples below show. It also supports three typical tools: Python interpreter, API call, and Google search.
- Support various large language models. We support different LLMs, including API-based (GPT-3.5/4) and open-source (LLaMA 2, InternLM) models.
Getting Started
Please see the overview for a general introduction to Lagent. Meanwhile, we provide extremely simple code for a quick start. You may refer to the examples for more details.
Installation
Install with pip (Recommended).
pip install lagent
Optionally, you could also build Lagent from source in case you want to modify the code:
git clone https://github.com/InternLM/lagent.git
cd lagent
pip install -e .
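To verify the installation, you can import the package from the command line. A minimal check; the `__version__` attribute is an assumption and may not be exposed by every release:
# Print the installed version to confirm the package imports cleanly.
python -c "import lagent; print(lagent.__version__)"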
Run ReAct Web Demo
# You need to install streamlit first
# pip install streamlit
streamlit run examples/react_web_demo.py
Then you can chat through the UI shown below.
Run a ReWOO agent with GPT-3.5
Below is an example of running ReWOO with GPT-3.5.
# Import necessary modules and classes from the "lagent" library.
from lagent.agents import ReWOO
from lagent.actions import ActionExecutor, GoogleSearch
from lagent.llms import GPTAPI
# Initialize the Language Model (llm) and provide your API key.
llm = GPTAPI(model_type='gpt-3.5-turbo', key=['Your OPENAI_API_KEY'])
# Initialize the Google Search tool and provide your API key.
search_tool = GoogleSearch(api_key='Your SERPER_API_KEY')
# Create a chatbot by configuring the ReWOO agent.
chatbot = ReWOO(
    llm=llm,  # Provide the Language Model instance.
    action_executor=ActionExecutor(
        actions=[search_tool]  # Specify the actions the chatbot can perform.
    ),
)
# Ask the chatbot a question and store the response.
response = chatbot.chat('What profession do Nicholas Ray and Elia Kazan have in common?')
# Print the chatbot's response.
print(response.response) # Output the response generated by the chatbot.
>>> Film director.
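Because the LLM is just a constructor argument, switching models is a one-line change. A minimal sketch, assuming GPTAPI accepts the same arguments for GPT-4:
# Swap in GPT-4 by changing only the model type; the agent
# configuration itself stays the same.
llm = GPTAPI(model_type='gpt-4', key=['Your OPENAI_API_KEY'])
chatbot = ReWOO(llm=llm, action_executor=ActionExecutor(actions=[search_tool]))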
Run a ReAct agent with InternLM
NOTE: If you want to run a HuggingFace model, please run pip install -e .[all] first.
# Import necessary modules and classes from the "lagent" library.
from lagent.agents import ReAct
from lagent.actions import ActionExecutor, GoogleSearch, PythonInterpreter
from lagent.llms import HFTransformer
from lagent.llms.meta_template import INTERNLM2_META as META
# Initialize the HFTransformer-based Language Model (llm) and
# provide the model name.
llm = HFTransformer(
    path='internlm/internlm2-chat-7b',
    meta_template=META
)
# Initialize the Google Search tool and provide your API key.
search_tool = GoogleSearch(api_key='Your SERPER_API_KEY')
# Initialize the Python Interpreter tool.
python_interpreter = PythonInterpreter()
# Create a chatbot by configuring the ReAct agent.
# Specify the actions the chatbot can perform.
chatbot = ReAct(
    llm=llm,  # Provide the Language Model instance.
    action_executor=ActionExecutor(
        actions=[python_interpreter]
    ),
)
# Ask the chatbot a mathematical question in LaTeX format.
response = chatbot.chat(
    # Use a raw string so LaTeX escapes like \frac are not
    # interpreted as Python escape sequences.
    r'If $z=-1+\sqrt{3}i$, then $\frac{z}{{z\overline{z}-1}}=\left(\ \ \right)$'
)
# Print the chatbot's response.
print(response.response) # Output the response generated by the chatbot.
>>> $-\frac{1}{3}+\frac{\sqrt{3}}{3}i$
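The agent is agnostic to the LLM backend, so the HuggingFace wrapper above can be replaced by the lmdeploy turbomind engine mentioned in the feature list. A minimal sketch; LMDeployPipeline and its constructor arguments are assumptions, so check your installed version of lagent.llms for the exact class name and signature:
# Hypothetical backend swap: `LMDeployPipeline` and its arguments are
# assumed here, not a confirmed API.
from lagent.llms import LMDeployPipeline

llm = LMDeployPipeline(
    path='internlm/internlm2-chat-7b',  # same model weights as above
    meta_template=META,
)
chatbot = ReAct(
    llm=llm,
    action_executor=ActionExecutor(actions=[python_interpreter]),
)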
All Thanks To Our Contributors:
Citation
If you find this project useful in your research, please consider citing:
@misc{lagent2023,
    title={{Lagent}: A lightweight open-source framework that allows users to efficiently build large language model (LLM)-based agents},
    author={Lagent Developer Team},
    howpublished={\url{https://github.com/InternLM/lagent}},
    year={2023}
}
License
This project is released under the Apache 2.0 license.
