diff --git a/memgpt/personas/examples/docqa/README.md b/memgpt/personas/examples/docqa/README.md
index 1daa7f8..d18fc41 100644
--- a/memgpt/personas/examples/docqa/README.md
+++ b/memgpt/personas/examples/docqa/README.md
@@ -1,8 +1,17 @@
-# MemGPT Search over LlamaIndex API Docs
+# MemGPT over LlamaIndex API Docs
+
+MemGPT enables you to chat with your data -- try running this example to talk to the LlamaIndex API docs!
 
 1.
-   a. Download embeddings and docs index from XYZ.
-   -- OR --
+   a. Download embeddings and docs index from [HuggingFace](https://huggingface.co/datasets/MemGPT/llamaindex-api-docs).
+      ```bash
+      # Make sure you have git-lfs installed (https://git-lfs.com)
+      git lfs install
+      git clone https://huggingface.co/datasets/MemGPT/llamaindex-api-docs
+      ```
+
+   **-- OR --**
+
    b. Build the index:
       1. Build llama_index API docs with `make text`. Instructions [here](https://github.com/run-llama/llama_index/blob/main/docs/DOCS_README.md). Copy over the generated `_build/text` folder to this directory.
       2. Generate embeddings and FAISS index.
@@ -10,4 +19,16 @@
       python3 scrape_docs.py
       python3 generate_embeddings_for_docs.py all_docs.jsonl
       python3 build_index.py --embedding_files all_docs.embeddings.jsonl --output_index_file all_docs.index
-      ```
\ No newline at end of file
+      ```
+
+2. In the root `MemGPT` directory, run
+   ```bash
+   python3 main.py --archival_storage_faiss_path=<ARCHIVAL_STORAGE_FAISS_PATH> --persona=memgpt_doc --human=basic
+   ```
+   where `ARCHIVAL_STORAGE_FAISS_PATH` is the directory where `all_docs.jsonl` and `all_docs.index` are located.
+   If you downloaded from HuggingFace, it will be `memgpt/personas/docqa/llamaindex-api-docs`.
+
+## Demo
+
+MemGPT demo video for llamaindex api docs search
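
As a rough illustration of what the embed-then-index pipeline in step 1b produces, here is a minimal self-contained sketch. It fakes a few embedding records in the general shape of `all_docs.jsonl` (the field names, passages, and tiny vectors are invented, not taken from the actual scripts) and runs a brute-force cosine-similarity search standing in for the FAISS index that `build_index.py` builds:

```python
import json
import math

# Invented records in the rough shape of all_docs.jsonl: a passage of the
# docs plus its embedding vector (real embeddings are far higher-dimensional).
jsonl = "\n".join(json.dumps(r) for r in [
    {"text": "ServiceContext configures the LLM and embedding model.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "VectorStoreIndex builds an index over document nodes.", "embedding": [0.1, 0.9, 0.2]},
    {"text": "A query engine answers questions against an index.", "embedding": [0.0, 0.2, 0.9]},
])

# Parse the records, as an index builder would before handing vectors to FAISS.
passages = [json.loads(line) for line in jsonl.splitlines()]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, k=1):
    """Brute-force stand-in for a FAISS nearest-neighbor lookup."""
    ranked = sorted(passages, key=lambda p: cosine(p["embedding"], query_vec), reverse=True)
    return [p["text"] for p in ranked[:k]]

# A query vector near the third record's embedding retrieves that passage.
print(search([0.05, 0.15, 0.95]))
# → ['A query engine answers questions against an index.']
```

At chat time, MemGPT embeds your question and pulls the nearest passages from archival storage in exactly this nearest-neighbor fashion, just with a real embedding model and a FAISS index instead of the toy vectors above.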