Mirror of https://github.com/hhy-huang/HiRAG.git (synced 2025-09-16 23:52:00 +03:00)
Merge pull request #1 from Dormiveglia-elf/dev-zhenyu
Doc Refinement and Fixes
config.yaml (new file, +34 lines)
@@ -0,0 +1,34 @@
# OpenAI Configuration
openai:
  embedding_model: "text-embedding-ada-002"
  model: "gpt-4o"
  api_key: "***"
  base_url: "***"

# GLM Configuration
glm:
  model: "glm-4-plus"
  api_key: "***"
  base_url: "https://open.bigmodel.cn/api/paas/v4"
  embedding_model: "embedding-3"

# Deepseek Configuration
deepseek:
  model: "deepseek-chat"
  api_key: "***"
  base_url: "https://api.deepseek.com"

# Model Parameters
model_params:
  openai_embedding_dim: 1536
  glm_embedding_dim: 2048
  max_token_size: 8192

# HiRAG Configuration
hirag:
  working_dir: "your_work_dir"
  enable_llm_cache: false
  enable_hierachical_mode: true
  embedding_batch_num: 6
  embedding_func_max_async: 8
  enable_naive_rag: true
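For reference, here is a minimal sketch of how the values above might be loaded and passed to HiRAG. The constructor keyword arguments simply mirror the keys of the `hirag:` block, and the query call mirrors the README snippet in the next hunk; treat the exact parameter names as assumptions rather than the repository's verified API.

```python
import yaml
from hirag import HiRAG, QueryParam

# Load the configuration file added in this commit (assumed to live at ./config.yaml).
with open("config.yaml", "r") as f:
    config = yaml.safe_load(f)

# Keyword arguments mirror the hirag: block above; adjust to HiRAG's actual signature.
graph_func = HiRAG(
    working_dir=config["hirag"]["working_dir"],
    enable_llm_cache=config["hirag"]["enable_llm_cache"],
    enable_hierachical_mode=config["hirag"]["enable_hierachical_mode"],
    embedding_batch_num=config["hirag"]["embedding_batch_num"],
    embedding_func_max_async=config["hirag"]["embedding_func_max_async"],
    enable_naive_rag=config["hirag"]["enable_naive_rag"],
)

# Hierarchical ("hi") retrieval, as in the README example below.
print(graph_func.query("The question you want to ask?", param=QueryParam(mode="hi")))
```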
README.md
@@ -34,7 +34,7 @@ print("Perform hi search:")
print(graph_func.query("The question you want to ask?", param=QueryParam(mode="hi")))
```

-Or if you want to employ HiRAG with DeepSeek, ChatGLM, or other third-party retrieval api, here are the examples in `./hi_Search_deepseek.py`, `./hi_search_glm.py`, and `./hi_search_openai.py`. The API keys and the LLM configurations can be set at `config.yaml`.
+Or if you want to employ HiRAG with DeepSeek, ChatGLM, or other third-party retrieval api, here are the examples in `./hi_Search_deepseek.py`, `./hi_Search_glm.py`, and `./hi_Search_openai.py`. The API keys and the LLM configurations can be set at `./config.yaml`.
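As a rough illustration of what those example scripts configure (not their actual contents), a DeepSeek-backed completion function could be wired from `./config.yaml` with the OpenAI-compatible Python client; the helper name `deepseek_model_func` below is hypothetical.

```python
import yaml
from openai import AsyncOpenAI

# Read the deepseek: section of the config file shown above.
with open("config.yaml", "r") as f:
    config = yaml.safe_load(f)

# OpenAI-compatible client pointed at the DeepSeek endpoint and key from config.yaml.
deepseek_client = AsyncOpenAI(
    api_key=config["deepseek"]["api_key"],
    base_url=config["deepseek"]["base_url"],
)

# Hypothetical async helper: returns one chat completion for a prompt.
async def deepseek_model_func(prompt: str, system_prompt: str = "You are a helpful assistant.") -> str:
    response = await deepseek_client.chat.completions.create(
        model=config["deepseek"]["model"],  # "deepseek-chat"
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```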
## Evaluation
@@ -243,6 +243,12 @@ python batch_eval.py -m result -api openai
| |Diversity| 3.5| **96.5**|
| |Overall| 0.0| **100.0**|

## Acknowledgement
We gratefully acknowledge the use of the following open-source projects in our work:
- [nano-graphrag](https://github.com/gusye1234/nano-graphrag): a simple, easy-to-hack GraphRAG implementation

- [RAPTOR](https://github.com/parthsarthi03/raptor): a novel approach to retrieval-augmented language models by constructing a recursive tree structure from documents.

## Cite Us
```
@misc{huang2025retrievalaugmentedgenerationhierarchicalknowledge,