LightRAG

This section details how to reproduce the LightRAG results.

    Index of LightRAG

    This LightRAG is a snapshot from our experiments, with parameters, functions, and prompts fine-tuned to return statistical data and to use our unified prompts. To get started, first create a new environment and install the LightRAG dependencies:

    conda create -n lightrag python=3.10
    conda activate lightrag
    cd LightRAG
    pip install -e .
    

    As with other RAG implementations, you need to create a main working directory called main_folder and place an input folder inside it to store your corpus files:

    main_folder/
    ├── input/
    │   ├── file1.md
    │   ├── file2.txt
    │   ├── file3.docx
    │   └── ...
    
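    If your corpus lives elsewhere, a short script can assemble this layout. The sketch below is a minimal example; the my_corpus source directory is a hypothetical placeholder for wherever your files actually sit.

    from pathlib import Path
    import shutil

    corpus = Path("my_corpus")  # hypothetical location of your source documents
    input_dir = Path("main_folder/input")
    input_dir.mkdir(parents=True, exist_ok=True)

    # Copy over the document types shown in the tree above.
    for f in corpus.iterdir():
        if f.suffix in {".md", ".txt", ".docx"}:
            shutil.copy(f, input_dir / f.name)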

    Then run:

    python -m Light_index -f path/to/main_folder
    
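    Once indexing finishes, it can be worth sanity-checking the store before running the full evaluation. The sketch below uses the upstream LightRAG library API (LightRAG, QueryParam, and the gpt_4o_mini_complete helper); this snapshot may wire its model functions differently, so treat it as illustrative rather than as the exact entry point the scripts use.

    from lightrag import LightRAG, QueryParam
    from lightrag.llm import gpt_4o_mini_complete  # assumes the upstream OpenAI helper

    # Point at the working directory that Light_index populated
    # (assumed here to be main_folder itself).
    rag = LightRAG(
        working_dir="path/to/main_folder",
        llm_model_func=gpt_4o_mini_complete,
    )

    # "hybrid" mode combines entity-level and theme-level retrieval.
    print(rag.query(
        "What themes recur across the corpus?",
        param=QueryParam(mode="hybrid"),
    ))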

    Answer and Evaluation

    First, prepare your test questions according to the benchmark format: a test-set parquet file containing the questions and their corresponding answer keys.
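    A minimal sketch of building that file with pandas follows; the question and answer column names here are assumptions, so match them to the schema the benchmark format actually specifies.

    import pandas as pd

    # Hypothetical schema -- check the benchmark format for the exact column names.
    test_set = pd.DataFrame({
        "question": ["What conclusion does file1.md reach?"],
        "answer": ["<expected answer key>"],
    })
    test_set.to_parquet("path/to/question_parquet")

    Once the parquet file is ready, run the evaluation with: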

    python -m eval.eval_light -f path/to/main_folder -q path/to/question_parquet