Add RAG alias for Extractor, closes #732
davidmezzetti committed Jun 18, 2024
1 parent 19961f0 commit 42583f0
Showing 3 changed files with 3 additions and 2 deletions.
2 changes: 1 addition & 1 deletion docs/pipeline/text/extractor.md
@@ -3,7 +3,7 @@
 ![pipeline](../../images/pipeline.png#only-light)
 ![pipeline](../../images/pipeline-dark.png#only-dark)

-The Extractor pipeline joins a prompt, context data store and generative model together to extract knowledge.
+The Extractor pipeline (aka RAG) joins a prompt, context data store and generative model together to extract knowledge.

 The data store can be an embeddings database or a similarity instance with associated input text. The generative model can be a prompt-driven large language model (LLM), an extractive question-answering model or a custom pipeline. This is known as prompt-driven search or retrieval augmented generation (RAG).

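The flow described in the docs hunk above — retrieve context from a data store, join it with a prompt, pass the result to a generative model — can be sketched generically. This is a hedged, self-contained illustration, not the txtai API: `similarity`, `rag`, and the toy word-overlap scorer are all hypothetical stand-ins.

```python
# Minimal RAG sketch (hypothetical helpers, not the txtai API):
# retrieve the most similar context, inject it into a prompt, generate.
def similarity(query, texts):
    # Toy scorer: pick the text sharing the most words with the query
    qwords = set(query.lower().split())
    return max(texts, key=lambda t: len(qwords & set(t.lower().split())))

def rag(query, texts, generate):
    context = similarity(query, texts)
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

texts = ["txtai is an embeddings database", "Paris is the capital of France"]
# Stub generator that just echoes the retrieved context line of the prompt
answer = rag("What is txtai", texts, generate=lambda p: p.splitlines()[1])
print(answer)  # txtai is an embeddings database
```

In the real pipeline, `generate` would be an LLM call and `similarity` an embeddings search; the structure of the prompt assembly is the part this sketch illustrates.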
2 changes: 1 addition & 1 deletion src/python/txtai/__init__.py
@@ -7,7 +7,7 @@
 # Top-level imports
 from .app import Application
 from .embeddings import Embeddings
-from .pipeline import LLM
+from .pipeline import LLM, RAG

 # Configure logging per standard Python library recommendations
 logger = logging.getLogger(__name__)
1 change: 1 addition & 0 deletions src/python/txtai/pipeline/__init__.py
@@ -12,5 +12,6 @@
 from .llm import *
 from .nop import Nop
 from .text import *
+from .text import Extractor as RAG
 from .tensors import Tensors
 from .train import *
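The one-line addition above binds a second name to the same class object, so `RAG` and `Extractor` become interchangeable everywhere. A minimal self-contained sketch of how such an import alias behaves (the `Extractor` class here is a hypothetical stub standing in for the real pipeline):

```python
# Stand-in stub, not the real txtai Extractor pipeline class
class Extractor:
    """Joins a context data store and a generative model."""

    def __init__(self, similarity, path):
        self.similarity = similarity
        self.path = path

# Module-level alias, equivalent to: from .text import Extractor as RAG
RAG = Extractor

# Both names reference the identical class object
print(RAG is Extractor)  # True
print(isinstance(RAG(None, "model"), Extractor))  # True
```

Because the alias is a plain name binding rather than a subclass, there is no wrapper overhead and `isinstance` checks, pickling, and docs references all resolve to the one class.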
