diagram.mmd — flowchart
Embedding Generation Flow: flowchart diagram

Embedding generation is the process of converting text, images, or other data into dense numerical vectors that capture semantic meaning, enabling similarity search, clustering, and retrieval-augmented generation.

What the diagram shows

This flowchart covers both the document ingestion path (indexing time) and the query path (retrieval time):

Document ingestion:

1. Raw documents: source data such as web pages, PDFs, database records, or knowledge base articles enters the pipeline.
2. Chunking: documents are split into smaller segments (typically 256–1024 tokens) so that each chunk fits within the embedding model's context window and represents a coherent semantic unit.
3. Preprocessing: chunks are cleaned (HTML stripped, whitespace normalized) and optionally enriched with metadata such as document title or section heading.
4. Embedding model: the preprocessed text is passed to an embedding model (e.g., text-embedding-ada-002, e5-large, or a fine-tuned bi-encoder), which outputs a fixed-dimension float vector.
5. Vector storage: the vector is stored in a vector database alongside its source chunk text and metadata (see Vector Database Query).
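The ingestion steps above can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline: the `embed` function below is a toy hash-based stand-in for a real encoder model (such as the e5-large mentioned above), and the chunk size is measured in words rather than tokens for simplicity.

```python
import hashlib
import math
import re

EMBED_DIM = 8     # toy dimension; real models output 384-3072 floats
CHUNK_SIZE = 50   # words per chunk here; 256-1024 tokens is typical in practice

def preprocess(text: str) -> str:
    """Clean a chunk: strip HTML tags, collapse whitespace, lowercase."""
    text = re.sub(r"<[^>]+>", " ", text)        # crude HTML stripping
    return re.sub(r"\s+", " ", text).strip().lower()

def chunk(text: str, size: int = CHUNK_SIZE) -> list[str]:
    """Split cleaned text into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str, dim: int = EMBED_DIM) -> list[float]:
    """Toy deterministic embedding: a hash-derived unit vector.
    A real pipeline would call an encoder model here instead."""
    digest = hashlib.sha256(text.encode()).digest()
    vec = [b - 128 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def ingest(doc_id: str, raw_html: str, index: list[dict]) -> None:
    """Run one document through the ingestion path and store the results."""
    clean = preprocess(raw_html)
    for i, piece in enumerate(chunk(clean)):
        index.append({
            "id": f"{doc_id}#{i}",     # chunk identifier
            "vector": embed(piece),    # fixed-dimension float vector
            "text": piece,             # source chunk kept alongside the vector
            "meta": {"doc": doc_id},   # metadata enrichment
        })
```

The key structural point survives even in this sketch: every stored entry pairs the vector with its source chunk and metadata, so retrieval can return readable text rather than bare vectors.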

Query path:

1. Query input: a user query or search string is received.
2. Preprocessing: the query is cleaned with the same normalization applied to documents.
3. Embedding model: the query is encoded with the same model, so that query and document vectors share one embedding space.
4. Nearest-neighbor search: the query vector is used to search the vector database for the most semantically similar document vectors.
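The query path can be sketched the same way. This is a toy illustration under the same assumptions as before: `embed` is a hash-based stand-in for the shared encoder model, and the "vector database" is just a Python list searched by brute-force cosine similarity (real systems use approximate nearest-neighbor indexes).

```python
import hashlib
import math
import re

def preprocess(text: str) -> str:
    """Same normalization applied at indexing time; consistency is the point."""
    return re.sub(r"\s+", " ", text).strip().lower()

def embed(text: str, dim: int = 8) -> list[float]:
    """Toy stand-in for the shared encoder model (same one used for documents)."""
    digest = hashlib.sha256(text.encode()).digest()
    vec = [b - 128 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Dot product suffices because vectors are unit-normalized."""
    return sum(x * y for x, y in zip(a, b))

def search(query: str, index: list[dict], k: int = 3) -> list[tuple[str, float]]:
    """Encode the query with the same preprocessing and model, rank by similarity."""
    q = embed(preprocess(query))
    scored = [(entry["id"], cosine(q, entry["vector"])) for entry in index]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]
```

Because the query runs through the identical `preprocess` and `embed` functions used at indexing time, a query that matches a stored chunk exactly scores a cosine similarity of 1.0.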

Why this matters

Embeddings are the foundation of modern semantic search and RAG systems. Consistency between the document and query embedding paths — same model, same preprocessing — is essential for retrieval quality. See RAG Architecture for how embedding generation fits into the full pipeline.


Frequently asked questions

What is an embedding generation flow?
An embedding generation flow is the pipeline that converts raw text (or other data) into dense numerical vectors using an embedding model. These vectors capture semantic meaning and are stored in a vector database for similarity search, retrieval-augmented generation, and clustering tasks.

How are embeddings generated from text?
Text is first chunked into segments that fit the embedding model's context window, cleaned and normalized, then passed to an encoder model (such as `text-embedding-ada-002` or `e5-large`). The model outputs a fixed-dimension float vector representing the semantic content of the input. The same preprocessing and model must be applied to both documents and queries to ensure vectors are comparable.

When should you fine-tune an embedding model?
Fine-tune an embedding model when off-the-shelf models trained on general web data underperform on your domain's vocabulary, such as medical, legal, or proprietary product terminology. Domain-adapted embeddings can substantially improve retrieval precision in specialized corpora.

What are common mistakes in embedding pipelines?
The most common mistake is using different models or preprocessing steps for documents and queries, which puts them in incompatible vector spaces and causes retrieval to fail silently. Other issues include chunk sizes that are too large (coarse semantics) or too small (insufficient context), and not versioning the embedding model used to build an index.
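The comparability requirement can be demonstrated with a toy example. Here a hash-based function stands in for a real encoder (real models produce similar vectors for similar text, whereas a hash changes completely on any input change, which makes the effect of skipped preprocessing especially visible).

```python
import hashlib

def embed(text: str, dim: int = 8) -> tuple[int, ...]:
    """Toy deterministic 'encoder': the same input always yields the same vector.
    Stands in for a real model such as e5-large."""
    return tuple(hashlib.sha256(text.encode()).digest()[:dim])

doc_vec   = embed("machine learning basics".lower())
query_vec = embed("machine learning basics".lower())  # same text, same cleanup
drifted   = embed("Machine Learning Basics")          # cleanup step skipped

# Same model plus same preprocessing: vectors are identical and comparable.
assert doc_vec == query_vec
# Skipping the shared normalization produces a different vector entirely.
assert doc_vec != drifted
```

With a real encoder the drift is subtler than a total mismatch, but the lesson holds: any difference between the document and query pipelines degrades similarity scores.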
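One cheap defense against the silent-failure mode described above is to record the embedding model's identifier in the index and check it at query time. A minimal sketch, with a hypothetical model id:

```python
class VectorIndex:
    """Minimal index wrapper that records which embedding model built it."""

    def __init__(self, model_id: str):
        self.model_id = model_id   # e.g. "e5-large-v2" (hypothetical value)
        self.entries: list[dict] = []

    def add(self, vector: list[float], text: str) -> None:
        self.entries.append({"vector": vector, "text": text})

    def check_query_model(self, query_model_id: str) -> None:
        """Fail loudly instead of silently searching an incompatible space."""
        if query_model_id != self.model_id:
            raise ValueError(
                f"index built with {self.model_id!r}, "
                f"query encoded with {query_model_id!r}"
            )
```

Rebuilding the index is then an explicit, deliberate step whenever the model changes, rather than something discovered through mysteriously degraded retrieval.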
```mermaid
flowchart TD
  subgraph Ingestion["Document Ingestion Path"]
    A([Raw documents]) --> B[Split into chunks]
    B --> C[Clean and normalize text]
    C --> D[Add metadata: title, source, date]
    D --> E[Embedding model]
    E --> F[(Vector database: store vector + chunk + metadata)]
  end
  subgraph Query["Query Path"]
    G([User query]) --> H[Clean and normalize query]
    H --> I[Embedding model]
    I --> J[Nearest-neighbor search in vector DB]
    J --> K[Return top-k results with scores]
    K --> L([Ranked document chunks])
  end
  F -.->|indexed vectors| J
```