
Commit

[Docs] Add docs for new semantic text query functionality (#119520) (#119883)

* Update docs with new semantic text functionality

* PR feedback

* PR feedback

* PR Feedback
kderusso authored Jan 9, 2025
1 parent c505da9 commit 13c4f5d
Showing 4 changed files with 13 additions and 6 deletions.
1 change: 1 addition & 0 deletions docs/reference/mapping/types/semantic-text.asciidoc
@@ -13,6 +13,7 @@ Long passages are <<auto-text-chunking, automatically chunked>> to smaller secti
The `semantic_text` field type specifies an inference endpoint identifier that will be used to generate embeddings.
You can create the inference endpoint by using the <<put-inference-api>>.
This field type and the <<query-dsl-semantic-query,`semantic` query>> type make it simpler to perform semantic search on your data.
The `semantic_text` field type may also be queried with <<query-dsl-match-query, match>>, <<query-dsl-sparse-vector-query, sparse_vector>> or <<query-dsl-knn-query, knn>> queries.

If you don’t specify an inference endpoint, the `inference_id` field defaults to `.elser-2-elasticsearch`, a preconfigured endpoint for the elasticsearch service.
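
For illustration, a minimal sketch of a mapping that relies on the default `.elser-2-elasticsearch` endpoint, followed by a `semantic` query against it. The index name `my-index` and field name `content` are hypothetical:

[source,console]
----
PUT my-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text"
      }
    }
  }
}

GET my-index/_search
{
  "query": {
    "semantic": {
      "field": "content",
      "query": "How do I create an inference endpoint?"
    }
  }
}
----
//TEST[skip: Requires inference]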

7 changes: 5 additions & 2 deletions docs/reference/query-dsl/knn-query.asciidoc
@@ -8,7 +8,8 @@ Finds the _k_ nearest vectors to a query vector, as measured by a similarity
metric. _knn_ query finds nearest vectors through approximate search on indexed
dense_vectors. The preferred way to do approximate kNN search is through the
<<knn-search,top level knn section>> of a search request. _knn_ query is reserved for
expert cases, where there is a need to combine this query with other queries.
expert cases, where there is a need to combine this query with other queries, or
perform a kNN search against a <<semantic-text, semantic_text>> field.
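
For instance, a sketch of the latter case: a `knn` query against a `semantic_text` field. The index name `my-index`, field name `content`, and endpoint name `my-dense-inference-endpoint` are assumed, and the field is assumed to be backed by a dense-vector (text embedding) inference endpoint. Per the `query_vector_builder` notes below, the inference ID can be inferred when all queried fields are of type `semantic_text`.

[source,console]
----
POST my-index/_search
{
  "query": {
    "knn": {
      "field": "content",
      "k": 10,
      "num_candidates": 100,
      "query_vector_builder": {
        "text_embedding": {
          "model_id": "my-dense-inference-endpoint",
          "model_text": "the query text to embed"
        }
      }
    }
  }
}
----
//TEST[skip: Requires inference]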

[[knn-query-ex-request]]
==== Example request
@@ -77,7 +78,8 @@ POST my-image-index/_search
+
--
(Required, string) The name of the vector field to search against. Must be a
<<index-vectors-knn-search, `dense_vector` field with indexing enabled>>.
<<index-vectors-knn-search, `dense_vector` field with indexing enabled>>, or a
<<semantic-text, `semantic_text` field>> with a compatible dense vector inference model.
--

`query_vector`::
@@ -93,6 +95,7 @@ Either this or `query_vector_builder` must be provided.
--
(Optional, object) Query vector builder.
include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=knn-query-vector-builder]
If all queried fields are of type <<semantic-text, semantic_text>>, the inference ID associated with the `semantic_text` field may be inferred.
--

`k`::
5 changes: 4 additions & 1 deletion docs/reference/query-dsl/match-query.asciidoc
@@ -10,6 +10,10 @@ provided text is analyzed before matching.
The `match` query is the standard query for performing a full-text search,
including options for fuzzy matching.

The `match` query also works against <<semantic-text, semantic_text>> fields.
However, when performing `match` queries against `semantic_text` fields, options
that specifically target lexical search, such as `fuzziness` or `analyzer`, are ignored.
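
For example, a minimal sketch of a `match` query against a `semantic_text` field (the index name `my-index` and field name `content` are hypothetical); the lexical-only options mentioned above would simply be ignored here:

[source,console]
----
GET my-index/_search
{
  "query": {
    "match": {
      "content": {
        "query": "How is a semantic_text field queried?"
      }
    }
  }
}
----
//TEST[skip: Requires inference]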


[[match-query-ex-request]]
==== Example request
@@ -296,4 +300,3 @@ The example above creates a boolean query:

that matches documents with the term `ny` or the conjunction `new AND york`.
By default the parameter `auto_generate_synonyms_phrase_query` is set to `true`.
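
A sketch of a query consistent with that description, disabling the default so the multi-term synonym `new york` is matched as a conjunction rather than a phrase (the field name `message` and the synonym setup are assumed):

[source,console]
----
GET /_search
{
  "query": {
    "match": {
      "message": {
        "query": "ny city",
        "auto_generate_synonyms_phrase_query": false
      }
    }
  }
}
----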

6 changes: 3 additions & 3 deletions docs/reference/query-dsl/sparse-vector-query.asciidoc
@@ -11,7 +11,8 @@ This can be achieved with one of two strategies:
- Using an {nlp} model to convert query text into a list of token-weight pairs
- Sending in precalculated token-weight pairs as query vectors

These token-weight pairs are then used in a query against a <<sparse-vector,sparse vector>>.
These token-weight pairs are then used in a query against a <<sparse-vector,sparse vector>>
or a <<semantic-text, semantic_text>> field with a compatible sparse inference model.
At query time, query vectors are calculated using the same inference model that was used to create the tokens.
When querying, these query vectors are ORed together with their respective weights, which means scoring is effectively a <<vector-functions-dot-product,dot product>> calculation between stored dimensions and query dimensions.
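
For example, a minimal sketch of a `sparse_vector` query against a `semantic_text` field (the index name `my-index` and field name `content` are hypothetical). Per the `inference_id` notes below, the inference ID is omitted here because it is inferred from the `semantic_text` field:

[source,console]
----
GET my-index/_search
{
  "query": {
    "sparse_vector": {
      "field": "content",
      "query": "What is semantic search?"
    }
  }
}
----
//TEST[skip: Requires inference]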

@@ -65,6 +66,7 @@ GET _search
It must be the same inference ID that was used to create the tokens from the input text.
Only one of `inference_id` and `query_vector` is allowed.
If `inference_id` is specified, `query` must also be specified.
If all queried fields are of type <<semantic-text, semantic_text>>, the inference ID associated with the `semantic_text` field will be inferred.

`query`::
(Optional, string) The query text you want to use for search.
@@ -291,5 +293,3 @@ GET my-index/_search
//TEST[skip: Requires inference]

NOTE: When performing <<modules-cross-cluster-search, cross-cluster search>>, inference is performed on the local cluster.

