[DOCS] Documents that deployment_id can be used as inference_id in certain cases. (#121055) (#121059)
szabosteve authored Jan 28, 2025
1 parent 7e03356 commit 4614cb1
Showing 1 changed file with 4 additions and 1 deletion.
5 changes: 4 additions & 1 deletion docs/reference/query-dsl/sparse-vector-query.asciidoc
@@ -62,11 +62,14 @@ GET _search
 (Required, string) The name of the field that contains the token-weight pairs to be searched against.
 
 `inference_id`::
-(Optional, string) The <<inference-apis,inference ID>> to use to convert the query text into token-weight pairs.
+(Optional, string)
+The <<inference-apis,inference ID>> to use to convert the query text into token-weight pairs.
 It must be the same inference ID that was used to create the tokens from the input text.
 Only one of `inference_id` and `query_vector` is allowed.
 If `inference_id` is specified, `query` must also be specified.
 If all queried fields are of type <<semantic-text, semantic_text>>, the inference ID associated with the `semantic_text` field will be inferred.
+You can reference a `deployment_id` of a {ml} trained model deployment as an `inference_id`.
+For example, if you download and deploy the ELSER model in the {ml-cap} trained models UI in {kib}, you can use the `deployment_id` of that deployment as the `inference_id`.
 
 `query`::
 (Optional, string) The query text you want to use for search.
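For illustration, a minimal sketch of the kind of request the added paragraph describes, assuming a hypothetical index `my-index`, a token field `ml.tokens`, and an ELSER deployment whose `deployment_id` is `my-elser-deployment` (these names are assumptions, not taken from the commit):

GET my-index/_search
{
  "query": {
    "sparse_vector": {
      "field": "ml.tokens",
      "inference_id": "my-elser-deployment",
      "query": "How is the weather in Jamaica?"
    }
  }
}

Because `inference_id` is set here, `query` is also required, and `query_vector` must be omitted, since only one of `inference_id` and `query_vector` is allowed.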
