diff --git a/_posts/2024-04-11-multimodal-semantic-search.md b/_posts/2024-04-11-multimodal-semantic-search.md
index 3b7cf6e359..b89bb276a6 100644
--- a/_posts/2024-04-11-multimodal-semantic-search.md
+++ b/_posts/2024-04-11-multimodal-semantic-search.md
@@ -73,8 +73,8 @@ We first delivered the new interface that provides semantic search capability in
 Our initial release was designed around the Amazon Bedrock Multimodal Embeddings API. Because of this, you’ll need to adhere to certain design constraints when using embeddings generated by other multimodal embedding providers. For instance, your multimodal API must be designed for text and image. Our system will pass the image data as Base64-encoded [binary](https://opensearch.org/docs/latest/field-types/supported-field-types/binary/). Your API should also be able to generate a single vector embedding that can be queried by embeddings generated for both text and image modalities.
 
-We realized that some multimodal models can’t operate within these constraints. However, we have broader plans to rework our framework, removing the current limitations and providing more generic custom model integration support. We revealed this plan in a recent [blog post](http://earlier%20this%20year,%20we%20published%20a%20blog%20that%20revealed%20our%202024%20plans%20to%20revamp%20the%20neural%20search%20experience%20and%20deliver%20greater%20flexibility.%20our%20goal%20is%20to%20be%20able%20to%20support%20any%20multi-modal%20model/). Nonetheless, we opted to deliver multi-modal support for neural search sooner because we believe our users have a lot to gain now instead of waiting later this year.
-
+We realized that some multimodal models can’t operate within these constraints. However, we have broader plans to rework our framework, removing the current limitations and providing more generic custom model integration support. We revealed this plan in a recent [blog post](https://opensearch.org/blog/opensearch-ai-retrospective/). Nonetheless, we opted to deliver multimodal support for neural search sooner because we believe our users have a lot to gain now instead of waiting until later this year.
+
 Our product vision and strategy continue to be open. To make it easy for AI technology providers to integrate with OpenSearch, we have created a machine learning [extensibility framework](https://opensearch.org/docs/latest/ml-commons-plugin/remote-models/blueprints/). Our choice to start with Amazon Bedrock Titan Multimodal Embeddings support was intended to deliver timely incremental value for OpenSearch users.
 
 ## Introducing the Titan Multimodal Embeddings Model
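
To make the design constraints described in the patched paragraph concrete, here is a minimal sketch of how a single joint embedding is produced at ingest time and then queried with both text and a Base64-encoded image, following the OpenSearch multimodal search documentation. The pipeline name, index name, field names, model ID, and the truncated Base64 string are placeholder assumptions, not values from this post.

```json
PUT /_ingest/pipeline/my-multimodal-pipeline
{
  "description": "Generates one joint text/image embedding per document",
  "processors": [
    {
      "text_image_embedding": {
        "model_id": "<bedrock-titan-model-id>",
        "embedding": "vector_embedding",
        "field_map": {
          "text": "image_description",
          "image": "image_binary"
        }
      }
    }
  ]
}
```

```json
GET /my-multimodal-index/_search
{
  "query": {
    "neural": {
      "vector_embedding": {
        "query_text": "orange sofa in a bright room",
        "query_image": "iVBORw0KGgoAAAANSUh...",
        "model_id": "<bedrock-titan-model-id>",
        "k": 5
      }
    }
  }
}
```

Because both modalities map to the same `vector_embedding` field, the same `neural` query works whether you supply `query_text`, `query_image`, or both, which is what the single-embedding constraint above requires of any substitute embedding provider.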