Amazon OpenSearch Ingestion now supports batch AI inference

Published: October 3, 2025
https://aws.amazon.com/about-aws/whats-new/2025/10/amazon-opensearch-service-supports-batch-ai-inference

Amazon OpenSearch Service

You can now perform batch AI inference within Amazon OpenSearch Ingestion pipelines to efficiently enrich and ingest large datasets for Amazon OpenSearch Service domains.

Previously, customers used OpenSearch’s AI connectors to Amazon Bedrock, Amazon SageMaker, and third-party services for real-time inference. These inferences generate enrichments such as vector embeddings, predictions, translations, and recommendations to power AI use cases. Real-time inference is ideal for low-latency requirements such as streaming enrichments, while batch inference is ideal for enriching large datasets offline, delivering higher performance and cost efficiency. You can now use the same AI connectors within Amazon OpenSearch Ingestion pipelines to run asynchronous batch inference jobs that enrich large datasets, such as generating and ingesting up to billions of vector embeddings.
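To make the workflow concrete, here is a minimal sketch of creating such a pipeline with boto3. The create_pipeline call is the actual OpenSearch Ingestion (OSIS) API, but everything inside the pipeline body is illustrative: the ml processor name, its option keys, and the bucket, model, and domain names are assumptions, not confirmed syntax, so consult the official documentation for the exact batch inference configuration.

```python
"""Minimal sketch: create an OpenSearch Ingestion pipeline with boto3.

The PipelineConfigurationBody below is illustrative only. The processor
name ("ml") and its option keys are assumptions based on Data Prepper
conventions; check the OpenSearch Ingestion documentation for the
actual batch AI inference processor syntax.
"""
import boto3

# Hypothetical Data Prepper pipeline: read documents from S3, submit an
# asynchronous batch inference job through an AI connector (model_id would
# reference a connector to Amazon Bedrock or Amazon SageMaker), then index
# the enriched documents into an OpenSearch Service domain.
PIPELINE_BODY = """\
version: "2"
batch-embedding-pipeline:
  source:
    s3:                                      # assumption: S3 as the offline source
      codec:
        ndjson:
      compression: none
      scan:
        buckets:
          - bucket:
              name: my-input-bucket          # assumption: example bucket name
      aws:
        region: us-east-1
  processor:
    - ml:                                    # assumption: processor name and keys
        action_type: batch_predict
        model_id: my-connector-model-id      # assumption: connector-backed model
  sink:
    - opensearch:
        hosts: ["https://search-my-domain.us-east-1.es.amazonaws.com"]
        index: enriched-documents
        aws:
          region: us-east-1
"""

# "osis" is the boto3 service name for Amazon OpenSearch Ingestion.
osis = boto3.client("osis", region_name="us-east-1")

response = osis.create_pipeline(
    PipelineName="batch-embedding-pipeline",
    MinUnits=1,   # pipeline capacity (OCUs) scales between these bounds
    MaxUnits=4,
    PipelineConfigurationBody=PIPELINE_BODY,
)
print(response["Pipeline"]["Status"])  # e.g., CREATING
```

Because batch inference runs asynchronously, the enrichments land in the target index once the job completes, rather than per-document at ingest time as with the real-time connectors.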

What to do

  • Use the new batch AI inference feature to enrich large datasets.
  • Refer to the documentation for setup instructions.

Source: AWS release notes
