TwelveLabs’ Marengo Embed 3.0 for advanced video understanding now in Amazon Bedrock

TwelveLabs' Marengo Embed 3.0 is now available on Amazon Bedrock, bringing advanced video-native multimodal embedding capabilities to the service. The model unifies video, images, audio, and text into a single representation space, enabling sophisticated video search and content analysis applications.
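To illustrate what a shared representation space makes possible, here is a minimal sketch of cross-modal retrieval: a text query embedding is ranked against stored video-segment embeddings by cosine similarity. The vectors, dimension size, and segment IDs below are placeholders for illustration; in practice both sides would come from Marengo Embed 3.0.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical vectors; in practice these are returned by the embedding model.
query_embedding = np.random.rand(1024)      # embedding of a text query
segment_embeddings = {                      # embeddings of video segments
    "clip_000": np.random.rand(1024),
    "clip_001": np.random.rand(1024),
}

# Rank video segments against the text query in the shared embedding space.
ranked = sorted(
    segment_embeddings.items(),
    key=lambda kv: cosine_similarity(query_embedding, kv[1]),
    reverse=True,
)
print(ranked[0][0])  # ID of the most relevant segment
```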
Key Enhancements
- Extended video processing capacity: Process up to 4 hours of video and audio content, with file sizes up to 6 GB.
- Enhanced sports analysis: Better understanding of gameplay dynamics, player movements, and event detection.
- Global multilingual support: Expanded language capabilities from 12 to 36 languages.
- Multimodal search precision: Combine an image and descriptive text in a single embedding request for more accurate search results (see the sketch after this list).
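The sketch below shows what such a combined request might look like with the Bedrock Runtime `invoke_model` API from boto3. The model ID and the payload field names are assumptions for illustration only; confirm the exact ID and request schema in the Bedrock console and documentation before use.

```python
import base64
import json
import boto3

# Bedrock Runtime client in a Region where the model is offered.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder model ID; confirm the exact ID in the Bedrock console.
MODEL_ID = "twelvelabs.marengo-embed-3-0-v1:0"

with open("reference_frame.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Payload field names are assumptions about the request schema; check the
# model documentation for the exact format of a combined image + text request.
payload = {
    "inputType": "image",
    "mediaSource": {"base64String": image_b64},
    "inputText": "striker celebrating a late goal",
}

response = bedrock.invoke_model(
    modelId=MODEL_ID,
    body=json.dumps(payload),
    contentType="application/json",
    accept="application/json",
)
result = json.loads(response["body"].read())
print(result)
```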
Regional Availability
Marengo Embed 3.0 is available in US East (N. Virginia), Europe (Ireland), and Asia Pacific (Seoul). It supports synchronous inference for low-latency text and image embeddings, and asynchronous inference for video, audio, and large image files.
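For long video or audio files, the asynchronous path uses the Bedrock Runtime `start_async_invoke` / `get_async_invoke` APIs, with results written to S3. The sketch below assumes hypothetical S3 locations, a placeholder model ID, and an assumed `modelInput` schema; verify all three against the Bedrock documentation.

```python
import time
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholders: confirm the model ID in the Bedrock console and substitute
# your own S3 locations.
MODEL_ID = "twelvelabs.marengo-embed-3-0-v1:0"
VIDEO_S3_URI = "s3://my-input-bucket/match_recording.mp4"
OUTPUT_S3_URI = "s3://my-output-bucket/embeddings/"

# modelInput field names are assumptions; the exact schema is in the model docs.
job = bedrock.start_async_invoke(
    modelId=MODEL_ID,
    modelInput={
        "inputType": "video",
        "mediaSource": {"s3Location": {"uri": VIDEO_S3_URI}},
    },
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": OUTPUT_S3_URI}},
)

# Poll the job; embeddings are written to the output S3 location on completion.
arn = job["invocationArn"]
status = "InProgress"
while status not in ("Completed", "Failed"):
    time.sleep(30)
    status = bedrock.get_async_invoke(invocationArn=arn)["status"]
print(status)
```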
What to do
- Visit the Amazon Bedrock console to get started.
- Read the product page for more information.
- Check the documentation for usage details.
Source: AWS release notes
If you need further guidance on AWS, our experts are available at AWS@westloop.io. You may also reach us by submitting the Contact Us form.



