Amazon Bedrock Guardrails adds support for coding use cases

Amazon Bedrock Guardrails Updates
AWS has expanded Amazon Bedrock Guardrails to cover code-related use cases, enabling customers to protect against harmful content in code while building generative AI applications. Customers can now use existing safeguards, including content filters, denied topics, and sensitive information filters, to detect attempts to inject malicious code, detect and prevent prompt leakage, and help prevent personally identifiable information (PII) from being introduced within code.
With this expanded support, Amazon Bedrock Guardrails now safeguards against harmful content introduced within code elements, including comments, variable and function names, and string literals:
- Content filters (standard tier) now detect and filter harmful content in code the same way they do for text and image content.
- Prompt leakage detection (standard tier) helps detect and prevent unintended disclosure of system prompt content in model responses that could compromise intellectual property.
- Denied topics (standard tier) and sensitive information filters now help safeguard against denied topics expressed through code and help prevent the inclusion of PII within code structures.
What to do
- Access the expanded capabilities through the Amazon Bedrock console or supported APIs.
- Review the launch blog, technical documentation, and Bedrock Guardrails product page for more information.
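As one way to try the expanded safeguards programmatically, a guardrail can be applied to code content through the existing Bedrock Runtime ApplyGuardrail API. The sketch below, using only the Python standard library, builds the request body for screening a code snippet that embeds PII in a comment and a string literal; the guardrail ID and version are placeholders you would replace with your own, and the snippet itself is illustrative.

```python
import json

# Hypothetical identifiers -- replace with your own guardrail ID and version.
GUARDRAIL_ID = "gr-EXAMPLE1234"
GUARDRAIL_VERSION = "1"

def build_apply_guardrail_request(code_snippet: str, source: str = "INPUT") -> dict:
    """Build a request body for the Bedrock Runtime ApplyGuardrail API,
    here used to screen a code snippet before it reaches a model."""
    return {
        "guardrailIdentifier": GUARDRAIL_ID,
        "guardrailVersion": GUARDRAIL_VERSION,
        "source": source,  # "INPUT" for prompts, "OUTPUT" for model responses
        "content": [
            # Code is passed as text; with code support, content filters and
            # sensitive information filters scan comments, identifiers, and
            # string literals as well as prose.
            {"text": {"text": code_snippet}}
        ],
    }

# Example: code containing PII in a comment and a string literal.
request = build_apply_guardrail_request(
    '# TODO: email results to jane.doe@example.com\n'
    'send_report("jane.doe@example.com")'
)
print(json.dumps(request, indent=2))
```

With boto3 installed and credentials configured, this body maps directly onto `boto3.client("bedrock-runtime").apply_guardrail(**request)`, whose response indicates whether the guardrail intervened and which filters matched.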
Source: AWS release notes
If you need further guidance on AWS, our experts are available at AWS@westloop.io. You may also reach us by submitting the Contact Us form.



