
Research
Bypassing and Strengthening AI Content Controls with Prompt Formatting
Recently, while performing independent AI safety research, NR Labs investigated the Guardrails functionality of Amazon Bedrock. Amazon Bedrock Guardrails helps customers implement safeguards in applications, tailored to their use cases and responsible AI policies.
Read More
