Copilot Podcast: Azure AI Content Safety Custom Categories
In episode 36 of the “Copilot Podcast,” host and AI expert Aaron Back details the improvements Microsoft has made to its Azure AI Content Safety service since its release in October 2023.
This episode is sponsored by the AI Summit Preconference, a part of the Community Summit North America 2024 experience, taking place on October 13th in San Antonio, Texas. The full-day preconference will feature keynotes, panel discussions, tutorials, fireside chats, use-case analysis, innovation profiles, and more. Register for the AI Summit Preconference now.
Key Takeaways
- Azure AI Content Safety: Announced in October 2023, Azure AI Content Safety is a service that detects and filters harmful content, both user-generated and AI-generated, within applications and services. It includes text and image detection capabilities for identifying offensive, risky, or undesirable content.
- Advancements to the service: Initially, Azure AI Content Safety offered only out-of-the-box capabilities for detecting harmful content. At Microsoft Build 2024, new custom capabilities were introduced that allow developers to create custom categories powered by Azure AI Language. For instance, a developer can define a custom category for bullying, train the model with specific phrases and wording, and then share that category across the organization (a code sketch of this workflow follows the list below).
- Deployment options: Custom categories for Azure AI Content Safety can be deployed in one of two ways, standard or rapid. Standard deployment offers thorough, robust filtering; it requires a minimum of 50 natural-language examples for training and takes approximately 24 hours to deploy. Rapid deployment is geared toward urgent safety needs, allowing a swift response, about one hour, to unforeseen or emerging safety concerns in data or AI output.
- Stay tuned: Continue to follow the “Copilot at Build” series to get the latest breakdowns of the Copilot-specific announcements delivered at Microsoft Build 2024.
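To make the custom-category workflow described above more concrete, here is a minimal Python sketch of what defining, building, and querying a custom category might look like over REST. The endpoint paths, api-version value, field names, resource name, and sample blob URL are illustrative assumptions based on the general pattern of Azure AI service APIs, not confirmed contracts; consult the official Azure AI Content Safety documentation for the exact calls.

```python
# Illustrative sketch only: endpoint paths, api-version, and field names below are
# assumptions, not confirmed Azure AI Content Safety REST contracts.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # hypothetical resource
API_KEY = "<your-content-safety-key>"
HEADERS = {
    "Ocp-Apim-Subscription-Key": API_KEY,  # standard Azure AI services key header
    "Content-Type": "application/json",
}
API_VERSION = {"api-version": "2024-02-15-preview"}  # assumed preview version

# 1. Define a custom "bullying" category with training examples.
#    Standard deployment requires at least 50 natural-language examples.
category = {
    "categoryName": "bullying",
    "definition": "Language that demeans, threatens, or repeatedly targets an individual.",
    "sampleBlobUrl": "https://<your-storage>.blob.core.windows.net/samples/bullying.jsonl",  # hypothetical
}
resp = requests.put(
    f"{ENDPOINT}/contentsafety/text/categories/bullying",
    params=API_VERSION,
    headers=HEADERS,
    json=category,
)
resp.raise_for_status()

# 2. Trigger a build/deployment of the category. A standard build takes roughly
#    24 hours; a rapid deployment targets availability in about an hour.
resp = requests.post(
    f"{ENDPOINT}/contentsafety/text/categories/bullying:build",
    params=API_VERSION,
    headers=HEADERS,
)
resp.raise_for_status()

# 3. Once deployed, analyze text against the custom category.
resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:analyzeCustomCategory",
    params=API_VERSION,
    headers=HEADERS,
    json={"text": "You're worthless and everyone knows it.", "categoryName": "bullying"},
)
print(resp.json())
```

Because the category definition and its training examples live in the service rather than in any one application, the same category can be shared and reused across an organization once it is deployed.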
Stream the audio version of this episode here: