Microsoft Introduces Azure AI Content Safety

Ivana Shteriova
Microsoft’s Azure AI Content Safety is an AI-powered content moderation platform designed to create safer online experiences. The new service, available through the Azure AI platform, relies on state-of-the-art AI models to detect toxic content across text and images.

Sarah Bird, Microsoft’s responsible AI lead, introduced Azure AI Content Safety at the tech giant’s annual Build conference. Built on the technology that powers the safety systems behind Microsoft’s Copilot, Bing, and its code-generating tool on GitHub, Azure AI Content Safety is available as a standalone product. Prices start at $0.75 per 1,000 text records and $1.50 per 1,000 images.

When the tool detects something inappropriate, it flags it, classifies it (sexist, racist, hateful, etc.), assigns it a severity score, and notifies moderators to take action. In addition to detecting harmful content in AI systems, Azure AI Content Safety can moderate inappropriate content in online communities, forums, and gaming platforms.
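The flag → classify → score → notify flow described above can be sketched as a toy pipeline. This is purely illustrative: the keyword lists, function names, and the 0–6 severity scale are assumptions for the sketch, not Azure AI Content Safety's actual API or classifiers (the real service uses trained language models, not keyword matching):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical keyword lists standing in for the service's trained classifiers.
CATEGORY_KEYWORDS = {
    "hate": {"hateful", "slur"},
    "sexism": {"sexist"},
    "racism": {"racist"},
}

@dataclass
class ModerationResult:
    flagged: bool
    category: Optional[str]
    severity: int  # illustrative 0 (safe) to 6 (most severe) scale

def moderate(text: str) -> ModerationResult:
    """Flag a piece of text, classify it, and assign a severity score."""
    words = set(text.lower().split())
    for category, keywords in CATEGORY_KEYWORDS.items():
        hits = words & keywords
        if hits:
            # Crude stand-in for severity: more matches -> higher score, capped at 6.
            severity = min(2 * len(hits), 6)
            return ModerationResult(True, category, severity)
    return ModerationResult(False, None, 0)

def notify_moderators(result: ModerationResult) -> str:
    # A real deployment would enqueue a review task; here we just build the message.
    return f"Review needed: {result.category} (severity {result.severity})"
```

A caller would run every user submission through `moderate()` and route anything flagged to `notify_moderators()`, which mirrors the workflow the article describes.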

The language models that power Azure AI Content Safety can currently understand text in English, German, French, Spanish, Japanese, Portuguese, Chinese, and Italian.

Azure AI Content Safety is not an entirely new concept among toxicity detection services. Google has Perspective, a similar product that uses machine learning to mitigate toxicity online.

Since ChatGPT’s integration into Microsoft Bing in early February, the AI has had its fair share of tantrums. Users have reported numerous cases where Bing generated various forms of inappropriate content. Microsoft has gathered extensive feedback and, according to the company, refined Azure AI Content Safety to keep toxic content in check.

Still, there is reason to be skeptical of AI-powered toxicity detectors. A Penn State study, for instance, found that AI toxicity detection models are often biased against specific groups: these models frequently perceived content related to people with disabilities as negative or inappropriate. Another study detected a racial bias in Google’s Perspective tool.

These biases often originate with the people responsible for labeling the datasets used to train the AI systems. To address this, Microsoft has teamed up with linguistic and fairness experts and added filters that better account for context.
