Anthropic Defines Rules for Kids To Use Its AI

Sarah Hardacre May 31, 2024
Anthropic, an AI safety and research company developing various AI systems, has updated its Usage Policy. Among the changes, third parties may now make applications built on Anthropic models available for use by minors.

Until now, Anthropic has not allowed its users to open its capabilities to anyone under 18. However, Anthropic states that there are “certain use cases where AI tools can offer significant benefits to younger users” and has adapted its terms to set out a framework for minors to use Anthropic models.

Any organization using Anthropic APIs and opening its capabilities to minors must implement age verification and content moderation to restrict access to content as needed. They must also have reporting and monitoring mechanisms to track and react to any issues that may arise. These companies must also commit to providing educational resources for their users on the safe usage of Anthropic’s products.

Organizations must also comply with local or regional regulations regarding minors’ online safety, such as the Children’s Online Privacy Protection Act (COPPA) in the United States, and disclose their compliance publicly.

Anthropic will periodically audit its users to ensure they adhere to its updated Usage Policy. Repeated noncompliance could lead to suspension or permanent account termination.

In addition to opening its capabilities to minors, Anthropic has made several other changes to its Usage Policy. It has restructured the entire policy so that it no longer distinguishes between individual and business users.

It has also expanded and clarified its rules on election integrity and misinformation, spelling out which activities fall into this category and adding, for example, that using AI to obstruct the counting or certification of votes is prohibited.

Anthropic has clarified its definition of high-risk use cases, such as those that “affect healthcare decisions or legal guidance,” and has provided safety measures that organizations must follow in these cases.

Finally, the updates set clear restrictions to protect personal data privacy, forbidding, for example, the use of its AI “to analyze biometric data to infer characteristics like race or religious beliefs.”

Not too long ago, Anthropic struck a major $4 billion deal with Amazon.
