Meta recently gave employees a preview of the AI tools it is building during an all-hands meeting. The showcase included consumer-facing applications such as conversational AI built into WhatsApp and Messenger and image-generation technology used in Instagram.
Meta also demonstrated its advancements in AI beyond consumer products. Metamate, a productivity assistant for employees, will answer employee questions and perform tasks based on data found in Meta’s internal systems.
The social media conglomerate currently has 3.8 billion active users worldwide across its various applications, and with the advancement of AI technology, CEO Mark Zuckerberg said it was possible for the company to bring these technologies "into every single one of our products."
In addition to this internal preview, Meta announced last month that it had invited a small group of advertisers to test its AI-powered tools for ad content creation. At the time, Meta indicated that the tools would be available to more advertisers starting in July, with a global rollout planned for later this year.
Meta’s years-long focus on building the metaverse might explain why it has not yet brought any generative-AI-powered applications to consumers. But during the last quarterly earnings call, Zuckerberg stated that his company has “been focusing on both AI and the metaverse for years now, and will continue to focus on both.”
To power this new world of AI, Meta has invested heavily in redesigning its infrastructure to meet the computing demands expected from developing and deploying AI-powered solutions.
Unlike some of its biggest rivals, the tech giant intends to build many of its AI models using open-source technologies. This will allow users to build their own AI-powered applications, a decision that is further fueling the debate around powerful AI models.
Meta’s position is that creating open-source technology will allow users to build applications without relying on frameworks provided by a few large companies. Critics of the policy are concerned that opening access to these tools increases the risk of spreading misinformation.