Meta Introduces AI-Generated Content Labeling for Transparency on Instagram and Facebook

A label for AI-generated content on Meta's platforms

Following in the footsteps of other players, Meta is gradually developing methods to regulate the use of generative AI on its platforms. In a blog post published this Tuesday, February 6, the American group announces the introduction, "in the coming months", of a tool to automatically label AI-generated content on Facebook, Threads, and Instagram.

To develop this solution, the company says it is "working with industry partners to establish common technical standards to signal that content has been generated by AI", explains Nick Clegg, the Menlo Park-based group's president of global affairs.

Meta has already developed a similar tool for its image-generating AI. This announcement is hardly a surprise: since the US launch of Imagine with Meta, the group has, "for reasons of transparency and traceability", deployed two watermarks on images produced by its image-generating AI, one invisible and the other visible. Now, however, the company aims to design, together with other players in its sector, "a cutting-edge tool" enabling the automatic labeling of images from competing solutions such as Midjourney, Adobe Firefly, or DALL-E, developed by OpenAI.
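To give a concrete idea of what such a shared signal can look like, here is a minimal Python sketch that checks whether an image file carries an AI-provenance marker in its embedded metadata. The marker string and file names are assumptions for illustration only; Meta's invisible watermark relies on its own decoder and is not reproduced here.

```python
# A minimal sketch of metadata-based AI labeling, not Meta's actual detector.
# It assumes the generator embedded a provenance value such as the IPTC
# "trainedAlgorithmicMedia" digital-source type inside the image's XMP packet.
from pathlib import Path

# Marker used by the IPTC standard to denote fully AI-generated media.
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file's embedded metadata carries the AI-source marker.

    The marker survives as plain text inside JPEG/PNG containers, so a raw
    byte scan is enough for this sketch. It cannot flag images whose metadata
    was stripped, which is exactly the gap invisible watermarks aim to close.
    """
    data = Path(image_path).read_bytes()
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    for candidate in ("generated.jpg", "photo.jpg"):  # hypothetical file names
        try:
            verdict = "label as AI-generated" if looks_ai_generated(candidate) else "no marker found"
        except FileNotFoundError:
            verdict = "file not found (example paths only)"
        print(f"{candidate}: {verdict}")
```

Scanning raw bytes keeps the sketch dependency-free; a production labeler would parse the XMP and EXIF structures properly and combine the result with watermark detection, since metadata is easily stripped when an image is re-encoded or screenshotted.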

Without detailing a precise roadmap, Meta hopes to have the tool in place as soon as possible, in anticipation of several major elections, including the US presidential election, where generative AI could be exploited for disinformation purposes. "We're developing this tool right now, and soon we'll be applying labels in all the languages supported by the applications," Nick Clegg continues.

"Generative AI tools offer immense possibilities, and we believe it's both possible and necessary for these technologies to be developed responsibly," concludes Nick Clegg.

"Digitally created or modified" audio or video content will also be flagged

While the company seems able to identify the presence of AI in visual content, it admits to encountering difficulties in detecting these same signals in audio or video. To make up for this shortcoming, Meta has announced that it is working, in parallel, on a function enabling users to indicate when content has been created with the help of artificial intelligence. This disclosure will be required on the group's platforms, and Meta reserves the right to apply sanctions if the person posting the content fails to specify that it has been "created or modified digitally".
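The self-disclosure mechanism described above can be pictured as a simple moderation rule: label anything the uploader declares or the platform detects as AI-made, and penalize detected-but-undeclared content. The sketch below is hypothetical Python with illustrative names; it is not Meta's actual system.

```python
# Hypothetical sketch of the disclosure-and-label flow described in the post.
from dataclasses import dataclass, field
from enum import Enum, auto

class MediaKind(Enum):
    IMAGE = auto()
    AUDIO = auto()
    VIDEO = auto()

@dataclass
class Post:
    author: str
    kind: MediaKind
    user_declared_ai: bool           # the new self-disclosure option
    detected_ai: bool = False        # e.g. result of a metadata/watermark scan
    labels: list = field(default_factory=list)
    sanctioned: bool = False

def moderate(post: Post) -> Post:
    """Apply the "created or modified digitally" label, or a sanction."""
    if post.user_declared_ai or post.detected_ai:
        post.labels.append("Created or modified digitally")
    # Audio/video detection is unreliable, so the burden falls on the uploader:
    # content detected as AI but not declared may draw a penalty.
    if post.detected_ai and not post.user_declared_ai:
        post.sanctioned = True
    return post

if __name__ == "__main__":
    honest = moderate(Post("alice", MediaKind.VIDEO, user_declared_ai=True))
    sneaky = moderate(Post("bob", MediaKind.IMAGE, user_declared_ai=False, detected_ai=True))
    print(honest.labels, honest.sanctioned)  # ['Created or modified digitally'] False
    print(sneaky.labels, sneaky.sanctioned)  # ['Created or modified digitally'] True
```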

"This approach represents the pinnacle of what is currently technically possible. But it's not yet possible to identify all AI-generated content," Nick Clegg concedes in his blog post.

AI-generated content is a major challenge for social platforms

The proliferation of AI-generated content, particularly manipulated or deceptive material, is not a concern for Meta alone. Last November, YouTube announced a series of measures to regulate the use of generative AI on its platform, including specific wording to alert users that the content they are viewing has been digitally generated or modified. Since September, TikTok has also required a disclosure indicating that content has been altered or created using artificial intelligence.
