Meta is still trying to properly identify AI-generated images

Images that used simple photo editing tools were labeled as “AI-generated” on Facebook.

Meta is changing the way it identifies AI-generated images on Facebook and Instagram after photographers found their work mislabeled amid increasingly AI-saturated feeds. Images generated by artificial intelligence abound on the platforms, often linked to false information about films or, more dangerously, news events. It only takes a few seconds to find them.

Meta is still trying to correctly identify AI-generated images
Meta is not happy with the way it handles the situation. The company tries to label AI-generated images with an “AI-generated” mention when it detects them. While the images I had the opportunity to see were not yet labeled as such, the post contained text from Meta’s AI stating that the images were AI-generated. Unfortunately, these measures also ensnare real photographers who use post-processing effects.

In a press release, Meta says that “some content that included minor modifications using AI, such as retouching tools meeting industry standards, was then labeled as ‘AI-generated.’”

This may only be a temporary approach: Meta says it is currently working with “companies in the sector” to refine its labeling, but the operation is very delicate.

A very delicate operation, with significant impacts
Artists complain that their images are labeled “AI-generated” even though no AI tools were used, suggesting they were generated entirely by AI. For example, an image posted by tech influencer iJustine for Pride Month was labeled as such when she had simply changed the background color and added lens flare.

On the other hand, asking users to click through to find out exactly how AI was used in a specific image could reduce the usefulness of such a tag. It may not even allay photographers’ fears, depending on the accuracy of the information the tag contains.