
Google Unveils Visible Watermark for Veo 3 Videos

Most AI-generated Veo 3 videos now carry a subtle visible watermark, making them easier to identify.


Google's New AI Video Watermark: A Step Towards Transparency...Maybe


Last week, Google quietly announced that it's adding a visible watermark to videos generated with its new Veo 3 model. Yep, you heard that right. If you're scrolling through your social media feeds, you might just catch a glimpse of it.

The watermark is noticeable in videos released by Google to promote the launch of Veo 3 in various countries. However, it's not visible on all Veo videos. The exception? Videos generated in Google's Flow tool by users with a Google AI Ultra plan. That's right, the big guns get to bypass the watermark.

Google shared the news in an X thread by Josh Woodward, Vice President of Google Labs and Google Gemini. According to Woodward, the company added the watermark as a first step towards making its SynthID Detector available to a broader audience, although the detector isn't widely available yet.

Veo 3's strikingly realistic videos have been causing a stir online since Google introduced the model at Google I/O 2025. The AI video model can generate not only incredibly life-like video but also realistic audio and dialogue. And it's not just about animals acting like humans (although there's plenty of that). Veo 3 has been used to generate a variety of content, including man-on-the-street interviews, influencer ads, fake news segments, and unboxing videos.

However, the question remains - is this watermark visible enough? Digital forensics expert Hany Farid points out that the watermark is faint and may not be apparent to most users, especially those on mobile devices or moving quickly through their social media feeds.
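Farid's point about faintness has a simple technical basis: visible watermarks are generally alpha-blended onto each frame, and at low opacity the per-pixel change is small. Here's a minimal sketch of that blending math (a generic illustration with made-up pixel values and opacities, not Google's actual pipeline):

```python
# Generic alpha-blending sketch, NOT Google's implementation.
# A watermark pixel is mixed into a frame pixel as:
#   result = (1 - alpha) * background + alpha * watermark
# Low alpha keeps the mark subtle; high alpha makes it obvious.

def blend(bg, mark, alpha):
    """Alpha-blend one RGB pixel (8-bit channels)."""
    return tuple(round((1 - alpha) * b + alpha * m) for b, m in zip(bg, mark))

background = (120, 120, 120)  # mid-grey frame pixel
watermark = (255, 255, 255)   # white watermark pixel

subtle = blend(background, watermark, 0.25)   # -> (154, 154, 154)
obvious = blend(background, watermark, 0.75)  # -> (221, 221, 221)
```

At the low opacity, each channel shifts by only about 34 levels out of 255 — noticeable on a large display, but easy to overlook on a small, bright phone screen mid-scroll, which is exactly the concern Farid raises.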

So, what's the solution? Farid suggests either making the watermark more prominent or implementing a note beside the image, alerting users to check for a watermark to verify if the content is AI-generated. But, as Farid notes, visible watermarks are easy to remove, which raises concerns about their effectiveness.

Google's SynthID invisible watermark, on the other hand, is quite resilient. However, it requires specialized tools to detect, and the average user can't see it without a watermark reader. The goal, then, is to make it easier for consumers to know if a piece of content contains this type of watermark.

In the end, while Google's visible watermark is a step towards transparency, its effectiveness is limited. Its absence from some users' videos, the ease of removing visible watermarks, and limited user awareness all pose challenges to its potential impact on reducing misinformation risks. As the AI world continues to evolve, it's crucial to stay vigilant and skeptical about the content we consume online.

Insights:

- Google's SynthID system includes both visible and invisible watermarks for AI-generated content, designed to help identify AI-generated videos.
- The visible watermark is not applied for all users of Google's Veo 3, leading to concerns about its effectiveness in reducing misinformation risks.
- Watermarks, while a step towards transparency, face challenges related to their visibility, removability, and user awareness.
- Expert concerns persist about the potential for AI-generated content, especially deepfakes, to spread misinformation.

Topics: Artificial Intelligence, Google, Misinformation


