
Google’s latest AI model, Gemini 2.0 Flash, is facing scrutiny after reports surfaced that it can remove watermarks from images, including those from stock media agencies such as Getty Images.
Gemini 2.0 Flash, available in Google’s AI studio, is amazing at editing images with simple text prompts.
It also can remove watermarks from images (and puts its own subtle watermark in instead 🤣) pic.twitter.com/ZnHTQJsT1Z
— Tanay Jaipuria (@tanayj) March 16, 2025
Social media users on platforms like X (formerly Twitter) and Reddit have shared instances where Gemini 2.0 Flash successfully erased watermarks and even reconstructed the obscured portions of images. While other AI tools offer similar features, Gemini 2.0 Flash appears to execute watermark removal with exceptional accuracy and is currently available for free through Google’s AI Studio.
New skill unlocked: Gemini 2 Flash model is really awesome at removing watermarks in images! pic.twitter.com/6QIk0FlfCv
— Deedy (@deedydas) March 15, 2025
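For context, the text-prompt image editing the posts above refer to is available both in AI Studio's browser interface and through the Gemini API. The sketch below shows roughly what an ordinary edit request looks like via the public google-genai Python SDK; the model identifier, file names, and prompt are illustrative assumptions rather than details from the reporting, and the example deliberately performs a benign edit, not watermark removal.

```python
# Minimal sketch of a text-prompt image edit with Gemini 2.0 Flash's
# experimental image output, assuming the public google-genai Python SDK.
# Model name, file paths, and prompt are illustrative assumptions.
from io import BytesIO

from google import genai
from google.genai import types
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # AI Studio API key (assumed setup)

source = Image.open("photo.png")  # hypothetical input image

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",  # assumed experimental, image-capable model id
    contents=["Brighten this photo and add a subtle sunset sky.", source],
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# The response interleaves text and image parts; save any returned image.
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
    elif part.inline_data:
        Image.open(BytesIO(part.inline_data.data)).save("edited_photo.png")
```

AI Studio exposes the same model through a chat interface with image upload, so no code is required to reproduce the edits users have been sharing.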
The discovery has alarmed copyright holders, for whom watermarks are a key means of protecting intellectual property. Legal experts warn that removing watermarks without the rights holder's consent can violate U.S. copyright law and expose users to liability. Competing AI models, such as Anthropic's Claude 3.7 Sonnet and OpenAI's GPT-4o, explicitly refuse to remove watermarks, citing ethical and legal concerns.
Under U.S. law, notably the Digital Millennium Copyright Act, altering or removing copyright management information such as watermarks without authorization can result in legal action. While Gemini 2.0 Flash still struggles with some semi-transparent or large watermarks, its ability to cleanly remove smaller, simpler ones has raised alarms among content creators and rights holders.
Google has not issued an official response to the controversy. The company’s AI Studio labels Gemini 2.0 Flash’s image generation feature as “experimental” and “not for production use.” However, critics argue that stronger safeguards should be implemented to prevent misuse.
The incident adds to the broader debate on AI ethics, copyright protection, and responsible AI deployment. With AI image editing tools becoming increasingly powerful and accessible, regulatory bodies and tech companies may soon be compelled to address these growing concerns.