MIT unveils PhotoGuard tech that protects images from malicious AI edits
Written by: Andrew Tarantola
Source: Engadget
DALL-E and Stable Diffusion are only the beginning. As generative AI systems grow in popularity and companies race to differentiate their products from their competitors', chatbots across the internet are gaining the ability to edit images as well as create them, with companies like Shutterstock and Adobe leading the way. But these new AI capabilities also raise familiar problems, such as unauthorized tampering with, or outright misappropriation of, existing online artwork and images. Watermarking can help mitigate the latter, while the new "PhotoGuard" technique developed by MIT CSAIL may help prevent the former.
PhotoGuard works by altering select pixels in an image so that an AI model can no longer understand what the image contains. These "perturbations," as the research team calls them, are invisible to the human eye but easily read by machines. The "encoder" attack method that introduces these artifacts targets the model's underlying representation of the target image -- the complex mathematics describing the position and color of every pixel -- essentially preventing the AI from understanding what it is looking at. (Note: here, artifacts are features that appear in an image but do not exist in the original subject.)
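To make the mechanism concrete, here is a minimal, hedged sketch of what such an encoder attack could look like: a PGD-style search for an imperceptible perturbation that pulls an image's latent representation toward that of a decoy (gray) image. The `PlaceholderEncoder` below is a hypothetical stand-in, not PhotoGuard's actual code; in practice the attack would be run against the latent encoder of a real diffusion model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlaceholderEncoder(nn.Module):
    """Hypothetical stand-in for a diffusion model's image encoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def encoder_attack(encoder, image, target, eps=8 / 255, step=1 / 255, iters=100):
    """PGD-style search for a perturbation (bounded by eps in L-infinity norm)
    that pulls the image's latent representation toward the target's latent."""
    with torch.no_grad():
        target_latent = encoder(target)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = encoder(image + delta)
        loss = F.mse_loss(latent, target_latent)   # distance to the decoy latent
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()      # descend on the latent distance
            delta.clamp_(-eps, eps)                # keep the change imperceptible
            delta.copy_((image + delta).clamp(0, 1) - image)  # stay in valid pixel range
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

if __name__ == "__main__":
    enc = PlaceholderEncoder()
    photo = torch.rand(1, 3, 64, 64)        # the image we want to protect
    decoy = torch.full_like(photo, 0.5)     # gray decoy image, as in the quote below
    protected = encoder_attack(enc, photo, decoy)
    print(float((protected - photo).abs().max()))  # perturbation stays within eps
```

The key design choice is that the optimization never changes the image by more than a small per-pixel budget, so the protected photo looks unchanged to people while its latent encoding no longer matches its visible content.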
"The encoder attack makes the model think that the input image (to be edited) is some other image (such as a grayscale image)," Hadi Salman, a Ph.D. student at MIT and the paper's first author, told Engadget. "The Diffusion attack forces the Diffusion model to edit some of the target images, which can also be some gray or random images." Protected images for reverse engineering.
"A collaborative approach involving model developers, social media platforms, and policymakers can be an effective defense against unauthorized manipulation of images. Addressing this pressing issue is critical today," Salman said in a release. "While I'm excited to be able to contribute to this solution, there is still a lot of work to do to make this protection practical. Companies developing these models need to invest in targeting the threats these AI tools may pose for robust immune engineering."