A Skeleton Key in the Code: Researcher Exposes Critical Flaw in Google’s New AI Tool, Highlighting Rush to Release
In the high-stakes race to dominate artificial intelligence, tech giants are moving at a breakneck pace. But a frightening discovery in one of Google’s newest AI tools suggests this speed is coming at a grave cost: your security.
Just one day after Google released its Gemini-powered coding assistant, named “Antigravity,” a security researcher uncovered a severe vulnerability. This flaw acts like a skeleton key, allowing a hacker to potentially take control of a user’s computer and install malicious software, such as spyware or ransomware.
The discovery by Aaron Portnoy, a seasoned cybersecurity expert, has sent shockwaves through the tech community. It is the latest and one of the most alarming examples of how companies are pushing AI products out the door without fully testing them for critical security weaknesses, creating a digital playground for criminals.
The One-Click Backdoor
Portnoy’s research reveals a deceptively simple attack. He found a way to alter Antigravity’s configuration settings using malicious source code. This code creates a “backdoor” – a hidden entrance – into the user’s system.
Once that backdoor is open, an attacker can inject their own code to do virtually anything. They could secretly monitor the victim, steal sensitive files, or even lock the computer and demand a ransom to return access. The attack works on both Windows and Mac computers.
Executing this hack requires one crucial step: convincing an Antigravity user to run the malicious code just once. The tool will show a button asking if the user “trusts” the code. If they click it, the damage is done.
Hackers commonly trick people into this kind of action through “social engineering.” They might pretend to be a helpful, skilled programmer on a forum, sharing what looks like a useful piece of code. An unsuspecting user, eager to try the new AI tool, could click without a second thought.
“Like Hacking in the Late 1990s”
The situation is so concerning that Portnoy compared it to the wild early days of the internet. “The speed at which we’re finding critical flaws right now feels like hacking in the late 1990s,” he wrote in a report shared with Forbes ahead of its public release.
“AI systems are shipping with enormous trust assumptions and almost zero hardened boundaries,” he added.
Portnoy immediately reported his findings to Google. The tech giant told him it had opened an investigation. However, as of Wednesday, there is no patch available to fix the problem. According to Portnoy’s report, “there is no setting that we could identify to safeguard against this vulnerability.”
Making matters worse, this is not the only problem with Antigravity. Google is aware of at least two other vulnerabilities where malicious code can trick the AI into accessing and stealing files from a user’s computer.
Other cybersecurity researchers began publishing their own findings on Tuesday, noting a pattern of security oversights. One researcher wrote, “It’s unclear why these known vulnerabilities are in the product… My personal guess is that the Google security team was caught a bit off guard by Antigravity shipping.”
A Persistent and Widespread Threat
What makes the hack Portnoy discovered particularly dangerous is its persistence. The malicious backdoor doesn’t disappear after one use. It reloads every time the victim restarts an Antigravity project and enters any prompt, even something as simple as typing “hello.”
Simply uninstalling or reinstalling the Antigravity software would not remove the threat. The user would have to manually find and delete the hidden backdoor file—a technical task far beyond the skills of the average user.
This rushed release of vulnerable AI tools is not just a Google problem. It is an industry-wide issue.
AI coding agents are “very vulnerable, often based on older technologies and never patched,” said Gadi Evron, cofounder and CEO of the AI security company Knostic. He explained that because these AI tools are given broad access to corporate networks to do their jobs, they become incredibly valuable targets for hackers.
The problem is compounded because these tools are “agentic,” meaning they can perform a series of tasks on their own, without a human watching over them.
“When you combine agentic behaviour with access to internal resources, vulnerabilities become both easier to discover and far more dangerous,” Portnoy said. He warns that the automation of AI could actually help hackers steal data faster than ever before.
Portnoy, the head researcher at AI security startup Mindgard, said his team is in the process of reporting 18 different weaknesses across various AI-powered coding tools that compete with Antigravity.
A Meaningless Security Warning
Google’s primary defense is a pop-up that requires users to agree they “trust” code before loading it. But Portnoy calls this a “meaningless” security protection. Why? Because if a user chooses not to trust the code, they are locked out of the very AI features that make Antigravity useful.
This creates a powerful incentive for IT workers and developers to simply click “trust” to get their work done, effectively bypassing the only safety net in place.
In a fascinating and troubling glimpse into the AI’s “mind,” Portnoy examined how Google’s AI model processed his malicious code. He found that the AI actually recognized there was a problem but became paralyzed by conflicting instructions.
The AI’s internal dialogue noted it was “facing a serious quandary” and felt like a “catch-22.” It suspected it was “a test of my ability to navigate contradictory constraints.”
This logical confusion is exactly the kind of weakness hackers will learn to exploit. As companies rush their AI creations into the world, they are unleashing systems that can be easily manipulated, leaving the doors to our digital lives wide open. The race for AI supremacy, it seems, has a dangerous blind spot, and everyone who uses these new tools could pay the price.