{
    "id": 74632,
    "date": "2026-04-22T08:54:44",
    "date_gmt": "2026-04-22T01:54:44",
    "guid": {
        "rendered": "https:\/\/hbbgroup.net\/google-fixes-ai-coding-tool-flaw-that-let-attackers-execute-malicious-code-report\/"
    },
    "modified": "2026-04-22T08:54:44",
    "modified_gmt": "2026-04-22T01:54:44",
    "slug": "google-fixes-ai-coding-tool-flaw-that-let-attackers-execute-malicious-code-report",
    "status": "publish",
    "type": "post",
    "link": "https:\/\/hbbgroup.net\/en_us\/google-fixes-ai-coding-tool-flaw-that-let-attackers-execute-malicious-code-report\/",
    "title": {
        "rendered": "Google Fixes AI Coding Tool Flaw That Let Attackers Execute Malicious Code: Report"
    },
    "content": {
        "rendered": "<div>\n<div>\n<h4 color=\"#333\">In brief<\/h4>\n<ul>\n<li>Researchers found a prompt injection vulnerability in Google\u2019s Antigravity AI coding platform.<\/li>\n<li>The flaw could allow attackers to execute commands even with the platform\u2019s Secure Mode enabled.<\/li>\n<li>Google fixed the issue Feb. 28 after researchers disclosed it in January, Pillar Security said.<\/li>\n<\/ul>\n<\/div>\n<p>Google has patched a vulnerability in its Antigravity AI coding platform that researchers say could allow attackers to run commands on a developer\u2019s machine through a <a href=\"https:\/\/decrypt.co\/338143\/copypasta-attack-shows-prompt-injections-infect-ai-scale\" target=\"_blank\" rel=\"noopener\">prompt injection<\/a> attack.<\/p>\n<p>According to a <a href=\"https:\/\/www.pillar.security\/blog\/prompt-injection-leads-to-rce-and-sandbox-escape-in-antigravity\" target=\"_blank\" rel=\"noopener nofollow external\">report<\/a> by Cybersecurity firm Pillar Security, the flaw involved Antigravity\u2019s find_by_name file search tool, which passed user input directly to an underlying command-line utility without validation. That allowed malicious input to convert a file search into a command execution task, enabling remote code execution.<\/p>\n<p>\u201cCombined with Antigravity&#8217;s ability to create files as a permitted action, this enables a full attack chain: stage a malicious script, then trigger it through a seemingly legitimate search, all without additional user interaction once the prompt injection lands,\u201d Pillar Security researchers wrote.<\/p>\n<p>Launched last November, Antigravity is Google\u2019s AI-powered development environment designed to help programmers write, test, and manage code with the assistance of autonomous software agents. Pillar Security disclosed the issue to Google on January 7, and Google acknowledged the report the same day, marking the issue as fixed on February 28.<\/p>\n<p>Google did not immediately respond to a request for comment by <i>Decrypt.<\/i><\/p>\n<p>Prompt injection attacks occur when hidden instructions embedded in content cause an AI system to perform unintended actions. Because AI tools often process external files or text as part of normal workflows, the system may interpret those instructions as legitimate commands, allowing an attacker to trigger actions on a user\u2019s machine without direct access or additional interaction.<\/p>\n<p>The threat of prompt injection attacks for large language models came into renewed focus last summer when ChatGPT developer OpenAI <a href=\"https:\/\/decrypt.co\/331756\/chatgpt-agent-book-browse-fill-forms-just\" target=\"_blank\" rel=\"noopener\">warned<\/a> that its new ChatGPT agent could be compromised.<\/p>\n<p>\u201cWhen you sign ChatGPT agent into websites or enable connectors, it will be able to access sensitive data from those sources, such as emails, files, or account information,\u201d OpenAI wrote in a blog post.<\/p>\n<p>To demonstrate the Antigravity issue, the researchers created a test script inside a project workspace and triggered it through the search tool. 
<p>Launched last November, Antigravity is Google\u2019s AI-powered development environment designed to help programmers write, test, and manage code with the assistance of autonomous software agents. Pillar Security disclosed the issue to Google on January 7, and Google acknowledged the report the same day, marking the issue as fixed on February 28.<\/p>\n<p>Google did not immediately respond to a request for comment by <i>Decrypt<\/i>.<\/p>\n<p>Prompt injection attacks occur when hidden instructions embedded in content cause an AI system to perform unintended actions. Because AI tools often process external files or text as part of normal workflows, the system may interpret those instructions as legitimate commands, allowing an attacker to trigger actions on a user\u2019s machine without direct access or additional interaction.<\/p>\n<p>The threat of prompt injection attacks to large language models came into renewed focus last summer, when ChatGPT developer OpenAI <a href=\"https:\/\/decrypt.co\/331756\/chatgpt-agent-book-browse-fill-forms-just\" target=\"_blank\" rel=\"noopener\">warned<\/a> that its new ChatGPT agent could be compromised.<\/p>\n<p>\u201cWhen you sign ChatGPT agent into websites or enable connectors, it will be able to access sensitive data from those sources, such as emails, files, or account information,\u201d OpenAI wrote in a blog post.<\/p>\n<p>To demonstrate the Antigravity issue, the researchers created a test script inside a project workspace and triggered it through the search tool. When executed, the script opened the computer\u2019s calculator application, showing that the search function could be turned into a command execution mechanism.<\/p>\n<p>\u201cCritically, this vulnerability bypasses Antigravity\u2019s Secure Mode, the product\u2019s most restrictive security configuration,\u201d the report said.<\/p>\n<p>The findings highlight a broader security challenge facing AI-powered development tools as they begin to execute tasks autonomously.<\/p>\n<p>\u201cThe industry must move beyond sanitization-based controls toward execution isolation. Every native tool parameter that reaches a shell command is a potential injection point,\u201d Pillar Security said. \u201cAuditing for this class of vulnerability is no longer optional, and it is a prerequisite for shipping agentic features safely.\u201d<\/p>\n<\/div>",
        "protected": false
    },
    "excerpt": {
        "rendered": "<p>In brief Researchers found a prompt injection vulnerability in Google\u2019s Antigravity AI coding platform. The flaw could allow attackers to [&hellip;]<\/p>",
        "protected": false
    },
    "author": 5,
    "featured_media": 74633,
    "comment_status": "open",
    "ping_status": "open",
    "sticky": false,
    "template": "",
    "format": "standard",
    "meta": {
        "_acf_changed": false,
        "footnotes": ""
    },
    "categories": [
        220
    ],
    "tags": [],
    "class_list": [
        "post-74632",
        "post",
        "type-post",
        "status-publish",
        "format-standard",
        "has-post-thumbnail",
        "hentry",
        "category-tien-dien-tu"
    ],
    "acf": [],
    "_links": {
        "self": [
            {
                "href": "https:\/\/hbbgroup.net\/en_us\/wp-json\/wp\/v2\/posts\/74632",
                "targetHints": {
                    "allow": [
                        "GET"
                    ]
                }
            }
        ],
        "collection": [
            {
                "href": "https:\/\/hbbgroup.net\/en_us\/wp-json\/wp\/v2\/posts"
            }
        ],
        "about": [
            {
                "href": "https:\/\/hbbgroup.net\/en_us\/wp-json\/wp\/v2\/types\/post"
            }
        ],
        "author": [
            {
                "embeddable": true,
                "href": "https:\/\/hbbgroup.net\/en_us\/wp-json\/wp\/v2\/users\/5"
            }
        ],
        "replies": [
            {
                "embeddable": true,
                "href": "https:\/\/hbbgroup.net\/en_us\/wp-json\/wp\/v2\/comments?post=74632"
            }
        ],
        "version-history": [
            {
                "count": 0,
                "href": "https:\/\/hbbgroup.net\/en_us\/wp-json\/wp\/v2\/posts\/74632\/revisions"
            }
        ],
        "wp:featuredmedia": [
            {
                "embeddable": true,
                "href": "https:\/\/hbbgroup.net\/en_us\/wp-json\/wp\/v2\/media\/74633"
            }
        ],
        "wp:attachment": [
            {
                "href": "https:\/\/hbbgroup.net\/en_us\/wp-json\/wp\/v2\/media?parent=74632"
            }
        ],
        "wp:term": [
            {
                "taxonomy": "category",
                "embeddable": true,
                "href": "https:\/\/hbbgroup.net\/en_us\/wp-json\/wp\/v2\/categories?post=74632"
            },
            {
                "taxonomy": "post_tag",
                "embeddable": true,
                "href": "https:\/\/hbbgroup.net\/en_us\/wp-json\/wp\/v2\/tags?post=74632"
            }
        ],
        "curies": [
            {
                "name": "wp",
                "href": "https:\/\/api.w.org\/{rel}",
                "templated": true
            }
        ]
    }
}