
Security researchers have uncovered a Google Gemini prompt injection vulnerability that allowed attackers to access private Google Calendar data using malicious calendar invites. The flaw abused indirect prompt injection, bypassing authorization controls without direct user interaction.
According to Miggo Security, attackers could embed hidden natural-language prompts inside a calendar invite description. When a user later asked Gemini a simple question about their schedule, the AI unintentionally processed the hidden prompt, summarized private meetings, and wrote the data into a newly created calendar event. In some enterprise setups, this event was visible to the attacker, enabling silent data exfiltration.
The issue has been fixed following responsible disclosure. However, researchers warn that AI-powered features expand the attack surface, as vulnerabilities can now exist in language and context, not just code.
This case highlights the growing risks of LLM prompt injection attacks and underscores the need for organizations to continuously audit AI systems, identities, and permissions to prevent unauthorized data access.
Source: https://thehackernews.com/2026/01/google-gemini-prompt-injection-flaw.html
