AI-led vibe coding amplifying security and governance fears

The rapid rollout of artificial intelligence tools and vibe coding across organisations is outpacing companies' ability to monitor and control them, creating new security and governance risks, chief technology and security officers told ET.

Executives warned that risks such as prompt injection, data exposure and vulnerabilities in widely used models could trigger large-scale impact, raising the threat of a wave of shadow AI (the use of unapproved AI tools) if governance frameworks fail to keep pace.

Many firms today lack clear visibility into the AI systems being used as business teams deploy models and pipelines faster than central oversight can keep up, executives said.

“Enterprises know what workloads they deployed six months ago,” said Arjun Nagulapally, chief technology officer at AionOS. “Very few can tell you with confidence what AI workloads are running right now, who authorised them, what data they are touching, and how the models are behaving.”

Companies are now tightening processes, treating AI-generated code on par with human-written code, with strict reviews, testing and audit trails. But the push for faster adoption is real.

Amith Singhee, CTO at IBM India and South Asia, said the company is seeing productivity gains of 35-40% across various stages of the software engineering lifecycle when AI agents are used effectively. “There is a growing shift toward specification-based coding and stronger test-driven development, where developers guide AI agents toward defined engineering goals,” he said.