Bridging the AI gap: governing emerging technologies in an evolving digital landscape

In recent years, artificial intelligence has become a staple of daily operations for businesses across the world. In Europe, ISACA’s recent AI Pulse Poll found that 83% of IT and business professionals report AI use within their organisations. However, despite this rapid uptake, only 31% of organisations have a comprehensive AI policy in place.

This disconnect highlights a troubling and widening governance gap. AI offers both promise and peril. It can boost productivity and efficiency and even strengthen cybersecurity, but the very features that make it powerful for defence – speed, scale, and adaptability – also make it an attractive tool for attackers. To keep pace, organisations must invest not only in adoption, but also in the structures, skills and safeguards that enable responsible, secure and resilient use.

Harnessing the potential of AI

AI’s potential is revolutionary, and its benefits are already being felt across European businesses every day. ISACA’s 2025 AI Pulse Poll found that more than half of professionals (56%) state that AI has boosted organisational productivity, 71% have seen efficiency gains and time savings, and 62% are optimistic that it will positively impact their organisation in the next year.

But these breakthroughs come hand-in-hand with new and evolving risks.

There is potential for data leakage when using AI tools, and cybercriminals are exploiting generative capabilities to make their attacks more sophisticated and difficult to detect, as was suspected in a recent phishing campaign targeting Microsoft. AI can be used to create highly personalised campaigns, generate malicious code, and produce synthetic content such as deepfakes, which are becoming more widespread and harder to identify.

Nearly three-quarters of professionals (71%) expect these risks to grow in the year ahead, yet only 18% of organisations are investing in countermeasures, according to ISACA’s poll.

As the risks grow, so too does the urgency for stronger oversight. The use of AI is becoming more prevalent across workplaces, but the speed of adoption continues to outpace the structures needed to govern it. Regulating its use and ensuring staff are trained on how to use it safely and securely is essential if we are to keep up with bad actors, protect our organisations and ensure the safe use of AI in workplaces.

The gap between adoption and accountability

It’s clear that many organisations still see AI governance as an abstract, future concern, when in reality it is already a core operational issue with regulatory, reputational and ethical dimensions.

Policymakers are moving to catch up, but current regulation is not enough to close the gap in the immediate future. Action is needed now if we are to stem the rising tide of sophisticated cyber attacks fuelled by AI developments.

There are key steps businesses can take to boost resilience, including putting in place internal policies, processes and controls that allow employees to use AI responsibly and securely. Robust, role-specific guidelines as part of a wider, formal AI policy – covering everything from “when to use AI” to “how to spot a deepfake” – are essential to help businesses safely maximise AI’s abilities and build resilience into their operations.

Supporting businesses with AI governance

For businesses to govern AI effectively, there’s no doubt that they need practical support and clear direction — from assurance frameworks and audit tools to tips for stronger collaboration between privacy, cybersecurity and legal teams.

At the same time, skills must keep pace with technology. ISACA’s AI Pulse Poll shows that 42% of professionals in Europe believe they will need to increase their AI knowledge within the next six months, and almost nine in ten say they will need new skills within the next two years.

To help organisations combat this knowledge gap, ISACA is providing a range of practical resources — from free content guides and tailored training courses to new certification programmes, including the Advanced AI Audit (AAIA) and Advanced AI Security Management (AAISM) certifications.

The AAISM certification is the first and only AI-centric security management certification. It is designed to equip cybersecurity and audit professionals with the specialised skills needed to manage evolving security risks related to AI, implement policy and ensure its responsible and effective use across the organisation.

AAIA is the first and only advanced audit-specific artificial intelligence certification designed for experienced auditors. It validates expertise in conducting AI-focused audits, addressing AI integration challenges and enhancing audit processes through AI-driven insights. It covers the key domains of AI governance and risk, AI operations and AI auditing tools and techniques. ISACA has recently expanded the eligibility scope for the AAIA, allowing more auditors the chance to address the challenges and opportunities presented by AI in the field.

These resources are designed to translate fast-moving technological change into practical, actionable steps that businesses can use to strengthen governance and resilience.

Hear from the experts

It’s a complex landscape, and the pace at which technologies like AI evolve can make it difficult for organisations to keep up, especially as we wait for regulatory guidance. For many, the challenge lies not only in understanding the risks, but in knowing how to implement AI safely, securely and efficiently within their business.

To help leaders navigate this, ISACA will bring together global experts to speak at its upcoming Europe Conference 2025, taking place in London from 15–17 October. AI will run as a cross-agenda topic, with sessions exploring its promise, its perils, and the practical steps organisations can take to stay ahead. Themes include safeguarding against privacy trade-offs, building secure foundations for AI deployment, tackling the dual use of AI for both defence and attack, and exploring its impact on trust, regulation and reputation.

A highlight within this will be a fireside chat between me and a senior representative from the Department for Science, Innovation and Technology (DSIT). This conversation will decode the UK’s evolving policy landscape, explore international approaches to safeguarding infrastructure and supply chains, and outline practical strategies for aligning business resilience with regulatory expectations.

Together, these sessions will translate fast-moving developments into actionable governance, stronger data discipline and measurable resilience. For the full programme, including session details, visit the conference agenda here.

From awareness to action

Organisations that act now to embed governance, train their people and align with emerging frameworks will be better placed not only to withstand AI-powered threats, but also to seize the opportunities of innovation.

The ISACA Europe Conference provides a unique opportunity to hear from experts, exchange experiences, and translate fast-moving technological change into practical resilience. For more details and to register for ISACA Europe Conference 2025 (15–17 October, London), click here: http://bit.ly/48dw6SM
