
AI governance gap: 95% of firms haven’t implemented frameworks


Robust governance is essential to mitigate AI risks and keep systems responsible, but the majority of firms have yet to implement a framework.

A new report, commissioned by Prove AI and conducted by Zogby Analytics, polled more than 600 CEOs, CIOs, and CTOs from large companies across the US, UK, and Germany. The findings show that 96% of organisations are already utilising AI to support business operations, with the same percentage planning to increase their AI budgets in the coming year.

The primary motivations for AI investment include increasing productivity (82%), improving operational efficiency (73%), enhancing decision-making (65%), and achieving cost savings (60%). The most common AI use cases reported were customer service and support, predictive analytics, and marketing and ad optimisation.

Despite the surge in AI investments, business leaders are acutely aware of the additional risk exposure that AI brings to their organisations. Data integrity and security emerged as the biggest deterrents to implementing new AI solutions.

Executives also reported encountering various AI performance issues, including:

  • Data quality issues (e.g., inconsistencies or inaccuracies): 41%
  • Bias detection and mitigation challenges in AI algorithms, leading to unfair or discriminatory outcomes: 37%
  • Difficulty in quantifying and measuring the return on investment (ROI) of AI initiatives: 28%

Yet while 95% of respondents expressed confidence in their organisation's current AI risk management practices, the report revealed a striking gap between that confidence and actual governance implementation.

Only 5% of executives reported that their organisation has implemented any AI governance framework. However, 82% stated that implementing AI governance solutions is a somewhat or extremely pressing priority, with 85% planning to implement such solutions by summer 2025.

The report also found that 82% of participants support an AI governance executive order to provide stronger oversight. Additionally, 65% expressed concern about IP infringement and data security.

Mrinal Manohar, CEO of Prove AI, commented: “Executives are making themselves clear: AI’s long-term efficacy, including providing a meaningful return on the massive investments organisations are currently making, is contingent on their ability to develop and refine comprehensive AI governance strategies.

“The wave of AI-focused legislation going into effect around the world is only increasing the urgency; for the current wave of innovation to continue responsibly, we need to implement clearer guardrails to manage and monitor the data informing AI systems.”

As global regulations like the EU AI Act loom on the horizon, the report underscores the importance of de-risking AI and the work that still needs to be done. Implementing and optimising dedicated AI governance strategies has emerged as a top priority for businesses looking to harness the power of AI while mitigating associated risks.

The findings of this report serve as a wake-up call for organisations to prioritise AI governance as they continue to invest in and deploy AI technologies. Responsible implementation and robust governance frameworks will be key to unlocking the full potential of AI while maintaining trust and compliance.

(Photo by Rob Thompson)

See also: Scoring AI models: Endor Labs unveils evaluation tool

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Tags: ai, artificial intelligence, data, enterprise, ethics, framework, governance, law, legal, machine learning, report, research, security, study

