Board Priorities in 2025: Artificial intelligence
16 January 2025
Boards must navigate the evolving regulatory landscape of artificial intelligence (AI) with strategic foresight. Here are the critical areas to focus on.
In 2024, businesses rushed to procure and deploy generative AI tools to stay competitive. However, many struggled to identify valuable use cases and to integrate these tools with their existing systems and data, hampering their ability to extract maximum value from these investments. In 2025, the focus should shift to understanding the true capabilities of AI and embedding it effectively into core business processes, unlocking its full potential while managing the systemic risks it can introduce.
In response to the rapid adoption of generative AI, many organisations developed AI policies to address its usage. As AI becomes embedded in critical business processes, standalone AI policies will no longer be enough to govern AI and manage its risks. Directors should ensure that holistic AI governance frameworks are developed, and that adequate governance structures and oversight mechanisms are in place to identify, escalate, and monitor AI risks effectively.
The UK government is finally set to legislate on AI, targeting companies responsible for the most powerful large language models. Of greater interest to Boards that fall outside that narrow scope is the sector-specific guidance and regulation expected over the coming months, particularly in financial services. The government's pro-innovation approach relies on sector regulators to shape AI regulation within their domains. Boards must stay informed about these regulatory developments and take steps to ensure compliance.
In contrast to the light-touch regulation and legislation proposed in the UK, other countries have adopted more prescriptive legislation, notably the EU AI Act. This legislation has extra-territorial reach, catching any AI system that interacts with the EU and its citizens, making it crucial for multinational companies to identify and risk-assess AI usage across their business. Australia's upcoming Privacy Act reform will focus on automated decision-making (ADM) and AI, requiring organisations to map the systems that consume personal data. Even for businesses not caught by the EU AI Act, the risk classifications used within the legislation are useful tools for assessing risk and meeting other emerging regulatory requirements.