By Thomas Abrams, Head of Human Rights, Social and Governance Issues, PRI
The rapid development and deployment of artificial intelligence (AI) is widely recognised as a material investment issue, yet many investors are unsure where to begin.
AI – and its potential impacts on people, the environment, and corporate performance – is evolving quickly, while governance and regulation are not keeping pace, leading to an “AI governance gap”. In the last three months alone, AI developments have impacted the workforces and share prices of white-collar sectors from software to finance. The World Economic Forum’s Future of Jobs Report 2025 suggests that AI and information processing will affect 86% of businesses by 2030.
While even the medium-term impact of these changes remains nebulous, and the technology may feel complex or unfamiliar, investors can still take meaningful action to mitigate risk whilst realising opportunities. Many of the underlying questions investors need to ask of companies or managers – about good governance, accountability, risk management and alignment with international standards – are not new. Situating AI risks within the frame of systemic sustainability or ESG issues is a practical starting point and a way of ensuring that AI considerations are not siloed.
Why AI matters to investors now
The impacts of AI are extremely varied. Some corporate uses may improve efficiency, reduce emissions or expand access to services; others may create serious risks, including discrimination, labour displacement, misinformation, cybersecurity vulnerabilities, or significant energy and water demands.
‘AI’ is not a single technology; it covers a wide range of tools and processes, some long-established, others entirely new. Understanding where AI-related sustainability risks are already playing out in their portfolios is a key starting point for investors:
- Human rights: AI can amplify inequality, discrimination, privacy harms and opaque decision-making, making robust human rights due diligence and clear accountability essential. In May 2025, a US federal court certified a nationwide collective action against HR and finance platform Workday, alleging its AI screening tools systematically disadvantaged job applicants over 40. The court indicated that AI vendors, not just the employers using their tools, may face direct discrimination liability.
- Climate: Energy demands are soaring due to new, large AI data centres and hardware production, potentially undermining corporate net zero pathways even as AI may enable efficiency gains and climate solution modelling. The scale of this challenge is illustrated by Google’s own disclosures, which suggest its emissions have risen around 51% above its 2019 baseline, with AI-driven energy demand identified as a primary contributor, leading the company to describe its 2030 net zero target as a “moonshot” rather than a firm commitment.
- Nature: AI infrastructure, particularly data centres, has led to well-documented increased pressure on natural resources (particularly water and mineral extraction) and contributed to land use change, heightening exposure to community conflict, biodiversity loss and supply chain impacts. For example, Bloomberg found that around two thirds of new data centres built or in development since 2022 are located in water-stressed regions, raising questions about the long-term adequacy of corporate water stewardship commitments.
- Governance: AI introduces new governance challenges, including immature internal controls, cybersecurity vulnerabilities, vendor concentration and opaque decision-making, requiring boards to demonstrate credible oversight, risk management and transparency. The scale of the oversight gap is suggested by ISS Governance’s analysis of over 3,000 US-listed companies: only around 8% disclose any form of board-level oversight of AI, even as recorded AI incidents have been rising sharply.
As AI capabilities and use cases evolve rapidly, regular, ongoing environmental and human rights due diligence will be essential, rather than a one-off exercise.
The risk and opportunity landscape will continue to shift, and new principles may well be needed as the technology matures, but the foundations for responsible engagement already exist.
Early steps investors can take
Below we outline steps investors can take to begin engaging investee companies on AI:
- Seek clarity on which specific AI applications or systems investee companies are actually deploying.
- Clarify board oversight and accountability for AI use.
- Assess enterprise‑wide risk management, including how AI risks are identified, monitored and escalated.
- Check regulatory preparedness, especially as AI‑related rules evolve across jurisdictions.
- Review human rights due diligence, including impacts on workers, data subjects and affected communities.
- Understand environmental impacts, particularly energy and water use linked to AI infrastructure.
- Seek transparency on AI use cases, safeguards and reporting.
- Ensure diligent, up-to-date procurement processes and requirements of technology providers.
These expectations build on established governance frameworks such as those produced by OECD and UNESCO (see below).
Translating responsible investment principles into practice on AI
Many PRI signatories have already begun translating responsible investment principles into investor practice on AI. Some are publicly sharing their thinking and approaches:
- Railpen’s investor‑focused framework turns high‑level “responsible AI” principles into concrete governance, risk‑management, oversight, and disclosure expectations for companies.
- Nuveen sets out how AI’s environmental, social, and governance impacts should be integrated into investment analysis and stewardship across public and private markets.
- Scottish Widows recognises AI as an emerging systemic governance risk and calls for board‑level oversight and internal control frameworks aligned with the OECD’s AI principles and evolving regulation.
- TD Asset Management describes AI as a material governance risk and sets out its expectations for portfolio companies to establish clear board oversight, formal governance frameworks, and robust risk management.
- Norges Bank Investment Management frames AI as a systemic risk and opportunity affecting long-term returns, and expects companies to demonstrate governance structures for AI comparable to those for other major operational risks.
Five ways the PRI is advancing its work on AI
We are:
- highlighting how the rapid development and deployment of AI intersect with the PRI’s priority sustainability themes – human rights, governance, climate and nature – and supporting signatories to consider risks and opportunities through familiar stewardship and risk-management approaches;
- curating and signposting trusted resources from across the responsible AI ecosystem, helping signatories navigate a rapidly evolving landscape and access credible, investor-relevant guidance, including the World Benchmarking Alliance’s Ethical AI Collective Impact Coalition (PRI 2025 Award Winner), the Thomson Reuters Foundation’s AI Company Data Initiative, and the Investor AI Resource Hub;
- working with expert organisations to bring technical perspectives into investor discussions, drawing on existing expertise rather than duplicating work;
- providing dedicated support for private markets investors through our Private Equity Advisory Committee, helping them navigate AI-related risks and opportunities in the context of responsible technology investment; and
- convening signatories regionally and thematically to share emerging practice, including through a working group that will help inform the PRI’s current and future work in this area.
Useful frameworks and tools
A growing set of investor‑relevant resources can help signatories take these first steps. Many are practical, accessible and designed for non‑technical audiences:
- OECD due diligence guidance for responsible AI (2026) – a practical framework for identifying, preventing and mitigating AI‑related risks, aligned with the OECD’s responsible business conduct standards.
- SHARE’s investor advocacy on artificial intelligence (2026) – guidance on engaging companies on AI governance, transparency and human rights risks.
- RIAA’s investor toolkit on AI and human rights risks (2024) – practical steps for assessing and engaging companies on AI‑linked human rights impacts.
- UN B‑Tech resources (various) – including OHCHR’s investor tool for assessing human rights risks at technology companies, with targeted engagement questions on business models, algorithmic decision‑making and high‑risk contexts.
- UNESCO’s recommendation on AI ethics (2022) – a widely recognised set of principles for responsible AI.
PRI disclaimer: The PRI blog aims to contribute to the debate around topical responsible investment issues. It should not be construed as advice, nor relied upon. The blog is written by PRI staff members and occasionally guest contributors. Blog authors write in their individual capacity – posts do not necessarily represent a PRI view. The inclusion of examples or case studies does not constitute an endorsement by PRI Association or PRI signatories.
