How does AI factor into ESG?

A regulatory expert’s examination of how the proliferation of AI fits into ESG frameworks and what it means for businesses looking to keep up with requirements.


by Louisa Meliksetyan

Artificial intelligence (AI) is reshaping how businesses operate, think about development, and manage regulatory requirements. One area that’s often overlooked, however, is ESG and the ways that AI can help or hinder when it comes to sustainability goals, requirements, and reporting. 

In this article, EHS and Sustainability Regulatory Consultant Louisa Meliksetyan explores how the rapid integration of artificial intelligence across industries creates both opportunities and risks in the context of ESG, and why companies must align AI deployment with emerging legal requirements, ethical standards, and sustainability goals.

The relationship between AI and ESG

AI is becoming deeply embedded across a wide range of industries and operational functions, from manufacturing and finance to healthcare and logistics. While it brings significant efficiencies and transformative potential, its rapid integration also introduces complex risks, ranging from environmental costs and labor displacement to algorithmic bias and governance failures.  

These growing impacts and ethical challenges make it increasingly necessary to examine AI through the lens of ESG, ensuring its development and deployment align with sustainable and responsible business practices. Among these considerations, the implications of AI across the value chain are particularly far-reaching and complex.

In light of this, a key question arises: do existing ESG frameworks adequately capture the risks and opportunities associated with AI? While some aspects of AI, such as cybersecurity and data privacy, are already addressed within governance-related ESG disclosures, these frameworks often fall short in addressing AI-specific concerns. There’s ongoing debate about whether AI should be addressed across existing ESG categories or given its own separate metric.

 

Is AI considered a material topic?

Even though AI isn’t currently listed as a standalone disclosure requirement under the European Sustainability Reporting Standards (ESRS), its material impacts are becoming increasingly evident in both financial and impact materiality terms.  

Under the Corporate Sustainability Reporting Directive (CSRD) and ESRS, a material topic is one that has actual or potential significant impacts on people or the environment (impact materiality), and/or is likely to affect the company’s financial position, development, or performance (financial materiality).  

For companies whose operations rely heavily or exclusively on AI systems, this reliance may already meet both materiality thresholds. Moreover, AI can become a material topic even in less AI-centric businesses if, for example, its use leads to adverse outcomes, such as discrimination, labor displacement, or excessive resource use, that trigger stakeholder concern, reputational damage, or compliance risks.

The practical application of AI for ESG success

Given the breadth of its applications, AI can affect all three pillars of ESG — environment, social, and governance — in myriad ways. Here’s a breakdown of how AI can and should be considered within each.

 

AI in the environmental pillar of ESG

The environmental pillar of ESG addresses the company’s impact on natural resources, ecosystems, and climate.

In the context of AI, this pillar presents a dual challenge: while AI can significantly enhance environmental performance, it also comes with its own resource footprint, primarily in the form of energy and water use, and associated emissions. Before diving into the ways in which AI can support environmentally responsible decision-making, it’s important to assess how to use AI sustainably in the first place.

 

Sustainable use of AI

It’s no longer news that the use of AI, especially large language models (LLMs) and generative AI, requires significant amounts of electricity and water for model training and inference. According to PwC estimates, AI could account for up to 15% of global greenhouse gas emissions by 2040.

It’s therefore crucial for companies to evaluate whether AI is the right tool for the job. Not all tasks require large, complex models, as smaller, more efficient algorithms or classical machine learning methods may suffice for tasks like data extraction and trend analysis. A task-based environmental impact assessment should become a standard practice.

Drawing inspiration from “privacy by design”, the principle that tools processing personal data should be built with privacy embedded from the outset, a new principle of “sustainability by design” should guide the use of AI tools. Companies should actively choose models based on energy efficiency. This requires close collaboration between sustainability officers, legal teams, and technical developers.

 

Practical use of AI in environmental management

AI technology offers a wide range of opportunities to support environmental compliance. As highlighted in KPMG’s report on ESG in the Age of AI, there are multiple ways AI can support both short- and long-term ESG goals.

These include:

  • Collating ESG-related data
  • Forecasting emissions
  • Linking environmental data to financial growth
  • Conducting climate risk assessments

Companies use AI to track and analyze carbon emissions, particularly Scope 1 and 2, by integrating energy, logistics, and production data and thus improving ESG reporting accuracy. It also supports pollution control through models that detect and predict pollutant levels, comparing them against compliance thresholds and triggering alerts when limits are approached.
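The threshold-alert pattern described above can be sketched in a few lines. Everything here is hypothetical: the pollutant names, limits, and readings are invented, and a real system would draw its predicted levels from a trained model rather than a literal dictionary:

```python
# Minimal sketch of a pollutant threshold alert: compare predicted
# concentrations against compliance limits and flag approaching breaches.
# All pollutant names, limits, and readings are hypothetical examples.

LIMITS_MG_M3 = {"NOx": 40.0, "SO2": 20.0, "PM2.5": 10.0}
WARN_FRACTION = 0.8  # alert when a predicted level reaches 80% of its limit

def check_levels(predicted: dict[str, float]) -> list[str]:
    """Return alert messages for pollutants nearing or over their limit."""
    alerts = []
    for pollutant, level in predicted.items():
        limit = LIMITS_MG_M3.get(pollutant)
        if limit is None:
            continue  # no compliance limit configured for this pollutant
        if level >= limit:
            alerts.append(f"BREACH: {pollutant} at {level} mg/m3 (limit {limit})")
        elif level >= WARN_FRACTION * limit:
            alerts.append(f"WARNING: {pollutant} at {level} mg/m3 nearing limit {limit}")
    return alerts

print(check_levels({"NOx": 33.5, "SO2": 4.1, "PM2.5": 11.2}))
```

The value of the AI layer sits upstream of this check, in predicting the levels before they occur, which is what turns a compliance breach into an early warning.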

In infrastructure-heavy sectors, companies apply AI to analyze weather patterns and vegetation growth to manage trees and plants around sites in the least disruptive, most environmentally sound manner.

When combined with a sustainable approach to AI, use of these applications can significantly amplify environmental benefits while minimizing the technology’s own resource footprint.

 

AI in the social pillar of ESG

The social pillar of ESG concerns how a company impacts people: employees, consumers, communities, and those affected throughout its supply chains.

Artificial intelligence plays a dual role here: it can enhance social responsibility efforts but also introduces ethical risks that must be governed responsibly.

 

Practical use of AI in social responsibility

There are numerous practical applications of AI to support social responsibility. For example, AI can be used to monitor whether workers are wearing personal protective equipment (PPE), or to deploy AI-guided drones that identify potential safety hazards, thereby reducing human exposure to danger and shifting safety practices from reactive to proactive.

Because of its capacity to scan and analyze large volumes of data, AI can be used to monitor labor conditions and human rights risks among suppliers, for example by automatically reviewing supplier audit reports and labor records, as discussed in EY’s report. This enables earlier identification of unethical practices and supports responsible supply chain oversight.

AI also improves consumer safety and product integrity. On production lines, it can detect contaminants or quality issues, such as spoilage, mislabeling, or defective safety seals, thus enhancing health protections.

Social listening tools powered by AI can help companies understand community sentiment around their operations. This is especially valuable when preparing for public hearings required during environmental impact assessments for large infrastructure or industrial projects.

Companies increasingly use AI algorithms to detect fraudulent activity and ensure fair, non-discriminatory AI-driven customer interactions. Similarly, when managing consumer complaints, AI can analyze feedback at scale to identify recurring issues with products or suppliers, enabling faster resolution and continuous improvement.

 

Responsible use, legal compliance, and ethical oversight

The discussed applications of AI must, however, comply with evolving legal and ethical frameworks. The most comprehensive and modern attempt to regulate AI to date is the EU AI Act, which — amongst other things — sets out specific prohibited uses of AI systems. These include the classification of individuals based on their social behavior or known, inferred, or predicted personal or personality traits.

While the EU AI Act has extraterritorial reach, its effectiveness remains geographically and jurisdictionally constrained. Because enforcement mechanisms are tied to EU institutions and legal frameworks, the Act cannot prevent the deployment of prohibited AI systems in other regions or on global digital platforms.

In this context, it becomes especially important for companies to go beyond mere legal compliance and embrace a proactive ethical approach. Relying on regulatory grey areas in cross-border digital services and e-commerce may expose businesses to reputational, operational, and legal risks, and even greater risks for the communities in which they operate. Ethical AI use should therefore be a core corporate priority, driven by internal accountability rather than external enforcement alone.

Another legal constraint companies should keep in mind when deploying such models is data protection regulation. For example, under the General Data Protection Regulation (GDPR), individuals have the right not to be subject to decisions based solely on automated processing, along with a right to human intervention in such decisions. Companies should therefore be aware that any affected person, whether an employee or a current or prospective customer, can challenge an AI-driven decision that has significantly affected them.

Lastly, the expanding use of AI in business processes inevitably reduces reliance on human labor, which can lead to progressive layoffs. Reporting companies — especially large employers — must recognize that they shape not only the economy but also the labor markets of the communities in which they operate. This creates a responsibility to manage workforce transitions ethically and sustainably. One such approach is to invest in employee reskilling or upskilling, helping workers adapt to shifts in the labor market and ensuring the company continues to contribute positively to its social environment.

 

AI in the governance pillar of ESG

The governance pillar of ESG reporting serves, among other purposes, to demonstrate the transparency, accountability, and integrity of decision-making at the board and executive levels.

Companies are increasingly expected to disclose how sustainability matters are integrated into their governance structures, including the rationale behind key decisions, the oversight of material ESG risks and opportunities, and the extent to which these decisions are supported by data, metrics, and measurable impacts.

 

Use of AI in decision making processes

One of the most strategic advantages of AI lies in its ability to process and analyze large volumes of data, uncovering patterns and insights that may be overlooked by human analysis. This includes the instant analysis of geopolitical news, real-time environmental data, legal developments, and public disclosures, delivering an unprecedented amount of analytics with minimal manual effort.

For reporting companies, AI can identify inconsistencies across datasets, anticipate future logistical or operational needs, flag legal compliance risks, improve resource allocation, and generate recommendations for future actions.

AI can also play a compliance and oversight role, identifying potential signs of fraud, discrimination, or conflicts of interest by cross-referencing internal records, like contracts and payment records, with publicly available or third-party data.
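A minimal sketch of that cross-referencing idea follows. All vendor names and records are invented for illustration; a production system would match payment data against real registries and handle name variants, not exact strings:

```python
# Sketch of compliance cross-referencing: flag payments to vendors that also
# appear in a (hypothetical) register of employee-linked companies.
# All names, amounts, and records are invented for illustration.

payments = [
    {"vendor": "Acme Logistics", "amount": 12_000},
    {"vendor": "Blue Ridge Consulting", "amount": 48_500},
]

# Hypothetical register of companies linked to current staff.
employee_linked_vendors = {"blue ridge consulting"}

def flag_conflicts(payments: list[dict], linked: set[str]) -> list[dict]:
    """Return payments whose vendor matches a known employee-linked company."""
    return [p for p in payments if p["vendor"].lower() in linked]

flagged = flag_conflicts(payments, employee_linked_vendors)
print(flagged)  # the Blue Ridge Consulting payment is flagged for review
```

The point of the AI layer in practice is the matching itself, reconciling messy, inconsistently named records across internal and third-party sources, which simple string lookups like this cannot do alone.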

At present, the use of AI in corporate governance largely takes the form of an analytical assistant to boards, helping them process information and make more informed decisions. Looking further ahead, some have even raised the prospect of giving AI tools a formal role in governance processes — there have already been notable precedents in Hong Kong and the UAE.

 

Risks: Data quality and decision transparency

Despite its capabilities, AI also introduces critical risks to governance, particularly in relation to data quality and the transparency of decision-making.

AI models are highly dependent on the data they’re trained on — therefore, quality, diversity, and availability of data are critical. When using AI tools, companies must consider differences in the social development of countries, as well as uneven availability of relevant and representative data. Many widely used models are predominantly trained on information sourced from the developed world, which means they may not analyze or interpret data related to developing regions with equal accuracy or nuance. This disparity can lead to social and cultural biases, as well as omissions of ethically sensitive or locally significant topics.

Moreover, the fact that the training data used by many of the most prominent LLMs remains undisclosed, often protected as a trade secret, further underscores the need for vigilance. Companies must be cautious when relying on such systems, especially in regulated or ethically complex contexts.

Finally, a crucial part of ESG reporting — for example, under ESRS GOV-1 and GOV-2 — is the requirement to disclose the basis on which decisions are made, including the information provided to the board. To ensure transparency and diligent reporting, companies should establish efficient human oversight mechanisms when using third-party AI tools and LLMs. Given that the reasoning behind outputs generated by these models is often not readily explainable, detecting biases or factual errors can be exceptionally challenging.

To ensure responsible AI use, companies should institutionalize oversight by embedding it into their governance structures. This can include appointing a dedicated AI ethics officer or establishing cross-functional committees to review high-risk AI deployments.

 

The role of responsible AI policies

An important step toward demonstrating the responsible and sustainable use of AI, as well as supporting the reporting of measurable impacts, is the adoption of comprehensive AI policies.

Even before the EU AI Act was introduced in draft form, the need for clear internal governance of AI became increasingly apparent. Having an AI policy has emerged as a key indicator of corporate accountability, particularly for companies working with large volumes of data. While it’s not yet universally mandatory, an increasing number of companies are now developing and implementing AI policies, often aligning them with the core principles outlined in the EU AI Act, such as transparency, fairness, and human oversight.

Yet a critical question remains: is having a policy enough to ensure sustainable and ethical use of AI tools? Core principles are a starting point, but without clear implementation mechanisms, those policies risk becoming symbolic. Without real substance, companies may fall into the trap of “ethical greenwashing”, projecting a responsible AI stance while exaggerating benefits and obscuring potential risks from stakeholders and investors.

To avoid this, companies need to go beyond high-level commitments and build robust internal governance structures. This includes tailored operational policies that reflect internal processes, assess specific risks, and define mitigation strategies, such as mapping data flows, applying purpose limitations, and restricting model types and data categories used in LLMs. Embedding these practices into day-to-day operations will be essential to ensure that AI use remains aligned with sustainability and human rights values.

Accountability will forge the path to ESG-aligned AI

As AI becomes a core part of business operations, its impact on ESG is clear and growing. While it can help companies meet sustainability and governance goals, it also brings serious risks that current frameworks don’t fully address.

To use AI responsibly, companies need to go beyond basic compliance. This means:

  • Putting proper oversight in place
  • Following legal requirements
  • Ensuring transparency
  • Creating clear internal policies

Taking these steps will help businesses get the most out of AI while staying true to their ESG commitments and preparing for future developments, including the potential integration of AI as a distinct ESG metric.

Read more about how Enhesa is handling the emergence of AI

At Enhesa, we take the use of AI very seriously. After all, data and its analysis is the cornerstone of what we do for our clients. That’s why our in-house AI team works to create AI tools that will have direct benefit for our experts and solutions, and thereby our customers — all done safely and ethically.

Learn more about our in-house AI team, including the trends in AI for sustainability and compliance and how they shape the work that we do.
