Creating better compliance management with AI

The art of balancing technology and human expertise in compliance

by Nina Koivula

It’s long been predicted that artificial intelligence (AI) will revolutionize the way we work. Recent advances in algorithms and computing have accelerated the transformation of knowledge-based jobs, including those in compliance. AI’s ability to perform generative tasks and respond to prompts makes it a perfect partner for our experts.

While this paradigm shift won’t happen overnight, companies looking for a competitive advantage must learn to adapt and integrate these technologies into their practices. Enhesa was among the first companies in this space to use AI comprehensively to augment its solutions, and we follow the same approach when it comes to generative AI. As well as monitoring the latest EHS, ESG, and product regulations, we also keep an eye on the AI space (including AI regulations) to ensure compliance with any new legal requirements.

At Enhesa, we’re not just looking to follow the hype. We want to ensure both the adoption of new technology and the upskilling of employees to provide the best service for our customers. Read on to find out how we use AI to provide a helping hand through the automation of otherwise time-consuming tasks that once slowed us in analyzing and communicating vital information to our customers.

Enhesa’s expert experience in using AI

Having an in-house team of AI engineers working in collaboration with our regulatory experts lets us design and curate AI solutions that very few companies in the EHS world can provide to their customers or staff. 

We create models focused on domain-specific, up-to-date expertise not found in existing generalized generative AI solutions. While a non-specific model might have data that is several years old, our models are familiar with the ever-changing daily updates to regulations, as we train them using proprietary legal and scientific data.

Augmented experts, not artificial expertise

We are proud to have a multidisciplinary team of international in-house experts, and to bring all that knowledge and expertise to our customers. This wide-ranging experience is what makes Enhesa so effective for so many businesses around the world.

Our experts include:

  • Toxicologists
  • Lawyers
  • Journalists
  • Analysts
  • Engineers
  • Account managers
  • Researchers
  • Translators
  • Subject matter experts
  • Developers

These and many other specialists work hard to make sure our clients receive timely, curated compliance intelligence to support their operations. But keeping up with such a vast amount of incoming data is an increasing challenge, which is why businesses benefit from enhancing their operations with AI.

While we embrace disruptive technology for the benefit of our clients and employees, our priority is to provide best-in-class solutions developed by humans, for humans. Human know-how and expertise have always been Enhesa’s greatest asset, and we intend to keep it that way.

It’s experts armed with know-how and creative problem-solving skills who bring real value to our clients. At the same time, we’re making sure they can benefit from the state-of-the-art tools we’ve built using various AI and machine learning solutions over the past five years.

Ensuring AI is implemented well

To best serve clients, it’s not enough to simply design and deploy AI-based tools. Much of the value comes from maintaining a constant dialogue with end users and giving them the training they need to get the most out of those tools. Moreover, transparency and safety are paramount to working with AI and implementing such technologies responsibly.

Keeping the application of AI safe is key to delivering quality services while protecting individuals and businesses — whether that means handling personal information, processing protected data, or communicating vital insights that could have a major influence on a company’s future actions.

For instance, because our employees analyze regulatory information in more than 35 languages, we want to make sure that we’re not only providing solutions in English – which is a common shortcut taken by many companies struggling to implement AI tools today.

Below are some examples of the other measures we take to safeguard the integrity and correct performance of our AI tools.

In-house experts for algorithm testing

It’s quite common for companies to use external remote workers, known as “mechanical turks”, to label their data for them. These workers rarely have the legal training we consider fundamental for the data we handle, let alone an understanding of EHS regulation. Some companies also use fully automated pipelines in which algorithms do the labeling. In our view, the fundamental task of interpreting legal provisions cannot be automated away.

Efficiency gains or cost savings should never come at the expense of quality. Through our own research, we’ve seen clear evidence that algorithms trained by legal experts outperform those trained by non-experts. There remains a definite need for expert intervention, which is why we maintain a careful balance of human expertise and machine learning to achieve the best-quality compliance solutions.

We know from experience that regulatory documents around the world have special features that make them difficult to understand, and equally difficult to label properly, such as:

  • Length
  • Specialized vocabulary
  • Complex structure and interconnectedness

For us, it’s a no-brainer that AI needs to collaborate with our experts — never replace them. This is why we only ever use in-house subject matter experts in training and testing our algorithms. By keeping tasks like this “in the Enhesa family” we can assure the quality of what’s produced.
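
One way to quantify the gap between expert and non-expert labeling is chance-corrected agreement with a gold standard, such as Cohen’s kappa. The sketch below is purely illustrative — the label categories and annotations are invented for the example, not drawn from Enhesa’s data — but it shows the kind of measurement that motivates keeping labeling in-house:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators agree
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label frequencies
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    pe = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (po - pe) / (1 - pe)

# Hypothetical gold-standard labels set by a senior legal expert, plus
# labels from an in-house legal expert and a non-specialist annotator
gold   = ["permit", "report", "permit", "limit", "report", "permit", "limit", "report"]
expert = ["permit", "report", "permit", "limit", "report", "permit", "limit", "limit"]
crowd  = ["permit", "limit", "permit", "report", "report", "limit", "limit", "report"]

print(f"expert vs gold: kappa = {cohens_kappa(gold, expert):.2f}")  # 0.81
print(f"crowd  vs gold: kappa = {cohens_kappa(gold, crowd):.2f}")   # 0.44
```

A kappa near 1 indicates near-perfect agreement beyond chance; in practice, a sustained gap like this between annotator groups is a strong signal that domain training matters for label quality.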

Our approach to generative AI

Generative AI refers to powerful artificial intelligence algorithms that can react to and generate human-like text, based on a combination of immense amounts of training data and deep learning methods. It’s useful for generating content, answering questions, and translating or transforming language data. However, its understanding is based on statistical patterns rather than true comprehension. We’ve all seen what happens when lawyers file court submissions generated by generic large language models without understanding the risks this entails.

We’re building our own GenAI models based on domain-specific, up-to-date expertise that isn’t found in general-purpose GenAI solutions. These popular off-the-shelf models may be working with information that is several years out of date, while our algorithms are familiar with the ever-changing updates to regulations. Our current GenAI tools focus on summarization, information retrieval, and data synthesis.
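
The information-retrieval half of that pipeline can be pictured as ranking an up-to-date corpus of regulatory snippets against a user’s question before any generation happens. The toy sketch below uses a plain TF-IDF scorer over invented snippets — the document IDs and texts are hypothetical, and a production system would index a continuously updated proprietary corpus rather than three hard-coded strings:

```python
import math
from collections import Counter

# Hypothetical regulatory snippets standing in for an indexed corpus
docs = {
    "waste-2024": "hazardous waste storage requires a permit and quarterly reporting",
    "air-2023": "air emission limits apply to combustion sources above threshold capacity",
    "chem-2024": "chemical inventory reporting is due annually under the updated rule",
}

def tokenize(text):
    return text.lower().split()

# Inverse document frequency: rarer terms carry more weight
df = Counter(t for text in docs.values() for t in set(tokenize(text)))
idf = {t: math.log(len(docs) / n) for t, n in df.items()}

def retrieve(query, k=1):
    """Rank snippets by summed TF-IDF weight of the query terms they contain."""
    q = tokenize(query)
    scores = {}
    for doc_id, text in docs.items():
        tf = Counter(tokenize(text))
        scores[doc_id] = sum(tf[t] * idf.get(t, 0.0) for t in q)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(retrieve("when is chemical reporting due"))  # ['chem-2024']
```

Grounding a generative model in retrieved, current documents like this — rather than relying on whatever was in its training data — is what keeps answers tied to the regulation as it stands today.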

We want our generative models to be powered by legal expertise, and so we train them based on proprietary legal and scientific data – but great care must be taken to capture this accurately and responsibly. A one-size-fits-all approach simply doesn’t work.

Assured usage with an AI policy

Our day-to-day work is guided by our AI policy. While AI-based applications can deliver efficiency gains across many tasks, over-reliance on AI also risks skill atrophy. That’s why it’s imperative to have a strong policy for the use of AI, particularly generative AI, that defines clear limits on its reach within the organization.

For example, we have methods in place to assure continued product delivery to our clients even if our AI systems were temporarily unavailable or compromised. Moreover, our policy lets us draw a firm line around what AI can be used for: high-risk use cases, such as hiring processes, can be identified and kept strictly free of AI.

What’s the future of AI at Enhesa?

Our work with AI is now further accelerated through the establishment of a separate cross-company AI department under Dr. Alexander Sadovsky.

In the future, it may be impossible to tell the difference between human-made and machine-made content.

Our clients can rest assured, though, that we’ll always have an “expert-in-the-loop” when providing them with regulatory intelligence services. Our goal in 2024 is to deliver tailored compliance data to our clients faster than ever, without compromising the quality or thoroughness of our analysis. We strive toward improved customer outcomes and the discovery of further organizational efficiencies.

See more about how Enhesa uses AI

Want to know more about the ways our solutions incorporate AI to enhance the services we offer?

Come find out