Better regulatory searches with Enhesa’s AI
Learn how Enhesa’s AI helps users get faster, more accurate results from regulatory compliance searches
At Enhesa, we’re shaping a future where accessing internal data is as seamless and intuitive as having a conversation, and where managing global EHS (Environment, Health, and Safety) risk is no longer a complex, time-consuming task. Traditionally, understanding and managing thousands of records has required extensive manual research or costly managed services.
Publicly available AI tools can’t solve this problem: they risk surfacing information from public sources, such as forums and news articles, that lacks the legal rigor and credibility EHS work demands. As a result, these tools can’t offer the contextual understanding and precision that EHS experts require.
The AI team at Enhesa is bridging the gap between complexity and clarity, building solutions that connect data with the expertise of EHS legal analysts. Rather than navigating thousands of records, users can effortlessly uncover the information they need through human-like interactions.
In this article, AI experts Andrea Pennisi and Marco Ramos break down how robust AI tools make compliance information retrieval more efficient, accurate, and reliable.
Examining the Enhesa AI model
Enhesa’s AI capabilities provide faster access to compliance answers by instantly surfacing relevant legal requirements, obligations, and interpretations. For example, an EHS expert can retrieve jurisdiction-specific requirements for chemical handling in a particular region in seconds — eliminating the need to wade through dense documentation.
This innovation empowers EHS experts to make faster, more informed decisions with confidence and accuracy.
With continuous monitoring of regulatory developments, users are kept informed of emerging laws and changes in real time. The platform interprets updates and presents their implications clearly, allowing compliance teams to respond proactively to new requirements.
The power of large language models extends beyond delivering content. It enables novel comparisons and deeper analyses across jurisdictions and operational areas, functioning as an embedded consultant that enhances decision-making across the enterprise.
The result is a shift in how organizations approach compliance: EHS professionals act faster and with greater clarity, as manual interpretation gives way to expert-driven, AI-enabled insight at scale.
The importance of searching a company’s own data
The ability to efficiently search and retrieve relevant information from vast datasets is crucial for any organization. A robust internal search engine empowers employees to quickly access the information they need, leading to faster, more informed decision-making. This is especially important in regulatory environments, where up-to-date and accurate data is essential. By reducing the time spent searching, such systems enhance productivity and allow employees to focus on their core responsibilities rather than navigating through overwhelming volumes of data.
Advanced search technologies — particularly those powered by AI and Retrieval-Augmented Generation (RAG) systems — deliver more precise and contextually relevant results. These systems minimize noise, ensuring users receive only the most pertinent information for their queries. They’re designed to handle large-scale data environments and can easily scale alongside the organization. With flexible and customizable search filters, users can tailor their queries to match specific use cases or departmental needs.
Ultimately, implementing a powerful internal search engine isn’t just about improving information retrieval. It’s a strategic move that helps companies increase efficiency, enhance accuracy, and support smarter, data-driven decisions across the board. In a fast-paced and competitive landscape, intelligent search becomes a key differentiator.
Understanding search systems at Enhesa
Enhesa’s current search initiatives include Fusion Search, which spans topics from EHS to ESG, and Fusion Vision, an image-based retrieval tool. The former integrates various data sources to provide a unified search experience, while the latter lets users check guideline compliance by submitting an image of a workplace. Both combine traditional keyword-based search with more advanced RAG-based techniques to surface the most relevant information.
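To make the idea of a hybrid retriever concrete, here’s a minimal sketch of one common way to merge keyword and vector results: reciprocal rank fusion (RRF). The document IDs, rankings, and `k` constant below are illustrative assumptions for the example, not details of how Fusion Search actually merges its result lists.

```python
# Minimal sketch of hybrid retrieval via reciprocal rank fusion (RRF).
# The document set and rankings here are hypothetical.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document IDs into a single ranking."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Documents that rank highly in either list accumulate score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked hits from a keyword index and a vector index.
keyword_hits = ["doc_chemical_storage", "doc_waste_disposal", "doc_ppe"]
vector_hits = ["doc_ppe", "doc_chemical_storage", "doc_air_quality"]

print(rrf_fuse([keyword_hits, vector_hits]))
```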
What is an AI RAG system?
Retrieval-Augmented Generation (RAG) is a cutting-edge approach that combines the strengths of retrieval-based models and generation-based models to enhance the accuracy and relevance of search results. In a RAG system, relevant documents or pieces of information are first retrieved from a large dataset. Then, a generative model, such as a large language model (LLM), uses this retrieved information to generate a more accurate and contextually relevant response.
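In code, the two phases look roughly like this. Everything below is a toy sketch of the general RAG pattern, not Enhesa’s implementation: the word-overlap retriever and the stubbed generator simply stand in for a real vector index and an LLM.

```python
# Toy sketch of the two RAG phases: retrieve, then generate.

def retrieve(query: str, corpus: dict[str, str], top_k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda d: len(q & set(corpus[d].lower().split())),
        reverse=True,
    )
    return [corpus[d] for d in ranked[:top_k]]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: a real system would prompt a model here."""
    return f"Answer to {query!r}, grounded in {len(context)} retrieved passages."

corpus = {
    "reg1": "Chemical storage requires labelled, ventilated containers.",
    "reg2": "Hearing protection is mandatory above 85 dB.",
}
print(generate("chemical storage rules", retrieve("chemical storage rules", corpus)))
```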
Implementation at Enhesa
At Enhesa, the implementation of RAG systems has significantly improved the search capabilities. The process involves several key steps:
1. Retrieval phase
The system first compares the vectorized query against the vectors in its internal vector database — a specialized database that stores, manages, and indexes high-dimensional vector data — and retrieves the most relevant ones. These retrieved chunks form the context used to answer the query. Advanced search algorithms and metadata filtering ensure that only the most pertinent information is selected. At Enhesa, we enhance this retrieval phase by leveraging the expertise of over 90 legal professionals who prepare and curate the data, enabling us to surface their proprietary content alongside the accompanying legal text that best fits a customer’s specific needs.
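As an illustration of this phase, the sketch below filters an in-memory set of chunks by a metadata field, then ranks the survivors by cosine similarity to the query embedding. The store, schema, and `jurisdiction` field are assumptions made for the example; Enhesa’s actual vector database isn’t described in this article.

```python
import numpy as np

# Illustrative in-memory "vector store": each chunk pairs a precomputed
# embedding with metadata. The actual database and schema are not public.
chunks = [
    {"vec": np.array([0.9, 0.1, 0.0]), "jurisdiction": "DE", "text": "Storage rules"},
    {"vec": np.array([0.2, 0.8, 0.1]), "jurisdiction": "US", "text": "OSHA notice"},
    {"vec": np.array([0.7, 0.3, 0.2]), "jurisdiction": "DE", "text": "Labelling duty"},
]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, jurisdiction: str, top_k: int = 2) -> list[str]:
    # 1. Metadata filtering: keep only chunks matching the requested facet.
    candidates = [c for c in chunks if c["jurisdiction"] == jurisdiction]
    # 2. Rank the survivors by cosine similarity to the query embedding.
    candidates.sort(key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return [c["text"] for c in candidates[:top_k]]

print(retrieve(np.array([1.0, 0.0, 0.1]), jurisdiction="DE"))
```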
2. Generation phase
Once the relevant information is retrieved, a generative model processes it to produce a coherent and contextually appropriate response. This model leverages the power of LLMs to understand the context and nuances of the query, delivering more accurate and relevant results. As noted above, this power goes beyond just serving up content: it enables users to create novel comparisons and analyses across sources, functioning almost like a mini consultant embedded within our content sets.
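A typical generation step packs the retrieved chunks into the prompt so the model answers only from curated context. The sketch below shows that pattern; `call_llm` is a placeholder for whatever chat-completion API a real system would use, since the article doesn’t name a specific model.

```python
# Sketch of the generation phase: retrieved chunks are packed into the
# prompt so the model answers from curated context only.

def build_prompt(query: str, context_chunks: list[str]) -> str:
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(context_chunks))
    return (
        "Answer the compliance question using ONLY the sources below, "
        "and cite them by number.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder: plug in your LLM client of choice here.
    raise NotImplementedError

chunks = ["Solvent storage areas must be ventilated.", "Containers must be labelled."]
print(build_prompt("What are the solvent storage requirements?", chunks))
```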
Additional features:
- Metadata filtering: Filtering and sub-filtering play a crucial role in enhancing user experience by narrowing down search results based on specific criteria, making it easier for users to find exactly the information they need. For instance, users can filter results by date, relevance, or specific attributes, and even apply multiple filters simultaneously. This level of granularity not only saves time but also ensures quick access to the most relevant content, leading to a more efficient and satisfying search experience.
- Metadata embedding: By including certain keywords in a chunk of text and vectorizing them together, we ensure that vectors belonging to the same document stay closer together in terms of similarity, allowing more relevant information to be retrieved as context.
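As a rough illustration of metadata embedding, the snippet below prepends document-level metadata to a chunk before it is vectorized, pulling chunks from the same document closer together in embedding space. The field names and tag format are assumptions for the example, not the exact chunk format used in production.

```python
# Sketch of metadata embedding: metadata keywords are prepended to each
# chunk before vectorization. Field names and format are hypothetical.

def enrich_chunk(chunk_text: str, metadata: dict[str, str]) -> str:
    tags = " | ".join(f"{k}: {v}" for k, v in metadata.items())
    return f"{tags}\n{chunk_text}"

meta = {"document": "EU REACH Regulation", "topic": "chemical registration"}
enriched = enrich_chunk("Substances above 1 tonne/year must be registered.", meta)
print(enriched)  # this enriched text, not the raw chunk, is what gets embedded
```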
This system is developed in a Python-based codebase with both a Python and a .NET API combined in a microservice architecture, offering significant benefits in terms of scalability and reproducibility.
The microservice architecture allows different components of the search system to be developed, deployed, and scaled independently. This modular approach also simplifies maintenance, as updates or changes can be made to individual services without affecting the entire system.
Together, these technologies ensure that the search system can handle increasing loads and adapt to evolving requirements with minimal disruption.
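The article doesn’t spell out the service boundaries, but as one plausible shape, retrieval could live in its own small Python service that the .NET API (or any other client) calls over HTTP. The FastAPI sketch below is illustrative only; the endpoint name and payload are assumptions.

```python
# Minimal sketch of retrieval as an independent microservice (FastAPI).
# Endpoint names and payloads are hypothetical, not Enhesa's actual API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="retrieval-service")

class Query(BaseModel):
    text: str
    jurisdiction: str | None = None
    top_k: int = 5

@app.post("/search")
def search(q: Query) -> dict:
    # In a real service this would query the vector database; stubbed here.
    return {"query": q.text, "results": [], "filtered_by": q.jurisdiction}

# Run with: uvicorn service:app --reload  (assuming this file is service.py)
```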
Use cases
The implementation of RAG systems at Enhesa has led to several practical applications and benefits:
Regulatory compliance
By integrating RAG systems, Enhesa can provide more accurate and up-to-date regulatory information to its clients. This helps companies stay compliant with various regulations and avoid potential legal issues.
Internal knowledge management
RAG systems facilitate better knowledge management by making it easier for employees to find and utilize relevant information. This leads to more informed decision-making and increased productivity.
Benefits of advanced search and RAG systems
The adoption of RAG systems at Enhesa has brought several key benefits, chief among them an enhanced user experience. As described above, filtering and sub-filtering let users narrow results by date, relevance, or specific attributes, and apply multiple filters at once. That level of granularity saves time and gets users to the most relevant content quickly, making for a more efficient and satisfying search experience.
Looking to the future of AI-enabled search development
The field of search and retrieval is continuously evolving, and several exciting projects are currently underway to push the boundaries of what’s possible. One such project involves the integration of dense vector search, which leverages advanced embedding techniques to represent documents and queries in a high-dimensional space. This approach allows for more nuanced and accurate matching of user queries with relevant documents, even when the exact keywords aren’t present.
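The snippet below illustrates that property using the open-source sentence-transformers library (a stand-in; the article doesn’t say which embedding model Enhesa uses): the disposal document matches a paraphrased query even though the two share no keywords.

```python
# Sketch of dense vector search: semantically related texts match even
# without shared keywords. The model choice here is illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "rules for getting rid of old solvents"
docs = [
    "Waste chemical disposal must follow hazardous waste regulations.",
    "Annual fire drills are mandatory for all facilities.",
]

q_vec = model.encode(query, convert_to_tensor=True)
d_vecs = model.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(q_vec, d_vecs)[0]

# The disposal document scores highest despite no keyword overlap.
for doc, score in zip(docs, scores):
    print(f"{float(score):.2f}  {doc}")
```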
Consultant and client feedback plays a central role in driving iterative improvements, ensuring these systems evolve in line with user needs. Continuous innovation in search and retrieval is essential to maintaining a competitive advantage in an ever-changing digital landscape.
By prioritizing innovation and user experience, Enhesa remains committed to delivering powerful, intuitive, and future-ready search solutions that meet the ever-growing demands of our users.