The promise and perils of enterprise AI search

Businesses are increasingly adopting AI-powered enterprise search tools. However, business leaders need to balance efficiency gains with the data security risks related to access control over-provisioning.

by Jim Alkove
August 17, 2024

Enterprises are eager to adopt new generative AI (GAI) tools, but most remain wary of opening these tools up broadly to internal users and to external users like customers and consumers, anxious about accidentally exposing sensitive, confidential, or regulated business information and data.

One type of GAI that is seeing widespread adoption is the new breed of AI-powered enterprise search or “copilots” that promise to dramatically accelerate day-to-day workflows by helping employees find the information they need — much, much faster. These copilots are widely viewed as safer from a data privacy and security perspective, because they appear limited to internal, authorized users. 

Yet, these copilots amplify an endemic problem in most enterprises: over-provisioned and unintended access.

Enterprise leaders should be cautious of how over-provisioned access turns GAI-powered copilots’ greatest strength — rapidly locating any and all relevant information — into a huge risk of exposing sensitive information that could damage the business. 

It’s not the AI tools’ fault

This concern, that AI tools will spread sensitive information far outside its intended boundaries, is what’s holding back most enterprise AI use cases right now, according to Professor Stefano Puntoni, co-director of Penn’s AI at Wharton program. “Our research shows that significant enterprise adoption of gen AI is already taking place,” Puntoni told me via email. “Yet IT and business leaders remain highly concerned about the potential disclosure of sensitive company and customer data.”

Businesses feel safer with internal applications, in part because leading enterprise copilots have promised to respect users’ existing permissions to prevent unauthorized access to sensitive data. In other words, if a user doesn’t have permissions to access the data, then a copilot won’t be able to access it on their behalf.
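
To make the mechanics concrete, here is a minimal sketch, in Python, of how a permission-respecting copilot typically filters what it retrieves. Everything in it (the ACL mapping, the document IDs, the retrieve_candidates placeholder) is invented for illustration and is not any vendor’s actual implementation; the point is that the filter can only consult recorded grants, so an over-provisioned grant passes the check like any other.

```python
# A minimal, hypothetical sketch -- not any vendor's actual implementation --
# of how a permission-respecting copilot filters retrieved documents: it checks
# the user's *recorded* grants, so anything the user was over-provisioned to
# access is still fair game.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

# Hypothetical access-control list: user -> set of document IDs they may read.
# "q3-roadmap" stands in for an unintended, over-provisioned grant.
ACL = {
    "alice": {"q3-roadmap", "team-handbook"},
}

def retrieve_candidates(query: str) -> list[Document]:
    # Placeholder for the copilot's semantic search across the whole corpus.
    return [
        Document("q3-roadmap", "Confidential: acquisition targets for Q3 ..."),
        Document("team-handbook", "How to file an expense report ..."),
    ]

def copilot_answer(user: str, query: str) -> list[Document]:
    # The only filter is "does this user have a recorded grant?" -- the copilot
    # has no way to tell an intended grant from an unintended one.
    return [d for d in retrieve_candidates(query) if d.doc_id in ACL.get(user, set())]

print([d.doc_id for d in copilot_answer("alice", "what are our plans for Q3?")])
# ['q3-roadmap', 'team-handbook'] -- the unintended grant sails through.
```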

But that notion of data privacy hinges on an overconfident and misplaced sense of control over which internal users have access to what sensitive data.

The reality is that business priorities and pressures rarely align with strict enforcement of least-privilege principles when it comes to granting access to sensitive information. This is why we see stats like those from Microsoft showing that 95% of permissions are unused, or that 90% of identities use just 5% of their granted permissions.

In fact, when I talk to CISOs, one of their biggest concerns is not only that their users are over-provisioned, but that they (the CISOs) don’t know what sensitive data their users have access to that they probably shouldn’t. This is what we call unintended access. 

This all gets at an ugly truth that works against even the most well-intentioned AI vendor: There’s no way for AI tools to differentiate between legitimate and unintended, over-provisioned access. The core objective of a copilot is to gather all the relevant information it can. And if you give it access — intentionally or otherwise — it will take it.

What is the risk with enterprise AI search?

It’s not hard to see how quickly this snowballs into big risks. A copilot might transcribe a high-level meeting recording stored in a shared folder and then unintentionally serve up sensitive corporate strategies to a host of users who should not be privy to such information.

In fact, that transcription step isn’t even necessary, given GenAI’s ability to parse multimedia content. 

Or, imagine an employee searches for a colleague’s contact information. Given copilots’ eagerness to provide additional context and insights, it’s not a stretch to think the copilot might also offer up their compensation data. The list goes on.

At minimum, scenarios like these will create uncomfortable situations for business leaders and painful nuisances for security leaders. Those situations could easily spiral into more significant damage to the business — losing competitive advantage, for example. And maintaining compliance with regulations like GDPR, HIPAA and CCPA will be virtually impossible with GenAI tools readily exposing gaps and blind spots like this.

The access control over-provisioning problem isn’t going away

Speed and agility are a top priority, if not the top priority, for every successful business today, and this is where the over-provisioning problem takes root.

As new employees are onboarded and/or new apps and systems are added, it’s a race to give users the access they need to get their work done and start seeing business value (from the new employee or from the new application).

Couple that with IT and security teams that are often already stretched thin, and it’s no surprise that the most common approach relies on granting permissions via overly simplistic rules- or roles-based provisioning processes. These shortcuts don’t fully consider the user’s actual business needs (e.g., granting ERP system access to all finance department users).
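
To make the shortcut concrete, here is a hypothetical Python sketch contrasting a department-wide provisioning rule with a least-privilege alternative that maps concrete job functions to the minimum permissions they require. The User class, the permission strings, and the FUNCTION_TO_PERMISSIONS mapping are all invented for the example.

```python
# Hypothetical illustration of the provisioning shortcut described above,
# versus a least-privilege alternative. All names here are invented.

from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    department: str
    job_functions: set = field(default_factory=set)  # tasks this person actually performs

# Shortcut: everyone in finance gets broad ERP access.
def provision_by_role(user: User) -> set:
    if user.department == "finance":
        return {"erp:read", "erp:write", "erp:admin"}
    return set()

# Least-privilege alternative: map concrete job functions to the minimum
# permissions they require, and grant only those.
FUNCTION_TO_PERMISSIONS = {
    "expense_reports": {"erp:read"},
    "close_the_books": {"erp:read", "erp:write"},
}

def provision_by_need(user: User) -> set:
    grants = set()
    for fn in user.job_functions:
        grants |= FUNCTION_TO_PERMISSIONS.get(fn, set())
    return grants

analyst = User("Ana", "finance", {"expense_reports"})
print(provision_by_role(analyst))  # all three ERP permissions -- over-provisioned
print(provision_by_need(analyst))  # {'erp:read'} -- only what the job requires
```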

Moreover, the decentralized way new applications are deployed today — by the business units rather than IT — adds to this problem. Access is often granted outside typical IT processes, making it harder for IT and security teams to see and manage that access.

On top of that, as the SaaS ecosystem and broader IT estate grows larger and more complex, unintended access can also occur due to misconfigured settings between connected applications (e.g., granting access to compensation data via a connected SaaS HR application when only role and organizational hierarchy information was intended).
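
Here is a hypothetical sketch of that kind of misconfiguration: an integration with a connected HR application is meant to share only role and org-hierarchy fields, but the granted scopes are broader than intended, so compensation data flows to the downstream application (and to any copilot indexing it). The scope names and record fields are invented for illustration.

```python
# Hypothetical SaaS-to-SaaS integration scopes -- the field and scope names
# are invented for this example.

INTENDED_SCOPES = {"employee.role", "employee.org_hierarchy"}

# What was actually approved when the integration was wired up:
GRANTED_SCOPES = {"employee.role", "employee.org_hierarchy", "employee.compensation"}

HR_RECORD = {
    "employee.role": "Senior Analyst",
    "employee.org_hierarchy": "Finance > FP&A",
    "employee.compensation": {"base": 145_000, "bonus_target": 0.15},
}

def fields_exposed_downstream(granted_scopes: set) -> dict:
    # The downstream app (and any copilot indexing it) sees every field the
    # granted scopes allow, regardless of what was intended.
    return {k: v for k, v in HR_RECORD.items() if k in granted_scopes}

print("Unintended scopes:", GRANTED_SCOPES - INTENDED_SCOPES)  # {'employee.compensation'}
print(fields_exposed_downstream(GRANTED_SCOPES))
```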

How to fix over-provisioning and unintended access? Enforce least-privilege, build toward zero-trust.

Solving these issues doesn’t require any radical rethinking. It’s just a matter of implementing best practices around least-privilege and zero-trust.

The sunny outlook is that GAI will give organizations a healthy shove toward adopting the technologies needed to (finally) implement zero trust. “While breaches caused by over-provisioned access have been a top risk for organizations for years, the rise of powerful AI applications like enhanced enterprise search should create a rallying cry for organizations to adopt the next-generation of identity security and access management solutions,” Taher Elgamal, Partner at Evolution Equity Partners and former CTO of Security at Salesforce (and the “father of SSL encryption”), recently shared with us. Elgamal also pointed to the White House’s executive order forcing federal agencies to more urgently adopt zero trust. “The adoption of zero trust is no longer optional or a nice to have.”

Yet, the view within many organizations is much less sunny. We hear from CISOs every day that their existing identity and access management (IAM) programs just aren’t built to handle the evolving challenges of our AI age. They burden security teams with tedious, manual approaches to stitching together the visibility they need to see and address over-provisioned and unintended access — manual processes that are too slow to keep up with GenAI tools’ capabilities.

This is why most organizations struggle to enforce least-privilege principles, and why Gartner predicts that — nearly 15 years after it first endorsed the zero-trust framework — just 10% of large organizations will have a “mature and measurable” zero-trust program in place by 2026, up from only 1% today.

How Oleria opens the door to safe, responsible GenAI in the enterprise

Our experience as operators, including firsthand exposure to the visibility and control gaps around identity security, led us to create Oleria. Back when GenAI was really starting to ramp up, we recognized how rampant over-provisioning and outdated, underpowered identity security tools were making identity the biggest source of risk and breaches in the enterprise world.

But the capabilities we built with Oleria’s Trustfusion Platform and Oleria Identity Security uniquely meet the moment we’re in now: Oleria is built to give CISOs and security teams the visibility and control they need to enable speed and agility, while protecting data security and data privacy.

We’re providing a composite view of access permissions across all IAM and SaaS applications in one place — giving you visibility down to the level of the individual resource and fine-grained operation. We’re using that broad and deep visibility to automatically surface unused accounts, flag individual user accounts or permission groups with low levels of active use, and even identify accounts with unnecessary administrative or privileged permissions.
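
For illustration only, and not a description of Oleria’s actual implementation, here is a generic Python sketch of the kind of grant-versus-usage analysis described above: compare each account’s granted permissions against observed activity and flag grants that have gone unused past a dormancy threshold. The data shapes, account names, and the 90-day threshold are all invented assumptions.

```python
# Generic sketch of flagging unused grants from access telemetry.
# Not Oleria's implementation; all data below is invented.

from datetime import datetime, timedelta, timezone

NOW = datetime.now(timezone.utc)
DORMANCY_THRESHOLD = timedelta(days=90)  # assumed cutoff for "unused"

# granted: account -> permissions; last_used: (account, permission) -> last activity
granted = {
    "svc-reporting": {"crm:read", "crm:admin"},
    "jdoe":          {"hr:read", "erp:read"},
}
last_used = {
    ("svc-reporting", "crm:read"): NOW - timedelta(days=3),
    ("jdoe", "erp:read"):          NOW - timedelta(days=200),
}

def flag_unused_grants(granted, last_used):
    findings = []
    for account, perms in granted.items():
        for perm in perms:
            seen = last_used.get((account, perm))
            if seen is None or NOW - seen > DORMANCY_THRESHOLD:
                findings.append((account, perm, seen))
    return findings

for account, perm, seen in flag_unused_grants(granted, last_used):
    print(f"{account}: '{perm}' last used {seen or 'never'} -- candidate for removal")
```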

Oleria’s solution makes it simple and quick to enforce least-privilege principles by removing unused accounts and moving from role-based access tied to group memberships to more targeted, individual-level permissions for the critical, high-utilization users. In other words, Oleria makes it easy to ensure that the people who really need access have it, and that those who don’t (including copilots and other GenAI tools) do not.

The GenAI future is here — whether you’re ready or not

Like the transformational technologies that came before it, there’s no stopping the proliferation of GenAI across the enterprise landscape. Security leaders, along with the rest of the C-suite, have been rightly cautious in considering the insidious data privacy and data security risks of making these tools a connection point between internal systems and data and external audiences. But they need to be equally careful in deploying GenAI internally with tools like copilots and enterprise AI search — in particular, recognizing how GenAI will unwittingly exploit existing issues around over-provisioning and unintended access.

The upside here is that these over-provisioning problems already present substantial, largely hidden risks within the enterprise IT estate. The momentum of urgency and inevitability around GenAI will hopefully be the push businesses need to step up to a more modern approach to identity security and access control — one that delivers the comprehensive, fine-grained visibility and automated, intelligent insights they need to make zero-trust an achievable goal.

