Future Tech

Top companies ground Microsoft Copilot over data governance concerns

Tan KW
Publish date: Thu, 22 Aug 2024, 12:39 AM

Security and corporate governance concerns are weighing heavily on large enterprises as they try to work Microsoft Copilots into their organizations amid a complex web of existing tech products and access rights.

So says Jack Berkowitz, chief data officer of Securiti, who spoke with The Register about how businesses have adjusted to Copilots: largely by booting them from the corporate flight deck.

Microsoft positions its Copilot tool as a way to make users more creative and productive by capturing all the human labor latent in the data used to train its AI models and reselling it.

But the technology reached the market far ahead of safety and security. Generative AI services only started appearing two years ago, and there's still work to be done.

More generally, Berkowitz has started to hear how generative AI projects have been going for corporate clients.

"You can find a few interesting use cases, but broadly, it seems like there's a lot of caution around this," he told us. "There are some systems that have gone into production that have really great ROI capabilities."

Initiatives that have added generative AI to customer service apps have been generating returns, he said. Yet where Copilots are concerned, security and oversight concerns are commonplace, according to Berkowitz.

"Particularly around bigger companies that have complex permissions around their SharePoint or their Office 365 or things like that, where the Copilots are basically aggressively summarizing information that maybe people technically have access to but shouldn't have access to," he explained.

Berkowitz said salary information, for example, might be picked up by a Copilot service.

"Now, maybe if you set up a totally clean Microsoft environment from day one, that would be alleviated," he told us. "But nobody has that. People have implemented these systems over time, particularly really big companies. And you get these conflicting authorizations or conflicting access to data."
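The failure mode Berkowitz describes can be made concrete. In this minimal, hypothetical sketch, grants accumulated across systems over the years combine into broader effective access than anyone intended, which is exactly the set a summarizing assistant would see. All names and data structures here are invented for illustration:

```python
# Resources nobody meant to expose broadly (hypothetical label).
SENSITIVE = {"salary-bands"}

# Per-system grants collected over time: user -> resources readable.
GRANTS = {
    "sharepoint": {"alice": {"team-wiki", "salary-bands"}},
    "office365":  {"alice": {"team-wiki"}},
}

def effective_access(user: str) -> set:
    """Union of everything the user can technically read, across systems.

    An assistant that summarizes on the user's behalf draws on this whole
    set, regardless of which system granted each item, or why.
    """
    access = set()
    for system_grants in GRANTS.values():
        access |= system_grants.get(user, set())
    return access

def audit(user: str) -> set:
    """Flag sensitive resources reachable through accumulated grants."""
    return effective_access(user) & SENSITIVE
```

Here `audit("alice")` would flag `salary-bands`: a grant made long ago in one system that now surfaces through any tool operating with her combined permissions.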

And it's not just human resources data that may get surfaced inappropriately through interaction with a Copilot service, Berkowitz added.

"A few weeks ago, we hosted a little dinner in New York, and we just asked this question of 20-plus CDOs in New York City of the biggest companies, 'Hey, is this an issue?' And the resounding response was, 'Yeah, it's a real mess.'"

Asked how many had grounded a Copilot implementation, Berkowitz said it was about half of them. Companies, he said, were turning off Copilot software or severely restricting its use.

"Now, it's not an unsolvable problem," he added. "But you've got to have clean data and you've got to have clean security in order to get these systems to really work the way you anticipate. It's more than just flipping the switch."

While AI software also has specific security concerns, Berkowitz said the issues he was hearing about had more to do with internal employee access to information that shouldn't be available to them.

Asked whether the situation is similar to the IT security challenge 15 years ago when Google introduced its Search Appliance to index corporate documents and make them available to employees, Berkowitz said: "It's exactly that."

Companies like Fast and Attivio, where Berkowitz once worked, were among those that solved the enterprise search security problem by tying file authorization rights to search results.
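That technique, commonly called security trimming, filters each search hit against the caller's permissions before it is returned. A minimal sketch, with invented document and ACL structures standing in for a real index and permission store:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    content: str
    # ACL: principals (users or groups) allowed to read this document.
    allowed: set = field(default_factory=set)

class TrimmedSearchIndex:
    """Toy index that ties authorization rights to search results."""

    def __init__(self):
        self.docs = []

    def add(self, doc: Document):
        self.docs.append(doc)

    def search(self, query: str, user: str, groups=frozenset()) -> list:
        principals = {user} | set(groups)
        results = []
        for doc in self.docs:
            # Match first, then trim: a hit is only returned if the caller
            # holds at least one principal on the document's ACL.
            if query.lower() in doc.content.lower() and principals & doc.allowed:
                results.append(doc.doc_id)
        return results
```

With such an index, a query for "salary" from an ordinary staff member returns only the documents their groups can read, while the same query from someone in an HR group also surfaces the restricted material. The Copilot complaint above is the inverse case: permissions are so tangled that trimming against them faithfully still exposes too much.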

So how can companies make Copilots and related AI software work?

"The biggest thing is observability and not from a data quality viewpoint, but from a realization viewpoint," said Berkowitz. "So that you're sure that your governance is there, that you know where your data assets are, you know what people are involved in your system. Once you can get that observability in place, you can get the right controls in place."

Assistive AI software is all the rage at the moment, with Microsoft and its peers investing substantial sums in developing generative AI models, something they just won't shut up about. They can't, because they need to convince customers to buy opinionated, disclaimer-laden software as a service.

Yet it seems that in the rush for revenue, and in the effort to convince businesses of the productivity benefits, fundamental aspects of how companies are directed and controlled were overlooked. ®


https://www.theregister.com//2024/08/21/microsoft_ai_copilots/
