Future Tech

Microsoft Azure OpenAI lets enterprises feed corporate secrets to ChatGPT

Tan KW
Publish date: Thu, 22 Jun 2023, 02:39 PM

Microsoft wants to make it easier for enterprises to feed their proprietary data, along with user queries, into OpenAI's GPT-4 or ChatGPT in Azure and see the results.

This functionality, available via the Azure OpenAI Service, eliminates the need for training or fine-tuning your own generative AI models, said Andy Beatman, senior product marketing manager for Azure AI, this week, noting this was a "highly requested customer capability."

We can only assume he means highly requested by customers - not Microsoft executives continuing to spin this AI hype.

We're told that the system basically works like this: a user fires off a query to Azure, Microsoft's cloud figures out what internal corporate data is needed to complete that request, the question and retrieved data are combined into a new query that is passed to the OpenAI model of choice, the model predicts an answer, and that result is sent back to the user.

This is allegedly useful.

"Azure OpenAI on your data, together with Azure Cognitive Search, determines what data to retrieve from the designated data source based on the user input and provided conversation history," Microsoft explained. "This data is then augmented and resubmitted as a prompt to the OpenAI model, with retrieved information being appended to the original prompt."
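The retrieve-then-augment loop Microsoft describes can be sketched in a few lines. This is a toy stand-in, not Microsoft's implementation: real deployments use Azure Cognitive Search over an embedding index, whereas this sketch scores stored chunks by simple word overlap just to show the shape of the flow.

```python
# Toy sketch of the "on your data" flow: pick the corporate data chunks most
# relevant to a user query, then append them to the prompt sent to the model.
# Word-overlap scoring stands in for Azure Cognitive Search retrieval.

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank stored chunks by how many query words they share."""
    q_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Combine the user question and retrieved data into one augmented prompt."""
    context = "\n".join(f"- {c}" for c in chunks)
    return (
        "Answer using only the sources below.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "Q2 revenue for the widget division was $4.2M.",
    "The cafeteria is closed on Fridays.",
    "Widget division headcount grew 12% in Q2.",
]
prompt = build_prompt(
    "What was widget division revenue in Q2?",
    retrieve("widget revenue Q2", corpus),
)
print(prompt)
```

The augmented prompt, not the bare question, is what reaches the GPT model, which is why the answer can be specific to corporate data the model was never trained on.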

Turning the focus to proprietary data

Microsoft has sunk more than $10 billion into OpenAI, and is rapidly integrating the upstart's AI models and tools into products and services throughout its broad portfolio.

There is no doubt that there is a push for ways to craft tailored models - ones that go beyond their base training and are customized for individual applications and organizations. That way when a query comes in, a specific answer can be generated rather than a generic one.

This approach has been talked about for a number of years. In recent months, as the pace of generative AI innovation accelerated, vendors started answering the call. Nvidia late last year introduced NeMo - a framework within its larger AI Enterprise platform, which helps organizations augment their LLMs with proprietary data.

"When we work with enterprise companies, many of them are interested in creating models for their own purposes with their own data," Manuvir Das, Nvidia's vice president of enterprise computing, told journalists during the lead-up to the GPU giant's GTC 2023 show in March.

Two months later, Nvidia teamed up with ServiceNow to enable companies using ServiceNow's cloud platform and Nvidia AI tools to train AI models on their own information.

Redmond's turn

Now comes Microsoft. "With the advanced conversational AI capabilities of ChatGPT and GPT-4, you can streamline communication, enhance customer service, and boost productivity throughout your organization," wrote Beatman. "These models not only leverage their pre-trained knowledge but also access specific data sources, ensuring that responses are based on the latest available information."

Through these latest capabilities in Azure OpenAI Service, enterprises can simplify such processes as document intake, indexing, software development, and HR procedures to somehow enhance self-service data requests, customer service tasks, revenue creation, and interactions with customers and other businesses.

The service can connect to a customer's corporate data from any source and location - whether it's stored locally, in the cloud, or at the edge - and includes tools for processing and organizing the data to pull out insights that can be used in AI models. It also can integrate with an enterprise's existing systems through an API and software-development kit (SDK) from Microsoft.

In addition, it includes a sample app to speed up implementation of the service.

Azure OpenAI Service on your data can connect to such Microsoft sources as an Azure Cognitive Search index for integrating with OpenAI models, an Azure Blob Storage container, and local files uploaded in the Azure AI portal, with the data ingested into an Azure Cognitive Search index, we're told.

Organizations will need an approved Azure OpenAI Service application and either GPT-3.5-Turbo or GPT-4 models deployed. They can use Azure AI Studio to connect the data source to the service.
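Under the hood, wiring a search index into a chat request amounts to attaching a data-source descriptor to the usual chat-completions payload. The sketch below only builds that payload; the field names (`dataSources`, `AzureCognitiveSearch`, `indexName`) follow Microsoft's public documentation of the preview API at the time, and the endpoint and key values are placeholders - verify all of it against the current API reference before relying on it.

```python
# Hypothetical request body for Azure OpenAI "on your data": a standard chat
# message plus a dataSources entry pointing at a Cognitive Search index.
# Field names are assumptions drawn from Microsoft's preview docs.
import json

def build_request(question: str, search_endpoint: str,
                  search_key: str, index_name: str) -> dict:
    return {
        "messages": [{"role": "user", "content": question}],
        "dataSources": [
            {
                "type": "AzureCognitiveSearch",
                "parameters": {
                    "endpoint": search_endpoint,
                    "key": search_key,
                    "indexName": index_name,
                },
            }
        ],
    }

body = build_request(
    "Summarise our Q2 widget sales.",
    "https://example.search.windows.net",  # placeholder endpoint
    "<search-admin-key>",                  # placeholder key
    "contoso-docs",                        # placeholder index name
)
print(json.dumps(body, indent=2))
```

The point is that the model deployment stays generic; only the request carries the pointer to the customer's index, which is what lets the same GPT-3.5-Turbo or GPT-4 deployment serve many data sources.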

"Once your data source is connected, you can start asking questions and conversing with the OpenAI models through Azure AI Studio," Beatman wrote. "This enables you to gain valuable insights and make informed business decisions."

A few things to keep in mind

There are some caveats. Users should avoid long questions, and instead break them into multiple shorter ones. There is also a 1,500-token cap per model response - a budget that covers the user's question, any system messages, retrieved search documents (known as "chunks"), internal prompts, and the response itself.
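Because the question, system messages, chunks, and answer all share one budget, a rough pre-flight check can catch over-long requests before they hit the limit. This sketch uses a four-characters-per-token rule of thumb, which is only a heuristic, not an exact tokenizer:

```python
# Back-of-envelope check that question + system text + retrieved chunks leave
# room for the answer inside the 1,500-token ceiling mentioned above.
TOKEN_LIMIT = 1500
CHARS_PER_TOKEN = 4  # rough heuristic, not a real tokenizer

def estimate_tokens(text: str) -> int:
    """Ceiling of len/4 as a crude token estimate."""
    return -(-len(text) // CHARS_PER_TOKEN)

def fits_budget(question: str, system_msg: str, chunks: list[str],
                reserved_for_answer: int = 500) -> bool:
    used = estimate_tokens(question) + estimate_tokens(system_msg)
    used += sum(estimate_tokens(c) for c in chunks)
    return used + reserved_for_answer <= TOKEN_LIMIT

ok = fits_budget(
    "What was Q2 revenue?",
    "Answer from the sources only.",
    ["chunk " * 50, "chunk " * 50],
)
print(ok)
```

If the check fails, the practical fixes are the ones Microsoft suggests: shorter questions, or fewer and smaller retrieved chunks.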

They should also limit the responses to their data, which "encourages the model to respond using your data only, and is selected by default," Microsoft wrote.

The service also may trigger a key concern: corporate data leaking into the public domain by using it with the AI models.

ChatGPT, which was introduced to Azure OpenAI Service in May, by default keeps records of all conversations, including queries and AI responses. Remember that when feeding it sensitive internal information - crooks love vacuuming up ChatGPT account credentials, and with them access to any chat histories.

Dmitry Shestakov, head of threat intelligence at infosec outfit Group-IB, warned that "many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code." ®


https://www.theregister.com//2023/06/22/microsoft_azure_ai_data/
