Future Tech

HPE bakes LLMs into Aruba as AI inches closer to network takeover

Tan KW
Publish date: Fri, 29 Mar 2024, 07:27 AM

Two years ago, before ChatGPT turned the tech industry on its head, Juniper CEO Rami Rahim boasted that by 2027 artificial intelligence would completely automate the network.

Juniper is due to become part of Hewlett Packard Enterprise's IT empire late this year or early next, and the dream of self-configuring networks is still very much alive. On Tuesday, Aruba, HPE's wired and wireless LAN division, revealed it had begun baking self-contained large language models into its control plane.

At least for now, network admins needn't worry about being automated out of a job. These LLMs - apparently developed internally by HPE on a dataset of support docs, three million customer queries, and other data collected over the years - aren't making any decisions on their own just yet.

Instead, the LLMs power Aruba Network Central's AI search function. In other words, it's basically a chatbot baked into the search field at the top of the web interface. Type in a question and the LLM spits back a contextualized response - or so it's hoped.

Aruba, like many in the wired and wireless LAN arena, has been integrating machine learning-based analytics for years, for tasks like traffic analysis and anomaly detection.

The inclusion of LLMs is just the latest evolution of the platform's AI capabilities, designed to make search more accurate at understanding networking jargon and technical questions, according to HPE.

It also supports document summarization - presumably by using a technology like retrieval-augmented generation (RAG) to search technical docs, of which HPE says it has more than 20,000, and outline their contents. When the feature goes live in April, HPE says users will be able to ask "how to" questions and the model will generate a guide and link back to supporting documents.
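HPE hasn't published how its summarization works; the article only surmises a RAG-like approach. For readers unfamiliar with the pattern, here's a deliberately toy sketch of retrieve-then-generate: the corpus, the keyword-overlap scorer, and the prompt format are all invented stand-ins (a real system would use vector embeddings and an actual LLM, not a word-overlap count).

```python
# Toy retrieve-then-generate (RAG) sketch. All names and data here
# are illustrative, not HPE's implementation.

def score(query: str, doc: str) -> int:
    """Count query terms that also appear in the document (toy retriever)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Return the titles of the k best-matching documents."""
    ranked = sorted(corpus, key=lambda t: score(query, corpus[t]), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: dict[str, str], k: int = 2) -> str:
    """Assemble the context an LLM would summarize, keeping source titles
    so the answer can link back to the supporting documents."""
    hits = retrieve(query, corpus, k)
    context = "\n".join(f"[{t}] {corpus[t]}" for t in hits)
    return f"Answer using only these excerpts:\n{context}\n\nQ: {query}"

# Hypothetical two-document "knowledge base"
corpus = {
    "vlan-howto": "how to configure a vlan on an access switch port",
    "ap-reset": "steps to factory reset a wireless access point",
}
prompt = build_prompt("how do I configure a VLAN?", corpus, k=1)
```

The key property, however it's implemented at scale, is that the model answers from retrieved excerpts rather than from memory alone, which is what makes linking back to the 20,000-plus source documents possible.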

We can imagine this being a real time saver - so long as the model doesn't accidentally leave out some critical steps or fill in blanks with erroneous information.

HPE insists the models are sandboxed and include a system dedicated to identifying and obfuscating personal and corporate identifying information in queries, to prevent it from ending up in future training datasets.
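HPE hasn't said how that obfuscation works. A common baseline for this kind of scrubbing is pattern-based masking before queries are logged; the sketch below is an assumed, minimal version of that idea (the patterns and placeholder tags are illustrative, and real systems typically combine regexes with trained entity recognizers).

```python
import re

# Illustrative query scrubber: masks obvious identifiers before a query
# is stored or reused. Patterns and tags are assumptions for this sketch.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP>"),
    (re.compile(r"\b(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}\b"), "<MAC>"),
]

def scrub(query: str) -> str:
    """Replace email, IPv4, and MAC addresses with placeholder tags."""
    for pattern, tag in PATTERNS:
        query = pattern.sub(tag, query)
    return query

print(scrub("why can't jo@corp.example reach 10.0.0.5 from aa:bb:cc:dd:ee:ff?"))
# prints: why can't <EMAIL> reach <IP> from <MAC>?
```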

If the idea of a network-aware chatbot rings any bells, that's because Juniper's Mist team has been toying with this concept since 2019. Its Marvis "virtual network assistant" used a combination of natural language processing, understanding, and generation models that allowed users to query their network telemetry, identify anomalous behavior, and get suggestions on remediation.

Since Marvis's debut, the platform has been expanded. It includes a network digital twin to help identify potential problems before new configs are rolled out, and support for Juniper's datacenter networks.

All of that intellectual property is expected to make its way into HPE's hands. When the IT giant's $14 billion acquisition of Juniper closes - either later this year or early next - Rahim is slated to take the helm of the combined networking business.

The Register Comment

While HPE may not be ready to hand over network configuration entirely to LLMs and other AI models just yet, it's obvious which direction this is headed.

LLMs, like those powering ChatGPT, are already more than capable of generating configuration scripts - though in our experience syntax errors and other weirdness are not uncommon. Whether network admins are ready to risk their careers blindly applying such scripts is another matter.
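One obvious middle ground between blind trust and rejecting the scripts outright is mechanical validation before anything touches a device. The sketch below shows that guardrail idea in miniature: the command patterns are invented for illustration (real validation would use the vendor's own parser or a dry-run against a lab device or digital twin).

```python
import re

# Toy guardrail: refuse to apply a generated config unless every line
# matches a known-good command pattern. The command set is hypothetical.
ALLOWED = [
    re.compile(r"^interface \S+$"),
    re.compile(r"^ switchport access vlan \d{1,4}$"),
    re.compile(r"^ description .+$"),
]

def validate(config: str) -> list[str]:
    """Return lines matching no allowed pattern; an empty list means OK."""
    return [
        line
        for line in config.splitlines()
        if line and not any(p.match(line) for p in ALLOWED)
    ]

# An LLM-generated snippet with a typo'd command on the last line
generated = "interface ge-0/0/1\n switchport access vlan 42\n shutdwn"
print(validate(generated))  # prints: [' shutdwn']
```

A failed check would route the script back to a human (or back to the model) rather than onto production gear - exactly the sort of safety net that syntax errors and "other weirdness" demand.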

We suspect AI's takeover of the network will be a slow and steady one. As the models improve, network chatbot queries for how to do something may be met with an explanation and a subsequent offer to implement those changes for you. In the case of Juniper's tech, that configuration could first be applied to a digital twin of the network - to ensure the AI doesn't break anything.

As time goes on, and users grow more comfortable with AI handling the nitty-gritty, vendors are likely to allow greater degrees of autonomy over the network. As a rule, if there's a way to do something faster with less effort, folks are likely to do it - so long as it doesn't mean risking their jobs, of course. ®


https://www.theregister.com//2024/03/28/hpe_ai_network/
