Future Tech

On-prem AI has arrived – the solution to cloudy problems no one really has

Tan KW
Publish date: Tue, 25 Jun 2024, 09:50 PM

HPE Discover  Walking the floor at HPE's Discover show in Las Vegas last week, this vulture was left with the distinct impression that HPE and its partners believe the age of turnkey, on-prem, enterprise-level AI has arrived.

Hyperscalers and high-performance computing shops are therefore no longer the only outfits that can access the fastest and most powerful AI hardware. The rest of us can now get the latest Nvidia gear in an HPE-designed AI rack that will fit snugly into a modest datacenter or colo space, and address the latency and security issues that come with cloudy AI.

Best of all, none of us will need to become AI infrastructure experts. HPE says its Nvidia Private Cloud AI will be up and running after "just three clicks."

That's it then - AI is solved.

Or maybe not, according to Steve Brazier, CEO of analyst firm Canalys, who feels this rosy vision is a little too optimistic.

"HPE deserves credit for being so quick to develop its approach to AI," Brazier told The Register in an interview after the conference last week. He praised the Silicon Valley veteran's speedy introduction of AI offerings, in contrast to its slow cloud adoption a decade prior.

HPE's AI relevance isn't guaranteed

Brazier points out, however, that the business computing world has largely been cloud-first for a while, and most folks haven't complained that much.

"In most use cases the latency argument hasn't won," Brazier said, casting doubt on one of HPE's primary arguments for on-prem AI kit. "There might be some specific use cases where every millisecond matters, but in the majority of cases that doesn't apply."

HPE GM and VP of AI solutions Joey Zwicker counters that low latency is but one of the selling points of its hybrid cloud AI offerings.

"Industries with large on-prem configurations today are worried about what AI could bring in terms of risks," Zwicker told us last week at Discover. "No one wants to be in the news for an AI screwup."

Zwicker said HPE sees on-prem AI offerings as a way to "give enterprises the safety and security of hybrid," and to deal with the high cost of running AI workloads in a public cloud environment.

"The cloud is great for getting started [with AI], but very expensive in production," Zwicker explained. What makes 2024 such a big year for AI, Zwicker told us, is that HPE has finally crossed a threshold which allows broader adoption by the "early majority," and not just early adopters.

Rainmaking isn't easy

Brazier believes HPE and its partners will struggle to pull AI customers out of the cloud - especially the smaller ones who might not have that much data.

"Convincing customers not to make the easy big three cloud choice will be a hard win," Brazier observed. He noted that large language models and other AI workloads can be deployed to a public cloud while their owners maintain full control - so it's hard to find an urgent need for repatriation.

"It'll be up to sales and marketing teams to win that argument of 'the more critical, the more you want to keep it close'," Brazier said, adding that at the end of the day the question may not come down to economics or customer preference - but just an attitude toward risk.

Location, schmocation - Nvidia still wins (for now)

It's also worth noting that HPE isn't entirely unique in its offerings. Brazier pointed out that Dell said lots of similar stuff at Dell World last month, and enjoyed a quarter every bit as good as HPE's most recent AI-powered earnings. Nutanix and VMware have also developed turnkey AI stacks.

What's the common denominator between Dell and HPE, aside from a focus on server hardware, you ask? Nvidia CEO Jensen Huang, who attended both vendors' conferences.

Nvidia, Brazier noted, is the big winner in all of this - regardless of where customers decide to do their AI processing. The GPU giant's chips are present in hyperscale datacenters running AI workloads, in HPE's new AI kit, and in Dell's AI hardware too.

Brazier believes that Huang's presence at so many recent events could even be an outreach effort to soothe customers - like Dell and HPE - concerned about chip availability.

"The suspicion is that Microsoft, Meta and Google have bought [all the newest Blackwell chips], and in large numbers, leaving those buying fewer down the list of priorities," Brazier opined. And that's bad news for the likes of Dell and HPE because sales of servers for applications other than AI are not growing fast - if at all.

Brazier also doesn't seem convinced it's a good idea for both server giants to put all their eggs in the Nvidia basket - especially given the accelerator champ's sudden rise from gaming mainstay to the planet's AI powerhouse.

The analyst feels challenges to Nvidia's AI dominance are likely. He noted in particular that the biz is in a very similar position to Intel circa 2005 - when AMD sued the chipmaker and kicked off years of regulatory action against it.

"Nvidia is in danger of attracting regulatory attention," Brazier told us. "And regulators control who wins and who loses in this game today."

"Risk of attracting" may even be a moot point - the US Department of Justice is reportedly preparing to take the lead on an antitrust investigation of Nvidia, along with Microsoft and OpenAI, to address the trio's current dominance of AI technology.

And let's not forget competitors. Longtime grudgers Intel and AMD have even reportedly put their beef aside to focus on fighting Nvidia's AI dominance.

So never mind the reality of HPE's bid to retain AI relevance. Its entire bet rests on Nvidia's ability to stay on top - even as anyone capable of spelling "GPU" tries to grab some of the booming AI market. ®


https://www.theregister.com/2024/06/25/nvidia_hpe_ai/
