
The future of AI/ML depends on the reality of today – and it's not pretty

Publish date: Tue, 27 Aug 2024, 05:14 PM

Opinion: Companies love to use familiar words in unorthodox ways. "We value your privacy" is really the digital equivalent of a mugger admiring your phone. And "partnering"? Usually, it means "The one with more money is bribing the one with more cred."

There is a more accurate tech use of "partner," as in the sort that comes with a toxic relationship. Windows is that partner as it keeps doing things even when told to stop. It promises to change but doesn't. It constantly angles to take control. You want evidence? Recall is coming back.

Readers may remember Recall as a big part of Microsoft's AI/ML Windows 11 strategy. It creates a searchable timeline of your desktop activity by constantly taking snapshots of work in progress and feeding them to an analytics engine. How this universal auto-snoop was compatible with corporate privacy and data protection policies, Microsoft couldn't say. Because it wasn't. After copious helpings of the whoop-ass nope stick, Recall was, er, recalled for unspecified fixes.

Now it's on the way back, and the fixes remain unspecified. Microsoft really wants us to have it, despite nobody asking for it. If we did want it, there are ways of doing it without all the centralized AI/ML nonsense. It's still not a great idea to create a huge database of work done across multiple apps and services, even if kept locally. A tempting target indeed. It doesn't make much sense - and in that, Recall is a microcosm of how the misapplication of AI/ML may risk a new AI winter.
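To make the "tempting target" point concrete, here is a purely hypothetical sketch - not Recall's actual schema, storage path, or API, just an assumed SQLite layout invented for illustration - of what a local cross-app activity index amounts to, and how little effort it takes for anything else running as the same user to walk off with all of it.

```python
# Hypothetical illustration only: NOT Recall's real schema or storage location.
# A toy local "activity timeline" store, to show why any cross-app index of
# what you were doing is a tempting target: anything running as the same user
# can read the lot with a couple of lines of SQL.

import sqlite3
import datetime

DB_PATH = "activity_timeline.db"  # assumed filename, purely for the sketch


def build_toy_index(path: str) -> None:
    """Create a toy index of app snapshots with extracted text, as a
    timeline-style AI/ML feature might keep locally."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS snapshots (
               taken_at TEXT,   -- when the snapshot was captured
               app      TEXT,   -- foreground application
               text     TEXT    -- text recovered from the screen
           )"""
    )
    now = datetime.datetime.now().isoformat(timespec="seconds")
    conn.executemany(
        "INSERT INTO snapshots VALUES (?, ?, ?)",
        [
            (now, "Mail", "Re: offer letter - salary details attached"),
            (now, "Browser", "online banking - account ending 4821"),
            (now, "Chat", "here's the VPN password: hunter2"),
        ],
    )
    conn.commit()
    conn.close()


def dump_everything(path: str) -> list[tuple]:
    """What any malware (or curious co-worker) running as the same user
    has to do to walk off with the whole timeline."""
    conn = sqlite3.connect(path)
    rows = conn.execute(
        "SELECT taken_at, app, text FROM snapshots ORDER BY taken_at"
    ).fetchall()
    conn.close()
    return rows


if __name__ == "__main__":
    build_toy_index(DB_PATH)
    for row in dump_everything(DB_PATH):
        print(row)
```

The point isn't the toy code; it's that "kept locally" only moves the problem from someone else's servers to whatever happens to be running on your machine.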

AI winters, like their seasonal counterparts, come around regularly. The mechanism is as clear as climate change: Some technology is declared to be AI in egg form, just needing the warm fluffy hen's bottom of massive investment to hatch as a miraculous giant robo-god. The egg is never what it's cracked up to be, and the resultant stench makes AI deeply unfashionable for a decade or two until people forget.

How are things looking this time? There are some good questions to ask: Is it useful? Is it worth it, and can you build a sustainable industry on it? Targeted machine learning using the immense capabilities of modern hardware has done good things in medicine, science, and engineering - all fields that foster niche techniques of incredible ingenuity and capability, but ones that aren't generally applicable. Pull out from the vertical to the horizontal - the things we all use, where the biggest rewards lie - and it's a very different picture.

It's not at all clear that consumers care much for the AI/ML bait being dangled. Google is making big bets here, with the Pixel 9 launch pitched more around the phone as a platform for Gemini and AI/ML apps than around the usual flagship features. Reviewers have yet to find a single aspect that justifies this change in emphasis. They're like the hundreds of online AI/ML services that are really clever and really forgettable. It's not just that there's no business model here; it's that nobody's going to use them.

Which may be just as well: Google the question "How much has Google invested in AI?" and that same AI, now baked into the search engine, reports that "In April 2024, Google CEO Demis Hassabis said that Google would spend more than $100 billion." Direct cut and paste, dear reader. This will come as news to Google's actual CEO, Sundar Pichai. Google's flagship AI, built for Google's flagship product, does not know who Google's CEO is - and the company has arranged for this to be the first line of the first result shown on screen.

This is not good. This is very far from good.

Perhaps actually spending those billions will help? Perhaps not. Microsoft is already spending close to $19 billion a quarter on AI/ML infrastructure, but recently had to officially remind people that its AI wasn't entirely trustworthy. ChatGPT itself, the standard-bearer for general-purpose AI, looks like it has saturated its market already.

Even Microsoft's umbrella Windows AI/ML tool, Copilot, is getting cold glares in the company's most reliable market, enterprise computing. CIO after CIO is saying "not today" because the claimed benefits are nowhere near enough to balance the danger of damaged data governance.

With no revenue model, the initial buzz dying, and an atmosphere somewhere between lukewarm and hostile, Recall's return exemplifies the only path forward through the blizzard - the bet that AI/ML must become a huge general market because it's too clever to fail, and that the huge investments needed will lock out everyone else when the miracle comes to pass.

Technology doesn't work like that. VR/AR is another example - again - because outside of niches it's a bad experience that isn't worth the hassle. Driverless cars are stalled because the hard bits are far harder than money can fix, and it's no good getting 80 percent of the way there when the remaining 20 percent can kill you. AI/ML has no clear path to sufficient reliability, no business model that makes sense, and a promiscuous, gargantuan appetite for data that cannot be safely supplied.

This isn't an AI bubble. Bubbles happen when lots of people literally buy into an unsustainable idea. Not many people are buying into broad AI/ML. A few people are spending a lot of money. If you don't flip your smartphone to an AI/ML platform, what else are you going to do? If you don't flip your productivity platform to an AI/ML platform, what else are you going to do?

If the economic impact of broad AI/ML isn't a hallucination, it's going to have to connect to reality soon. Winter is coming, and for once in this business, the word means what it says. ®

 

https://www.theregister.com//2024/08/27/opinion_ai_ml/
