
Are Apple’s upcoming AR glasses already obsolete?


The future of augmented reality (AR) glasses is clear. At least it used to be. Then OpenAI happened.

Specifically, AR development has followed the Microsoft HoloLens/Magic Leap model, in which billions are spent on the R&D goal of anchoring high-definition 3D digital objects to physical spaces. The Holy Grail is that, say, a realistic monkey avatar can not only stand on a real table but also hide behind it, or that a topographical interactive map can sit on the floor.

To achieve these feats of visualization, companies like Microsoft and Magic Leap require massive processing power, massive hardware that can’t be worn while walking around, and massive price tags.

Futurists clamor for something smaller. Many (including me) have long believed that Apple would be the first company to mainstream a compelling product for all-day, everyday AR glasses that look almost like ordinary prescription glasses, based on hundreds of patents, comments by Apple executives including CEO Tim Cook, and the company’s sterling record of dominating mobile electronics markets.

Apple’s everywhere, everyday AR glasses are years away — four years at least.

Meanwhile, an interim product likely to be called Reality Pro should be introduced by Cook at this year’s Worldwide Developers Conference (WWDC) on June 5, with availability coming much later. That product is expected to be high-end virtual reality (VR) gear that primarily functions as AR: instead of superimposing virtual objects and data on a natural view of the real world, it will superimpose them on a live video stream of it. At $2,000 to $3,000 a set, Reality Pro will not be a mainstream product for the masses.

How AI changes everything

While tech observers were waiting for compelling AR (and waiting, and waiting), an AI revolution happened. San Francisco startup OpenAI launched public access to its DALL-E AI text-to-image generator, as well as its ChatGPT chatbot based on the GPT architecture. The services offered APIs so developers could build apps atop the AI — and a thousand apps bloomed.
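
Those APIs are what made the app explosion possible: a few lines of code get a developer a conversational answer to build a product around. As a rough illustration (not any particular vendor’s app), here is a minimal sketch using the openai Python package roughly as it looked in early 2023; the model name and prompt are placeholders, and the API key is assumed to live in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch of building on OpenAI's chat API circa early 2023,
# using the openai Python package (~v0.27). The model name and prompt are
# illustrative placeholders, not any specific product's implementation.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In two sentences, why are AR glasses still niche?"},
    ],
)

# The assistant's reply is the single answer an app would surface to its user.
print(response.choices[0].message["content"])
```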

It’s hard to believe that ChatGPT launched only five months ago, in late November 2022.

Other AI platforms emerged in the wake of OpenAI, with more coming. And Google I/O next week should see that company unveil dozens of new AI products. There are now so many apps that more than 100 directories have emerged to link to the tools — so many directories that there’s even a directory to the directories.

The AI wave of 2023 has already had a massive impact on culture, leading Yuval Noah Harari, the historian, philosopher, and author of “Sapiens,” to proclaim that “AI has hacked the operating system of human civilization.”

If that’s true, it’s because AI has reset our expectations about how everything should work. And that is especially true of what we expect from AR.

The shift to AR implies a fraught change in how we find information, from search engines to the “One True Answer” (a concept popularized by search engine expert Danny Sullivan). With a search engine, we type in our queries and get a very long list of links to possible answers. With AR, we’ll just want the answer, not a thousand links to consider.

The new AI services in general, and OpenAI’s GPT-based services in particular, have already changed our expectations for how the “One True Answer” concept should work in practice. Specifically, we want the answer to be detailed, responsive to our prompts, and interactive, like ChatGPT, rather than one static, final answer like Google Search’s “Featured Snippets” or “Knowledge Panels.”

Now we want AI more than glasses

A startup called Humane, founded by ex-Apple employees Imran Chaudhri and Bethany Bongiorno (who are married), recently impressed AR fans with a smart new vision for how AR could work. During a TED Talk, Chaudhri showed off a body-worn connected device with a camera, microphone, and projector, with access to AI and personal user data: AR without glasses.

Humane envisions this device as a smartphone replacement for what they call the “intelligence age” — the world transformed by AI, computer vision and machine learning.

During the demo, Chaudhri held a candy bar in front of the device in his pocket and asked, “Can I eat this?” The prototype appeared to use computer vision to recognize the product, pull the ingredient list from public online data, then compare it against the user’s intolerances and allergies to advise against eating it.

It translated Chaudhri’s English words into French, which it spoke in Chaudhri’s own simulated voice. It summarized key information from recent emails with the command, “Catch me up,” and performed other ChatGPT-like feats. When his wife called, the caller ID information was projected onto his palm, with actionable buttons made of light.
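
Humane hasn’t said how that pipeline is wired together, but the candy-bar flow Chaudhri described (recognize the product, look up its ingredients, check them against the user’s stored sensitivities) can be sketched in a few lines. Everything below is hypothetical: the function names, sample catalog, and intolerance list are invented purely to illustrate the steps.

```python
# Hypothetical sketch of the "Can I eat this?" flow from the Humane demo.
# None of this reflects Humane's actual implementation; the function names,
# sample catalog, and user profile are invented for illustration.

def recognize_product(image) -> str:
    """Stand-in for the computer-vision step that identifies the product."""
    return "candy bar"  # pretend the vision model recognized this item

def lookup_ingredients(product: str) -> set[str]:
    """Stand-in for pulling an ingredient list from public online data."""
    sample_catalog = {
        "candy bar": {"sugar", "whole milk powder", "cocoa butter", "soy lecithin"},
    }
    return sample_catalog.get(product, set())

# Stand-in for the intolerances/allergies stored in the user's profile.
USER_INTOLERANCES = {"cocoa butter"}

def can_i_eat_this(image) -> str:
    product = recognize_product(image)
    flagged = lookup_ingredients(product) & USER_INTOLERANCES
    if flagged:
        return f"Better not: this {product} contains {', '.join(sorted(flagged))}."
    return f"This {product} looks fine for you."

print(can_i_eat_this(image=None))  # -> Better not: this candy bar contains cocoa butter.
```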

It’s a new kind of device that can be understood as an AI-based, highly personalized wearable Amazon Echo smart speaker that can see. Or you can envision it as advanced AR glasses without the glasses. Instead of showing data through glasses, it projects the information onto any nearby surface in response to a hand gesture.

But the most compelling way to describe it is as AI-specific hardware: a device designed to make the physical machinery needed for interacting with various kinds of AI invisible, a part of the body.

Given the excitement and energy around AI right now, that’s a far more compelling vision than any description of Apple’s eventual AR glasses I’ve heard.

To be clear, it’s likely that all of Apple’s future AR devices will access AI, including a possible future Siri that draws on GPT-like language models. And the functionality of Humane’s device could be built into glasses.

But Apple’s master plan of starting out with a big, bulky, powerful, expensive solution to AR, then eventually paring it down to socially acceptable mobile, self-contained glasses in four or five years or more seems increasingly antiquated.

Silicon Valley’s pivot from the ‘metaverse’ to AI

Meta CEO Mark Zuckerberg’s “metaverse” idea isn’t catching fire as hoped. In fact, the tech industry in general seems to be laying off thousands of employees who had been working on AR and VR while doubling down on AI investment in the wake of the OpenAI-driven revolution. That includes Microsoft, OpenAI’s biggest investor.

What the world expects at the moment, and what any number of startups and established players are no doubt working on, are AR wearables that augment reality via a human-like personal assistant that draws on AI, computer vision, and machine learning.

While augmenting reality remains a compelling vision for businesses, creatives and consumers, the AI revolution of 2023 has created demand for augmenting ourselves. What the world wants now is wearable AI hardware.

Copyright © 2023 IDG Communications, Inc.

This story originally appeared on Computerworld
