
AI will not fix Apple’s sluggish iPhone sales any time soon

The technology is not yet ready for prime time on phones or other devices

SAN FRANCISCO

Bling is in the air. On September 9th Apple unveiled its latest iPhone 16 series at an event called “It’s Glowtime”. The name referred to the sheen around Siri, its souped-up voice assistant. But it was just as appropriate for the new colour of its snazziest iPhone 16 Pro model: “desert titanium”—in other words, gold.

Chart: The Economist

A bit lacking, though, was zing. Tim Cook, Apple’s boss, played up the promise of the phones’ generative artificial-intelligence (AI) features, which he trailed with much hoopla in June under the moniker “Apple Intelligence”. Although the devices come with Apple’s new superfast A18 chips to power AI, iPhone buyers will have to wait until at least October for the first features. The demos look ho-hum. If you point the camera at a restaurant, Apple Intelligence can tell you what’s on the menu. You can type a request to Siri, as well as ask it questions. Investors hope that eventually more conversational and personalised AI features will reboot iPhone sales, which account for about half of Apple’s revenues but have sagged lately (see chart). They could be waiting a while.

Apple is one of many firms that want to take generative AI beyond giant data centres, known as the cloud, and run it on smaller devices, known as the edge. Samsung, Apple’s main smartphone rival, got a head start, launching its Galaxy S24 with some AI features earlier this year. So did Microsoft, which has launched Windows PCs designed for AI, called Copilot+. But by and large the market is still up for grabs. Cracking it will not be easy.

Most large language models (LLMs) are trained with graphics processing units (GPUs) that use so much energy it can take a nuclear-power plant to fuel them. They also need huge amounts of memory and unfathomable quantities of data. All that can cost hundreds of millions of dollars.

Even once they are trained, running these mega-models is expensive. According to one estimate, it costs OpenAI, the maker of ChatGPT, 36 cents every time someone asks the bot a question. Edge devices instead deploy smaller models, distilled from their cloud-based big brothers. These are cheaper, and also faster. The goal is to reach such low levels of latency that response times feel almost human. Edge AI can also learn about a user from their interactions with their device (Apple calls this “semantic indexing”).
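The distillation mentioned above can be sketched in a few lines: a small “student” model is trained to match the softened output probabilities of a large cloud-based “teacher”. The temperature parameter and loss below illustrate the general technique only; they are not any firm’s actual recipe.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw model scores into probabilities; a higher
    temperature yields a 'softer', more informative distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's: the quantity a distilled model is trained to minimise."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * np.log(p / q)))

# A student that mimics its teacher incurs a small loss;
# a mismatched one incurs a larger loss.
teacher = [4.0, 1.0, 0.2]
good_student = [3.9, 1.1, 0.1]
bad_student = [0.2, 1.0, 4.0]
assert distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student)
```

Driving this loss down lets a model a fraction of the teacher’s size approximate its behaviour, which is what makes on-device deployment feasible at all.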

In practice, however, shifting AI to the edge is not straightforward. One problem is performance. Complex queries, such as using an AI bot to plan a holiday, will still require cleverer cloud-based LLMs. Another problem is computational power. Even smaller AI models require oodles of it to run, quickly draining a device’s batteries.

Companies are experimenting with various solutions. Apple Intelligence will offer on-device AI as a first port of call, but send trickier queries to the firm’s private cloud. The service will direct the most idiosyncratic requests to third-party LLMs such as ChatGPT. Apple promises to do so only with the user’s permission, but the approach could still worry the privacy-conscious. Devices, especially smartphones, have access to vast amounts of users’ personal data: whom they call, where they live, what they spend, what they look like. Some may prefer that if generative AI tools use such information, it remains on-device.
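The hybrid approach amounts to a router: handle what it can locally, escalate harder queries to a private cloud, and let anything leave for a third party only with permission. The tiers, thresholds and names below are invented for illustration; Apple has not published its routing logic.

```python
from dataclasses import dataclass

# Illustrative destination tiers; the real routing criteria are not public.
ON_DEVICE, PRIVATE_CLOUD, THIRD_PARTY = "on-device", "private-cloud", "third-party"

@dataclass
class Query:
    text: str
    complexity: int                    # hypothetical 0-10 difficulty score
    user_permits_sharing: bool = False # explicit consent to use a third party

def route(q: Query) -> str:
    """First port of call is the on-device model; trickier queries go to
    a private cloud; only the most idiosyncratic leave for a third-party
    LLM, and only with the user's explicit permission."""
    if q.complexity <= 3:
        return ON_DEVICE
    if q.complexity <= 7 or not q.user_permits_sharing:
        return PRIVATE_CLOUD
    return THIRD_PARTY

assert route(Query("what's on this menu?", complexity=2)) == ON_DEVICE
assert route(Query("plan my holiday", complexity=9)) == PRIVATE_CLOUD  # no consent given
assert route(Query("plan my holiday", complexity=9, user_permits_sharing=True)) == THIRD_PARTY
```

Note how the permission check acts as a gate rather than a tier: without consent, even the hardest query stays within the firm’s own infrastructure.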

Tech firms are also making use of alternatives to GPUs that use less energy, such as neural processing units (NPUs), to run AI models on the edge. Qualcomm, which makes NPUs and various other chips for edge devices, talks about maximising “performance per watt”. Compared with GPUs, whose costs can be stratospheric, NPUs are also cheaper. No one, after all, wants a phone that costs as much as a data centre.
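“Performance per watt” is simple arithmetic: throughput divided by power drawn. The figures below are made up purely to show the calculation, not measurements of any real chip.

```python
def tokens_per_watt(tokens_per_second: float, watts: float) -> float:
    """Throughput divided by power draw: the efficiency metric
    chipmakers cite for edge inference."""
    return tokens_per_second / watts

# Hypothetical numbers, invented for illustration only: a fast but
# power-hungry GPU versus a slower, frugal NPU.
gpu_efficiency = tokens_per_watt(tokens_per_second=2000, watts=400)  # 5.0 tokens/joule
npu_efficiency = tokens_per_watt(tokens_per_second=50, watts=5)      # 10.0 tokens/joule
assert npu_efficiency > gpu_efficiency
```

On this metric a chip can win despite being far slower in absolute terms, which is the whole pitch for NPUs in battery-powered devices.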

Plenty of firms have an interest in shifting AI to devices. Cloud-based LLMs are heavily dependent on Nvidia, the leading maker of GPUs. But in edge AI “there’s nobody that dominates,” says Taner Ozcelik, who runs Mythic, a startup making energy-efficient chips for AI devices.

Although no single firm may gain as much from edge AI as Nvidia has from the cloud variety, there would still be big winners, says Neil Shah of Counterpoint, a research firm. Making the technology work could not only trigger a supercycle in device sales, but also create new opportunities for apps and digital advertising. For the moment, though, edge AI is barely ready for showtime, let alone Glowtime.


This article appeared in the Business section of the print edition under the headline “Not edgy enough”

From the September 14th 2024 edition
