Oct 27, 2023

Apple isn’t labeling its AI products ‘AI.’ Here’s why that’s important.

Apple's two-hour-and-nine-minute keynote presentation on Monday unveiled a bevy of new products and features, not least of which was the company's $3,500 Vision Pro mixed-reality headset.

With all the hype around ChatGPT and similar apps that can generate original text and images, analysts anticipated that Apple would incorporate AI at every opportunity into the software running its iPhones, Mac computers, and the new headset.

And Apple did use the underlying technology as it announced improved autocorrect on the iPhone, adaptive audio for AirPods, and a new journaling app. The new headset will also use the tech to create digital avatars after scanning users’ faces.

But Apple, as it often does, went its own way and never mentioned the phrases "artificial intelligence" or "generative AI," which have been commonly used to describe the new class of apps. Instead, Apple used more technical terms such as machine learning, transformer models, and "an advanced encoder-decoder neural network." (That one really rolls off the tongue.)

The company did not explain its choice of terminology. But CEO Tim Cook and other executives emphasized several times that the machine learning calculations would be done on customers’ own devices, not in the cloud, and that Apple was not collecting or selling any of the data produced.

Why is all of this important? Because what you call a technology this pervasive really matters.

Harvard University chaplain Greg Epstein has been thinking a lot about the way we talk about technology and how that shapes how the tech is used and regulated. His book, due out next year, is called "Tech Agnostic: How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation."

"This discussion or debate about terminology, to me, is really about the stories we tell ourselves as a society," he said.

Calling an app "artificial intelligence," for example, implies something very human; calling it a "machine learning model" makes it sound more like a tool.

"If we tell ourselves a given new technology is the equal of us, we’re telling ourselves a particular story that flatters the creators of the technology, and puts them in a very important position, and may downgrade the rest of us," he said. "But if these are simply machines meant to fill a role, which they may or may not do appropriately, in a way that benefits humanity as a whole, then we humans are still left to figure out all of the issues of equity and justice and ethics that we face."

Science fiction writer Ted Chiang made a similar point in an interview with the Financial Times published over the weekend. Chiang, whose stories delve into the human side of technology, worried that AI terminology exaggerates the significance of the new applications. He prefers to call ChatGPT and its ilk "applied statistics."

"The machines we have now, they’re not conscious," Chiang told the FT. "I think that if we had chosen a different phrase for it, back in the ‘50s, we might have avoided a lot of the confusion that we’re having now."

Sounds like Tim Cook may agree.

Aaron Pressman can be reached at [email protected]. Follow him on Twitter @ampressman.
