What People Are Getting Wrong This Week: Artificial Intelligence

The phrase “artificial intelligence” was coined by pointy-heads in a 1955 research proposal. Back then, it referred to an obscure field of computer science devoted to then-hypothetical programs that could engage in tasks that “require high-level mental processes such as: perceptual learning, memory organization, and critical reasoning.”

Fast-forward to 2023: while AI has been a murmur in tech circles for the last few years, those conversations didn’t really get loud until the commercial release of products like ChatGPT and DALL-E. Now everyone is talking about AI everywhere you go: hyping it, demonizing it, fearing it, but most of all, misunderstanding it.

This is partly because it’s a complex subject (we don’t even agree on what “intelligence” is, let alone “artificial intelligence”), but another reason so many people are getting AI wrong comes down to that familiar villain: capitalism. With the explosion in popular interest, advertisers and marketers are slapping terms like “AI,” “AI-powered,” and “artificial intelligence” on so many products that the labels are losing what little meaning they once had.

Is my vacuum on the verge of becoming self-aware?

Consider the Roomba J7. According to both consumers and reviewers, iRobot’s robot vacuum is great for cleaning your carpets and floors, but according to its manufacturer, it also offers “intelligence that grows.”

The vacuum cleaner “never stops getting smarter,” iRobot proudly claims on the Roomba J7 website. But before you add “my robot servant beating me at Scrabble” to your list of fears, understand that the J7 is not intelligent by any standard definition of the word. The way it “gets smarter” is by receiving software updates. By that standard, my iPhone 8 would be a genius by now.

Another example: Walmart currently lists over 700 products under the heading “AI toys,” like an “AI Powered Autonomous Mobile Camera Robot.” It doesn’t think for itself, though; it “follows [a] pre-programmed path to patrol 24/7 and auto-docks to the charger.” There’s also an $18 “Robot AI rubber duck,” a “real collector’s item for all fans of artificial intelligence,” according to its ad copy. It’s literally just a rubber duck with a futuristic paint job.

The FTC warns against false AI claims

The problem has grown concerning enough that the Federal Trade Commission issued a warning a few months ago urging companies to “keep [their] AI claims in check,” but it feels like an empty threat.

While the commission promises that “FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims,” it’s hard to see what it can do in the face of cross-industry over-hyping of AI. The claim that something is “AI-powered” or “intelligent” seems too vague to count as false advertising when there is barely a loose consensus on the definition of those words. Who can say my vacuum isn’t “growing smarter” when its software is updated?

Marketers, particularly in the tech industry, have settled into using “AI-powered” or “artificial intelligence” the same way the food industry uses “farm raised” to describe poultry (where else would a chicken be raised?), or the way a “Homeopathic Accident and Emergency First Aid Kit” can claim it contains “remedies that were selected specifically to get you back on track,” even though it has no medicinal value. Both are examples of marketing speak: words that don’t actually refer to anything, but sound like they do. As long as people would rather buy the orange juice with “all natural” on the side of the container than the one without it, we’re going to keep seeing “AI-powered” claims on everything from bird feeders to electric roller skates.



Source: Lifehacker.com
