Don’t Let Mistrust of Tech Companies Blind You to the Power of AI

Meanwhile, in less visible ways, AI is already changing education, commerce, and the workplace. One friend recently told me about a big IT firm he works with. The company had a lengthy and long-established protocol for launching major initiatives that involved designing solutions, coding up the product, and engineering the rollout. Moving from concept to execution took months. But he recently saw a demo that applied state-of-the-art AI to a typical software project. “All of those things that took months happened in the space of a few hours,” he says. “That made me agree with your column. Tons of the companies that surround us are now animated corpses.” No wonder people are freaked.

What fuels a lot of the rage against AI is mistrust of the companies building and promoting it. By coincidence I had a breakfast scheduled this week with Ali Farhadi, the CEO of the Allen Institute for AI, a nonprofit research effort. He’s 100 percent convinced that the hype is justified but also empathizes with those who don’t accept it—because, he says, the companies that are trying to dominate the field are viewed with suspicion by the public. “AI has been treated as this black box thing that no one knows about, and it’s so expensive only four companies can do it,” Farhadi says. The fact that AI developers are moving so quickly fuels the distrust even more. “We collectively don’t understand this, yet we’re deploying it,” he says. “I’m not against that, but we should expect these systems will behave in unpredictable ways, and people will react to that.” Farhadi, who is a proponent of open source AI, says that, at the least, the big companies should publicly disclose what materials they use to train their models.

Compounding the issue is that many people involved in building AI also pledge their devotion to producing artificial general intelligence, or AGI. While many key researchers believe this will be a boon to humanity—it’s the founding principle of OpenAI—they have not made the case to the public. “People are frustrated with the notion that this AGI thing is going to come tomorrow or one year or in six months,” says Farhadi, who is not a fan of the concept. He says AGI is not a scientific term but a fuzzy notion that’s mucking up the adoption of AI. “In my lab when a student uses those three letters, it just delays their graduation by six months,” he says.

Personally I’m agnostic on the AGI issue—I don’t think we’re on the cusp of it but simply don’t know what will happen in the long run. When you talk to people on the front lines of AI, it turns out that they don’t know, either.

Some things do seem clear to me, and I think that these will eventually become apparent to all—even those pitching spitballs at me on X. AI will get more powerful. People will find ways to use it to make their jobs and personal lives easier. Also, many folks are going to lose their jobs, and entire companies will be disrupted. It will be small consolation that new jobs and firms might emerge from an AI boom, because some of the displaced people will still be stuck in unemployment lines or cashiering at Walmart. In the meantime, everyone in the AI world—including columnists like me—would do well to understand why people are so enraged, and respect their justifiable discontent.

Source: Wired
