Gaming Giant Unity Wants to Digitally Clone the World





In video games, non-playable characters can be somewhat clueless. An NPC might wander across a city block and face-plant into a streetlamp, and then maybe vanish the next block over. NPCs leap into player-characters’ punches or commit to kicking a wall 400 times, never learning that the wall won’t kick back.


Unity Technologies is in the business of NPCs. Founded in 2004, Unity makes an eponymous game engine that provides the architecture for hundreds of video games using its real-time 3D computer graphics technology. Unity also provides countless tools integrated with that game engine, including AI tools. In the Unity game engine, developers design their 3D city blocks and streetlamps; model their NPCs; animate their punches; and maybe—through Unity’s AI technology—teach them when to stop kicking.

Five years ago, Unity’s executives had a realization: In the real world, there are a lot of situations that would enormously benefit from NPCs. Think about designing a roller coaster. Engineers can’t ask humans to stand up on a roller coaster ahead of a hairpin turn to test whether they’d fly off. And they definitely can’t ask them to do it 100 or 1,000 times, just to make sure. But if an NPC had all the pertinent qualities of a human being—weight, movement, even a bit of impulsiveness—the engineer could whip them around that bend 100,000 times, like a crazed kid playing *RollerCoaster Tycoon*, to discern under which circumstances they’d be ejected. The roller coaster, of course, would be digital too, with its metal bending over time and the speed of its cars sinking and rising depending on the number of passengers.

Unity spun that idea into an arm of its business and is now leveraging its game engine technology to help clients make “digital twins” of real-life objects, environments, and, recently, people. “The real world is so freaking limited,” said Danny Lange, Unity’s senior vice president of artificial intelligence, in Unity’s San Francisco headquarters last October. Speaking with WIRED in 2020, he had told me, “In a synthetic world, you can basically recreate a world that is better than the real world for training systems. And I can create many more scenarios with that data in Unity.”

Digital twins are virtual clones of real-life things, acting and reacting in virtual space the same way their physical counterparts do. Or at least, that’s what the term implies. The word “twin” does a lot of heavy lifting. It will be a long time before simulations boast one-to-one verisimilitude to their references, and these “twins” take a mountain of human labor to create. Right now, though, dozens of companies are using Unity to create digital twins of robots, manufacturing lines, buildings, and even wind turbines to virtually design, operate, monitor, optimize, and train them. These twins rust in the rain and quicken with lubricant. They learn to avoid a lump or identify a broken gear. With an accurate enough digital twin, Lange says, Unity’s game engine can even collect “synthetic data” off the simulation to better understand and advance its IRL double.

“We're actually a massive data company,” says Lange. “We early on realized that at the end of the day, real-time 3D is about data and nothing but data.” Unity’s major digital twin clients are in the industrial machinery world, where they can use digital simulations in place of more expensive physical models. Unity executives believe that the company’s real-time 3D technology and AI capabilities position them to compete with the many other firms entering the $3.2 billion space, including IBM, Oracle, and Microsoft. David Rhodes, Unity’s senior vice president in charge of digital twins, says his goal is for Unity to one day host “a digital twin of the world.”

Across a conference table in Unity’s headquarters, Lange shared that he had never seen himself becoming a critical part of a games company. After his time as Amazon Machine Learning’s general manager and Uber’s head of machine learning, though, he realized that a game engine could be the solution to some of the thorniest data problems he’d encountered in tech. At Uber, Lange would watch as engineers tossed dummies in front of self-driving cars again and again to test vehicles' ability to brake for humans. The car would have to identify that the object was human-shaped, as well as calculate the dummy's velocity and direction.

“You can do that a few times,” says Lange. “But how many times does it take [to train the AI]? A thousand? In a game engine, you can have an NPC actually trying to get killed in front of a car, and you can see if the car can actually prevent it.” In Unity, he said, an engineer could generate an amount of data corresponding to driving an Uber vehicle 500 million miles every 24 hours. (Uber sold its autonomous vehicle unit in 2020, two years after a self-driving Uber vehicle killed a pedestrian.)

Simulating the real world, or anything in it, requires a lot of data. Unity’s customers can plug any number of sensor-based systems into the game engine: location data, CAD data, computer vision data, natural language processing data. One customer in luxury real estate, for example, engineered what he calls the most detailed map of London inside Unity by flying airplanes over the city and collecting tons of visual info. (His 3D simulation zooms in to 5-centimeter pixels.)
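Neither Unity nor Uber has published the code behind these simulations, but the pattern Lange describes, spawning endless randomized NPC encounters and logging the outcome of each, is simple to sketch. Here is a toy version in Python; every name, number, and physics shortcut below is an illustrative assumption, not anyone’s actual software:

```python
import csv
import random

# Hypothetical physics for one pedestrian-braking scenario. None of these
# names come from Unity or Uber; they stand in for whatever a real
# simulator would expose.
REACTION_TIME_S = 0.5   # time before the brakes engage
MAX_DECEL_MS2 = 8.0     # hard-braking deceleration, m/s^2

def scenario_outcome(car_speed, detection_range):
    """Return True if the car stops before reaching the pedestrian."""
    # Distance covered while the system reacts, then while braking.
    reaction_dist = car_speed * REACTION_TIME_S
    braking_dist = car_speed ** 2 / (2 * MAX_DECEL_MS2)
    return reaction_dist + braking_dist < detection_range

def generate_synthetic_data(n_scenarios, path="braking_runs.csv"):
    """Run n randomized NPC-in-the-road scenarios and log each result."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["car_speed_ms", "detection_range_m", "stopped"])
        for _ in range(n_scenarios):
            car_speed = random.uniform(5.0, 30.0)         # ~18-108 km/h
            detection_range = random.uniform(10.0, 80.0)  # sensor-dependent
            writer.writerow([
                round(car_speed, 2),
                round(detection_range, 2),
                scenario_outcome(car_speed, detection_range),
            ])

# A game engine can run millions of these per day, no crash-test dummies needed.
generate_synthetic_data(100_000)
```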

Uploading the physical world into the metaverse is no mean feat. “I don't want to say it’s sweat, blood, and tears, but it is manual labor. Digital twins right now are built by people,” says Adrien Gaidon, a senior manager for Toyota Research Institute’s machine learning division. In 2014, before Unity really leaned into the digital twin business, Gaidon had the idea of making a digital twin of a town in Germany—of the trees, cars, roads, and pedestrians—to develop software for self-driving cars. A small digital twin used and reused 100,000 times is a perfect use case for the technology, he says, at least right now. “It’s an aspirational goal, to understand the world through a digital twin of the world,” says Gaidon, “and I think nobody is even close to it.”

Digital twins rely on an enormous amount and variety of data sources; otherwise, they’re not accurate. And if the twins are not accurate, Unity’s customers can’t rely on them to generate accurate synthetic data about their real-life counterparts. At the same time, vacuuming up all that data raises important questions about surveillance and privacy, especially now that Unity is beginning to create digital twins of human populations—which is not a traditional use of digital twin technology.

In December 2021, Unity published a paper called “PeopleSansPeople: A Synthetic Data Generator for Human-Centric Computer Vision.” While Unity is pitching its game engine as a place to simulate crowds of people, it is now providing a way to essentially NPC-ify their real-life counterparts. Unity says PeopleSansPeople will help anonymize data collected about humans going about their lives, and the software’s operators can modulate those virtual people’s appearances to create more customizable datasets. Citing “serious and important privacy, legal, safety, and ethical concerns” that “limit capturing human data,” Unity pitches its digital twin technology as an “emerging alternative to real-world data that alleviates some of these issues.”
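The real PeopleSansPeople generator runs inside the Unity engine and renders labeled images of 3D human models. The toy Python sketch below captures only the underlying domain-randomization idea: sample templates, clothing, poses, and lighting at random, and ground-truth labels come free because the generator placed everything itself. All names and numbers are illustrative assumptions, not Unity’s API:

```python
import json
import random

# Toy sketch of the domain-randomization idea behind a synthetic-human
# data generator. The real pipeline renders actual images inside Unity.
NUM_BODY_TEMPLATES = 32  # artist-made body templates, no real people
POSES = ["walking", "standing", "sitting", "running"]
CLOTHING_TEXTURES = [f"texture_{i:03d}" for i in range(200)]

def randomize_scene(num_people):
    """Sample one randomized scene description with free ground-truth labels."""
    return {
        "lighting_intensity": random.uniform(0.2, 1.5),
        "camera_height_m": random.uniform(1.0, 4.0),
        "people": [
            {
                "template_id": random.randrange(NUM_BODY_TEMPLATES),
                "clothing": random.choice(CLOTHING_TEXTURES),
                "pose": random.choice(POSES),
                # The generator placed the person, so its bounding box is
                # known exactly; no human annotator required.
                "bbox": [round(random.uniform(0, 1), 4) for _ in range(4)],
            }
            for _ in range(num_people)
        ],
    }

# Emit 10,000 labeled scene descriptions for a renderer to turn into images.
with open("synthetic_scenes.jsonl", "w") as f:
    for _ in range(10_000):
        f.write(json.dumps(randomize_scene(random.randint(1, 8))) + "\n")
```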

“It's taking our gaming experience of creating NPCs in games, making that available to create computer vision systems that have this ability to engage with humans—to understand the human pose. We do that without using real people, so we totally control the bias,” Lange said. Confused, I pointed out that it seemed like Unity’s customers do need data from real people for this to work. Lange added, “Of course we model with real people, but at the end of the day, there’s an artist who goes in there. You wouldn’t be able to recognize any of these people. They’re templates.”

Unity is working closely with several airports to simulate their environments and human traffic flows in real time, including the Hong Kong International Airport and the Vancouver International Airport. In its San Francisco office, Unity demoed a digital twin of the Vancouver International Airport, which appeared as a detailed map. At the bottom of the screen was the word “Live” and labels for “Connections,” “Pre-Board Screening,” and “Customs.” “All of the different sensors—airline information, traffic flow information, caterers bringing in food, security—this is a tremendously dense amount of information,” Unity’s head of XR, Timoni West, told WIRED in early 2020. “And it needs to be taken in locally, to know what’s happening in a specific part of the airport. Unity can bring it all in, and we can output that information.” Through Unity, an airport manager can see what’s going on at Gate A-32 and even have access to localized audio.
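Unity hasn’t detailed the plumbing behind these airport twins, but the general pattern, many local feeds folded into one queryable live state, can be sketched in a few lines. The feed names and fields below are hypothetical:

```python
import time
from collections import defaultdict
from dataclasses import dataclass, field

# Minimal sketch of the sensor fusion an airport digital twin performs:
# many local readings in, one live, queryable state out. A real deployment
# would stream from airline systems, checkpoints, and IoT sensors.

@dataclass
class ZoneState:
    people_count: int = 0
    last_update: float = 0.0
    events: list = field(default_factory=list)

class AirportTwin:
    def __init__(self):
        self.zones = defaultdict(ZoneState)

    def ingest(self, zone, reading):
        """Fold one local sensor reading into that zone's live state."""
        state = self.zones[zone]
        if "people_count" in reading:
            state.people_count = reading["people_count"]
        if "event" in reading:
            state.events.append(reading["event"])
        state.last_update = time.time()

    def snapshot(self, zone):
        """What a manager sees when they click on, say, Gate A-32."""
        return self.zones[zone]

twin = AirportTwin()
twin.ingest("gate_a32", {"people_count": 41})
twin.ingest("gate_a32", {"event": "catering_truck_arrived"})
print(twin.snapshot("gate_a32"))
```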

“Singapore [Changi Airport] is really awesome because they’re working on gamifying the experience, like racking up points through retail,” says Crystal Garcia, a senior strategic business development manager for Unity’s Industrial Markets division.

In a lot of ways, Unity’s technology is a hammer. It is a powerful tool that, in the wrong hands, could break down some privacy hedges. “We make our software available to anybody and everybody as long as it doesn't break the law,” says Rhodes. “In other words, there's not a lot we could do, nor maybe should we do, to prevent people from buying our software.” At the same time, he adds, Unity has an ethics board to evaluate some potential clients. And sometimes the company decides it doesn’t want to work with someone based on their use case for the technology. Years ago, Unity decided against taking on the Chinese AI firm SenseTime, which develops facial recognition technology, as a client. (The US government has imposed sanctions against SenseTime for its role in surveilling China’s Uyghur population.)

Asked whether Unity does facial recognition, Rhodes said, “Our customers have the ability to use Unity, along with sensor-based systems, to connect to physical systems that do facial recognition.” In the case of SenseTime, he added, “we shut down access to our software because we didn’t feel comfortable with the use case.” Unity says that it doesn't collect demographic or personal data and it anonymizes the data it does collect.

Even when humans are translated into NPCs, there are privacy concerns, says Ryan Calo, a law professor at the University of Washington and cofounder of the Tech Policy Lab. He asks whether subjects are adequately aware of, and have consented to, being incorporated into a model. A lot of people, he says, “bristle against the notion of trying to predict our behavior or our traits on the basis of available data.” At the same time, he says, it can be difficult to gauge whether collected data is adequately anonymized, or whether AI can de-anonymize it. “Artificial intelligence is increasingly able to derive the intimate from the available. Systems are very good at extrapolating based on patterns.”

Unity also contracts with the US military—something some Unity employees have expressed frustration over, according to a report in Vice last year. An internal memo titled “GovTech Projects – Communication Protocol” asked employees not to “discuss any projects that involve the use of simulated or virtual weapons or training to harm another person.” According to a Slack message obtained by Vice, Unity CEO John Riccitiello told employees concerned about government applications for the technology that there is a “thorough review process, and we have not nor will we support programs where we knowingly violate our principles or values.”

The military does use Unity software to replace a real-life training program involving dropping live munitions on plane runways. Asked to explain how, Lange said, “During a crisis, opponents would like to blow up a few runways so the planes can’t take off. And when they do that, they actually drop munitions on the runway and they leave unexploded ordnances back because then you can’t go out and repair your runway. This project basically generated runway images with … unexploded ordnances. The defense side used synthetic data to train a computer model to detect unexploded ordnances. So when they fly a drone over the runway, it can identify where those ordnances are. When the soldiers get out there, they know where not to go.” Normally, he said, they bombed their own runways and took pictures of it—and they didn’t do it very often.
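Lange didn’t share implementation details, but train-on-synthetic, deploy-on-real is a standard computer vision pattern. Below is a deliberately tiny version, assuming scikit-learn, with 16x16 grayscale patches standing in for rendered runway imagery and a linear classifier standing in for a real detector:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy version of the train-on-synthetic, deploy-on-real pattern Lange
# describes. A real system would render photorealistic runway images in
# the engine and train a deep detector; everything here is a stand-in.
rng = np.random.default_rng(0)

def synthetic_patch(has_ordnance):
    """A 16x16 'runway' patch: noisy asphalt, plus a bright blob if armed."""
    patch = rng.normal(0.4, 0.05, (16, 16))  # asphalt texture
    if has_ordnance:
        r, c = rng.integers(4, 12, size=2)
        patch[r - 2:r + 2, c - 2:c + 2] += 0.5  # ordnance blob
    return patch.ravel()

# Generate a labeled synthetic training set; no one has to bomb a runway.
labels = rng.integers(0, 2, 5000)
patches = np.stack([synthetic_patch(y) for y in labels])
detector = LogisticRegression(max_iter=1000).fit(patches, labels)

# "Drone pass": score fresh patches the model has never seen.
test_labels = rng.integers(0, 2, 500)
test_patches = np.stack([synthetic_patch(y) for y in test_labels])
print("detection accuracy:", detector.score(test_patches, test_labels))
```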

WIRED asked Lange whether Unity’s AI technology has been used for the purposes of identifying alleged terrorists or distinguishing them from civilians. He said no, adding that Unity is “a real-time 3D company. We’re a gaming company. We’re very far from a company that built that software … We are not a drone company.”

Unity’s move raises questions about what counts as digitally benign. Is it okay to collect lots of data on people if, in the end, those people materialize as NPCs? If the person using Unity technology only knows them as NPCs, does it matter that they were once humans? And finally, how much can you really know about the real world through a simulated version of it, and its simulated data?

Unity’s big step into domains previously considered overly ambitious for a game company portends a future in which game companies will join the ranks of broader tech companies. And when that happens, onlookers will begin to realize that behind any NPC is an unfathomable brain with unfathomable motives.


Source: Wired
