Exclusive: How Google Uses Machine Learning to Analyze Soccer Moves


When you buy the GMR insole, you get a pair of inserts (one for each shoe) and one Jacquard Tag. It's the same Tag that comes in Levi's newer jackets and the YSL backpack. Choose which shoe you want the Tag in, and put a dummy Tag in the other so the shoes feel balanced. After pairing the electronics with the FIFA game, you slip on your cleats and head out to a field. Your phone doesn't need to be anywhere near you while you run around; the Tag runs its machine learning algorithms locally, on the device itself.

It's smart enough to know that it doesn't need to track your walk to the pitch. Instead, the Tag only brings the bulk of its computing power to bear when it detects movements typical of soccer play. How does the Tag know what those movements look like? It has sensors inside that measure acceleration and angular rotation, as well as a microcontroller that can run neural networks, which are algorithmic programs taught to recognize patterns.
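
Google hasn't published the Tag's gating logic, but the two-stage idea is easy to sketch. Here's a minimal, hypothetical Python version: a cheap motion-energy check runs over every accelerometer window, and the expensive neural network is invoked only when the window looks like active play. The sample rate, window size, and threshold below are assumptions, not Google's values.

```python
import numpy as np

SAMPLE_RATE_HZ = 100            # assumed IMU sample rate
WINDOW = SAMPLE_RATE_HZ * 2     # two-second analysis window
ACTIVITY_THRESHOLD = 1.5        # assumed jerk threshold, in g per sample

def run_neural_network(accel_window: np.ndarray) -> None:
    print("running full motion classifier")  # placeholder for the real model

def is_active_play(accel_window: np.ndarray) -> bool:
    """Cheap first stage: does this (WINDOW, 3) window look like soccer motion?"""
    magnitudes = np.linalg.norm(accel_window, axis=1)  # per-sample |acceleration|
    jerk = np.abs(np.diff(magnitudes))                 # sample-to-sample change
    return jerk.max() > ACTIVITY_THRESHOLD             # spurts, not a steady walk

def process(accel_window: np.ndarray) -> None:
    if is_active_play(accel_window):
        run_neural_network(accel_window)  # expensive second stage
    # otherwise stay on the low-power idle path
```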

"We had to build a whole suite of new machine learning algorithms that can take the sensor data coming from the Tag and interpret this based on what the motions are," says Nicholas Gillian, lead machine learning engineer for Google ATAP.

You can learn a lot by looking at patterns. Data coming from a runner, for example, will look steady across the duration of the workout, and very cyclical. Data from a soccer player will look much more erratic, with sudden spurts and fast turns mixed with moments of little activity. Gillian says Google worked with Adidas, EA, and soccer experts to collect data from players in different contexts, whether during training or an actual game. That data was then used to train thousands of neural networks to recognize these complicated soccer motions. The data is anonymized so it isn't tied to a specific user, and the hardware has no GPS or location-tracking capability.
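
The article doesn't describe Google's model architecture, but a sketch helps make "training neural networks on sensor data" concrete. Below is a minimal, hypothetical Keras classifier over two-second windows of six-axis IMU data; the layer sizes, labels, and window shape are all assumptions.

```python
import tensorflow as tf

WINDOW, CHANNELS = 200, 6  # assumed: 2 s at 100 Hz, accel xyz + gyro xyz
MOTIONS = ["idle", "run", "turn", "pass", "shot"]  # illustrative labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(len(MOTIONS), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(windows, labels, ...) would train on the anonymized sensor windows
```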

The neural networks are now so well trained that the Tag can recognize when you make a fast turn, when you kick the ball, how far you've run, your peak speed, whether you're passing or shooting, and how powerful your kicks are. It can even estimate the ball's speed after you kick it. All of this happens in real time as the player moves.
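
A real-time pipeline built on such a classifier might look like the hypothetical sketch below: classify each incoming sensor window, tally the recognized motions, and track peak speed across the session. Every name here is illustrative, not Google's API.

```python
from collections import Counter
import numpy as np

stats = Counter()        # running tally of recognized motions
peak_speed_mps = 0.0     # fastest speed seen this session

def classify_window(window: np.ndarray) -> str:
    return "run"  # placeholder for the on-device neural network

def on_window(window: np.ndarray, speed_estimate_mps: float) -> None:
    """Called for each new sensor window as the player moves."""
    global peak_speed_mps
    motion = classify_window(window)  # e.g. "turn", "pass", "shot"
    stats[motion] += 1
    peak_speed_mps = max(peak_speed_mps, speed_estimate_mps)
```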

Gillian noted that machine-learning models like these are often gigabytes in size. The ATAP team managed to shrink its models down to a few kilobytes so they could run on the Tag, much as Google shrank Google Assistant's algorithms to run locally on its Pixel phones.
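
The article doesn't say which compression techniques ATAP used. One standard way to get a Keras model small enough for a microcontroller is post-training quantization with TensorFlow Lite, sketched here against the hypothetical model above:

```python
import tensorflow as tf

# `model` is the hypothetical Keras classifier from the earlier sketch.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_bytes = converter.convert()

with open("motion_model.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"Quantized model size: {len(tflite_bytes) / 1024:.1f} KB")
```

Quantization trades a small amount of accuracy for a large reduction in size, chiefly by storing weights as 8-bit integers rather than 32-bit floats.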

In the context of the FIFA app, though, the player needs to head back to their phone and wait for the data to be sent to the videogame to see progress toward their goals. You can play soccer normally, or you can specifically try to hit the goals required to advance your virtual team in the videogame. It doesn't matter whether you're an expert or an amateur, since the Google team made sure to collect data from players with varying levels of expertise.

"We're not asking you to play soccer in a different way," Giles said. "Just go play soccer the way you always play."

The Next Wave of Computing

Google has slowly been moving toward this future of ambient computing, where the tech is seamlessly integrated into your surroundings. Its most recent Pixel phones have a sensor that can identify hand gestures, letting owners wave a hand above the phone to skip tracks or play and pause music without touching the phone or speaking a voice command. The phone also has sensors that can detect whether the owner has been in a car crash, using machine-learning models of what happens during accidents, and will contact emergency services if the owner doesn't respond to a prompt.
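
That crash-detection behavior is essentially a detect, prompt, and escalate-on-timeout pattern. Here's a hypothetical sketch of the flow; every function is a placeholder rather than Android's real API, and the timeout is an assumption.

```python
import time

RESPONSE_TIMEOUT_S = 60  # assumed grace period before escalating

def prompt_owner(message: str) -> None:
    print(message)  # stands in for the phone's on-screen and voice prompt

def owner_responded() -> bool:
    return False    # stands in for checking for a tap or voice reply

def call_emergency_services() -> None:
    print("Calling emergency services...")  # placeholder for the real call

def on_crash_detected() -> None:
    prompt_owner("Were you in a car crash?")
    deadline = time.monotonic() + RESPONSE_TIMEOUT_S
    while time.monotonic() < deadline:
        if owner_responded():
            return               # owner is fine, stand down
        time.sleep(1)
    call_emergency_services()    # no response within the grace period
```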

Source: Wired
