Nvidia’s NeMo taps generative AI in designing semiconductor chips
In a research paper released today, Nvidia semiconductor engineers showcased how generative artificial intelligence (AI) can assist in the complex process of designing semiconductors.


The study demonstrated how specialized industries can leverage large language models (LLMs) trained on internal data to create assistants that enhance productivity.


The research, utilizing Nvidia NeMo, highlights the potential for customized AI models to provide a competitive edge in the semiconductor field.


Semiconductor design is a highly challenging endeavor, involving the meticulous construction of chips containing billions of transistors on 3D circuitry that resembles a map of city streets, but with wires thinner than a human hair.







It requires the coordination of multiple engineering teams over a span of years. Each team specializes in different aspects of chip design, employing specific methods, software programs, and computer languages.


Nvidia chip designers came up with a way for LLMs to assist them in creating semiconductor chips.

Mark Ren, an Nvidia Research director, was the lead author of the paper.


“I believe over time large language models will help all the processes, across the board,” Ren said in a statement.


The paper was announced by Bill Dally, Nvidia’s chief scientist, during a keynote at the International Conference on Computer-Aided Design held in San Francisco.


“This effort marks an important first step in applying LLMs to the complex work of designing semiconductors,” said Dally, in a statement. “It shows how even highly specialized fields can use their internal data to train useful generative AI models.”


The research team at Nvidia developed a custom LLM called ChipNeMo, trained on the company’s internal data, to generate and optimize software and assist human designers. The long-term goal is to apply generative AI to every stage of chip design, leading to substantial gains in overall productivity. The initial use cases explored by the team include a chatbot, a code generator, and an analysis tool.


The most well-received use case so far is an analysis tool that automates the time-consuming task of keeping bug descriptions up to date. Also under development are a prototype chatbot that helps engineers quickly find technical documents and a code generator that produces snippets of specialized software for chip designs.
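To give a concrete sense of how such a documentation chatbot can work, here is a minimal retrieval sketch. The article does not describe ChipNeMo's actual retrieval pipeline; the embedding model, the toy corpus, and the `top_docs` helper below are illustrative assumptions, using the sentence-transformers library as a stand-in.

```python
# A minimal sketch of the retrieval step behind a documentation chatbot.
# The model name and the toy corpus are illustrative, not from the paper.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Timing closure flow for the floorplanning tool, release 4.2.",
    "How to file a bug against the RTL simulation environment.",
    "Power-grid analysis setup for advanced process nodes.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works
doc_vecs = model.encode(docs, normalize_embeddings=True)

def top_docs(query: str, k: int = 2) -> list[str]:
    """Rank internal documents by cosine similarity to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(-scores)[:k]]

print(top_docs("where do I report a simulator bug?"))
# The retrieved passages would then be passed to the LLM as context
# (retrieval-augmented generation) so answers draw on internal docs.
```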


The research paper focuses on the team’s efforts to gather its design data and create a specialized generative AI model, a process that can be applied in any industry. The team started with a foundation model and used Nvidia NeMo, a framework for building, customizing, and deploying generative AI models, to refine it. The final ChipNeMo model, with 43 billion parameters and trained on more than a trillion tokens, demonstrated its ability to understand patterns in chip-design data.
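As a rough illustration of that refinement step, the sketch below shows domain-adaptive fine-tuning of a pretrained causal language model on an internal text corpus. Nvidia’s actual runs used NeMo at far larger scale; since NeMo’s APIs vary by release, this sketch substitutes the Hugging Face transformers Trainer, and the model ID, data path, and hyperparameters are all placeholders.

```python
# A minimal sketch of domain-adaptive fine-tuning on internal text.
# Every name below (model id, data path, hyperparameters) is a placeholder.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # stand-in for a large pretrained foundation model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Internal design documents, bug reports, and code, one example per line.
data = load_dataset("text", data_files={"train": "internal_corpus.txt"})

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=512)

train = data["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chip-assistant",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()  # continued pretraining on the domain corpus
```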


The study serves as an example of how a deeply technical team can refine a pretrained model with its own data. It highlights the importance of customizing LLMs, as even models with fewer parameters can match or exceed the performance of larger general-purpose LLMs. Careful data collection and cleaning are crucial during the training process, and users are advised to stay updated on the latest tools that can simplify and expedite their work.
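A small sketch of what careful data cleaning can mean in practice: deduplicating documents and dropping fragments too short to carry signal. The length threshold and hashing choice here are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of corpus cleaning before training:
# deduplicate documents and drop low-signal stubs.
import hashlib

def clean_corpus(docs: list[str], min_chars: int = 200) -> list[str]:
    """Return deduplicated documents above a minimum length."""
    seen, kept = set(), []
    for doc in docs:
        text = doc.strip()
        if len(text) < min_chars:
            continue  # skip stubs, headers, and other fragments
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate already kept
        seen.add(digest)
        kept.append(text)
    return kept
```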


The semiconductor industry is just beginning to explore the possibilities of generative AI, and this research provides valuable insights. Enterprises interested in building their own custom LLMs can utilize the NeMo framework, which is available on GitHub and the Nvidia NGC catalog, Nvidia said.


The paper has a lot of names on it: Mingjie Liu, Teo Ene, Robert Kirby, Chris Cheng, Nathaniel Pinckney, Rongjian Liang, Jonah Alben, Himyanshu Anand, Sanmitra Banerjee, Ismet Bayraktaroglu, Bonita Bhaskaran, Bryan Catanzaro, Arjun Chaudhuri, Sharon Clay, Bill Dally, Laura Dang, Parikshit Deshpande, Siddhanth Dhodhi, Sameer Halepete, Eric Hill, Jiashang Hu, Sumit Jain, Brucek Khailany, Kishor Kunal, Xiaowei Li, Hao Liu, Stuart Oberman, Sujeet Omar, Sreedhar Pratty, Ambar Sarkar, Zhengjiang Shao, Hanfei Sun, Pratik P Suthar, Varun Tej, Kaizhe Xu and Haoxing Ren.







Source: VentureBeat
