AI apocalypse team formed to fend off catastrophic nuclear and biochemical doomsday scenarios

AI expert Marva Bailer explains how, even with laws in place, the average person has more access than ever to tools for creating deepfakes of celebrities.

Artificial intelligence (AI) is advancing rapidly, bringing unprecedented benefits to us, yet it also poses serious risks, such as chemical, biological, radiological and nuclear (CBRN) threats, that could have catastrophic consequences for the world. 

How can we ensure that AI is used for good and not evil? How can we prepare for the worst-case scenarios that might arise from AI?

CLICK TO GET KURT’S FREE CYBERGUY NEWSLETTER WITH SECURITY ALERTS, QUICK VIDEO TIPS, TECH REVIEWS, AND EASY HOW-TO’S TO MAKE YOU SMARTER

How OpenAI is preparing for the worst 

These are some of the questions that OpenAI, a leading AI research lab and the company behind ChatGPT, is trying to answer with its new Preparedness team. Its mission is to track, evaluate, forecast and protect against the frontier risks of AI models. 

Artificial intelligence is advancing rapidly, bringing unprecedented benefits, yet it also poses serious risks. (Cyberguy.com)


What are frontier risks? 

Frontier risks are the potential dangers that could emerge from AI models that exceed the capabilities of the current state-of-the-art systems. These models, which OpenAI calls "frontier AI models," could have the ability to generate malicious code, manipulate human behavior, create fake or misleading information, or even trigger CBRN events. 

The dangers of deepfakes 

For example, imagine an AI model that can synthesize realistic voices and videos of any person, such as a world leader or a celebrity. Such a model could be used to create deepfakes, fake videos or audio clips that look and sound real. Deepfakes could serve various malicious purposes, such as spreading propaganda, blackmailing victims, impersonating public figures or inciting violence. 


Anticipating and preventing AI catastrophe scenarios  

Another example is an AI model that can design novel molecules or organisms, such as drugs or viruses. Such a model could be used to create new treatments for diseases or enhance human capabilities. However, it could also be used to create bioweapons or release harmful pathogens into the environment. 

An AI model could be used to create new treatments for diseases or enhance human capabilities. (Cyberguy.com)


These are just some of the possible scenarios that frontier AI models could enable or cause. The Preparedness team aims to anticipate these catastrophe scenarios and prevent them before they happen, or mitigate their impact if they do. 

How will the Preparedness team work? 

The Preparedness team will work closely with other teams at OpenAI, such as the Safety team and the Policy team, to ensure that AI models are developed and deployed in a safe and responsible manner. 


Managing the risks of cutting-edge AI 

The team will also collaborate with external partners, such as researchers, policymakers, regulators and civil society groups, to share insights and best practices on AI risk management. The team will conduct various activities to achieve its goals, such as: 

Developing a risk-informed development policy: This policy will outline how OpenAI will handle the risks posed by frontier AI models throughout their lifecycle, from design to deployment. The policy will include protective actions, such as testing, auditing, monitoring and red-teaming of AI models, and governance mechanisms, such as oversight committees, ethical principles and transparency measures. 

Conducting risk studies: The team will conduct research and analysis on the potential risks of frontier AI models using both theoretical and empirical methods. The team will also solicit ideas from the community for risk studies, offering $25,000 in API credits, and potential job opportunities, to the top 10 submissions. 


Developing risk mitigation tools: The team will develop tools and techniques to reduce or eliminate the risks of frontier AI models. These tools could include methods for detecting and preventing malicious use of AI models, methods for verifying and validating the behavior and performance of AI models and methods for controlling and intervening in the actions of AI models.

An AI model could be used to create deepfakes, which are fake videos or audio clips that look and sound real. (Cyberguy.com)


Why is this important? 

The formation of the Preparedness team is an important step for OpenAI and the broader AI community. It shows that OpenAI is taking the potential risks of its own research and innovation seriously and is committed to ensuring that its work aligns with its vision of creating "beneficial artificial intelligence for all." 

It also sets an example for other AI labs and organizations to follow suit and adopt a proactive and precautionary approach to AI risk management. By doing so, they can contribute to building trust and confidence in AI among the public and stakeholders and prevent possible harms or conflicts that could undermine the positive impact of AI.

The Preparedness team and its allies 

The Preparedness team is not alone in this endeavor. There are many other initiatives and groups that are working on similar issues, such as the Partnership on AI, the Center for Human-Compatible AI, the Future of Life Institute, and the Global Catastrophic Risk Institute. These initiatives and groups can benefit from collaborating with each other and sharing their knowledge and resources. 

Kurt’s key takeaways 

OpenAI is taking the potential risks of its own research and innovation seriously and is committed to ensuring that its work aligns with its vision of creating "beneficial artificial intelligence for all." (Cyberguy.com)


AI is a powerful technology that can bring great benefits to us. Yet it also comes with great responsibilities and challenges. We need to be prepared for the potential risks that AI could pose, especially as it becomes more advanced and capable.  

The Preparedness team is a new initiative that aims to do just that. By studying and mitigating the frontier risks of AI models, the team hopes to ensure that AI is used for good and not evil and that it serves the best interests of humanity and the planet. 

How do you feel about the future of AI and its impact on society? Are you concerned about where we are headed with artificial intelligence? Let us know by writing us at Cyberguy.com/Contact

For more of my tech tips & security alerts, subscribe to my free CyberGuy Report Newsletter.

Ask Kurt a question or let us know what stories you'd like us to cover.

We need to be prepared for the potential risks that AI could pose, especially as it becomes more advanced and capable. (Cyberguy.com)


Answers to the most asked CyberGuy questions: 

What is the best way to protect your Mac, Windows, iPhone and Android devices from getting hacked?

What is the best way to stay private, secure and anonymous while browsing the web?

How can I get rid of robocalls with apps and data removal services?

Copyright 2023 CyberGuy.com. All rights reserved. 

Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.

Source: Fox News
