CEOs of OpenAI, Google and Microsoft to join other tech leaders on federal AI safety panel
Washington CNN —
The US government has asked leading artificial intelligence companies for advice on how to use the technology they are creating to defend airlines, utilities and other critical infrastructure, particularly from AI-powered attacks.
The Department of Homeland Security said Friday that the panel it’s creating will include CEOs from some of the world’s largest companies and industries.
The list includes Google chief executive Sundar Pichai, Microsoft chief executive Satya Nadella and OpenAI chief executive Sam Altman, as well as the heads of defense contractor Northrop Grumman and air carrier Delta Air Lines.
The move reflects the US government’s close collaboration with the private sector as it scrambles to address both the risks and benefits of AI in the absence of a targeted national AI law.
The collection of experts will make recommendations to telecommunications companies, pipeline operators, electric utilities and other sectors about how they can “responsibly” use AI, DHS said. The group will also help prepare those sectors for “AI-related disruptions.”
“Artificial intelligence is a transformative technology that can advance our national interests in unprecedented ways,” said DHS Secretary Alejandro Mayorkas, in a release. “At the same time, it presents real risks — risks that we can mitigate by adopting best practices and taking other studied, concrete actions.”
Among the panel’s other participants are the CEOs of technology providers such as Amazon Web Services, IBM and Cisco; chipmakers such as AMD; AI model developers such as Anthropic; and civil rights groups such as the Lawyers’ Committee for Civil Rights Under Law.
It also includes federal, state and local government officials, as well as leading academics in AI such as Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered Artificial Intelligence.
The 22-member AI Safety and Security Board is an outgrowth of a 2023 executive order signed by President Joe Biden, who called for a cross-industry body to make “recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure.”
That same executive order also led this year to government-wide rules regulating how federal agencies can purchase and use AI in their own systems. The US government already uses machine learning or artificial intelligence for more than 200 distinct purposes, such as monitoring volcano activity, tracking wildfires and identifying wildlife from satellite imagery.