OpenAI’s Big Lesson for Science Policy

April 11, 2023 | Samuel Hammond

This piece was originally published at Second Best. The incredible success of Large Language Models like ChatGPT is both a scientific breakthrough and a boon for future scientific discovery. As a recent editorial in Nature explains, "…large language and vision models that can digest the literature will be used to identify gaps in knowledge, help summarize and
Read more >

Artificial Intelligence Could Democratize Government

March 8, 2023 | Luke Hogg

This piece was originally published in Tech Policy Press. From education to media to medicine, the rapid development of artificial intelligence tools has already begun to upend long-established conventions. Our democratic institutions will be no exception. It’s therefore crucial that we think about how to build AI systems in a way that democratically distributes the benefits.  
Read more >

Bots in Congress: The Risks and Benefits of Emerging AI Tools in the Legislative Branch

February 8, 2023 | Zach Graves

This piece was originally published in Tech Policy Press. In the last year, we’ve seen huge improvements in the quality and range of generative AI tools—including voice-to-text applications like OpenAI’s Whisper, text-to-voice generators like Murf, text-to-image models like Midjourney and Stable Diffusion, language models like OpenAI’s ChatGPT and GPT-3, and others. Unlike the clunky AI tools of the past (sorry, Clippy), this suite of
Read more >

Could a National AI Forensics Lab Help Address AI Chip Smuggling?

December 14, 2022 | Deepesh Chaudhari

The Bureau of Industry and Security recently announced new export control rules regarding anti-terrorism and regional stability, which will significantly affect the trade of high-end AI chips. As with any complex regulatory change, there is a risk of unintended consequences and unforeseen challenges in its implementation. The Bureau has therefore rightly encouraged comments and collaborative
Read more >

The Unmasking of Manipulative AI 

November 10, 2022 | Deepesh Chaudhari

The potential for automated influence operations, in which AI systems are designed to manipulate humans, is real and deserves our attention. As AI systems proliferate across media creation and interactive experiences, the opportunities for manipulative persuasion, and the resulting social harms, will increase. In particular, machine learning (ML) progress in text and video generation could dramatically
Read more >

Congress Needs Foresight on Future AI Risks

October 11, 2022 | Deepesh Chaudhari

AI has the potential to significantly improve our lives, but it also poses serious risks. On the one hand, advances in computer systems' ability to execute tasks that traditionally require human intelligence hold unprecedented potential to benefit humanity. AI has already led to medical, transportation, and education
Read more >

The Moral Panic Over Open-Source Generative AI

October 10, 2022 | Ryan Khurana

On September 22, Rep. Anna Eshoo (D-CA) called on the National Security Advisor (NSA) and the Office of Science and Technology Policy (OSTP) to restrict access to open-source generative AI models in response to the release of Stable Diffusion by Stability AI. Stable Diffusion is an open-source text-to-image AI that allows for the creation of
Read more >

NIST’s Artificial Intelligence Framework Should Address Low-Probability, High-Impact Risks

June 27, 2022 | Deepesh Chaudhari

Note: The author previously submitted an anonymous comment to NIST. This blog post summarizes material from that comment. The recent initial draft of the National Institute of Standards and Technology's (NIST) AI Risk Management Framework (AI RMF) provides an important reminder for the future of AI: advances in artificial intelligence have the potential to provide
Read more >

White House AI Principles a Boon for American Innovation

January 17, 2020 | Ryan Khurana

The American AI Initiative, created by the Trump Administration in February 2019, outlined the White House's priorities for making artificial intelligence a pivotal asset in shaping America's future. While the initial plan was criticized for a lack of specifics, on January 7th the Office of Management and Budget released its “Guidance for Regulation of Artificial
Read more >