Recommendations from the scientific community to ensure that the development and use of AI honors scientific norms
In late 2022, OpenAI released ChatGPT, an AI chatbot capable of generating conversational answers and analyses, as well as images, in response to user questions and prompts. This generative AI is built with computational procedures, such as large language models, that train on vast bodies of human-created and curated data, including huge amounts of scientific literature. Since then, the worry that AI may someday outsmart humans has only grown more widespread.
In the past, as society grappled with the implications of new technologies—ranging from nuclear energy to recombinant DNA—the scientific community developed practices designed to increase adherence to the norms that have protected the integrity of each new form of scientific exploration, development, and deployment. In the process, scientists expanded their community’s repertoire of mechanisms designed to advance emerging science and technology while safeguarding the integrity of science and the wellbeing of the nation and its people.
This book provides a historical perspective on and an ethical approach to emerging AI technologies; an overview of AI frameworks and principles; and an assessment of AI’s current advances, hurdles, and potential. The book’s contributors include experts from the fields of behavioral and social sciences, ethics, biology, physics, chemistry, mathematics, and computer science, as well as leaders in higher education, law, governance, and science publishing and communication. Their essays remind us that, even as our understandings of emerging technologies and of their implications evolve, science’s commitment to core norms and values remains steadfast. The volume’s conclusion advocates for following principles of human accountability and responsibility when using artificial intelligence in research, including transparent disclosure and attribution; verification and documentation of AI-generated data and analysis; a focus on ethics and equity; and continuous oversight and public engagement.
Contents
1. Overview and Context
Kathleen Hall Jamieson, Anne-Marie Mazza, and William Kearney
2. The Value and Limits of Statements from the Scientific Community: Human Genome Editing as a Case Study
David Baltimore and Robin Lovell-Badge
3. Science in the Context of AI
Jeannette M. Wing
4. We’ve Been Here Before: Historical Precedents for Managing Artificial Intelligence
Marc Aidinoff and David Kaiser
5. Navigating AI Governance as a Normative Field: Norms, Patterns, and Dynamics
Urs Gasser
6. Challenges to Evaluating Emerging Technologies and the Need for a Justice-Led Approach to Shaping Innovation
Alex John London
7. Bringing Power In: Rethinking Equity Solutions for AI
Shobita Parthasarathy and Jared Katzman
8. Scientific Progress in Artificial Intelligence: History, Status, and Futures
Eric Horvitz and Tom Mitchell
9. Perspectives on AI from Across the Disciplines
David Baltimore, Vinton Cerf, Joseph Francisco, Barbara Grosz, John Hennessy, Eric Horvitz, Kathleen Hall Jamieson, Marcia McNutt, William Press, Saul Perlmutter, Jeannette Wing, and Michael Witherell
10. Protecting Scientific Integrity in an Age of Generative AI
Wolfgang Blau, Vinton Cerf, Juan Enriquez, Joseph S. Francisco, Urs Gasser, Mary L. Gray, Mark Greaves, Barbara J. Grosz, Kathleen Hall Jamieson, Gerald H. Haug, John L. Hennessy, Eric Horvitz, David I. Kaiser, Alex John London, Robin Lovell-Badge, Marcia K. McNutt, Martha Minow, Tom M. Mitchell, Susan Ness, Shobita Parthasarathy, Saul Perlmutter, William H. Press, Jeannette M. Wing, and Michael Witherell
11. Safeguarding the Norms and Values of Science in the Age of Generative AI
Kathleen Hall Jamieson and Marcia McNutt
Appendix 1. List of Retreatants
Appendix 2. Biographies of Framework Authors, Paper Authors, and Editors
Index
From nuclear energy to recombinant DNA, the scientific community developed practices to increase adherence to its norms. This volume explores the state of AI, draws lessons from the ways in which previous technologies were developed and deployed, catalogs efforts to govern AI, and suggests methods for science to harness AI’s potential responsibly.
Chapter 1
Overview and Context
Kathleen Hall Jamieson, Anne-Marie Mazza, and William Kearney
Prior to the advent of ChatGPT, there were several moments when artificial intelligence (AI) captured news headlines. One such instance occurred in 1997 when IBM’s Deep Blue computer won a chess match against the reigning world chess champion. Then, in 2011, IBM’s Watson beat “the best human Jeopardy! player ever.” Six years later, AlphaGo defeated the world’s top player of one of the most complicated board games, Go. A New York Times article on AlphaGo’s victory began with the line, “It isn’t looking good for humanity.”[1]
The Times’s lead sentence about AI outsmarting humans portended the worries that would emerge when the world awoke to the power, promise, and peril of artificial intelligence. That awakening occurred in late 2022 when OpenAI released ChatGPT, an AI chatbot capable of generating conversational answers and analyses, as well as images, in response to user questions and prompts. This generative AI is built with computational procedures, including large language models, that train on vast bodies of human-created and curated data, including huge amounts of scientific literature. It also has the ability to generate novel syntheses and ideas of its own that “push the expected boundaries of automated content creation.”[2]
Generative AI is accelerating breakthrough progress in science, perhaps best highlighted by DeepMind’s AlphaFold, an AI tool that accurately predicts the unique structure of proteins, a process that in the past took many years and hundreds of thousands of dollars to accomplish. At the same time, generative AI is raising concerns about how its use in research may undermine core norms and values of science, including accountability, transparency, replicability, and human responsibility. In addition, generative AI is still plagued on occasion by nonsensical or inaccurate output, known as hallucinations. There also is a risk that the output can be biased and could reinforce long-standing injustices, inequalities, and inequities in society. Generative AI may also be used to further the proliferation of misinformation and disinformation.
To remark, as technology experts have, that AI is “evolving at a very rapid pace” is an understatement. The sudden advances in artificial intelligence, and generative AI in particular, with new versions of chatbots and other AI tools being unveiled every few months, are putting increased pressure on the scientific community and policymakers to monitor the advances and consider their implications for research and society at large, and not just in the short term. As the 2022 National Academies of Sciences, Engineering, and Medicine report Fostering Responsible Computing Research reminds us, “The concerns at the beginning of a technology’s developmental lifecycle are not the same as the ones that surface after wide-scale deployment.”[3]
Responding to the rapid development and deployment of artificial intelligence and generative AI models, and to the growing need for thoughtful consideration of their implications for the scientific community, National Academy of Sciences (NAS) President Marcia McNutt, Annenberg Public Policy Center (APPC) Director and Sunnylands Program Director Kathleen Hall Jamieson, and Sunnylands President David Lane invited just over two dozen experts in summer 2023 to a two-day virtual retreat (November 29–30, 2023), followed by an in-person retreat at Sunnylands in Rancho Mirage, California (February 8–10, 2024), to consider the governance of AI and its rapid diffusion throughout society and, in particular, across the scientific research enterprise. Background papers—which form the core of this book—on topics such as the evolution and current governance of AI, how the scientific community responded to past technological breakthroughs, and the societal implications, including effects on equity, of AI and other emerging technologies were commissioned to inform these deliberations.[4] (A list of participants in one or both of the convenings, including the authors of the background papers included in this volume, can be found in Appendix 1.)
Since 2015, the Annenberg Foundation Trust at Sunnylands, the APPC, and the NAS have partnered to fulfill Sunnylands’s mission to host “meetings of leaders and specialists in the major medical and scientific associations and institutions for the purpose of promoting and facilitating the exchange of ideas . . . to make advancements . . . for the common good and the public interest.”[5] Joined occasionally by the National Academy of Medicine, these partners have convened retreats at which leaders in science, academia, business, medical ethics, the judiciary and the bar, government, and scientific publishing identified ways to protect the integrity of science;[6] increase the transparency of authors’ contributions to scholarly publications;[7] articulate the principles that should guide scientific practice to ensure that science works at the frontiers of human knowledge in an ethical way; and protect the courts from inadvertent as well as deliberate misstatements about scientific knowledge. Plans for creation of the National Academy of Sciences’ Strategic Council for Research Excellence, Integrity, and Trust were birthed at an NAS-APPC-Sunnylands retreat,[8] as were recommendations to protect the integrity of survey research.[9]
In the past, as society grappled with the implications of technologies ranging from nuclear energy to recombinant DNA, CRISPR-Cas 9 gene editing, dual use research of concern, and neural organoids and chimeras, the scientific community often developed practices designed to increase adherence to the norms that have protected the integrity of each new form of scientific exploration, development, and deployment. In the process, scientists expanded their community’s repertoire of mechanisms designed to advance emerging science and technology while safeguarding the integrity of science and the well-being of the nation and its people.
The 1975 Asilomar Conference on Recombinant DNA, which led to the development of an NIH-approved biosafety framework, confirmed the importance of transparency and self-regulation among scientists involved in gene-splicing technology. The Belmont Report (1979), developed by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, set respect for persons, beneficence, and justice as core ethical principles for scientists involved in human subjects research in biomedicine and led to the establishment at research institutions of Institutional Review Board (IRB) processes and checks and balances based on those principles. In the process, it added the concepts of informed consent and the assessment of risks and benefits to the vocabulary of researchers.
Such past efforts remind us, as do the essays in this volume, that even as our understandings of emerging technologies and of their implications evolve, science’s commitment to core norms and values must remain steadfast. These reports also remind us that ethical, equitable, accountable, transparent science is the by-product of a vigilant scientific community that proactively engages the public.
Tasked with both exploring emerging challenges posed by the use of AI in research and charting a path forward for the scientific community, participants in the AI retreats included experts in behavioral and social sciences, ethics, biology, physics, chemistry, mathematics, and computer science, as well as leaders in higher education, law, governance, and science publishing and communication. Included in their ranks were three Nobel laureates and fourteen members of the National Academy of Sciences, the National Academy of Engineering, or the National Academy of Medicine.[10]
In fashioning their work, the NAS-APPC-Sunnylands retreatants drew on the lessons learned from earlier workshops, reports, and consensus statements from the National Academies of Sciences, Engineering, and Medicine, including Fostering Integrity in Research (2017),[11] Reproducibility and Replicability in Science (2019),[12] Fostering Responsible Computing Research: Foundations and Practices (2022),[13] Automated Research Workflows for Accelerated Discovery (2022),[14] a National Academies AI for Scientific Discovery Workshop (October 12–13, 2023),[15] and the National Academy of Medicine’s Generative AI and LLMs in Health and Medicine Workshop (October 25, 2023).[16]
The retreatants’ deliberations were informed as well by the commissioned background papers and by presentations from Nobel Laureates Harold Varmus, Lewis Thomas University Professor of Medicine at Weill Cornell Medical College, and David Baltimore, Distinguished Professor of Biology at Caltech, about efforts by the scientific community to deal with the challenges posed by potential pandemic pathogens and emergent technologies such as human genome editing. Additionally, Baltimore and Robin Lovell-Badge, head of the Laboratory of Stem Cell Biology and Developmental Genetics at the Francis Crick Institute in London, discussed the processes that led to the three International Summits on Human Genome Editing. Those convenings created consensus statements establishing processes and ethical principles to guide research and the use of human genome editing techniques, engage the public, and protect future generations against negative consequences. A digest of insights from Baltimore and Lovell-Badge forms Chapter 2 of the book. Chapter 11, “Safeguarding the Norms and Values of Science in the Age of Generative AI,” by conveners Kathleen Hall Jamieson and Marcia McNutt, explores the guiding norms and values of science at issue in the working group’s call for the scientific community to protect scientific integrity in the age of generative AI by remaining “steadfast in honoring the guiding norms and values of science.”
The letter inviting participants to the two-stage retreats provisionally adopted the Association for the Advancement of Artificial Intelligence (AAAI) definition of artificial intelligence as “the mechanisms underlying thought and intelligent behavior and their embodiment in machines.” The invitational letter also forecast that the retreatants’ deliberations would “build from and contribute to the revision of draft commissioned papers that will provide: 1) a historical perspective on how society has prepared and managed emerging transformative technologies; 2) philosophical/ethical lenses used to analyze and evaluate emerging technologies; 3) an overview of recently proposed AI frameworks, laws, principles, and guidelines; 4) equity and inclusion issues associated with AI; 5) an assessment of the current state of scientific/technical advances in AI, hurdles and potential, and concerns its capacities raise; and 6) challenges and opportunities associated with creating and analyzing large data sets.” (Brief biographical statements on authors whose work is included in this book can be found in Appendix 2. See also Figure 1.1.)
Expanding on the AAAI definition, the retreatants presupposed with Eric Horvitz, Chief Scientific Officer of Microsoft, and Tom Mitchell, Founders University Professor at Carnegie Mellon University (see Chapter 8) that “Artificial Intelligence (AI) refers to a field of endeavor as well as a constellation of technologies,” a notion consistent with the one set forth in 15 U.S.C. 9401(3).[17]
With this in mind, the Sunnylands Statement (see Chapter 10) that emerged from the AI retreats built upon the understanding that “generative AI systems are constructed with computational procedures that learn from large bodies of human-authored and curated text, imagery, and analyses, including expansive collections of scientific literature. The systems are used to perform multiple operations, such as problem-solving, data analysis, interpretation of textual and visual content, and the generation of text, images, and other forms of data. In response to prompts and other directives, the systems can provide users with coherent text, compelling imagery, and analyses, while also possessing the capability to generate novel syntheses and ideas that push the expected boundaries of automated content creation.”
As a means of “understanding the opportunities and risks associated with AI today,” in Chapter 4, “We’ve Been Here Before: Historical Precedents for Managing Artificial Intelligence,” Marc Aidinoff, Research Associate at the Institute for Advanced Study, and David Kaiser, Germeshausen Professor of the History of Science at the Massachusetts Institute of Technology, consider the ways in which the scientific community dealt with three historical episodes: “the early nuclear-weapons complex during the 1940s and 1950s; biotechnology, biomedicine, and the implementation of various safeguards in the 1970s; and the adoption and oversight of forensic technologies within the US legal and criminal-justice systems over the course of the past century.” In their digest in Issues in Science and Technology, they argue that “artificial intelligence needs ongoing and meaningful democratic oversight,” which can be informed by understanding these historical episodes.[18]
In Chapter 5, “Navigating AI Governance as a Normative Field: Norms, Patterns, and Dynamics,” Urs Gasser, Professor of Public Policy, Governance, and Innovative Technology at the Technical University of Munich, addresses the “rapidly evolving and complex ecosystem” that surrounds AI and identifies a variety of tools available to decision-makers as they “seek to anticipate, analyze, and address harms and risks associated with the accelerating pace of AI development, deployment, and use while harnessing its potential for human, society, and the planet at large.” These tools include both ethical and technical standards. In his digest in Issues in Science and Technology, Gasser calls for AI governance that “leaves space for development and learning,” prioritizes interoperability, and invests in implementation capacity.[19]
Alex John London, K&L Gates Professor of Ethics and Computational Technologies at Carnegie Mellon University, then makes the case for a justice-led framework when evaluating innovations such as generative AI in Chapter 6, “Challenges to Evaluating Emerging Technologies and the Need for a Justice-Led Approach to Shaping Innovation.” A justice-led focus, he argues, is better able to identify and evaluate “(a) quintessentially social or higher-order effects (such as network-level or institutional level effects), (b) the role of a larger number of stakeholders who shape the innovation ecosystem in more indirect ways, and (c) some of the positive ethical claims of individuals that are relevant to evaluating innovation.” In his digest in Issues in Science and Technology, London argues that a justice-led framework will promote “social arrangements that better secure people’s freedom in the face of technological change.”[20]
In Chapter 7, “Bringing Power In: Rethinking Equity Solutions for AI,” Shobita Parthasarathy, Professor of Public Policy and Women’s and Gender Studies at the University of Michigan, and Jared Katzman, a PhD student at the University of Michigan School of Information, draw our attention to growing concerns that AI is “exacerbating social inequity and injustice.” Their essay explores the responses of “policymakers, academics, and the technical community,” including the Blueprint for an AI Bill of Rights proposed by the Biden administration. That document “recommends identifying statistical biases in datasets, designing systems to be more transparent and explainable in their decision-making, incorporating proactive equity assessments into system design, including input from diverse viewpoints and identities, ensuring accessibility for people with disabilities, pre-deployment and ongoing disparity testing and mitigation, and clear oversight.”[21] They argue that many such initiatives fall short because they fail to address “social inequalities that shape the landscape of technology development, use, and governance, including the concentration of economic and political power in a handful of technology companies and the systematic devaluation of lay contributions and perspectives, especially from those who have been historically marginalized.” Instead, as they argue in Issues in Science and Technology, AI regulators ought to “seek out partnerships with marginalized communities” in order to understand “power imbalances at the root of concerns surrounding AI bias and discrimination.”
In Chapter 8, “Scientific Progress in Artificial Intelligence: History, Status, and Futures,” Horvitz and Mitchell synthesize AI’s decades-long journey of “innovation with empirical studies and prototypes, the development of theoretical principles, and shifts among paradigms.” In the process, they provide a lens for understanding “the technical evolution of different approaches to representing and reasoning with data and knowledge”; the machine learning foundations of today’s AI; discriminative and generative models; supervised, unsupervised, and self-supervised learning; and the inflection point for AI occasioned by deep learning. They also define key concepts and research directions before looking to a second inflection point, generative AI, and charting its research directions, trends, and key opportunities for applications of discriminative and generative AI.
As a complement to these efforts, two members of the AI working group, Michael Witherell, Director of the Lawrence Berkeley National Laboratory, and William Press, the Leslie Surginer Professor of Computer Science and Integrative Biology at the University of Texas at Austin, planned an April 27 symposium for the 2024 annual meeting of the National Academy of Sciences. The symposium was moderated by working group member Jeannette Wing, Executive Vice President for Research and Professor of Computer Science at Columbia University, whose presentation there is the basis for Chapter 3, “Science in the Context of AI” (Figures 1.2 and 1.3).
“Much of the conversation we hear today about Artificial Intelligence (AI) focuses on fears concerning loss of privacy, lack of transparency and accountability, increase in inequality, and other social and economic issues,” noted the symposium planners William Press and Michael Witherell. “The widespread availability of generative AI is fueling much of this debate. However, AI is more than just large language models, and in fact versions of AI have been fueling scientific discovery and exploration for several decades now.”[22]
Titled “AI and Scientific Discovery,” the symposium provided “an opportunity to hear from speakers at the forefront of developing AI to advance research by automating workflows, finding patterns in large and complex data sets, mitigating human bias, improving models, speeding up tedious tasks, and exploring domains inhospitable to humans.”
Joining Wing in exploring both the promise of and various possible futures for AI-assisted research were four panelists:
● Pushmeet Kohli, Vice President of Research at Google DeepMind
● Daphne Koller, Founder and CEO of Insitro
● Michael Pritchard, Director of Climate Simulation Research at NVIDIA and Professor at the University of California, Irvine
● Jennifer Listgarten, Professor of Computer Science at the University of California, Berkeley
To provide a snapshot of the ways in which AI was affecting science, members of the Sunnylands working group who belong to the National Academy of Sciences, the National Academy of Engineering, or the National Academy of Medicine shared, at both the virtual and in-person retreats, their thoughts on the ways in which AI was affecting or might affect their work. As a means of preserving a sense of the ways in which AI was transforming scientific research in the months in which the retreatants were fashioning the calls for action found in their PNAS editorial, we include a digest of their thoughts in Chapter 9, “Perspectives on AI from Across the Disciplines.”
The working group’s editorial statement “Protecting Scientific Integrity in an Age of Generative AI” was published in the Proceedings of the National Academy of Sciences (PNAS) on May 21, 2024 and is included as Chapter 10 in this volume.[23] The editorial emphasizes that advances in generative AI represent a transformative moment for science—one that will accelerate scientific discovery but also challenge core norms and values of science, such as accountability, transparency, replicability, and human responsibility. “We welcome the advances that AI is driving across scientific disciplines, but we also need to be vigilant about upholding long-held scientific norms and values,” said National Academy of Sciences President Marcia McNutt, one of the coauthors of the editorial. “We hope our paper will prompt reflection among researchers and set the stage for concerted efforts to protect the integrity of science as generative AI increasingly is used in the course of research.”[24]
Urging the scientific community to follow five principles of human accountability and responsibility when using artificial intelligence in research, the editorial advocated transparent disclosure and attribution; verification of AI-generated content and analyses; documentation of AI-generated data; a focus on ethics and equity; and continuous monitoring, oversight, and public engagement.
Its twenty-four authors also called on the National Academy of Sciences to establish a Strategic Council on the Responsible Use of AI in Science to provide ongoing guidance and oversight on responsibilities and best practices as the technology evolves. The proposed strategic council should be established by the National Academies of Sciences, Engineering, and Medicine, the authors recommended, and should coordinate with the scientific community and provide regularly updated guidance on the appropriate uses of AI. The council should study, monitor, and address the evolving use of AI in science; new ethical and societal concerns, including equity; and emerging threats to scientific norms. It should also share its insights across disciplines and develop and refine best practices.
This edited volume capsulizes the discussions that shaped the statement “Protecting Scientific Integrity in an Age of Generative AI” and provides a snapshot both of the state of AI science in Spring 2024 and of the efforts by leaders of the scientific community to ensure that the use of AI in research is pursued in a responsible manner. We hope it will provide a foundation for consideration of this fast-moving and transformative technology.
Notes
[1] Paul Mozur, “Google’s AlphaGo Defeats Chinese Go Master in Win for A.I.,” New York Times, May 23, 2017, https://www.nytimes.com/2017/05/23/business/google-deepmind-alphago-go-champion-defeat.html.
[2] Mozur, “Google’s AlphaGo Defeats Chinese Go Master.”
[3] Fostering Responsible Computing Research, National Academies Press eBooks (2022), 41, https://doi.org/10.17226/26507.
[4] Chapter 4: “We’ve Been Here Before: Historical Precedents for Managing Artificial Intelligence,” by Marc Aidinoff and David Kaiser; Chapter 5: “Navigating AI Governance as a Normative Field: Norms, Patterns, and Dynamics,” by Urs Gasser; Chapter 6: “Challenges to Evaluating Emerging Technologies and the Need for a Justice-Led Approach to Shaping Innovation,” by Alex John London; Chapter 7: “Bringing Power In: Rethinking Equity Solutions for AI,” by Shobita Parthasarathy and Jared Katzman; Chapter 8: “Scientific Progress in Artificial Intelligence: History, Status, and Futures,” by Eric Horvitz and Tom Mitchell.
[5] Annenberg Public Policy Center, “National Academies, Sunnylands, and APPC Host Retreats on Protecting the Integrity of Science,” (August 7, 2020), https://www.annenbergpublicpolicycenter.org/nas-appc-sunnylands-retreats-integrity-science/.
[6] Bruce Alberts et al., “Self-Correction in Science at Work,” Science 348, no. 6242 (June 26, 2015): 1420–1422, https://doi.org/10.1126/science.aab3847.
[7] Marcia K. McNutt et al., “Transparency in Authors’ Contributions and Responsibilities to Promote Integrity in Scientific Publication,” Proceedings of the National Academy of Sciences 115, no. 11 (February 27, 2018): 2557–2560, https://doi.org/10.1073/pnas.1715374115.
National Academies of Sciences, Engineering, and Medicine, “New Strategic Council for Research Excellence, Integrity, and Trust Established by National Academy of Sciences to Support the Health of the Research Enterprise,” National Academies (July 13, 2021), https://www.nationalacademies.org/news/2021/07/new-strategic-council-for-research-excellence-integrity-and-trust-established-by-national-academy-of-sciences-to-support-the-health-of-the-research-enterprise.
[9] Kathleen Hall Jamieson et al., “Protecting the Integrity of Survey Research,” PNAS Nexus 2, no. 3 (March 1, 2023), https://doi.org/10.1093/pnasnexus/pgad049.
[10] Some individuals represent several of the Academies.
[11] Fostering Integrity in Research, National Academies Press eBooks (2017), https://doi.org/10.17226/21896.
[12] Reproducibility and Replicability in Science, National Academies Press eBooks (2019), https://doi.org/10.17226/25303.
[13] Fostering Responsible Computing Research.
[14] Automated Research Workflows for Accelerated Discovery, National Academies Press eBooks (2022), https://doi.org/10.17226/26532.
AI for Scientific Discovery, National Academies Press eBooks (2024), https://doi.org/10.17226/27457.
National Academy of Medicine, Generative AI & LLMs in Health & Medicine (October 25, 2023), https://nam.edu/event/generative-ai-llms-in-health-medicine/.
This definition of “artificial intelligence” or “AI” is set forth in 15 U.S.C. 9401(3): “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.” See Managing Misuse Risk for Dual-Use Foundation Models, U.S. AI Safety Institute, https://doi.org/10.6028/NIST.AI.800-1.ipd.
[18] Digests of the commissioned papers were published in Issues in Science and Technology. See Marc Aidinoff and David Kaiser, “Novel Technologies and the Choices We Make: Historical Precedents for Managing Artificial Intelligence,” Issues in Science and Technology, May 21, 2024, https://doi.org/10.58875/buxb2813; Urs Gasser, “Governing AI with Intelligence,” Issues in Science and Technology, May 21, 2024, https://doi.org/10.58875/awjg1236; Alex John London, “A Justice-Led Approach to AI Innovation,” Issues in Science and Technology, May 21, 2024, https://doi.org/10.58875/knrz2697; Shobita Parthasarathy and Jared Katzman, “Bringing Communities in, Achieving AI for All,” Issues in Science and Technology, May 21, 2024, https://doi.org/10.58875/slrg2529.
[19] Gasser, “Governing AI with Intelligence.”
[20] London, “A Justice-Led Approach to AI Innovation.”
[21] Office of Science and Technology Policy, Blueprint for an AI Bill of Rights, The White House, October 2022, https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
National Academy of Sciences (@theNASciences), “NAS 161st Annual Meeting—Symposium AI and Scientific Discovery,” YouTube video (May 24, 2024), https://www.youtube.com/watch?v=G43Em6ELaiE.
[23] Wolfgang Blau et al., “Protecting Scientific Integrity in an Age of Generative AI,” Proceedings of the National Academy of Sciences 121, no. 22 (May 21, 2024), https://doi.org/10.1073/pnas.2407886121.
National Academies of Sciences, Engineering, and Medicine, “Human Accountability and Responsibility Needed to Protect Scientific Integrity in an Age of AI, Says New Editorial,” National Academies (May 21, 2024), https://www.nationalacademies.org/news/2024/05/human-accountability-and-responsibility-needed-to-protect-scientific-integrity-in-an-age-of-ai-says-new-editorial.
Product details
ISBN: 9781512827484
Published: 2024-11-26
Publisher: University of Pennsylvania Press
Height: 229 mm
Width: 152 mm
Audience level: P, 06
Language: English
Format: Paperback
Biographical note
Kathleen Hall Jamieson is the Elizabeth Ware Packard Professor of Communications and Director of the Annenberg Public Policy Center at the University of Pennsylvania.
William Kearney is the Executive Director of the Office of News and Public Information and Editor of Issues in Science and Technology at the National Academies of Sciences, Engineering, and Medicine in Washington, DC.
Anne-Marie Mazza is the Senior Director of the Committee on Science, Technology, and Law and Senior Advisor of the Policy and Global Affairs Division at the National Academies of Sciences, Engineering, and Medicine in Washington, DC.