2021 Annual Report

2021 has been a good year for GCRI. Our productivity is up relative to previous years, boosted by a growing team and rich network of outside collaborators. Our work over the past year is broadly consistent with the plans we outlined one year ago. We have adjusted well to the new realities of the COVID-19 pandemic, aided by the fact that GCRI was designed for remote collaboration from the start. Because of the pandemic, there is a sense in which no one has had a truly great year, and we at GCRI have certainly not been immune to its many harms and disruptions. However, despite the circumstances, we find ourselves in a relatively favorable position.

2022 promises to be even better. We are poised to build on our recent successes on a variety of fronts, including research, outreach, and community building. Indeed, the growth of our community building activities is injecting energy into our research and outreach in ways that will carry over into the new year. 2022 has the potential to be a very productive year for GCRI’s efforts to address global catastrophic risk, especially if our ongoing fundraising is successful.

And yet, the moment is bittersweet. Longtime GCRI team member Robert de Neufville will be leaving GCRI in December to pursue other activities. He has been a central contributor to GCRI since the early days of the organization and currently serves as GCRI’s Director of Communications. GCRI is well-positioned to carry on in his absence, but he will be greatly missed nonetheless. We wish him the best in his future endeavors.

This post summarizes what GCRI accomplished in 2021, what we plan to do in 2022, and the funding we are seeking to execute these plans. GCRI posted similar year-end summaries in 2018, 2019, and 2020.

Jump to: 
2021 Accomplishments:
* Research
* Outreach
* Community Support
* Organization Development

2022 Plans:
* Research
* Outreach
* Community Support
* Organization Development

Fundraising
Conclusion

2021 Accomplishments

GCRI made substantial progress this year in each of our primary focus areas: research, outreach, community support, and organization development. 

One year ago, we published plans for our work in 2021. To a large extent, our actual work has been consistent with these plans. The details have inevitably varied due to ordinary changes in circumstances, including which specific projects we have received funding for, but the overall picture has been consistent. This demonstrates our ability to plan and execute work, and likewise lends some confidence that our 2022 plans will be followed.

Research

Over the past year, GCRI’s research has concentrated heavily on AI, thanks to funding from Gordon Irlam. Our research has also been highly collaborative, driven by growth in the GCRI team and an expanded Advising and Collaboration Program. All of the publications listed below are on AI, and all have multiple authors, many of whom are from outside of GCRI. A recent post describes GCRI’s approach to collaborative publishing and documents our publications with outside collaborators.

GCRI has nine new publications to report this year:

2020 survey of artificial general intelligence projects for ethics, risk, and policy, a technical report published by GCRI. This paper surveys the landscape of projects that are trying to build artificial general intelligence (AGI). It updates a previous survey published by GCRI in 2017. The 2020 survey features an improved methodology, giving it more comprehensive coverage than the 2017 survey. Compared to the 2017 survey, the 2020 survey finds a larger number of AGI projects, with a shift in trends toward projects that are smaller, more geographically diverse, less open source, more focused on humanitarian goals, less focused on intellectual goals, and more concentrated in private corporations.

AI certification: Advancing ethical practice by reducing information asymmetries, in IEEE Transactions on Technology and Society. This paper presents the first-ever research study of the use of certification for AI governance. Certification serves to convey to outside parties that an entity has met some sort of performance standard. For AI, certification can be used for the performance of AI systems, the groups that develop the systems, individual personnel, and more. The paper surveys the landscape of current AI certification, finding seven active and proposed programs. The paper draws on prior research on certification from outside the context of AI to discuss how AI certification can succeed, including certification for future AI technologies such as advanced human-level AI systems.

Moral consideration of nonhumans in the ethics of artificial intelligence, in AI & Ethics. This paper presents foundational analysis of the role of nonhumans within AI ethics. The paper outlines ethical theory about nonhumans, documents the state of attention to nonhumans in AI ethics, presents an argument for greater attention to nonhumans, and discusses implications for AI ethics. The paper finds that existing work on AI ethics has been heavily human-centric, such as in concepts like “AI for people” and “human-compatible AI”. The paper argues that other, less human-centric concepts should be used instead. It further argues that failure to appropriately consider nonhumans in AI ethics could have catastrophic consequences, such as in the design of advanced AGI systems.

Collective action on artificial intelligence: A primer and review, in Technology in Society. This paper describes how multiple people, groups, or other actors can work together to improve AI outcomes. The paper provides a primer on collective action theory as it relates to AI, which is of value because prior work on AI collective action has not been well grounded in the broader social science literature on collective action. The paper then reviews prior work on AI collective action, with emphasis on dangerous AI race scenarios and potential solutions to AI collective action problems. Overall, the paper serves to advance the interdisciplinary study of AI collective action by providing a foundational resource for researchers with backgrounds in AI, social science, and other relevant fields.

Corporate governance of artificial intelligence in the public interest, in Information. This paper evaluates opportunities to constructively influence how corporations develop and deploy AI systems. The paper considers opportunities available to a wide range of actors both inside and outside the corporation, including management, workers, investors, corporate partners, industry consortia, nonprofit organizations, the public, the media, and governments. As the paper explains, the best results often come when multiple types of actors work together. Corporations are among the most important institutions for AI development and deployment, so it is vital to ensure that their activities are in the public interest. This paper provides a blueprint for how to do just that.

Artificial intelligence, systemic risks, and sustainability, in Technology in Society. This paper, led by Victor Galaz of the Stockholm Resilience Centre in conjunction with the Princeton University Global Systemic Risk group, studies the role of AI in the closely related domains of systemic risk and sustainability. AI is increasingly used in sectors like agriculture, in which networks of environmental sensors produce “big data” that the AI crunches in order to optimize things like water and fertilizer use. This can have environmental benefits, but it also creates new risks such as the risk of mismatch between AI training data and local conditions, the concentration of environmental analysis in a small number of AI-driven corporations, and the possibility of large-scale cyberattack. These risks heighten the possibility of large-scale systemic failure in agriculture and other environmental sectors.

The case for long-term corporate governance of AI, in Effective Altruism Forum. This blog post explains why corporate governance should be an important focus of activity for people concerned about AI and the long-term future. As the post explains, corporations play major roles in the types of AI that can affect the long-term future, including advanced AGI. Furthermore, there has been little prior work on AI corporate governance as it relates to the long-term future, especially compared to the amount of work on other topics such as policy. Finally, there are a number of worthwhile activities that people can do to advance the long-term corporate governance of AI, such as cultivating expertise, generating ideas, and evaluating priorities.

Artificial intelligence needs environmental ethics, in Ethics, Policy, & Environment. This short comment paper calls for greater contributions to AI from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make. First, environmental ethicists can raise the profile of the environmental dimensions of AI, such as the energy consumption of AI systems. Second, environmental ethicists can help analyze novel ethical situations involving AI, such as the potential creation of artificial life and ecosystems. Third, environmental ethicists can provide valuable perspectives on the future-orientation of certain AI issues, such as in debates over near-term vs. long-term AI. As the paper explains, AI is an interdisciplinary topic that benefits from a variety of contributions. Environmental ethics can make some especially important contributions.

The ethics of sustainability for artificial intelligence, in Proceedings of AI for People: Towards Sustainable AI, CAIP’21. This paper studies the ethical dimensions of the concept of sustainability as it relates to AI. Sustainability is commonly associated with environmental issues, but the concept is more general. The paper surveys prior work on AI and sustainability, finding that most of it uses common conceptions of environmental sustainability, while some concerns the sustainability of AI systems and other entities. The paper presents ethical arguments for sustaining both humans and nonhumans over long time scales. It further distinguishes between sustainability and optimization, and argues that optimization should be pursued wherever the two concepts diverge. Finally, it relates these arguments to ongoing work on AI, calling for emphasis on AI to reduce global catastrophic risk and on long-term forms of AI. The paper therefore serves to relate ideas on global catastrophic risk and the long-term future to active initiatives on AI and sustainability.

Outreach

GCRI’s outreach work over the past year has included a variety of activities. We have participated in two ongoing AI governance initiatives: one at the US National Institute of Standards and Technology (NIST) and one at the Institute of Electrical and Electronics Engineers (IEEE). Additionally, we have worked to address two major issues that impede outreach on global catastrophic risk: political partisanship and the potential downsides of raising awareness about AGI. Finally, we have continued to present our work to audiences via remote/online events, with in-person events largely curtailed by the ongoing COVID-19 pandemic.

NIST is currently developing an AI Risk Management Framework. Thanks to funding from Gordon Irlam, GCRI Director of Research Tony Barrett began outreach to the NIST framework development process at the beginning of 2021. Dr. Barrett’s initial work was successful, resulting in additional funding to continue it. He is continuing this work as a Non-Resident Research Fellow with the AI Security Initiative of the Center for Long-Term Cybersecurity at UC Berkeley and as a Senior Policy Analyst at the Berkeley Existential Risk Initiative (BERI). His work is oriented toward encouraging the NIST framework to appropriately address catastrophic risks and related topics. We thank Jared Brown of the Future of Life Institute for his instrumental role in facilitating this work.

The IEEE Standards Association is currently working on standard P2863, Recommended Practice for Organizational Governance of Artificial Intelligence. GCRI Executive Director Seth Baum and Research Associate Andrea Owe are both members of the expert working group tasked with formulating the standard. Their work is oriented toward encouraging the IEEE standard to appropriately address catastrophic risks, nonhumans, and related topics.

Political partisanship poses a significant dilemma for outreach on global catastrophic risk. On one hand, differences in the policy positions of political parties can have implications for global catastrophic risk, just as they can (and arguably should) have implications for any major public issue. On the other hand, involvement in partisan politics can make outreach less effective, and outside analysts are often expected to be nonpartisan. Indeed, GCRI explicitly identifies as a nonprofit, nonpartisan think tank. Prompted by recent high-stakes political turmoil in the US, we worked to clarify appropriate roles for people and groups working on global catastrophic risk in the context of political partisanship. The GCRI statement on the January 6 US Capitol Insurrection is a public-facing output of this work. We concurrently held many private conversations to clarify our own thinking and help others in the field of global catastrophic risk with their outreach. In short, we have found that nonpartisanship remains important, and that topics related to partisan politics can be addressed in a thoughtful and respectful fashion. We further emphasize that concern about global catastrophic risk is consistent with a wide range of political views, and we recognize that people and groups with a wide range of political views have made important contributions to addressing global catastrophic risk.

AGI also poses a dilemma for outreach. On one hand, raising awareness about AGI can improve understanding of the risks it poses and can motivate action to reduce those risks. On the other hand, raising awareness about AGI could, in some cases, motivate action that would increase the risks, in particular by prompting some groups to try to build AGI without adequate safety and ethics standards. This dilemma may be especially pronounced in the context of policy outreach, due to governments’ dual roles as developers and regulators of AI technology. GCRI and our colleagues are increasingly active in AI policy outreach. Therefore, GCRI has worked to clarify when and how to discuss AGI with policy audiences. We conducted a series of interviews with knowledgeable individuals to obtain a variety of perspectives on this issue. We have found the issue to be highly complex, touching on a range of considerations including expectations about future AGI scenarios and beliefs about the general tendencies of governments. We have not yet adequately resolved this issue and for that reason have not yet produced any public-facing outputs on it. We expect our work on this issue to continue in 2022.

GCRI has eight new presentations to report this year:

The ethics of sustainability for artificial intelligence, a remote talk given by Andrea Owe at AI for People: Towards Sustainable AI, CAIP’21.

Environmentalism in space: From the present to the long-term future, a remote talk given by Andrea Owe to The Open University as part of a series of events on The Borders of Astrobiology.

AI governance and global catastrophic risk, a remote talk given by Seth Baum to Effective Altruism New York City.

Setting the stage for future AI governance, a remote talk given by Seth Baum to the Legal Priorities Project.

Setting the stage for future AI governance, a remote talk given by Seth Baum at the Center for Human-Compatible Artificial Intelligence (CHAI).

Philosophy and ethics of space exploration, a remote talk given by Andrea Owe to Harvard’s Berkman Klein Center as part of the center’s spring 2021 “research sprint” on digital self-determination.

Risk analysis and artificial general intelligence, a remote seminar given by Seth Baum at the 17th Meeting of the Japanese Society for Artificial Intelligence Special Interest Group on Artificial General Intelligence.

The Biden administration and the long-term future, an online panel discussion including Seth Baum for Effective Altruism DC.

Community Support

Supporting the broader professional community working on global catastrophic risk is a major priority for GCRI. We recognize that the risks are large and complex and that efforts to address them likewise benefit from a wide range of approaches. Throughout 2021, we have done a lot to support the global catastrophic risk community.

Every year since 2019, GCRI has run an Advising and Collaboration Program that welcomes inquiries from anyone interested in studying or working on global catastrophic risk. The program is the primary way for new people to connect with GCRI and get involved in our activities. For many people, it is also one of their first opportunities to connect with the professional global catastrophic risk community. We are proud to run an open and inclusive program. 

The 2021 Advising and Collaboration Program was a great success. It was our largest yet. We received 96 inquiries in response to our open call, which is 25% more than any previous year. We were able to speak with 81 respondents, which is over twice as many as any previous year. We made 44 professional network introductions, which is 60% more than any previous year. Participants spanned all career points, from undergraduates to senior professionals. They also had a wide range of academic and professional backgrounds and hailed from 24 countries around the world. Six participants gave testimonials about their experience. Drawing on his experience advising people in the Advising and Collaboration Program, Seth Baum has written a post, Common points of advice for students and early-career professionals interested in global catastrophic risk.

Many of the participants in the 2021 Advising and Collaboration Program have collaborated with GCRI on various projects. This includes four participants who are presenting research at the upcoming Society for Risk Analysis 2021 Annual Meeting. We anticipate that these collaborations will produce, among other things, multiple co-authored research papers. Some co-authors will be early-career researchers who are gaining valuable research experience through their collaborations. This is part of GCRI’s ongoing commitment to collaborative publishing.

This year, GCRI launched a Fellowship Program to recognize people who have made excellent contributions to addressing global catastrophic risk in collaboration with GCRI. We launched the program due to the large number of outstanding collaborators we have had this year. There are twelve 2021 GCRI Fellows, eight of whom participated in the 2021 Advising and Collaboration Program; two others participated in the program in previous years.

Finally, GCRI has also supported other organizations that are supporting the global catastrophic risk community. McKenna Fitzgerald has served as a mentor for people interested in global catastrophic risk through Women and Non-Binary Altruism Mentorship (WANBAM). Additionally, GCRI has tapped our professional networks to help several organizations find mentors for their fellowship programs: the Stanford Existential Risks Initiative (SERI), the Cambridge Existential Risks Initiative (CERI), the Swiss Existential Risk Initiative (CHERI), and the Legal Priorities Project (LPP). These organizations are doing tremendous work to support the global catastrophic risk community, and we are honored to be able to support them.

Organization Development

2021 has brought significant changes to the GCRI team. As announced earlier this year, GCRI hired Andrea Owe to the position of Research Associate. Additionally, Director of Communications Robert de Neufville is leaving GCRI, and McKenna Fitzgerald is being promoted from the position of Project Manager and Research Assistant to the position of Deputy Director, making her part of the GCRI Leadership Team. Finally, our network of collaborators has expanded considerably, highlighted by our inaugural 2021 GCRI Fellows as described above.

Andrea Owe is an ethicist and futurist working with GCRI on ethical dimensions of global catastrophic risk. She first collaborated with GCRI in 2018 on an internship as part of her Master’s degree in Philosophy, working with us on a project on the ethics of outer space. Last year, she did some contract work for GCRI, focused on AI ethics. This work was highly successful, and we were delighted to be able to hire her into a full-time role earlier this year. Her work has already resulted in three publications, all of which address ethical issues related to AI and nonhumans: Moral consideration of nonhumans in the ethics of artificial intelligence, Artificial intelligence needs environmental ethics, and The ethics of sustainability for artificial intelligence. She has been an excellent member of the GCRI team and we look forward to her continued work with us.

Robert de Neufville has held the position of GCRI Director of Communications since 2016, and has been involved in GCRI since 2013. He will be stepping down in December to pursue new activities. As one of the longest-tenured members of the GCRI team, he has contributed immeasurably to the organization. A political scientist by training, he has helped GCRI chart our course through the complex terrain of US and global politics, such as in the GCRI statements on racism and on the January 6 US Capitol insurrection. His rich intellectual perspective further contributed to GCRI research, including the publications A model for the probability of nuclear war, Lessons for artificial intelligence from other global risks, and Collective action on artificial intelligence: A primer and review. We at GCRI are grateful for his service and wish him the best in his future endeavors.

McKenna Fitzgerald joined the GCRI team in 2020 in the position of Project Manager and Research Assistant. She has quickly become an invaluable contributor to the GCRI team, handling a broad portfolio of activities. As a researcher, she led the 2020 survey of artificial general intelligence projects for ethics, risk, and policy, which involved leading a team through detailed data collection, analysis, and documentation. As a project manager, she has facilitated interactions across a growing GCRI team. She was instrumental in managing the 2021 Advising and Collaboration Program, our largest and most successful yet. She has already been heavily involved in the management of GCRI, so we expect a smooth transition into her expanded role as Deputy Director and member of the GCRI Leadership Team. GCRI is fortunate to have her for this role.

2022 Plans

As the world settles into a new normal of pandemic life, we can make plans with greater confidence than we could one year ago. The pandemic very much remains a dynamic phenomenon, and we at GCRI are well aware of the potential for it to change dramatically over the next year and beyond. Nonetheless, now almost two years into the pandemic, it is clearer what the pandemic entails, including the challenges and opportunities it creates for addressing global catastrophic risk.

Our plans for 2022 are broadly consistent with the plans we published a year ago and with the work we have accomplished in 2021. We anticipate a year of further growth and progress in the same general strategic direction, with a few modest adjustments in the details. We believe that GCRI’s fundamentals are sound. We are pursuing opportunities to expand more ambitiously, and we are also prepared to work within the constraints of whatever resources we will have.

Below is an overview of the work we plan to do in 2022. This plan includes the flexibility to adjust as needed, for example depending on what funding we receive and what new opportunities emerge during the year.

Research

We have a number of active research projects that will continue into 2022. These include AI projects that were generously funded by Gordon Irlam earlier in 2021; active projects on a variety of topics with external collaborators, such as research being presented at the Society for Risk Analysis 2021 Annual Meeting; and ongoing ethics research led by Andrea Owe. We are likely to also pursue additional research projects as opportunities arise.

Outreach

Some of our 2021 outreach projects will also continue into 2022. Our work on the merits of policy outreach on AGI is incomplete; we anticipate a renewed effort on it in 2022. The IEEE and NIST initiatives remain active; we plan to remain involved to see our work through to the end. One new outreach direction we plan for 2022 is on AI corporate governance; our recent research on this topic suggests several outreach ideas that we are keen to pursue (some early outreach is outlined here). We are also likely to conduct policy outreach on international security topics including AI and possibly also nuclear weapons.

Community Support

We plan to continue and possibly expand our community support activities in 2022. The 2021 Advising and Collaboration Program was a great success; we plan to do a new round of the program in 2022, possibly at an even larger scale. The 2021 Fellowship Program was a late addition to our 2021 community support activities; in 2022, the program will benefit from advance planning. We also remain available to support other organizations’ programs as needed. We will keep an eye out for new organizations to partner with on community support activities.

Organization Development

Ongoing changes to the GCRI team will necessitate some organization development activities in 2022. We anticipate a very smooth transition with the promotion of McKenna Fitzgerald to Deputy Director, given her excellent qualifications for this new role. We expect more of an adjustment following the departure of Robert de Neufville, given the central role he has played at GCRI for so many years. Therefore, we plan to let the dust settle before making any new hiring decisions, so that the decisions can be made according to the needs we identify. Hiring decisions will also depend on the results of our ongoing fundraising. Throughout this period of transition and potential growth, we will continue to benefit from our robust network of collaborators, who considerably expand the overall capabilities of the organization.

Fundraising

GCRI currently operates on an annual budget of approximately $400,000. We have enough reserves to continue to operate through the beginning of 2023.

We are currently seeking to raise funds to extend our current operations further into the future and hopefully to expand them. We would be grateful for any support. Prospective contributors can visit our donate page or contact Seth Baum at seth [at] gcrinstitute.org.

Thanks to increased funding over the past year, we were able to hire Andrea Owe to the position of Research Associate. Her work has already resulted in three publications (this, this, and this) and three presentations (listed here), and she has provided general support for GCRI activities, including as an advisor for the GCRI Advising and Collaboration Program.

We are well-placed to use additional funds productively. We have a range of promising research and outreach projects we could pursue if we were funded at a higher level. We also have an extensive professional network of people who can collaborate with us and contribute to our work, in part thanks to our successful Advising and Collaboration Program. This puts us in an excellent position to continue to expand if we can raise the additional money.

Conclusion

Despite the ongoing challenges of the COVID-19 pandemic, this has been a good year for GCRI. We are grateful to the many people who have helped make this possible, including our advisees and collaborators, our colleagues at other organizations, and our funders. This is the sort of professional community needed to address global catastrophic risk. It has enabled us to do a lot of good work on the risks. We look forward to building on this work next year and beyond.

Note: This page was originally published as “Summary of 2021-2022 GCRI Accomplishments, Plans, and Fundraising”.

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.