My newest Ask-A-Data-Scientist post was inspired by a computer science student who wrote in asking for advice on how to pursue a career in policy making related to the societal impacts of AI. I realized that there are many great resources out there, and I wanted to compile a list of links all in one place.
You can find my previous Ask-A-Data-Scientist advice columns here.
Everyone in tech should be concerned about the ethical implications of our work and should actively engage with such questions. The humanities and social sciences are incredibly relevant and important in addressing questions of ethics. While tech ethics is not a new field (it has traditionally been studied within science, technology, & society (STS) or information science departments), many in the tech industry are now waking up to these questions, and there is much wider interest in the topic than before.
Working on AI ethics takes many forms, including: founding tech companies and building products in ethical ways; advocating and working for more just laws and policies; attempting to hold bad actors accountable; and researching, writing, and teaching in the field. I have included many links to further resources in the rest of this post, as well as a few concrete suggestions. Don’t be overwhelmed by the length of these lists! This post is intended to be a resource that you can refer back to as needed:
- Build up your technical skills
- Start a reading group (and links to syllabi for 200+ tech ethics courses)
- 10 AI Ethics Experts to Follow
- Institutes and Fellowships
- Create your own
- Related fast.ai posts and talks
For an overview of some AI ethics issues, I encourage you to check out my recent PyBay keynote on the topic. Through a series of case studies, both negative and positive, I counter 4 misconceptions about tech that often lead to human harm, and offer some healthier principles in their place.
Build up your technical skills
For anyone interested in the societal impact of AI, I recommend building up your technical knowledge of machine learning. Even if you do not plan on working as a programmer or deep learning practitioner, it is helpful to have a hands-on understanding of how this technology works and how it can be used. I encourage everyone interested in AI ethics and policy to learn Python and to take the Practical Deep Learning for Coders course (the only prerequisite is one year of coding experience).
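If the technical side feels intimidating, it can help to see how small the core idea of machine learning is before diving into a full course. The sketch below (illustrative only, not taken from the course; uses nothing beyond the Python standard library) shows the essence of what "learning from data" means: a model starts with arbitrary parameters and repeatedly nudges them to reduce its error on examples.

```python
# Minimal illustration of machine learning's core loop: fit y ~ w*x + b
# to example data by gradient descent on the mean squared error.

def fit_line(xs, ys, lr=0.01, steps=2000):
    """Learn slope w and intercept b from (x, y) examples."""
    w, b = 0.0, 0.0          # start with arbitrary parameters
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Nudge parameters in the direction that reduces the error
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data generated from the rule y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # recovers roughly 2.0 and 1.0
```

Deep learning scales this same loop up to millions of parameters and much richer models, which is exactly why a hands-on grounding matters: questions about bias and accountability often come down to what data the loop was fit to.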
Start a reading group
Casey Fiesler, a professor in Information Science at CU Boulder, created a crowd-sourced spreadsheet of over 200 tech ethics courses and links to the syllabi for many of them. Even if your university does not offer a tech ethics course, I encourage you to start a club, reading group, or a student-led course on tech ethics, and these syllabi can be a helpful resource in creating your own.
For those who are not college students, consider starting a tech ethics reading group at your workplace (that could perhaps meet for lunch once a week and discuss a different reading each week) or a tech ethics meetup in your city.
10 AI Ethics Experts to Follow
Here are ten researchers whose work on AI ethics I admire and whom I recommend following. All of them have many great articles, talks, and interviews, though I’ve linked to just one each to get you started:
- Zeynep Tufekci is a professor at UNC School of Information and Library Science, and writes a New York Times column. Read her MIT Tech Review article, How social media took us from Tahrir Square to Donald Trump.
- Timnit Gebru earned her CS PhD at Stanford, just finished a postdoc at Microsoft Research, and is a founder of Black in AI. Read her paper Datasheets for Datasets.
- Latanya Sweeney is a professor of Government and Technology at Harvard University, Editor-in-Chief of Technology Science, and Director of the Data Privacy Lab at Harvard. She was formerly CTO at the U.S. Federal Trade Commission. Watch her FATML Keynote, Saving Humanity.
- Arvind Narayanan is a Princeton Computer Science Professor who studies digital privacy, infosec, cryptocurrencies & blockchains, AI ethics, and tech policy. Watch his FATML tutorial, 21 Definitions of Fairness.
- Kate Crawford is co-founder of the AI Now Institute at NYU, a principal researcher at Microsoft, and distinguished research professor at NYU. Watch her talk Politics of AI.
- danah boyd is a Principal Researcher at Microsoft Research and the founder of Data & Society. Watch her re:publica keynote, How an algorithmic world can be undermined.
- Joy Buolamwini is founder of the Algorithmic Justice League and just completed her PhD at MIT Media Lab. Read and watch her research on racial bias in computer vision at gendershades.org.
- Renee DiResta is Director of Research at New Knowledge, Head of Policy at nonprofit Data for Democracy, and has testified to Congress about computational propaganda and disinformation. She is a regular contributor at Wired. Read her article Up next: a better recommendation system.
- Alvaro Bedoya is the founding Executive Director of the Center on Privacy & Technology at Georgetown Law. He was previously a senate staffer on issues of mobile location privacy, health data privacy, NSA transparency, and biometric privacy. Read his New York Times article, Why Silicon Valley Lobbyists Love Big, Broad Privacy Bills.
- Guillaume Chaslot is a former YouTube engineer, founder of AlgoTransparency, and worked with the WSJ and Guardian to investigate YouTube. Read his post on How Algorithms Can Learn to Discredit the Media.
Institutes and Fellowships
The institutes below all offer a range of ways to get involved, including listening to their podcasts and videos (wherever you may be located in the world), attending in-person events, or applying for internships and fellowships to help fund your work in this area:
Harvard’s Berkman Klein Center for Internet & Society is a research center that seeks to bring people from around the globe together to tackle the biggest challenges presented by the Internet. Their programs include a fellowship program, internships, and Assembly, a 4-month program for technologists, managers, and policymakers to confront emerging problems related to the ethics and governance of artificial intelligence.
Data & Society is a non-profit research institute founded by danah boyd in NYC. They have a year-long fellowship program which is open to data scientists and engineers, lawyers and librarians, ethnographers and creators, historians and activists.
AI Now Institute was founded by Kate Crawford and Meredith Whittaker, and is housed at NYU. They focus on four domains: rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure.
Georgetown Law Center on Privacy and Technology is a think tank focused on privacy and surveillance law and policy—and the communities they affect. Their research includes The Perpetual Line-Up about the unregulated use of facial recognition technology by police in the USA.
Data for Democracy is a non-profit organization of volunteers that has worked on a variety of projects, including several collaborations with ProPublica.
Mozilla Media Fellowships fund new thinking on how to address emerging threats and challenges facing a healthy internet. Relevant projects have sought to address polarization, mass surveillance, and misinformation.
Knight Foundation (journalism focus) funds programs, including an AI ethics initiative, to support free expression and journalistic excellence in the digital age. They have supported a number of projects related to addressing disinformation.
Eyebeam Residency (for artists) offers fellowships for those creating work which engages with technology and society through art. Previous projects include the open-source educational startup littleBits (2009) and the first Feminist Wikipedia Edit-A-Thon (2013).
Aspen Tech Policy Hub Fellowship is a new program that teaches tech experts the policy process. During the program, each fellow will create at least one practical policy output, for instance mock legislation, toolkits for policymakers, white papers, op-eds, or an app.
Create your own
If what you want doesn’t yet exist in the world, you may need to create your own group, organization, non-profit, or startup. Timnit Gebru, a computer vision researcher, is an excellent role model for this. Dr. Gebru describes her experience as a Black woman attending NIPS (a major AI conference) in 2016: “I went to NIPS and someone was saying there were an estimated 8,500 people. I counted six black people. I was literally panicking. That’s the only way I can describe how I felt. I saw that this field was growing exponentially, hitting the mainstream; it’s affecting every part of society.” Dr. Gebru went on to found Black in AI, a large and active network of Black AI researchers, which has led to new research collaborations and conference and speaking invitations for members, and was even a factor in Google AI deciding to open a research center in Accra, Ghana.
Related fast.ai links
At fast.ai, we frequently write and speak about ethics, and we include the topic in our deep learning course. Here are a few posts you may be interested in:
- What HBR Gets Wrong About Algorithms and Bias
- When Data Science Destabilizes Democracy and Facilitates Genocide
- What You Need to Know About Facebook and Ethics
- Diversity Crisis in AI, 2017 edition
- The Diversity Crisis in AI, and fast.ai Diversity Fellowship (2016)
Here are some talks we’ve given on this topic:
- Analyzing & Preventing Unconscious Bias in Machine Learning (keynote at QCon.ai)
- Word Embeddings, Bias in ML, Why You Don’t Like Math, & Why AI Needs You (workshop about bias in word embeddings such as Word2Vec)
- fast.ai Lesson 13: Ethics & Image Enhancement
- Some Healthy Principles About Ethics & Bias In AI (keynote at PyBay)
The ethical impact of technology is a huge and relevant area, and there is a lot of work to be done.