{"id":41989,"date":"2024-04-09T06:00:00","date_gmt":"2024-04-09T10:00:00","guid":{"rendered":"https:\/\/issues.org\/?p=41989"},"modified":"2024-04-09T12:36:22","modified_gmt":"2024-04-09T16:36:22","slug":"interview-godmother-ai-fei-fei-li","status":"publish","type":"post","link":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/","title":{"rendered":"\u201cAI Is a Tool, and Its Values Are Human Values.\u201d"},"content":{"rendered":"\n
Fei-Fei Li has been called the godmother of AI for her pioneering work in computer vision and image recognition. Li created ImageNet, a foundational large-scale dataset that has contributed to key developments in deep learning and artificial intelligence. She previously served as chief scientist of AI at Google Cloud and as a member of the National Artificial Intelligence Research Resource Task Force for the White House Office of Science and Technology Policy and the National Science Foundation.<\/p>\n\n\n\n
Li is currently the Sequoia Professor of Computer Science at Stanford University, where she cofounded and codirects the Institute for Human-Centered AI<\/a>. She also cofounded the national nonprofit AI4ALL<\/a>, which aims to increase inclusion and diversity in AI education. Li is a member of the National Academy of Engineering and the National Academy of Medicine, and her recent book is The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI<\/a><\/em>. <\/p>\n\n\n\n In an interview with Issues<\/em> editor Sara Frueh, Li shares her thoughts on how to keep AI centered on human well-being, the ethical responsibilities of AI scientists and developers, and whether there are limits to the human qualities AI can attain.<\/p>\n\n\n\n What drew you into AI? How did it happen, and what appealed to you about it?<\/em><\/strong><\/p>\n\n\n\n Li:<\/em><\/strong> It was a pure intellectual curiosity that developed around 25 years ago. And the audacity of a curious question, which is: What is intelligence, and can we make intelligent machines? That was just so much fun to ponder.<\/p>\n\n\n\n My original entry point into science was physics. I was an undergrad in physics at Princeton. And physics is a way of thinking about big and fundamental questions. One fun aspect of being a physics student is that you learn about the physical world, the atomic world.<\/p>\n\n\n\n What is intelligence, and can we make intelligent machines? That was just so much fun to ponder.<\/p><\/blockquote><\/figure>\n\n\n\n The question of intelligence is a contrast to that. It\u2019s so much more nebulous. Maybe one day we will prove that it\u2019s all just physically realized intelligence, but before that happens, it\u2019s just a whole different way of asking those fundamental questions. That was just fascinating. And of all the aspects of intelligence, visual intelligence is a cornerstone of intelligence for animals and humans. 
The pixel world is so rich and mathematically infinite. To make sense of it, to be able to understand it, to be able to live within it, and to do things in it is just so fascinating to me.<\/p>\n\n\n\n Where are we at in the development of AI? Do you see us as being at a crossroads or inflection point, and if so, what kind?<\/em><\/strong><\/p>\n\n\n\n Li: <\/em><\/strong>We\u2019re absolutely at a very interesting time. Are we at an inflection point? The short answer is yes, but the longer answer is that technologies and our society will go through many inflection points. I don\u2019t want to overhype this by saying this is the singular one.<\/p>\n\n\n\n So it is an inflection point for several reasons. One is the power of new AI models. AI as a field is relatively young\u2014it\u2019s 60, maybe 70 years old by now. It\u2019s young enough that it\u2019s only come of age to the public recently. And suddenly we\u2019ve got these powerful models like large language models\u2014and that itself is an inflection point.<\/p>\n\n\n\n The second reason it\u2019s an inflection point is the public has awakened to AI. We\u2019ve gone through a few earlier, smaller inflection points, like when AlphaGo beat a human Go player in 2016, but AlphaGo didn\u2019t change public life. You can sit here and watch a computer play a Go master, but it doesn\u2019t make your life different. ChatGPT changed that\u2014whether you\u2019re asking a question or trying to compose an email or translate a language. And now we have other generative AI creating art and all that. That just fundamentally changed people, and that public awakening is an inflection point.<\/p>\n\n\n\n And the third is socioeconomic. You combine the technology with the public awakening, and suddenly many of the doings of society are going to be impacted by this powerful technology. 
And that has profound impacts on business, socioeconomic structure, and labor, and there will be intended and unintended consequences\u2014including for democracy.<\/p>\n\n\n\n Thinking about where we go from here\u2014you cofounded and lead the Institute for Human-Centered AI (HAI) at Stanford. What does it mean to develop AI in a human-centered way?<\/em><\/strong><\/p>\n\n\n\n Li:<\/em><\/strong> It means recognizing AI is a tool. And tools don\u2019t have independent values\u2014their values are human values. That means we need to be responsible developers as well as governors of this technology\u2014which requires a framework. The human-centered framework<\/a> is anchored in a shared commitment that AI should improve the human condition\u2014and it consists of concentric rings of responsibility and impact, from individuals to community to society as a whole.<\/p>\n\n\n\n You combine the technology with the public awakening, and suddenly many of the doings of society are going to be impacted by this powerful technology.<\/p><\/blockquote><\/figure>\n\n\n\n For example, human centeredness for the individual recognizes that this technology can empower or harm human dignity, can enhance or take away human jobs and opportunity, and can enhance or replace human creativity.<\/p>\n\n\n\n And then you look at community. This technology can help communities. But this technology can also exacerbate the bias or the challenges among different communities. It can become a tool to harm communities. So that\u2019s another level.<\/p>\n\n\n\n And then society\u2014this technology can unleash incredible, civilizational-scale positive changes like curing diseases, discovering drugs, finding new materials, creating climate solutions. Even last year\u2019s fusion milestone was very much empowered by AI and machine learning. 
In the meantime, it can really create risks to society and to democracy, like disinformation and painful labor market change.<\/p>\n\n\n\n A lot of people, especially in Silicon Valley, talk about increased productivity. As a technologist, I absolutely believe in increased productivity, but that doesn\u2019t automatically translate into shared prosperity. And that\u2019s a societal level issue. So no matter if you look at the individual, community, or society, a human-centered approach to AI is important.<\/p>\n\n\n\n Are there policies or incentives that could be implemented to ensure that AI is developed in ways that enhance human benefits and minimize risks?<\/em><\/strong><\/p>\n\n\n\n Li:<\/em><\/strong> I think education is critical. I worry that the United States hasn\u2019t embraced effective education for our population\u2014whether it\u2019s K\u201312 or continuing education. A lot of people are fearful of this technology. There is a lack of public education on what this is. And I cringe when I read about AI in the news because it either lacks technical accuracy or it is going after eyeballs. The less proper education there is, the more despair and anxiety it creates for our society. And that\u2019s just not helpful.<\/p>\n\n\n\n As a technologist, I absolutely believe in increased productivity, but that doesn\u2019t automatically translate into shared prosperity.<\/p><\/blockquote><\/figure>\n\n\n\n For example, take children and learning. We\u2019re hearing about some schoolteachers absolutely banning AI. But we also see some children starting to use AI in a responsible way and learning to take advantage of this tool. And the difference between those who understand how to use AI and those who do not is going to have extremely profound downstream effects.<\/p>\n\n\n\n And of course, skillset education is also important. It\u2019s been how many decades since we entered the computing age? Yet I don\u2019t think US K\u201312 computing education is adequate. 
And that will also affect the future.<\/p>\n\n\n\n Thoughtful policies are important, but by policy I don\u2019t mean regulation exclusively. Policy can effectively incentivize and actually help to create a healthier ecosystem. I have been advocating for the National AI Research Resource<\/a>, which would provide the public sector and the academic world with desperately needed computing and data resources to do more AI research and discovery. And that\u2019s part of policy as well.<\/p>\n\n\n\n And of course there are policies that need to look into the harms and unintended consequences of AI, especially in areas like health care, education, manufacturing, and finance.<\/p>\n\n\n\n