{"id":41989,"date":"2024-04-09T06:00:00","date_gmt":"2024-04-09T10:00:00","guid":{"rendered":"https:\/\/issues.org\/?p=41989"},"modified":"2024-04-09T12:36:22","modified_gmt":"2024-04-09T16:36:22","slug":"interview-godmother-ai-fei-fei-li","status":"publish","type":"post","link":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/","title":{"rendered":"\u201cAI Is a Tool, and Its Values Are Human Values.\u201d"},"content":{"rendered":"\n

<p>Fei-Fei Li has been called the godmother of AI for her pioneering work in computer vision and image recognition. Li invented ImageNet, a foundational large-scale dataset that has contributed to key developments in deep learning and artificial intelligence. She previously served as chief scientist of AI at Google Cloud and as a member of the National Artificial Intelligence Research Resource Task Force for the White House Office of Science and Technology Policy and the National Science Foundation.<\/p>\n\n\n\n

<p>Li is currently the Sequoia Professor of Computer Science at Stanford University, where she cofounded and codirects the Institute for Human-Centered AI. She also cofounded the national nonprofit AI4ALL, which aims to increase inclusion and diversity in AI education. Li is a member of the National Academy of Engineering and the National Academy of Medicine, and her recent book is <em>The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI<\/em>.<\/p>\n\n\n\n

<p>In an interview with <em>Issues<\/em> editor Sara Frueh, Li shares her thoughts on how to keep AI centered on human well-being, the ethical responsibilities of AI scientists and developers, and whether there are limits to the human qualities AI can attain.<\/p>\n\n\n\n

<figure><figcaption>Illustration by Shonagh Rae.<\/figcaption><\/figure>\n\n\n\n

<p><strong><em>What drew you into AI? How did it happen, and what appealed to you about it?<\/em><\/strong><\/p>\n\n\n\n

<p><strong><em>Li:<\/em><\/strong> It was pure intellectual curiosity that developed around 25 years ago. And the audacity of a curious question, which is: What is intelligence, and can we make intelligent machines? That was just so much fun to ponder.<\/p>\n\n\n\n

<p>My original entry point into science was physics. I was an undergrad in physics at Princeton. And physics is a way of thinking about big and fundamental questions. One fun aspect of being a physics student is that you learn about the physical world, the atomic world.<\/p>\n\n\n\n

<figure><blockquote><p>What is intelligence, and can we make intelligent machines? That was just so much fun to ponder.<\/p><\/blockquote><\/figure>\n\n\n\n

<p>The question of intelligence is a contrast to that. It\u2019s so much more nebulous. Maybe one day we will prove that it\u2019s all just physically realized intelligence, but before that happens, it\u2019s just a whole different way of asking those fundamental questions. That was just fascinating. And of all the aspects of intelligence, visual intelligence is a cornerstone for animals and humans. The pixel world is so rich and mathematically infinite. To make sense of it, to be able to understand it, to be able to live within it, and to do things in it is just so fascinating to me.<\/p>\n\n\n\n

<p><strong><em>Where are we at in the development of AI? Do you see us as being at a crossroads or inflection point, and if so, what kind?<\/em><\/strong><\/p>\n\n\n\n

<p><strong><em>Li:<\/em><\/strong> We\u2019re absolutely at a very interesting time. Are we at an inflection point? The short answer is yes, but the longer answer is that technologies and our society will go through many inflection points. I don\u2019t want to overhype this by saying this is the singular one.<\/p>\n\n\n\n

<p>So it is an inflection point for several reasons. One is the power of new AI models. AI as a field is relatively young\u2014it\u2019s 60, maybe 70 years old by now. It\u2019s young enough that it\u2019s only come of age to the public recently. And suddenly we\u2019ve got these powerful models like large language models\u2014and that itself is an inflection point.<\/p>\n\n\n\n

<p>The second reason it\u2019s an inflection point is that the public has awakened to AI. We\u2019ve gone through a few earlier, smaller inflection points, like when AlphaGo beat a human Go player in 2016, but AlphaGo didn\u2019t change public life. You can sit here and watch a computer play a Go master, but it doesn\u2019t make your life different. ChatGPT changed that\u2014whether you\u2019re asking a question or trying to compose an email or translate a language. And now we have other generative AI creating art and all that. That just fundamentally changed people, and that public awakening is an inflection point.<\/p>\n\n\n\n

<p>And the third is socioeconomic. You combine the technology with the public awakening, and suddenly many of the doings of society are going to be impacted by this powerful technology. And that has profound impacts on business, socioeconomic structure, and labor, and there will be intended and unintended consequences\u2014including for democracy.<\/p>\n\n\n\n

<p><strong><em>Thinking about where we go from here\u2014you cofounded and lead the Institute for Human-Centered AI (HAI) at Stanford. What does it mean to develop AI in a human-centered way?<\/em><\/strong><\/p>\n\n\n\n

<p><strong><em>Li:<\/em><\/strong> It means recognizing AI is a tool. And tools don\u2019t have independent values\u2014their values are human values. That means we need to be responsible developers as well as governors of this technology\u2014which requires a framework. The human-centered framework is anchored in a shared commitment that AI should improve the human condition\u2014and it consists of concentric rings of responsibility and impact, from individuals to community to society as a whole.<\/p>\n\n\n\n

<figure><blockquote><p>You combine the technology with the public awakening, and suddenly many of the doings of society are going to be impacted by this powerful technology.<\/p><\/blockquote><\/figure>\n\n\n\n

<p>For example, human centeredness for the individual recognizes that this technology can empower or harm human dignity, can enhance or take away human jobs and opportunity, and can enhance or replace human creativity.<\/p>\n\n\n\n

<p>And then you look at community. This technology can help communities. But it can also exacerbate biases and challenges among different communities. It can become a tool to harm communities. So that\u2019s another level.<\/p>\n\n\n\n

<p>And then society\u2014this technology can unleash incredible, civilizational-scale positive changes like curing diseases, discovering drugs, finding new materials, creating climate solutions. Even last year\u2019s fusion milestone was very much empowered by AI and machine learning. At the same time, it can create real risks to society and to democracy, like disinformation and painful labor market change.<\/p>\n\n\n\n

<p>A lot of people, especially in Silicon Valley, talk about increased productivity. As a technologist, I absolutely believe in increased productivity, but that doesn\u2019t automatically translate into shared prosperity. And that\u2019s a societal-level issue. So whether you look at the individual, the community, or society, a human-centered approach to AI is important.<\/p>\n\n\n\n

<p><strong><em>Are there policies or incentives that could be implemented to ensure that AI is developed in ways that enhance human benefits and minimize risks?<\/em><\/strong><\/p>\n\n\n\n

<p><strong><em>Li:<\/em><\/strong> I think education is critical. I worry that the United States hasn\u2019t embraced effective education for our population\u2014whether it\u2019s K\u201312 or continuing education. A lot of people are fearful of this technology. There is a lack of public education on what this is. And I cringe when I read about AI in the news because it either lacks technical accuracy or it is going after eyeballs. The less proper education there is, the more despair and anxiety it creates for our society. And that\u2019s just not helpful.<\/p>\n\n\n\n

<figure><blockquote><p>As a technologist, I absolutely believe in increased productivity, but that doesn\u2019t automatically translate into shared prosperity.<\/p><\/blockquote><\/figure>\n\n\n\n

<p>For example, take children and learning. We\u2019re hearing about some schoolteachers absolutely banning AI. But we also see some children starting to use AI in a responsible way and learning to take advantage of this tool. And the difference between those who understand how to use AI and those who do not is going to have extremely profound downstream effects.<\/p>\n\n\n\n

<p>And of course, skillset education is also important. It\u2019s been how many decades since we entered the computing age? Yet I don\u2019t think US K\u201312 computing education is adequate. And that will also affect the future.<\/p>\n\n\n\n

<p>Thoughtful policies are important, but by policy I don\u2019t mean regulation exclusively. Policy can effectively incentivize and actually help to create a healthier ecosystem. I have been advocating for the National AI Research Resource, which would provide the public sector and the academic world with desperately needed computing and data resources to do more AI research and discovery. And that\u2019s part of policy as well.<\/p>\n\n\n\n

<p>And of course there are policies that need to look into the harms and unintended consequences of AI, especially in areas like health care, education, manufacturing, and finance.<\/p>\n\n\n\n

<p><strong><em>You mentioned that you\u2019ve been advocating for the National AI Research Resource (NAIRR). An NSF-led pilot of NAIRR has just begun, and legislation has been introduced in Congress\u2014the Create AI Act\u2014that would establish it at full scale. How would that shape the development of AI in a way that benefits people?<\/em><\/strong><\/p>\n\n\n\n

<p><strong><em>Li:<\/em><\/strong> The goal is to resource our public sector. NAIRR is a vision for a national infrastructure for AI research that democratizes the tools needed to advance discovery and innovation. It would create a public resource that enables academic and nonprofit AI researchers to access the tools they need\u2014including data, computing power, and training.<\/p>\n\n\n\n

<figure><blockquote><p>The difference between those who understand how to use AI and those who do not is going to have extremely profound downstream effects.<\/p><\/blockquote><\/figure>\n\n\n\n

<p>And so let\u2019s look at what the public sector means, not just in terms of AI, but fundamentally to our country and to our civilization. The public sector produces public goods in several forms. The first form is knowledge expansion and discovery in the long arc of civilizational progress, whether it\u2019s printing books or writing Beethoven\u2019s Sixth Symphony or curing diseases.<\/p>\n\n\n\n

<p>The second public good is talent. The public sector shoulders the education of students and the continued skilling of the public. And resourcing the public sector well means investing in the future of that talent.<\/p>\n\n\n\n

<p>And last but not least, the public sector is what the public should be able to trust when there is a need to assess, evaluate, or explain something. For example, I don\u2019t know exactly how ibuprofen works; most people don\u2019t. Yet we trust ibuprofen to be used in certain conditions. It\u2019s because there have been both public- and private-sector studies and assessments and evaluations and standardizations of how to use these drugs. And that is a very important process, so that by and large our public trusts using medications like ibuprofen.<\/p>\n\n\n\n

<p>We need the public sector to play that evaluative role in AI. For example, HAI has been comparing large language models in an objective way, but we\u2019re so resource-limited. We wish we could do an even better job, but we need to resource the public sector to do that.<\/p>\n\n\n\n

<p><strong><em>You\u2019re working on AI for health care. People think about AI as being used for drug discovery, but you\u2019re thinking about it in terms of the human experience. How do you think AI can improve the human experience in our fractured, frustrating health care system? And how did your own experience shape your vision for that?<\/em><\/strong><\/p>\n\n\n\n

<p><strong><em>Li:<\/em><\/strong> I\u2019ve been involved in AI health care for a dozen years\u2014really motivated by my personal journey of taking care of an ailing parent for the past three decades. And now two ailing parents. I\u2019ve been front and center in caring\u2014not just providing moral support, but playing the role of home nurse, translator, case manager, advocate, and all that. So I\u2019ve seen that health care is so much more than drug names and treatment plans and X-ray machines. Health care is people caring for people. Health care is ensuring patients are safe, are getting adequate, timely care, and are having a dignified care process.<\/p>\n\n\n\n

<p>And I learned we are not resourced for that. There are just not enough humans doing this work, and nurses are so in demand. And care for the elderly is even worse.<\/p>\n\n\n\n

<p>That makes me think that AI can assist with care\u2014seeing, hearing, triaging, and alerting. Depending on the situation, for example, it could be a pair of eyes watching a patient fall and alerting a person. It could be software running in the background, constantly watching for changes in lab results. It could be a conversation engine or software that answers patient questions. There are many forms of AI that can help in the care delivery aspect of health care.<\/p>\n\n\n\n

<p><strong><em>What are the ethical responsibilities of engineers and scientists like you who are directly involved in developing AI?<\/em><\/strong><\/p>\n\n\n\n

<p><strong><em>Li:<\/em><\/strong> I think there is absolutely individual responsibility in terms of how we are developing the technology. There are professional norms. There are laws. There\u2019s also the reflection of our own ethical value system. I will not be involved in using AI to develop a drug that is illegal and harmful to people, for example. Most people won\u2019t. So there\u2019s a lot, from individual values to professional norms to laws, where we have responsibility.<\/p>\n\n\n\n

<p>But I also feel we have a little bit of extra responsibility at this stage of AI because it\u2019s new. We have a responsibility in communication and education. This is why HAI does so much work with the policy world, with the business world, with the ecosystem, because if we can use our resources to communicate and educate about this technology in a responsible way, it\u2019s so much better than people reading misinformation that creates anxiety or irresponsible expectations of utopia. I guess it\u2019s individual and optional, but it is a legitimate responsibility we can take.<\/p>\n\n\n\n

<p><strong><em>When you think about AI\u2019s future, what worries you the most, and what gives you hope?<\/em><\/strong><\/p>\n\n\n\n

<p><strong><em>Li:<\/em><\/strong> It\u2019s not AI\u2019s future, it\u2019s humanity\u2019s future. We don\u2019t talk about electricity\u2019s future, we don\u2019t talk about steam\u2019s future. At the end of the day, it is our future, our species\u2019 future, and our civilization\u2019s future\u2014in the context of AI.<\/p>\n\n\n\n

<figure><blockquote><p>If we can use our resources to communicate and educate about this technology in a responsible way, it\u2019s so much better than people reading misinformation that creates anxiety or irresponsible expectations of utopia.<\/p><\/blockquote><\/figure>\n\n\n\n

<p>So the dangers and the hopes of our future rely on people. I\u2019m always more hopeful because I have hope in people. But when I get down or low, it\u2019s also because of people, not because of this technology. It\u2019s people\u2019s lack of responsibility, people\u2019s distortion of what this technology is, and also, frankly, the unfair role that power and money play, instigated or enhanced by this technology.<\/p>\n\n\n\n

<p>But then the positive side is the same. The students, the future generation, the people who are trying to do good, the doctors using AI to cure diseases, the biologists using AI to protect species, the agriculture companies using AI to innovate on farming. That\u2019s the hope I have for AI.<\/p>\n\n\n\n

<p><strong><em>Are there aspects of human intelligence that you think will always be beyond the capabilities of AI?<\/em><\/strong><\/p>\n\n\n\n

<p><strong><em>Li:<\/em><\/strong> I naturally think about compassion and love. I think this is what defines us as human\u2014possibly one of the most unique things about humans. Computers embody our values. But humans have the ability to love and feel compassion. Right now, it\u2019s not clear there is a mathematical path toward that.<\/p>\n","protected":false},"excerpt":{"rendered":"

Fei-Fei Li has been called the godmother of AI for her pioneering work in computer vision and image recognition. Li invented ImageNet, a foundational large-scale dataset that has contributed to key developments in deep learning and artificial intelligence. She previously served as chief scientist of AI at Google Cloud and as a member of the […]<\/p>\n","protected":false},"author":6,"featured_media":41990,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"apple_news_api_created_at":"","apple_news_api_id":"","apple_news_api_modified_at":"","apple_news_api_revision":"","apple_news_api_share_url":"","apple_news_cover_media_provider":"image","apple_news_coverimage":0,"apple_news_coverimage_caption":"","apple_news_cover_video_id":0,"apple_news_cover_video_url":"","apple_news_cover_embedwebvideo_url":"","apple_news_is_hidden":"","apple_news_is_paid":"","apple_news_is_preview":"","apple_news_is_sponsored":"","apple_news_maturity_rating":"","apple_news_metadata":"\"\"","apple_news_pullquote":"","apple_news_pullquote_position":"","apple_news_slug":"","apple_news_sections":[],"apple_news_suppress_video_url":false,"apple_news_use_image_component":false,"footnotes":""},"categories":[1],"byline":[4863,233],"issue":[4837],"series":[3913,2118],"collection":[4019,2529],"class_list":["post-41989","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","byline-fei-fei-li","byline-sara-frueh","issue-40-3","series-interview","series-perspectives","collection-digital-landscape","collection-ethics"],"acf":[],"apple_news_notices":[],"yoast_head":"\nInterview With Fei-Fei Li<\/title>\n<meta name=\"description\" content=\"Deep learning and artificial intelligence pioneer Fei-Fei Li shares her thoughts on how to keep AI centered on human well-being.\" \/>\n<meta name=\"robots\" content=\"noindex, follow, max-snippet:-1, max-image-preview:large, 
max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"\u201cAI Is a Tool, and Its Values Are Human Values.\u201d\" \/>\n<meta property=\"og:description\" content=\"Deep learning and artificial intelligence pioneer Fei-Fei Li shares her thoughts on how to keep AI centered on human well-being.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/\" \/>\n<meta property=\"og:site_name\" content=\"Issues in Science and Technology\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/ISSUESinST\/\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-09T10:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-04-09T16:36:22+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/issues.org\/wp-content\/uploads\/2024\/04\/Interview_Fei-Fei-Li.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1494\" \/>\n\t<meta property=\"og:image:height\" content=\"1606\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Jay Lloyd\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@ISSUESinST\" \/>\n<meta name=\"twitter:site\" content=\"@ISSUESinST\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/\"},\"author\":{\"name\":\"Jay Lloyd\",\"@id\":\"https:\/\/live-issues-asu.ws.asu.edu\/#\/schema\/person\/efff9131bd7a7953dd22fcdcc4b5bc13\"},\"headline\":\"\u201cAI Is a Tool, and Its Values Are Human 
Values.\u201d\",\"datePublished\":\"2024-04-09T10:00:00+00:00\",\"dateModified\":\"2024-04-09T16:36:22+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/\"},\"wordCount\":2523,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/live-issues-asu.ws.asu.edu\/#organization\"},\"image\":{\"@id\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/issues.org\/wp-content\/uploads\/2024\/04\/Interview_Fei-Fei-Li.jpg\",\"keywords\":[\"ai\",\"AI ethics\",\"artificial intelligence\",\"computing\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/\",\"url\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/\",\"name\":\"Interview With Fei-Fei Li\",\"isPartOf\":{\"@id\":\"https:\/\/live-issues-asu.ws.asu.edu\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/issues.org\/wp-content\/uploads\/2024\/04\/Interview_Fei-Fei-Li.jpg\",\"datePublished\":\"2024-04-09T10:00:00+00:00\",\"dateModified\":\"2024-04-09T16:36:22+00:00\",\"description\":\"Deep learning and artificial intelligence pioneer Fei-Fei Li shares her thoughts on how to keep AI centered on human 
well-being.\",\"breadcrumb\":{\"@id\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#primaryimage\",\"url\":\"https:\/\/issues.org\/wp-content\/uploads\/2024\/04\/Interview_Fei-Fei-Li.jpg\",\"contentUrl\":\"https:\/\/issues.org\/wp-content\/uploads\/2024\/04\/Interview_Fei-Fei-Li.jpg\",\"width\":1494,\"height\":1606,\"caption\":\"Illustration by Shonagh Rae.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/issues.org\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"\u201cAI Is a Tool, and Its Values Are Human Values.\u201d\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/live-issues-asu.ws.asu.edu\/#website\",\"url\":\"https:\/\/live-issues-asu.ws.asu.edu\/\",\"name\":\"Issues in Science and Technology\",\"description\":\"The best minds on the most important topics.\",\"publisher\":{\"@id\":\"https:\/\/live-issues-asu.ws.asu.edu\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/live-issues-asu.ws.asu.edu\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/live-issues-asu.ws.asu.edu\/#organization\",\"name\":\"Issues in Science and 
Technology\",\"url\":\"https:\/\/live-issues-asu.ws.asu.edu\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/live-issues-asu.ws.asu.edu\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.issues.org\/wp-content\/uploads\/2019\/11\/Social-Media-Logo-Blue-2.jpg\",\"contentUrl\":\"https:\/\/www.issues.org\/wp-content\/uploads\/2019\/11\/Social-Media-Logo-Blue-2.jpg\",\"width\":792,\"height\":792,\"caption\":\"Issues in Science and Technology\"},\"image\":{\"@id\":\"https:\/\/live-issues-asu.ws.asu.edu\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/ISSUESinST\/\",\"https:\/\/x.com\/ISSUESinST\",\"https:\/\/www.linkedin.com\/company\/issues-in-science-technology\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/live-issues-asu.ws.asu.edu\/#\/schema\/person\/efff9131bd7a7953dd22fcdcc4b5bc13\",\"name\":\"Jay Lloyd\",\"url\":\"https:\/\/issues.org\/author\/jay\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Interview With Fei-Fei Li","description":"Deep learning and artificial intelligence pioneer Fei-Fei Li shares her thoughts on how to keep AI centered on human well-being.","robots":{"index":"noindex","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"og_locale":"en_US","og_type":"article","og_title":"\u201cAI Is a Tool, and Its Values Are Human Values.\u201d","og_description":"Deep learning and artificial intelligence pioneer Fei-Fei Li shares her thoughts on how to keep AI centered on human well-being.","og_url":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/","og_site_name":"Issues in Science and 
Technology","article_publisher":"https:\/\/www.facebook.com\/ISSUESinST\/","article_published_time":"2024-04-09T10:00:00+00:00","article_modified_time":"2024-04-09T16:36:22+00:00","og_image":[{"url":"https:\/\/issues.org\/wp-content\/uploads\/2024\/04\/Interview_Fei-Fei-Li.jpg","width":1494,"height":1606,"type":"image\/jpeg"}],"author":"Jay Lloyd","twitter_card":"summary_large_image","twitter_creator":"@ISSUESinST","twitter_site":"@ISSUESinST","schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#article","isPartOf":{"@id":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/"},"author":{"name":"Jay Lloyd","@id":"https:\/\/live-issues-asu.ws.asu.edu\/#\/schema\/person\/efff9131bd7a7953dd22fcdcc4b5bc13"},"headline":"\u201cAI Is a Tool, and Its Values Are Human Values.\u201d","datePublished":"2024-04-09T10:00:00+00:00","dateModified":"2024-04-09T16:36:22+00:00","mainEntityOfPage":{"@id":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/"},"wordCount":2523,"commentCount":0,"publisher":{"@id":"https:\/\/live-issues-asu.ws.asu.edu\/#organization"},"image":{"@id":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#primaryimage"},"thumbnailUrl":"https:\/\/issues.org\/wp-content\/uploads\/2024\/04\/Interview_Fei-Fei-Li.jpg","keywords":["ai","AI ethics","artificial intelligence","computing"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/","url":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/","name":"Interview With Fei-Fei 
Li","isPartOf":{"@id":"https:\/\/live-issues-asu.ws.asu.edu\/#website"},"primaryImageOfPage":{"@id":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#primaryimage"},"image":{"@id":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#primaryimage"},"thumbnailUrl":"https:\/\/issues.org\/wp-content\/uploads\/2024\/04\/Interview_Fei-Fei-Li.jpg","datePublished":"2024-04-09T10:00:00+00:00","dateModified":"2024-04-09T16:36:22+00:00","description":"Deep learning and artificial intelligence pioneer Fei-Fei Li shares her thoughts on how to keep AI centered on human well-being.","breadcrumb":{"@id":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#primaryimage","url":"https:\/\/issues.org\/wp-content\/uploads\/2024\/04\/Interview_Fei-Fei-Li.jpg","contentUrl":"https:\/\/issues.org\/wp-content\/uploads\/2024\/04\/Interview_Fei-Fei-Li.jpg","width":1494,"height":1606,"caption":"Illustration by Shonagh Rae."},{"@type":"BreadcrumbList","@id":"https:\/\/issues.org\/interview-godmother-ai-fei-fei-li\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/issues.org\/"},{"@type":"ListItem","position":2,"name":"\u201cAI Is a Tool, and Its Values Are Human Values.\u201d"}]},{"@type":"WebSite","@id":"https:\/\/live-issues-asu.ws.asu.edu\/#website","url":"https:\/\/live-issues-asu.ws.asu.edu\/","name":"Issues in Science and Technology","description":"The best minds on the most important 
topics.","publisher":{"@id":"https:\/\/live-issues-asu.ws.asu.edu\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/live-issues-asu.ws.asu.edu\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/live-issues-asu.ws.asu.edu\/#organization","name":"Issues in Science and Technology","url":"https:\/\/live-issues-asu.ws.asu.edu\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/live-issues-asu.ws.asu.edu\/#\/schema\/logo\/image\/","url":"https:\/\/www.issues.org\/wp-content\/uploads\/2019\/11\/Social-Media-Logo-Blue-2.jpg","contentUrl":"https:\/\/www.issues.org\/wp-content\/uploads\/2019\/11\/Social-Media-Logo-Blue-2.jpg","width":792,"height":792,"caption":"Issues in Science and Technology"},"image":{"@id":"https:\/\/live-issues-asu.ws.asu.edu\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/ISSUESinST\/","https:\/\/x.com\/ISSUESinST","https:\/\/www.linkedin.com\/company\/issues-in-science-technology\/"]},{"@type":"Person","@id":"https:\/\/live-issues-asu.ws.asu.edu\/#\/schema\/person\/efff9131bd7a7953dd22fcdcc4b5bc13","name":"Jay 
Lloyd","url":"https:\/\/issues.org\/author\/jay\/"}]}},"_links":{"self":[{"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/posts\/41989","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/comments?post=41989"}],"version-history":[{"count":3,"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/posts\/41989\/revisions"}],"predecessor-version":[{"id":42014,"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/posts\/41989\/revisions\/42014"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/media\/41990"}],"wp:attachment":[{"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/media?parent=41989"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/categories?post=41989"},{"taxonomy":"byline","embeddable":true,"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/byline?post=41989"},{"taxonomy":"issue","embeddable":true,"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/issue?post=41989"},{"taxonomy":"series","embeddable":true,"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/series?post=41989"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/issues.org\/wp-json\/wp\/v2\/collection?post=41989"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}