From the course: Ethics in the Age of Generative AI

Distinguishing responsible tech from human behavior

- Generative AI is already transforming every aspect of the human experience, and I'm the kind of optimist who believes these tools will make us more human, more creative, inspired, and connected. In the banking sector, AI and machine learning are helping us identify people who might have opportunities to be more financially secure, to save more, to increase retirement contributions, and to have better pathways to economic opportunity. In agriculture, AI models are helping to predict large weather events so farmers and producers can understand when additional insurance is wise, or know exactly when to plant or harvest to maximize their economic returns. And maybe closer to home, inside of organizations, AI is transforming human resources, helping managers understand how to inspire better performance from their teams, correct potential bias or discrimination, and make sure that promotions are truly merit-based. But even as I'm excited about these potential opportunities ahead, I'm also convinced that we need to build these tools with positive intention, grounded in ethical and responsible reasoning. And I'm not alone. From the very beginning, developers of artificial intelligence have known the incredible power of these tools and have considered the ethical quandaries that might arise when we deploy them. In recent years, these apprehensions have reemerged with a sharper edge. Researchers, and now policymakers, are deeply concerned about the ways AI could perpetuate bias, could make the world more unequal, and could do so in ways that are invisible to us. Many of the ethical concerns that AI researchers have worried about are coming to light in a very real way.
Consider the idea of deepfakes: tools that might create a persona or an avatar pretending to be a trusted person in your organization, delivering fraudulent information to your customers, or even advising them to take dangerous or risky courses of action. We've seen the advent of chatbots everywhere, and we know that without ethical design, chatbots might give false advice, perhaps to medical practitioners or to students: information that sounds logical but is in fact based on inaccurate or untruthful sources, and has never been audited by a human being. And there are fundamental questions of ownership, of legal rights and copyright. Who owns the ideas and products created by generative AI? We're just beginning to resolve them. Join me for a moment in an example. Let's imagine that your company has deployed a new AI system to support the HR function, scanning applicants' resumes to identify potential interviewees. At first glance, the tool works incredibly well, operating just as quickly and surfacing the same number of candidates as your human support team. But as you dig deeper, some disturbing patterns emerge. The tool is prioritizing a particular gender, people from a particular address or neighborhood, or applicants with a particular pattern in their work history. And you realize that these are the same biases expressed by the humans whose decisions made up the dataset that trained the tool. We have a challenge here. The algorithm is now providing bad recommendations, but humans are the ones making the decisions about whom to interview. In situations like these, we need to ask ourselves: what is the highest standard of responsible human behavior? What actions can we take to best promote fairness and dignity? And is it possible that we've trained an AI to provide an answer that falls below that highest standard? When this happens, what can we do to help humans make better decisions based on the algorithm's recommendations?
If you're eager to learn how to answer these questions, stay with me. Later in this course, we'll cover, step by step, how to resolve dilemmas just like these.