From the course: Introduction to AI Ethics and Safety
Calm down
- Again, we've talked about how scary all of this can be. We've talked about how we might have an AI apocalypse where humanity is wiped out, and how regulation isn't catching up to any of it. But now I want to introduce you to the opposing view, which is basically just: calm down, it's not that serious.

For one thing, AGI, artificial general intelligence, a singularity, is not inevitable. Many researchers think the claims of imminent AGI are not as certain as some would have you believe. Here's a quote from a professor of computational cognitive science: "We often overestimate what computers are capable of while vastly underestimating what human cognition is capable of." So that's one thing: this whole scary idea that AI will become much smarter than humans and take over, leaving us without a chance, might never happen at all.

Another take is that AI is not that scary. Big tech companies profit from the hype that surrounds AI. It benefits them for AI to be a little confusing and a little daunting; they like that people are a little on edge about it. And then there's the idea that we could have named artificial intelligence something else entirely. We could have called it "probabilistic computational methods for predicting human speech," or predicting whatever, some very long, boring technical name that is actually a bit more descriptive of what it's doing. So some think that if we had called these computational techniques something other than artificial intelligence, there would be much less hype and fear surrounding them.
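To make that "long, boring technical name" a little more concrete, here is a minimal sketch, not from the course, of what probabilistic prediction of speech can look like: a toy bigram model that counts which word tends to follow which, then guesses the likeliest next word. The tiny corpus is made up for illustration, and real systems are vastly more sophisticated, but the spirit is similar.

    # A toy "probabilistic computational method for predicting human speech":
    # count word pairs in a corpus, then predict the most likely next word.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # For each word, count how often every other word follows it.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        # Return the likeliest next word and its estimated probability.
        counts = follows[word]
        nxt, n = counts.most_common(1)[0]
        return nxt, n / sum(counts.values())

    print(predict_next("the"))  # ('cat', 0.5): "cat" follows "the" in 2 of 4 cases

Nothing mystical is happening there: it's counting and dividing, which is part of the "calm down" argument.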
"Artificial intelligence" is a very cool, kind of scary-sounding name. Something that just sounds technical, full of terms laypeople won't even recognize, isn't as scary. That's just, "Oh, cool, a new technological tool just dropped." And it is just a technological tool, like cell phones, like the internet, like all the other new technologies that have come before. There has always been fear about new technologies. Even Socrates was scared of writing: "If men learn this," writing, "it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written." Okay, sure, maybe our memory is not as good as it was over 2,000 years ago; we rely on it less. But writing and reading have opened up far more learning, far more for humanity, than not having them would have. AI could be something like this, where there's all this fear around it, all this hype around it. And yes, it may fundamentally change how our society works, how we do things, even how we think, but that's not necessarily bad. It doesn't necessarily spell doom for humanity.

Now, I just want to brainstorm a bit about potential pros and cons of AI in our world. There are a couple here to get you started, but now that you know a bit more about generative AI and AI in general, you can think about this on your own. On the pro side: maybe we'll have more effective learning with generative AI. Maybe it will create jobs. Will it make us more creative? Will we have better communication across languages, with potentially instantaneous translation? On the con side: maybe we'll have a reduction in human interaction, as AI chatbots can simulate it. Maybe we'll have a decrease in our skills or our motivation. Maybe we'll have fewer jobs. Maybe we'll have an increase in societal inequality: we saw how models can be biased, how biased input can lead to biased output and exacerbate existing disparities, and how people have very different levels of access to these large models. These are just a few examples. What is most important on these lists? What else would you add?

Similar to my question at the end of the last section, the AI ethics section, I'd like to leave you with an open-ended question. Right now, a lot of the goals for AI are focused on automating human work. Is that what we want out of AI, ultimately? What happens if humans no longer have to do any work? Remember the pie chart of AI experts asked about AI's long-run effect on humanity: about half thought it would be good or extremely good, and a quarter thought it would be extremely good. What does "extremely good" look like? What does that utopia look like? What do we want from AI, ultimately?