Part 1: Predicting the Future
00:00/14:18
Hello, listeners of Himalaya, and welcome to this episode in our series "AI and Us". In this episode we continue our investigation into the dark side of AI - and we will look specifically at the danger of AI getting its predictions right. Why can this be a problem? You'll see shortly.
Big Data and AI help us make better decisions by helping us better understand the world we live in, and often by better predicting what may happen, and when. That way we can better judge the options available to us and pick the right one. For example, AI can predict whether a skin lesion is likely cancerous and help us make a decision to have the lesion surgically removed. AI may predict that a part in a machine will break before it actually does, enabling us to replace that part when it suits the manufacturing process rather than having to stop everything and tend to a broken machine.
And AI may predict whether somebody is likely to pay back a loan, thereby making credit available at low interest rates to those who need it and are worthy of our trust. In each of these cases, and many more, AI predictions help us make better decisions. And the problems we worry about are those associated with erroneous predictions, with AI getting it wrong, so that we take wrong decisions based on the incorrect prediction. The singular goal, it seems, is to ensure that predictions are very good, and that, if anything, they get better and better over time.
But what if the predictions become really, really accurate, and AI is able to foretell the future in myriad small, but also bigger, ways? On the one hand, we would be able to make better decisions, as risk is reduced and replaced with certitude. On the other hand, with such accurate predictions we may end up in a world similar to the one depicted in the Hollywood blockbuster movie "Minority Report".
In the movie, the future can be predicted accurately; as a result, people are imprisoned not for crimes they have committed, but for crimes they are only predicted to commit. The catch, of course, is that the system never fails: all predictions do come true, and so those who are imprisoned aren't seen as innocent, even though they have not done anything illegal by the time they are caught.
Foreseeing the future is a common storyline in science fiction novels and movies. "Minority Report" is just one example. The first big science fiction series in the West that had predicting the future at its core is the "Foundation" series by science fiction superstar Isaac Asimov. Written in the 1940s and 1950s, it was voted the best all-time science fiction series in the 1960s and has been the cornerstone of Asimov's world fame. But in Asimov's story predictions are accurate only in the aggregate, for a large group of people for example, and can't be broken down to the individual. In "Minority Report", predictions are omniscient in that they can predict the behaviour not just of large groups, but of each and every individual human being.
Now, the storylines in Asimov's "Foundation" (which, by the way, Apple has recently turned into a streaming TV series) and in "Minority Report" may be extreme and very unlikely to ever become reality. But let's just think through what it would mean for humanity if they were to come true.
Part 2: The Future of Human Volition
04:18/14:18
Humans believe that they have volition. It's something we experience in our daily lives. We decide what to do; we are not puppets on a string. Now, of course we all face constraints on our volition. Our actions have consequences. But that's precisely it: we face the consequences because we are free to act. A key element of being responsible for one's actions is, quite simply, that one has control over them.
If I am unconscious, I can't be held responsible for what my body does. Only when I have control, only when I have the choice to act or not to act, can I conceptually take on responsibility. Human volition is the prerequisite for human responsibility. If some other force made me do things, then that force needs to be blamed for the actions it forced me to undertake, not I. This principle of volition and responsibility runs deep through modern human consciousness.
But it was not always that way. Centuries ago, many people believed that the Gods moved humans around like puppets, and that we were objects of their whims, not subjects of our own lives. That changed as humans entered the Enlightenment and modernity. In an age of rationalism, we understand ourselves no longer as puppets, but as masters of our own individual and collective destiny.
Now, please understand: I am not suggesting that human volition objectively exists. That I do not know. But I do suggest that we humans behave as if volition exists for us, and so we link volition to responsibility and believe that actions have consequences. We also believe, of course, that only action leads to guilt. And so the very idea of holding somebody responsible for actions that have not yet taken place, but are only predicted, is utterly alien to our view of humans and humanity.
And that, dear listeners, is the crux of AI-based perfect predictions of the future: if they were to exist, they would unravel the very foundations humanity is built on. Rather than being the masters of our lives, we would be sent back in time and become objects again, puppets on a string, stripped of volition, capacity, and purpose. That is the really dark fear of AI-based predictions becoming too good and too perfect.
Part 3: Against Predictability
07:26/14:18
So what can we do about this danger? And should we take the threat seriously, given that the chances of it happening are perhaps slim?
It's true that we are not likely to face close-to-perfect AI-based predictions any time soon. But if the threat is so foundational to humanity, then it makes sense to worry early and ensure that we have the right measures in place to mitigate the threat before things turn dark, and before we find ourselves on a slippery slope towards "Minority Report".
But what should we do?
In many instances, better predictions are a good thing, and we want our predictions to get better. But in some areas, those touching upon basic human liberty and fundamental decisions, such as decisions about freedom or imprisonment, human volition needs to be preserved. In these narrow areas, we should consider prohibiting the prediction of the future. There is no doubt that such a decision increases risk and uncertainty and lowers the quality of decisions. But in return it preserves the essence of what it means to be human.
Deliberately choosing ignorance over knowledge, even in very narrow areas, is a very blunt instrument. Such a measure must be truly justified. These areas must be the exception, not the rule. And the exceptions must be narrow. But preserving these areas of "prediction-lessness" is key to the survival of humanity.
So why should we bother with this problem now? What has changed to make the threat real? Never before in human history did we have to worry about being able to predict the future with precision. We simply did not have the tools to do so, and hence it did not make sense to worry. It's similar to how, before our digital age, we did not have to worry much about remembering too much, because as humans we forget. But as our digital tools made remembering easy, cheap, and ubiquitous, we needed to have a conversation about the limits of remembering and the importance of forgetting. Because as humans we have to act in the present, we cannot be pinned down completely by the past.
As much as we need to discuss how to emancipate ourselves from the past, in the shadow of Big Data and AI we must also have conversations about how to preserve the future as something open, something that can be shaped and chosen by human whim rather than be the result of calculations. Because our capabilities of prediction are improving so fast, it is time to ponder the limits we want to impose on predicting.
This will not yield quick results. There is no silver bullet and no magic solution. Different societies will come up with different solutions and emphasize different things. But we can guess, I believe, that values of personal life and livelihood, as well as personal liberty and societal well-being, will be good candidates for preserving prediction-less spaces, when they are combined with questions about individual action and personal choice.
For instance, we may want to prohibit using the latest medical technology to predict the gender of embryos in the first weeks of pregnancy, except in exceptional medical situations, because such knowledge may lead to gender-based abortions. Or we may want to limit the use of AI to predict which profession a student should go into because he or she is foreseen to be particularly good at it. Or we may want to stop predicting whether somebody will die of cancer in the next twelve months. All these and many similar cases raise important questions about how open or how defined we want our future to be.
As AI and Big Data help us predict, we must never forget that they are tools. It is entirely up to us humans to decide whether, when, and for what purpose to utilize these tools. The tools aren't the problem; they just force us to think about the constraints we want to impose on them, the legal guardrails, and the regulatory safety measures we want to put in place. As we gain more power to shape our future, we cannot abstain from embracing that power - but we must do so deliberately, thoughtfully, and humbly.
I hope you enjoyed this episode on the dark sides of AI, in which I described the threat posed by AI not as making erroneous predictions, but as making predictions so perfect that humanity loses volition and its capacity to act freely. The danger then is far worse than a prediction going wrong; the danger is that humanity loses its agency and is relegated to mere objects on the playing field of the Gods, as in the old days of superstition and misbelief. Preserving our role as subjects of our own destiny is crucial to retaining the essence of humanness; that's the goal. Constraining the power of predictions in certain sensitive areas and issues is, I suggest, the right means to achieve that goal.
Heady stuff today. Next time we tackle a far more immediate and practical dark side - information privacy - and discuss how information privacy can be preserved in a Big Data and AI era, and how it may have to be rethought. I hope you'll join me again.