Imagine that in the future it becomes possible to upload a brain to a computer, creating a complete digital copy of your consciousness. Except this new version is smarter than you, and over time it begins to accumulate experiences you have never had. Would you dare to do it? Why? Would that digital copy still count as you? Are you responsible for the decisions your copy makes? Should anyone have the right to a relationship with someone else's digital copy?
If your child were diagnosed with a congenital heart defect and removing a certain gene could save them, what would you do? Most parents would probably agree to the procedure.
But what if you could make a child smarter? More attractive? Should parents have the right to choose a child's sexual orientation or skin color? What if such editing were available only to the rich? What if all the other parents decided to "edit" their children, but you didn't?
Imagine you are riding in a self-driving car on a two-way road when five children suddenly run out onto the roadway. The car has three options: hit the children, swerve into a car in the oncoming lane, or crash into a tree on the side of the road. In the first case five people may die, in the second two, in the third one. How should such a car be programmed for a case like this? Should it try to save its passenger, or save as many lives as possible?
Would you be willing to get into a car that might decide to sacrifice you? Would you put your child in it? Should all self-driving cars follow the same rules, or should it be possible to pay extra for a car that saves its passenger first?
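To make the dilemma concrete, here is a minimal sketch in Python. It is purely illustrative: the names (Maneuver, choose_maneuver, prioritize_passenger) and the structure are invented for this article, not taken from any real autonomous-driving system, and the casualty numbers are simply those from the example above. The point is that two different value choices, "minimize total deaths" versus "protect the passenger first", produce opposite decisions from the same code.

```python
# Hypothetical sketch of the dilemma above; not a real autonomous-driving API.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_deaths: int      # total lives likely lost, passenger included
    passenger_survives: bool  # whether the car's own passenger is expected to survive

def choose_maneuver(options, prioritize_passenger=False):
    """Pick a maneuver under one of two crude policies:
    minimize total expected deaths, or protect the passenger first."""
    if prioritize_passenger:
        # Among maneuvers that keep the passenger alive (if any), pick the least deadly.
        safe = [m for m in options if m.passenger_survives]
        candidates = safe or options
    else:
        candidates = options
    return min(candidates, key=lambda m: m.expected_deaths)

options = [
    Maneuver("hit the children", expected_deaths=5, passenger_survives=True),
    Maneuver("swerve into the oncoming car", expected_deaths=2, passenger_survives=False),
    Maneuver("crash into the tree", expected_deaths=1, passenger_survives=False),
]

print(choose_maneuver(options).name)                             # "crash into the tree"
print(choose_maneuver(options, prioritize_passenger=True).name)  # "hit the children"
```

The code itself is trivial; the hard part is the single boolean that someone, a manufacturer, a regulator, or a paying customer, has to set before the car ever leaves the factory.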
Imagine a world of intelligent robots, machines far superior to humans, that cannot tell good from evil or justice from injustice. That would create plenty of problems. But instilling moral values in them is even harder, because we, the people, have to choose those values.
Which values should take priority? Who gets to decide which views are the most "correct"? Should every country agree on a common set of values? And can a robot be given the ability to change its mind?