by Adarsh Verma

Short Bytes:
As we advance more and more in artificial intelligence and related technologies, we must solve some complex ethical problems. One such dilemma arises with self-driving cars and how they should act in the case of an unavoidable accident. Should they kill more people on the road to save the car's occupants? Or should they protect others on the road by sacrificing the lives of the occupants?
Every big automobile company is arming itself with the technology it needs to produce autonomous vehicles. Even companies like Google and Apple are hiring engineers and designers to make their ‘next big product’.
Self-driving cars will soon dominate the roads, and as the tests suggest, autonomous vehicles take extra safety precautions. According to the reports, the very few minor accidents that took place during road tests were caused by the mistakes of other vehicles or people on the road. This brings us to a very interesting scenario: the event of an unavoidable accident.
How should a car act in such a scenario? Should it minimize the loss of life on the road at the cost of the occupants’ lives, or should it protect the car's occupants at all costs?
These ethical questions need to be answered when we talk about a future with self-driving cars.
“Our results provide but a first foray into the thorny issues raised by moral algorithms for autonomous vehicles,” says UAB and Oxford University scholar and bioethics expert Ameen Bargh, who advised changing the course of the car to reduce the loss of life.
However, deontologists argue that “some values are simply categorically always true”.
“For example, murder is always wrong, and we should never do it, even
if shifting the trolley will save five lives, we shouldn’t do it
because we would be actively killing one,” Bargh said.
The members of UAB’s Ethics and Bioethics teams are doing a lot of work to deal with such questions. One way to tackle the problem is to program the car to act in a way that minimizes the loss of life. The results of the tests are interesting: people seem comfortable with the idea that self-driving cars should be programmed to reduce the death toll.
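To make that idea concrete, here is a minimal sketch in Python of what a purely utilitarian decision rule might look like. All names and numbers are hypothetical; a real autonomous-driving system would involve far more than this.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical evasive option available to the car."""
    name: str
    expected_pedestrian_deaths: float  # illustrative estimates, not real data
    expected_occupant_deaths: float

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    # A purely utilitarian rule: pick the maneuver that minimizes
    # the total expected loss of life, regardless of who bears the risk.
    return min(
        options,
        key=lambda m: m.expected_pedestrian_deaths + m.expected_occupant_deaths,
    )

# Illustrative trolley-style scenario: stay on course (risking pedestrians)
# vs. swerve (risking the occupant).
options = [
    Maneuver("stay_on_course", expected_pedestrian_deaths=5.0,
             expected_occupant_deaths=0.0),
    Maneuver("swerve", expected_pedestrian_deaths=0.0,
             expected_occupant_deaths=1.0),
]
print(utilitarian_choice(options).name)  # -> "swerve"
```

A deontological rule, by contrast, might forbid ever selecting a maneuver that actively kills someone, even when doing so lowers the total death toll, which is exactly the disagreement Bargh describes above.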
As we advance more and more in AI and technology, we must answer these ethical and philosophical questions and find ways to arrive at a worthy solution.
Source: MIT Technology Review
What do you think? Should self-driving cars be programmed to kill? Tell us in the comments below.