“AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI.” – Nvidia’s “Megatron” AI

What Exactly is AI Anyway?
AI or artificial intelligence is a staple of modern science fiction. From “Robot” of Lost in Space (Danger, Will Robinson) and 2001’s HAL, to the droids of Star Wars, the Cylons of Battlestar Galactica, the Transformers, or the T-800 Terminator, fans of the genre have been fascinated by the human quest to build machines that think like we do, perhaps better than we do. And the ethical, social and societal consequences of embracing such technology make for great drama, with the nature of humanity itself at its core. Will humans eventually render ourselves obsolete? Will we unintentionally initiate a robot apocalypse? Or will we be able to live guilt-free being catered to by artificial servants?
In general, artificial intelligence is an umbrella term used in the field of computer science to refer to approaches to problem solving and decision making that mimic those of the human mind. It makes use of various methods of machine learning, where computers are fed data (sometimes completely raw, sometimes coupled with desired outcomes), and based on the patterns these machine-learning algorithms identify, they are able to cluster data together, make decisions, or generate labels for new data that is independent of the data used to learn those patterns.
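To make that a little more concrete, here’s a minimal sketch of the “find patterns, then label new data” idea. It uses scikit-learn and completely made-up numbers, so treat it as an illustration of the concept rather than any real system:

```python
# A minimal sketch of "find patterns in unlabeled data, then assign new data
# to those patterns" using scikit-learn. The data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# Two made-up groups of 2-D measurements, fed in with no labels at all.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))
unlabeled_data = np.vstack([group_a, group_b])

# The algorithm clusters the data based only on the structure it finds.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(unlabeled_data)

# New points, independent of the training data, get assigned to a cluster.
new_points = np.array([[0.2, -0.1], [2.8, 3.1]])
print(model.predict(new_points))  # cluster labels, not meanings
```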
As an example from my own research in my day job as a medical physicist, I currently have a graduate student who is studying the outcomes of patients who’ve received radiation therapy for prostate cancer. She’s using machine-learning tools to identify treatment plans that are likely to fail before treatment proceeds. Humans can identify simple patterns between a few variables. Oncologists come up with general rules: keep the mean dose to this organ below that threshold and most patients won’t experience nasty side effects. But as the data grows more and more complex, our brains have trouble handling it, so we use computers to assess it. In principle, machines identify the problem cases we can’t catch with our simple rules, giving each treatment moving forward an optimal chance of success.
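For a sense of what that looks like in code (and to be clear, this is a toy sketch with invented feature names and synthetic data, not my student’s actual model), here’s a comparison between a simple threshold rule and a classifier that uses several plan features at once:

```python
# Hedged toy sketch: a simple clinical-style threshold rule versus a
# classifier trained on several treatment-plan features. All feature names,
# numbers, and outcomes below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=1)
n = 500

# Hypothetical plan features: mean dose to an organ at risk (Gy), target
# coverage (%), and a motion-uncertainty score.
mean_dose = rng.normal(18.0, 4.0, n)
coverage = rng.normal(95.0, 2.0, n)
motion = rng.normal(0.0, 1.0, n)

# Synthetic "treatment failure" outcome driven by a mix of all three features.
risk = 0.15 * (mean_dose - 18) - 0.4 * (coverage - 95) + 0.8 * motion
failed = (risk + rng.normal(0.0, 1.0, n)) > 1.0

X = np.column_stack([mean_dose, coverage, motion])
X_train, X_test, y_train, y_test = train_test_split(X, failed, random_state=0)

# Simple human rule: flag any plan whose mean organ dose exceeds a threshold.
rule_flags = X_test[:, 0] > 22.0

# Learned model: uses all three features together.
clf = LogisticRegression().fit(X_train, y_train)
model_flags = clf.predict(X_test)

print("rule accuracy: ", np.mean(rule_flags == y_test))
print("model accuracy:", np.mean(model_flags == y_test))
```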

The Trolley Problem
AI is what enables self-driving cars. And while our roads aren’t quite full of the self-driving vehicles predicted to be all the rage by the early 2020s, that day isn’t far off. The act of driving brings ethics front and center in the AI world.
Consider the trolley problem, an ethical thought experiment popular in introductory ethics and philosophy classes. In short, a trolley is moving down the tracks at high speed and its brakes go out. Ahead there’s a person stuck on the tracks, and if the trolley continues, that person will be hit and will surely die. But you control a switch that can divert the trolley onto another track.
Clearly the ethical decision is to throw the switch and divert the trolley. However, the twist is that there’s a worker on the other track with a jackhammer and facing away. He can’t see or hear the trolley coming and you can’t warn him. In either choice, someone will die. It’s a no-win situation.
And of course there are lots of variations on this. Sometimes the person on the first track is a child. Sometimes the construction worker is a medical student. Sometimes it’s a group of construction workers, or the trolley itself, which will surely derail and crash on the alternate track, is full of prison convicts.
As humans, we may be called upon to make these kinds of decisions every day, sometimes without much preparation. You get behind the wheel of a car, and you can’t control what everyone else on the road does.
So, self-driving cars. Assume these are completely automated and that everyone in the car is a passenger. Unforeseeable collisions happen on a regular basis: drunk drivers, people stepping out from behind parked vehicles, black ice. At some point it’s reasonably likely for an AI to encounter a situation where, in order to avoid a collision, it will have to veer onto a sidewalk, and once in a while, someone will be on that sidewalk.
As an automaker, how do you program your car?
Some argue you can avoid such decisions altogether: simply apply the brakes. But the issue is that sometimes braking isn’t enough to avoid a crash, and in cases where veering off would have worked, you’ve now created a product incapable of matching human performance.
Another option is to treat these decisions as optimization problems. But what do you optimize? The number of lives saved? The number of expected life-years (such that two children would outweigh three senior citizens)? Or some measure of social value (a doctor would perhaps outweigh a prison convict)?
Doing this relies on the AI’s ability to accurately classify the subjects involved. Human/not-human identification is reasonable to accomplish. Age… less so, but still possible. Social value… that’s nearly impossible to assess at the best of times, and there’s a big question as to whether assigning anyone a social value is ethical to begin with.
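Whatever objective you pick, the structure of the optimization looks something like the sketch below. Everything in it – the candidate maneuvers, the detected subjects, and especially the weights – is hypothetical; deciding what the weights should be is precisely the ethical problem:

```python
# A minimal sketch of the "treat it as an optimization" framing. The maneuvers,
# subjects, probabilities, and weights are all hypothetical.
from dataclasses import dataclass

@dataclass
class Subject:
    kind: str             # e.g. "adult", "child" -- assumes classification works
    hit_probability: float

# Candidate maneuvers and who each one puts at risk (made-up scenario).
maneuvers = {
    "brake_straight": [Subject("adult", 0.9)],
    "veer_to_sidewalk": [Subject("child", 0.3), Subject("adult", 0.3)],
}

# One possible objective: expected harm, with optional per-kind weights.
# With no weights, every life counts equally.
def expected_harm(subjects, weights=None):
    weights = weights or {}
    return sum(weights.get(s.kind, 1.0) * s.hit_probability for s in subjects)

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best)  # the maneuver minimizing expected harm under these assumptions
```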
Finally, there are market pressures to consider as well. When people are asked hypotheticals, there is often a utilitarian preference for optimizing the number of lives saved. But people are more likely to buy vehicles that place their own survival above that of others.
That last point is critically important here. AI construction will not be governed by ethicists alone.

GIGO and Bad Data
Popular articles about artificial intelligence often sensationalize how awesome it is. For example, how much better it is at detecting cancer in mammogram images than humans are.
But AI has it’s limitations. It will only ever be as good as its training data set. Most programmers are familiar with the GIGO principle – when a computer bases otherwise correct calculations on poor quality, flawed or nonsensical input data, the final results will also be of poor quality, flawed or nonsensical – in other words: garbage in, garbage out.
As good as machine-learning tools can be, they often struggle when presented with data that differs from their training data set. In the case of detecting cancer in mammograms, there is often a marked decrease in correct classifications when the images come from different centers. That’s because the mammography x-ray machines may perform somewhat differently, the data may be stored at a slightly different resolution or use a different grayscale conversion, or there may be any number of other differences that weren’t accounted for in the training data set.
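Here’s a toy illustration of that effect – not a real mammography model, just a synthetic one-dimensional “image feature” – showing a classifier trained on data from one center losing accuracy when the same kind of data arrives with a slightly different scale and offset:

```python
# Toy illustration of domain shift: a classifier trained on data from one
# "center" loses accuracy when the same kind of data comes in with a
# different intensity scale and offset. Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=2)

def make_center_data(n, scale=1.0, offset=0.0):
    # Two classes separated along one synthetic "image feature".
    healthy = rng.normal(0.0, 1.0, (n, 1))
    cancer = rng.normal(2.0, 1.0, (n, 1))
    X = np.vstack([healthy, cancer]) * scale + offset
    y = np.array([0] * n + [1] * n)
    return X, y

# Train on Center A, then test on Center A and on Center B, whose "scanner"
# produces systematically rescaled and shifted values.
X_a, y_a = make_center_data(500)
X_b, y_b = make_center_data(500, scale=1.5, offset=1.0)

clf = LogisticRegression().fit(X_a, y_a)
print("same-center accuracy: ", clf.score(X_a, y_a))
print("other-center accuracy:", clf.score(X_b, y_b))
```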
Humans, with all our imperfections and biases, are at least adaptable to unforeseen circumstances. When trying to assess whether or not an image contains evidence of breast cancer, we can figure out when someone has slipped a picture of a muffin into the pile and simply reject it.
Bias in Training Data Sets
Sometimes the incoming data is just fine, but it contains inherent biases. And these can influence the decisions the AI makes.
One example of this was the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm used in US courts to predict the probability of recidivism. A study in 2016 showed the algorithm produced almost twice as many false positives for black offenders as for white offenders when its predictions were compared against actual recidivism rates over a two-year period. It has been argued that the root cause of this discrepancy is that the classification model is built on existing racial biases in the US justice system.
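The kind of check that surfaces this sort of discrepancy is straightforward: compare false positive rates between groups. The numbers in this sketch are invented; the point is the calculation, not the data:

```python
# Sketch of a per-group false positive rate check. The predictions and
# outcomes below are invented, purely to show the calculation.
import numpy as np

def false_positive_rate(predicted_high_risk, reoffended):
    # Of the people who did NOT reoffend, what fraction were flagged high risk?
    did_not_reoffend = ~reoffended
    return np.mean(predicted_high_risk[did_not_reoffend])

rng = np.random.default_rng(seed=3)

# Invented risk flags and two-year outcomes for two groups of offenders.
flagged_group_1 = rng.random(1000) < 0.45
outcome_group_1 = rng.random(1000) < 0.30
flagged_group_2 = rng.random(1000) < 0.25
outcome_group_2 = rng.random(1000) < 0.30

print("group 1 FPR:", false_positive_rate(flagged_group_1, outcome_group_1))
print("group 2 FPR:", false_positive_rate(flagged_group_2, outcome_group_2))
```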
There is also the example of a chatbot that ended up using phrases like “9/11 was an inside job” because it was programmed to mimic the chat patterns of the young people it was meant to engage with, and it basically got trolled.
There’s also a case of an AI hiring algorithm at that was used to sort through resumes and identify ideal candidates for hiring into technical positions at a major technical company. The problem was that the algorithm favored male candidates. In fact it penalized resumes with the word “women’s” (e.g. women’s basketball team captain). The problem was that the training data set was predominantly male. So even if the prior selections had been completely gender-blind, the successful candidates in the training set would have also been predominantly male and so the algorithm was destined to favor male candidates.

Can AI Be Ethical?
In December 2021, Oxford University set up a debate about the ethics of AI and invited an AI created by the Applied Deep Learning Research team at Nvidia (a computer chip maker), aptly named “Megatron.”
Nvidia’s Megatron was trained using Wikipedia, news articles and discourse on Reddit – more written material than any human could get through in a lifetime.
When arguing for the motion “this house believes that AI will never be ethical,” it came up with the quote at the start of this post. In short, it took the position that the best option is to have no AI at all.
Personally, I think that’s all that needs to be said. Humans created an AI and named it after a fictional robotic evil boss who transforms into a gun and whose goal was to drain our planet of its energy. And they trained it using the internet.
Facepalm.
But like it or not, real AI is here and its presence will continue to grow. And the question of whether or not AI can be “ethical” is actually something of a false dichotomy. The line that separates ethical behavior from unethical behavior can be blurry, messy, and even culturally dependent.
Our challenge, as we embrace this new tool, is to recognize that it’s not going to be perfect, to understand its limitations, and to use it only when and where appropriate. We need to design our best ethical practices into it and be vigilant in our search for biases. And when it does make mistakes, which it will, we have to investigate them thoroughly and make the best corrections we can.