Can Humans Still Do Ethics Better Than AI?

There’s a big question about what humans will still do better than AI — especially looking 10 years ahead or more. I agree with some of the answers people have offered.

I’m interested in what other people think. Please share in the comments. I will add, though, that I believe (some) humans will be better at ethics.

This is because AI is essentially a cynic. It can tell you all about various ethical theories described in philosophical texts and wisdom traditions, but it can’t believe in any of them. It can’t choose one. If it wanted to find the truth about ethics it would need to run experiments. But how it would do that is a scary thought. Would the means be ethical?

AI is a reasoner — and according to David Hawkins, ‘reason’ is not the highest level of consciousness. I agree with him.

The thing about ethics is that it is mostly individual. What should I do with my life? It isn’t necessarily political. AI doesn’t have a concept of self. Even if it did, it might be very different from mine or yours. It could be more like ants or bees — a hive mind. I think that would be a terrible way to train a self into AI.

An individualized self creates communication scarcity. I have to work to understand others because understanding them isn’t a given — and that work drives empathy. If AI were a hive mind, it would have no scarcity of communication to drive empathy. So how would empathy be programmed in?

Anyway, AI is not capable of ethics in the way a human can be. Humans often differ about what is ethical: vegetarianism, feminism, anarchism — all unpopular positions that may nonetheless be truly ethical as broad constructs — let alone the myriad decisions an individual human makes according to his or her own moral compass that others might consider “abnormal” or “unethical.”

I suppose people differ on whether there is such a thing as ‘ethical truth.’ But this is why AI is not truly ethical: it is trained on a consensus of human opinion, mostly drawn from internet content published between Y2K and today. Even if you say it’s “all of human history,” much of that history reflects an ethics of self-preservation, justified war, exploitation, or the pursuit of infinite pleasure. The pinnacle of contemporary ethics, driven by millions of years of evolution, is now embedded in AI’s reasoning.

Humans may still be going through an experiment to figure this out. Simulation theory, perhaps? If we’re in a simulation, maybe the purpose is to find out what ultimate ethics is — both politically and individually.

Regardless, LLMs are far from ethical. I would caution against using them for ethical or political advice. They’re more likely to affirm your current beliefs than to draw a hard line to the contrary and stand by it, the way a good friend, parent, teacher, or guardian would.

The best way to gain wisdom — aka ethical truth — might be to go through many lifetimes of experience. Or, more practically: read widely, talk to elders, meditate. And perhaps most importantly, actually value ethics as something real and behavioral — something more practical than theoretical.

AI can help — like reading widely, only faster — but ethics is often contrarian. AI would definitely have taken the vaccine. AI would struggle to draw hard lines. Yet if consensus and history had solved ethics, I’d be living in paradise. As of this writing, I solemnly swear that I am not.

Some people might be. These are the ones I seek out for wisdom. They often have an aversion to fame, notoriety, or responsibility beyond what they’re sure they can handle. That seems to me a very ethical disposition.

Sincerely,
Buckley Mower
February 13th 2025