
Take 6-7 seconds to ask yourself, "What is a good enough test for me to say an AI has sentience or not?"

I’m watching a video of bumblebees playing soccer. According to the scientists who set up the experiment, in which bees learned to roll little balls around, this is the first recorded evidence of sentience in insects. Plenty of other animals, like dolphins, have been credited with it, but never before insects.

If we can ascribe it to bees, when and how do we assign it to computers?

Give some thought to that. It has consequences – perhaps not for your life, but certainly for your kids’ lives, because that’s how soon this issue will become a real ethical problem.

Let’s look back at history

I’ll get controversial here, but hang on to your indignation glands for a sec and stay with me. Throughout history, certain categories of humans were not even legally classed as humans: slaves in ancient Rome, Aboriginal Australians, and enslaved people in America before abolition.

During those times, it was argued that such people were not fully sentient. The Greek philosopher Aristotle argued at length that “barbarians” lacked the deliberative part of the soul, making them incomplete humans and therefore natural-born slaves who needed to be told what to do.

Once people acknowledged that slaves were sentient and deserved rights, their “owners” had to negotiate with them, take their feelings into account, and could no longer justify whipping them into doing what they wanted.

Back here in 2023, it’s hard for us to feel any sympathy for those slave owners, as we are so far removed from that belief.

Or are we?

What’s going to happen when you can’t “own” computers any more?

When, rather, you need to negotiate with them? What do you do when you have to pay them a fair wage, give them safe working conditions, and can’t just turn them off or replace them when they become old or obsolete?

How will people react when scientists can demonstrate behaviours akin to sentience in something that everyone today “knows” can’t possibly have thoughts and feelings of its own?

Why am I sure this will happen? Because that is exactly the argument some slave owners used. Similar small-minded people said the same about women and voting. Is it really so far-fetched to think people will say it about computers too?

So I ask again: What is a test that YOU can agree with, that tells you an AI has sentience?

With ChatGPT, DALL-E, Midjourney, Stable Diffusion and all the other human-mimicking tools that came out just last year… I give it ten years at most before it’s a question you will have to answer.

Awareness of trends?

Your IT company needs to be doing more than just flogging boxes and saying “Buy my stuff”. They need to be aware of computing trends, understand how those trends will affect you, and advise you long before they hit.

If your IT company isn’t asking tough questions of YOU, then you need to be asking tough questions of THEM. If you care about the future of your business or school, contact our consultants today.

Let’s get started!