Are machines conscious?
The question is becoming more urgent to explore as humans spend more time interacting with chatbots in a variety of contexts, says William Marshall.
Some people may avoid the challenges involved in human relationships, for example, by forming a connection with a "perfectly well-behaved chatbot who always does exactly what you want, always laughs at your jokes and is the best possible friend," says the Associate Professor of Mathematics and Statistics.
"It will not take long to find somebody who has deep conversations with ChatGPT or whatever flavour of large language model and believe there's something like inner life there," he says.
Marshall and his team are striving to assess consciousness using a combination of scientific and psychological methods.
"I can program a computer that responds to your questions and can even say, Yes, I am conscious. I feel happy. I feel pain,' so you can't just ask the computer if they are conscious," says Marshall. "We want to build the scientific knowledge, machinery and framework that allows for experiments to be done to determine the presence or absence of machine consciousness."
Last November, the Brock-led team was one of three groups to receive the 2025 Linda G. O'Bryant Noetic Sciences Research Prize from the U.S.-based Institute of Noetic Sciences (IONS).
In their winning essay, "Evaluating Artificial Consciousness through Integrated Information Theory," Marshall and researchers from the University of Wisconsin describe a possible approach to researching machine consciousness.
The team's proposed research model begins by studying the essential properties of consciousness and how they manifest physically according to integrated information theory (IIT).
"Foundational to IIT is the fact that consciousness exists, immediately and irrefutably, as recognized by Rene Descartes', I think, therefore I am,'" says Marshall.
Conscious experiences are physically manifested through "cause-effect power," he says, which is the capacity of something to make a difference to itself and to others, and to be affected by the actions of others.
From there, the team proposes using complex mathematical equations to measure and evaluate cause-effect power in humans and extrapolate these findings to machines, he says.
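The prize-winning essay works at the level of mathematics rather than code, but the flavour of such a measurement can be sketched. The short Python program below is an illustration only, not the team's formalism: it assumes a toy two-node system in which each node copies the other's previous state, and scores "integration" as the information the whole system carries about its own next state beyond what its parts carry on their own.

```python
# Toy illustration (not the team's actual method): compare how well the whole
# two-node system predicts its next state versus the two nodes treated
# independently, using mutual information over one time step.
# Assumed system: node A copies B's previous state, node B copies A's (a swap).

from itertools import product
from math import log2

def transition(state):
    """Deterministic update: (a, b) -> (b, a)."""
    a, b = state
    return (b, a)

states = list(product([0, 1], repeat=2))  # uniform prior over (a, b)

def mutual_information(pairs):
    """Mutual information (bits) between inputs and outputs,
    where `pairs` is a list of equally likely (input, output) tuples."""
    n = len(pairs)
    p_joint, p_in, p_out = {}, {}, {}
    for x, y in pairs:
        p_joint[(x, y)] = p_joint.get((x, y), 0) + 1 / n
        p_in[x] = p_in.get(x, 0) + 1 / n
        p_out[y] = p_out.get(y, 0) + 1 / n
    return sum(p * log2(p / (p_in[x] * p_out[y]))
               for (x, y), p in p_joint.items())

# Information the whole system carries about its own next state.
whole = mutual_information([(s, transition(s)) for s in states])

# Information each node carries about its own next state when cut off
# from the other node.
part_a = mutual_information([(s[0], transition(s)[0]) for s in states])
part_b = mutual_information([(s[1], transition(s)[1]) for s in states])

# A crude "integration" score: what the whole does beyond its parts.
print(f"whole: {whole:.2f} bits, parts: {part_a + part_b:.2f} bits, "
      f"excess: {whole - (part_a + part_b):.2f} bits")
```

In this toy case the whole system fully determines its next state (two bits), while each node cut off from the other predicts nothing (zero bits), capturing the intuition that the whole has cause-effect power its parts cannot account for. IIT's actual measure, Φ, evaluates that irreducibility far more carefully, across candidate partitions of the system.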
Marshall says many people assume intelligence equals consciousness, since the two go hand in hand in humans. But the presence of consciousness in machines would go beyond artificial intelligence, where algorithms and statistical models guide computer systems in learning and adapting without following explicit instructions.
If it turns out machines are conscious, Marshall says that raises a host of ethical questions and creates the need for rights and rules governing how they are treated.
"If you think your chatbot is conscious, how do you treat this? Is it OK to turn it off? Can you delete it? Can you wipe its memory? Will that cause it pain?" he says.










