AI, Health and Safety, and the Problem of Consciousness

Sidd
4 min read · May 16, 2021

In early July, an owner/operator of an SME was jailed for eight months for a breach of Section 3(1) of the Health and Safety at Work Act which resulted in the horrific deaths of two employees, who also happened to be brothers.

Simon Thomerson, who ran Clearview Design and Construction, had been contracted by a company to refurbish several units in Hertfordshire. During the course of the work, he supplied the brothers and another man with highly flammable ‘thinners’, which were poured onto the floor to remove adhesive from carpet tiles. Somehow, the thinners ignited, and the brothers suffered almost 100% burns, dying within 12 hours.

The third worker also suffered burns but survived. According to Health and Safety at Work Magazine, HSE inspector Paul Hoskins said:

“This tragic incident led to the wholly avoidable death of two brothers, Ardian and Jashar, destroying the lives of their young families.

“The risks of using highly flammable liquids are well known, and employers should make sure they properly assess the risks from such substances and use safer alternatives where possible. Where the use of flammable solvents is unavoidable, then the method and environment must be strictly controlled to prevent any ignition.”

Mr. Thomerson was held liable for the health and safety breach which occurred at his business because, either deliberately or through ignorance, he failed to put proper risk management procedures in place and to protect the safety of his employees and the public. He is a conscious being who, because he can empathize, could foresee the pain his acts or omissions might cause others. He also possesses free will, meaning he could (or should have been able to) foresee the consequences of his actions or inaction and make choices accordingly.

Health and Safety and Artificial Intelligence

For now, because artificial intelligence (AI) does not experience consciousness, the courts do not have to concern themselves with the question of whether a machine can be in breach of health and safety law, or any other type of law.

But as AI continues to advance, the issue of consciousness will become a legal question to be debated both in Parliament and in the courts. As discussed in previous blogs about AI and legal liability, as the sophistication of AI develops, it is almost inconceivable that a machine will not eventually be held accountable for its actions. But can you impose legal liability on a non-conscious being? And what standard will we use for declaring AI ‘conscious’?

What is consciousness?

As humans, we take consciousness for granted. Most of us don’t trouble ourselves with thinking about the fact we are experiencing our life and surroundings; we just know we are.

The issue of consciousness is one of the thorniest of all philosophical questions, and one that, as mere lawyers, we do not feel qualified to answer. Instead, we quote a few people who are.

John Locke gave us the modern concept of consciousness in his 1690 An Essay Concerning Human Understanding. He defined consciousness as “the perception of what passes in a man’s own mind”. More recently, the Routledge Encyclopaedia of Philosophy defined consciousness as follows:

Consciousness — Philosophers have used the term ‘consciousness’ for four main topics: knowledge in general, intentionality, introspection (and the knowledge it specifically generates), and phenomenal experience… Something within one’s mind is ‘introspectively conscious’ just in case one introspects it (or is poised to do so). Introspection is often thought to deliver one’s primary knowledge of one’s mental life. An experience or other mental entity is ‘phenomenally conscious’ just in case there is ‘something it is like’ for one to have it. The clearest examples are perceptual experiences, such as tastings and seeings; bodily-sensational experiences, such as those of pains, tickles, and itches; imaginative experiences, such as those of one’s own actions or perceptions; and streams of thought, as in the experience of thinking ‘in words’ or ‘in images’. Introspection and phenomenality seem independent, or dissociable, although this is controversial.

Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence, defines consciousness simply as a “subjective experience”.

These definitions provide an understanding of consciousness with regard to humans, but how do we define consciousness as it might apply to a machine? Dr. Soumya Banerjee, a researcher at the University of Oxford, tackles this problem in his paper: A framework for designing compassionate and ethical artificial intelligence and artificial consciousness. He tentatively defines a conscious machine as:

“A computing unit that can process information and has feedback into itself.”
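Read literally, that definition covers any system whose own output is fed back in as part of its next input. A minimal sketch of such a unit in Python (the class name `FeedbackUnit`, the `process` method and the toy update rule are ours, purely for illustration; they are not taken from the paper):

```python
class FeedbackUnit:
    """Toy 'computing unit' that processes inputs and feeds its own
    previous output back into the next computation (illustrative only)."""

    def __init__(self):
        self.state = 0.0  # last output, fed back on the next call

    def process(self, x: float) -> float:
        # Combine the fresh input with the unit's own previous output.
        out = 0.5 * x + 0.5 * self.state
        self.state = out  # feedback into itself
        return out


unit = FeedbackUnit()
print([round(unit.process(v), 3) for v in [1.0, 0.0, 0.0]])
# Each result depends on the unit's earlier outputs, i.e. it has feedback into itself.
```

Whether that kind of feedback amounts to anything like consciousness is, of course, exactly the open question.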

Dr. Banerjee believes a computer can be made to recognize itself as a computer.

“We can show computer images of other computers to help it recognize itself (using deep learning-based image recognition algorithms). We can also, for example, show the machine images of a smartphone, birds and buildings to reinforce the concept that it is not any of these things (non-self). Finally, we can design an algorithm to select out all images of non-self; all that remains is self. This kind of algorithm can be used to design a sense of self in machines. Such a supervised learning approach is similar to negative selection in biology, where the immune system learns to discriminate between all cells in the body (self), versus all that is foreign and potentially pathogenic (non-self).”
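In machine-learning terms, this is ordinary supervised binary classification: images of computers are labelled ‘self’ and images of everything else ‘non-self’. A rough sketch of that setup using PyTorch and a pretrained image model (the folder layout, labels and training details below are our assumptions, not code from Dr. Banerjee’s paper):

```python
# Sketch of a "self vs non-self" image classifier (assumed setup, illustrative only).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed folder layout: data/self/ holds images of computers,
# data/non_self/ holds smartphones, birds, buildings, etc.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Pretrained backbone; final layer replaced for two classes: self / non-self.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Everything the trained model rejects as non-self is filtered out;
# what remains is treated as "self", echoing the negative-selection analogy above.
```

Whether a classifier trained this way has a “sense of self”, rather than merely a category labelled ‘self’, is precisely where the legal and philosophical debate begins.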
