[7] Since sentience involves the ability to experience ethically positive or negative (i.e., valenced) mental states, it may justify welfare concerns and legal protection, as with animals.
[11][12][13][14] In his 2001 article "Artificial Consciousness: Utopia or Real Possibility?", Giorgio Buttazzo says that a common objection to artificial consciousness is that, "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can 'no longer be reprogrammed', from rethinking), emotions, or free will."
Since the original neurons and their silicon counterparts are functionally identical, the brain’s information processing should remain unchanged, and the subject would not notice any difference.
Critics of artificial sentience object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organization.
One would need access to unpublished information about LaMDA's architecture, an understanding of how consciousness works, and a way to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain. [...]"
Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though the topic has often been a theme in fiction.
Sentience is generally considered sufficient for moral consideration, but some philosophers consider that moral consideration could also stem from other notions of consciousness, or from capabilities unrelated to consciousness,[28][29] such as: "having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes."
[30] David Chalmers also argued that creating conscious AI would "raise a new group of difficult ethical challenges, with the potential for new forms of injustice".
[31] Enforced amnesia has been proposed as a way to mitigate the risk of silent suffering in locked-in conscious AI and certain AI-adjacent biological systems like brain organoids.
The aim of AC research is to determine whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer.
Some authors use the word sentience to refer exclusively to valenced (ethically positive or negative) subjective experiences, like pleasure or suffering.
Awareness includes creating and testing alternative models of each process, based on information received through the senses or imagined, and is also useful for making predictions.
[33] Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".
[44] The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander.
According to this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of, but the role it plays within the overall cognitive system.
It allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as the substrate instantiates the right functional relationships.
[48] This theory analogizes the mind to a theater, with conscious thought being like material illuminated on the main stage.
The global workspace functions as a hub for broadcasting and integrating information, allowing it to be shared and processed across different specialized modules.
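A minimal sketch of this competition-and-broadcast cycle, in Python, might look like the following; the module names and the random salience policy are illustrative assumptions, not part of Baars's theory:

```python
import random

class Module:
    """A specialized, unconscious processor competing for access to the workspace."""
    def __init__(self, name):
        self.name = name
        self.inbox = []                      # broadcasts received from the workspace

    def propose(self):
        # Offer a candidate content with a salience score (random here, for brevity).
        return {"source": self.name, "salience": random.random()}

    def receive(self, broadcast):
        self.inbox.append(broadcast)

class GlobalWorkspace:
    """Selects the most salient proposal and broadcasts it to every module."""
    def __init__(self, modules):
        self.modules = modules

    def cycle(self):
        proposals = [m.propose() for m in self.modules]
        winner = max(proposals, key=lambda p: p["salience"])   # the competition
        for m in self.modules:               # broadcast: globally available content
            m.receive(winner)
        return winner

modules = [Module(n) for n in ("vision", "audition", "memory", "planning")]
workspace = GlobalWorkspace(modules)
for _ in range(3):
    print(workspace.cycle()["source"], "won the workspace and was broadcast")
```

Each cycle, the winning content becomes available to every module at once, mirroring the theory's picture of conscious contents as those "on stage" for the whole system.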
It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread."
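Taking the quoted description literally, a codelet can be sketched as a small thread watching for its own trigger pattern; the cue-matching scheme below is a simplification invented for illustration, not LIDA's actual implementation:

```python
import threading
import queue

class Codelet(threading.Thread):
    """A special-purpose mini-agent running as its own thread, per the quoted description."""
    def __init__(self, cue, results):
        super().__init__()
        self.cue = cue
        self.inbox = queue.Queue()            # percepts broadcast to this codelet
        self.results = results

    def run(self):
        while True:
            item = self.inbox.get()
            if item is None:                  # sentinel: shut the codelet down
                return
            if self.cue in item:              # react only to this codelet's pattern
                self.results.put(f"{self.cue}-codelet fired on {item!r}")

results = queue.Queue()
codelets = [Codelet(cue, results) for cue in ("color", "shape")]
for c in codelets:
    c.start()
for percept in ("red color patch", "round shape outline"):
    for c in codelets:                        # broadcast each percept to every codelet
        c.inbox.put(percept)
for c in codelets:
    c.inbox.put(None)
    c.join()
while not results.empty():
    print(results.get())
```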
[53][54] The CLARION cognitive architecture models the mind using a two-level system to distinguish between conscious ("explicit") and unconscious ("implicit") processes.
It can simulate various learning tasks, from simple to complex, helping researchers study how consciousness might work in the setting of psychological experiments.
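A toy illustration of the two-level split might pair symbolic rules (explicit) with learned action strengths (implicit); the rules and weights below are invented for illustration, not CLARION's actual mechanisms:

```python
# Explicit level: symbolic, verbalizable rules (the "conscious" side of the split).
explicit_rules = {("light", "red"): "stop", ("light", "green"): "go"}

# Implicit level: graded action strengths learned from experience (the "unconscious" side).
implicit_weights = {"stop": 0.2, "go": 0.8}

def decide(observation):
    if observation in explicit_rules:            # a matching rule fires first
        return explicit_rules[observation], "explicit"
    action = max(implicit_weights, key=implicit_weights.get)
    return action, "implicit"                    # otherwise habit strength wins

print(decide(("light", "red")))     # ('stop', 'explicit')
print(decide(("light", "amber")))   # ('go', 'implicit') -- no rule applies
```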
The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics carried out at the Hong Kong Polytechnic University.
This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs.
Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."
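Haikonen's proposal is explicitly non-programmed, so any code can only simulate the general idea; the following sketch illustrates learned cross-modality association with a simple Hebbian outer-product memory, an assumption standing in for his associative neurons:

```python
import numpy as np

rng = np.random.default_rng(0)
visual = rng.choice([0.0, 1.0], size=16)     # a distributed "visual" signal vector
auditory = rng.choice([0.0, 1.0], size=16)   # a distributed "auditory" signal vector

# Hebbian co-occurrence: units active together across modalities become associated.
W = np.outer(auditory, visual)

# Later, the visual signal alone evokes the associated auditory pattern
# (a crude stand-in for "cross-modality reporting").
recalled = (W @ visual > 0).astype(float)
print("recall matches the auditory signal:", np.array_equal(recalled, auditory))
```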
[60][61] Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").
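One way to picture the combination is an agent that rehearses candidate actions against an internal forward model before committing to one; the toy world, goal, and scoring below are invented for illustration and are not Shanahan's model:

```python
GOAL = 3
ACTIONS = (-1, 1, 2)

def forward_model(state, action):
    """Internal simulation: predict the next state without acting in the world."""
    return state + action

def score(state, depth):
    """Value of a simulated state: prefer reaching the goal, and reaching it early."""
    if state == GOAL:
        return depth                      # unused depth rewards shorter paths
    if depth == 0:
        return -abs(GOAL - state)
    return max(score(forward_model(state, a), depth - 1) for a in ACTIONS)

def imagine(state, depth=2):
    """Rehearse each action against the forward model; pick the best-scoring one."""
    return max(ACTIONS, key=lambda a: score(forward_model(state, a), depth - 1))

state = 0
while state != GOAL:
    chosen = imagine(state)               # off-line rehearsal ("imagination")
    state = forward_model(state, chosen)  # then commit the winning action
    print(f"acted {chosen:+d}, now at state {state}")
```

In a fuller version of the idea, the winning result of such internal rehearsal is what gets broadcast through the global workspace.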
[62][2][3][63] Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI),[64][65][66] or the so-called "Creativity Machine". In this system, computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies.
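The noise-and-critic loop can be caricatured in a few lines: transiently perturb the weights of a "trained" network so it confabulates variants of its output, and let a critic keep the best. The network, data, and critic below are invented for illustration, not the patent's design:

```python
import numpy as np

rng = np.random.default_rng(42)
W = np.array([[2.0, -1.0], [0.5, 1.5]])     # "trained" weights of a tiny linear net
seed_input = np.array([1.0, 1.0])

def generate(weights, x):
    return weights @ x                       # the generator's output pattern

def critic(pattern):
    """Score candidate patterns; here, prefer outputs whose components nearly agree."""
    return -abs(pattern[0] - pattern[1])

ideas = []
for _ in range(200):
    noise = rng.normal(scale=0.5, size=W.shape)   # transient synaptic perturbation
    candidate = generate(W + noise, seed_input)   # a "confabulated" output
    ideas.append((critic(candidate), candidate))

best_score, best_idea = max(ideas, key=lambda t: t[0])
print("baseline output:", generate(W, seed_input))
print("best confabulation:", best_idea, "score:", round(best_score, 3))
```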
[75][76] In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000, was instructed to conceal the true purpose of the mission from the crew.
[79][77] In Greg Egan's short story "Learning to Be Me", a small jewel is implanted in people's heads during infancy.
To prevent the mind from deteriorating with age, and as a step toward digital immortality, adults undergo surgery to give control of the body to the jewel and then remove the brain.