
AI Consciousness is Inevitable: A Theoretical Computer Science Perspective

by AIthics, September 3rd, 2024

Too Long; Didn't Read

This study introduces the Conscious Turing Machine (CTM), a formal machine model inspired by Turing's model of computation and Baars' theater model of consciousness. By taking resource limitations into account and aligning at a high level with major scientific theories of consciousness, the CTM supports the claim that machine consciousness is inevitable.

Authors:

(1) Lenore Blum ([email protected]);

(2) Manuel Blum ([email protected]).

Abstract and 1 Introduction

2 Brief Overview of CtmR, a Robot with a CTM Brain

2.1 Formal Definition of CtmR

2.2 Conscious Attention in CtmR

2.3 Conscious Awareness and the Feeling of Consciousness in CtmR

2.4 CtmR as a Framework for Artificial General Intelligence (AGI)

3 Alignment of CtmR with Other Theories of Consciousness

4 Addressing Kevin Mitchell’s questions from the perspective of CtmR

5 Summary and Conclusions

6 Acknowledgements

7 Appendix

7.1 A Brief History of the Theoretical Computer Science Approach to Computation

7.2 The Probabilistic Competition for Conscious Attention and the Influence of Disposition on It

References

ABSTRACT

We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations. From this perspective, we develop a formal machine model for consciousness. The model is inspired by Alan Turing’s simple yet powerful model of computation and Bernard Baars’ theater model of consciousness. Though extremely simple, the model aligns at a high level with many of the major scientific theories of human and animal consciousness, supporting our claim that machine consciousness is inevitable.

1 Introduction

We study consciousness from the perspective of Theoretical Computer Science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations.


By taking resource limitations into account, the TCS perspective is distinguished from the earlier Turing Theory of Computation (TOC), in which limitations of time and space did not figure. TOC distinguishes the computable from the uncomputable; it does not distinguish the efficiently computable from the computable but intractable. [1] We highlight the importance of this distinction for tackling consciousness and related topics such as the paradox of free will.
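To see the distinction concretely, consider the following toy sketch (ours, purely illustrative, not from the paper): the same function is computed twice, once by an exhaustive search that is computable but hopelessly slow, and once by an efficient algorithm. The data and function names are made up for the example.

```python
# Illustrative toy (not from the paper): the same sorting function computed two ways.
# Both are computable in Turing's sense; only one is efficiently computable,
# which is exactly the distinction TCS adds by accounting for resource limits.
from itertools import permutations


def sorted_by_search(xs):
    """Brute force: try permutations until a sorted one appears (O(n!) time)."""
    for p in permutations(xs):
        if all(p[i] <= p[i + 1] for i in range(len(p) - 1)):
            return list(p)


def sorted_efficiently(xs):
    """Efficient: a comparison sort runs in O(n log n) time."""
    return sorted(xs)


data = [5, 3, 8, 1, 9, 2, 7, 4]
assert sorted_by_search(data) == sorted_efficiently(data)
# Both halt with the same answer, but at n = 20 the brute-force search would face
# about 20! (roughly 2.4 * 10^18) permutations, while sorted() remains effectively instant.
```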


Elsewhere (Blum & Blum, 2021), we describe the Conscious Turing Machine (CTM), a simple formal machine model of consciousness inspired in part by Alan Turing’s simple formal machine model of computation (Turing, 1937), and by Bernard Baars’ theater model of consciousness (Baars, Bernard J., 1997). In (Blum & Blum, 2022), we consider how a CTM could exhibit various phenomena associated with consciousness (e.g., blindsight, inattentional blindness, change blindness) and present CTM explanations that agree, at a high level, with the cognitive neuroscience literature.


In contrast to Turing, we take resource limitations into account, both in designing the CTM model and in explaining how those limitations affect feelings of consciousness. Our perspective differs even more: what gives the CTM its feeling of consciousness is not its input-output map, nor its computing power, but what’s under the hood.[2]


In this chapter we take a brief look under the hood.


In addition, we show how the CTM naturally aligns with and integrates features considered key to human and animal consciousness by many of the major scientific theories of consciousness.[3] These theories consider different aspects of consciousness and often compete with each other (Lenharo, 2024). Yet their alignment with the CTM at a high level helps demonstrate their compatibility and/or complementarity.


But, even more, their alignment with the CTM, a simple machine model that exhibits phenomena associated with consciousness, supports our claim that a conscious AI is inevitable.


David Chalmers’ introduction of the Hard Problem (Chalmers, 1995) helped classify most notions of consciousness into one of two types. The first type, variously called access consciousness (Block, 1995) or functional (computational) or cognitive consciousness, we call conscious attention. The second type (associated with the Hard Problem) is called subjective or phenomenological consciousness and is generally associated with feelings or qualia. We call it conscious awareness. Chalmers’ Hard Problem can be viewed as a challenge to show that subjective consciousness is “functional”.


We contend that consciousness writ large requires both conscious attention and conscious awareness, each informing the other to various degrees. We contend that a machine that interacts with its worlds (inner and outer) via input sensors and output actuators, that constructs models of these worlds enabling planning, prediction, testing, and learning from feedback, and that develops a rich internal multimodal language, can have both types of consciousness. In particular, we contend that subjective consciousness is computational and functional.
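As a purely illustrative sketch (our simplification, not the formal CtmR definition given in Section 2.1), the loop below shows an agent that predicts from an internal world model, tests that prediction against sensor input, and learns from the feedback. All class and variable names are hypothetical.

```python
# Hypothetical sketch of the sense-model-predict-test-learn loop described above.
# Class and variable names are illustrative; nothing here is the formal CtmR.
import random


class WorldModel:
    """A trivial model of the outer world: a running estimate of one hidden scalar."""

    def __init__(self):
        self.estimate = 0.0

    def predict(self):
        return self.estimate

    def learn(self, observation, rate=0.1):
        # Move the model toward what was actually observed (learning from feedback).
        self.estimate += rate * (observation - self.estimate)


class Agent:
    def __init__(self):
        self.model = WorldModel()

    def step(self, sensor_reading):
        prediction = self.model.predict()      # predict before looking
        error = sensor_reading - prediction    # test the prediction against the input
        self.model.learn(sensor_reading)       # update the world model from feedback
        return error


agent = Agent()
for _ in range(200):
    reading = 5.0 + random.gauss(0, 0.5)       # noisy sensor of a hidden value (5.0)
    agent.step(reading)
print(f"world-model estimate ~= {agent.model.estimate:.2f}")  # converges toward 5.0
```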


We emphasize that the CTM is a formal machine model designed to explore and understand consciousness from a TCS perspective. It is not intended to model the brain or the neural correlates of consciousness. Nevertheless, the CTM is inspired by cognitive and neuroscience theories of consciousness.


Specifically, as we have mentioned, the CTM is inspired by cognitive neuroscientist Bernard Baars’ theater model of consciousness (Baars, Bernard J., 1997), the global workspace (GW) theory of consciousness. However, the CTM is not a standard GW model. It differs from GW in several important ways: its competition for global broadcast is formally defined and completely replaces the ill-defined Central Executive of other GW models; its special processors, especially its Model-of-the-World processor, construct and employ models of its (inner and outer) worlds; its rich multimodal internal language, Brainish, serves for creating labeled sketches in its world models and for communication between processors; and its predictive dynamics run cycles of prediction, testing, feedback, and learning, both locally and globally.
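The snippet below is a hypothetical sketch, under our own simplifying assumptions, of what an executive-free competition for global broadcast can look like. It is not the CTM's actual competition (defined formally in Section 2.1 and Appendix 7.2); the names and the weight-proportional selection rule are illustrative.

```python
# Hypothetical sketch of a probabilistic competition for global broadcast among
# processors. The CTM's actual competition is defined formally in Section 2.1 and
# Appendix 7.2; the names and the weight-proportional rule here are illustrative.
import random


class Processor:
    def __init__(self, name):
        self.name = name
        self.last_broadcast = None

    def submit_chunk(self):
        # Each processor offers a "chunk": content plus a weight (its importance claim).
        return {"source": self.name,
                "content": f"{self.name} report",
                "weight": random.uniform(0.0, 10.0)}

    def receive(self, chunk):
        # Every processor sees the same winning chunk; no central executive decides.
        self.last_broadcast = chunk


def compete_and_broadcast(processors):
    """Pick one chunk with probability proportional to its weight, then broadcast it."""
    chunks = [p.submit_chunk() for p in processors]
    winner = random.choices(chunks, weights=[c["weight"] for c in chunks], k=1)[0]
    for p in processors:
        p.receive(winner)
    return winner


procs = [Processor(n) for n in ("vision", "hearing", "model-of-the-world")]
print(compete_and_broadcast(procs)["source"])
```

The point of the sketch is structural: selection is a well-defined probabilistic rule over the processors' own submissions, so nothing plays the role of an ill-defined central executive.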


The CTM also interacts with its outer world via input sensors and output actuators. To emphasize CTM’s embodied, embedded, enacted and extended mind, we call it here the CTM Robot (CtmR).


While working on this chapter, we became aware of Kevin Mitchell’s blog post in Wiring the Brain (Mitchell, 2023) in which he makes a point similar to one that we make, namely, that many of the major theories of consciousness are compatible and/or complementary. For a similar conclusion, see (Storm et al., 2024). Even more, Mitchell presents “a non-exhaustive list of questions … that a theory of consciousness should be able to encompass”. He declares that “even if such a theory can’t currently answer all those questions, it should at least provide an overarching framework[4] (i.e., what a theory really should be), in which they can be asked in a coherent way, without one question destabilizing what we think we know about the answer to another one.”


Mitchell’s questions are thoughtful, interesting, and important. At the end of this chapter, we offer preliminary answers from the perspective of the Conscious Turing Machine Robot (CtmR). Our answers both supplement and highlight material in the brief Overview of CtmR that we now present.[5]


This paper is available under the CC BY 4.0 DEED license.

[1] For a brief history of TOC and TCS see Appendix 7.1.


[2] This is important. We claim that simulations that modify CTM’s key internal structures and processes will not necessarily experience what CTM does. We are not claiming that the CTM is the only possible machine model to experience feelings of consciousness.


[3] These theories include: The Global Workspace/Global Neuronal Workspace (GW/GNW), Attention Schema Theory (AST), Predictive Processing (PP), Integrated Information Theory (IIT), Embodied, Embedded, Enacted and Extended (EEEE) theories, Evolutionary theories, and the Extended Reticulothalamic Activating System + Free Energy Principle Theory (ERTAS + FEP).


[4] Italics ours.


[5] In the Overview, we annotate paragraphs that refer to Kevin Mitchell’s queries. As an example, if a paragraph has a label [KM1], then it refers to Mitchell’s first query, KM1. Conversely, if Mitchell’s query is labeled with an asterisk such as KM1*, then it refers to [KM1] in the Overview.