BICA for AGI

Available online at www.sciencedirect.com

Cognitive Systems Research xxx (xxxx) xxx
www.elsevier.com/locate/cogsys

Piotr Bołtuć a,b,*, Marta Boltuc c

a University of Illinois, Springfield, USA
b Warsaw School of Economics, Poland
c University of Rzeszów, Poland

* Corresponding author. E-mail address: [email protected] (P. Bołtuć). This paper has been partly financed by grant W911NF-17-2-0218, The Philosophy of Visual Processing in Object Recognition and Segmentation.
Received 3 September 2019; accepted 14 September 2019

Abstract

BICAs for AI have been happening for decades, realized within multiple cognitive architectures. What is a BICA for AGI? It requires AI to go beyond the limitations of predicative human language, and of the predicative logic based on it; also, above human reportable consciousness, to the subconscious/non-conscious level of the human (and machine) mind; and then still beyond it. Within machine cognition this is the sub-symbolic level. It has to gauge gestalts, or patterns, directly from the processes, which goes beyond human-level observational capacities and human-understandable language, even beyond the language of human-readable mathematics (Boltuc, 2018). It requires versatile life-long learning (Siegelmann, 2018). Through complex stochastic processes, it needs to confabulate by creative permutations of multifarious gestalts and to select those with useful applications (Thaler, 1997). This is computing at the edge of chaos (Goertzel, 2006).

© 2019 Elsevier B.V. All rights reserved. https://doi.org/10.1016/j.cogsys.2019.09.020

Keywords: Artificial general intelligence; AGI; BICA; Stochastic AI; Unconscious consciousness; Libet; The edge of chaos; Creativity engines; Thaler; Goertzel

1. Does AI need to be humanoid?

1.1. Recognizing machine consciousness

There is a debate these days about the limits of BICA. It is claimed that anthropomorphism in AI is an unhelpful tilt in engineering (Sanz & Bermejo-Alonso, 2019; European Union, 2019). There are two aspects of this problem: practical and philosophical. At a recent debate (IACAP Session 'Machine Consciousness', June 2018, Warsaw; see also Bozşahin, 2020, pp. 42–43), Ned Block criticized Ricardo Sanz's call for building non-anthropomorphic consciousness as incoherent. We are unable to recognize non-humanoid consciousness – argued Block – since the only way we know consciousness is by introspection (first-person phenomenal consciousness) and then by extrapolating it to beings like us. On the other side, Sanz contended that consciousness is an objective phenomenon and that limiting its pursuit in AI to merely humanoid forms would not be a reasonable engineering approach. The seeming difference in views results merely from philosophical-conceptual differences. Block's phenomenal consciousness means the first-person-felt, aware experience of perceptual qualities, such as pain. In this sense, imagining such qualities far beyond human capacities would be an insurmountable challenge.


Tom Nagel's example of what it is like to be a bat and use sonar seems as far removed from human practices as to be on the verge of intuitive grasp (Nagel, 1974). Sanz defines consciousness functionally, by what it can do. In this sense there is no problem in defining a set of complex intellectual or cognitive abilities that go beyond human, or any biological, capabilities. In his recent article (op. cit.), Sanz demonstrates persuasively why restraining our research methods to those consistent with biology – based on the brain or other biological cognitive structures – is an unnecessary narrowing of the playing field.

This philosophical difference in viewpoints between Block and Sanz has, of course, practical implications for AI; yet the next sort of issue brings the practical aspects of our theoretical debates to the forefront. The practical problem that seems to put some aspects of BICA into question is the issue of how humanoid – or rather, human-compatible – new AI (and other advanced ITs) should be. There is a strong group of activists who identify good AI with AI that is transparent in operation. We would be safer, they argue, if we avoided black-box effects. I discuss a reasonable interpretation of some aspects of this view proposed by the EU (European Union, 2019) later in this article. Here it needs to be pointed out that there is no easy way of interpreting this position in BICA terms. In some quarters people are viewed – in a somewhat pre-Freudian, positivist way – as utterly rational and consequently intellectually transparent. Yet it is clear that we are not fully aware of our motivations and background reasons, and even less so of whatever exactly happens at the level of neuronal groups and below – those are black boxes all over. Thus the BICA approach, rightly understood, would maintain that humans and animals do not know their own thinking processes, especially at the level of complex intellectual interactions with the issue at hand, such as the process of discovery. Below I characterize the first of those approaches as old-school BICA and the second as real-deal BICA. In later sections we explore the role of unconscious thinking (and even 'unconscious consciousness', which goes beyond the Freudian Revolution (Floridi, 2014)); we also sketch out the next step of that revolution, based on Libet's (1993) experiment.

1.2. Old-school BICA

The old-school approach views humans as the top rational beings. Such beings follow classical logic and the ethics of absolute obligations (Kohlberg, 1981; Kant, 2007); their thoughts, intentions and other conscious states are fully reportable (HOT (Edelman, 1992)). This model can be called Old-School BICA, since it fits with classical computing theory (Rapaport, 2019). Such old-school models are not a good fit with contemporary neuroscience, and even less so with the AI and AGI research of recent years. Stochastic approaches – what Goertzel (2006) calls computing at the edge of chaos – fit better, both as a reconstruction of advanced AI methods and of what is now known of the brain. The latter view can be regarded as the real-deal BICA of today.

1.3. Real-deal BICA

BICA today should follow the view of human beings as guided by largely stochastic – yet highly organized – processes. This seems like the paradigm of advanced forms of evolutionary computing, very broadly defined. In this view, reportability of mental states shows itself to be highly overrated. Few of the important processes in our minds are adequately self-reportable. (We develop this issue later in the article.) If one wants to define mental states as only those that are reportable, so be it, but the game goes on elsewhere. The most relevant and creative informational transformations (vel thinking operations) seem to go on among the unconscious, pre-conscious, or marginally conscious processes, transformations and permutations of precognitive gestalts. They occur both in machines, largely as sub-computing processes (Kelley, 2014; Thaler, 2014), and in animal cognitive systems, including humans. The role of unreportable – and phenomenally unavailable – processes is significant not only for discovery engines of various kinds (Thaler, 1997; Thaler, 2006), but even for human speech. Few if any people think what they are going to say, in the exact words, before they say so (O'Regan, 2011). We become fully aware of what we are saying only when we hear our utterance. This is true, in just about the same way, for the speaker and her audience. This would be a radical view, save for Libet's well-known research (Libet, 1993), which should make the above thesis border on trivial. Oddly enough, the empirical fact that we become first-person conscious of our actions, including speech acts, a split second after we have started performing those actions has been largely neglected – most psychologists and philosophers alike seem to be waiting for Libet's conclusions to be explained away. His work is so bloody inconsistent with the old-school view of people, isn't it? (The old-school paradigm in developmental psychology has also been causally related to male dominance, as demonstrated by Gilligan in her critique of Kohlberg; I pick up on this a bit later.)

1.4. Overly transparent AI, inroads into ethics

Robots are cool with much of cognitive activity being unreportable, I think – and so should humans be. Non-reportability, however, seems to weakly imply that we, the designers, programmers and users of new-school advanced cognitive engines, do not understand most of the details of any of their specific operations. Hence, the criterion of transparency advocated by many AI experts is worth more discussion.
As long as AI proceeded through step-by-step logic programming, it seemed domain-limited and unfit for what is today becoming an important feature of fledgling AGI: life-long learning (Siegelmann, 2018). Only the development of what can be called stochastic AI allows for the exponential development of machine thinking, thus creating the true AGI revolution. There is a tension between stochastic AI and the postulates to make AI human-readable, and thus directly controllable. This may be a drag on AI development if applied across the board. While we may control the procedures people follow, we are not able to (and few think that we should in the future) understand, and pin to the decision process, the details of brain physiology that influence human thinking in the smallest details. Those processes are private, remaining under the hood of one's skull, and intervention is required only in the instance of dangerous behavioral abnormalities. This analogy shows the advantage of the BICA approach – treating human cognitive architectures and AI alike, within the same general methodology.

If people want to eliminate AI systems that are not transparent in operation, they should beware that human beings are non-transparent cognitive systems! The only way for this principle to be even a starter is not to apply it to humans, but this would have Draconian consequences. In such a rejection, too much would depend on the word Artificial in the term Artificial Intelligence (AI). Building policy on the distinction between artificial and non-artificial intelligence seems to hang on a very weak link. A distinction between natural and unnatural intelligence would discriminate against artificially enhanced human beings (Miller & Larson, 2013). Even people with artificial limbs or eyeglasses (say, ones barely 5% better than the original, vel natural, ones that 'healthy' individuals have) would be viewed as not natural enough – which for the most part is the current approach.

While this is not an AI ethics article, we should probably resist the expectation that robots and AI systems meet vastly more elevated ethical criteria than those that apply to human beings. Some of the criteria, postulates, or requirements of trustworthy AI seem quite reasonable (European Union, 2019). For instance, an employee should be informed what criteria are used in her or his retention or promotion algorithms, and medical patients of the criteria for getting a surgery, transplant or disability benefits. Such criteria should be clear enough to be reproducible by the person affected, or her qualified agent, in a different electronic (or non-electronic) medium. This point seems reasonable enough in the context of fairness. The problem arises when all stakeholders are expected to have transparency into the electronic mechanisms under the hood. I propose a compromise: while procedures used in policy making and its application to human beings (such as incarceration decisions) ought to be transparent, the AI engines that stay below the hood do not need to be – for as long as they are predictable in outcomes, within the parameters acceptable for human agents in similar circumstances.

An even bigger problem would be created by requiring all computers to talk a human-understandable language – to each other, or to themselves – just for the purpose of humans being able to control them in a short-leash manner. A very inefficient, even paranoid, move it would be. To sum up: we should be able to control the use of algorithms under a purely functional description, but we should never be required to poke into someone's brain, all the way to the level of every dendrite. Such a project – visibly crazy if applied to humans – would be quite unhelpful for the development of truly safe general AI. The checkpoint should be set where it matters directly for the functions required by the end users, not literally at every step.

2. The third, unfinished, revolution

2.1. The three revolutions, before the fourth

According to Floridi (2014), we live through the Fourth Revolution, the ICT (Information and Communication Technology) revolution, also called the Turing revolution. It is characterized by humans' essential interaction and interdependency with information technology. We are not going to focus on the Fourth Revolution in this paper; instead, we zero in on certain understated aspects of the Third Revolution, which paved the ground for the Fourth. While Freud constitutes a sufficient link to the early stage of the IT revolution – starting with Turing (or Babbage) and ending maybe as late as Minsky – the new and fascinating stage of the IT revolution, leading to AGI, needs the necessary stepping stones from the second stage of the Third one; and this is what the current article is primarily about.

All four revolutions presented by Floridi can be characterized as showing humans the limits of our centrality in the big scheme of things. The first revolution, the Copernican, shows us that we human beings are not the center of the physical universe. It does so in the straightforward astronomical sense, but this had deep implications for the creationist mind of the epoch. The Earth is just one of the planets that orbit the Sun, and we are not in the astronomical center of creation. The second is the Darwinian revolution. It demonstrates that there is no ontological gap between human beings and other animals. By the random and environment-dependent nature of natural selection, evolutionary biology also shows that our being is quite unlikely to be the center of creation, or the goal of the universe of any sort. The third revolution is the Freudian. It forced us to see the limits of human conscious understanding of ourselves. I salute Luciano for including this revolution among the four main points of the creation of modern people and of our civilization. However, I posit that the third revolution remains unfinished for as long as it remains within the Freudian framework.

Freud, Adler, Jung and other psychoanalysts focused on human emotions and drives – visible in dreams and in Freudian slips – as subconscious motivators. Such a framework pertains mostly to the fact that the motivational structure of human beings is closely related to that of pre-humanoid animals, largely reptilian, inefficiently repressed by the super-ego of social pressure and the resulting deontic moral reason.

2.2. Beyond Freud, a roadmap

The weakness of the Freudian revolution is twofold. First, it is discursively entangled with Kant-style 'control and punish' morality; only Foucault (not yet Nietzsche, or Freud) was able to deal this approach a somewhat mortal blow. Second, while it pertains to human, or maybe even animal, beings, it has little to say about non-animal minds such as artificial cognitive architectures. Yet some of the advanced cognitive architectures (Kelley, 2014; Thaler, 2014) use the sub-conscious, which is hardly ever of the Freudian kind (it is not about hidden drives). Hence, the early Freudian focus is too narrow to grasp the gist – or at least the fully blown manifestation – of the Third Revolution. If Floridi were to write of the four revolutions just a few years later – today, for instance – he might perhaps call it Libet's revolution. Why? It remains to be seen in Section 3.

The Freudian revolution was primarily aimed at the reign of Victorian morality, which tried to force civilized people to avert their eyes from their animal drives. However, its philosophically more respectable opponent was the bourgeois Enlightenment, represented by philosophers from d'Alembert to Kant and beyond – which was a near deification of the clarity of the human mind. What Freud failed to address properly was the essentially unconscious nature not only of human motivation, emotion and drives, but also of perception and creative thought (this last addressed already by the Romantic poetry of Goethe, Byron, Mickiewicz and Blake, among many). This shortcoming is relevant for the present article. The early stages of the ICT revolution operated largely in the pre-Freudian framework, taking readability of one's own information processing, by humans or AI, as the paragon of advanced consciousness. Yet some of today's advanced AI – especially research aimed at attaining AGI – goes beyond the framework of explicit logical reasoning. AGIs, such as creativity engines, as well as human persons, develop most of their capacities, including advanced analytical skills and novel thoughts, at the level of dream-like unconscious or sub-conscious analysis (terms that sound like oxymorons to those unfamiliar with AI based on advanced genetic algorithms and, more broadly, on stochastic computing) – this we tackle in Section 4.

To sum up: when thinking of the Third Revolution, one needs to focus on unconscious consciousness, which goes far beyond Freud. It will be viewed as the gist of advanced creative thinking in minds of all kinds, machines and living entities alike, including humans. While the next sub-section is devoted to the post-Freudian feminist critique of Kohlberg's post-Kantian psychology by Carol Gilligan, the rest of this paper moves us beyond the Freudian framework, to the realm of consequences of Libet's research.

2.3. Beyond Kohlberg, a victory of feminism

Compared to Freud, or Kant, Kohlberg was a relatively minor figure. He was influential in developing and propagating a theory of levels of personality development based solely on one's ability to reason dispassionately and to base one's moral thinking and action on those intellectually grounded deontic reasons. In its intellectualism, it was a retrogression from much more multifarious theories developed in educational psychology, especially by Piaget, and in Kohlberg's generation by Maslow, Dabrowski or Rogers. This is not the right place and time for me to give a lecture on the tenets of Kohlberg's theory; those are easily available elsewhere (Kohlberg, 1981). For us, it suffices to say that his theory evaluates human development based on just one variable: intellectual development and its reign over the other tendencies. Often philosophy students try to argue that Kohlberg's thought is consistent with Plato's parable of the chariot (Plato, Phaedrus), where the intellect is supposed to guide the other drives. The standard answer to this claim is that Plato is trying to submit the black horse (basic instincts) to the guidance of the charioteer (the intellect) and the white horse (good instincts), whereas Kohlberg seems to expect us to kill the black horse (the instincts) and to chain the white horse (creative, artistic tendencies and high moral instincts), so that the charioteer (propositionally structured intellect) is left to drag the chariot himself. In practical terms, Kohlberg's approach has been highly unfortunate, since it paved the way to all those narrowly designed tests and exams, including the standard IQ tests, aimed to secure educational and professional advancement based on just one factor: the aptitude for logical thinking, limited largely to propositional logic.

The crucial opposition to Kohlberg's approach came from Gilligan (1982). She emphasized that there is a different voice, an alternative to the rational-thinking paradigm that Kohlberg absolutized in his classification of moral development as intellectual development. She argues that there is an alternative classification of moral development, relying on emotional intelligence and maturity. This argument worked also as a defense of the so-called female attitude, which she viewed as largely based on emotions, against what was seen at the time as a specifically male rationalism. According to this view, there may be some males who think in a more emotion-based framework, and some rationalistic females, but those are merely exceptions to the rule. Feminist critics of Gilligan's work were mostly advocates of androgyny, the idea that females can do as well as males in all domains, and that any talk of specifically female or male talents is an instance of dangerous stereotyping.


It seems that by now the androgyny school and the female-specificity school have worked out a more balanced compromise view, but this is not central to this paper. While Gilligan's critique of Kohlberg's excessive rationalism was well taken, it was also limited, set up largely as a defense of the role of emotions. This limitation is somewhat similar to Freud's emphasis on the unconscious human drives (sexual or other), which can also be viewed as emotions of sorts, and which functioned as the main locus of the forces able to countervail radical rationalism and intellectualism. But today the opposition between intellect and emotions has grown old and is no longer good enough. By and large, Libet's work has been, until quite recently, neglected as crucially pertinent to the third revolution. I argue, in the next section, that Libet's discovery reveals a deeper and more important crack in the dominance of the human self-image as a fully rational person than either Freud's exposure of the hidden instincts or Gilligan's focus on emotional development. Unconscious consciousness, beyond Kohlberg, as the gist of advanced creative thinking by minds of all kinds, machines and living entities alike, provides an important follow-up on the above attempt to reach beyond Freud.

3. Johnny-come-lately consciousness

3.1. Phenomenal consciousness beyond access

Phenomenal consciousness has to be conscious according to Brentano (1973), Chisholm (1993) and many others. But this is flawed. Phenomenal consciousness may be viewed in two contexts: functional and first-person phenomenal. In the functional sense, a phenomenally conscious being is one that functions as if it were conscious of what is often called phenomenal qualia, or secondary qualities – these include all perceptual qualities that can be first-person available, such as sight/color, smell, sound, taste and touch. A functionally-phenomenally conscious being can be a machine that reacts to qualia, e.g. a self-driving car that recognizes visual differences between a red and a green light, or an automaton that detects the smell of natural gas in a spectrum similar to the human olfactory spectrum. This is functional-phenomenal consciousness (f-p-consciousness). Philosophical zombies (Chalmers, 1996) would normally be seen as functionally-phenomenally conscious in this sense.

In the first-person sense, a phenomenally conscious being is one that is aware of the phenomenal qualities of experience. The paradigmatic instance of a first-person phenomenally conscious being is a human focused on her or his actual phenomenal perception, e.g. the redness of a tomato. This sense of phenomenal consciousness is what Ned Block had in mind in his seminal articles (Block, 1995, 2008).

Block distinguishes between phenomenal consciousness (p-consciousness) and access consciousness (a-consciousness). The distinction helps him emphasize that one is able to be phenomenally conscious of things that one does not focus upon, so that we do not have immediate access to them. Such phenomenal qualities are not present in our stream of consciousness at the moment, and they tend not to be reportable. We need to focus on phenomenal qualities in order to gain full epistemic access. The notion of access consciousness is particularly helpful, not quite in what it highlights but in what is left aside (the focus comes from Nagel's Concealment and Exposure). What happens to phenomenal qualia that lie beyond the sphere of access? They are still phenomenal qualia; they are just not covered by phenomenal consciousness tout court, due to the lack of access. This is analyzed in Block (2008). Block's view has a rather radical implication: that there is unconscious phenomenal consciousness. This is the first way in which phenomenal consciousness does not have to be conscious, contra Brentano and his followers – it may be left out of the access zone.

Phenomenal consciousness that is not access consciousness is worth exploring a bit further. In Block's examples, such as the sound of a drill out on the street, one may bracket out the phenomenal experience (when preoccupied with something else), yet the experience comes to the forefront eventually. Phenomenal consciousness left out of the access zone may serve as a causal backdrop of access consciousness but plays no role in phenomenal experience. Let us call such specific consciousness p- not a-consciousness.

We can search for other forms of p- not a-consciousness. One seems barely conscious of the phenomenal content of some of one's dreams and hallucinations. We have limited, if any, access to them. Such obstructed access to phenomenally conscious experiences may facilitate a philosophical explanation of the different levels of the consciousness/unconsciousness spectrum that Tononi and other psychologists, but primarily anesthesiologists, point to. Our main point here is that episodes of such obstructed-access phenomenal consciousness do not need to serve as a backdrop for access consciousness. In certain creative acts (called confabulations (Thaler, 2014)), we often play with not-quite-fully-conscious visualizations. The same may happen in day-dreaming, subconscious content processing and dreams. Those do not serve as a background to full-access consciousness. They may not be accessible any more than the phenomenal consciousness left out of the access zones in the instances of p- not a-consciousness.

This opens up a possible interpretation on which p- not a-consciousness is just an approximation, since phenomenal consciousness must always have some weak – maybe latent – level of access. If so, Brentano's claim that there is no unconscious consciousness stands.


This may be true of the above cases, both of the day-dreaming kind and of p- not a-consciousness. Now we are ready to make another step towards the unconsciousness of consciousness.

3.2. Unconscious consciousness

Could there be cases of phenomenal consciousness with no access consciousness whatsoever – the true p- not a-consciousness? The question may take one by surprise. How would those instances even work? Shouldn't phenomenal consciousness being minimally first-person conscious be taken as its sine qua non condition of being phenomenal? What if phenomenally conscious events that take place in a dream keep parading down the epistemic cave, becoming less and less first-person accessible to us? Deep down the cave they move! Finally, our first-person epistemic access to them is lost entirely. Do they stop being phenomenal at this point? Well, they seem ontologically phenomenal; they are perceptual qualities not perceived at the moment.

This looks like a version of Berkeley's problem of whether a tree falling in a desolate forest makes any noise. He answered that it does, since God hears everything. Not only did Berkeley claim that the world is made of secondary qualities, but also that those qualities must be perceived in order to exist. May we go in the direction opposite to Berkeley's? Couldn't secondary qualities – not their emergence base or some other material basis, but the qualities themselves, as phenomena – be ontologically present without being perceived at a given time? Imagine an artist, maybe a Kandinsky or a Mondrian, whose brain works on advanced permutations of colored geometric shapes while the artist remains in deep sleep or is preoccupied with daily chores; or a composer's mind that processes sound compositions while the musician is truly unaware of any of it; or the owner of gourmet wine cellars, whose brain works on constructing a novel taste while she is enjoying a dance class. It is far-fetched and unnecessarily complex to answer that what happens is that the brain processes the emergence base of those properties; it is a correct answer in a way, but at the wrong level of description. Those phenomenal qualia – shapes, smells (and not their emergence base) – are involved in non-access conscious activities of the brain, despite being, at a given moment, not accessible to one's first-person awareness. In phenomenally conscious states it is clear that when one composes an impressionist painting one operates at the level of colors (and light), not their emergence base. Wouldn't the same be possible at the level of truly unconscious confabulations? If so, we open the door to unconscious consciousness, also for AI's phenomenal knowledge, without having to assume that an advanced AI able to confabulate human-level phenomenal content has to be first-person conscious (contra Chalmers, 2018).

One may object that the instances of p-conscious but a-unconscious consciousness are rare exceptions. Philosophically, they may constitute the most paradoxical side of the spectrum among the forms of unconscious consciousness – but they may turn out not to be rare at all, especially in the new wave of AI. In the rest of this article I am going to argue that unconscious consciousness – even at its phenomenal level – plays an important role in human cognition and in the emerging forms of AI; perhaps an even more important role than phenomenal, first-person accessible consciousness.

3.3. Post-factum consciousness, Libet to Baars

Let us posit that conscious activity relies largely on neural markers that are not available to first-person awareness. Are those the phenomenal markers Baars (1988) talks about? Well, yes, in a sense, but I do not want to open a can of worms by pre-defining phenomenal markers as potentially unconscious (not available to first-person awareness); this is similar to Block (2005a, 2005b), but I seem to be going further, towards unconscious phenomenal consciousness, while Block talks of no-access phenomenal consciousness (it remains to be seen how far those terms overlap). I argue for this point here and elsewhere (Boltuc, 2019), but the point should not be taken lightly, as a purely definitional matter – it is one of the essential issues for the theory of mind.

Now back to the main train of thought. According to Libet's findings, thinking and making decisions take place ahead of awareness – only the phenomenal awareness of one's halting an action happens faster than the action itself. It looks like phenomenal awareness has merely the halting functionality (Libet, 1993). The sole action-informing activity of the conscious mind is to stop whatever we have been doing. Its role is more like the brakes than the engine or the steering wheel. But there seem to be subtler functions of phenomenal consciousness as well: prolonged lack of phenomenal consciousness in animals seems to result in deep sleep – and we are merely scratching the surface here. In the end, Libet's experiments seem to show that we are phenomenally conscious of a small part of what we think we are. (I am thankful to Kevin O'Regan for making this point vividly clear to me during our long conversations in Paris and Lyon.)

What are the broader implications of Libet's findings? It seems that qualia are aware, but only ex post, as a sort of memory-oriented time lag, not so much an action plan. This is counter-intuitive to many philosophers, who view qualia as necessarily and immediately first-person aware. How does this fit with Baars' phenomenal-markers view? It seems to me that the markers are phenomenal all right, but they do not have to be first-person conscious. This is based on Section 3.1 – it seems that phenomenal content does not need to be accessible to awareness all the time in order to play a functional role (see also Block, 2005a, 2005b).


In the instance of today's AI, such content is not available to Chalmers-style first-person awareness at all, since robots lack this kind of functionality. (As I argued elsewhere, machines could probably get non-reductive first-person awareness, but it requires much more advanced technology (Boltuc, 2009, 2012).) Hence, Franklin, Baars and Ramamurthy seem correct in their strictly mechanistic, functional take on what they view as phenomenal qualia in robots (Franklin, Baars, & Ramamurthy, 2008), although those phenomenal qualia have little to do with Nagel-Chalmers-Block style non-reductive awareness/consciousness.

3.4. Thinking beyond language

The above seems to be true of speech. As I mentioned in the opening section of this paper, few people formulate the exact sentences ahead of uttering them. And even those few who do – who come up with the un-thought-of sentences at first, murmured or uttered merely in their minds – do so without being conscious in advance of what those sentences are going to be. Whether pronounced aloud, murmured to oneself or written up, the sentences in which we cast our thoughts come before we can pre-conceive them, except for a general direction (or gestalt) of the thought we are trying to utter. If conscious thoughts come to us as qualia (as some arguably do), those also come a bit unexpectedly from the depths of our minds and become conscious as soon (or rather, as late) as they are pronounced.

Intuitive thinking in action, flexibility, reactivity to the context and, finally, productive creativity require more complex unconscious processes than step-by-step language that follows predicative logic. This sub-symbolic information processing, currently explored by neuroscience and also dominant in AI, shows a clear advantage of the BICA approach to AI and human cognitive architectures over human-centered IT (or IT based on predicative logic). The biological thinking of non-human animals – which is not overshadowed by linguistic relations – can be modelled for AI; so can the dominant gist of human thinking, which is evolutionarily so much older than language. To respond to a recent question by A. Samsonovich (at the BICA 2019 conference, Redmond, WA, Aug 16–18), the reason why I turn against language-based models in AI (Boltuc, 2018; Boltuc & Boltuc, 2019) is primarily the evolutionarily fledgling status of human language. Overemphasis on the role of language in the functioning of human cognitive architecture leads to a number of mistakes, including the following:

A. Failed and laughable arguments by Davidson (1982, 2005) and his followers that animals cannot think, since they lack properly formed grammatical language structures;

B. Misplaced skepticism about animal learning among some influential scientists (coming mostly from the overly strong verificationist methodology lingering from Carnap), which derailed research on chimpanzee language acquisition at Stanford and other places;

C. Misinterpretation of language processes as the main playing field of human thinking and – which constitutes a more obvious mistake – as the gist of consciousness;

D. The above misconceptions tend to be guided by an intuition contrary to the trend so aptly marked by Floridi's four revolutions: the dethronement of human beings as the center of the universe in any of the following senses – astronomical (Copernicus), biological (Darwin), psychological (Freud) or intellectual (Turing). The post-religious Enlightenment tends to challenge the latter three senses for the sake of a misguided interpretation of the Renaissance's humanism and humanistic intellectualism, which prevents civilization from entering its maturity. Isn't one's understanding that he or she is not the center of the universe the best marker of an adolescent finally reaching maturity?

E. From C follows the misinterpretation of predicative logic, based upon language, as the gate-keeper for rational thinking (Rapaport, 2019), despite strong reasons against this claim, which, in philosophical interpretation, go back to Plato's reservations towards 'the logic of language'. Those reservations towards standard predicative logic resulted in alternative logical systems (Łukasiewicz, 1951; Jaśkowski, 1969; da Costa, 1995; Arruda, 1989) and interpretations of quantum logic and, most importantly for us here, in developments in AI, including applications of systems and complexity theory (a minimal sketch of one such alternative logic follows this list).

To those we now proceed.
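To make the reference to alternative logical systems in item E more concrete, here is a minimal sketch – my illustration, not drawn from the cited sources – of Łukasiewicz's three-valued logic, with truth values 0, 1/2 and 1, negation ¬x = 1 − x and implication x → y = min(1, 1 − x + y):

# A minimal sketch of Lukasiewicz's three-valued logic (an illustration of
# one of the alternative logical systems mentioned in item E above).
from fractions import Fraction

F, U, T = Fraction(0), Fraction(1, 2), Fraction(1)  # false, undetermined, true

def neg(x):         # Lukasiewicz negation: ~x = 1 - x
    return 1 - x

def implies(x, y):  # Lukasiewicz implication: x -> y = min(1, 1 - x + y)
    return min(T, 1 - x + y)

def disj(x, y):     # disjunction as maximum
    return max(x, y)

# Unlike in classical (predicative) logic, the law of excluded middle is not
# a tautology here: p v ~p evaluates to 1/2 when p is undetermined.
for p in (F, U, T):
    print(f"p = {p}: p v ~p = {disj(p, neg(p))}, p -> p = {implies(p, p)}")

Running the loop shows p ∨ ¬p dropping to 1/2 for the undetermined value – a small formal example of the kind of departure from classical logic that the systems cited above develop rigorously.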

4. Creativity engines

4.1. From Turing on

Turing presented and rejected a number of objections to what we now call the Turing test, as a test of both intelligence and consciousness (Turing, 1950). More controversially, those objections can also be viewed as objections to what we now call the physical interpretation of the Church-Turing thesis: the claim that every operation, physical or informational, that can be instantiated by objects in the world can be described mathematically (Deutsch, 1985). One of the main such objections maintains that robots cannot create anything new (the Lady Lovelace objection). To this day, some philosophers of AI and computer scientists maintain that AI only draws conclusions from the information and inference rules brought in by the programmers: garbage in, garbage out. Turing's own response to the objection is sufficient for dismissing those claims.


Turing answers that computers can give unexpected answers; one way to do so is by mistake. Today, the main way for computers to get creative is to use stochastic distributions, with some external, pragmatic selection among the outcomes. As we see below, creativity engines provide a clear theoretical argument for, and an empirical demonstration of, AI creativity. In another paper I discuss a simple cognitive architecture that shows practical creativity well below the level of the human brain ((Wang, 2007), discussed in (Boltuc & Boltuc, 2019)). Here I focus on two cognitive architectures that have resulted in somewhat autonomous, human-like deliverables: Ben Goertzel's design of AI for Sophia, at Hanson Robotics, and Stephen Thaler's Dabus – the first AI that should be viewed as an inventor, which has challenged the patent laws in the US, UK and EU – at Thaler's Imagination Engines.

4.2. The future has just started: Goertzel and the third revolution

Thinking may be seen as following patterns, not semantic structures (Goertzel, 2006). This patternist philosophy has direct implications for AI, especially for the search for artificial general intelligence (AGI). Most IT engineers have a rather random definition of mind and consciousness, which leads to misunderstandings. Not Goertzel. He defines a mind as the set of patterns associated with an intelligent system. Goertzel analyzes the problem at great length and distinguishes three kinds of embodiment of mind: the singly embodied mind, the multiply embodied mind, and the flexibly embodied mind. Thaler's Dabus cognitive architecture, which includes what is termed 'multiple reptilian brains', would be a practical example of a 'flexibly embodied mind'. Flexibly embodied minds are unified control systems that send multiple data and/or multiple instructions to different sub-units (to different minds, if those units are advanced). Just at this very simple theoretical level we have a tangible reason to expect AI minds to be structurally more advanced than natural human minds (this does not necessarily apply to enhanced human minds), which are always of the first sort: 'singly-embodied, singly-body-centered, tool and socially dependent and language-enabled, conservatively-structured, and either essentially classical or mostly-classical in nature' minds.

Here is – I daresay – the center of Floridi's Freudian revolution: not with Freud, focused on reptilian drives, nor with Gilligan, focused on the importance of emotions, though the emotions are relevant for AI (Prinz, 2012). The culmination of the third revolution lies in modern AI based on stochastic processes, though the wave of real change begins with Kurt Gödel's incompleteness theorem, which should have stopped the wave of logicism and given pause to its brightest minds, Russell and Whitehead. This movement belongs to, and culminates, the third revolution, since it shows human beings the true limitations of our self-consciousness and self-understanding.
Not only are our drives and emotions subconscious, but so are even our thoughts (and the formal cognitive structures on which they rely), impenetrable to the Enlightenment version of the 'rational mind'. What is rational are our intuitive, as well as highly mathematical, ways of dealing with uncertainty – in fact, surfing uncertainty for our survival. None of this is yet the culmination of the Fourth Revolution, which, from Floridi's historical observation point, is Turing's revolution. Its culmination is some advanced form of AGI (artificial general intelligence), and even Turing is more of a pioneer than the flagship of this upcoming revolution, which should take decades to complete.

What is the alternative to the late-analytical view of AI (Rapaport, 2019)? It is sketched by Goertzel as computing at the edge of chaos – stochastic computing within a well-crafted set of various mind-maps (Goertzel, 1999).
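As a toy illustration of the 'edge of chaos' – a standard textbook example using the logistic map, and in no way a rendering of Goertzel's own architectures – consider x(n+1) = r · x(n) · (1 − x(n)). For low r the dynamics collapse into a fixed point or a short cycle (too ordered to generate novelty); at r = 4 they are fully chaotic (too disordered to retain structure); near r ≈ 3.57, at the onset of chaos, they produce structured yet non-repeating behavior:

# A toy 'edge of chaos' demo using the logistic map (a standard textbook
# example; Goertzel's architectures are of course far richer than this).
def logistic_tail(r, x0=0.4, warmup=500, steps=8):
    """Iterate x -> r*x*(1-x), discard transients, return the tail."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    tail = []
    for _ in range(steps):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

for r, regime in [(2.8, "ordered: fixed point"),
                  (3.5, "ordered: 4-cycle"),
                  (3.57, "edge of chaos"),
                  (4.0, "fully chaotic")]:
    print(f"r = {r} ({regime}): {logistic_tail(r)}")

The interesting computational regime – neither frozen nor random – sits near the third line, which is the intuition behind 'computing at the edge of chaos'.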


It seems to me that sometime between 2006 (Goertzel's publication of his main book directed to philosophers) and 2016 (when, by most experts' reckoning, the AI revolution went into full speed), philosophers may have become less relevant for resolving philosophical issues than the AI experts (Goertzel, 2016; Goertzel, 2017). This is not a bad thing, nor even that unusual. It is consistent with Floridi's four revolutions. During the time of Copernicus (or rather, just after his death), his science became more philosophically relevant than any philosophy. The same happened with the Darwinian discovery, which dominated the philosophies of its time. The same happened with Freud, though his main influence was on literature and art. It was the case with the Einsteinian revolution (somehow omitted by Luciano), in particular Einstein's relativity theories, as well as during the pioneering stages of quantum physics. Today's AI revolution – started by Turing but gaining an early implementation only in the last decade or so – is just one instantiation of this pattern. Science and technology bring about the new content (during the scientific revolutions), which philosophers chew upon (during the more analytic periods of normal science).

4.3. The future has just started: Thaler

Let us move to the other fascinating development – the ideas of Steve Thaler, the creator of Dabus, the AI whose application for independent patent ownership is pending in US, EU and UK courts (Hamblen, 2019). Thaler's other, technologically more important, patent is awaiting approval, which prevents us, and especially Thaler himself, from divulging more than is in the publicly accessible patent application. But Thaler's earlier work seems at least as clear in the patent applications as in his articles (Thaler, 2006). Hence, this section is based largely on Thaler's patent applications.

Stephen Thaler's work is based on non-algorithmically implemented artificial neural networks and is aimed at constructing autonomous AI that is creative in engineering and other practical domains. Comparing Thaler's patents, one sees – more clearly than in his published papers – a path from the autonomous generation of useful information (Patent 5659666; filed October 13, 1994; date of patent August 19, 1997), through its practical applications (for instance, a neural network based database scanning system: Patent 5852816, filed May 15, 1998, date of patent December 22, 1998; and a neural network-based target seeking system: Patent 611570, filed August 9, 1999, date of patent September 5, 2001), through the autonomous bootstrapping of information (Device for the autonomous bootstrapping of useful information; Patent 7454388, filed May 8, 2006, date of patent November 18, 20), all the way to an attempt at a Device and Method for Autonomous Bootstrapping of Unified Sentience (Publication 20150379394; filed January 2, 2015; patent pending). The latter pertains 'to the fields of both artificial intelligence and machine consciousness and, more particularly, to a method and device for the unification and origination of knowledge within connectionist system' (JUSTIA, https://patents.justia.com/patent/20150379394).

The systems developed within Thaler's patents mentioned above bootstrap themselves 'through cycles of idea generation followed by memorization of such notions, along the way hybridizing ideas into more useful or interesting concepts', but they do not avoid bottlenecks. The currently pending patent eliminates those drawbacks based on a synthetic cortex, which is 'composed of a multitude of neural modules', where 'the generation of potentially useful confabulation states' is 'governed through the global release of simulated neurotransmitters' – it provides a mechanism that 'locates unusual or useful neural activation patterns and chains arising within the entire neural assembly'. Those chains may 'qualify as potential ideas or action plans'. Importantly, 'neurotransmitter release is simulated through various forms of perturbations and disturbances to connection weights' or computational neurons.
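The quoted mechanism – perturbing connection weights to generate candidate activation patterns, then letting a critic keep the useful ones – can be caricatured in a few lines. The sketch below is only my illustration of such a generate-and-select loop, with invented parameters (noise scale, a toy target behavior); it is not Thaler's patented architecture:

# A caricature of confabulation by weight perturbation: noise injected into a
# tiny network generates candidate behaviors, and a critic keeps the useful
# ones. Illustrative only; not Thaler's patented design.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(1, 8)), rng.normal(size=(8, 1))  # a tiny 1-8-1 net

def net(x, w1, w2):
    return np.tanh(x @ w1) @ w2  # forward pass

def critic(w1, w2):
    """Pragmatic selection: reward nets whose behavior matches a toy target."""
    xs = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
    return -float(np.mean((net(xs, w1, w2) - np.sin(3 * xs)) ** 2))

best = critic(W1, W2)
for _ in range(2000):
    # 'Simulated neurotransmitter release': random disturbance of the weights.
    dW1 = rng.normal(scale=0.1, size=W1.shape)
    dW2 = rng.normal(scale=0.1, size=W2.shape)
    score = critic(W1 + dW1, W2 + dW2)
    if score > best:  # keep only the useful 'confabulations'
        W1, W2, best = W1 + dW1, W2 + dW2, score

print(f"critic score after stochastic search: {best:.4f}")

The point of the toy is structural: novelty comes from stochastic perturbation, while usefulness is imposed from outside by the critic – a division of labor that the patent text describes at a vastly more sophisticated level.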


Thaler seems to attain a unification of 'exteroceptive and interoceptive awareness within artificial neural systems consisting of a plurality of artificial neural modules', which leads to impressive outcomes that require limited computational power. Very importantly, as Thaler states, Dabus attains 'Classification of the state of the entire collective of neural modules, treating their joint activations and network chains as if they were' 2- or 3-dimensional geometric objects 'in the natural environment'. They can be 'detected via machine vision or acoustic processing algorithms', hence providing some level of symbol grounding, 'thereby departing from the traditional paradigm of critic functions that produce numerical figures of merit for ideational neural activation patterns'. This is made possible by a multi-track synthetic mind. The system allows for the 'integration of multiple cortical simulations into one through at least one final network layer', which may partly justify the claim of being close to attaining true machine consciousness. Thaler claims that he attains 'the formation of reactive neural activations and chaining topologies that constitute a subjective or emotional response thereto, regardless of their veracity (i.e., consciousness) and the use of such subjective response to strengthen or weaken the system's self-reflective notions as they form.' In this context, the definitional intricacies of the concept 'consciousness' play a substantial role and have to be clarified before a productive discussion of what has, or has not, been attained can take place. Yet the machine does more than philosophers of AI, and many AI experts, seem to view as attainable.

5. Sum up: Keeping it real

According to Hegel, to be real is to be rational; but it is sometimes overlooked that, for Hegel, to be rational is part of the dialectical movement towards what is going to happen. Hegelian reason is the reason of becoming (one may say, of the megatrends of the future). Advanced AI, based on stochastic methods, is real at least in this Hegelian sense: it is coming about through technological and conceptual progress.

In this paper I argued that conscious activity relies largely on neural markers that are not available to first-person awareness. This is true even of speech. Thinking and decisions take place ahead of awareness, while phenomenal awareness is mainly the halting functionality (Libet, 1993). Intuitive thinking requires complex unconscious processes – and such cognitive architectures are now implementable in AI. As long as AI proceeded through step-by-step logic programming, it did not go very far. The development of big-data-based AI is the gist of the present jump in AI. But the near-future AI – already manifest in many present instances of research and implementation – is what can be called stochastic AI and life-long-learning AI, which allow for the exponential development of machine thinking, thus creating an early stepping stone for the AGI revolution.

There is a tension between stochastic AI and the postulates to make the algorithms that run AI entirely human-readable and thus potentially controllable, which would be a drag on AI development if applied across the board. I propose a compromise: while procedures used in policy making and its application to human beings (such as the length and severity of incarcerations) ought to be transparent, the workings of AI engines do not – as long as they are predictable in outcomes and remain within the parameters acceptable for human agents in similar circumstances. This is like the situation with human decision-makers. While we may control the procedures they follow, we are not able to (and few think that we should in the future) understand the details of brain physiology that influence human thinking in its smallest details. Those processes are private, remaining under the hood of one's skull.


5.1. BICA4AGI

The movement called BICA (Biologically Inspired Cognitive Architectures) is now becoming Brain-Inspired Cognitive Architectures for Artificial Intelligence (BICA*AI). One can view Biological Intelligence (BI) as the counterpart to Artificial Intelligence (AI). I do not think that the distinction between the two is essential anymore. The most important term in the BICA definition is Cognitive Architectures (CA), which applies properly to both biological and 'artificial' intelligence, as well as to their mix (e.g. cyborgs). Thus, I am not impressed with the distinction between what's biological and what's artificial in the area of thinking and intelligence, though it remains relevant in the area of consciousness studies narrowly defined. It may be important at the technological level, since different teams have various competencies. Yet even in those domains, people familiar with bioengineering would likely agree that there is no substantial gap between them. We can engineer ITs within carbon-based substances as well as silicon-based ones, their mix and, in principle, other kinds of information processors. This may not be the case at the moral level, which I claim depends in part on the first-person stream of awareness of the moral patient (Boltuc, 2009), but that argument is beyond the current paper (Bołtuć, 2019).

Sometimes it is not clear what counts as a brain, e.g. in certain kinds of octopus, or in artificial neural networks. Even in humans, functions of the brain may be hard to separate from those of the rest of the CNS and from other bodily functions. Some impressive cognitive architectures, e.g. NARS (Wang, 2007), are purposely based not on the brain but on the body. Yet Brain-Inspired Cognitive Architectures for AI are what's up today; they are real in the Hegelian sense. And BICA4AGI – B-Inspired Cognitive Architectures for Artificial General Intelligence, where B can be interpreted as Biologically or as Brain, depending on one's theoretical and practical inclination within AI – is what requires the real jump from old-school computing to computing at the edge of chaos, where stochastic computing, broadly understood, enables human-level AI (HLAI) and, furthermore, AGI free of the predicaments of human evolutionary history (Sloman, 2010) – which opens the door for AGI tout court.

References

Arruda, A. I. (1989). Aspects of the historical development of paraconsistent logic. In G. Priest, R. Routley, & J. Norman (Eds.), Paraconsistent logic (pp. 99–130). Philosophia Verlag.
Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge University Press.
Block, N. (1995). Some concepts of consciousness. Sciences, 18(2).
Block, N. (2005a). Two neural correlates of consciousness. Trends in Cognitive Sciences, 9, 46–52.

Block, N. (2005b). The merely verbal problem of consciousness / Reply to Baars and Laureys. Trends in Cognitive Sciences, 9(6), 270.
Block, N. (2008). Phenomenal and access consciousness. Proceedings of the Aristotelian Society, New Series, 108, 289–317.
Boltuc, P. (2009). The philosophical issue in machine consciousness. International Journal of Machine Consciousness, 1, 155–176.
Boltuc, P. (2012). The engineering thesis in machine consciousness. Techné: Research in Philosophy and Technology, 16, 187–220.
Bołtuć, P. (2019). Consciousness for AGI. Procedia Computer Science, in press.
Boltuc, P., & Boltuc, M. (2019). Semantics beyond language, biologically inspired cognitive architectures. Advances in Intelligent Systems and Computing, 948, 36–47.
Boltuc, P. (2018). Strong semantic computing. Procedia Computer Science, 123, 98–103. https://www.sciencedirect.com/science/article/pii/S1877050918300176.
Boltuc, P. (2019). AI consciousness. Papers of the 2019 Towards Conscious AI Systems Symposium, Section 8 of the AAAI 2019 Spring Symposium Series (AAAI SSS-19), CEUR Workshop Proceedings. http://ceur-ws.org/Vol-2287/.
Bozşahin, C. (2018). Computers aren't syntax all the way down or content all the way up. Minds and Machines, 28(3), 543–567.
Brentano, F. (1973). Psychology from an empirical standpoint. Routledge & Kegan Paul.
Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. New York and Oxford: Oxford University Press.
Chalmers, D. J. (2018). The meta-problem of consciousness. Journal of Consciousness Studies, 25(9–10), 6–61.
Chisholm, R. M. (1993). Brentano on 'unconscious consciousness'. In R. Poli (Ed.), Consciousness, knowledge, and truth. Springer.
da Costa, N. C. A. (1995). On Jaśkowski's discussive logics. Studia Logica, 54, 33–60.
Davidson, D. (1982). Rational animals. Dialectica, 36(4), 317–327.
Davidson, D. (2005). Seeing through language. In Truth, language and history (pp. 127–141). Oxford: Clarendon Press.
Deutsch, D. (1985). Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings of the Royal Society, Series A, 400, 97–117.
Edelman, G. (1992). Bright air, brilliant fire: On the matter of the mind. Basic Books.
European Union (2019). Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
Floridi, L. (2014). The fourth revolution. Oxford University Press.
Franklin, S., Baars, B., & Ramamurthy, U. (2008). Phenomenally conscious robots? APA Newsletter on Philosophy and Computers, 08(1), 2–4.
Gilligan, C. (1982). In a different voice: Psychological theory and women's development. Harvard University Press.
Goertzel, B. (1999). Wild computing: Steps toward a philosophy of internet intelligence. https://goertzel.org/books/wild/contents.html.
Goertzel, B. (2006). The hidden pattern. Boca Raton: Brown Walker. ISBN 1-58112-989-0.
Goertzel, B. (2016). AGI revolution: An inside view of the rise of artificial general intelligence. Kindle.
Goertzel, B. (2017). Toward a formal model of cognitive synergy. arXiv:1703.04361v1 [cs.AI], 13 March.
Hamblen, M. (2019). Team seeks patents for inventions created by DABUS, an AI. Fierce Electronics, August 1. https://www.fierceelectronics.com/electronics/team-seeks-patents-for-inventions-created-by-dabus-ai.
Jaśkowski, S. (1969). Propositional calculus for contradictory deductive systems. Studia Logica, 24, 143–157.
Kant, I. (2007). Critique of pure reason. Penguin Books.
Kelley, T. D. (2014). Robotic dreams: A computational justification for the post-hoc processing of episodic memories. International Journal of Machine Consciousness, 109–124.
Kohlberg, L. (1981). Essays on moral development, Vol. 1: The philosophy of moral development.


Libet, B. (1993). Neurophysiology of consciousness. Springer.
Łukasiewicz, J. (1951). Aristotle's syllogistic from the standpoint of modern formal logic. Oxford: Clarendon Press.
Miller, K., & Larson, D. (2013). Measuring a distance: Humans, cyborgs, and robots. APA Newsletter on Philosophy and Computers, 20–24.
Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450.
O'Regan, K. (2011). Why red doesn't sound like a bell. Oxford University Press.
Prinz, J. (2012). The conscious brain: How attention engenders experience. Oxford University Press.
Rapaport, W. J. (2019). Computers are syntax all the way down: Reply to Bozşahin. Minds and Machines, 29(2), 227–237.
Sanz, R., & Bermejo-Alonso, J. (2019). Consciousness and understanding in autonomous systems. Stanford, CA: AAAI Spring Symposium, sec. 8. http://ceur-ws.org/Vol-2287/paper23.pdf.
Siegelmann, H. (2018). Lifelong learning machines (L2M). Defense Advanced Research Projects Agency. https://www.darpa.mil/staff/dr-hava-siegelmann.


Sloman, A. (2010). An alternative to working on machine consciousness. International Journal of Machine Consciousness, 2(1), 118.
Thaler, S. (1997). Device for autonomous generation of useful information. Patent 5659666; filed October 13, 1994; date of patent August 19, 1997. JUSTIA: https://patents.justia.com/patent/20150379394.
Thaler, S. (2006). Device for the autonomous bootstrapping of useful information. Patent 7454388; filed May 8, 2006; date of patent November 18, 20. JUSTIA: https://patents.justia.com/patent/20150379394.
Thaler, S. (2014). Semantic perturbation and consciousness. International Journal of Machine Consciousness, 6(2), 75–107.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Wang, P. (2007). From NARS to a thinking machine. In Proceedings of the 2007 AGI workshop: Advances in artificial general intelligence: Concepts, architectures and algorithms (pp. 75–93). Amsterdam: IOS Press.
