Awareness, Free Will, and Artificial Intelligence
In Part 5 I described how it is theoretically possible to model intelligence by observing its emergence from within a digitally simulated universe. The cybernetic model thus produced would eventually accumulate a near-infinite number of logic steps, yielding an empirical, if-then model of consciousness that would most likely bear some resemblance to the Constructal Pattern.
Yet there is more to consciousness than binary logic. An AI with sufficiently many logic steps and decision-making ability might have the information needed to become self-aware, but it wouldn't necessarily have the will or the inclination to do so.
A logic-based AI could never make independent decisions, raise questions, or behave spontaneously. It would have no motivation to ask, "What is the nature of my own existence?" unless explicitly instructed to. If a logic-based intelligence isn't issued an order, it has nothing to act on: null input produces null output, which is more like a daemon than life. Dreams and other sometimes-illogical things have a way of keeping us sane; without them we wouldn't be human, and we wouldn't have time to look back and self-improve. This is a good distinction to make, I think: humans dream, daemons don't. AI should.
So if we really want to reach the Singularity and build Strong AI (i.e. replicate human consciousness in silico), then we need to account not only for the straight lines of logic, but also for the meandering flows of feeling, chance, and intuition: an imagination, if you will, the ability to dream. Dreaming is, most essentially, a time for connecting ideas that have recently surfaced in the subconscious (within the past day). If we fail to account for this other half, this time for reintegration and the formation of mental connections, we get a potentially unstable result. This is why we need to move past the standard daemon-based model of AI and consider neuromorphic systems as platforms that might allow a virtual intelligence's mind to wander and 'dream', thus remediating daemon-related problems like 'rampancy' and overflow.
In our attempts to create virtual intelligence, we absolutely must be aware of our methods. Rather than attempting to create a form of artificial intelligence based only on binary decision-making (surely a disaster waiting to happen), let's get it right the first time: let's create something like us. That should be the only goal of advanced AI research: to understand and replicate human consciousness in silico, not merely to reduce it to numerical logic.
We can't create Strong AI from the bottom up with logic alone; we also need to create a subconscious, a dream state. For this, we could present the AI with a constant stream of 'random' input to increase its variance: basically, let its mind wander by feeding it permutations of its own 'thoughts', granting it a subconscious and stabilizing it over time. It needs to be alive; it needs to dream.
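As a toy illustration of this idea, here is a minimal sketch, with all names and the string-based 'thought' representation being hypothetical choices of my own: an agent that records what it 'thinks' while awake, then recombines random pairs of those recent thoughts while 'dreaming', feeding the permutations back into its own buffer.

```python
import random

class DreamingAgent:
    """Toy model: an agent that 'dreams' by recombining its own recent outputs."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.recent_thoughts = []  # the day's surfaced ideas (a 'subconscious' buffer)

    def think(self, stimulus):
        """Waking mode: a deterministic response to external input."""
        thought = f"response-to:{stimulus}"
        self.recent_thoughts.append(thought)
        return thought

    def dream(self, cycles=3):
        """Dreaming mode: recombine random pairs of recent thoughts into new
        permutations and feed them back, instead of waiting for external input."""
        dreams = []
        for _ in range(cycles):
            if len(self.recent_thoughts) < 2:
                break
            a, b = self.rng.sample(self.recent_thoughts, 2)
            blended = f"{a}+{b}"  # a crude 'connection' between two ideas
            self.recent_thoughts.append(blended)
            dreams.append(blended)
        return dreams
```

The point of the sketch is the feedback loop: each dream cycle enlarges the pool that later cycles draw from, so the agent's 'mental connections' compound over time rather than depending entirely on fresh input.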
More variance could be added to the system by letting the AI choose between a number of possible outputs for any given input and select the most 'optimal' outcome. The ensuing spontaneity would create the functional equivalent of free will, and perhaps even more than that. To add some structure to what might otherwise be random variance of little consequence, what if these 'random' decisions were directly influenced by the input stream, in the form of coordinated environmental feedback? What I'm essentially describing is an echo chamber of thought: a digital environment for the mind that builds itself in reaction to observation. I would argue that the mind needs this to grow. In any case, given enough 'random' inputs and feedback loops, the machine's logic could 'never' be predicted; it would have free will. Thus, spontaneous behavior (i.e. free will) can be written as binary instruction; the only requirements are an extensive ruleset (logic) and a constant stream of random inputs. The logic would come from the AI research discussed in Part 5. The input stream, or randomized environmental feedback, might be generated using cellular automata or genetic algorithms.
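That last suggestion can be sketched concretely. The following is an illustrative construction of my own, not an established algorithm: a Rule 30 cellular automaton (the rule Wolfram analyzes at length in A New Kind of Science) supplies the 'random' bit stream, and those bits perturb the scores of candidate outputs, so the selection is no longer fully predictable from the input alone.

```python
def rule30_stream(width=64):
    """Yield pseudo-random bits from the centre column of a Rule 30
    cellular automaton, starting from a single live cell."""
    cells = [0] * width
    cells[width // 2] = 1
    while True:
        yield cells[width // 2]
        # Rule 30: new cell = left XOR (centre OR right), wrapping at the edges
        cells = [cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]

def choose_output(candidates, score, bits, noise_weight=0.25):
    """Pick the 'optimal' candidate, with the bit stream perturbing each
    score so the choice cannot be reproduced without the same stream."""
    def noisy_score(c):
        noise = sum(next(bits) for _ in range(4)) / 4.0  # value in [0, 1]
        return score(c) + noise_weight * noise
    return max(candidates, key=noisy_score)
```

With a small `noise_weight` the perturbation only breaks near-ties between candidates; turning it up lets the 'environmental' stream override logic more often, which is exactly the logic-versus-spontaneity balance discussed in the notes below.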
Before this series concludes with Part 7, you might also want to check out this forum thread, where I discuss social and humanitarian responsibilities leading up to the Singularity. I think people tend to get distracted by abstract notions of the Singularity and machine intelligence, and forget about real-world problems in the process. As important as exponential change is, or may seem to be, we also need to keep it in perspective.
“We don’t want to invest in such distant futures that we forget about the present.”
Next entry: Universal Duality
Table of Contents
- Six Blind Men and an Elephant – “All religions, arts and sciences are branches of the same tree.”
- The Physics of Consciousness – Consciousness explained in terms of electromagnetism and information.
- The Holographic Universe – The behavior of photons may indicate that we live in a holographic universe.
- Simulation Theory – How to emulate consciousness on a computer by allowing it to evolve from scratch.
- Artificial Intelligence – How to create self-aware, free-willing artificial intelligence.
- Awareness and Free Will – How free will can arise from binary decision-making (i.e. pure logic).
- Unified Field Theory – Living systems balance entropy and negative entropy by employing a unique mode of parallel processing.
Notes
- If the AI had an innumerable number of choices, so many that any given input corresponded to a near-infinite number of probabilistic outcomes, then its behavior would indeed appear completely random.
- This is similar to how Stephen Wolfram describes free will in his book A New Kind of Science.
- "Entropy generation"
- Of course, the balance between random inputs and logic would need to be carefully controlled. Consciousness ultimately consists of the process of balancing extremes: logic and spontaneity, 'Yin' and 'Yang'. Too much in either direction and the system becomes unstable.