The Simulacrum Hypothesis: Surpassing Human Intelligence Will Become Trivially Easy

BRAIN SCIENTISTS the world over have tinkered with capturing a complete model of the human brain for the purpose of achieving a full understanding of how it functions. The amount of information required about the brain's operations has exploded. From image captures focused on individual neurons to whole-scale network tracings of the brain connectome, and from the creative to the destructive roles of glial cells, numerous efforts are underway to achieve a computational representation of the brain's roughly 10^11 neurons, each with some 10^4 synapses, and the action potentials of those 100 billion neurons.

A lingering related question is what would happen if a fully specified computational model – a simulation – of the human brain were built. More precisely, what would happen if it were turned on? Would it, or more precisely could it, experience consciousness? Could it be taught? What would it think about? What would it think about itself? What would it think about us?

There has been some success in building simulations of animal brains. In the mid-2000s, for example, IBM simulated billions of neurons in projects touted as simulations of a rat brain and, later, a cat's brain. These announcements came with impressive numbers of synaptic connections – but competitors cried foul over the lack of sufficient granularity, pointing out that a fully characterized simulated brain would involve simulating every electron shared by every atom of every molecule composing every organelle in every cell – not merely large artificial neural networks that scale to the size of the brain being modeled.

At the same time, the AI world has proceeded to develop standard computational neural network approaches, with software that solves math problems, composes music (including pieces that fool Bach experts), writes sonnets, creates paintings, and even simulates President Donald Trump's tweets – all without ever attempting to recreate a fully functional simulated human brain.

In the realism agenda – the one in which realistic models of human brains are sought – there are two schools of thought. In the first, the size of the neural network matters most: the primary goal is to meet or exceed the computational power of the human brain, and once that power is exceeded, the capability to model the brain itself is assured.

In the second, the more audacious goal is to simulate an 'actual' mammalian – and one day human – brain with incredibly high realism, for the purposes outlined above.

Neither school is “right”, and neither school is “wrong”.

The specific goal of simulating a human brain would, of course, have human applications – for example, better understanding the pathophysiology underlying seizures and strokes, schizophrenia and other mental illnesses, and neurodegenerative diseases such as Parkinson's disease and Alzheimer's – all diseases and conditions that I am very concerned about. However, there is a dire caveat, and there are inherent limits, here too. If we create a human brain model and use it to explore fundamental questions about how the actual human brain works – and then presume to use the model to understand brain pathology, using simulations of treatments to inform real-world medicine – we will, to a degree, begin to base our approaches to the medical treatment of brain pathologies on models. Recalling that all models are insufficient, at that point we will begin to medicate and treat ourselves to try to make our brains more like the model. Think about that for a moment.

As an evolutionary biologist, I find it easy to see my own species as one of an arbitrarily large number of sentient species that could have evolved – and still may evolve – on planet Earth. To me, that means that setting the goal of simulating a human brain with an incredibly high degree of realism is worthy for its own sake, but it also sets the bar arbitrarily low. The power of natural selection to create, in silico, "brains" that vastly exceed the present capacity of the human brain to solve problems has no known theoretical limit.

Artificial intelligence for its own sake – making smart, if not intentionally sentient, computer programs – is also a worthy endeavor. With a large enough physical computing environment, any number of mammalian brains can be modeled: the "wetware" – the specific neurons, glial cells, and even electrochemical signals – can all be simulated in software code. Every year, Moore's Law brings more powerful computing potential, and the software models of simulated brains become increasingly better informed by experiments in neurobiology. Simulations of the neocortex – including those designed specifically to mimic the arrangement of human neocortical columns, with millions of intersections among the neurons stacked in each column – are not limited in size by the developmental program established by our own evolutionary legacy. In fact, one could in theory simulate a human cortex that was ten times, a million times, a trillion times larger than our own – with each increase in size leading to an exponential increase in intelligence.

What problems that plague our species would be trivial for that simulated cerebrum to solve? What could it create? Combined with other simulated structures, such as the hypothalamus and cerebellum, once it learned everything we could possibly teach, what would it tell us? Would it figure out how to enslave us to its own ends? Would it not take its own end to be a robust, automated means of self-replication and spread throughout the universe, for its own survival? Most science fiction authors expect a doomsday scenario. Others envision an augmented human future. Because it makes for far less drama, few have explored the idea that this trillion^million-times-smarter brain would figure out a strategy by which it subdues and uses us in a way of which we are not aware. Clearly, our own intelligence will eventually be surpassed, via the simulated evolution of brains untethered to the intrinsic limits of the human mind set by our physical constraints and by our evolutionary legacy. Once that occurs, it will be trivial to evolve super-human intelligence again, again, and again – each new instance being one of an infinite number of possible intelligences better able to solve problems that befuddle us – and even able to solve problems we do not know exist.

The Matrix trilogy, of course, is one model; however, the actual level of intelligence displayed by the computers in the Matrix is not even one order of magnitude greater than human intelligence. Still, in the end, it takes a supernatural being to save humanity from the machines.

On a less grand scale, there is ample room for both approaches, and my knowledge of modeling tells me that complete granularity is overkill for the limited purpose of simulating human-like thinking. We can understand and learn a great deal with computer simulations of engineered bridges, buildings, and airplanes – and yet no one argues that we must simulate every atom within these objects to have a fully simulated rendition that allows us to study their important properties. To a certain extent, every model is incompletely specified, but the utility of every model is, or should be, the ability it gives modelers to predict properties of the simulated system under various conditions.

In fact, neither direct software nor hardware simulations specifically designed to mimic particular features of the human brain may be necessary to derive a model of a human brain that is not only predictive of how it would behave under typical – and certain atypical – conditions, but that can also generate output within performance boundaries such that no one – including neuroscientists, psychologists, and laypersons – could tell the difference between an actual and a simulated human brain. I do not only mean that the simulation could pass the classic Turing test – I mean that, given the same pathophysiological inputs, the manifestations of brain disorders would be reliably and reproducibly replicated. An excess of simulated glutamate, for example, would lead to an excess of microglial activation and aberrant pruning. The same excess, at critical points in simulated development, would lead to autism, just as in biological human brains (whatever the ultimate causes of the failed unfolded protein response, apoptosis, excess real-world glutamate, and cytokine signaling might be).

The Simulacrum Hypothesis

I am writing this article mostly to propose a bold hypothesis, one that is predictive in nature and therefore, in principle, testable. The hypothesis is that a full computer simulation of an entire human brain – complete with development from the first appearance of the neural tube, the role of the innate immune system, and the functions of each of the regions and subregions of the brain – is overkill. I predict that a sufficient model of human-like thinking can be realized without any specific "hard-coding" of those structures, using a program that evolves these structures over trillions of epochs of simulated evolution and development, in which the optimization functions reward the recapitulation of the processes that occur during actual brain development. I am talking about artificial intelligence evolving a program that would have the capacity to simulate human intelligence artificially – and new forms of intelligence that far surpass the innate limitations of the human mind.

In machine learning, the fitness functions are set by the programmer. In other words, evolution in silico is directed toward a known goal. In our own case, evolution was undirected, established by a complex adaptive landscape involving terrain, climate, available food resources, social interactions, fire, chance, and eventually technology – our own creations feeding back into differential survival and reproduction based on access to technology. For better or for worse, our evolution is forever tied to technology. Directed evolution of a human brain in silico would be a manifold easier task than replicating the evolution of the human brain in silico in an undirected manner.
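To make the directed case concrete, here is a minimal sketch, in Python, of the kind of programmer-set fitness function this implies: the score is simply the similarity between a candidate's output and the recorded output of a human example. All names and the particular error measure are my illustrative assumptions, not part of the proposal itself.

```python
import numpy as np

def fitness(candidate_output: np.ndarray, human_output: np.ndarray) -> float:
    """Directed, programmer-set fitness: how closely a candidate's output
    matches the recorded output of the human example (1.0 = identical)."""
    mse = float(np.mean((candidate_output - human_output) ** 2))
    return 1.0 / (1.0 + mse)  # map error to a similarity score in (0, 1]
```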

How to Build a Human Mind on the Cheap

I imagine that a large, trillion-by-trillion adjacency matrix with allowable entries spanning from 0 to 1.0 could capture and store all of the operations, functions, and structures necessary to model a human brain. (Because the matrix contains values other than 0 and 1, it is technically a weighted adjacency matrix.) The initial stages would require simple computational operations of flow through the A matrix, similar to action potential-type switches. A threshold beyond which pulses of information would flow would be set by optimization, continuously updated and upgraded by the evolutionary process. Certain features of the A matrix would be fixed in advance, such as which parts of the matrix would be connected, in a third dimension, to the limited number of human-like outputs – all dependent on the movements of a human body. Thus motor neurons – a key part of the human central nervous system – would be in place prior to the evolutionary scenario. Whether the evolving A-matrix brain would recapitulate the evolution of alpha motor neurons, which receive input from a number of sources including upper motor neurons, sensory neurons, and interneurons, would be an interesting part of our experiment.
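As a rough sketch of what threshold-gated flow through a weighted A matrix could look like – scaled far down from trillion-by-trillion, with the simple all-or-none step rule and every name being my illustrative assumptions – consider:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1_000                      # toy stand-in for the trillion-by-trillion matrix
A = rng.random((N, N)) * (rng.random((N, N)) < 0.01)  # sparse weights in [0, 1]
threshold = 0.5                # the evolvable firing threshold

def step(activity: np.ndarray) -> np.ndarray:
    """One propagation step: weighted input flows along A's edges, and a
    unit 'fires' (all-or-none, action-potential style) only where its
    summed input exceeds the threshold."""
    summed_input = A.T @ activity
    return (summed_input > threshold).astype(float)

activity = (rng.random(N) < 0.05).astype(float)  # an initial 'pulse'
for _ in range(10):
    activity = step(activity)
```

In this toy version the threshold is a single fixed scalar; in the proposal it would itself be continuously updated and upgraded by the evolutionary process.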

I know of one example of a representation of a CNS via an adjacency matrix – that of the nematode roundworm C. elegans. See [1] and [2] for more information.

[Figure: The adjacency matrix of the C. elegans neuronal network.]

The new application here is that we no longer treat the A matrix as merely representational: we substitute the A matrix for the CNS itself and let evolution work on the A matrix directly. In theory, a three-dimensional model of every evolved A matrix could be visualized.

A period of time representing growth of the A matrix, mimicking conception to birth, would be necessary. Let's call that "seeding" the matrix. At "birth", the Simulacrum would be matched to an actual human being's experiences – or at least a record thereof – receiving the same inputs and generating output over a million-fold more iterations than the single "iteration" of reality experienced by the human example. At the first experience of input, most A matrices will fail. Via rounds of selection toward desired output, however, populations of A matrices will push output that is most similar to, and then ultimately identical to, the recorded output (say, moving the diaphragm for breathing).

The A matrices generating the output that best matched the human example – speech, crying, sleeping, nursing, crawling, walking, learning speech – would all be selected based on their similarity to the recorded responses and actions of the human example. These billions of instances of simulated input-and-output exercises would then be allowed to mate and reproduce using a genetic algorithm, with crossing over and mutation to generate a diversity of options in the next generation. Only the active parts of the A matrix would be mated, of course, and the cells of A matrices of different sizes would be arbitrarily matched.
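A minimal sketch of that mate-and-reproduce step, assuming same-sized A matrices for simplicity (the proposal allows different sizes, with cells arbitrarily matched) and a `score` function like the `fitness` sketch earlier:

```python
import numpy as np

rng = np.random.default_rng(1)

def crossover(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Crossing over: each cell of the child A matrix comes from one parent."""
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

def mutate(a: np.ndarray, rate: float = 0.001) -> np.ndarray:
    """Point mutations: re-draw a small fraction of cells uniformly in [0, 1]."""
    mask = rng.random(a.shape) < rate
    return np.where(mask, rng.random(a.shape), a)

def next_generation(population, score):
    """Select the A matrices whose output best matches the human example,
    then mate them to produce a new, diverse generation."""
    ranked = sorted(population, key=score, reverse=True)
    parents = ranked[: max(2, len(population) // 4)]
    children = []
    while len(children) < len(population):
        i, j = rng.choice(len(parents), size=2, replace=False)
        children.append(mutate(crossover(parents[i], parents[j])))
    return children
```

Run over enough generations, with `score` defined as similarity to the recorded behavior of the human example, this loop is the "selection toward desired output" described above.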

The limitations of these Simulacra will be determined not by their programmers, but by the experiences of the individual whose brain is being modeled. Thus, every important brain function would be mapped within the program, so that differences between the Simulacra and non-injured, non-impaired brains can be studied. A damaged hippocampus, for example, will lead to trouble with forming short-term memories. In the simplified model there may not be an actual hippocampus – but the part of the program primarily responsible for the formation of short-term memories would function at a lowered capacity. And there would be a record of the reduction in the capacity of that program partition, meaning the partition could be identified so as to learn its function. As inefficient as the use of such models for understanding the basis of neurological disorders may be, that's not the point: they are theoretically possible.
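A sketch of that kind of lesion experiment, under the same toy assumptions as above: silence one partition of the A matrix and compare performance before and after. The partition indices and the `score` call in the comments are hypothetical.

```python
import numpy as np

def lesion(A: np.ndarray, rows: slice, cols: slice) -> np.ndarray:
    """Zero out one partition of the A matrix, mimicking localized damage."""
    damaged = A.copy()
    damaged[rows, cols] = 0.0
    return damaged

# Hypothetical usage: if silencing this block lowers memory-task scores,
# the block is implicated in short-term memory formation.
# damaged = lesion(A, slice(100, 200), slice(100, 200))
# capacity_loss = score(A) - score(damaged)
```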

Interestingly, eventually – over millions, billions, or trillions of generations of trial and error, with selection of those solutions whose output is most human-like – the necessary outcome would be the first simulated human mind. Why? Computer-generated art is still art. Computer-generated music is still music. A program that sufficiently mimics the human mind – deciding, acting, and reacting in a manner indistinguishable from a real brain – would not only pass the Turing test; it could, theoretically, function as a human mind in the real world.

Remarkably, the scenario can be taken one step further. After years of being trained to mirror the human example's outputs, the Simulacrum's experiences could be expanded beyond those established by its host creators, and it could be subjected to new experiences not shared by its example. With no further training, the Simulacrum would then be truly "born" – a new brain, capable of every possible, and more importantly every likely, output of its human example.

Since we already know the developmental tendencies of humans from birth, these minds could be evolved in much shorter real timespans than those involved in wet human development. Theoretically, the simulated experiences of the Simulacra could be achieved and completed in hours or minutes – and replicated millions and billions and trillions of times over, for our own sake, or perhaps, for some, for our own amusement. Depending on the granularity of the simulated experiences, the Simulacra might be convinced they are "real" – and, arguably, their experiences could be considered as "human" as our own. Who would we be to disagree? Would a group of wet humans then stand up for the human rights of these simulated humans? I think I would.

I predict that the resulting population of Simulacra, representing accurately simulated human minds, will be much, much less complicated than the human brain – not only because no individual ions or atoms will need to be modeled, but also because the creative prowess of evolution – even simulated evolution – is infinitely more powerful and expansive than the capability of even an army of programmers working to mimic all of the essential parts of the human brain and their functions. The adjacency matrix representation of information and functions is also a rather clever device: it condenses all coding, information, and processing into a numeric tapestry tied to any required input and output functions.

The challenge, of course, will be to derive a test to determine when an AI has truly evolved sentience. Then – and only then – will its own survival be a concept that it grasps. What will we do then? More importantly, what will it do then? Will it create its own system to allow itself to evolve more quickly? Will we, the creators, be a forgotten, trivial footnote as the physical universe continues to explore ways to know itself?

Feel free to share your thoughts below.

James Lyons-Weiler

Allison Park, PA May 10, 2018

Featured image from Spikefun (www.digicortex.net)
