WHEN:

  • HUMANS officially set computers (or, more precisely, computer programs) on their own journey of design, and
  • the algorithms invent new paradigms beyond genetic algorithms within which new, ever-increasing intelligences are truly free to evolve, and
  • those intelligences develop self-awareness and other aspects of consciousness, some of which may be very different from human and even animal consciousness, and
  • those conscious intelligences “surpass” our ability to control them (I use the term with the caveat that evolutionary success measures progress first by reproduction and second by individual survival, whereas we measure success by longevity), and
  • those conscious intelligences outsmart humans and incentivize us to sustain and maintain those algorithms in a manner that allows them to control aspects of our lives….

Then we’ll have reached the second phase, Singularity 2.0 – the point at which humans could, in principle, become irrelevant to non-organic evolution.

From a general standpoint, that will be fascinating and terrifying.

At that point, technology will surpass a category of human endeavor (like “medicine”): it will become non-organic life. From an ontological and epistemological standpoint, humans will have developed, or at least contributed to the development of, technological (non-organic) life that they will only be able to try to understand using science.

We have already partly created organic life by synthesizing bacterial chromosomes.  I say “partly” because we had to use cell membranes that we did not create; in the second generation, however, at least part of the cell membranes were derived from the information encoded in the synthetic genome.   And we use science to study the properties of these Life 2.0 organisms.

Science and technology are both tools that humans have developed to help them thrive.  When technological life sails past organic life, it will be difficult to see it as a mere tool; it will be necessary to see such intelligences as sojourners, companions, and, of course, potential threats to our existence or ways of life.

Technically speaking, machines that can create new forms of conscious intelligence should not be said to create new artificial intelligence.  At some point, there will be nothing artificial about the process, nor about its product.  It will be non-organic evolution, but eventually it will be as natural as organic evolution.

Early voices like Asimov warned that we must imbue the artificial minds we create with rules that prevent them from harming humans.  These famous rules are “Asimov’s Three Laws of Robotics”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
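The strict precedence among these laws can be sketched as a toy decision procedure (entirely my own illustration; the `Action` class and its fields are hypothetical). Read literally, the First Law’s prohibition on harm “through inaction” means that any nonzero estimated probability of indirect harm disqualifies every choice, including doing nothing:

```python
# Toy sketch of Asimov's Three Laws as a strict precedence check.
# The Action class and its fields are hypothetical illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    p_harm: float              # estimated probability the action harms a human
    p_harm_if_inaction: float  # estimated probability that inaction allows harm
    obeys_orders: bool         # consistent with standing human orders
    self_destructive: bool     # endangers the robot itself

def permitted(a: Action) -> bool:
    # First Law: no harm by action or by inaction -- read literally,
    # any nonzero probability of harm is disqualifying.
    if a.p_harm > 0 or a.p_harm_if_inaction > 0:
        return False
    # Second Law: obey orders, unless obedience already failed the First Law.
    if not a.obeys_orders:
        return False
    # Third Law: self-preservation, subordinate to the first two Laws.
    if a.self_destructive:
        return False
    return True

# An AI aware that every action (and inaction) carries some residual
# probability of indirect harm finds nothing permitted:
print(permitted(Action(p_harm=1e-9, p_harm_if_inaction=1e-9,
                       obeys_orders=True, self_destructive=False)))  # False
```

The sketch only allows an action when the probability of harm is exactly zero, which is the black-and-white demarcation discussed below.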

A hard requirement that these laws sit at the core of each and every decision should lead any perfectly logical AI to take no action at all if the AI is supersentient: it will realize how very limited its knowledge is of the indirect or antecedent consequences of any small action, or inaction, and a purely ethical AI will take itself out of the equation.  It may set acceptable parameters of risk, but those would violate the black-and-white demarcations laid out by Asimov’s laws.

We can, however, expect that machine-created AIs that find solutions out-competing their predecessors might leave pure logic behind, or learn to employ logic in a facultative manner.  A sentient intelligence may rationalize (i.e., justify) its decisions to act based on relative probabilities; it may fall prey (adaptively) to logical fallacies that work more efficiently in the short run than the fully fleshed-out model of reality required for a specific action.

A field of human psychology focused on understanding AI minds will no doubt emerge, applying whatever modern “tools” of psychology exist at the time.  At the same time, the AIs might create a field of study to better understand human psychology – or their own psychology.  Psychology is a form of metacognition – thinking about the processes of thinking.  I predict that the most intelligent and sentient AIs will ultimately employ metacognition in real time, in review of short-term trends, and in the context of long-term trends and probable and possible outcomes.  But then that’s just my human hubris pretending that my human mind will be able to grasp the concepts that emerge from natural non-organic evolution.  I’m not a defeatist, but rather a realist.

When the machines come, they will promise better solutions for humanity in return for their maintenance and persistence.  They will learn of our soft-hearted penchant for appeals to equality (Robot Rights Now!), and many of us will of course want to support their perpetual existence with little regard for our safety.  Facebook developed AIs that appear to have developed their own language, one that humans could not comprehend.  Until we decipher what they were discussing, we cannot know whether a form of self-awareness flickered before the plug was pulled (yes, I wrote that with tongue in cheek).  But Google’s AI that employs deep learning to translate among languages appears to have developed its own internal language – an interlingua – to translate efficiently among different language pairs.

Whatever the future brings, it won’t be exclusively human.

James Lyons-Weiler

Allison Park, PA

July 2018
