Rent my Brain and Just Leave me Alone

Until AI is intelligent enough to replace humans in complex tasks there will be an interim stage, and that is the era of human brain rental. People have diverse intellectual capabilities, and many times these are not optimally exploited due to life circumstances. Other people, and corporations, that know how to make money often lack the brain power required to scale their business. Hiring more people into a company is complicated, and the efficiency of new hires declines with scale, and with good reason: all the personality and human traits, combined with those of others, disturb efficiency. So it makes sense that people will aspire to build tools for exploiting just the intelligence of people (preferably from remote) in the most efficient manner. The vision of the Matrix of course immediately comes into play, where people are wired into the system, though instead of being a battery source we would be a source of processing and storage. In the meanwhile we can already see sprouts of such thinking in different areas: Amazon Mechanical Turk, which lets you allocate a scalable pool of human resources and assign tasks to it programmatically; the evolution of communication mediums, which makes human-to-machine communication better; and active learning, a branch of AI which reinforces learning with human decisions.
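The active-learning idea mentioned above can be sketched in a few lines: a model asks a human "oracle" to label only the examples it is least certain about, so the scarce human brainpower is spent where it matters most. This is a toy illustration, not any specific library's API; the threshold "model" and the data are invented for the example.

```python
def train(labeled):
    """Fit a trivial threshold 'model': the midpoint between class means."""
    zeros = [x for x, y in labeled if y == 0]
    ones = [x for x, y in labeled if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

def uncertainty(x, threshold):
    """Distance to the decision boundary; smaller means less certain."""
    return abs(x - threshold)

def active_learning(pool, oracle, seed, rounds=5):
    """Repeatedly query the human only on the most ambiguous example."""
    labeled = list(seed)
    unlabeled = [x for x in pool if x not in dict(seed)]
    for _ in range(rounds):
        threshold = train(labeled)
        query = min(unlabeled, key=lambda x: uncertainty(x, threshold))
        labeled.append((query, oracle(query)))  # rent a brain for one label
        unlabeled.remove(query)
    return train(labeled)

# Hypothetical task: classify numbers as below/above 5; a human is the oracle.
oracle = lambda x: int(x >= 5)
pool = [0.5, 1.0, 2.0, 4.4, 4.6, 5.1, 6.0, 8.0, 9.5]
seed = [(0.5, 0), (9.5, 1)]
threshold = active_learning(pool, oracle, seed)
```

The point of the design is that the oracle is called only `rounds` times, however large the pool is; that is exactly the "rent my brain for a few decisions" economy described above.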

In a way it sounds like a horrible and unromantic future, but we have to admit it fits well with the growing desire of future generations for a convenient and prosperous life. Just imagine plugging in your brain for several hours a day and renting it out; you don't really care what it does during that time, and for the rest of the day you can happily spend the money you have earned.

Right and Wrong in AI


The DARPA Cyber Grand Challenge (CGC) 2016 competition has captured the imagination of many with its AI challenge. In a nutshell, it is a competition where seven highly capable computers compete with each other, each owned by a team. Each team creates a piece of software which is able to autonomously identify flaws in its own computer and fix them, and to identify flaws in the other six computers and hack them. The game is inspired by Capture The Flag (CTF), which is played by real teams protecting their own computer while hacking into others, aiming to capture a digital asset: the flag. In the CGC challenge the goal is to build an offensive and defensive AI bot that follows the CTF rules.

In the last five years AI has become a highly popular topic, discussed both in the corridors of tech companies and outside of them, and the amount of money invested in developing AI for different applications is tremendous and growing: use cases in industrial and personal robotics, smart human-to-machine interactions, predictive algorithms of all sorts, autonomous driving, face and voice recognition and other fantastic applications. AI as a field in computer science has always sparked the imagination, which has also resulted in some great sci-fi movies. Recently we hear a growing list of high-profile thought leaders such as Bill Gates, Stephen Hawking and Elon Musk raising concerns about the risks involved in developing AI. The dreaded nightmare of machines taking over our lives, and furthermore aiming to harm us or, even worse, annihilate us, is always there.

The DARPA CGC competition, a challenge born out of good intentions and aiming to close the ever-growing gap between attackers' sophistication and defenders' toolsets, has raised concerns from Elon Musk, who fears it could lead to Skynet: Skynet from the Terminator movies, as a metaphor for a destructive and malicious AI haunting mankind. Indeed the CGC challenge has set a high bar for AI, and one can imagine how smart software that knows how to attack and defend itself could turn into a malicious and uncontrollable machine-driven force. On the other hand, there seems to be a long way to go until a self-aware mechanical enemy can be created. How long it will take, and whether it will happen at all, is the main question that hangs in the air. This article aims to dissect the underlying risks posed by the CGC contest which are of real concern, and in general contemplates what is right and wrong in AI.

Dissecting Skynet

AI history has parts which are publicly available, such as work done in academia, as well as parts that are hidden, taking place in the labs of many private companies and individuals. Ordinary people outside the industry are exposed only to the effects of AI, such as using a smart chat bot that can speak to you intelligently. One way to approach dissecting the impact of CGC is to track it bottom up: to understand how each new concept in the program can lead to a new step in the evolution of AI, imagining future possible steps. The other way, which I choose for this article, is to start at the end and go backwards.

To start at Skynet.

Skynet is described by Wikipedia as follows: “Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realising the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet’s manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfil the mandates of its original coding.” This definition points to several core capabilities which Skynet has acquired and which seem to be a strong basis for its power and behaviour:

Self Awareness

A rather vague capability borrowed from humans; translated to machines, it may mean the ability to identify its own form, weaknesses and strengths, as well as the risks posed by its environment and the opportunities within it.

Self Defence

The ability to identify its weaknesses, an awareness of risks and perhaps of the actors posing them, and the application of different risk mitigation strategies to protect itself: first from destruction, and then perhaps from losing territories under its control.

Self Preservation

The ability to set the goal of protecting its own existence, applying self defence in order to survive and adapting to a changing environment.

Auto Spreading

The ability to spread its presence onto other computing devices which have enough computing power and resources to support it, and to allow a method of synchronisation among those devices so that they form a single entity. Sync would most obviously be implemented via data communications, but it is not limited to that. These vague capabilities are interwoven with each other, and there seem to be other, more primitive conditions which are required for an effective Skynet to emerge.

The following are more atomic principles which are not overlapping with each other:

Self Recognition

The ability to recognise its own form, including recognising its own software components and algorithms as an inseparable part of its existence. Following the identification of the elements that comprise the bot, there is a recursive process of learning the conditions required for each element to run properly: for example, understanding that a specific OS is required for its software elements to run, that a specific processor is required for the OS to run, that a specific type of electricity source is required for the processor to work properly, and so on. Eventually the bot should be able to acquire all of this knowledge, whose boundaries are set in the digital world; this knowledge is extended by the second principle.
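The recursive "what does each element need in order to run" process described above is essentially a walk over a dependency graph. A minimal sketch, where the dependency map itself is of course invented for illustration:

```python
# Hypothetical dependency map: each component -> what it needs to run.
DEPENDS_ON = {
    "bot_software": ["operating_system"],
    "operating_system": ["processor"],
    "processor": ["power_supply"],
    "power_supply": [],  # the boundary of the digital world
}

def discover_requirements(component, seen=None):
    """Recursively collect everything a component needs, depth first."""
    if seen is None:
        seen = set()
    for dep in DEPENDS_ON.get(component, []):
        if dep not in seen:
            seen.add(dep)
            discover_requirements(dep, seen)
    return seen

requirements = discover_requirements("bot_software")
```

The empty dependency list on `power_supply` is where the chain leaves the digital world; extending the map beyond that point is exactly the job of the second principle, environment recognition.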

Environment Recognition

The ability to identify objects, conditions and intentions arising from the real world, to achieve two things. The first is to extend the process of self recognition into the physical world: for example, if the bot understands that it requires an electrical source, then identifying the available electrical sources in a specific geographical location is such an extension. The second is to understand the environment in terms of general and specific conditions that have an impact on the bot, and what that impact is: weather or stock markets, for example, as well as the real-life actors which can impact its integrity, namely humans (or other bots). Machines need to understand humans in two aspects, their capabilities and their intentions, and both are eventually based on the digital trails people leave behind and on the ability to predict future behaviour from that history. If we imagine the logical flow of a machine trying to understand the relevant humans, following the chain of its self recognition process, it would identify who the people operating the electrical grid that supplies its power are, identify their weaknesses and behavioural patterns, and then predict their intentions, which may eventually bring the machine to the conclusion that a specific person poses too much risk to its existence.

Goal Setting

The equivalent of human desire in machines is the ability to set a specific goal based on knowledge of the environment and of itself, and then to set a non-linear milestone to be achieved. An example goal could be to have a replica of its presence on multiple computers in different geographical locations in order to reduce the risk of shutdown. Setting a goal and investing effort towards achieving it also requires the ability to craft strategies and refine them on the fly, where a strategy here means a sequence of actions which brings the bot closer to its goal. The machine needs to be pre-seeded with at least one a priori goal, survival, and with a top-level strategy which continuously aspires to continued operation and reduced risk.
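The "survival goal plus strategy refinement" loop described above can be sketched as a toy greedy agent. The strategies, their risk-reduction numbers and the starting risk level are all made up for the sake of the illustration:

```python
# Hypothetical strategies and their estimated effect on shutdown risk.
STRATEGIES = {
    "replicate_to_new_host": -0.30,
    "patch_own_vulnerabilities": -0.20,
    "do_nothing": 0.0,
}

def choose_strategy(risk):
    """Greedy top-level policy: if risk is low, rest; else reduce it fastest."""
    if risk < 0.1:
        return "do_nothing"
    return min(STRATEGIES, key=STRATEGIES.get)

def pursue_survival(risk, steps=3):
    """Refine the plan each step: re-evaluate risk, pick a strategy, apply it."""
    history = []
    for _ in range(steps):
        strategy = choose_strategy(risk)
        risk = max(0.0, risk + STRATEGIES[strategy])
        history.append((strategy, round(risk, 2)))
    return history

plan = pursue_survival(risk=0.9)
```

The interesting, and worrying, property is that nothing in the loop ever terminates the goal itself: survival is the fixed seed, and every strategy is judged only against it.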

Humans are the most unpredictable factor for machines to comprehend, and as such they would probably be deemed enemies very quickly by such an intelligent machine. Assuming the technical difficulties facing such a machine, such as roaming across different computers, learning the digital and physical environment and gaining long-term thinking, are solved, the remaining uncontrolled variable, humans, people with their own desires, their own control of the system and free will, would logically be identified as a serious risk to the top-level goal of survivability.

What We Have Today

The following is an analysis of the state of AI development in light of these three principles, with specific commentary on the risks induced by the CGC competition:

Self Recognition

Today the main development of AI in this area takes the form of different models which can acquire knowledge and be used for decision making: from decision trees and machine learning clusters up to deep learning neural networks. These are all models specially designed for specific use cases, such as face recognition or stock market prediction. The evolution of models, especially in the unsupervised field of research, is fast paced, and the breadth of perception of these models is growing as well. The second part required for this capability is exploration, discovery and the understanding of new information; today all models are fed by humans with specific data sources, and a big portion of the knowledge about the machine's own form is undocumented and inaccessible. Having said that, learning machines are gaining access to more and more data sources, including the ability to autonomously select data sources available via APIs. We can definitely foresee machines evolving towards owning a major part of the capabilities required for Self Recognition. In the CGC contest the bots were indeed required to defend themselves, and as such to identify security holes in the software they were running, which is equivalent to recognising themselves. Still, it was a very narrowed-down application of discovery and exploration, with limited and structured models and data sources designed for the specific problem. It seems more like a composition of ready-made technologies customised for the specific problem posed by CGC than a real non-linear jump in the evolution of AI.

Environment Recognition

Here there are many trends which help machines become more aware of their environment: from IoT, which is wiring up the physical world, to the digitisation of many aspects of it, including human behaviour, such as Facebook profiles and Fitbit heart monitors. The data today is not easily accessible to machines, since it is distributed and highly variable in format and meaning; still, it exists, which is a good start in this direction. Humans, on the other hand, are again the most difficult nut to crack, for machines as well as for other humans, as we know. Still, understanding humans may not be that critical for machines, since they can be risk averse: not going too deep into understanding humans, and simply deciding to eliminate the risk factor. In the CGC contest, understanding the environment did not pose a great challenge, as the environment was highly controlled and documented, so it was again a matter of reusing tools for solving the specific problem: making sure one's own security holes are not exposed by others, while trying to penetrate the same or other security holes in similar machines. On top of that, CGC created an artificial environment, a new, unique OS, in order to make sure vulnerabilities uncovered in the competition could not be used in the wild on real-life computers; a side effect was that the environment the machines needed to learn was not a real-life environment.

Goal Setting

Goal setting and strategy crafting are things machines already do in many specific, use-case-driven products: for example, setting the goal of maximising the revenue of a stock portfolio and then creating and employing different strategies to reach it. These are goals designed and controlled by humans; we have not yet seen a machine which has been given the top-level goal of survival. There are many developments in the area of business continuity, but they are still limited to tools aimed at tactical goals, not at the grand goal of survivability. The goal of survival is very interesting in that it serves the interests of the machine, and when it is the only or the main goal, that is when it becomes problematic. The CGC contest was new in setting the underlying goal of survivability into the bots, and although the implementation in the competition was narrowed down to a very specific use case, it still made many people think about what survivability may mean to machines.
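The "maximise portfolio revenue by employing different strategies" pattern mentioned above can be shown in miniature: give the machine a goal (most money at the end), a couple of candidate strategies, and let it pick the winner. The price series and both strategies are invented for illustration only.

```python
# Invented daily prices, for illustration only.
PRICES = [100, 102, 101, 105, 103, 108, 107, 111]

def buy_and_hold(prices, cash=1000.0):
    """Strategy 1: buy on day one, sell on the last day."""
    shares = cash / prices[0]
    return shares * prices[-1]

def momentum(prices, cash=1000.0):
    """Strategy 2: hold the asset only after an up day, else hold cash."""
    shares = 0.0
    for yesterday, today in zip(prices, prices[1:]):
        if yesterday < today and shares == 0.0:    # enter after a rise
            shares, cash = cash / today, 0.0
        elif yesterday > today and shares > 0.0:   # exit after a drop
            cash, shares = shares * today, 0.0
    return cash if shares == 0.0 else shares * prices[-1]

# The machine's "goal": pick whichever strategy ends with more money.
best = max([buy_and_hold, momentum], key=lambda strategy: strategy(PRICES))
```

Note how the goal here is entirely human-designed and human-bounded: the machine chooses between strategies, but it never questions or replaces the goal itself, which is exactly the gap between today's products and a survival-seeded bot.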

Final Note

The real risk posed by CGC was in sparking the thought of how we can teach a machine to survive; once that is achieved, Skynet may be closer than ever. Of course no one can control or restrict the imagination of other people, and survivability was on the minds of many before the challenge, but still, this time it was sponsored by DARPA. It is not new that plans to achieve one thing eventually lead to entirely different results, and we will see in time whether the CGC contest started a fire in the wrong direction. In a way we today are like the people of Zion as depicted in the Matrix movies: the machines in Zion do not control the people, but the people are fully dependent on them, and shutting them down is out of the question. In this fragile duo it is indeed wise to understand where AI research is going and which ways are available to mitigate certain risks, the same line of thought being applied to nuclear bomb technology. One approach to risk mitigation is to think about more resilient infrastructure for the coming centuries, where it won't be easy for a machine to seize control of critical infrastructure and enslave us.

Now it is the 5th of August 2016, a few hours after the competition ended, and it seems that mankind is intact. As far as we can see.

This article will be published as part of the book of the TIP16 Program (Trans-disciplinary Innovation Program at Hebrew University), where I had the pleasure and privilege to lead the Cyber and Big Data track.

Are Chat Bots a Passing Episode or Here to Stay?

Chat bots are everywhere. It feels like the early days of mobile apps, when you either knew someone who was building an app or knew many others planning to do so. Chat bots have their magic: a frictionless interface allowing you to chat naturally with someone. The main difference is that on the other side there is a machine, not a person. Still, someone as old as I am gets to wondering whether this is the end game of human-machine interaction, or just another evolutionary step on the long path of human-machine interactions.

How Did We Get Here?

I’ve noticed chat bots for quite a while, and they piqued my curiosity both in terms of the possible use cases and the underlying architecture. What interests me more is Facebook's and the other AI superpowers' ambitions towards them. Chat bots are indeed a next step in human-machine communications. We all know where history began: we initially had to communicate via a command line interface limited by a very strict vocabulary of commands, an interface reserved for computer geeks alone. The next evolutionary step was the big wave of graphical user interfaces: ugly at first, but later, in major leaps of improvement, making the user experience as smooth as possible, though still bounded by the options and actions available in a specific context of a specific application. Alongside graphical user interfaces we were introduced to search-like interfaces, which mix graphical elements with a command-line input allowing extended textual interaction; here the GUI serves primarily as a navigation tool. Then some other new human-machine interfaces were introduced, each evolving on its own track: the voice interface, the gesture interface (usually hands) and the VR interface. Each of these interaction paradigms uses different human senses and body parts to express communication to the machine, and the machine can understand you to a certain extent and communicate back. And now we have chat bots, and there is something about them which is different: in a way, it's the first time you can express yourself freely via texting and the machine will understand your intentions and desires. That's the premise. It does not mean every chat bot is able to respond to every request, as chat bots are confined to the logic that was programmed into them, but from the language-barrier point of view a new peak has been reached.
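The "confined to the logic programmed into them" point is easy to make concrete. Here is a deliberately naive intent matcher of the kind many simple bots are built on: free text in, but only a fixed, hand-written set of intents out. The intents and keywords are invented for this sketch.

```python
# Hypothetical intents, each with the trigger keywords the bot was programmed with.
INTENTS = {
    "check_weather": {"weather", "rain", "sunny", "forecast"},
    "book_table": {"book", "table", "reservation", "dinner"},
}

def detect_intent(message):
    """Score each intent by keyword overlap; return None if nothing matches."""
    words = set(message.lower().replace("?", "").split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

intent = detect_intent("Will it rain tomorrow?")
```

The user can type anything, which is the frictionless part, but the bot can only ever land on one of the intents in its table, which is the confinement: the free-text surface hides a very bounded brain.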

So are we now experiencing the end of the road for human-machine interactions? Last week I met a special woman named Zohar Urian (lucky Hebrew readers can enjoy her super smart blog about creativity, innovation, marketing and lots of other cool stuff), and she said that voice will be next, which makes a lot of sense. Voice has less friction than typing, its popularity in messaging is only growing, and the technology is almost there in terms of allowing free vocal expression that a machine can understand. Zohar's sentence echoed in my brain and made me go deeper into understanding the anatomy of the evolution of human-machine interfaces.

The Evolution of Human-Machine Interfaces 


Progress in human-machine interaction shows evolutionary patterns. Every new paradigm builds on capabilities from the previous one, and eventually the rule of survival of the fittest plays a big role: the winning capabilities survive and evolve. Thinking about it, it is very natural for things to evolve this way, as the human factor in this evolution is the dominating one. Every change in this evolution can be decomposed into four dominating factors:

  1. The machine brain: the intelligence within the machine, which contains the logic available to the human and also the capabilities that define the semantics and boundaries of communications.
  2. The communications protocol provided by the machine, such as the ability to decipher audio into words and sentences, enabling voice interaction.
  3. The way the human communicates with the machine, which is tightly coupled to the machine's communication protocol but represents the complementary role.
  4. The human brain.

The holy 4 factors:

Machine Brain <-> Machine Protocol <-> Human Protocol <-> Human Brain

In each paradigm shift there was a change in one or more factors:

| Paradigm Shift | Description | Machine Brain | Machine Protocol | Human Protocol | Human Brain |
|---|---|---|---|---|---|
| Command Line, 1st Gen | The first interface, used to send restricted commands to the computer by typing into a textual screen | Dumb; restricted to a set of commands and a selection of options per system state | Textual | Finger typing | Smart |
| Graphical User Interfaces | A 2D interface controlled by a mouse and keyboard, allowing text input and the selection of actions and options | Dumb; restricted to a set of commands and a selection of options per system state | 2D positioning and textual | 2D hand movement and finger actions, as well as finger typing | Smart |
| Adaptive Graphical User Interfaces | Same as the previous, though here the GUI is more flexible in its possible input, thanks also to situational awareness of the human context (location, etc.) | Getting smarter: able to offer different sets of options based on profiling of the user's characteristics, but still limited to a set of options and to 2D positioning and textual inputs | 2D positioning and textual | 2D hand movement and finger actions, as well as finger typing | Smart |
| Voice Interface, 1st Gen | The ability to identify content represented as audio and to translate it into commands and input | Dumb; restricted to a set of commands and a selection of options per system state | Listening to audio and matching content within the audio track | A restricted set of voice commands | Smart |
| Gesture Interface | The ability to identify physical movements and translate them into commands and selections of options | Dumb; restricted to a set of commands and a selection of options per system state | Visual reception and content matching within a video track | Physical movement of specific body parts in a prescribed manner | Smart |
| Virtual Reality | A 3D interface with the ability to identify a full range of body gestures and turn them into commands | A bit smarter, but still restricted to a selection from a set of options per system state | Movement reception via sensors attached to the body, and projection of peripheral video | Physical movement of specific body parts in free form | Smart |
| AI Chat Bots | A natural language detection capability which identifies, within supplied text, the rules of human language and turns them into commands and input | Smarter and more flexible thanks to AI capabilities, but still restricted to the options and capabilities of a certain domain | Textual | Finger typing in free form | Smart |
| Voice Interface, 2nd Gen | Same as the previous, combining the voice interface with natural language processing | Same as the previous | Identification of language patterns and constructs in the audio content, and their translation into text | Free speech | Smart |
| What's next? | ? | ? | ? | ? | Smart |


There are several phenomena and observations arising from this semi-structured analysis:

  • The use of combinations of communication protocols, such as voice and VR, will extend the range of communications between humans and machines even without changing anything in the machine brain.
  • With time, more and more human senses and physical interactions become available for computers to understand, which extends the boundaries of communications. Up until today smell has not gone mainstream, nor has touch; I am pretty sure we will see them in the near future.
  • The human brain always stays the same. Furthermore, the rest of the chain always strives to match the human brain's capabilities. It can be viewed as a funnel limiting the human brain from fully expressing itself digitally, a funnel which widens with time.
  • An interesting question is whether at some point the human brain will get stronger, if communication with machines becomes boundless and AI becomes stronger.
  • We have not yet witnessed a serious leap which removed one of the elements in the chain; that I would call a revolutionary step (while still behaving in an evolutionary manner). Maybe the identification of brain waves and their real-time translation into a protocol understandable by a machine will be such a leap, removing the need to translate thoughts into some intermediate medium.
  • As the machine brain becomes smarter with each evolutionary step, the magnitude of expression grows bigger, so there is progress even without creating a more expressive communication protocol.
  • Chat bots, from a communications point of view, are in a way a jump back to the initial command line protocol, though the magnitude of the smartness of today's machine brains makes them a different thing. So it is really about the progress of AI, not about chat bots.

    I may have missed some interfaces; apologies, I'm not an expert in that area :)

Now to The Answer

So, the answer to the main question: chat bots indeed represent a big step in streamlining natural language processing for identifying user intentions in writing. Combined with the fact that users' favourite method of communication nowadays is texting, that makes for powerful progress. Still, the main thing that thrills here is the development of AI, and that is sustainable across all communication protocols. So, in simple words, chat bots are just an addition to the arsenal of communication protocols between humans and machines, and we are far from seeing the end of this evolution. From the Facebook and Google point of view, these are new interfaces to their AI capabilities, which grow stronger every day thanks to increased usage.

Food for Thought

If one conscious AI meets another conscious AI in cyberspace, will they communicate via text, voice or something else?