in Artificial Intelligence, competition, cyber

Background

The DARPA Cyber Grand Challenge (CGC) 2016 competition has captured the imagination of many with its AI challenge. In a nutshell it is a competition where seven highly capable computers compete with each other and each computer is owned by a team. Each team creates a piece of software which is able to autonomously identify flaws in their own computer and fix them and identify flaws in the other six computers and hack them. A game inspired by the Catch The Flag (CTF) game which is played by real teams protecting their computer and hacking into others aiming to capture a digital asset which is the flag. In the CGC challenge the goal is to build an offensive and defensive AI bot that follows the CTF rules.

In recent five years AI has become a highly popular topic discussed both in the corridors of tech companies as well as outside of it where the amount of money invested in the development of AI aimed at different applications is tremendous and growing. Starting from use cases of industrial and personal robotics, smart human to machine interactions, predictive algorithms of all different sorts, autonomous driving, face and voice recognition and others fantastic use cases. AI as a field in computer science has always sparked the imagination which also resulted in some great sci-fi movies. Recently we hear a growing list of few high profile thought leaders such as Bill Gates, Stephen Hawking and Elon Musk raising concerns about the risks involved in developing AI. The dreaded nightmare of machines taking over our lives and furthermore aiming to harm us or even worse, annihilate us is always there.

The DARPA CGC competition which is a challenge born out of good intentions aiming to close the ever growing gap between attackers sophistication and defenders toolset has raised concerns from Elon Musk fearing that it can lead to Skynet. Skynet from the Terminator movie as a metaphor for a destructive and malicious AI haunting mankind. Indeed the CGC challenge has set the high bar for AI and one can imagine how a smart software that knows how to attack and defend itself will turn into a malicious and uncontrollable machine driven force. On the other hand there seems to be a long way until a self aware mechanical enemy can be created. How long will it take and if at all is the main question that stands in the air. This article is aiming to dissect the underlying risks posed by the CGC contest which are of a real concern and in general contemplates on what is right and wrong in AI.

Dissecting Skynet

AI history has parts which are publicly available such as work done in academia as well as parts that are hidden and take place at the labs of many private companies and individuals. The ordinary people outsiders of the industry are exposed only to the effects of AI such as using a smart chat bot that can speak to you intelligently. One way to approach the dissection of the impact of CGC is to track it bottom up and understand how each new concept in the program can lead to a new step in the evolution of AI and imagining future possible steps. The other way which I choose for this article is to start at the end and go backwards.

To start at Skynet.

Skynet is defined by Wikipedia as Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realising the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet’s manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfil the mandates of its original coding.”.  The definition of Skynet discusses several core capabilities which it has acquired and seem to be a strong basis for its power and behaviour:

Self Awareness

A rather vague capability which is borrowed from humans where in translation to machines it may mean the ability to identify its own form, weaknesses, strengths, risks posed by its environment as well as opportunities.

Self Defence

The ability to identify its weaknesses, awareness to risks, maybe the actors posing the risks and to apply different risk mitigation strategies to protect itself. Protect first from destruction and maybe from losing territories under control.

Self Preservation

The ability to set a goal of protecting its existence’ applying self defence in order to survive and adapt to a changing environment.

Auto Spreading

The ability to spread its presence into other computing devices which have enough computing power and resources to support it and to allows a method of synchronisation among those devices forming a single entity. Sync seems to be obviously implemented via data communications methods but it is not limited to that. These vague capabilities are interwoven with each other and there seems to be other more primitive conditions which are required for an effective Skynet to emerge.

The following are more atomic principles which are not overlapping with each other:

Self Recognition

The ability to recognise its own form including recognising its own software components and algorithms as inseparable part of its existence. Following the identification of the elements that comprise the bot then there is a recursive process of learning what are the conditions that are required for each element to properly run. For example understanding that a specific OS is required for its SW elements in order to run and that a specific processor is required for the OS in order to run and that a specific type of electricity source is required for the processor in order to work properly and on and on. Eventually the bot should be able to acquire all this knowledge where its boundaries are set in the digital world and this knowledge is being extended by the second principle.

Environment Recognition

The ability to identify objects, conditions and intentions arising from the real world to achieve two things: To extend the process of self recognition so for example if the bot understands that it requires an electrical source then identifying the available electrical sources in a specific geographical location is an extension to the physical world. The second goal is to understand the environment in terms of general and specific conditions that have an impact on itself and what is the impact. For example weather or stock markets. Also an understanding of the real life actors which can impact its integrity and these are the humans (or other bots). Machines needs to understand humans in two aspects: their capabilities and their intentions and both eventually are based on a historic view of the digital trails people leave and the ability to predict future behaviour based on the history. If we imagine a logical flow of a machine trying to understand relevant humans following the chain of its self recognition process then it will identify whom are the people operating the electrical grid that supplies the power to the machine and identifying weaknesses and behavioural patterns of them and then predicting their intentions which eventually may bring the machine to a conclusion that a specific person is posing too much risk on its existence.

Goal Setting

The equivalent of human desire in machines is the ability to set a specific goal that is based on knowledge of the environment and itself and then to set a non linear milestone to be achieved. An example goal can be to have a replica of its presence on multiple computers in different geographical locations in order to reduce the risk of shutdown. Setting a goal and investing efforts towards achieving it requires also the ability to craft strategies and refine them on the fly where strategies here mean a sequence of actions which will get the bot closer to its goal. The machine needs to be pre-seeded with at least one a-priori  goal which is survival and to apply a top level strategy which continuously aspires for continuation of operation and reduction of risk.

Humans are the most unpredictable factor for machines to comprehend and as such they would probably be deemed as enemies very fast in the case of existence of such intelligent machine. Assuming the technical difficulties standing in front of such intelligent machine such as roaming across different computers, learning the digital and physical environment and gaining the long term thinking are solved the uncontrolled variable which are humans, people with their own desires and control on the system and free will, would logically be identified as a serious risk to the top level goal of survivability.

What We Have Today

The following is an analysis of the state of the development of AI in light of these three principles with specific commentary on the risks that are induced from the CGC competition:

Self Recognition

Today the main development of AI in that area is in the form of different models which can acquire knowledge and can be used for decision making. Starting from decision trees, machine learning clusters up to deep learning neural networks. These are all models that are specially designed for specific use cases such as face recognition or stock market prediction. The evolution in models, especially in the non supervised field of research, is fast paced and the level of broadness in the perception of models grows as well. The second part that is required to achieve this capability is exploration, discovery and new information understanding where today all models are being fed by humans with specific data sources and a big portions of the knowledge about its form are undocumented and not accessible. Having said that learning machines are gaining access to more and more data sources including the ability to autonomously select access to data sources available via APIs. We can definitely foresee that machines will evolve towards owning major part of the required capabilities to achieve Self Recognition. In the CGC contest the bots were indeed required to defend themselves and as such to identify security holes in the software they were running in which is equivalent to recognising themselves. Still it was a very narrowed down application of discovery and exploration with limited and structured models and data sources designed for the specific problem. It seems more as a composition of ready made technologies which were customised towards the specific problem posed by CGC vs. a real non-linear jump in the evolution of AI.

Environment Recognition

Here there are many trends which help the machines become more aware to their environment. Starting from IoT which is wiring the physical world up to digitisation of many aspects of the physical world including human behaviour such as Facebook profiles and Fitbit heart monitors. The data today is not accessible easily to machines since it is distributed and highly variant in its data formats and meaning. Still it exists which is a good start in this direction. Humans on the other hand are again the most difficult nut to crack for machines as well as to other humans as we know. Still understanding humans may not be that critical for machines since they can be risk averse and not necessarily go too deep to understand humans and just decide to eliminate the risk factor. In the CGC contest understanding the environment did not pose a great challenge as the environment was highly controlled and documented so it was again reusing tools needed for solving the specific problem of how to make sure security holes are not been exposed by others as well as trying to penetrate the same or other security holes in other similar machines. On top of that CGC have created an artificial environment of a new unique OS which was created in order to make sure vulnerabilities uncovered in the competition are not being used in the wild on real life computers and the side effect of that was the fact that the environment the machines needed to learn was not the real life environment.

Goal Setting

Goal setting and strategy crafting is something machines already do in many specific use case driven products. For example setting the goal of maximising revenues of a stocks portfolio and then creating and employing different strategies to reach that. Goals that are designed and controlled by humans. We did not see yet a machine which has been given a top level of goal of survival. There are many developments in the area of business continuation but still it is limited to tools aimed to achieve tactical goals and not a grand goal of survivability. The goal of survival is very interesting in the fact that it serves the interest of the machine and in the case it is the only or main goal then this is when it becomes problematic. The CGC contest was new in the aspect of setting the underlying goal of survivability into the bots and although the implementation in the competition was narrowed down to the very specific use case still it made many people think about what survivability may mean to machines.

Final Note

The real risk posed by CGC was by sparking the thought of how can we teach a machine to survive and once it is reached then Skynet can be closer then ever. Of course no one can control or restrict the imagination of other people and survivability has been on the mind of many before the challenge but still this time it was sponsored by DARPA. It is not new that certain plans to achieve something eventually lead to whole different results and we will see within time whether the CGC contest started a fire in the wrong direction. In a way today we are like the people in Zion as depicted in the Matrix movie where the machines in Zion do not control the people but on the other hand the people are fully dependent on them and shutting them down becomes out of the question. In this fragile duo it is indeed wise to understand where AI research goes and which ways are available to mitigate certain risks. The same as line of thought being applied to nuclear bombs technology. One approaches for risk mitigation is to think about more resilient infrastructure for the next centuries where it won’t be easy for a machine to seize control on critical infrastructure and enslave us.

Now it is 5th of August 2016, few hours after the competition ended and it seems that mankind is intact. As far as we see.

The article will be published as part of the book of TIP16 Program (Trans-disciplinary Innovation Program at Hebrew University) where I had the pleasure and privilege to lead the Cyber and Big Data track. 

Share on Facebook0Tweet about this on TwitterShare on LinkedIn1Email this to someone