Risks of Artificial Intelligence on Society

Random Thoughts on Cyber Security, Artificial Intelligence, and Future Risks at the OECD Event - AI: Intelligent Machines, Smart Policies

It is the end of the first day of a fascinating event on artificial intelligence, its impact on societies, and how policymakers should act upon what seems like a once-in-a-lifetime technological revolution. As someone rooted deeply in the world of cybersecurity, I wanted to share my point of view on what the future might hold.

The Present and Future Role of AI in Cyber Security and Vice Versa

Every day we witness remarkable new results in the field of AI, and still it seems we have only scratched the surface. Developments that have reached a certain level of maturity can be seen mostly in object and pattern recognition, part of the greater field of perception, and in different branches of reasoning and decision making. AI has already entered the cyber world via defense tools, where most of the applications we see detect malicious behavior in programs and network activity, or apply a first level of reasoning to the information overload in security departments, helping prioritize incidents. AI has far greater potential contributions in other fields of cybersecurity, both existing and emerging:

Talent Shortage

A big industry-wide challenge where AI can be a game changer is the scarcity of cybersecurity professionals. Today there is a significant shortage of cybersecurity professionals, who are required for tasks ranging from maintaining a company's security configuration to responding to security incidents. ISACA predicts a shortage of two million cybersecurity professionals by 2019. AI-driven automation and decision making have the potential to handle a significant portion of the tedious tasks professionals fulfill today, reducing the workload to the jobs that require the touch of a human expert.
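
To make the automation idea concrete, here is a minimal sketch of AI-assisted incident triage: combining a few signals into a priority score so that human experts see the riskiest incidents first. The signal names, weights, and incident records are hypothetical illustrations, not a description of any real product.

```python
# Illustrative sketch: scoring security incidents so analysts handle the
# riskiest ones first. All features and weights are hypothetical.

def score_incident(incident):
    """Combine a few hypothetical signals into one priority score."""
    weights = {
        "asset_criticality": 0.4,   # how important the affected system is
        "threat_confidence": 0.35,  # how sure the detector is
        "spread_potential": 0.25,   # how easily the threat propagates
    }
    return sum(weights[k] * incident[k] for k in weights)

def triage(incidents, top_n=3):
    """Return the top-N incidents by descending priority score."""
    return sorted(incidents, key=score_incident, reverse=True)[:top_n]

incidents = [
    {"id": "A", "asset_criticality": 0.9, "threat_confidence": 0.2, "spread_potential": 0.1},
    {"id": "B", "asset_criticality": 0.5, "threat_confidence": 0.9, "spread_potential": 0.8},
    {"id": "C", "asset_criticality": 0.1, "threat_confidence": 0.3, "spread_potential": 0.2},
]
for inc in triage(incidents, top_n=2):
    print(inc["id"], round(score_incident(inc), 3))
```

In a real deployment the hand-picked weights would be replaced by a model trained on analysts' past triage decisions; the point is only that the tedious ranking step is automated while the expert keeps the final call.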

Pervasive Active Intelligent Defense

The extension into active defense is inevitable, and AI has the potential to address a significant portion of the threats that deterministic solutions cannot handle properly today; it is most effective against automated threats with high propagation potential. An efficient embedding of AI inside active defense will take place in all system layers, such as the network, operating systems, hardware devices, and middleware, forming a coordinated, intelligent defense backbone.

The Double-Edged Sword

A yet-to-emerge threat will be cyber attacks which are themselves powered by AI. The tools, algorithms, and expertise of artificial intelligence are widely accessible, and cyber attackers will not refrain from abusing them to make their attacks more intelligent and faster. When this threat materializes, AI will be the only possible mitigation. Such attacks will be fast, agile, and of a magnitude that existing defense tools have not yet experienced. A new genre of AI-based defense tools will have to emerge.

Privacy at Risk

Consumer privacy as a whole is on a slippery slope: more and more companies collect information on us, both structured data such as demographic information and behavioral patterns studied implicitly while we use digital services. Extrapolate the amount of data collected with the new capabilities of big data, in conjunction with the multitude of new devices that will enter our lives under the category of IoT, and we reach an unusually high number of data points per person. Large amounts of personal data distributed across different vendors, residing on their central systems, increase our exposure and create greenfield opportunities for attackers to abuse and exploit us in unimaginable ways. Tackling this risk requires both regulation and the use of different technologies such as blockchain, and AI technologies also have a role. The ability to monitor what is collected on us, possibly moderating what is actually collected vs. what should be collected with regard to the rendered services, and quantifying our privacy risk, is a task for AI.

Intelligent Identities

In recent years we have seen, at an ever-accelerating pace, new methods of authentication and, correspondingly, new attacks breaking those methods. Most authentication schemes are based on a single aspect of interaction with the user to keep the user experience as frictionless as possible. AI can play a role in creating robust and frictionless identification methods which take into account vast amounts of historical and real-time multi-faceted interaction data to accurately deduce the person behind the technology. AI can contribute to our safety and security in the future far beyond this short list of examples. Areas where the number of data points increases dramatically and automated decision-making under uncertainty is required are the right spot for AI as we know it today.
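
As an illustration of multi-faceted identification, the sketch below compares a live session against a user's historical behaviour profile and flags large deviations. The signals (typing speed, session hour) and the data are hypothetical, and a real system would use far richer models.

```python
import statistics

# Illustrative sketch of frictionless, multi-signal identification:
# compare a live interaction against a user's historical behaviour.
# Signal names and values are hypothetical.

def behaviour_profile(history):
    """Per-signal mean and standard deviation from past sessions."""
    signals = history[0].keys()
    return {
        s: (statistics.mean(h[s] for h in history),
            statistics.pstdev(h[s] for h in history) or 1.0)
        for s in signals
    }

def identity_risk(profile, session):
    """Average absolute z-score: how far the session deviates from habit."""
    devs = [abs(session[s] - mean) / std for s, (mean, std) in profile.items()]
    return sum(devs) / len(devs)

history = [
    {"typing_speed": 310, "session_hour": 9},
    {"typing_speed": 295, "session_hour": 10},
    {"typing_speed": 305, "session_hour": 9},
]
profile = behaviour_profile(history)
print(identity_risk(profile, {"typing_speed": 300, "session_hour": 9}))  # low: likely the same user
print(identity_risk(profile, {"typing_speed": 120, "session_hour": 3}))  # high: challenge with another factor
```

A low score lets the user through without friction; a high score triggers a stronger, explicit authentication step.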

Is Artificial Intelligence Worrying?

The underlying theme in many AI-related discussions is fear, a very natural reaction to a transformative technology which has played a role in many science fiction movies. Breaking down the horror, we see two parts: the fear of change, which is inevitable as AI is indeed going to transform many areas of our lives, and the more primal fear of the emergence of soulless machines aiming to annihilate civilization. I see the threats, or opportunities, staged into different phases: the short term, the medium term, the long term, and the really long term.

The short-term

The short term practically means the present, and the primary concerns are in the area of hyper-personalization, which in simple terms means all the algorithms that get to know us better than we know ourselves: an extensive private knowledge base exploited towards goals we never dreamt of. Take, for example, the whole concept of micro-targeting on advertising and social networks, as we witnessed in the recent elections in the US. Today it is possible to build an intelligent machine that profiles citizens for demographic, behavioral, and psychological attributes. At a second stage, the machine can exploit the micro-targeting capability available on the advertising networks to deliver personalized messages disguised as adverts, where the content and design of the adverts are adapted automatically to each person with the goal of changing the public state of mind. It happened in the US and can happen everywhere, which poses a severe risk to democracy. The root of this short-term threat resides in the loss of truth, as we are bound to consume most of our information from digital sources.

The medium-term

We will witness a big wave of automation which will disrupt many industries, on the assumption that whatever can be automated, whether it requires logical or physical effort, will eventually be automated. This wave will have a dramatic impact on society, many times improving our lives, as in the detection of diseases, which can become faster and more accurate without human error. These changes across industries will also have side effects which will challenge society, such as increasing economic inequality, mostly hurting those who are already weak. It will widen the gap between knowledge workers and others and will further intensify the new inequality based on access to information: people with access to information will have a clear advantage over those who don't. It is quite difficult to predict whether the impact in some industries will be short-term, with workers flowing to other sectors, or whether it will cause overall stability problems; it is a topic that should be studied further for each industry expecting a disruption.

The longer term

We will see more and more intelligent machines that hold the power of life and death over humans: an autonomous car that can kill someone on the street, or an intelligent medicine dispenser that can kill a patient. The threat is driven by malicious humans who will hack the logic of such systems. Many of the smart machines we are building can be abused to give superpowers to cyber attackers. It is a severe problem, as protection against such threats cannot be achieved by adding controls into the artificial intelligence itself: the risk comes from intelligent humans with malicious intentions and great capabilities.

The real long-term

This threat still belongs to science fiction: the case where machines turn against humanity while owning the power to cause harm and to preserve themselves. From the technology point of view, such an event could happen, even today, if we decided to put our fate into the hands of a malicious algorithm that can preserve itself while having access to capabilities that can harm us. The risk here is that society will build AI for good purposes while other humans abuse it for other purposes, which will eventually spiral out of everyone's control.

What Policy Makers Should Do To Protect Society

Before addressing some specific directions, a short discussion is required on the limits of policymakers' power in the world of technology and AI. AI is practically a genre of techniques, mostly software-driven, and more and more individuals around the globe are equipping themselves with the capability to create software and, later, to work on AI. In a fashion very similar to the written word, software is the new way to express oneself, and aspiring to control or regulate that is destined to fail. The same goes for the exchange of ideas. Policymakers should understand these newly changed boundaries, which dictate new responsibilities as well.

Areas of Impact

Private Data

One area where central intervention can become a protective measure for citizens is the way private data is collected, verified, and, most importantly, used. Without data, most AI systems cannot operate, and it can be an anchor of control.

Cyber Crime & Collaborative Research

Another area of intervention should be the way cybercrime laws are enforced, where there are missing parts in the puzzle of law enforcement, such as attribution technologies. Today, attribution is a field of cybersecurity that suffers from under-investment, as it is in a way without commercial viability. Centralized investment is required to build the foundations of attribution into the future digital infrastructure. There are other areas in the cyber world where investment in research and development is in the interest of the public and not of a single commercial company or government, which calls for joint research across nations. One fascinating area of research could be how to use AI in regulation itself, especially in its enforcement, understanding that humans' reach in a digital world is too short for effective implementation. Another idea is building accountability into AI, where we will be able to record decisions taken by algorithms and hold them accountable for those decisions. Documentation of those decisions should reside in the public domain while preserving the privacy of the vendors' intellectual property. Blockchain, as a trusted distributed ledger, can be the perfect tool for saving such evidence of truth about decisions taken by machines, evidence that can stand in court. An example project in this field is Operation Serenata de Amor, a grassroots open-source project built to fight corruption in Brazil by analyzing public expenses and looking for anomalies using AI.
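
The accountability idea can be sketched as a minimal hash-chained log of algorithmic decisions, in the spirit of a blockchain ledger: each entry commits to the previous one, so altering history is detectable. The field names and the example decisions are hypothetical.

```python
import hashlib
import json

# Illustrative sketch of "accountability for algorithms": an append-only,
# hash-chained log of decisions taken by a machine. Tampering with any
# recorded decision breaks the chain and is detectable on verification.

class DecisionLedger:
    def __init__(self):
        self.entries = []

    def record(self, algorithm_id, decision, rationale):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "algorithm": algorithm_id,
            "decision": decision,
            "rationale": rationale,
            "prev": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def verify(self):
        """True if no recorded decision was altered after the fact."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

ledger = DecisionLedger()
ledger.record("loan-model-v2", "deny", "income below threshold")
ledger.record("loan-model-v2", "approve", "income above threshold")
print(ledger.verify())  # True: the chain is intact
ledger.entries[0]["decision"] = "approve"  # tamper with history
print(ledger.verify())  # False: tampering breaks the chain
```

A public, distributed version of such a ledger is what would let evidence about machine decisions stand in court while the model internals stay private.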

Central Design

A significant paradigm shift policymakers need to take into account is the long strategic change from centralized systems to distributed technologies, as they present far fewer vulnerabilities. A roadmap of centralized systems that should be transformed into distributed ones should eventually be studied and created.

Challenges for Policy Makers

  • Today AI advancement is considered a competitive frontier among countries, and this leads to a state where many developments are kept secret. This path leads to a loss of control over technologies and especially over their potential future abuse beyond the original purpose. The competitive phenomenon creates a serious challenge for society as a whole. It is not clear why people treat weapons so much more harshly than advanced information technology, which can eventually cause more harm.
  • Our privacy is abused by market forces pushing for profit optimization where consumer protection is at the bottom of priorities. Conflicting forces at play for policymakers.
  • People across the world are different in many aspects while AI is a universal language and setting global ethical rules vs. national preferences creates an inherent conflict.
  • The question of ownership and accountability of algorithms in a world where algorithms can create damage is an open one with many diverse opinions. It gets complicated since the platforms are global and the rules many times are local.
  • What alternatives are there beyond the basic income idea for the millions who won't be part of the knowledge ecosystem, as it is clear that not every person who loses a job will find a new one? Pre-emptive thinking should be conducted to prevent market turbulence in disrupted industries. An interesting question is how growth in the planet's population impacts this equation.
The main point I took from today is to be careful when designing AI tools for a specific purpose, and to consider how they can be exploited to achieve other means. UPDATE: Link to my story on the OECD Forum Network.

Will Artificial Intelligence Lead to a Metaphorical Reconstruction of The Tower of Babel?

The story of the Tower of Babel (or Babylon) has always fascinated me: God felt seriously threatened by humans if only they would all speak the same language. To prevent that, God confused the languages spoken by the people on the tower and scattered them across the earth. Regardless of personal religious beliefs about whether it happened or not, the underlying theory of growing power when humans interconnect is intriguing, and we live in times when this truth is evident. Writing, print, the Internet, email, messaging, globalization, and social networks all connect humans, connections which dramatically increase humanity's competence on many different frontiers. The very development of science and technology can be attributed to communication among people; as Isaac Newton once said, he was "standing on the shoulders of giants". Still, our spoken languages are different, and although English has become a de facto language for doing business in many parts of the world, there are many languages across the globe, and the communication barrier is still there. History has also seen multiple efforts to create a unified language, such as Esperanto, which eventually did not work. Transforming everyone to speak the same language seems almost impossible, as language is taught at a very early age, and changing that requires a level of synchronization, cooperation, and motivation which does not exist. Even taking into account the recent, highly impressive developments in natural language processing by computers, achieving real-time translation, the presence of the medium will always interfere: a channel in the middle creating conversion overhead and loss of context and meaning.

Artificial intelligence may be on a path to change that, reversing the story of the Tower of Babel. Different emerging fields in AI have the potential to merge and turn into a platform used for communicating with others without going through the process of lingual expression and recognition:

Avatar to Avatar

One direction it may happen is that our avatar, our digital residual image on some cloud, will be able to communicate with other avatars in a unified and agnostic language. Google, Facebook, and Amazon today build complex profiling technologies aimed at understanding users' needs, wishes, and intentions; currently they do that in order to optimize their services. Adding to these capabilities the means to express intentions and desires and, on the other side, the capability to understand them can lead to the avatar-to-avatar communication paradigm. It will take a long time until these avatars reflect our true selves in real time, but still, many communications can take place even before then. As an example, let's say my avatar knows what I want for my birthday, and my birthday is coming soon. My friend's avatar can at any point ask my avatar what I want to get for my birthday, and my avatar can respond in a very relevant manner.
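
The birthday exchange above can be sketched as two profile-holding agents exchanging structured intents instead of natural language; everything here, from the class to the intent names, is a hypothetical illustration of the paradigm.

```python
# Illustrative sketch of avatar-to-avatar communication: two agents
# exchange intents in a shared, language-agnostic message format.
# The profile contents and intent names are hypothetical.

class Avatar:
    def __init__(self, owner, profile):
        self.owner = owner
        self.profile = profile  # what the platform has learned about the owner

    def ask(self, other, intent):
        """Send a structured intent; the peer answers from its profile."""
        return other.answer({"from": self.owner, "intent": intent})

    def answer(self, message):
        return self.profile.get(message["intent"], "unknown")

dana = Avatar("dana", {"birthday_wish": "noise-cancelling headphones"})
friend = Avatar("sam", {})
print(friend.ask(dana, "birthday_wish"))  # the gift idea, no conversation needed
```

The point is that no spoken language is involved: the intent "birthday_wish" means the same thing to every avatar, whatever language its owner speaks.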

Direct Connection

The second path that can take place is in line with the direction of Elon Musk's Neuralink concept or Facebook's brain integration idea. Here the brain-to-world connectors will be able not only to output our thoughts to the external world in a digital way but also to understand other people's thoughts and translate them back to our brain: brain-to-world-to-brain. The interim format of translation matters, though an efficient one can probably be achieved. One caveat in this direction is the assumption that our brain is structured in an agnostic manner, based on abstract concepts, and is not made of actual language constructs, the constructs a person used in order to learn about the world in their own language. If each brain's wiring is subjective to the individual's constructs of understanding, the digestion of others' thoughts will be impossible.

Final Thought

A big difference from the times of Babylon is that there are so many more humans today than back then, which makes the potential of such wiring explosive.

Softbank eating the world

Softbank acquired Boston Dynamics, the maker of four-legged robots, alongside the secretive Schaft, a maker of two-legged robots. Softbank, the perpetual acquirer of emerging leaders, has made a foray into artificial life by diluting its stakes in media and communications and setting a stronghold across the full supply chain of artificial life. The chain starts with chipsets, where ARM was acquired, but then a quarter of the holdings were divested, since Google (TPU) and others have shown that specialized processors for artificial life are no longer a stronghold of giants such as Intel. The next move was acquiring a significant stake in Nvidia. Nvidia is the leader in the general-purpose AI processing workhorse, but more interesting for Softbank are its themed vertical endeavors, such as the package for autonomous driving. These moves set a firm stance at the two ends of the supply chain: the processors and the final products. It lays down a perfect position for creating a Tesla-like company (through holdings) that can own the newly emerging segment of artificial creatures. It remains to be seen what the initial market for these creatures would be, whether the consumer market or defense, though their position in the chipset domain will allow them to make money either way. The big question is what would be the next big acquisition target in AI. It has to be a major anchor in the supply chain, right between the chipsets and the final products, and such an acquisition will reveal the ultimate intention as to which artificial creatures we will see first coming into reality. A specialized communications infrastructure for communicating with the creatures efficiently (maybe their satellite activity?) as well as some cloud processing framework would make sense. P.S. The shift from media into AI is a good hint at which market has already matured and which one is emerging. P.S. What does it say about Alphabet that they sold Boston Dynamics? P.S. I am curious to see their stance towards patents in the world of AI.

Rent my Brain and Just Leave me Alone

Until AI is intelligent enough to replace humans in complex tasks, there will be an interim stage: the era of human brain rental. People have diverse intelligence capabilities, and many times these are not optimally exploited due to living circumstances. Other people and corporations which know how to make money often lack the brainpower required to scale their business. Hiring more people into a company is complicated, and the efficiency of new hires decelerates with scale, with good reason: all the personality and human traits, combined with those of others, disturb efficiency. So it makes sense that people will aspire to build tools for exploiting just the intelligence of people (better from remote) in the most efficient manner. The vision of the Matrix immediately comes into play, where people are wired into the system, and instead of being a battery source we will be a source of processing and storage. In the meanwhile, we can already see sprouts of such thinking in different areas: Amazon Mechanical Turk, which allows you to allocate a scalable amount of human resources and assign tasks to them programmatically; the evolution of communication mediums, which make human-to-machine communication better; and active learning, a branch of AI which reinforces learning with human decisions. In a way it sounds like a horrible and unromantic future, but we have to admit it fits well with the growing desire of future generations for a convenient and prosperous life: just imagine plugging in your brain for several hours a day, hiring it out, not really caring what it does at that time, and happily spending the money you have earned during the rest of the day.
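
A minimal sketch of programmatic human-brainpower allocation in the spirit of Mechanical Turk might look as follows: tasks are dispatched to human workers round-robin and answers are aggregated by majority vote. This is an in-memory stand-in I made up for illustration, not the real MTurk API.

```python
from collections import deque

# Illustrative sketch of allocating human brainpower programmatically.
# The worker pool, task format, and consensus rule are all hypothetical.

class HumanTaskPool:
    def __init__(self, workers):
        self.workers = deque(workers)
        self.results = {}

    def submit(self, task_id, payload):
        """Assign the task to the next available human, round-robin."""
        worker = self.workers[0]
        self.workers.rotate(-1)
        return worker, payload

    def report(self, task_id, worker, answer):
        """A human worker reports back the completed task."""
        self.results.setdefault(task_id, []).append((worker, answer))

    def consensus(self, task_id):
        """Majority vote across workers, a simple quality control."""
        answers = [a for _, a in self.results[task_id]]
        return max(set(answers), key=answers.count)

pool = HumanTaskPool(["alice", "bob", "carol"])
for _ in range(3):
    worker, _ = pool.submit("label-img-1", {"image": "cat_or_dog.jpg"})
    pool.report("label-img-1", worker, "cat" if worker != "bob" else "dog")
print(pool.consensus("label-img-1"))  # majority answer wins
```

The callers never interact with an individual; they rent aggregate human judgment through an API, which is exactly the "brain rental" pattern described above.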

Right and Wrong in AI

Background

The DARPA Cyber Grand Challenge (CGC) 2016 competition captured the imagination of many with its AI challenge. In a nutshell, it is a contest where seven highly capable computers compete, and each computer is owned by a team. Each team creates a piece of software which can autonomously identify flaws in its own computer and fix them, and identify flaws in the other six computers and hack them. The game is inspired by Capture The Flag (CTF), which is played by real teams protecting their computer and hacking into others, aiming to capture a digital asset, the flag. In the CGC challenge, the goal is to build an offensive and defensive AI bot that follows the CTF rules.

In the last five years, AI has become a highly popular topic, discussed both in the corridors of tech companies and outside of them, and the amount of money invested in the development of AI for different applications is tremendous and growing: industrial and personal robotics, smart human-to-machine interactions, predictive algorithms of all sorts, autonomous driving, face and voice recognition, and other fantastic use cases. AI as a field of computer science has always sparked the imagination, which has also resulted in some great sci-fi movies. Recently we have heard a growing list of high-profile thought leaders, such as Bill Gates, Stephen Hawking, and Elon Musk, raising concerns about the risks involved in developing AI. The dreaded nightmare of machines taking over our lives and, furthermore, aiming to harm us or, even worse, annihilate us is always there.

The DARPA CGC competition, a challenge born out of good intentions aiming to close the ever-growing gap between attackers' sophistication and defenders' toolsets, has raised concerns from Elon Musk, who fears it could lead to Skynet: Skynet from the Terminator movies, as a metaphor for a destructive and malicious AI haunting mankind. Indeed the CGC challenge has set a high bar for AI, and one can imagine how smart software that knows how to attack and defend itself could turn into a malicious and uncontrollable machine-driven force. On the other hand, there seems to be a long way to go until a self-aware mechanical enemy can be created. How long it will take, and whether it will happen at all, is the main question that stands in the air. This article aims to dissect the underlying risks posed by the CGC contest which are of real concern and, in general, contemplates what is right and wrong in AI.

Dissecting Skynet

AI history has parts which are publicly available, such as work done in academia, as well as parts that are hidden and take place in the labs of many private companies and individuals. Ordinary people outside the industry are exposed only to the effects of AI, such as using a smart chatbot that can speak to you intelligently. One way to approach the dissection of the impact of CGC is to track it bottom-up and understand how each new concept in the program can lead to a new step in the evolution of AI, imagining possible future steps. The other way, which I choose for this article, is to start at the end and go backward.

To start at Skynet.

Skynet is described by Wikipedia as follows: "Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realising the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet's manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfil the mandates of its original coding." This definition of Skynet mentions several core capabilities which it has acquired and which seem to be a firm basis for its power and behaviour:

Self Awareness

A rather vague skill borrowed from humans, which in translation to machines may mean the ability to identify its own form, weaknesses, strengths, and the risks posed by its environment, as well as opportunities.

Self Defence

The capacity to identify its shortcomings, awareness of risks, categorization of actors as agents of risk, and the taking of different risk-mitigation measures to protect itself: first from destruction and later from losing territories under its control.

Self Preservation

The ability to set a goal of protecting its own existence, applying self-defence to survive and adapting to a changing environment.

Auto Spreading

The capacity to spread its presence onto other computing devices which have enough computing power and resources to support it, together with a method of synchronization among those devices that forms them into a single entity. Sync seems obviously to be implemented via data communications, but it is not limited to that. These vague capabilities are interwoven with each other, and there seem to be other, more primitive conditions which are required for an active Skynet to emerge.

The following are more atomic principles which do not overlap with each other:

Self-Recognition

The ability to recognize its own form, including recognizing its software components and algorithms as an inseparable part of its existence. Following the identification of the elements that comprise the bot, there is a recursive process of learning the conditions required for each element to run properly: for example, understanding that a particular OS is required for its software components to run, that a specific processor is needed for the OS to run, that a specific type of electricity source is required for the processor to work appropriately, and so on. Eventually, the bot should be able to acquire all this knowledge within the boundaries of the digital world; the second principle extends this knowledge.

Environment Recognition

The ability to identify objects, conditions, and intentions arising from reality in order to achieve two things. The first is to broaden the process of self-recognition into the physical world: for example, if the bot understands that it requires an electrical source, then identifying the available electrical sources in a particular geographical location is such an extension. The second goal is to understand the environment in terms of general and specific conditions that have an impact on the bot itself, and what their implications are, for example the weather or the stock markets, as well as an understanding of the real-life actors which can affect its integrity, namely humans (or other bots). Machines need to understand humans in two aspects, their capabilities and their intentions, and both are eventually based on a historic view of the digital trails people leave and the ability to predict future behaviour from that history. If we imagine the logical flow of a machine trying to understand the relevant humans by following the chain of its self-recognition process, it will identify who the people operating the electrical grid that supplies its power are, identify their weaknesses and behavioural patterns, and then predict their intentions, which may eventually bring the machine to the conclusion that a specific person poses too much risk to its existence.

Goal Setting

The equivalent of human desire in machines is the ability to set a specific goal based on knowledge of the environment and of itself, and then to establish nonlinear milestones to be achieved. An example goal can be to have a replica of its presence on multiple computers in different geographical locations to reduce the risk of shutdown. Setting a goal and investing effort towards achieving it also requires the ability to craft strategies and refine them on the fly, where strategies here mean sequences of actions which get the bot closer to its goal. The machine needs to be pre-seeded with at least one a priori goal, survival, and to apply a top-level strategy which continuously aspires to the continuation of operation and the reduction of risk.
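
The goal-and-strategy loop described here can be sketched abstractly: a machine pre-seeded with a goal metric greedily switches between candidate strategies according to which one currently advances the goal most. All names, strategies, and numbers are hypothetical toy values.

```python
# Minimal sketch of goal-driven behaviour: a pre-seeded goal, candidate
# strategies, and a loop that keeps whichever strategy makes the most
# progress. Everything here is a hypothetical abstraction.

def pursue(goal_level, strategies, steps=5):
    """Greedily apply whichever strategy best advances the goal metric."""
    history = []
    for _ in range(steps):
        # Evaluate each candidate strategy against the current state.
        name, effect = max(strategies.items(), key=lambda s: s[1](goal_level))
        goal_level = min(1.0, goal_level + effect(goal_level))
        history.append((name, round(goal_level, 2)))
    return history

# Hypothetical strategies for an abstract "redundancy" goal: each returns
# the expected gain given how much of the goal is already achieved.
strategies = {
    "replicate": lambda g: 0.3 * (1 - g),   # strong early, fades once redundant
    "harden":    lambda g: 0.1,             # steady, small gain
}
for step in pursue(0.0, strategies):
    print(step)
```

Even this toy loop shows the "refine on the fly" behaviour: it starts by replicating and, once replication yields diminishing returns, switches to hardening, with no human choosing the switchover point.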

Humans are the most unpredictable factor for machines to comprehend, and as such they would probably be deemed enemies very fast should such an intelligent machine exist. Assuming the technical difficulties facing such a machine, such as roaming across different computers, learning the digital and physical environment, and gaining long-term thinking, are solved, the uncontrolled variable, humans, people with their own desires, free will, and control over the system, would logically be identified as a serious risk to the top-level goal of survivability.

What We Have Today

The following is an analysis of the state of AI development in light of these three principles, with specific commentary on the risks induced by the CGC competition:

Self Recognition

Today the leading development of AI in this area is in the form of different models which can acquire knowledge and be used for decision making, from decision trees and machine learning clusters up to deep learning neural networks. These are all models specially designed for specific use cases, such as face recognition or stock market prediction. The evolution of models, especially in the unsupervised field of research, is fast-paced, and the breadth of perception of the models grows as well. The second part required to achieve this capability is exploration, discovery, and the understanding of new information, where today all models are fed by humans with specific data sources, and significant portions of the knowledge about a machine's own form are undocumented and inaccessible. Having said that, learning machines are gaining access to more and more data sources, including the ability to autonomously select access to information sources available via APIs. We can definitely foresee that machines will evolve towards owning a significant part of the capabilities required to achieve self-recognition. In the CGC contest the bots indeed needed to defend themselves and, as such, to identify security holes in the software they were running, which is equivalent to recognising themselves. Still, it was a very narrowed-down application of discovery and exploration, with limited and structured models and data sources designed for the particular problem. It seems more a composition of ready-made technologies customised for the particular issue posed by CGC than a real nonlinear jump in the evolution of AI.

Environment Recognition

Here there are many trends which help machines become more aware of their surroundings: from IoT, which is wiring the physical world, up to the digitisation of many aspects of the physical world, including human behaviour, such as Facebook profiles and Fitbit heart monitors. The data today is not easily accessible to machines, since it is distributed and highly variant in format and meaning; still, it exists, which is a good start in this direction. Humans, on the other hand, are again the most difficult nut to crack for machines, as they are for other people. Still, understanding people may not be that critical for machines, since they can be risk averse: rather than going too deep into understanding humans, they can simply decide to eliminate the risk factor. In the CGC contest, understanding the environment did not pose a great challenge, as the environment was highly controlled and documented, so here again ready-made tools were reused to solve the particular problems of making sure security holes were not being exposed by others and trying to penetrate the same or other security holes in similar machines. On top of that, CGC created an artificial environment with a new, unique OS, set up to make sure vulnerabilities uncovered in the competition could not be used in the wild on real-life computers; a side effect was that the environment the machines needed to learn was not the real-life environment.

Goal Setting

Goal setting and strategy crafting are things machines already do in many specific use-case-driven products; for example, setting the goal of maximizing the revenues of a stock portfolio and then creating and employing different strategies to reach it, goals designed and controlled by humans. We have not yet seen a machine given the top-level goal of survival. There are many developments in the area of business continuity, but they are still limited to tools aimed at tactical goals rather than a grand goal of survivability. The goal of survival is fascinating in that it serves the interest of the machine, and when it is the only or primary goal, that is when it becomes problematic. The CGC contest was new in setting the underlying goal of survivability into the bots, and although the implementation in the competition was narrowed down to a very particular use case, it still made many people think about what survivability may mean to machines.

Final Note

The real risk posed by CGC was sparking the thought of how we can teach a machine to survive; once that is achieved, Skynet may be closer than ever. Of course, no one can control or restrict the imagination of other people, and survivability has been on many minds before the challenge, but this time it was sponsored by DARPA. It is not new that plans to achieve one thing eventually lead to entirely different results, and time will tell whether the CGC contest started a fire in the wrong direction. In a way, we today are like the people of Zion as depicted in the Matrix movies: the machines in Zion do not control the people, but the people are entirely dependent on them, and shutting them down is out of the question. In this fragile duo, it is indeed wise to understand where AI research is going and which ways are available to mitigate certain risks, the same line of thought applied to nuclear weapons technology. One approach to risk mitigation is to think about more resilient infrastructure for the coming centuries, where it won't be easy for a machine to seize control of critical infrastructure and enslave us.

It is now the 5th of August 2016, a few hours after the competition ended, and it seems that humanity is intact. As far as we can see.

The article will be published as part of the book of the TIP16 Program (Trans-disciplinary Innovation Program at Hebrew University), where I had the pleasure and privilege to lead the Cyber and Big Data track.

Are Chat Bots a Passing Episode or Here to Stay?

Chat bots are everywhere. It feels like the early days of mobile apps, where you either knew someone who was building an app or many others planning to do so. Chat bots have their magic: a frictionless interface allowing you to chat naturally, the main difference being that on the other side there is a machine and not a person. Still, someone as old as I am gets to wondering whether this is the end game of human-machine interaction or just another evolutionary step along the way.

How Did We Get Here?

I’ve noticed chat bots for quite a while, and they piqued my curiosity concerning the possible use cases as well as the underlying architecture. What interests me more are the ambitions of Facebook and the other AI superpowers towards them. Chat bots are indeed the next step in human-machine communications. We all know where history began: initially we had to communicate via a command line interface limited to a very strict vocabulary of commands, an interface reserved for the computer geeks alone. The next evolutionary step was the big wave of graphical user interfaces, ugly at first but improving in significant leaps, making the user experience as smooth as possible while still bounded by the options and actions available in a specific context in a particular application. Alongside graphical user interfaces, we were introduced to search-like interfaces, a mix of graphical user interface elements with a command line input allowing extensive textual interaction; here the GUI serves primarily as a navigation tool. Then some other new human-machine interfaces were introduced, each evolving on its own track: the voice interface, the gesture interface (usually hands) and the VR interface. Each of these interaction paradigms uses different human senses and body parts to express communication to the machine, where the machine can understand you to a certain extent and communicate back. And now we have chat bots, and there's something about them which is different. In a way, it's the first time you can express yourself freely via texting and the machine will understand your intentions and desires. That's the premise. It does not mean each chat bot can respond to every request, as chat bots are confined to the logic that was programmed into them, but from a language-barrier point of view, a new peak has been reached. So are we now experiencing the end of the road for human-machine interactions?
Last week I met an extraordinary woman named Zohar Urian (lucky Hebrew readers can enjoy her super-smart blog about creativity, innovation, marketing and lots of other cool stuff), and she said that voice would be next, which makes a lot of sense. Voice has less friction than typing, its popularity in messaging is only growing, and the technology is almost there in terms of allowing free vocal expression that a machine can understand. Zohar's sentence echoed in my brain and made me go deeper into understanding the anatomy of the evolution of human-machine interfaces.

The Evolution of Human-Machine Interfaces 

The progress in human-machine interaction has evolutionary patterns. Every new paradigm builds on capabilities from the previous one, and eventually survival of the fittest plays a significant role, where the winning capabilities survive and evolve. It is very natural for it to grow this way, as the human factor in this evolution is the dominant one. Every change in this evolution can be decomposed into four dominating factors:
  1. The machine brain, the intelligence within the machine, which contains the logic available to the human but also defines the semantics and boundaries of communication.
  2. The communication protocol provided by the machine, such as the ability to decipher audio into words and sentences, enabling voice interaction.
  3. The way the human communicates with the machine, which is tightly coupled with the machine's communication protocol but plays the complementary role.
  4. The human brain.
The four factors form a chain: Machine Brain <-> Machine Protocol <-> Human Protocol <-> Human Brain. In each paradigm shift, there was a change in one or more of these factors.
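The four-factor chain can be sketched as a small data model. The field names and the two sample paradigms below are my own illustrative encoding, not a standard taxonomy:

```python
from dataclasses import dataclass

# Each interaction paradigm is a point in the four-factor chain:
# Machine Brain <-> Machine Protocol <-> Human Protocol <-> Human Brain.
@dataclass
class InteractionParadigm:
    name: str
    machine_brain: str     # intelligence and boundaries of the machine's logic
    machine_protocol: str  # how the machine receives communication
    human_protocol: str    # how the human expresses communication
    human_brain: str = "smart"  # constant across every paradigm shift

def changed_factors(old: InteractionParadigm, new: InteractionParadigm) -> list[str]:
    """List which of the four factors changed between two paradigms."""
    fields = ["machine_brain", "machine_protocol", "human_protocol", "human_brain"]
    return [f for f in fields if getattr(old, f) != getattr(new, f)]

cli = InteractionParadigm(
    "Command line", "dumb, fixed command set", "textual", "finger typing")
chatbot = InteractionParadigm(
    "AI chatbot", "smarter, domain-restricted AI", "textual", "free-form typing")

# The chatbot shift changes the brain and the human protocol, while the
# machine protocol is still plain text.
print(changed_factors(cli, chatbot))  # ['machine_brain', 'human_protocol']
```

Modelling it this way makes the paradigm comparisons in the next section mechanical: each shift is just a diff over the four fields, and the human-brain field never changes.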

Paradigms

Command Line 1st Generation
The first interface, used to send restricted commands to the computer by typing them on a textual screen.
  • Machine Brain: Dumb and restricted to a set of commands and selection of options per system state
  • Machine Protocol: Textual
  • Human Protocol: Finger typing
  • Human Brain: Smart
Graphical User Interfaces
A 2D interface controlled by a mouse and keyboard, allowing text input and selection of actions and options.
  • Machine Brain: Dumb and restricted to a set of commands and selection of options per system state
  • Machine Protocol: 2D positioning and textual
  • Human Protocol: 2D hand movement and finger actions, as well as finger typing
  • Human Brain: Smart
Adaptive Graphical User Interfaces
Same as the previous one, though here the GUI is more flexible in its possible input, thanks in part to situational awareness of the human context (location, etc.).
  • Machine Brain: Getting smarter and able to offer a different set of options based on profiling of the user's characteristics, though still limited to a set of options and to 2D positioning and textual inputs
  • Machine Protocol: 2D positioning and textual
  • Human Protocol: 2D hand movement and finger actions, as well as finger typing
  • Human Brain: Smart
Voice Interface 1st Generation
The ability to identify content represented as audio and translate it into commands and input.
  • Machine Brain: Dumb and restricted to a set of commands and selection of options per system state
  • Machine Protocol: Listening to audio and content matching within the audio track
  • Human Protocol: A restricted set of voice commands
  • Human Brain: Smart
Gesture Interface
The ability to identify physical movements and translate them into commands and selection of options.
  • Machine Brain: Dumb and restricted to a set of commands and selection of options per system state
  • Machine Protocol: Visual reception and content matching within the video track
  • Human Protocol: Physical movement of specific body parts in a certain manner
  • Human Brain: Smart
Virtual Reality
A 3D interface with the ability to identify a full range of body gestures and translate them into commands.
  • Machine Brain: A bit smarter but still restricted to selection from a set of options per system state
  • Machine Protocol: Movement reception via sensors attached to the body and projection of peripheral video
  • Human Protocol: Physical movement of specific body parts in free form
  • Human Brain: Smart
AI Chatbots
A natural language detection capability which can identify, within supplied text, the constructs of human language and translate them into commands and input.
  • Machine Brain: Smarter and more flexible thanks to AI capabilities, but still restricted to a selection of options and capabilities within a certain domain
  • Machine Protocol: Textual
  • Human Protocol: Finger typing in free form
  • Human Brain: Smart
Voice Interface 2nd Generation
Same as the previous one, but combining the voice interface with natural language processing.
  • Machine Brain: Same as the previous one
  • Machine Protocol: Identification of language patterns and constructs from audio content and translation into text
  • Human Protocol: Free speech
  • Human Brain: Smart
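The AI Chatbots paradigm above (free-form text in, but a machine brain still restricted to a certain domain) can be illustrated with a minimal intent matcher. Keyword overlap stands in for real natural language understanding, and the intents and phrases are invented for the example:

```python
import re

# Hypothetical intents a single chat bot was programmed with; anything
# outside this set falls through, no matter how freely the user writes.
INTENTS = {
    "check_balance": {"balance", "account", "money"},
    "opening_hours": {"hours", "open", "close"},
}

def detect_intent(message: str) -> str:
    """Map a free-form message to the best-matching programmed intent."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    best, best_overlap = "fallback", 0
    for intent, keywords in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

print(detect_intent("When do you open today?"))      # opening_hours
print(detect_intent("What is my account balance?"))  # check_balance
print(detect_intent("Tell me a joke"))               # fallback
```

The third example is the crux: the human protocol is completely free, but the machine brain only recognises what it was built for, which is exactly the "confined to the logic programmed into them" caveat.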
What’s next?

Observations

Several phenomena and observations emerge from this semi-structured analysis:
  • Combining communication protocols, such as voice and VR, will extend the range of communications between humans and machines even without changing anything in the computer brain.
  • Over time, more and more human senses and physical interactions become available for computers to understand, which extends the boundaries of communications. To date, smell and touch have not gone mainstream; we will pretty surely see them in the near future.
  • The human brain always stays the same. Furthermore, the rest of the chain always strives to match the human brain's capabilities. The chain can be viewed as a funnel limiting the human brain from fully expressing itself digitally, and over time the funnel gets wider.
  • An interesting question is whether at some point the human brain will get stronger, once communications to machines are boundless and AI is stronger.
  • We have not yet witnessed a serious leap which removed one of the elements in the chain; that I would call a revolutionary step (everything so far has behaved in an evolutionary manner). Perhaps the identification of brain waves and their real-time translation into a protocol understandable by a machine will be such a leap, removing the need to translate thoughts into some intermediate medium.
  • As the machine brain becomes smarter in each evolutionary step, the magnitude of expression grows bigger, so there is progress even without creating a more expressive communication protocol.
  • Chat bots, from a communications point of view, are in a way a jump back to the initial command line protocol, though the smartness of machine brains nowadays makes them a different thing. So it is really about the progress of AI, not chat bots. I may have missed some interfaces; apologies, I am not an expert in that area :)

Now to The Answer

So, the answer to the main question: chat bots indeed represent a big step in streamlining natural language processing for identifying user intentions in writing. Combined with the fact that texting is users' favorite method of communication nowadays, that makes for powerful progress. Still, the main thing that thrills here is the AI development, and that is sustainable across all communication protocols. In simple words, chat bots are just an addition to the arsenal of communication protocols between humans and machines, but we are far from seeing the end of this evolution. From the Facebook and Google point of view, these are new interfaces to their AI capabilities, which make them stronger every day thanks to increased usage.

Food for Thought

If one conscious AI meets another conscious AI in cyberspace will they communicate via text or voice or something else?
