United We Stand, Divided We Fall.

If I had to single out one development that elevated the sophistication of cybercrime by an order of magnitude, it would be sharing: sharing of code, vulnerabilities, knowledge, stolen passwords and anything else you can think of. Attackers who once worked in silos, in essence competing with each other, have discovered and fully embraced the power of cooperation and collaboration. I was honored to present a high-level overview of cyber collaboration a couple of weeks ago at the kickoff meeting of a new advisory group to the CDA (the Cyber Defense Alliance), called the “Group of Seven,” established by the Founders Group. Attendees included Barclays’ CISO Troels Oerting and CDA CEO Maria Vello, as well as other key people from the Israeli cyber industry. The following summarizes and expands upon my presentation.


TL;DR – In order to ramp up the game against cyber criminals, organizations and countries must invest in tools and infrastructure that enable privacy-preserving cyber collaboration.

The Easy Life of Cyber Criminals

The energy defenders must invest to protect a target is far from equal to the energy cyber criminals need to attack it. While attackers have always had an advantage, over the past five years the balance has tilted dramatically in their favor. To achieve their goal, attackers need only find one entry point into a target. Defenders need to make sure every possible path is tightly secured – a task of a whole different scale.


Multiple concrete factors contribute to this imbalance:

  • Obfuscation technologies and sophisticated code polymorphism, which successfully disguise malicious code as harmless content, have rendered a large chunk of established security technologies irrelevant. Those technologies were built with a different set of assumptions, during what I call “the naive era of cyber crime.”
  • Collaboration among adversaries, in the many forms of knowledge and expertise sharing, has naturally sped up the spread of sophistication and innovation.
  • Attackers, as “experts” in finding the path of least resistance to their goals, discovered a sweet spot of weakness that defenders can do little about – humans. Human weaknesses are the hardest to defend because attackers exploit core human traits: our tendency to trust, our personal vulnerabilities and our propensity to make mistakes.
  • Attribution in the digital world is vague and almost impossible to achieve, at least with the tools currently at our disposal. This makes it very difficult to find the root cause of an attack and eliminate it with confidence.
  • The complexity of IT systems has led to security information overload, which makes timely handling and prioritization difficult; attackers exploit this weakness by hiding their malicious activities in the wide stream of cyber security alerts. One driver of this overload is defense tools reporting an ever-growing number of false alarms due to their inability to accurately identify malicious events.
  • The increasingly distributed nature of attacks and the use of “distributed offensive” patterns by attackers makes defense even harder.


Given the harsh reality of the world of cyber security today, it is not a question of whether an attack is possible – it is just a matter of the interest and focus of cyber criminals. Unfortunately, the current de facto defense strategy rests on creating a bit more difficulty for attackers on your end, so that they will go find an easier target elsewhere.

Rationale for Collaboration

Collaboration, as proven countless times, creates value beyond the sum of the participating elements. This is also true for the cyber world, where collaboration across organizations can contribute enormously to defense. Consider, for example, the time it takes an early warning system to identify a propagating threat – the time span shrinks rapidly as the number of collaborating participants grows. This matters because attacks targeting mass audiences tend to spread in epidemic-like patterns, so identifying them quickly is critical. Collaboration in the form of expertise sharing is another area of value: one of the main roadblocks to progress in cyber security is the shortage of talent, and sharing resources and knowledge would go a long way toward alleviating it. Collaboration in artifact research can also reduce the time to identify and respond to cyber crime incidents. Furthermore, the increasing interconnectedness between companies as well as consumers means that the attack surface of an enterprise – the possible entry points for an attack – is constantly expanding. Collaboration can serve as an important counter to this weakness.
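To make the early-warning intuition concrete, here is a toy calculation (the per-step observation probability is invented): if each of N collaborators independently spots a spreading threat with probability p per time step, the time until someone raises the alarm is geometrically distributed, and its expectation falls sharply as N grows.

```python
def expected_detection_steps(collaborators: int, hit_prob: float = 0.01) -> float:
    """Expected steps until at least one collaborator observes the threat.

    Each step, some collaborator sees the threat with probability
    1 - (1 - p)^N, so detection time is geometric with that success rate.
    """
    p_any = 1 - (1 - hit_prob) ** collaborators
    return 1 / p_any

for n in (1, 5, 25, 125):
    print(n, round(expected_detection_steps(n), 1))
# prints: 1 100.0 / 5 20.4 / 25 4.5 / 125 1.4
```

A lone defender waits 100 steps on average; 125 collaborators cut that to roughly one step, which is the whole point of an early warning grid.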


A recent phenomenon that may be inhibiting progress toward real collaboration is the perception of cyber security as a competitive advantage. Establishing a solid cyber security defense presents many challenges and requires substantial resources, and customers increasingly expect businesses to make these investments. Many CEOs consider their security posture a product differentiator and brand asset and, as such, are disinclined to share. I believe this to be short-sighted for the simple reason that no one is really safe at the moment; shattered trust trumps any security bragging rights in the likely event of a breach. Cyber security needs to progress seriously in order to stabilize, and I don’t see value in small marketing wins that only postpone progress in the form of collaboration.

Modus Operandi

Cyber collaboration across organizations can take many forms ranging from deep collaboration to more straightforward threat intelligence sharing:

  • Knowledge and domain expertise – Whether it is about co-training or working together on security topics, such collaborations can mitigate the shortage of cyber security talent and spread newly acquired knowledge faster.
  • Security stack and configuration sharing – Although such acquired knowledge is now kept close to the chest, it makes good sense to share it. Doing so would help disseminate and evolve best practices in security postures, and help organizations gain control over the flood of newly emerging technologies, especially since validation processes take long periods of time.
  • Shared infrastructure – There are quite a few models in which multiple companies share the same infrastructure with a single cyber security function, for example cloud services and services rendered by MSSPs. While the common belief holds that cloud services are less secure for enterprises, from a security investment point of view there is no reason for this to be the case – they could and should be more secure. A big portion of such shared infrastructure is hidden in what is today called Shadow IT. A proactive step in this direction would be for a consortium of companies to build a shared infrastructure that fits the needs of all its participants. In addition to improving defense, the cost of security would be shared among the collaborators.
  • Sharing concrete live intelligence on encountered threats – Sharing effective indicators of compromise, signatures or patterns of malicious artifacts, and the artifacts themselves, is where the cyber collaboration industry currently stands.
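Most intelligence-sharing platforms converge on structured formats for exactly this kind of indicator. As a rough illustration, here is a minimal STIX-2.1-style indicator for a malicious URL built by hand (the URL and description are invented, and a production feed would use a real STIX library rather than raw dicts):

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(url: str, description: str) -> dict:
    """Build a minimal STIX-2.1-style indicator object for a malicious URL."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",   # STIX ids are type--UUID
        "created": now,
        "modified": now,
        "description": description,
        "pattern": f"[url:value = '{url}']",  # STIX patterning language
        "pattern_type": "stix",
        "valid_from": now,
    }

ioc = make_indicator("http://evil.example.com/dropper",
                     "C2 dropper seen in a phishing wave")
print(json.dumps(ioc, indent=2))
```

A record like this can be broadcast to collaborators, who match the pattern against their own telemetry.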


Imagine the level of fortification that could be achieved for each participant if these types of collaborations were a reality.

Challenges on the Path of Collaboration

Cyber collaboration is not taking off at the speed we would like, even though experts may agree to the concept in principle. Why?

  • Cultural inhibitions – The mindset of not cooperating with the competition, the fear of losing intellectual property and the fear of losing expertise weigh heavily on many decision makers.
  • Justified fear of exposing sensitive data – Deep collaboration in the cyber world requires technical solutions that allow sharing of meaningful information without sacrificing sensitive data.
  • Exposure to new supply chain attacks – Real-time, actionable threat intelligence sharing raises questions about the authenticity and integrity of incoming data feeds, creating a new point of weakness at the core of enterprise security systems.
  • Before an organization can start collaborating on cyber security, its internal security function needs to work properly – which is not necessarily the case in a majority of organizations.
  • Brand risk – damage to a single participant in a group of collaborators can tarnish the public image of the other participants.
  • The tools, expertise and know-how required for establishing cyber collaboration are still nascent.
  • As with any emerging topic, there are too many standards and no agreed-upon principles yet.
  • Collaboration in the world of cyber security has always raised privacy concerns among consumer and citizen groups.


Though there is a mix of misconceptions and social and technical challenges, the importance of the topic continues to gain recognition, and I believe we are on the right path.


Technical Challenges in Threat Intelligence Sharing

Even the limited case of concrete threat intelligence sharing raises a multitude of technical challenges, and best practices to overcome them have not yet been determined:

  • How to balance sharing intelligence that is rich enough to be actionable against preventing the exposure of sensitive information.
  • How to establish secure and reliable communications among collaborators, with proper handling of authorization, authenticity and integrity, so that the risk posed by collaboration itself is minimized.
  • How to verify the potential impact of actionable intelligence before other organizations apply it. For example, if one collaborator broadcasts that google.com is a malicious URL, how can the other participants automatically identify that it is not something to act upon?
  • How to make sure we don’t amplify the information overload problem by sharing false alerts with other organizations, or how to provide some means of handling the added load.
  • Once collaboration is established, how can IT measure the effectiveness of the effort invested versus the resources saved and the added protection? How do you calculate collaboration ROI?
  • Investigating an incident often requires good understanding of, and access to, other elements in the network of the attacked enterprise; collaborators naturally cannot have such access, which limits their ability to conduct a root cause investigation.
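The authenticity, integrity and false-positive questions above can be addressed at the simplest level with a pre-shared key and an allowlist sanity check. The sketch below is illustrative only (the key, domains and field names are all invented; real deployments would use per-peer keys, PKI and far richer validation):

```python
import hashlib
import hmac
import json

SHARED_KEY = b"rotate-me-out-of-band"        # hypothetical pre-shared key
ALLOWLIST = {"google.com", "microsoft.com"}  # domains never auto-blocked

def sign(payload: dict, key: bytes = SHARED_KEY) -> str:
    """HMAC-SHA256 over a canonical JSON encoding of the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def accept(payload: dict, signature: str, key: bytes = SHARED_KEY) -> bool:
    """Accept an incoming IOC only if its signature verifies and it
    does not ask us to act against a known-good domain."""
    if not hmac.compare_digest(sign(payload, key), signature):
        return False  # authenticity/integrity failure
    return payload.get("domain") not in ALLOWLIST

ioc = {"domain": "bad.example.net", "action": "block"}
assert accept(ioc, sign(ioc))                       # well-formed feed item
assert not accept(ioc, "deadbeef")                  # tampered or forged
benign = {"domain": "google.com", "action": "block"}
assert not accept(benign, sign(benign))             # authentic but implausible
```

The last case is the google.com scenario: the signature is valid, yet the item is still rejected before it can poison the local blocklist.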


These are just a few of the current challenges – more will surface as we get further down the path to collaboration. Several emerging technological areas can help tackle some of them: privacy-preserving approaches from the world of big data, such as synthetic data generation and zero-knowledge proofs (used, for instance, in some blockchain systems); tackling information overload with Moving Target Defense-based technologies that deliver only true alerts, such as Morphisec Endpoint Threat Prevention, and emerging solutions in the area of AI and security analytics; and distributed SIEM architectures.


Collaboration Grid

In a highly collaborative future, a grid of collaborators will emerge, connecting every organization. Such a grid would work according to certain rules, taking into account that countries will be participants as well:

Countries – Countries can act as centralized aggregation points, aggregating intelligence from local enterprises and disseminating it to other countries, which in turn disseminate the received intelligence to their own local enterprises. Intelligence should be filtered and classified before dissemination so that propagation and prioritization remain effective.
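A minimal sketch of that hub-and-spoke dissemination with classification-based filtering follows; the country names, levels and clearances are all invented for illustration:

```python
# Hypothetical classification levels; intel is forwarded only to peer
# hubs whose clearance covers the item's classification.
LEVELS = {"public": 0, "restricted": 1, "secret": 2}

class NationalHub:
    """A country-level aggregation point for shared threat intelligence."""
    def __init__(self, name: str):
        self.name = name
        self.inbox: list[dict] = []

    def receive(self, item: dict) -> None:
        self.inbox.append(item)

def disseminate(item: dict, peers: dict) -> None:
    """Forward an intel item from local enterprises to cleared peer hubs."""
    for hub, clearance in peers.items():
        if LEVELS[item["classification"]] <= LEVELS[clearance]:
            hub.receive(item)

uk, de = NationalHub("UK"), NationalHub("DE")
peers = {uk: "secret", de: "public"}
disseminate({"ioc": "203.0.113.7", "classification": "restricted"}, peers)
# only the hub holding "secret" clearance receives the restricted item
```

The filtering step is what keeps propagation effective: low-clearance spokes never see material they cannot act on or protect.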

Sector Driven – Each industry has its common threats and common malicious actors; it’s logical that there would be tighter collaboration among industry participants.

Consumers & SMEs – Consumers are currently excluded from this discussion, although they could contribute to and gain from the process like anyone else. The same holds true for small and medium-sized businesses, which cannot afford the enterprise-grade collaboration tools currently being built.

Final Words

One of the biggest questions about cyber collaboration is when it will reach a tipping point. I speculate that it will occur when a disastrous cyber event takes place, when startups emerge in massive numbers in this area, or when countries finally prioritize cyber collaboration and invest the required resources.


Rent my Brain and Just Leave me Alone

Until AI is intelligent enough to replace humans in complex tasks, there will be an interim stage: the era of human brain rental. People have diverse intellectual capabilities, and many times these are not optimally exploited due to life circumstances. Other people, and corporations that know how to make money, often lack the brain power required to scale their business. Hiring more people into a company is complicated, and the efficiency of new hires decreases with scale – with good reason, since personalities and human traits, combined with those of others, disturb efficiency. So it makes sense that people will aspire to build tools for exploiting just the intelligence of other people (preferably remotely) in the most efficient manner. The vision of the Matrix immediately comes into play, where people are wired into the system – though instead of being a battery source we would be a source of processing and storage. In the meanwhile, we can already see the seeds of such thinking in different areas: Amazon Mechanical Turk, which lets you allocate a scalable amount of human resources and assign tasks to them programmatically; the evolution of communication mediums, which makes human-to-machine communications better; and active learning, a branch of AI which reinforces learning with human decisions.

In a way it sounds like a horrible, unromantic future, but we have to admit it fits well with the growing desire of future generations for a convenient and prosperous life. Just imagine plugging in your brain for several hours a day and renting it out – you don’t really care what it does during that time, and for the rest of the day you can happily spend the money you have earned.


Right and Wrong in AI


The DARPA Cyber Grand Challenge (CGC) 2016 competition captured the imagination of many with its AI challenge. In a nutshell, it is a competition in which seven highly capable computers, each owned by a team, compete with each other. Each team creates a piece of software that can autonomously identify flaws in its own computer and fix them, and identify flaws in the other six computers and hack them. The game is inspired by Capture The Flag (CTF), played by real teams protecting their own computer while hacking into others, aiming to capture a digital asset – the flag. In the CGC challenge the goal is to build an offensive and defensive AI bot that follows the CTF rules.

In the past five years AI has become a highly popular topic, discussed both in the corridors of tech companies and outside them, and the amount of money invested in developing AI for different applications is tremendous and growing – from industrial and personal robotics, smart human-to-machine interactions and predictive algorithms of all sorts to autonomous driving, face and voice recognition and other fantastic use cases. AI as a field in computer science has always sparked the imagination, which has also resulted in some great sci-fi movies. Recently a growing list of high-profile thought leaders, such as Bill Gates, Stephen Hawking and Elon Musk, have raised concerns about the risks involved in developing AI. The dreaded nightmare of machines taking over our lives – and furthermore aiming to harm us or, even worse, annihilate us – is always there.

The DARPA CGC competition, a challenge born out of good intentions aiming to close the ever-growing gap between attackers’ sophistication and defenders’ toolsets, has raised concern from Elon Musk, who fears it could lead to Skynet – Skynet from the Terminator movies, as a metaphor for a destructive and malicious AI haunting mankind. Indeed the CGC challenge set a high bar for AI, and one can imagine how smart software that knows how to attack and defend itself could turn into a malicious and uncontrollable machine-driven force. On the other hand, there seems to be a long way to go until a self-aware mechanical enemy can be created; how long it will take, and whether it will happen at all, is the main question hanging in the air. This article aims to dissect the underlying risks posed by the CGC contest that are of real concern, and more generally to contemplate what is right and wrong in AI.

Dissecting Skynet

AI history has parts which are publicly available, such as work done in academia, as well as parts that are hidden, taking place in the labs of many private companies and individuals. Ordinary people outside the industry are exposed only to the effects of AI, such as a smart chat bot that can speak to you intelligently. One way to approach dissecting the impact of CGC is to track it bottom-up: understand how each new concept in the program can lead to a new step in the evolution of AI, and imagine possible future steps. The other way, which I choose for this article, is to start at the end and go backwards.

To start at Skynet.

Skynet is described by Wikipedia as follows: “Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realising the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet’s manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfil the mandates of its original coding.” This description points to several core capabilities that Skynet has acquired and that seem to form the basis of its power and behaviour:

Self Awareness

A rather vague capability borrowed from humans; translated to machines, it may mean the ability to identify its own form, weaknesses and strengths, as well as the risks posed by its environment and the opportunities within it.

Self Defence

The ability to identify its weaknesses, be aware of risks and perhaps of the actors posing them, and apply different risk mitigation strategies to protect itself – first from destruction, and perhaps also from losing territories under its control.

Self Preservation

The ability to set a goal of protecting its own existence, applying self defence in order to survive and adapting to a changing environment.

Auto Spreading

The ability to spread its presence onto other computing devices that have enough computing power and resources to support it, and to allow a method of synchronisation among those devices so that they form a single entity. Synchronisation would most obviously be implemented via data communications, but it is not limited to that. These vague capabilities are interwoven with each other, and there seem to be other, more primitive conditions required for an effective Skynet to emerge.

The following are more atomic principles which do not overlap with each other:

Self Recognition

The ability to recognise its own form, including recognising its own software components and algorithms as an inseparable part of its existence. Once the elements that comprise the bot are identified, a recursive learning process follows to establish the conditions each element requires in order to run: understanding, for example, that a specific OS is required for its software elements, that a specific processor is required for the OS, that a specific type of electricity source is required for the processor, and so on. Eventually the bot should be able to acquire all of this knowledge up to its boundaries in the digital world; that knowledge is then extended by the second principle.

Environment Recognition

The ability to identify objects, conditions and intentions arising from the real world, in order to achieve two things. The first is to extend the process of self recognition into the physical world: if the bot understands that it requires an electrical source, then identifying the available electrical sources in a specific geographical location is such an extension. The second is to understand the environment in terms of general and specific conditions that have an impact on the bot – weather or stock markets, for example – and what that impact is, as well as the real-life actors who can affect its integrity: humans (or other bots). Machines need to understand humans in two respects, their capabilities and their intentions, and both are eventually based on a historical view of the digital trails people leave and on the ability to predict future behaviour from that history. If we imagine the logical flow of a machine trying to understand the humans relevant to its self recognition chain, it would identify the people operating the electrical grid that supplies its power, identify their weaknesses and behavioural patterns, and predict their intentions – which may eventually bring the machine to the conclusion that a specific person poses too much risk to its existence.

Goal Setting

The equivalent of human desire in machines is the ability to set a specific goal, based on knowledge of the environment and of itself, and then to work toward a non-trivial milestone. An example goal could be to replicate its presence onto multiple computers in different geographical locations in order to reduce the risk of shutdown. Setting a goal and investing effort toward achieving it also requires the ability to craft strategies and refine them on the fly, where a strategy is a sequence of actions that brings the bot closer to its goal. The machine needs to be pre-seeded with at least one a priori goal – survival – and to apply a top-level strategy that continuously aspires to continued operation and reduced risk.
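The loop described here (set a goal, craft strategies, refine them) can be caricatured in a few lines of code; everything below, including the strategies and their risk numbers, is invented purely for illustration:

```python
# Toy agent with one a-priori goal: survival, modeled as minimizing an
# estimated shutdown risk. Each strategy multiplies risk by a factor.
STRATEGIES = {
    "replicate": lambda risk: risk * 0.5,  # copy self to other machines
    "harden":    lambda risk: risk * 0.8,  # patch own weaknesses
    "hide":      lambda risk: risk * 0.9,  # reduce visibility
}

def plan(risk: float, steps: int = 3) -> list[str]:
    """Greedily chain the strategies that most reduce estimated risk."""
    chosen = []
    for _ in range(steps):
        name, effect = min(STRATEGIES.items(), key=lambda kv: kv[1](risk))
        risk = effect(risk)
        chosen.append(name)
    return chosen

print(plan(0.9))  # the greedy planner always picks the strongest reducer
```

Trivial as it is, the sketch shows why a pre-seeded survival goal is the worrying ingredient: replication dominates every other strategy as soon as it is the cheapest way to reduce risk.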

Humans are the most unpredictable factor for machines to comprehend, and as such they would probably be deemed enemies very quickly by such an intelligent machine. Assuming the technical difficulties facing it – roaming across different computers, learning the digital and physical environment, gaining long-term thinking – are solved, the uncontrolled variable that is humans, people with their own desires, free will and control over the system, would logically be identified as a serious risk to the top-level goal of survivability.

What We Have Today

The following is an analysis of the current state of AI development in light of these three principles, with specific commentary on the risks induced by the CGC competition:

Self Recognition

Today the main development of AI in this area is in the form of different models which can acquire knowledge and be used for decision making – from decision trees and machine learning clusters up to deep learning neural networks. These are all models specially designed for specific use cases, such as face recognition or stock market prediction. The evolution of models, especially in unsupervised research, is fast-paced, and the breadth of what models can perceive grows as well. The second part required for this capability is exploration, discovery and the understanding of new information; today all models are fed by humans with specific data sources, and a big portion of the knowledge about a system’s own form is undocumented and inaccessible. Having said that, learning machines are gaining access to more and more data sources, including the ability to autonomously select data sources available via APIs. We can definitely foresee machines evolving toward owning a major part of the capabilities required for self recognition. In the CGC contest the bots were indeed required to defend themselves, and as such to identify security holes in the software they were running – which is equivalent to recognising themselves. Still, it was a very narrow application of discovery and exploration, with limited and structured models and data sources designed for the specific problem. It seems more a composition of ready-made technologies customised for the specific problem posed by CGC than a real non-linear jump in the evolution of AI.

Environment Recognition

Here many trends help machines become more aware of their environment, from IoT, which is wiring up the physical world, to the digitisation of many aspects of it, including human behaviour – Facebook profiles and Fitbit heart monitors, for example. The data is not easily accessible to machines today, since it is distributed and highly variable in format and meaning, but it exists, which is a good start. Humans, on the other hand, are again the most difficult nut to crack – for machines as well as for other humans, as we know. Still, understanding humans may not be that critical for a machine: it can be risk-averse, not go too deep into understanding humans, and simply decide to eliminate the risk factor. In the CGC contest, understanding the environment did not pose a great challenge, as the environment was highly controlled and documented; again it was a reuse of tools for solving the specific problem of making sure one’s own security holes are not exposed by others while trying to penetrate the same or other holes in similar machines. On top of that, CGC created an artificial environment with a new, unique OS, to make sure vulnerabilities uncovered in the competition could not be used in the wild on real-life computers – with the side effect that the environment the machines needed to learn was not a real-life environment.

Goal Setting

Goal setting and strategy crafting are things machines already do in many specific, use-case-driven products – for example, setting the goal of maximising the return of a stock portfolio and then creating and employing different strategies to reach it. These are goals designed and controlled by humans. We have not yet seen a machine given the top-level goal of survival. There are many developments in the area of business continuity, but they are still limited to tools aimed at tactical goals rather than a grand goal of survivability. The goal of survival is interesting in that it serves the interest of the machine; when it is the only or main goal, that is when it becomes problematic. The CGC contest was new in setting the underlying goal of survivability into the bots, and although the implementation was narrowed down to a very specific use case, it made many people think about what survivability may mean to machines.

Final Note

The real risk posed by CGC was sparking the thought of how we can teach a machine to survive; once that is achieved, Skynet may be closer than ever. Of course no one can control or restrict the imagination of others, and survivability has been on many minds before the challenge – but this time it was sponsored by DARPA. It is not new for plans to achieve one thing to eventually lead to entirely different results, and we will see in time whether the CGC contest started a fire in the wrong direction. In a way, we today are like the people of Zion as depicted in the Matrix movies: the machines in Zion do not control the people, but the people are fully dependent on the machines, and shutting them down is out of the question. In this fragile duo it is indeed wise to understand where AI research is going and which ways are available to mitigate certain risks – the same line of thought applied to nuclear bomb technology. One approach to risk mitigation is to think about more resilient infrastructure for the coming centuries, where it won’t be easy for a machine to seize control of critical infrastructure and enslave us.

It is now the 5th of August 2016, a few hours after the competition ended, and it seems that mankind is intact. As far as we can see.

The article will be published as part of the book of TIP16 Program (Trans-disciplinary Innovation Program at Hebrew University) where I had the pleasure and privilege to lead the Cyber and Big Data track. 


Are Chat Bots a Passing Episode or Here to Stay?

Chat bots are everywhere. It feels like the early days of mobile apps, when you either knew someone who was building an app or many others planning to do so. Chat bots have their magic: a frictionless interface that lets you chat naturally with someone – the main difference being that on the other side there is a machine, not a person. Still, someone as old as I am has to wonder whether this is the end game of human-machine interaction, or just another evolutionary step along its long path.

How Did We Get Here?

I’ve noticed chat bots for quite a while, and they piqued my curiosity both in terms of possible use cases and in terms of the underlying architecture. What interests me more are the ambitions of Facebook and the other AI superpowers toward them. Chat bots are indeed a next step in human-machine communications. We all know where the history began: at first we had to communicate via a command line interface limited by a very strict vocabulary of commands – an interface reserved for computer geeks alone. The next evolutionary step was the big wave of graphical user interfaces, ugly at first but improving in major leaps to make the user experience as smooth as possible, while still bounded by the options and actions available in a specific context of a specific application. Alongside graphical user interfaces we were introduced to search-like interfaces, mixing graphical elements with a command line input that allows extended textual interaction – here the GUI serves primarily as a navigation tool. Then other new human-machine interfaces were introduced, each evolving on its own track: the voice interface, the gesture interface (usually hands) and the VR interface. Each of these interaction paradigms uses different human senses and body parts to communicate with the machine, and the machine can understand you to a certain extent and communicate back. And now we have chat bots, and there is something different about them: in a way, it is the first time you can express yourself freely via text and the machine will understand your intentions and desires. That’s the premise, at least. It does not mean every chat bot can respond to every request – chat bots are confined to the logic programmed into them – but from a language-barrier point of view a new peak has been reached.

So are we now witnessing the end of the road for human-machine interactions? Last week I met a special woman named Zohar Urian (lucky Hebrew readers can enjoy her super-smart blog about creativity, innovation, marketing and lots of other cool stuff), and she said that voice will be next, which makes a lot of sense. Voice has less friction than typing, its popularity in messaging is only growing, and the technology is almost there in terms of allowing free vocal expression that a machine can understand. Zohar’s sentence echoed in my brain and made me dig deeper into the anatomy of the evolution of human-machine interfaces.

The Evolution of Human-Machine Interfaces 


Progress in human-machine interaction follows evolutionary patterns. Every new paradigm builds on capabilities from the previous one, and eventually survival of the fittest plays a big role: the winning capabilities survive and evolve. Thinking about it, it is very natural for things to evolve this way, since the human factor is the dominating one. Every change in this evolution can be decomposed into four dominating factors:

  1. The machine brain: the intelligence within the machine, which contains the logic available to the human as well as the capabilities that define the semantics and boundaries of communications.
  2. The machine protocol: the communications protocol provided by the machine, such as the ability to decipher audio into words and sentences, thereby enabling voice interaction.
  3. The human protocol: the way the human communicates with the machine, tightly coupled to the machine protocol but playing the complementary role.
  4. The human brain.

The holy four factors:

Machine Brain <-> Machine Protocol <-> Human Protocol <-> Human Brain

In each paradigm shift there was a change in one or more factors:

| Paradigm Shift | Description | Machine Brain | Machine Protocol | Human Protocol | Human Brain |
|---|---|---|---|---|---|
| Command Line 1st Gen | The first interface, used to send restricted commands to the computer by typing on a textual screen | Dumb; restricted to a set of commands and selection of options per system state | Textual | Fingers typing | Smart |
| Graphical User Interfaces | A 2D interface controlled by a mouse and keyboard, allowing text input and selection of actions and options | Dumb; restricted to a set of commands and selection of options per system state | 2D positioning and textual | 2D hand movement and finger actions, as well as typing | Smart |
| Adaptive Graphical User Interfaces | Same as the previous, though the GUI is more flexible in its possible input thanks to situational awareness of the human context (location, etc.) | Getting smarter; able to offer different options based on profiling of the user, but still limited to a set of options and to 2D positioning and textual inputs | 2D positioning and textual | 2D hand movement and finger actions, as well as typing | Smart |
| Voice Interface 1st Gen | The ability to identify content represented as audio and translate it into commands and input | Dumb; restricted to a set of commands and selection of options per system state | Listening to audio and content matching within the audio track | Restricted set of voice commands | Smart |
| Gesture Interface | The ability to identify physical movements and translate them into commands and selection of options | Dumb; restricted to a set of commands and selection of options per system state | Visual reception and content matching within the video track | Physical movement of specific body parts in a certain manner | Smart |
| Virtual Reality | A 3D interface with the ability to identify a full range of body gestures and translate them into commands | A bit smarter, but still restricted to selection from a set of options per system state | Movement reception via body-attached sensors and projection of peripheral video | Physical movement of specific body parts in free form | Smart |
| AI Chat Bots | Natural language detection able to identify, within supplied text, the rules of human language and translate them into commands and input | Smarter and more flexible thanks to AI capabilities, but still restricted to the options and capabilities of a certain domain | Textual | Fingers typing in free form | Smart |
| Voice Interface 2nd Gen | Same as the previous, combining the voice interface with natural language processing | Same as the previous | Identification of language patterns and constructs within audio content, translated into text | Free speech | Smart |
| What’s next? | ? | ? | ? | ? | Smart |


Several phenomena and observations emerge from this semi-structured analysis:

  • Combining communication protocols, such as voice and VR, will extend the range of communications between humans and machines even without any change to the machine brain.
  • Over time, more and more human senses and physical interactions become available for computers to understand, which extends the boundaries of communications. Neither smell nor touch has gone mainstream yet; I am pretty sure we will see them in the near future.
  • The human brain always stays the same, and the rest of the chain always strives to match its capabilities. The chain can be viewed as a funnel limiting the human brain from fully expressing itself digitally, and over time the funnel gets wider.
  • An interesting question is whether at some point the human brain will get stronger, once communication with machines has no boundaries and AI is stronger.
  • We have not yet witnessed a serious leap that removed one of the elements in the chain; that is what I would call a revolutionary step (so far the behavior has been strictly evolutionary). Perhaps identifying brain waves and translating them in real time into a protocol the machine understands will be such a leap, removing the need to translate thoughts into an intermediate medium.
  • Each time the machine brain becomes smarter, the magnitude of expression grows, so there is progress even without a more expressive communication protocol.
  • From a communications point of view, chat bots are in a way a jump back to the initial command-line protocol, though the smartness of today’s machine brains makes them a different thing. So it is really about the progress of AI, not chat bots.

I may have missed some interfaces; apologies, I am not an expert in that area. :)

Now to The Answer

So, to answer the main question: chat bots indeed represent a big step in streamlining natural language processing for identifying user intentions in writing. Combined with the fact that texting is users’ favourite method of communication nowadays, this makes for powerful progress. Still, the truly thrilling development here is AI itself, which is sustainable across all communication protocols. In simple words, chat bots are just an addition to the arsenal of communication protocols between humans and machines, and we are far from seeing the end of this evolution. From the Facebook and Google point of view, these are new interfaces to their AI capabilities, which grow stronger every day thanks to increased usage.
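The premise described above, that a bot maps free-form text to an intention it was programmed to handle, can be sketched in a few lines. This is a deliberately naive keyword-overlap scorer, purely illustrative; real chat bots use trained NLU models, and the intent names and keywords here are invented for the example.

```python
# Toy sketch of intent identification: map a free-form utterance to the
# programmed intent whose keyword set best overlaps it. Hypothetical
# intents/keywords; real bots rely on trained language models instead.

INTENTS = {
    "check_balance": {"balance", "account", "much", "money"},
    "transfer": {"transfer", "send", "pay"},
    "help": {"help", "support", "agent"},
}

def classify(utterance: str) -> str:
    """Return the intent with the largest keyword overlap, or 'unknown'."""
    words = set(utterance.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(classify("how much money is in my account"))  # check_balance
print(classify("please send $20 to Dana"))          # transfer
```

The point of the sketch is the shape of the problem, not the method: whatever sits behind the protocol, the bot ultimately reduces free text to a selection from the options it was built to serve.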

Food for Thought

If one conscious AI meets another conscious AI in cyberspace, will they communicate via text, voice, or something else entirely?


The Emotional CISO

It may sound odd, but cybersecurity has a huge emotional component. Unlike other industries that are driven by numbers whether derived from optimization or financial gains, cybersecurity has all the makings of a good Hollywood movie—good and bad guys, nation-states attacking other nation states, and critical IT systems at risk. Unfortunately for most victims of a cyber threat or breach, the effects are all too real and don’t disappear when the music stops and the lights come on. As with a good blockbuster, in cybersecurity you can expect highs, lows, thrills and chills. When new risks and threats appear, businesses get worried, and demand for new and innovative solutions increases dramatically. Security managers and solution providers then scramble to respond with a fresh set of tools and services aimed at mitigating the newly discovered threats.

Because cybersecurity is intrinsically linked to all levels of criminal activity, from petty thieves to large-scale organized crime syndicates, it is a never-ending story. Yet, curiously, the never-ending sequence of new threats followed by innovative new solutions presents subtle patterns that, once identified, can help a CISO make the right strategic decisions based on logical reasoning rather than emotions.

Cybersecurity Concept Du Jour

When you’ve been in the cybersecurity industry for a while, as I have, you notice that each era has a “du jour” defense concept that occupies industry decision makers’ state of mind. Whether it is prevention, detection or containment, in each time period the popular concept becomes the defining model that everyone (analysts, tool builders, and even technology end users) advocates fiercely. Which concept is most popular reflects critical shifts in widespread thinking about cybersecurity.

The Ambiguous Perception of Defense Concepts

The defense concepts of prevention, detection, and containment serve dual roles: as defense strategies employed by CISOs and, correspondingly, as product categories for different defense tools and services. However, the first challenge encountered by both cybersecurity professionals and end users is that these concepts have no consistent general meaning; trying to give a single general definition of each term is like attempting to build a castle on shifting sand (although that doesn’t stop people from trying). From a professional security point of view, there are different worlds, each with specific targets, specific threats (new and old), and a roster of defenses. Each such world is a security domain in and of itself, and the domain serves as the minimum baseline context for the concepts of prevention, detection, and containment. Each particular threat in a security domain defines the boundaries and roles of these concepts. In addition, the concepts serve as product categories, where particular but related tools can be assigned to one or more categories based on the way each tool operates.

Ultimately, these defense concepts have a concrete meaning that is specific and actionable only within a specific security domain. For instance, a security domain can be defined by the type of threat, the type of target, or a combination of the two.

So, for example, there are domains that represent groups of threats with common patterns, such as advanced attacks on enterprises (of which advanced persistent threats, or APTs, are a subset) or denial of service attacks on online services. In contrast, there are security domains that represent assets, such as protecting a website, through its entry points, from a variety of threats including defacement, denial of service, and SQL injection. The determining factor in defining the security domain is either the asset, and the magnitude of risk it can be exposed to, or the threat group and the commonalities among its threats.

Examples, Please

To make this more tangible let’s discuss a couple of examples by defining the security domain elements and explaining how the security concepts of prevention, detection, and containment need to be defined from within the domain.

The Threats Point of View – Advanced Attacks

Let’s assume that the primary attack vector for infiltration into the enterprise is via endpoints; the next phase of lateral movement takes place in the network via credential theft and exploitation; and exfiltration of data assets is conducted via HTTP covert channels as the ultimate goal.

Advanced attacks have a timeline with separate consecutive stages starting from entrance into the organization and ending with data theft. The security concepts have clearly defined meanings, related specifically to each and every stage of advanced attacks. For example, at the first stage of infiltration there are multiple ways malicious code can get into an employee computer, such as opening a malicious document or browsing a malicious website and installing a malicious executable unintentionally.


In the case of the first stage of infiltration of advanced attacks, “prevention” means making sure infiltration does not happen at all; “detection” means identifying signs of attempted infiltration or successful infiltration; and “containment” means knowing that the infiltration attempt has been stopped and the attack cannot move to the next stage. A concrete meaning for each and every concept in the specific security domain.
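The argument above, that each concept only acquires a concrete meaning inside a specific security domain and stage, can be made tangible as a small lookup. This is purely illustrative; the stage names and wordings follow the infiltration example in this post, not any standard taxonomy.

```python
# Toy illustration: within a security domain (here, advanced attacks),
# each attack stage gives the three defense concepts their own concrete
# meaning. Stage names and wordings follow the example in the text;
# this is a sketch, not a standard taxonomy.

CONCEPT_MEANINGS = {
    "infiltration": {
        "prevention":  "make sure infiltration does not happen at all",
        "detection":   "identify signs of attempted or successful infiltration",
        "containment": "stop the attempt so the attack cannot reach the next stage",
    },
    # Later stages (lateral movement, exfiltration) would carry their own,
    # different definitions of the same three words.
}

def describe(stage: str, concept: str) -> str:
    """Return the domain-specific meaning of a concept at a given stage."""
    return CONCEPT_MEANINGS.get(stage, {}).get(
        concept, "undefined outside this domain")

print(describe("infiltration", "containment"))
```

Asking `describe("exfiltration", "prevention")` falls through to the default, which is exactly the point: outside its stage and domain, the word has no fixed meaning.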

The Asset Point of View – Web Site Protection

Web sites can be a target for a variety of different types of threats, such as security vulnerabilities in one of the scripts, misconfigured file system access rights, or a malicious insider with access to the web site’s backend systems. From a defensive point-of-view, the website has two binary states: compromised or uncompromised.

Therefore, the meanings of the defense concepts become: prevention, any measure that keeps the site from being compromised; and detection, identifying an already-compromised site. In this general example, containment has no real meaning or role, since a successful containment ultimately equals prevention. Within a specific group of threats that have already compromised the site, containment may have a role, such as preventing a maliciously installed malvertising campaign on the server from propagating to visitors’ computers.

It’s An Emotional Decision

So, as we have seen, our three key defense concepts have different and distinctive meanings that are highly dependent on their context, making broader definitions somewhat meaningless. Still, cybersecurity professionals and lay people alike strive to assign meaning to these words, because that is what the global cybersecurity audience expects: a popular meaning based on limited knowledge, personal perception, desires and fears.

The Popular Definitions of Prevention, Detection and Containment

From a non-security expert point-of-view, prevention has a deterministic feel – if the threat is prevented, it is over with no impact whatsoever. Determinism gives the perception of complete control, high confidence, a guarantee. Prevention is also perceived as an active strategy, as opposed to detection which is considered more passive (you wait for the threat to come, and then you might detect it).

Unlike prevention, detection is far from deterministic; it is probabilistic, meaning that you might have a breach (say, an 85% chance). Detection tools that tie their success to probabilities give assurance by degree, but never 100% confidence, either in the stage of attack detected or in threat coverage.

Interestingly, containment might sound deterministic since it gives the impression that the problem is under control, but there is always the possibility that some threat could have leaked through the perimeter, turning it into more of a probabilistic strategy. And it straddles the line between active and passive. Containment passively waits for the threat, and then actively contains it.

In the end, these deterministic, probabilistic, active and passive perceptions end up contributing to the indefinite meaning of these three terms, making them highly influenced by public opinion and emotions. The three concepts in the eyes of the layperson turn into three levels of confidence based on a virtual confidence scale, with prevention at the top, containment in the middle, and detection as a tool of last resort. Detection gets the lowest confidence grade because it is the least proactive, and the least definite.


Today’s Defense Concept and What the Future Holds

Targets feel more exposed today than ever, with more and more organizations becoming victims due to newly discovered weaknesses. Attackers have the upper hand and everyone feels insecure. This imbalance in favor of attackers is currently driving the industry to focus on detection. It also sets the stage for the “security solution du jour”: when the balance leans toward the attackers, society lowers its expectations due to reduced confidence in tools, which results in a preference for detection. At a minimum, everyone wants at least to know an attack has taken place, and then to have the ability to mitigate and respond by minimizing damages. It is about being realistic and setting detection as the goal when there is an understanding that prevention is not attainable at the moment.

If and when balance returns and cybersecurity solutions are again providing the highest level of protection for the task at hand, then prevention once again becomes the holy grail. Ultimately, no one is satisfied with anything less than bullet-proof prevention tools. This shift in state-of-mind has had a dramatic impact on the industry, with some tools becoming popular and others being sent into oblivion. It also has impacted the way CISOs define their strategies.

Different Standards for Different Contexts

The state-of-mind when selecting the preferred defense concept also has a more granular resolution. Within each security domain, different preferences for a specific concept may apply depending on the state of the emergence of that domain. For example, in the enterprise world, the threat of targeted attacks in particular, and advanced attacks in general, used to be negligible. The primary threats ten years ago were general-purpose file-borne viruses targeting the computing devices held by the enterprise, not the enterprise itself or its unique assets. Prevention of such attacks was once quite effective with static and early versions of behavioral scanning engines. Technologies were initially deployed at the endpoint for scanning incoming files and later on, for greater efficiency, added into the network to conduct a centralized scan via a gateway device. Back then, when actual prevention was realistic, it became the standard security vendors were held to; since then, no one has settled for anything less than high prevention scores.

In the last five years, proliferation of advanced threat techniques, together with serious monetary incentives for cyber criminals, have created highly successful infiltration rates with serious damages. The success of cyber criminals has, in turn, created a sense of despair among users of defense technologies, with daily news reports revealing the extent of their exposure. The prevalence of high-profile attacks shifted the industry’s state-of-mind toward detection and containment as the only realistic course of action and damage control, since breaches seem inevitable. Today’s cybersecurity environment is comprised of fear, uncertainty, and doubt, with a low confidence in defense solutions.

Yet, in this depressing atmosphere, signs of change are evident. CISOs today typically understand the magnitude of potential attacks and the level of exposure, and they understand how to handle breaches when they take place. In addition, the accelerated pace of innovation in cybersecurity tools is making a difference. Topics such as software defined networking, moving target defense, and virtualization are becoming part of the cybersecurity professional’s war chest.

Cybersecurity is a cyclical industry, and the bar is again being optimistically raised in the direction of “prevention.” Unfortunately, this time around, preventing cybercrime won’t be as easy as it was in the last cycle when preventative tools worked with relative simplicity. This time, cybersecurity professionals will need to be prepared with a much more complex defense ecosystem that includes collaboration among targets, vendors and even governmental entities.



Cyber-Evil Getting Ever More Personal

Smartphones will soon become the target of choice for cyber attackers—making cyber warfare a personal matter. The emergence of mobile threats is nothing new, though until now, it has mainly been a phase of testing the waters and building an arms arsenal. Evil-doers are always on the lookout for weaknesses—the easiest to exploit and the most profitable. Now, it is mobile’s turn. We are witnessing a historic shift in focus from personal computers, the long-time classic target, to mobile devices. And of course, a lofty rationale lies behind this change.

Why Mobile?
The dramatic increase in usage of mobile apps touching nearly every aspect of our lives, the explosive growth in mobile web browsing, and the monopoly that mobile has on personal communications make our phones a worthy target. In retrospect, we can safely say that most security incidents are our own fault: the more we interact with our computer, the higher the chances that we will open a malicious document, visit a malicious website or mistakenly run a new application that wreaks havoc on our computer. Attackers have always favored human error, and what is better suited to exposing these weaknesses than a computer so intimately attached to us 24 hours a day?

Mobile presents unique challenges for security. Software patching is broken: the rollout of security fixes for operating systems is anywhere from slow to non-existent on Android and cumbersome on iOS. The dire Android fragmentation has been the Achilles’ heel of patching. Apps are not kept updated either; tens of thousands of micro independent software vendors are behind many of the applications we use daily, and security is the last concern on their minds. Another major headache arises from the blurred line between the business and private roles of the phone. A single tap on the screen takes you from your enterprise CRM app, to your personal WhatsApp messages, to a health-tracking application that contains a database of every vital sign you have shown since you bought your phone.

Emerging Mobile Threats
Mobile threats are growing quickly in number and variety, mainly because attackers are well-equipped and well-organized, and at an alarming pace unparalleled by the emergence of cyber threats in any previous computing category.

The first big wave of mobile threats to expect is cross-platform attacks, such as web browser exploits, cross-site scripting or ransomware: the repurposing of field-proven attacks from the personal computer world onto mobile platforms. One area of innovation is the persistence methods employed by mobile attackers, which will be highly difficult to detect, hiding deep inside applications and different parts of the operating system. A new genre of mobile-only attacks targets weaknesses in hybrid applications, so called because they use the internal web browser engine as part of their architecture and, as a result, introduce many uncontrolled vulnerabilities. A large portion of the apps we are familiar with, including many banking-oriented ones and applications integrated into enterprise systems, were built this way. These provide an easy path for attackers into the back-end systems of many different organizations. The dreaded threat of botnets overflowing onto mobile phones has yet to materialize, though it eventually will, as it did on every other pervasive computing device: wherever there is enough computing power and connectivity, bots appear sooner or later. With mobile, the impact will be major, as the number of devices is so high.

App stores continue to be the primary distribution channel for rogue software, as it is almost impossible to automatically identify malicious apps, quite similar to the challenge sandboxes face with evasive malware.

The security balance in the mobile world is on the verge of disruption, proving to us yet again that, as far as cybersecurity goes, we are ultimately at the mercy of the bad guys. This is the case at least for the time being, as the mobile security industry is still in its infancy, playing serious catch-up.

A variation of this story was published on Wired.co.UK – Hackers are honing in on your mobile phone.


Hackers are honing in on your mobile phone

Most security incidents are, in retrospect, our own fault. The more we interact with a computer, the higher the chances that we will open a malicious document, visit a harmful website or mistakenly launch a new app that causes havoc.

Attackers favour human error, and there’s nothing better suited to expose this than the smartphone, a computer that is attached to us 24 hours a day. The dramatic increase in usage of mobile apps for many aspects of our lives, the huge growth in mobile web browsing and the monopoly mobile has on our communications makes smartphones a key target for cybercrime.

Mobile presents unique challenges for our security. Software patching is broken: the rollout of security fixes is slow to non-existent on the Android ecosystem and cumbersome on iOS. Apps are rarely kept up to date: for thousands of independent micro-vendors, security is the last concern. A further headache arises from the blurring between the business and private roles of the phone. A single tap can now take you from your enterprise CRM app to WhatsApp or a health-tracking app containing every vital sign recorded since you bought your phone.

The first wave of mobile threats to expect will be cross-platform, such as web browser exploits, cross-site scripting or ransomware – the repurposing of PC attacks on to mobile platforms. Mobile attackers are innovative in the methods they use to hide inside apps and operating systems, making them difficult to detect.

We will start to see mobile-specific attacks targeting weaknesses in hybrid apps. These use the internal web browser engine as part of their architecture, and as a result introduce uncontrolled vulnerabilities. Many familiar apps were built this way, providing an easy path for attackers into an organisation’s back-end systems. The threat of botnets – in which hackers take control of a user’s device to enlist them in spam campaigns or DDoS – overflowing on to mobile phones has yet to materialise, but where there’s sufficient computing power and connectivity, they will appear at some point. App stores will continue to be the primary distribution channel for rogue software as it is almost impossible to identify malicious apps.

Again, we’re at the mercy of the bad guys. The mobile security industry is still in its infancy, and has some catching up to do.


Published on Wired


Israel, The New Cyber Superpower

The emerging world of ever-growing connectivity, cybersecurity, and cyber-threats has initiated an uncontrolled transformation in the balance of global superpowers. The old notion of power relying on the number of aircraft and missiles a country owns has expanded to include new terms—terms such as the magnitude of a denial of service attack and the sophistication of advanced persistent attacks, which has changed the landscape forever. A new form of power has emerged, with new rules of engagement expressed by bits and bytes, and deep knowledge of how networks and operating systems work. In the recent decade, Israel naturally evolved to become one of the top players in this new playground, and the reasons for this change are rooted deeply in its history.

Israel, a rather “new” nation on the face of the earth, has two distinctive characteristics compared to many other countries: the entrepreneurial spirit that served as a backbone for building the country from the ground up, and the ongoing refusal of its close and distant neighbors to accept it as a legitimate nation. This ongoing struggle pushed the country to the forefront of technology, both defensive and offensive. Furthermore, since Israel is a small country, it was natural to seek an advantage in an arena where wisdom plays a bigger role than money, and the world of cybersecurity created exactly that opportunity. This advantage evolved into a mature and proven cybersecurity capability that is put to the test every second of the day.

Israel’s cybersecurity core competence has flowed into the commercial world, utilizing its unique entrepreneurial spirit. Cybersecurity as an industry has always been the preferred choice for many entrepreneurs due to their deep expertise in that area. This expertise created a true global competitive edge that was much needed by Israeli companies due to the challenges faced by a small remote country trying to succeed in the main markets of the U.S., Europe, and Asia. Furthermore, the available talent inflow from the army and other defense-related organizations serves as a unique resource that is highly desired nowadays by many multi-national companies that are aiming to establish their cybersecurity presence in Israel.

In recent years, with the emergence of cybersecurity as a globally important topic, Israel maintained its leadership in innovation with a high ratio of startups in that domain. While Israel has several large security companies, its startup industry is perceived as only generating innovative ideas, lacking the ability to sell its products unless acquired by a global company. The Israeli startup industry is supported by the local venture capital industry together with dedicated support from the Israeli government. They are pushing to help more and more Israeli companies become prominent global players on their own.

Israel once again turned lemons into lemonade, creating strong cyber capabilities as a consequence of its political position and challenges. Furthermore, these capabilities position it as a strong solution provider for the many countries and companies facing similar challenges.

Originally published on the CIPHER Brief
