United We Stand, Divided We Fall.

If I had to single out an individual development that elevated the sophistication of cybercrime by an order of magnitude, it would be sharing: code sharing, vulnerability sharing, knowledge sharing, stolen-password sharing, and anything else one can think of. Attackers that once worked in silos, in essence competing with one another, have discovered and fully embraced the power of cooperation and collaboration. I was honored to present a high-level overview of cyber collaboration a couple of weeks ago at the kickoff meeting of a new advisory group to the CDA (the Cyber Defense Alliance), called the "Group of Seven," established by the Founders Group. Attendees included Barclays' CISO Troels Oerting and CDA CEO Maria Vello, as well as other key people from the Israeli cyber industry. The following summarizes and expands upon my presentation.

TL;DR – to ramp up the game against cybercriminals, organizations and countries must invest in tools and infrastructure that enable privacy-preserving cyber collaboration.

The Easy Life of Cyber Criminals

The amount of energy defenders must invest to protect a target, versus the energy cybercriminals need to attack it, is far from equal. While attackers have always had an advantage, over the past five years the balance has tilted dramatically in their favor. Attackers, to achieve their goal, need only find one entry point into a target. Defenders need to make sure every possible path is tightly secured, a task of a whole different scale.

Multiple concrete factors contribute to this imbalance:

  • Obfuscation technologies and sophisticated code polymorphism that successfully disguise malicious code as harmless content have rendered a large chunk of established security technologies irrelevant – technologies built with a different set of assumptions during what I call "the naive era of cybercrime."
  • Collaboration among adversaries, in the many forms of knowledge and expertise sharing, has naturally sped up the spread of sophistication and innovation.
  • Attackers, as "experts" in finding the path of least resistance to their goals, have discovered a sweet spot of weakness that defenders can do little about – humans. Human weaknesses are the hardest to defend against, as attackers exploit core human traits such as trust, personal vulnerabilities, and the tendency to make mistakes.
  • Attribution in the digital world is vague and almost impossible to achieve, at least with the tools currently at our disposal. This makes it nearly impossible to find the source of an attack and eliminate it with confidence.
  • The complexity of IT systems leads to security-information overload, which makes appropriate handling and prioritization difficult; attackers exploit this weakness by disguising their malicious activities in the vast stream of cybersecurity alerts. One of the drivers of this overload is defense tools reporting an ever-growing number of false alarms due to their inability to identify malicious events accurately.
  • The increasingly distributed nature of attacks and the use of "distributed offensive" patterns by attackers make defense even harder.

Given the harsh reality of the world of cybersecurity today, it is not a question of whether an attack is possible; it is just a matter of the interest and focus of cybercriminals. Unfortunately, the current de-facto defense strategy rests on making things a bit harder for attackers on your end, so that they will find an easier target elsewhere.

Rationale for Collaboration

Collaboration, as proven countless times, creates value that is beyond the sum of the participating elements, and this applies to the cyber world as well. Collaboration across organizations can contribute enormously to defense. For example, consider the time it takes to identify the propagation of a threat in an early-warning system: the period decreases sharply as the number of collaborating participants grows. Identifying attacks that target mass audiences quickly is highly important, as they tend to spread in epidemic-like patterns. Collaboration in the form of expertise sharing is another area of value: one of the main roadblocks to progress in cybersecurity is the shortage of talent, and the exchange of resources and knowledge would go a long way toward helping. Collaboration in artifact research can also reduce the time to identify and respond to cybercrime incidents. Furthermore, the increasing interconnectedness between companies as well as consumers means that the attack surface of an enterprise – the possible entry points for an attack – is continually expanding. Collaboration can serve as an essential counter to this weakness.
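The early-warning effect can be illustrated with a toy model: if each collaborator independently has some daily probability of spotting a new threat, the expected time until the first alarm is raised drops quickly as the group grows. A minimal sketch, where the 5% daily detection probability is an illustrative assumption rather than real data:

```python
# Toy model: time until a spreading threat is first spotted by at least
# one of n independent collaborators, each with daily detection
# probability p. Time-to-first-detection is geometric with success
# probability 1 - (1 - p)^n, so the expected number of days is its inverse.

def expected_detection_days(p: float, n: int) -> float:
    """Expected days until the first of n collaborators detects the threat."""
    return 1.0 / (1.0 - (1.0 - p) ** n)

if __name__ == "__main__":
    for n in (1, 10, 100):
        print(f"{n:>3} collaborators: {expected_detection_days(0.05, n):.2f} days")
```

Under these assumptions, a lone organization waits 20 days on average, while a group of ten cuts that to under three days – the point being the shape of the curve, not the specific numbers.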

A recent phenomenon that may be inhibiting progress towards real collaboration is the perception of cybersecurity as a competitive advantage. Establishing a robust cybersecurity defense presents many challenges and requires substantial resources, and customers increasingly expect businesses to make these investments. Many CEOs consider their security posture a product differentiator and brand asset and, as such, are disinclined to share. I believe this to be short-sighted for the simple reason that no one is safe at the moment; broken trust trumps any security bragging rights in the likely event of a breach. Cybersecurity needs serious progress to stabilize, and I don't see value in small marketing wins that only postpone that progress by holding back collaboration.

Modus Operandi

Cyber collaboration across organizations can take many forms ranging from deep collaboration to more straightforward threat intelligence sharing:

  • Knowledge and domain expertise – Whether it is co-training or working together on security topics, such partnerships can mitigate the shortage of cybersecurity talent and spread newly acquired knowledge faster.
  • Security stack and configuration sharing – It makes good sense to share hard-won knowledge that has until now been kept close to the chest. Such collaboration would help disseminate and evolve best practices in security postures, as well as help gain control over the flood of newly emerging technologies, especially as validation processes take extended periods.
  • Shared infrastructure – There are quite a few models in which multiple companies share the same infrastructure with a single cybersecurity function, for example cloud services and services rendered by MSSPs. While common belief currently holds that cloud services are less secure for enterprises, from a security-investment point of view there is no reason for this to be the case; they could and should be more secure. A large portion of such shared infrastructure is invisible and is referred to today as Shadow IT. A proactive step in this direction would be a consortium of companies building a shared infrastructure that fits the needs of all its participants. In addition to improving defense, the cost of security is shared by all the collaborators.
  • Sharing real, vital intelligence on encountered threats – Sharing useful indicators of compromise, signatures or patterns of malicious artifacts, and the artifacts themselves is the current state of the cyber-collaboration industry.
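In practice, the indicator-sharing end of this spectrum is usually expressed as structured records, loosely modeled on formats such as STIX. A minimal sketch of what a shared indicator record might look like – the field names here are illustrative, not a standard:

```python
import json
from datetime import datetime, timezone

def make_indicator(ioc_type: str, value: str, confidence: int, source: str) -> dict:
    """Build a minimal shareable indicator-of-compromise record."""
    assert ioc_type in {"domain", "ip", "url", "file-hash"}, "unsupported IOC type"
    assert 0 <= confidence <= 100, "confidence is a percentage"
    return {
        "type": ioc_type,
        "value": value,
        "confidence": confidence,  # sharer's confidence, 0-100
        "source": source,          # sharing organization
        "observed_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    record = make_indicator("domain", "evil.example.net", 80, "org-a")
    print(json.dumps(record, indent=2))
```

Even a record this simple already carries the two attributes the receiving side needs most: who is vouching for the indicator, and how confident they are.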

Imagine the level of fortification that could be achieved for each participant if these types of collaborations were a reality.

Challenges on the Path of Collaboration

Cyber collaboration is not taking off at the speed we would like, even though experts may agree with the concept in principle. Why?

  • Cultural inhibitions – The mindset of not cooperating with the competition, the fear of losing intellectual property, and the fear of losing expertise sit heavily with many decision-makers.
  • Fear of exposing sensitive data – Sharing is almost non-existent due to the justified fear of potential exposure of sensitive data. Deep collaboration in the cyber world requires technical solutions that allow the exchange of meaningful information without sacrificing sensitive data.
  • Exposure to new supply-chain attacks – Real-time, actionable threat-intelligence sharing raises questions about the authenticity and integrity of incoming data feeds, creating a new weak point at the core of enterprise security systems.
  • Internal readiness – Before an organization can start collaborating on cybersecurity, its internal security function needs to work correctly; this is not necessarily the case in the majority of organizations.
  • Brand risk – A breach of a single participant in a group of collaborators can damage the public image of the other participants.
  • The tools, expertise, and know-how required for establishing cyber collaboration are still nascent.
  • As with any emerging topic, there are too many standards and no agreed-upon principles yet.
  • Collaboration in the world of cybersecurity has always raised privacy concerns within consumer and citizen groups.

Though there is a mix of misconceptions and social and technical challenges, the importance of the topic continues to gain recognition, and I believe we are on the right path.

Technical Challenges in Threat Intelligence Sharing

Even the limited case of real threat-intelligence sharing raises a multitude of technical difficulties, and best practices to overcome them are not yet established. For example:

  • How to achieve a balance between sharing actionable intelligence, which must be extensive to be actionable, and preventing the exposure of sensitive information.
  • How to establish secure and reliable communications among collaborators, with proper handling of authorization, authenticity, and integrity, to reduce the risk posed by the collaboration itself.
  • How to validate the potential impact of actionable intelligence before it is applied by other organizations. For example, if one collaborator broadcasts that google.com is a malicious URL, how can the other participants automatically identify, in a sea of URLs, that this is not something to act upon?
  • How do we make sure we don't amplify the information-overload problem by relaying false alerts to other organizations, and what means do we have to handle the added load?
  • In an established collaboration, how can IT measure the effectiveness of the effort required versus the resources saved and the added level of protection? How do you calculate collaboration ROI?
  • Investigating an incident often requires a good understanding of, and access to, other elements in the network of the attacked enterprise; collaborators naturally cannot have such access, which limits their ability to conduct a root-cause investigation.
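Some of these questions lend themselves to simple first-line mitigations. As an illustration of the google.com problem and the sensitive-data problem above, a receiving organization might screen inbound indicators against an allowlist of known-benign domains, while a sending organization might drop internal hostnames before sharing. A sketch, where the allowlist and the internal-domain suffix are hypothetical:

```python
# First-line screening for shared threat intelligence (a sketch):
# - outbound: drop indicators that would expose internal infrastructure
# - inbound: refuse to act on indicators that match known-benign domains

KNOWN_BENIGN = {"google.com", "microsoft.com"}  # hypothetical allowlist
INTERNAL_SUFFIX = ".corp.example"               # hypothetical internal domain

def sanitize_outbound(indicators: list[str]) -> list[str]:
    """Remove indicators that reveal the sharer's internal hosts."""
    return [i for i in indicators if not i.endswith(INTERNAL_SUFFIX)]

def accept_inbound(indicator: str) -> bool:
    """Reject indicators for domains on the known-benign allowlist."""
    return indicator not in KNOWN_BENIGN
```

Real deployments would need far richer logic (reputation scores, revocation, per-source trust), but even this trivial filter pair prevents the two failure modes described above.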

These are just a few of the current challenges; more will surface as we get further down the path to collaboration. Several emerging technological areas can help tackle some of these problems: privacy-preserving approaches from the world of big data, such as synthetic data generation and zero-knowledge proofs (a family of techniques popularized by blockchain systems); tackling information overload with Moving Target Defense-based technologies that deliver only accurate alerts, such as Morphisec Endpoint Threat Prevention, alongside emerging solutions in AI and security analytics; and distributed SIEM architectures.
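One simple privacy-preserving pattern in this spirit is to share only hashes of indicators: a peer can check whether a value it observed locally matches a shared indicator without the raw value circulating in the open. A sketch – note the comment that hashing is weak protection for low-entropy values such as common domain names, which can be brute-forced:

```python
import hashlib

def fingerprint(indicator: str) -> str:
    """SHA-256 fingerprint of a normalized indicator, suitable for sharing.

    Caveat: for low-entropy values (common domains, small IP ranges) the
    raw value can be recovered by brute force; real designs use salted
    constructions or private set intersection.
    """
    return hashlib.sha256(indicator.strip().lower().encode("utf-8")).hexdigest()

class HashedFeed:
    """Collects hashed indicators and checks local observations against them."""

    def __init__(self) -> None:
        self.known: set[str] = set()

    def ingest(self, hashed: str) -> None:
        self.known.add(hashed)

    def seen_locally(self, observed_value: str) -> bool:
        return fingerprint(observed_value) in self.known
```

Usage: a breached organization publishes `fingerprint("evil.example.net")`; peers match their proxy logs against the feed without the sharer broadcasting the raw indicator.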

Collaboration Grid

In a highly collaborative future, a network of collaborators will emerge, connecting every organization. Such a system will work according to specific rules, taking into account that countries will be participants as well:

Countries – Countries can serve as centralized aggregation points, aggregating intelligence from local enterprises and disseminating it to other countries, which in turn will distribute the received data to their respective local businesses. There should be some filtering of the type of intelligence disseminated, and added classification, so that propagation and prioritization remain useful.

Sector-driven – Each industry has its common threats and well-known malicious actors; it's logical that there would be tighter collaboration among industry participants.

Consumers & SMEs – Consumers are the ones excluded from this discussion, although they could contribute to and gain from this process like anyone else. The same holds for small and medium-sized businesses, which cannot afford the enterprise-grade collaboration tools currently being built.

Final Words

One of the biggest questions about cyber collaboration is when it will reach a tipping point. I speculate that it will occur when a sufficiently unfortunate cyber event takes place, when startups emerge in massive numbers in this area, or when countries finally prioritize cyber collaboration and invest the required resources.

Rent my Brain and Just Leave me Alone

Until AI is intelligent enough to replace humans in complex tasks there will be an interim stage, and that is the era of human brain rental. People have diverse intellectual capabilities, and many times these are not optimally exploited due to life circumstances. Other people, and corporations that know how to make money, often lack the brainpower required to scale their business. Hiring more people into a company is complicated, and the efficiency of new hires declines with scale – with good reason, as personalities and human traits, combined with those of others, disturb efficiency. So it makes sense that people will aspire to build tools for exploiting just the intelligence of other people (preferably remotely) in the most efficient manner. The vision of the Matrix of course immediately comes to mind, where people are wired into the system – except that instead of being a battery source, we become a source of processing and storage. In the meantime, we can already see seeds of such thinking in different areas: Amazon Mechanical Turk, which allows you to allocate a scalable amount of human resources and assign tasks to them programmatically; the evolution of communication media that make human-to-machine communication better; and active learning, a branch of AI in which human judgments guide and reinforce the learning process.

In a way it sounds like a horrible, unromantic future, but we have to admit it fits well with the growing desire of future generations for a convenient and prosperous life. Just imagine plugging your brain in for several hours a day and renting it out – you don't care what it does during that time, and for the rest of the day you can happily spend the money you have earned.

Right and Wrong in AI


The DARPA Cyber Grand Challenge (CGC) 2016 competition has captured the imagination of many with its AI challenge. In a nutshell, it is a contest in which seven highly capable computers compete, each owned by a team. Each team creates a piece of software that can autonomously identify flaws in its own computer and fix them, and identify flaws in the other six computers and hack them. The contest is inspired by Capture The Flag (CTF) games, in which real teams protect their own computer and hack into others, aiming to capture a digital asset – the flag. In the CGC challenge, the goal is to build an offensive and defensive AI bot that follows the CTF rules.

In the past five years, AI has become a highly popular topic, discussed in the corridors of tech companies as well as outside of them, and the amount of money invested in developing AI for different applications is tremendous and growing. Use cases span industrial and personal robotics, smart human-machine interaction, predictive algorithms of all sorts, autonomous driving, face and voice recognition, and other extreme applications. AI as a field of computer science has always sparked the imagination, which has also resulted in some great sci-fi movies. Recently, a growing number of high-profile thought leaders such as Bill Gates, Stephen Hawking, and Elon Musk have raised concerns about the risks involved in developing AI. The dreaded nightmare of machines taking over our lives, aiming to harm us or, even worse, annihilate us, is always there.

The DARPA CGC competition, a challenge born of good intentions and aiming to close the ever-growing gap between attackers' sophistication and defenders' toolsets, has raised concerns from Elon Musk, who fears it could lead to Skynet – Skynet from the Terminator movies serving as a metaphor for a destructive and malicious AI haunting humanity. Indeed, the CGC challenge has set a high bar for AI, and one can imagine how smart software that knows how to attack and defend itself could turn into a malicious and uncontrollable machine-driven force. On the other hand, there seems to be a long way to go until a self-aware mechanical enemy emerges. How long it will take, and whether it will happen at all, is the central question hanging in the air. This article aims to dissect the underlying risks posed by the CGC contest, which are of real concern, and more generally to contemplate what is right and wrong in AI.

Dissecting Skynet

AI history has parts that are publicly available, such as work done in academia, as well as parts that are hidden and take place in the labs of many private companies and individuals. Ordinary people outside the industry are exposed only to the effects of AI, such as using a smart chatbot that can speak to you intelligently. One way to approach dissecting the impact of CGC is to track it bottom-up: understand how each new concept in the program can lead to a further step in the evolution of AI, and imagine possible future steps. The other way, which I choose for this article, is to start at the end and work backward.

To start at Skynet.

Wikipedia defines Skynet as follows: "Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realising the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet's manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfil the mandates of its original coding." This definition describes several core capabilities Skynet has acquired, which seem to form the basis of its power and behavior:

Self Awareness

A somewhat vague skill borrowed from humans; translated to machines, it may mean the ability to identify its own form, weaknesses, and strengths, as well as the risks and opportunities posed by its environment.

Self Defence

The capacity to identify its shortcomings, be aware of risks, categorize actors as agents of risk, and take different risk-mitigation measures to protect itself: first from destruction, and later from losing territories under its control.

Self Preservation

The ability to set the goal of protecting its own existence, applying self-defense to survive and adapting to a changing environment.

Auto Spreading

The capacity to spread its presence onto other computing devices that have enough computing power and resources to support it, and to allow a method of synchronization among those devices so that they form a single entity. Synchronization seems most naturally implemented via data communications, but it is not limited to that. These vague capabilities are interwoven with one another, and there seem to be other, more primitive conditions required for an active Skynet to emerge.

The following are more atomic principles that do not overlap with one another:


Self Recognition

The ability to recognize its own form, including recognizing its software components and algorithms as an integral part of its existence. Following the identification of the elements that comprise the bot comes a recursive process of learning the conditions required for each component to run properly – for example, understanding that a particular OS is needed for its software components to run, that a specific processor is needed for the OS to run, that a particular type of electricity source is required for the processor to work, and so on. Eventually, the bot should be able to acquire all this knowledge up to the boundaries of the digital world; the second principle extends this knowledge further.

Environment Recognition

The ability to identify objects, conditions, and intentions arising from reality, in order to achieve two things. The first is to broaden the process of self-recognition: for example, if the bot understands that it requires an electrical source, then identifying the available electrical sources in a particular geographical location extends its self-knowledge into the physical world. The second is to understand the environment in terms of general and specific conditions that affect the bot, and their implications – for example, weather or stock markets. It also includes understanding the real-life actors that can affect its integrity: humans (or other bots). Machines need to understand humans in two aspects, their capabilities and their intentions, and both are eventually based on a historical view of the digital trails people leave and on the ability to predict future behavior from that history. Imagine the logical flow of a machine seeking to understand the humans relevant to its self-recognition chain: such a machine would identify the people operating the electrical grid that supplies its power, identify their weaknesses and behavioral patterns, and predict their intentions – a process that may eventually bring the machine to conclude that a specific person poses too great a risk to its existence.

Goal Setting

The equivalent of human desire in machines is the ability to set a specific goal, based on knowledge of the environment and of itself, and then establish non-trivial milestones to be achieved. An example goal could be to have a replica of its presence on multiple computers in different geographical locations, to reduce the risk of shutdown. Setting a goal and investing effort toward achieving it also requires the ability to craft strategies and refine them on the fly, where a strategy here means a sequence of actions that will get the bot closer to its goal. The machine needs to be pre-seeded with at least one a priori intent – survival – and to apply a top-level strategy that continuously aspires to continued operation and reduced risk.

Humans are the most unpredictable factor for machines to comprehend, and as such they would probably be deemed enemies very quickly by such an intelligent machine. The technical difficulties standing before it are numerous: roaming across different computers, learning the digital and physical environment, and acquiring long-term thinking. Yet once these are solved, the one uncontrolled variable – humans, with their own desires, free will, and control over the system – would logically be identified as a severe risk to the top-level goal of survivability.

What We Have Today

The following is an analysis of the current state of AI development in light of these three principles, with specific commentary on the risks induced by the CGC competition:

Self Recognition

Today the leading developments in this area take the form of different models that can acquire knowledge and be used for decision making: from decision trees, through machine-learning clustering, up to deep neural networks. These are all models specially designed for specific use cases such as face recognition or stock-market prediction. The evolution of models, especially in unsupervised research, is fast-paced, and the breadth of what models can perceive grows as well. The second part required for this capability is exploration, discovery, and the understanding of new information; today, all models are fed by humans with specific data sources, and significant portions of the knowledge about a system's own form are undocumented and inaccessible. That said, learning machines are gaining access to more and more data sources, including the ability to autonomously select information sources available via APIs. We can definitely foresee machines evolving toward owning a significant part of the capabilities required for self-recognition. In the CGC contest, the bots indeed needed to defend themselves, and as such to identify security holes in the software they were running – which is equivalent to recognizing themselves. Still, it was a very narrow application of discovery and exploration, with limited, structured models and data sources designed for the particular problem. It seems more like a composition of ready-made technologies customized for the specific problem posed by CGC than a real nonlinear jump in the evolution of AI.

Environment Recognition

Here, many trends help machines become more aware of their surroundings: from IoT, which is wiring up the physical world, to the digitization of many aspects of human behavior, such as Facebook profiles and Fitbit heart monitors. This data is not yet easily accessible to machines, since it is distributed and highly variable in format and meaning, but it exists – a good start in this direction. Humans, on the other hand, are again the most difficult nut to crack, for machines as well as for other people. Still, understanding people deeply may not be that critical for machines: a risk-averse machine could skip understanding humans and simply decide to eliminate the risk factor. In the CGC contest, understanding the environment did not pose a great challenge, as the environment was highly controlled and documented; the bots reused tools built for the particular problem of ensuring their own security holes were not exposed, while trying to penetrate the same or other holes in similar machines. Moreover, CGC created an artificial environment with a new, unique OS, set up to ensure that vulnerabilities uncovered in the competition could not be used in the wild on real computers; a side effect was that the environment the machines had to learn was not a real-life environment.

Goal Setting

Goal setting and strategy crafting are things machines already do in many specific, use-case-driven products – for example, setting the goal of maximizing the return of a stock portfolio and then creating and employing different strategies to reach it; goals designed and controlled by humans. We have not yet seen a machine given the top-level goal of survival. There are many developments in the area of business continuity, but these are still tools aimed at tactical goals, not a grand goal of survivability. The goal of survival is fascinating in that it serves the interest of the machine itself; when it becomes the only or primary goal, it becomes problematic. The CGC contest was new in instilling an underlying goal of survivability into the bots, and although the implementation in the competition was narrowed down to a very particular use case, it still made many people think about what survivability may mean to machines.

Final Note

The real risk posed by CGC was in sparking the thought of how we can teach a machine to survive; once that is achieved, Skynet may be closer than ever. Of course, no one can control or restrict the imagination of others, and survivability was on the minds of many before the challenge – but this time it was sponsored by DARPA. It is not new that plans to achieve one thing eventually lead to completely different results, and we will see in time whether the CGC contest started a fire in the wrong direction. In a way, we today are like the people of Zion as depicted in the Matrix movies: the machines in Zion do not control the people, but the people are entirely dependent on them, and shutting them down is out of the question. In this fragile duo it is indeed wise to understand where AI research is going and which ways are available to mitigate certain risks, the same line of thought applied to nuclear weapons technology. One approach to risk mitigation is to think about more resilient infrastructure for the coming centuries, in which it won't be easy for a machine to seize control of critical infrastructure and enslave us.

Now it is the 5th of August 2016, a few hours after the competition ended, and it seems that humanity is intact, as far as we can see.

The article will be published as part of the book of the TIP16 Program (Trans-disciplinary Innovation Program at Hebrew University), where I had the pleasure and privilege of leading the Cyber and Big Data track.

Are Chatbots a Passing Episode or Here to Stay?

Chatbots are everywhere. It feels like the early days of mobile apps, when everyone either knew someone building an app or was planning to build one themselves. Chatbots have their magic: a frictionless interface that lets you chat naturally, the main difference being that on the other side there is a machine, not a person. Still, someone as old as I am has to wonder whether this is the end game of human-machine interaction, or just another evolutionary step along a long path.

How Did We Get Here?

I've noticed chatbots for quite a while, and they piqued my curiosity regarding both the possible use cases and the underlying architecture. What interests me even more is the ambitions of Facebook and the other AI superpowers toward them. Chatbots are indeed the next step in human-machine communication. We all know where the history began: initially we had to communicate via a command-line interface limited to a very strict vocabulary of commands, an interface reserved for computer geeks alone. The next evolutionary step was the big wave of graphical user interfaces – ugly at first, but improving in significant leaps, making the user experience as smooth as possible, though still bounded by the options and actions available in a specific context of a particular application. Alongside graphical user interfaces we were introduced to search-like interfaces, which mix graphical elements with a command-line input allowing extensive textual interaction; here the GUI serves primarily as a navigation tool. Then other new human-machine interfaces were introduced, each evolving on its own track: the voice interface, the gesture interface (usually hands), and the VR interface. Each of these interaction paradigms uses different human senses and body parts to express communication to the machine, which can understand you to a certain extent and communicate back. And now we have chatbots, and there is something about them that is different: in a way, it is the first time you can express yourself freely via text and the machine will understand your intentions and desires. That's the premise. It does not mean every chatbot can respond to every request – chatbots are confined to the logic programmed into them – but from a language-barrier point of view, a new peak has been reached.
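The "understand your intentions" premise is usually implemented as intent classification over free text. A deliberately naive keyword-scoring sketch – the intents and keywords below are made up for illustration, and real chatbots use trained language models rather than word matching:

```python
# Naive intent matcher: score each intent by how many of its keywords
# appear in the user's message, and pick the best-scoring intent.

INTENTS = {
    "check_balance": {"balance", "account", "much", "money"},
    "transfer": {"transfer", "send", "pay"},
    "greeting": {"hello", "hi", "hey"},
}

def classify(message: str) -> str:
    """Return the best-matching intent name, or 'fallback' if nothing matches."""
    words = set(message.lower().replace("?", " ").replace("!", " ").split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"
```

The "fallback" branch is exactly the point made above: the language barrier falls, but the chatbot still cannot act outside the logic programmed into it.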

So are we now experiencing the end of the road for human-machine interaction? Last week I met an extraordinary woman named Zohar Urian (the lucky Hebrew readers can enjoy her super-smart blog about creativity, innovation, marketing, and lots of other cool stuff), and she said that voice would be next, which makes a lot of sense. Voice has less friction than typing, its popularity in messaging is only growing, and technological progress is almost at the point of allowing free vocal expression that a machine can understand. Zohar's sentence echoed in my brain and made me dig deeper into the anatomy of the evolution of the human-machine interface.

The Evolution of Human-Machine Interfaces


Progress in human-machine interaction follows evolutionary patterns. Every new paradigm builds on capabilities from the previous one, and eventually the rule of survival of the fittest plays a significant role: the winning capabilities survive and evolve. It is in its very nature to grow this way, as the human factor in this evolution is the dominating one. Every change in this evolution can be decomposed into four dominating factors:

  1. The brain, or the intelligence within the machine – the intelligence that contains the logic available to the human, but also the capabilities that define the semantics and boundaries of communications.
  2. The communications protocol provided by the machine, such as the ability to decipher audio into words and sentences, hence enabling voice interaction.
  3. The way the human communicates with the machine, which is tightly coupled with the machine's communications protocol but plays the complementary role.
  4. The human brain.

The four factors

Machine Brain <-> Machine Protocol <-> Human Protocol <-> Human Brain

In each paradigm shift, there was a change in one or more of these factors.


Command Line 1st Generation

The first interface, used to send a restricted set of commands to the computer by typing them on a textual screen.

Machine Brain: Dumb and restricted to a set of commands and a selection of options per system state

Machine Protocol: Textual

Human Protocol: Finger typing

Human Brain: Smart

Graphical User Interfaces

A 2D interface controlled by a mouse and a keyboard, allowing text input and the selection of actions and options.

Machine Brain: Dumb and restricted to a set of commands and a selection of options per system state

Machine Protocol: 2D positioning and textual

Human Protocol: 2D hand movement and finger actions, as well as finger typing

Human Brain: Smart

Adaptive Graphical User Interfaces

Same as the previous one, though here the GUI is more flexible in its possible input thanks to situational awareness of the human context (location…).

Machine Brain: Getting smarter, able to offer a different set of options based on profiling of the user's characteristics. Still limited to a set of options and to 2D positioning and textual inputs.

Machine Protocol: 2D positioning and textual

Human Protocol: 2D hand movement and finger actions, as well as finger typing

Human Brain: Smart

Voice Interface 1st Generation

The ability to identify content represented as audio and to translate it into commands and input.

Machine Brain: Dumb and restricted to a set of commands and a selection of options per system state

Machine Protocol: Listening to audio and content matching within the audio track

Human Protocol: A restricted set of voice commands

Human Brain: Smart

Gesture Interface

The ability to identify physical movements and translate them into commands and the selection of options.

Machine Brain: Dumb and restricted to a set of commands and a selection of options per system state

Machine Protocol: Visual reception and content matching within the video track

Human Protocol: Physical movement of specific body parts in a certain manner

Human Brain: Smart

Virtual Reality

A 3D interface with the ability to identify a full range of body gestures and translate them into commands.

Machine Brain: A bit smarter, but still restricted to selection from a set of options per system state

Machine Protocol: Movement reception via sensors attached to the body, and projection of peripheral video

Human Protocol: Physical movement of specific body parts in free form

Human Brain: Smart

AI Chatbots

A natural language detection capability that can identify, within the supplied text, the rules of human language and translate them into commands and input.

Machine Brain: Smarter and more flexible thanks to AI capabilities, but still restricted to a selection of options and capabilities within a certain domain

Machine Protocol: Textual

Human Protocol: Finger typing in free form

Human Brain: Smart

Voice Interface 2nd Generation

Same as the previous one, but combining the voice interface with natural language processing.

Machine Brain: Same as the previous one

Machine Protocol: Identification of language patterns and constructs in audio content, and translation into text

Human Protocol: Free speech

Human Brain: Smart

What's Next?



There are several phenomena and observations arising from this semi-structured analysis:

  • Combining communication protocols, such as voice and VR, will extend the range of communications between humans and machines even without changing anything in the machine brain.
  • Over time, more and more human senses and physical interactions become available for computers to understand, which extends the boundaries of communications. Smell and touch have not gone mainstream yet; I am pretty sure we will see them in the near future.
  • The human brain always stays the same. Furthermore, the rest of the chain always strives to match the human brain's capabilities. It can be viewed as a funnel limiting the human brain from fully expressing itself digitally, and over time the funnel gets wider.
  • An interesting question is whether at some point the human brain will get stronger, once communication with machines has no boundaries and AI is stronger.
  • We have not yet witnessed a serious leap that removed one of the elements in the chain, which I would call a revolutionary step (the chain still behaves in an evolutionary manner). Perhaps the identification of brain waves and their real-time translation into a protocol understandable by a machine will be such a leap, removing the need to translate thoughts into some intermediate medium.
  • As the machine brain becomes smarter with each evolutionary step, the magnitude of expression grows, so there is progress even without a more expressive communications protocol.
  • From a communications point of view, chatbots are in a way a jump back to the initial command-line protocol, though the magnitude of the smartness of machine brains nowadays makes them a different thing. So it is really about the progress of AI, not about chatbots.

    I may have missed some interfaces – apologies, I am not an expert in that area :)

Now to The Answer

So, the answer to the main question: chatbots indeed represent a big step in streamlining natural language processing for identifying user intentions in writing. Combined with the fact that users' favorite method of communication nowadays is texting, this makes for powerful progress. Still, the main thing that thrills here is the AI development, and that is sustainable across all communication protocols. In simple words, chatbots are just an addition to the arsenal of communication protocols between humans and machines, and we are far from seeing the end of this evolution. From the Facebook and Google point of view, these are new interfaces to their AI capabilities, which grow stronger every day thanks to increased usage.

Food for Thought

If one conscious AI meets another conscious AI in cyberspace, will they communicate via text, voice, or something else?

Cyber-Evil Getting Ever More Personal

Smartphones will soon become the target of choice for cyber attackers, making cyber warfare a personal matter. The emergence of mobile threats is nothing new, though until now it has mainly been a phase of testing the waters and building an arms arsenal. Evildoers are always on the lookout for the weaknesses that are easiest to exploit and most profitable. Now it is mobile's turn. We are witnessing a historic shift in focus from personal computers, the long-time classic target, to mobile devices. And of course, a lofty rationale lies behind this change.

Why Mobile?
The dramatic increase in the usage of mobile apps in nearly every aspect of our lives, the explosive growth in mobile web browsing, and the monopoly mobile holds on personal communications make our phones a worthy target. In retrospect, we can safely say that most security incidents are our own fault: the more we interact with our computer, the higher the chances that we will open a malicious document, visit a malicious website, or mistakenly run a new application that wreaks havoc on our computer. Attackers have always favored human error, and what is better suited to exposing these weaknesses than a computer that is so intimately attached to us 24 hours a day?

Mobile presents unique security challenges. Software patching is broken: the rollout of security fixes for operating systems is anywhere from slow to non-existent on Android, and cumbersome on iOS. Android's dire fragmentation has been the Achilles' heel of patching. Apps are not kept up to date either; tens of thousands of micro independent software vendors are behind many of the applications we use daily, and security is the last concern on their minds. Another major headache arises from the blurred line between the business and private roles of the phone. A single tap on the screen takes you from your enterprise CRM app to your personal WhatsApp messages, to a health-tracking application that contains a database of every vital sign you have shown since you bought your phone.

Emerging Mobile Threats
Mobile threats are growing quickly in number and variety, mainly because attackers are well-equipped and well-organized. This is occurring at an alarming pace, unparalleled by any previous emergence of cyber threats in other computing categories.

The first big wave of mobile threats to expect is cross-platform attacks, such as web browser exploits, cross-site scripting, or ransomware: the repurposing of field-proven attacks from the personal computer world onto mobile platforms. One area of innovation is the methods of persistence employed by mobile attackers, which will be highly difficult to detect, hiding deep inside applications and different parts of the operating system. A new genre of mobile-only attacks targets weaknesses in hybrid applications. Hybrid applications are called thus because they use the internal web browser engine as part of their architecture and, as a result, introduce many uncontrolled vulnerabilities. A large portion of the apps we are familiar with, including many banking-oriented ones and applications integrated into enterprise systems, were built this way. These provide an easy path for attackers into the back-end systems of many different organizations. The dreaded threat of botnets overflowing onto mobile phones has yet to materialize, though it will eventually happen, as it did on every other pervasive computing device. Wherever there is enough computing power and connectivity, bots appear sooner or later. On mobile, it will be major, as the number of devices is so high.

App stores continue to be the primary distribution channel for rogue software, as it is almost impossible to automatically identify malicious apps, quite similar to the challenge sandboxes face with evasive malware.

The security balance in the mobile world is on the verge of disruption, proving to us yet again that, as far as cybersecurity goes, we are ultimately at the mercy of the bad guys. This is the case at least for the time being, as the mobile security industry is still in its infancy, playing serious catch-up.

A variation of this story was published on Wired.co.uk: Hackers are honing in on your mobile phone.


Targeted attacks take many forms, though there is one common tactic most of them share: exploitation. To achieve their goal, attackers need to penetrate different systems on the go. This is done by exploiting unpatched or unknown vulnerabilities. More common forms of exploitation happen via a malicious document that exploits vulnerabilities in Adobe Reader, or a malicious URL that exploits the browser, in order to set a foothold inside the endpoint computer. Zero-day is the buzzword in the security industry today, and everyone uses it without necessarily understanding what it really means. It hides a complex world of software architectures, vulnerabilities, and exploits that only a few thoroughly understand. Someone asked me to explain the topic, again, and when I really delved deep into the explanation I came to comprehend something quite surprising. Please bear with me, this is going to be a long post 🙂


I will begin with some definitions of the different terms in this area. These are my own personal interpretations; they are not taken from Wikipedia.


Vulnerability

This term usually refers to problems in software products: bugs, bad programming style, or logical problems in the implementation of the software. Software is not perfect, and one could argue it cannot be. Furthermore, the people who build the software are even less perfect, so it is safe to assume such problems will always exist in software products. Vulnerabilities exist in operating systems, in runtime environments such as Java and .NET, and in specific applications, whether they are written in high-level languages or native code. Vulnerabilities also exist in hardware products, but for the sake of this post I will focus on software, as the topic is broad enough even with this focus. One of the main contributors to the existence and growth in the number of vulnerabilities is the ever-growing complexity of software products; it simply increases the odds of creating new bugs that are difficult to spot. Vulnerabilities always relate to a specific version of a software product, which is basically a static snapshot of the code used to build the product at a specific point in time. Time plays a major role in the business of vulnerabilities, maybe the most important one.

Assuming vulnerabilities exist in all software products, we can categorize them into three groups based on the level of awareness of them:

  • Unknown Vulnerability – A vulnerability that exists in a specific piece of software and that no one is aware of. There is no proof that such a vulnerability exists, but experience teaches us that it does and is just waiting to be discovered.
  • Zero-Day – A vulnerability that has been discovered by a certain group of people, or a single person, while the vendor of the software is not aware of it, so it is left open, with no fix and no awareness of its presence.
  • Known Vulnerabilities – Vulnerabilities that have been brought to the awareness of the vendor and of customers, either privately or as public knowledge. Such vulnerabilities are usually identified by a CVE number. During the first period following discovery, the vendor works on a fix, or patch, which then becomes available to customers. Until customers update the software with the fix, the vulnerability remains open to attack. So in this category, each respective installation of the software can have patched or unpatched known vulnerabilities. In a way, the patch always comes with a new software version, so a specific product version either contains unpatched vulnerabilities or it does not; there is no such thing as a patched vulnerability, there are only new versions with fixes.
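These distinctions can be sketched in a few lines of code. The following is a simplified illustration (the enum values and the naive dotted-version comparison are my own assumptions, not a real vulnerability-management API), capturing the point that "patched" is a property of an installation's version, not of the vulnerability itself:

```python
from enum import Enum
from typing import Optional

class Awareness(Enum):
    UNKNOWN = "unknown"      # nobody knows the vulnerability exists yet
    ZERO_DAY = "zero-day"    # discovered, but the vendor is unaware; no fix exists
    KNOWN = "known"          # disclosed to the vendor, usually assigned a CVE

def installation_is_exposed(installed_version: str, fixed_in: Optional[str]) -> bool:
    """There is no 'patched vulnerability', only newer versions with fixes:
    an installation is exposed iff it runs a version older than the fix."""
    if fixed_in is None:  # unknown or zero-day: no fix exists anywhere
        return True
    parse = lambda v: tuple(int(x) for x in v.split("."))  # naive dotted compare
    return parse(installed_version) < parse(fixed_in)
```

For example, an installation running version 17.0.0.134 of a product whose fix shipped in 17.0.0.169 is still exposed, while one already on 17.0.0.169 is not.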

There are other ways to categorize vulnerabilities: based on the exploitation technique, such as buffer overflow or heap spraying, or based on the type of bug that leads to the vulnerability, such as a logical flaw in the design or a wrong implementation.


Exploit

A piece of code that abuses a specific vulnerability in order to make the attacked software do something unexpected. This means either gaining control of the execution path inside the running software so the exploit can run its own code, or just achieving a side effect, such as crashing the software or causing it to do something unintended by its original design. Exploits are usually highly associated with malicious intentions, although from a technical point of view an exploit is just a mechanism to interact with a specific piece of software via an open vulnerability. I once heard someone refer to it as an "undocumented API" :).

This picture from the Infosec Institute describes the vulnerability/exploit life cycle in an illustrative manner:


The time span colored in red is the period during which a discovered vulnerability is considered a zero-day, and the span colored in green is when the vulnerability's state turns to known but unpatched. The post-disclosure risk is always dramatically higher, as the vulnerability becomes public knowledge and the bad guys can, and do, exploit it at a far higher frequency than in the earlier stage. Closing the gap in the patching period is the only step that can be taken to reduce this risk.
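The life cycle above reduces to simple date arithmetic. The dates below are invented for illustration and correspond to no real CVE; the point is that only one of the three windows is under the defender's control:

```python
from datetime import date

# Hypothetical milestones in a vulnerability's life cycle (illustrative only).
discovered  = date(2015, 1, 10)  # found by an attacker: the zero-day window opens
disclosed   = date(2015, 3, 1)   # vendor/public become aware: "known, unpatched"
patch_ready = date(2015, 3, 20)  # the fix ships
deployed    = date(2015, 5, 15)  # this particular installation finally updates

zero_day_window = (disclosed - discovered).days   # red span in the picture
patching_window = (deployed - patch_ready).days   # the only span defenders control
total_exposure  = (deployed - discovered).days    # full window of risk
```

With these made-up dates the installation was exposed for 125 days in total, of which the 56-day patching window is the only part the defender could have shrunk.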

The Math Behind Targeted Attacks

Most targeted attacks today use the exploitation of vulnerabilities to achieve three goals:

  • Penetrate an employee's endpoint computer via techniques such as malicious documents sent by email or malicious URLs. Those malicious documents/URLs contain malicious code that seeks specific vulnerabilities in the host programs, such as the browser or the document reader. During a rather naive reading experience, the malicious code is able to sneak into the host program through this penetration point.
  • Gain higher privilege once malicious code already resides on a computer. Often the attack that managed to sneak into the host application does not have enough privilege to continue attacking the organization, so the malicious code exploits vulnerabilities in the application's runtime environment, which can be the operating system or the JVM for example; vulnerabilities that can help the malicious code gain elevated privileges.
  • Lateral movement – once the attack is inside the organization and wants to reach other areas in the network to achieve its goals, it often exploits vulnerabilities in other systems that reside on its path.

So, from the point of view of the attack itself, we can definitely identify three main stages:

  • An attack at Transit Pre-Breach – The attack is on its way to the target, and inside the target, prior to the exploitation of a vulnerability.
  • An attack at Penetration – The attack is exploiting a vulnerability successfully to get inside.
  • An attack at Transit Post-Breach – The attack has started running inside its target and within the organization.

The following diagram quantifies the complexity inherent in each attack stage, from both the attacker's and the defender's side; below the diagram are descriptions of each area, followed by the concluding part:

Ability to Detect an Attack at Transit Pre-Breach

These are the red areas in the diagram. Here an attack is on its way, prior to exploitation. "On its way" means the enterprise can scan the binary artifacts of the attack, whether in the form of network packets, a visited website, or a specific document traveling via email servers or arriving at the target computer. This approach is called static scanning. The enterprise can also emulate the expected behavior of the artifact (opening a document in a sandboxed environment, for example) and try to identify patterns in the behavior of the sandbox environment that resemble a known attack pattern; this is called behavioral scanning.

Attacks pose three challenges towards security systems at this stage:

  • Infinite Signature Mutations – Static scanners look for specific binary patterns in a file that match a malicious code sample in their database. Attackers have long since outsmarted these tools: they have automation tools for changing those signatures in a random manner, with the ability to create an infinite number of static mutations. So a single attack can take an infinite number of forms in its packaging.
  • Infinite Behavioural Mutations – The security industry's evolution beyond static scanners was toward behavioral scanners, where the "signature" of a behavior eliminates the problems induced by static mutations, and the sample base of behaviors is dramatically smaller. A single behavior can be decorated with many static mutations, and behavioral scanners reduce this noise. The challenges posed by the attackers make behavioral mutations infinite in nature as well, and they are two-fold:
    • An infinite number of mutations in behavior – In the same way attackers outsmart static scanners by creating an infinite number of static decorations on the attack, here too attackers can add dummy steps or reshuffle the attack steps so that they eventually produce the same result but present a different behavioral pattern. The spectrum of behavioral mutations seemed at first narrower than that of static mutations, but with the advancement of attack generators even that has been achieved.
    • Sandbox evasion – Attacks that are scanned for bad behavior in a sandboxed environment have developed advanced capabilities to detect whether they are running in an artificial environment, and if they detect that they are, they pretend to be benign and perform no exploitation. This is currently an ongoing race between behavioral scanners and attackers, and the attackers seem to have the upper hand.
  • Infinite Obfuscation – This technique has been adopted by attackers in a way that connects to the infinite static mutations factor but deserves specific attention. In order to deceive static scanners, attackers hide the malicious code itself by running some transformation on it, such as encryption, with a small piece of code responsible for decrypting it on the target prior to exploitation. Again, the range of options for obfuscating code is infinite, which makes the static scanners' work even more difficult.
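A toy example makes the signature-mutation point concrete. A single-byte XOR "packer" (a deliberately trivial stand-in for the real obfuscation tooling described above) leaves the decoded payload identical while giving every key a different on-disk signature:

```python
import hashlib

def xor_obfuscate(payload: bytes, key: int) -> bytes:
    """Toy single-byte XOR transform: decodes back to the same bytes,
    but every key value yields a different file signature. Illustration only."""
    return bytes(b ^ key for b in payload)

payload = b"placeholder attack payload (benign text)"
original = hashlib.sha256(payload).hexdigest()

# 255 distinct 'mutations' of the same payload, each with its own hash
mutated = {hashlib.sha256(xor_obfuscate(payload, k)).hexdigest() for k in range(1, 256)}

assert original not in mutated and len(mutated) == 255
assert xor_obfuscate(xor_obfuscate(payload, 0x5A), 0x5A) == payload  # round-trip
```

A static scanner keyed to any one of these hashes misses the other 254, and a real packer chooses from a vastly larger space of keys and transforms.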

This makes the challenge of capturing an attack prior to penetration very difficult, verging on impossible, and it definitely grows with time. I am not by any means implying such security measures don't serve an important role; today they are the main safeguards keeping the enterprise from turning into a zoo. I am just saying it is a very difficult problem to solve, and that there are other areas with better ROI (if such a thing as security ROI exists) in which a CISO would do better to invest.

Ability to Stop an Attack at Transit Post Breach

These are the black areas in the diagram. An attack that has already gained access to the network can take an infinite number of possible attack paths to achieve its goals. Once an attack is inside the network, the relevant security products try to identify it. Such technologies revolve around big data/analytics that try to identify network activities implying malicious activity, or network monitors that listen to the traffic and try to identify artifacts or static behavioral patterns of an attack. These tools rely on different informational signals that serve as attack indicators.

Attacks pose multiple?challenges towards security products at this stage:

  • Infinite Signature Mutations, Infinite Behavioural Mutations, Infinite Obfuscation – These are the same challenges as described before, since the attack within the network can have the same characteristics as before entering the network.
  • Limited Visibility on Lateral Movement – Once an attack is inside, its next steps are usually to get a stronghold in different areas of the network, and such movement is hardly visible as it eventually consists of legitimate actions: once an attacker gains higher privilege, it conducts actions that are considered legitimate but highly privileged, and it is very difficult for a machine to distinguish the good ones from the bad ones. Add to that the fact that persistent attacks usually use technologies that enable them to remain stealthy and invisible.
  • Infinite Attack Paths – The path an attack can take inside the network, especially considering that a targeted attack's goals are unknown to the enterprise, has infinite options.

This makes the ability to deduce, from specific signals coming from different sensors in the network, that there is an attack, along with its boundaries and goals, very limited. Sensors deployed on the network never provide true visibility into what's really happening, so the picture is always partial. Add deception techniques about the path of attack, and you stumble into a very difficult problem. Again, I am not arguing that security analytics products focusing on post-breach are unimportant; on the contrary, they are very important. I am just saying this is only the beginning of a very long path toward real effectiveness in that area. Machine learning is already playing a serious role, and AI will definitely be an ingredient in a future solution.

Ability to Stop an Attack at Penetration Pre-Breach and on Lateral Movement

These are the dark blue areas in the diagram. Here the challenge is reversed, to the attacker's disadvantage, as there is a limited number of entry points into the system. Entry points, a.k.a. vulnerabilities, come in two kinds:

  • Unpatched Vulnerabilities – These are open "windows" that have not been covered yet. The main challenge here for the IT industry is automation, dynamic updating capabilities, and prioritization. It is definitely an open gap, but one that can potentially be narrowed down to insignificance.
  • Zero-Days – This is an unsolved problem. There are many approaches to it, such as ASLR and DEP on Windows, but still no bulletproof solution. On the startup scene, I am aware of quite a few companies working very hard on a solution. Attackers identified this soft belly a long time ago, and it is the weapon of choice for targeted attacks, which can potentially yield serious gains for the attacker.

This area presents a definite problem, but in a way it seems the most likely to be solved earlier than the others, mainly because this is the stage at which the attacker is at its greatest disadvantage. Right before it gets into the network it has infinite options for disguising itself, and after it gets into the network the action paths it can take are infinite; but here the attacker needs to go through a specific window, and there aren't too many of those left unprotected.

Players in the Area of Penetration Prevention

There are multiple companies/startups brave enough to tackle the toughest challenge in the targeted-attacks game, preventing infiltration; I call it facing the enemy at the gate. In this ad-hoc list I have included only technologies that aim to block attacks in real time. There are many other startups that approach static or behavioral scanning in a unique and disruptive way, such as Cylance, Cybereason, or Bit9 + Carbon Black (list from @RickHolland), which were excluded for the sake of brevity and focus.

Containment Solutions

Technologies that isolate the user's applications within a virtualized environment. The philosophy behind them is that even if an application is exploited, the attack will be contained and will not propagate to the rest of the computer environment. From an engineering point of view, I think these guys have the most challenging task, as isolation and usability are inversely correlated with respect to productivity, and it all involves virtualization on an endpoint, which is a difficult task on its own. The leading players are Bromium and Invincea, well-established startups with very good traction in the market.

Exploitation Detection & Prevention

Technologies that aim to detect and prevent the actual act of exploitation. These range from companies like Cyvera (now the Palo Alto Networks Traps product line), which aim to identify patterns of exploitation; through technologies such as ASLR/DEP and EMET, which aim to break the assumptions of exploits by modifying the inner structures of programs and setting traps at "hot" places susceptible to attack; up to startups like Morphisec, which employs a unique moving-target concept to deceive and capture attacks in real time. Another long-time player, and maybe the most veteran in the anti-exploitation field, is MalwareBytes. They have a comprehensive anti-exploitation offering, with capabilities ranging from in-memory deception and trapping techniques up to real-time sandboxing.

At the moment the endpoint market is still controlled by the marketing money poured in by the major players, whose solutions are growing ineffective at an accelerating pace. I believe this is a transition period, and you can already hear voices saying the endpoint market needs a shakeup. In the future, the anchor of endpoint protection will be real-time attack prevention, with static and behavioral scanning extensions playing a minor, feature-completion role. So pay careful attention to the technologies mentioned above, as one of them (or maybe a combination:) will bring the "force" back into balance :)

Advice for the CISO

Invest in closing the gap posed by vulnerabilities. From patch automation and prioritized vulnerability scanning up to security code analysis for in-house applications, it is all worth it. Furthermore, seek out solutions that deal directly with the problem of zero-days; there are several startups in this area, and their contribution can have a much higher magnitude than any other security investment in a post- or pre-breach phase.
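The "prioritized vulnerability scanning" part of this advice can be sketched as a simple ranking: weight each open finding's base severity by exposure factors and patch from the top. The field names, the weights, and the CVE placeholders below are illustrative assumptions on my part, not a standard scoring scheme:

```python
# Hypothetical open findings; identifiers and scores are made up for illustration.
findings = [
    {"cve": "CVE-AAAA", "cvss": 9.8, "internet_facing": True,  "exploit_public": True},
    {"cve": "CVE-BBBB", "cvss": 7.5, "internet_facing": False, "exploit_public": True},
    {"cve": "CVE-CCCC", "cvss": 9.1, "internet_facing": False, "exploit_public": False},
]

def priority(finding: dict) -> float:
    """Base CVSS score amplified by exposure: internet-facing assets and
    findings with a public exploit jump the patch queue."""
    score = finding["cvss"]
    if finding["internet_facing"]:
        score *= 1.5
    if finding["exploit_public"]:
        score *= 1.3
    return score

# Patch queue, most urgent first.
patch_queue = sorted(findings, key=priority, reverse=True)
```

Even this naive weighting moves the internet-facing finding with a public exploit ahead of a finding with a nominally higher-looking CVSS score, which is the whole point of prioritization.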

Exploit in the Wild, Caught Red-Handed

Imagine a futuristic security technology that can stop any exploit at the exact moment of exploitation, regardless of the way the exploit was built, its evasion techniques, or any mutation it might have or could be imagined to have: a technology truly agnostic to the form of the attack. An attack prevented, and its attacker caught red-handed at the exact moment of exploitation… Sounds dreamy, no? For the guys at the stealth startup Morphisec, it's a daily reality. So I decided to convince the team in the malware analysis lab to share some of their findings from today, and I have to brag about it a bit :)

Exploit Analysis

The target software is Adobe Flash and the vulnerability is CVE-2015-0359. Today, the team got a fresh sample that was uploaded to VirusTotal 21 hours ago! From the moment we received it, the VirusTotal scan results showed that no security tool on the market detected it except for McAfee GW Edition, which generically identified its malicious activity.
[Screenshot: VirusTotal scan results, 2015-04-28]

The guys at Morphisec love samples like these because they allow them to test their product against what is considered a zero-day, or at least an unknown attack. Within an hour, the CVE/vulnerability exploited by the attack and the method of exploitation were identified.

Technical Analysis

Morphisec prevents the attack when it starts to look for the Flash module address (which would later be used to find gadgets). The vulnerability allows the attacker to modify the size of a single array (out of many sequentially allocated arrays of size 0x3fe).

An array of size 0x3fe (at index 401) is modified to size 0x40000001, covering the entire memory space. The first doubleword in this array points to a read-only section inside the Flash module. The attacker uses this address as the start address of an iteration dedicated to an MZ search (MZ indicates the start of the library); each search iteration steps 64 KB (after the leaked read-only pointer is aligned to a 64 KB boundary).

After the attacker finds the MZ, it validates the NT signatures of the module, gets the code base pointer and size, and from that point the attack searches for gadgets in the code of the Flash module.
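The module-base search described above can be sketched conceptually. The snippet below runs against a synthetic memory buffer rather than a live process, all addresses are invented, and the downward 64 KB walk is my reading of the iteration described, not Morphisec's actual analysis code:

```python
PAGE_64K = 0x10000  # module load addresses are 64 KB aligned on Windows

def find_module_base(memory: bytes, leaked_ptr: int) -> int:
    """Align the leaked in-module pointer down to a 64 KB boundary, then step
    down 64 KB at a time until the 'MZ' header marking the module start."""
    addr = leaked_ptr & ~(PAGE_64K - 1)
    while addr >= 0:
        if memory[addr:addr + 2] == b"MZ":
            return addr
        addr -= PAGE_64K
    raise ValueError("no MZ header found below the leaked pointer")

# Synthetic 1 MB 'memory' with a fake module header planted at 0x20000.
memory = bytearray(0x100000)
memory[0x20000:0x20002] = b"MZ"
assert find_module_base(bytes(memory), leaked_ptr=0x4B123) == 0x20000
```

Once the base is found this way, a real exploit would go on to validate the NT headers and walk the code section for gadgets, which is exactly the step at which the attack was stopped.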

[Screenshots: exploit code analysis, 2015-04-29]

Morphisec's technology not only stopped the attack at the first step of exploitation, it also identified the targeted vulnerability and the method of exploitation as part of its impressive real-time forensic capability. All of this was done instantly, in memory, at the binary level, without any decompilation!

I imagine that pretty soon the other security products will add this sample's signature to their databases so it can be properly detected. Nevertheless, each new mutation of the same attack leaves the common security arsenal "blind" to it, which is not very efficient. Gladly, Morphisec is changing this reality! I know that when a startup is still in stealth mode and there is no public information, such comparisons are a bit "unfair" to the other technologies on the market, but still, I just had to mention it:)

P.S. Pretty soon we will start sharing more details about Morphisec's technology, so stay tuned. Follow us on Twitter @morphisec for more updates.

Time to Re-think Vulnerabilities Disclosure

Public disclosure of vulnerabilities has always bothered me, and I wasn't able to put a finger on the reason until now. As a person who has been personally involved in vulnerability disclosures, I highly appreciate the contribution security researchers make to awareness, and it is very hard to imagine what the world would be like without disclosures. Still, the way attacks are crafted today, and their links to such disclosures, got me thinking about whether we are doing it in the best way possible. So I tweeted this and got a lot of "constructive feedback":) from the team in the cyber labs at Ben-Gurion, asking how I dare.

So I decided to build my argument properly.


The basic fact is that software has vulnerabilities. Software gets more and more complex over time, and this complexity usually invites errors. Some of those errors can be abused by attackers to exploit the systems such software is running on. Vulnerabilities split into two groups: the ones the vendor is aware of, and the unknown ones. And it is unknown how many unknowns there are inside each piece of code.


There are many companies, individuals, and organizations that search for vulnerabilities in software, and once they find one, they disclose their findings. They disclose at least the mere existence of the vulnerability to the public and the vendor, and often even publish proof-of-concept code that can be used to exploit the found vulnerabilities. Such disclosure serves two purposes:

  • Making users of the software aware of the problem as soon as possible
  • Making the vendor aware of the problem so it can create and send a fix to their users

Once the vendor is aware of the problem, it is their responsibility to notify users formally and to create a software update that fixes the bug.


Past to Time of Disclosure – The unknown vulnerability waits silently, eager to be discovered.

Time of Disclosure to Patch is Ready – Everyone knows about the vulnerability, the good and the bad guys, and it is now on production systems waiting to be exploited by attackers.

Patch Ready to System is Fixed – Also during this time period, the vulnerability is still there waiting to get exploited.
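The three windows above can be made concrete with dates. Here is a toy calculation; the milestone dates are made up for illustration and are not tied to any real CVE:

```python
from datetime import date

# Hypothetical milestones in the disclosure lifecycle described above.
disclosed   = date(2014, 9, 24)   # vulnerability made public
patch_ready = date(2014, 9, 27)   # vendor ships a fix
patched     = date(2014, 11, 15)  # fix actually applied on a given system

public_unpatched = (patch_ready - disclosed).days  # everyone knows, no fix yet
fix_not_applied  = (patched - patch_ready).days    # fix exists, not deployed
total_exposure   = (patched - disclosed).days      # the attacker's window

print(public_unpatched, fix_not_applied, total_exposure)  # -> 3 49 52
```

Even with a fast vendor, the deployment tail usually dominates: in this made-up example the fix existed for 49 of the 52 exposed days.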

The following diagram demonstrates those timelines in relation to the ShellShock bug:


Image taken from http://www.slideshare.net/ibmsecurity/7-ways-to-stay-7-years-ahead-of-the-threat


So the disclosure process does eventually end with a fixed system, but there is a long period of time in which systems are vulnerable and attackers don't need to work hard at uncovering new vulnerabilities, since the disclosed one is waiting for them.

I got thinking about this after I saw these stats via Tripwire:

"About half of the CVEs exploited in 2014 went from publishing to pwn in less than a month" (DBIR, pg. 18).

This statistic means that half of the CVEs exploited during 2014 were exploited within a month of their publication (CVE is a public vulnerability database). Although some may argue that the attackers could have had the same knowledge of those vulnerabilities before they were published, I say it is far-fetched. If I were an attacker, what would be easier than going over the recently published vulnerabilities, finding one that is suitable for my target, and later building an attack around it? Needless to say, there are tools, such as Metasploit, that even provide working examples. Of course, the time window to operate is not infinite, as it is in the case of an unknown vulnerability no one knows about, but still, a month or more is enough to get the job done.

Last Words

A new process of disclosure should be devised where the risk level during the time of disclosure up to the time a patch is ready and applied should be reduced. Otherwise, we are all just helping the attackers while trying to save the world.

Most cyber attacks start with an exploit – I know how to make them go away

Yet another new ransomware with a new, sophisticated approach: http://blog.trendmicro.com/trendlabs-security-intelligence/crypvault-new-crypto-ransomware-encrypts-and-quarantines-files/

Note that the key sentence in the description of the way it operates is: "The malware arrives to affected systems via an email attachment. When users execute the attached malicious JavaScript file, it will download four files from its C&C server:"

When users execute the JavaScript file, the script is loaded into the browser (or script host) and exploits it in order to get in and then start all the heavy lifting. The browser is vulnerable, software is vulnerable; it's a given fact of an imperfect world.

I know a startup company called Morphisec that is eliminating those exploits in a very surprising and efficient way.

In general, vulnerabilities are considered a chronic disease, but it does not have to be this way. Some smart guys and girls are working on a cure:)

Remember, it all starts with the exploit.

No One is Liable for My Stolen Personal Information

The main victims of any data breach are actually the people, the customers whose personal information has been stolen, and oddly, they don't get the attention they deserve. Questions like what the impact of the theft was on me as a customer, what I can do about it, and whether I deserve some compensation are rarely dealt with publicly.

Customers face several key problems when their data is stolen, questions such as:

  • Was their data stolen at all? Even if there was a breach, it is not clear whether my specific data was stolen. Also, the multitude of places where my personal information resides makes it impossible to track whether and from where my data has been stolen.
  • What pieces of information about me were stolen, and by whom? I deserve to know this more than anyone else, mainly due to the next bullet.
  • What risks am I facing now, after the breach? In the case of a stolen password that is reused in other services, I can go and change it manually, but when my social security number is stolen, what does it mean for me?
  • Whom can I contact in the breached company to answer such questions?
  • And most important: was my data protected properly?

The main point here is that companies are not obligated, legally or socially, to be transparent about how they protect their customers' data. The lack of transparency and of standards for how to protect data creates an automatic lack of liability and serious confusion for customers. In other areas, such as preserving customer privacy and terms of service, the protocol between a company and its customers is quite standardized, and although not enforced by regulation, it still has substance to it. Companies publish their terms of service (TOS) and privacy policy (PP), and both sides rely on these statements. The recent breaches at Slack and JPMorgan are great examples of the poor state of customer data protection: in one case they decided to implement two-factor authentication only after the breach (I am not sure why they didn't do it before), and in the other case two-factor authentication was missing in action. These are just two examples that represent the norm across most companies in the world.

And what if each company adopted an open customer data protection policy (CDPP), a document that would clearly specify, on the company website, what kind of data it collects and stores and what security measures it applies to protect it? From a security point of view, such information cannot really cause harm, since attackers have better ways to learn about the internals of the network; from a customer relationship point of view, it is a must.

Such a CDPP statement could include:

  • The customer data elements collected and stored
  • How the data is protected against malicious employees
  • How it is protected from third parties that may have access to it
  • How it is protected at rest and in transit
  • How the company is expected to communicate with customers when a breach happens, and who the contact person is
  • To what extent the company is liable for stolen data

Such a document could dramatically increase the confidence level for us, the customers, before we choose to work with a specific company, and could serve as a basis for innovation in tools that aggregate and manage such information.

Cyber Tech 2015 – It’s a Wrap

It has been a crazy two days at Israel's Cyber Tech 2015, in a good way! The exhibition hall was split into three sections: the booths of the established companies, the startups pavilion, and the Cyber Spark arena. It was like examining an x-ray of the emerging cyber industry in Israel: on one hand you have the grown-ups, who are the established players; the startups/sprouts seeking opportunities for growth; and an engine that generates such sprouts, the Cyber Spark. I am lucky enough to be part of the Cyber Spark growth engine, which is made up of the most innovative contributors to the cyber industry in Israel: giants like EMC and Deutsche Telekom, alongside Ben-Gurion University and JVP Cyber Labs. The Cyber Spark is a place where you see how ideas formed in the minds of bright scientists and entrepreneurs flourish into new companies.

It all started two days ago, twelve hours before the event hall opened its doors, with great coverage by Kim Zetter from Wired on the BitWhisper heat-based air-gap breach: a splendid opening that generated tremendous interest across the worldwide media in the rolling story of air-gap security investigated at Ben-Gurion University's cyber research center. This story made the time in our booth quite hectic, with many visitors interested in the details, or just dropping by to compliment us on our hard work.


I had enough time to go and visit the startups presenting at the exhibition, which were the real deal, and, as someone living in the future, I wanted to share some thoughts and insights on what I saw. Although each startup is unique, with its own story and team, there are genres of solutions and technologies:

Security Analytics

Going under the names of analytics, big data, or BI, there were a handful of startups trying to solve the problem of security information overload. And it is a real problem: today security and IT systems generate hundreds of alerts every second, and it is impossible to prioritize what to handle first and to distinguish the important from the less important. The problem divides into two parts: the ongoing monitoring and maintenance of the network, and managing the special post-breach occasions, where the decisions and actions taken are critical since time is pressing and wrong actions can damage the investigation. Each startup takes its own angle on this task, with unique advantages and disadvantages, and it is fairly safe to say that the security big data topic is finally getting proper treatment from the innovation world. Under the category of analytics, I also group all the startups that help visualize and understand the enterprise IT assets, addressing the same problem of security information overload in their own way.

Mobile Security

Security of mobile devices (laptops, tablets, and phones) is a vast topic, including on-device security measures, secure operating systems, integration of mobile workers into the enterprise IT, and risk management of mobile workers. This is a topic that Israeli startups have been addressing for several years now, and finally this year it seems that enterprises are ready to absorb such solutions. These solutions help mitigate the serious risk inherent in the new model of enterprise computing, which is no longer behind the closed doors of the office: the enterprise is now distributed globally and always moving, and parts of it can be on the train or at home.


Authentication

We all know passwords are bad. They are hard to remember and, most of all, insecure, and the world is definitely working toward reinventing the ways we can authenticate digitally without passwords. From an innovation point of view, authentication startups are the most fascinating, as each one comes from a completely different discipline and aims to solve the same problem. Some base their technology on the human body, i.e., biometrics, and some come from the cryptographic world with all kinds of neat tricks, such as zero-knowledge proofs. From an investor's point of view, these startups are the riskiest ones, since they all eventually depend on consumer adoption, and usually only one or two get to win, and win big, while the rest are left deserted.

Security Consulting

Although it is weird to see consulting companies in the startups pavilion, in the world of security it makes a lot of sense. There is a huge shortage of security professionals globally, and this demand serves as the basis for new consulting powerhouses that provide services such as penetration testing, risk assessment, and solution evaluation. The Israelis are well known for their hands-on expertise, which is appreciated by many organizations across the world.

Security in the Cloud

The cloud movement is happening now, with security being both a large part of it and an enabler of it, and startups of course do not miss out on that opportunity. Cloud security is basically the full range of technologies and products aimed at defending cloud operations and data. In a way, it is a replica of the legacy data center security inventory, simply taking a different shape to better adapt to the new dynamic environment of cloud computing. This is a very promising sector, as its demand curve is steep.

Security Hardware

This was a refreshing thing to see, as Israeli startups have tended in recent years to focus mostly on software. A range of cool devices, from sniffers to backup units and Wi-Fi blockers. I wonder how it will play out for them, as the playbook for hardware is definitely different from software's.

SCADA Security

SCADA always ignites the imagination, conjuring critical infrastructure and sensitive nuclear plants, a fact which has definitely grabbed the attention of many entrepreneurs looking to start a venture that solves these important issues: problems such as the inability to update those critical systems, the lack of visibility into attacks on disconnected devices, and the inability to control the assets in real time during an attack. The real problem with SCADA systems is the risk associated with an attack, which anyone would try to avoid at all costs, while the challenge for startups is the integration into this diverse world.

IoT Security

IoT security is a popular buzzword now, and behind it hides a very complicated world of many devices and infrastructures for which there is no one-size-fits-all solution. Although there are startups that claim to be solving IoT security, I project that with time each one of them will find its own niche, which is sufficient, as it's a vast world with endless opportunities. A branch of IoT that was prominent in the exhibition was car security, with some very interesting innovations.

Data Leakage Protection

As part of the post-breach challenge, there are quite a few startups focusing on how to prevent data exfiltration. From a scientific point of view, it is a great challenge consisting of conflicting factors: the tighter the control on data, the less convenient it is to use that data on normal days.

Web Services Security

The growing trend of attacks on websites in recent years, and the tremendous impact they have on consumer confidence, i.e., when your website gets defaced or starts serving malware, has grabbed the Israeli startups' attention. Here we can find a versatile portfolio of active protection tools that prevent and deflect attacks, scanning services for websites, and tools for DDoS prevention. DDoS has been in the limelight recently, and with all the botnets out there, it is a real threat.

Insider Threats

Insider threats are one of the biggest concerns for CISOs today, with two main attack vectors: the clueless employee and the malicious employee. This threat is addressed from many directions, starting with profiling the behavior of employees, profiling the usage of data assets, and protecting central assets like Active Directory. This is definitely going to be a source of innovation in the upcoming years, as the problem is diverse and difficult to solve, in that it involves the human factor.

Eliminating Vulnerabilities

Software vulnerabilities were, are, and will remain an unsolved problem, and the industry tackles it in many different ways, ranging from code analysis and code development best practices, through vulnerability scanning tools and services, to active protection against exploitation. Vulnerabilities are the mirror reflection of APTs, and here again there are many unique approaches to detecting and stopping these attacks, such as endpoint protection tools, network detection tools, host-based protection systems, botnet detection, and honeypots aiming to lure and contain the attacks.

What I did Not See

Among the things I did not see there: tools that attack the attackers, developments in cryptography, container security, security & AI, and social engineering-related tools.

I regret that I did not have much time to listen to the speakers; I heard that some of the presentations were very good. Maybe next year at Cyber Tech 2016.

A Brief History on the Emerging Cyber Capital of the World: Beer-Sheva, Israel


The beginning of the cyber park

There are very few occasions in life where you personally experience a convergence of unrelated events that leads to something, something BIG! I am talking about Beer-Sheva, Israel's desert capital. When I started to work with Deutsche Telekom Innovation Laboratories at Ben-Gurion University 9 years ago, it was a cool place to be, though still quite small. Back then, security, which was not yet referred to as cybersecurity, was one of the topics we covered, but definitely not the only one. At that time, we were the first and only cyber-related activity in this great desert. No one knew, or at least I didn't, that it was going to become a blossoming cyber powerhouse. Actually, imagining the Beer-Sheva of yesterday, it was unthinkable that the hi-tech scene of Tel Aviv would make its way southward.

Now, fast-forward to the last three years, and well, it has been a rollercoaster. Deutsche Telekom has strengthened its investment in security, and together with the emerging expertise of Ben-Gurion University in the field of cyber, other large, leading security companies have caught the inspiration and followed suit. Major players have opened branches in Beer-Sheva's Cyber Spark area: EMC and Lockheed Martin, an IBM research lab, and numerous important others as well. The growing interest in and recognition of BGU's expertise in cyber have prompted many organizations and companies to cooperate with the university, leading eventually to the emergence of the Cyber Security Labs at Ben-Gurion University. I'm referring to the same lab that was behind the Samsung Knox VPN vulnerability disclosure and the breaking of air-gap security via AirHopper. In parallel, JVP, the most prominent VC in Israel, has opened the JVP Cyber Labs, which started pumping life into the many ideas that were up in the air, giving everyone a commercial point of view on innovation. The Israeli government also started backing this plan, and together with the local authority really transformed the ecosystem into a competitive place for talent. Most of all, the university has been a real visionary, backing this emergence from the very beginning, in spirit and action alike.

This brief summary of events led to a tipping point of no return, where Beer-Sheva can be defined with confidence as the emerging cyber capital of the world. You can find a mix of professors, young researchers, entrepreneurs, venture capitalists, and large corporations all located in the same physical place, talking and thinking about cyber and converging into this newborn cyber industry. Of course, this is my personal story and point of view, and others have their own angle. However, Beer-Sheva as a cyber capital is undeniable; take, for example, David Strom's impressions from his recent visit.


A view to the future of the cyber park

One very special person I am obligated to mention here, whom I perceive as the father of this entire movement, is Professor Yuval Elovici, the head and creator of Telekom Innovation Laboratories and the cybersecurity labs at Ben-Gurion University. I am grateful to him both personally and collectively: first and foremost, for pursuing the development of this process in Beer-Sheva. He had this vision from the very early days, a very long time ago, when the term "cyber" was known only for the crazy shopping done on Cyber Monday. The second reason, a personal one, is for pulling me into this wonderful adventure. Before joining the university labs, I never imagined having anything to do with academia, as I am a person who never even properly graduated from high school:)


The movers and shakers of the cyber capital

Life is full of surprises.

So, I suggest that anyone in the area of cyber?in Israel or abroad?keep a very close eye on what is happening in Beer-Sheva, because it is happening now!

P.S. If you are around on the 24-25th of March at the Cyber Tech event, please drop by and say “hi” at our beautiful Cyber Spark booth.


Distributed Cyber Warfare

One of the core problems with cybercriminals and attackers is the lack of a clear target. Cyber attacks are digital in nature, and as such they are not tied to a specific geography, organization, or person; tracing them back to a source is non-deterministic and ambiguous. In a way, it reminds me of real-life terrorism as an effective distributed warfare model, which is also difficult to mitigate. The known military doctrines always assumed a clear target, and in a way they are no longer relevant against terrorism. Terrorists take advantage of the concept of distributed entities, where attacks can hit anything, anytime, can originate from anywhere on the planet, and can use an unknown form of attack: a very fuzzy target. The ways countries tackle terrorism rely mostly on intelligence gathering, while the best intelligence is unfortunately created following a specific attack. After an attack, it is quite easy to find out the identity of the attackers, which eventually leads to a source and a motivation; this information leads to more focused intelligence, which helps prevent future attacks. In the cyber arena, the situation is much worse, since even after actual attacks take place it is almost impossible to reliably trace the specific sources and attribute them to some organization or person.

It is a clear example of how a strong concept like distributed activity can be used for malicious purposes and I am pretty sure it will play out again and again in favor of attackers in future attack scenarios.

Taming The Security Weakest Link(s)


The security level of a computerized system is only as good as the security level of its weakest links. If one part is secure and tightened properly while other parts are compromised, then your whole system is compromised, and the compromised parts become your weakest links. The weakest link fits well with the attackers' mindset, which always looks for the path of least resistance to the goal. Third parties present an intrinsic security risk for CISOs and, in general, for any person responsible for the overall security of a system: a risk that is often overlooked due to lack of understanding, and not taken into account in the overall risk assessment beyond a mere mention. To clarify, third party refers to all external entities integrated into yours, hardware and software alike, as well as people who have access to your system and are not under your control.

A simple real-life example makes it less theoretical. Let's say you are building a simple piece of software running on Linux. You use the system C library, which in this case plays the third-party role. If the C library has vulnerabilities, then your software has vulnerabilities. And even if you make your own software bulletproof, that still won't remove the risks associated with the C library, which becomes your software's weakest link.

Zooming out on our imaginary piece of software, you probably already understand that the third-party problem is much bigger than the C library alone: your software also relies on the operating system, other installed third-party libraries, the hardware itself, the networking services, and the list goes on and on. I am not trying to be pessimistic, but this is how it works.

In this post, for the sake of simplicity, I will focus on weakest links driven by application integration, and not on other third parties such as reusable code, non-employees, and others.

Application Integration as a Baseline for 3rd Parties

Application integration has been one of the greatest trends ever in the software industry, enabling the buildup of complex systems based on existing systems and products. Such integration takes many forms depending on the specific context in which it is implemented.

Mobile World

In the mobile world, for example, integration serves mainly ease of use, where apps are integrated into one another by means of sharing or delegation of duty, such as integrating the camera into an image-editing app. iOS has come a long way in this direction with native Facebook and Twitter integration, as well as native sharing capabilities. Android was built from the ground up for such integration with its activity-driven architecture.


Enterprise Systems

In the context of enterprise systems, integration is the lifeblood of business processes, and there are two main forms of it: one-to-one, such as software X "talking" to software Y via a software or network API; and many-to-many, such as software applications "talking" to a middleware, which in turn "talks" to other software applications.


Personal Computers

In the context of a specific computer system, there is also the local integration scenario, which is based on OS-native capabilities such as ActiveX/OLE or dynamic linking to other libraries. Such integration usually serves code reuse, ease of use, and information sharing.


Web Services

In the context of disparate web-based services, the one-to-one API integration paradigm is the main path for building great services fast.


All In All

Of course, the world is not as homogeneous as depicted above. Within the mentioned contexts you can find different forms of integration, which usually depend on the software vendors and existing platforms.

Integration Semantics

Each integration is based on specific semantics, imposed by the interfaces each party exposes to the other. REST APIs, for example, provide a rather straightforward way to understand the semantics, since the interfaces are highly descriptive. The semantics usually dictate the range of actions that can be taken by each party in the integration tango, and the protocol itself enforces those semantics. Native forms of integration between applications are a bit messier than network-based APIs; there is less capability to enforce the semantics, allowing exploits such as in the case of ActiveX integration on Windows, which has been the basis for quite a few attacks. The semantics of integration also include the phase of establishing trust between the integrated parties, and again, implementations vary quite a bit within each context: from a zero-trust case with fully public APIs, such as consuming an RSS feed or running a Google search in an incognito browser, up to a full authentication chain with embedded session tokens.

In the mobile world, where the aim of integration is to increase ease of use, the trust level is quite low: the mobile trust scheme is based mainly on the fact that both integrated applications reside on the same device, such as in the case of sharing, where any app can ask to share via other apps and gets an on-the-fly integration into the sharing apps. The second prominent mobile use case for establishing trust is based on a permission-request mechanism. For example, when an app tries to connect to the Facebook app on your phone, the permission-request mechanism verifies the request independently from within the Facebook app, and once approved, the trusted relationship remains constant by use of a persisted security token. Based on some guidelines, some apps do expire those security tokens, but they last for an extended period. With mobile, the balance remains between maintaining security and annoying the user with too many permission questions.

Attack Vectors In Application Integration

Abuse of My Interfaces

Behind every integration interface, there is a piece of software that implements the exposed capabilities, and as in every piece of software, it is safe to assume there are vulnerabilities just waiting to be discovered and exploited. So the mere act of exposing integration interfaces from your software poses a risk.

Man In The Middle

Every communication between two integrated parties can be attacked by a man in the middle (MitM). A MitM can intercept the communications, but can also alter them, either to disrupt the communications or to exploit a vulnerability on either side of the integration. Of course, security protocols such as TLS/SSL can reduce that risk, but not eliminate it.
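As a minimal sketch of that reduction (Python standard library only; the function name is my own), enabling certificate and hostname verification is what turns a silent interception into a failed handshake:

```python
import socket
import ssl

def open_verified(host, port=443):
    """Open a TLS connection that fails loudly if a MitM presents a bogus cert."""
    ctx = ssl.create_default_context()   # loads system CAs, enables verification
    ctx.check_hostname = True            # explicit, though it is the default
    ctx.verify_mode = ssl.CERT_REQUIRED  # refuse unverified certificates
    raw = socket.create_connection((host, port), timeout=5)
    return ctx.wrap_socket(raw, server_hostname=host)

# Usage (requires network access):
# with open_verified("example.com") as s:
#     print(s.version())
```

An interceptor on the wire would have to present a certificate chaining to a trusted CA for the matching hostname; failing that, `wrap_socket` raises an exception instead of handing the attacker the session.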

Malicious Party

Since we don't have control over the integrated party, it is very difficult to assume it has not been taken over by a malicious actor, who can now do all kinds of things: exploit my vulnerabilities, exploit the data channel by sending harmful or destructive data, or disrupt my service with denial-of-service attacks. Another risk of a malicious (or attacked) party concerns availability: with tight integration, your availability often depends strongly on the integrated parties' availability. The risk posed by a malicious party is amplified by the fact that trust is already established; a trusted party often receives wider access to resources and functionality than a non-trusted party, so the potential for abuse is higher.

Guidelines for Mitigation

There are two challenges in mitigating third-party risks: the first is visibility, which is easier to achieve; the second is what to do about each identified risk, since we don't have full control over the supply chain. The first step is to gain an understanding of which third parties your software relies upon. This is not easy, as you may have visibility only into the first level of integrated parties; in a way this is a recursive problem, but still, the majority of the integrations can be listed. For each integration point, it is worth understanding the interfaces, the method of integration (e.g., over the network, ActiveX), and finally the method of establishing trust. Once you have this list, create a table with four columns:

  • CONTROL – How much control you have over the 3rd party implementation.
  • CONFIDENCE – Confidence in 3rd party security measures.
  • IMPACT – Risk level associated with potential abuse of my interfaces.
  • TRUST – The trust level that must be established between the integrated parties before they communicate with each other.

These four parameters serve as the basis for an overall risk score, where the weight of each parameter is assigned at your discretion and based on your judgment. Once you have the list and have calculated the overall risk for each 3rd party, simply sort it by risk score, and there you have a prioritized list for taming the weakest links.
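The scoring described above can be sketched in a few lines of Python. The weights, the 0-10 scales, and the example parties are all illustrative assumptions; assign your own values based on your judgment, as the text says:

```python
# Assumed weights for the four columns; adjust at your discretion.
WEIGHTS = {"control": 0.2, "confidence": 0.3, "impact": 0.3, "trust": 0.2}

def risk_score(party: dict) -> float:
    """Weighted overall risk on a 0-10 scale.

    Lower CONTROL and CONFIDENCE mean higher risk, so they are inverted;
    higher IMPACT and higher required TRUST contribute to risk directly.
    """
    return (
        WEIGHTS["control"] * (10 - party["control"])
        + WEIGHTS["confidence"] * (10 - party["confidence"])
        + WEIGHTS["impact"] * party["impact"]
        + WEIGHTS["trust"] * party["trust"]
    )

# Hypothetical 3rd parties, scored 0-10 on each column.
parties = [
    {"name": "payment gateway", "control": 2, "confidence": 8, "impact": 9, "trust": 9},
    {"name": "analytics SDK", "control": 1, "confidence": 3, "impact": 5, "trust": 4},
    {"name": "in-house plugin", "control": 9, "confidence": 7, "impact": 6, "trust": 5},
]

# Sort descending by risk: the top of the list is the weakest link.
priorities = sorted(parties, key=risk_score, reverse=True)
for p in priorities:
    print(f"{p['name']}: {risk_score(p):.1f}")
```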

Once you know your priorities, there are things you can do yourself, and there are actions only the owners of the 3rd party components can take, so you need some cooperation. Everything in your control, namely the security of your end of the integration and the trust level imposed between the parties (assuming you control the trust chain and are not merely the consumer in the integration), should be tightened up. For example, reducing the impact your interfaces can have on your system is in your control, as is the patching level of dependent software components. MitM risk can be reduced dramatically by establishing a good trust mechanism and implementing secure communications, but it cannot be completely mitigated. And lastly, fixing problems within an uncontrolled 3rd party is a matter of specifics that can't be elaborated upon theoretically.


The topic of 3rd party security risks is too large to be covered by a single post, and within each specific context the implications vary dramatically. In a way, it is a problem that cannot be solved 100%, due to the lack of full control over the 3rd parties and the lack of visibility into the full implementation chain of their systems. To make it even more complicated, consider that you are only aware of your own 3rd parties; your 3rd parties also have 3rd parties, which in turn have 3rd parties, and so on, so you cannot be fully secure. Still, there is a lot to do even without a clear path to 100% security, and we all know that the harder we make things for attackers, the costlier attacks become for them, which does wonders to weaken their motivation.

Stay safe!

The Emergence of Polymorphic Cyber Defense


Attackers are Stronger Now

The cyber world is witnessing a fast-paced digital arms race between attackers and security defense systems, and 2014 showed everyone that attackers have the upper hand in this match. Attackers are on the rise thanks to growing financial incentives, which motivate a new level of sophisticated attacks that existing defenses are ill-equipped to combat. The fact that almost everything today is connected to the net, together with the ever-growing complexity of software and hardware, turns everyone and everything into a viable target.

For the sake of simplicity, I will focus this post on enterprises as a target for attacks, although the principles described here apply to other domains.

The Complexity of Enterprise IT Has Reached a Tipping Point

In recent decades, enterprise IT achieved great architectural milestones thanks to the lowering costs of hardware and the accelerating pace of technology innovation. This transformation made enterprises utterly dependent on their IT foundation, which is composed of a huge number of software packages from different vendors, operating systems, and devices. Enterprise IT has also become so complicated that gaining a comprehensive view of all the underlying technologies and systems has become an impossible mission. This new level of complexity has its tolls, and one of them is the inability to effectively protect the enterprise's digital assets. Security tools did not evolve at the same pace as IT infrastructure, and as such their coverage is limited, leaving a considerable number of "gaps" waiting to be exploited by hackers.

The Way of the Attacker

Attackers today can craft very advanced attacks quite quickly. The Internet is full of detailed information on how to craft them, with plenty of malicious code to reuse. Attackers usually look for the path of least resistance to their target, and such paths exist today. After reviewing recent APT techniques, some consider them not sophisticated enough; I would argue it is a matter of laziness, not professionalism. Since there are so many easy paths into the enterprise today, why should attackers bother with advanced ones? And I do not think their level of sophistication has, by any means, reached a ceiling that should make enterprises feel more relaxed.

An attack is composed of software components, and to build one the attacker needs to understand the target systems. Since IT has undergone standardization, learning which systems the target enterprise uses and finding their vulnerabilities is quite easy. For example, on every website an attacker can identify the signature of the web server software, investigate it in the lab, and look for common vulnerabilities in that specific software. Even simpler is to look in the CVE database for existing vulnerabilities that have not yet been patched. Another example is Active Directory (AD), the enterprise application that holds all the organizational information. Today it is quite easy to send a malicious document to an employee; once the document is opened, it exploits the employee's Windows machine and looks for a privilege-escalation path into AD. Even the security products and measures applied at the target enterprise can be identified quite easily by attackers, who can then bypass them, leaving no trace of the attack. Although organizations always aim to update their systems with the latest security updates and products, there are still two effective windows of opportunity for attackers:

  • From the moment a vulnerability in specific software is disclosed, through the moment a software patch is engineered, to the point in time the patch is actually applied to the computers running the software. This is the most vulnerable time frame, since the details of the vulnerability are publicly available and there is usually enough time before the target closes the hole, greatly simplifying the attacker's job. Within this time frame attackers can often also find example exploitation code on the internet to reuse.
  • Unknown vulnerabilities in the software or the enterprise architecture that are identified by attackers and used without any disruption or visibility, since the installed security products are not aware of them.

From a historic point of view, the evolution of attacks has usually been tightly coupled with the evolution of the security products they aim to bypass, and with the need to breach specific areas within the target. During my time as VP R&D of Iris Antivirus (20+ years ago) I witnessed a couple of important milestones in this evolution:

High-Level Attacks – Malicious code written in a high-level programming language such as Visual Basic or Java created a convenient platform for attackers to write PORTABLE attacks that can be modified quite easily; precisely because the code is high-level, virus detection becomes very difficult. The Visual Basic attacks also created, as an unintentional side effect, an efficient DISTRIBUTION channel: malicious code delivered via documents. Today this is the main distribution path for malicious code, via HTML documents, Adobe PDF files, or MS Office files.

Polymorphic Viruses – Malicious code that hides from signature-driven detection tools; only at runtime is the code deciphered and executed. Now imagine a single virus serving as the basis for many variants of "hidden" code, and how challenging that is for a regular AV product. Later on, polymorphism evolved into dynamic selection and execution of the "right" code, where the attack connects to a malicious command-and-control server with the parameters of the environment and the server returns adaptive malicious code that fits the task at hand. This can be called runtime polymorphism.
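To see why byte signatures fail against polymorphism, consider this deliberately harmless sketch (the payload is just a label, and XOR encoding stands in for the real cipher a virus would use): the same payload, encoded under different keys, yields entirely different byte patterns on disk, while a tiny decoder restores the original at "runtime":

```python
def xor_encode(payload: bytes, key: bytes) -> bytes:
    """XOR each payload byte with a repeating key (XOR is its own inverse)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

payload = b"HARMLESS-DEMO-PAYLOAD"

# Two "variants" of the same payload, produced with two different keys,
# share no stable byte sequence a signature scanner could match on.
enc1 = xor_encode(payload, b"\x11\x22\x33\x44")
enc2 = xor_encode(payload, b"\xaa\xbb\xcc\xdd")
assert enc1 != enc2

# Yet applying the same XOR again recovers the identical payload at
# "runtime", which is exactly where signature-based detection loses
# and behavioral analysis becomes necessary.
assert xor_encode(enc1, b"\x11\x22\x33\x44") == payload
```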

Both "innovations" were created to evade the main security paradigm that existed back then, namely anti-viruses looking for specific byte signatures of the malicious code. Both new genres of attacks were very successful in challenging the AVs, because signatures became less deterministic. Another major milestone in the evolution of attacks is the notion of code REUSE to create variants of the same attack. Development kits exist that attackers can use as if they were legitimate software developers building something beneficial. The variant phenomenon has competed earnestly with AVs in a cat-and-mouse race for many years, and still does.

State of Security Products

Over the years, security products targeting malicious code have evolved alongside the threats, and the most advanced technology applied to identifying malicious code was, and still is, behavioral analysis. Behavioral analysis refers to the capability to identify specific code execution patterns; it is an evolution of the signature detection paradigm that mainly addresses the challenge of malicious code variants. Behavioral analysis can be applied at runtime on a specific machine, tracing the execution of applications, or offline via a sandbox environment such as FireEye's. The latest development in behavioral analysis is the addition of predictive capabilities, aiming to predict which future execution patterns reflect malicious behavior and which are benign, so attacks can be stopped before any harm is done. Another branch of security products aimed at unknown malicious code belongs to an entirely new category that mimics the air-gap security concept, referred to as containment. Containment products (there are different approaches with different value propositions, but I am generalizing here) run the code inside an isolated environment, and if something goes wrong the production environment is left intact, because the attack was contained in isolation. It is similar to having a '70s mainframe doing containerization, but in your pocket and in a rather seamless manner. And of course the AVs themselves have evolved quite a bit, while their good old signature detection approach still provides value in identifying well-known and rather simplistic attacks.
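A toy sketch of the behavioral idea, under stated assumptions: the call names and the "suspicious" pattern below are invented for illustration, and real products trace actual system calls. The point is that detection keys on an *execution pattern*, so it survives byte-level variants that defeat signatures:

```python
# Assumed, simplified ransomware-like behavior: enumerate files, then
# overwrite, then delete. Any code exhibiting this ordered pattern is
# flagged regardless of what its bytes look like.
SUSPICIOUS_SUBSEQUENCE = ["list_dir", "read_file", "write_file", "delete_file"]

def is_subsequence(pattern: list, trace: list) -> bool:
    """True if pattern appears in order (not necessarily contiguously) in trace."""
    it = iter(trace)
    # `call in it` advances the iterator, enforcing the ordering constraint.
    return all(call in it for call in pattern)

def looks_malicious(trace: list) -> bool:
    return is_subsequence(SUSPICIOUS_SUBSEQUENCE, trace)

benign = ["list_dir", "read_file", "read_file", "close"]
ransomware_like = ["list_dir", "read_file", "write_file", "write_file", "delete_file"]

assert not looks_malicious(benign)
assert looks_malicious(ransomware_like)
```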

So, with all these innovations, how are attackers remaining on top?

  1. As I said, it is quite easy to create new variants of malicious code. It can even be automated, making the entire signature detection industry quite irrelevant. Attackers have countered the signature paradigm simply by generating a huge number of potential malicious signatures.
  2. Attackers are efficient at locating the target’s easy-to-access entry points, both due to the knowledge of systems within the target, and the fact that those systems have vulnerabilities. Some attackers work to uncover new vulnerabilities, which the industry terms zero-day attacks. Most attackers, however, simply wait for new exploits to be published and enjoy the window of opportunity until it is patched.
  3. The human factor plays a serious role here: social engineering and other methods of convincing users to download malicious files are often successful. It is easier to target the CFO with a tempting email carrying a malicious payload than to find your digital path into the accounting server. The CFO usually has credentials for those systems, and often there are even Excel copies of all the salaries on their computer, so it is a much less resistant path to success.

Enter the Polymorphic Defense Era


An emerging and rather exciting security paradigm that seems to be popping up in Israel and Silicon Valley is called polymorphic defense. One of the main anchors of successful attacks is the prior knowledge attackers have about the target: which software and systems are used, the network structure, the specific people and their roles, etc. This knowledge serves as a baseline for targeted attacks across all stages: penetration, persistence, reconnaissance, and the payload itself. To be effective, all of these attack steps require detailed prior knowledge about the target, except for reconnaissance, which complements the external knowledge with dynamically collected internal knowledge. Polymorphic defense aims to undermine this foundation of prior knowledge and make attacks much more difficult to craft.

The idea of defensive polymorphism is borrowed from the attacker's toolbox, where it is used to "hide" malicious code from security products. Combining polymorphism with defense simply means changing the "inners" of the target, where which part to change depends on the implementation and its role in attack creation. These changes are not visible to attackers, making their prior knowledge irrelevant. Such morphing hides the internals of the target architecture so that only trusted sources are aware of them and can operate properly. The "poly" part is the cool factor of this approach: changes to the architecture can be made continuously and on the fly, increasing the attacker's guesswork by orders of magnitude. With polymorphism in place, attackers cannot build effective, repurposable attacks against the protected area. This concept can be applied to many areas of security, depending on the specific target systems and architecture, but it is a revolutionary and refreshing defensive concept in the way it changes the economic equation attackers benefit from today. I also like it because, in a way, it is a proactive approach, not a passive one like many other security approaches.

Polymorphic defenses usually have the following attributes:

  • They are agnostic to the covered attack patterns, which makes them much more resilient.
  • They integrate seamlessly into the environment, since the whole idea is to change the inner parts; the changes must not be apparent to external observers.
  • They make reverse engineering and propagation very difficult, due to the "poly" aspect of the solution.
  • There is always a trusted source, which serves as the basis for the morphing.

The Emerging Category of Polymorphic Defense

The polymorphic defense companies I am aware of are still startups. Here are a few of them:

  • The first company that comes to mind, which takes polymorphism to the extreme, is Morphisec*, an Israeli startup still in stealth mode. Their innovative approach tackles the problem of software exploitation by continuously morphing the inner structures of running applications, which renders all known, and potentially unknown, exploits useless. Their future impact on the industry could be tremendous: the end of the mad race between newly discovered software vulnerabilities and software patching, and much-needed peace of mind regarding unknown software vulnerabilities and attacks.
  • Another highly innovative company that applies polymorphism in a very creative manner is Shape Security. They were the first to coin the term polymorphic defense publicly. Their technology "hides" the inner structure of web pages, which can block many problematic attacks, such as CSRF, that rely on specific known structures within the target web pages.
  • Another very cool company, also out of Israel, is CyActive. CyActive fast-forwards the evolution of malware using bio-inspired algorithms and uses the result as training data for a smart detector that can identify and stop future variants, much like a guard trained on future weapons. Their polymorphic anchor is that they outsmart the attack-variant phenomenon by automatically creating all the possible variants of the malware, thereby increasing the detection rate dramatically.

I suppose there are other emerging startups that tackle security problems with polymorphism. If you are aware of any particularly impressive ones, please let me know, as I would love to update this post with more info on them. :)

*Disclaimer – I have a financial and personal interest in Morphisec, the company mentioned in this post. Anyone interested in connecting with the company, please do not hesitate to send me an email and I would be happy to engage on the matter.


The idea of morphing or randomization as an effective barrier for attackers can be attributed to various academic and commercial developments. To name one commercial example, take the Address Space Layout Randomization (ASLR) concept from operating systems. ASLR targets attacks written to exploit specific addresses in memory, and it breaks their assumptions by moving code around in memory in a rather random manner.
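A toy model of what ASLR accomplishes (the symbol names and offsets are made up, and a real loader does far more): symbols keep their relative offsets inside a module, but the module's base address is randomized on each load, so an exploit that hard-codes absolute addresses from one run misses in the next:

```python
# Relative offsets of symbols inside a module, fixed at link time (made-up values).
SYMBOL_OFFSETS = {"system": 0x4F550, "exit": 0x3C8E0}

def load_module(base: int) -> dict:
    """Map every symbol at the base address the OS randomly picked for this load."""
    return {name: base + off for name, off in SYMBOL_OFFSETS.items()}

# Two loads of the "same" module; the bases stand in for the OS's random choice.
run1 = load_module(0x7F1234561000)
run2 = load_module(0x7F9876543000)

# An exploit that hard-codes an absolute address from run1 misses in run2...
assert run1["system"] != run2["system"]
# ...but the relative layout is preserved, which is also why a single leaked
# pointer can defeat ASLR: offsets between symbols never change.
assert run1["system"] - run1["exit"] == run2["system"] - run2["exit"]
```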

The Future

Polymorphic defense is a general concept that can be applied to many different areas of the IT world; here are some examples off the top of my head:

  • Networks ? Software-defined networking provides a great opportunity for changing the inner-networking topology to deceive attackers and dynamically contain breaches. This can be big!
  • APIs – API protocols can be polymorphic as well, preventing malicious actors from masquerading as legitimate parties or mounting man-in-the-middle attacks.
  • Databases – Database structures can be polymorphic too, so only trusted parties are aware of the dynamic DB schema and others are not.
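As a hypothetical sketch of a "polymorphic API" along these lines (nothing here reflects a real product; the field names, the shared secret, and the HMAC-based derivation are all my own assumptions): both trusted ends derive the same per-session field-name mapping from a shared secret, so traffic captured in one session teaches an eavesdropper nothing about the next:

```python
import hashlib
import hmac

FIELDS = ["account", "amount", "currency"]

def morphed_names(secret: bytes, session_id: bytes) -> dict:
    """Derive an opaque per-session alias for every field name."""
    return {
        f: hmac.new(secret, session_id + f.encode(), hashlib.sha256).hexdigest()[:12]
        for f in FIELDS
    }

def morph(record: dict, mapping: dict) -> dict:
    """Rename a record's fields to their session-specific aliases."""
    return {mapping[k]: v for k, v in record.items()}

def unmorph(record: dict, mapping: dict) -> dict:
    """Invert the mapping; only a holder of the secret can do this."""
    reverse = {v: k for k, v in mapping.items()}
    return {reverse[k]: v for k, v in record.items()}

secret = b"shared-secret"
record = {"account": "12-345", "amount": 100, "currency": "USD"}

m1 = morphed_names(secret, b"session-1")
m2 = morphed_names(secret, b"session-2")

# The wire format changes every session...
assert morph(record, m1) != morph(record, m2)
# ...but trusted parties holding the secret recover the original record.
assert unmorph(morph(record, m1), m1) == record
```

The same trick conceptually extends to the database example: a dynamic schema is just a morphed name mapping held only by trusted parties.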

So, polymorphic defense seems to be a game-changing security trend that could shift the balance between the bad guys and the good guys (and ladies too, of course).

UPDATE Feb 11, 2015: On Reddit I got some valid feedback that this is the same as the MTD concept, Moving Target Defense, and indeed that is right. In my eyes, the main difference is that polymorphism is more generic: it is not specifically about changing location as a means of deception, but about creating many forms of the same thing to deceive attackers. But that is just a matter of personal interpretation.

To Disclose or Not to Disclose, That is The Security Researcher Question

Microsoft and Google are bashing each other over the zero-day exploit in Windows 8.1 that Google disclosed last week, following a 90-day grace period. Disclosing is a broad term when speaking about vulnerabilities and exploits: you can disclose to the public the fact that a vulnerability exists, or you can also disclose how to exploit it, with example source code. There is a big difference between telling the world about a vulnerability and releasing the tool to exploit it, namely the level of risk each alternative creates. In reality, most attacks are based on exploits that have been reported but not yet patched. Disclosing exploit code before a patch is ready to protect the vulnerable software in a way helps the attackers. Of course, the main intention is to help the security officers who want to know where the vulnerability is and how to patch around it temporarily, but we should not forget that public information also falls into the hands of attackers.

Since I have been in Google's position in the past, with the KNOX vulnerability we uncovered at the cyber security labs @ Ben-Gurion University, I can understand them. It is not an easy decision: on one hand, you can't hide such info from the public, while on the other hand you know for sure that the bad guys are just waiting for such "holes" to exploit. Over time I came to understand a few more realities:

  • Even when a company issues a software patch, the risk is not gone: the time window from the moment a patch is ready until it is actually applied on systems can be quite long, and during that time the vulnerability remains available for exploitation.
  • Sometimes vulnerabilities uncover serious issues in the design of the software, and solving them may not be a matter of days. Of course, a small temporary fix can be issued, but a proper, well-thought-out patch that takes into account many different versions and interconnected systems can take much longer to devise.
  • There is a need for an authority to manage the whole exploit disclosure, patching, and deployment life cycle, one that devises a well-accepted policy rather than a one-sided policy such as the one Google Project Zero devised. If the intention eventually is to increase security, it won't work out without the collaboration of software vendors.

I am not privy to the details, but I truly believe Google acted here out of professionalism and not for political reasons against Microsoft.

Google Releases Windows 8.1 Exploit Code – After 90 Days Warning to Microsoft

Google Project Zero debuted with the aim of tackling the vulnerabilities problem by identifying zero-day vulnerabilities, notifying the company that owns the software, and giving it 90 days to solve the problem. After 90 days, they publish the exploit. And they just did exactly that to Microsoft.

I remember quite a while ago when we decided at the cyber labs at Ben-Gurion University to adopt such a policy, following our discovery of a vulnerability in Samsung KNOX. The KNOX vulnerability eventually turned into a Google Android vulnerability, with the help of some political juggling between the two companies. We disclosed the exploit to Google on the 17th of Jan 2014 and got notice that a patch was ready on the 27th of Feb, a response fast enough to justify expecting the same level of service from others. I won't go into how long it takes such a patch to really be applied on users' devices, but at least expecting a patch to be delivered in 90 days is a good start. We eventually did not release the exploit code, because we understood it would take some time until users were protected by the patch, and since the vulnerability was quite serious (a VPN bypass), we decided not to disclose it.

Disclosing the exploit too early is a double-edged sword: on one hand you want the good guys to understand the problem in depth, while on the other hand you put a weapon in the hands of the bad guys, and it is well known that published exploits are heavily used by attackers, who rely on the time window between a patch's publication and its application on systems.

Anyway, I think Project Zero is a good step forward for the security industry!

Counter Attacks – Random Thoughts

The surging number of cyber attacks against companies, and their dire consequences, push companies to the edge. Defensive measures can go only so far in terms of effectiveness, and that assumes they are fully deployed, which is far from the common case. Companies are too slow to react to this new threat, driven by a fast-paced acceleration in attacker sophistication. Today companies are at a weak point. From a CEO's perspective, the options for mitigating this threat are running out, especially considering the addition of state-sponsored attacks to the game, the unclear role of government, and its inability to intervene effectively.

So what can companies do? Attack back.

Attacking the attackers was and is always an option, one kept in people's hearts and perhaps spoken only very quietly, for the very simple reason of legality. Unlike self-defense in the real world, which may allow you to use violence to stop an offender, in the cyber world you can only defend yourself passively and wait for law enforcement to come to the rescue.

What does attacking back actually mean? Many times you don't know where the attack came from or who is behind it, so whom do you attack? It depends on the type of attack and the events that follow it. In many cases, there is a good chance a counter-attack can help minimize the damage or maybe eliminate it. In Sony's case, there was later a counter-attack, allegedly by Sony, trying to disrupt the download of the stolen files. From an offensive point of view, the small number of servers serving the stolen files is a weakness, and it is possible to try to stop them. This is not always the case: if the files reside in another big company's data center, stopping the download may be impossible, and definitely problematic in terms of getting into a fight with another company. So to mount an effective counter-attack you need to find a weakness, and many times that is not difficult. For example, in a phishing attack, the servers holding the pages of the impersonating website are a weakness; taking them down should not be a problem. Another example of attacking back is responding to a DDoS or a spam attack with a counter DDoS or an ultra-spam attack. DDoS, a distributed denial-of-service attack, can be a bit more problematic since it is distributed by nature and runs on many servers, though I can easily imagine a cloud-based elastic service that responds back effectively. The same goes for spam: why can't someone send 100 emails back for each one received? Let's see them handle the volume of incoming mail. Symantec published a very nice technical overview of counter-attacks; although it is from 2006, eight years ago, it is still valid on many points.

The benefits of attacking back are three:

Prevention – To stop an ongoing attack, where leaking the stolen data is, from my point of view, just another lateral step in the same attack. As a side note, many say the attack on Sony has ended (not everyone), but as long as files are still being leaked, it is still going on from my point of view; the bad guys have not been stopped yet. In terms of prevention, even delaying an attack using counter-strikes can be valuable.

Remedy – In cases where the stolen data can be located in a certain place, attacking back to retrieve it, or just delete it, is definitely an option.

Revenge – The sweet taste of revenge, although it doesn't sound very business-savvy, is something that exists because we are all human.

Waiting for government help can take a long time, and this raises the question of what it is moral to do in the meantime, when you are at high risk with no protection, no one can help you, and you can't respond.

Government & the private sector

The problem with governmental intervention has several aspects (just to be clear, it is not that governments don't do anything; on the contrary, they do a lot, but it is far from enough):

  • The government may have more tools and better access to interesting data, but it is still very limited: it enters the picture very late, and it doesn't know the internals of each company's IT, so it faces a steep learning curve and a very short time to respond.
  • Regulation is being discussed, and regulation of required security measures can be effective, but only to some extent. Many times the problem with security in organizations is not whether they have the best tools; most of the holes are created by human error, lack of knowledge, and lack of enforcement. It will take a really long time until regulation has real impact on how companies protect themselves.
  • Integration – To react effectively to an attack, you need to respond as close to real time as possible, when the damage is lower and the chances of finding traces are higher. The only party that can respond at such speed is the organization itself, which controls and knows its IT. The government is not integrated into companies' IT, and as such it cannot be aware of attacks and respond as effectively as required. Needless to say, enterprises are very diverse in their IT architectures, so even assessing a target's security capabilities and weaknesses would take the government a long time. Integration also raises the issue of privacy, which is a separate topic but tightly connected to the question of governments being connected to companies' IT.
  • Reach – The government has limited reach to attackers not residing in the US, naturally, like any other country. Of course, the US has much more control over internet infrastructure, but it is still far from full control.
  • Attribution – Due to the latency of investigations, attributing an attack to someone is difficult for everyone, including the government. In the future I will write a standalone post about attribution, which is a fascinating and very challenging topic in the world of cybercrime.

There is another point to consider regarding government and enterprises: the idea of giving enterprises access to government intelligence systems. Enterprises are naturally limited in their view of attack sources, and maybe the solution is to let the enterprise security team extend its view with the data available to the government. Of course, this requires deep thinking about isolation and privacy, but it is still an option that could be a basis for devising a prompt response.

Of course, as with any other unsolved problem, there are those who try to make money out of it. Several companies or entities offer attack-back services, though due to legality issues they don't stay public for too long. I am pretty sure there are quite a few such services off the radar. I tried to look for open-source counter-attack projects but could not find any, and to be honest, it is surprising there are none.

To summarize, it is a complicated matter, and a lot is going on behind the scenes, so we are far from knowing the full dynamics. But it is definitely something to contemplate.

A small disclaimer: this post does not aim to suggest that people attack back; it is just meant to raise awareness of different aspects of cyber warfare.

Cutting Down North Korea’s Internet

It could be interesting to understand whether cutting North Korea off from the internet was a defensive measure against a huge number of ongoing attacks or just a preventive one.

Cutting off internet access has definitely become another weapon in the US war chest.

The question now is: do other countries have the same power to cut off whole areas?

Internet infrastructure should be evaluated against such an attack vector.

A Tectonic Shift in Superpowers or What Sony Hack Uncovered to Everyone Else

The Sony hack has flooded my news feed in recent weeks: everyone is talking about how it was done, why, whom to blame, the trails that lead to North Korea, and the politics around it. I've been following the story since the first report with an unexplained curiosity, and I was not sure why, since I read about hacks all day long.
A word of explanation about my "weird" habit of continuously following hacks: being the CTO of the Ben-Gurion University Cyber Security Labs comes with responsibility, and part of it is staying on top of things. :)

Later on, the reason for my curiosity became clear to me. As background, for those deep in the security industry it is already well known, although not necessarily spoken out loud, that attackers are pretty far ahead of enterprises in sophistication. The number of reported cyber attacks in the past two years shows a steep upward curve, and if you add perhaps three times as many unreported incidents, anyone can see it is exploding. And although many criticized Sony for poor security measures, I don't think they are the ones to blame. They were caught in a game beyond their league. Beyond any enterprise's league.

The reasons attackers have become way more successful are:

  • They know how to better disguise their attacks, using form-changing techniques (polymorphism) and others.
  • They know the common weaknesses in enterprise IT quite well. You can install almost any piece of software in your lab and look for weaknesses all day long.
  • They have more money to pour into learning the specifics of their targets, and thanks to that they build elaborate, targeted attacks. In the case of state-sponsored attacks, the funds are practically unlimited.
  • Defensive technologies within the enterprise are still dominated by tools invented ten years ago, back when attacks were more naive, if such a thing can be said. Today we are in a big wave of new emerging security technologies that can be much more effective, though enterprises need time to adopt them.
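
The polymorphism point above can be illustrated with a toy sketch. This is a purely illustrative Python example, not any real malware engine: the payload string and both functions are my own invention. It XOR-encodes the same "logic" with a fresh random key each generation, so every copy has a different byte signature while behaving identically once decoded, which is exactly the property that defeats signature-based scanners.

```python
import hashlib
import os

# Stand-in for attack logic; in a real polymorphic engine this would be executable code.
PAYLOAD = b"pretend-malicious-logic"

def mutate(payload: bytes) -> tuple[bytes, bytes]:
    """Produce a new 'variant': the same payload XOR-encoded with a fresh random key."""
    key = os.urandom(len(payload))
    encoded = bytes(p ^ k for p, k in zip(payload, key))
    return encoded, key

def decode(encoded: bytes, key: bytes) -> bytes:
    """Recover the original payload (what a variant's decoder stub would do at run time)."""
    return bytes(c ^ k for c, k in zip(encoded, key))

# Two generations of the same logic look completely different on disk...
v1, k1 = mutate(PAYLOAD)
v2, k2 = mutate(PAYLOAD)
print(hashlib.sha256(v1).hexdigest() == hashlib.sha256(v2).hexdigest())  # almost surely False

# ...yet both decode to identical behavior.
print(decode(v1, k1) == decode(v2, k2) == PAYLOAD)  # True
```

A scanner matching on the bytes of `v1` will miss `v2`, which is why tools built in the signature era lose their grip.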

So it is fair to say that enterprises are, in a way, sitting ducks for targeted attackers, and I am not exaggerating here.

And the Sony story was different from others for two main reasons:

  • The attack allegedly originated from, and was backed by, a specific nation. And I say allegedly because unless you found the evidence on someone's computers you can't be sure, and even then that person could have been framed by the real attackers. Professionals can quite easily cover their traces, and the attackers here are professionals.
  • The results of the attack are devastating, and their publicity turned them into a nightmare for any CEO on earth. A warning sign to the free world.

And Sony, due to their bad luck, got caught in the middle.

Image taken from http://www.politico.com/story/2014/12/no-rules-of-cyber-war-113785.html

The End of Superpowers

From a high-level view, it does not matter whether it was North Korea or not. The fact that such an event happened, in which a state potentially attacked a private company, and that it carried no real ramifications, opens the path for it to happen again and again, and that is what makes it a game-changer. Every nation in the world now understands it has a free ticket to a new playground with different rules of engagement and, more importantly, a different balance of power.

In the physical world, power has always been attributed to the amount of firepower you've got, and naturally the amount of firepower correlates tightly with a nation's economic strength. The US is a superpower. Russia is a superpower. In the cyber world, these rules do not necessarily apply: a small group of very smart people with very simple, cheap tools can wreak havoc on a target. It is not easy, but it is possible. Attackers are often limited only by their creativity. In the cyber world, size matters less.

Our lifestyle and lifeblood have become dependent on IT; our electricity, water, food, defense, entertainment, finance, and almost everything else work only if the underlying IT is functioning properly. Cyberwarfare means attacking the physical world by digital means, and the results can be no less devastating than any other type of attack. They can be worse, since IT also introduces new single points of failure. So if cyberwars can cause harm like real wars, and size matters less, doesn't that mean the rules of the game have changed forever?

Question of Responsibility

As soon as I heard that North Korea might be responsible for the attack, I understood that Sony was caught in an unfair game, and the big question is the role of the government in defending the private sector: how, and to what extent. Going back to the physical world, had a missile been launched from North Korea onto Sony's headquarters, the story and the reaction would have been very different and predictable. The comparison is valid since the damage such a missile could cause the company would probably be smaller from an economic perspective, not taking human casualties into account, of course. I am not saying cyberattacks can't cause casualties; I am just saying this one did not.

So why is the US government's stance different? Why did Sony not ask for help and nationwide defense?

The era of cyber warfare removes the clear distinction between criminal acts and nation-level offensive acts, and a new line of thought must emerge.

So what does the future hold for us?

  • A big wave of cyberattacks coming from everywhere on the globe. The "good" results of this attack will surely provide a sign of hope for everyone who has felt inferior from a military perspective. Attackers always go for the weakest links, so we will see more enterprises attacked like Sony, and more severely. A long, complicated, stealthy war.
  • A big wave of security technologies aiming to solve these problems, coming from both the private and government sectors. Security startups and established players in a way "enjoy" these developments, as the need for new solutions is rising steeply. I personally know some startups in Israel that can take away the current advantage attackers enjoy, with technologies such as polymorphic cyber defense. I will elaborate on that in a future post since it deserves one of its own.
  • A long debate about who is responsible for what and what measures can be taken in the meantime. Cutting off the internet across the globe won't help anyone, since today there are many ways to launch attacks from different geographic places, so location doesn't matter anymore. It won't be easy to create a solution that is effective on the one hand and does not limit the freedom to communicate on the other.

Meanwhile, you can gaze a bit at the emerging battleground.


Taken from a live attacks monitor on IPVKing

What does cross-platform mean?

Cross-platform is tricky. It seems like a small "technical" buzzword, but it is actually one of the biggest challenges for many technology companies, and it looks different to different people inside and outside the organization.

Developer Point of View

It all starts with the fact that applications can potentially target different computing devices. To get more people to use your application, you want it to run on more and more device categories, whether that means different smartphone operating systems or a desktop computer vs. a tablet.

I first met the term cross-platform at my first job as a developer (20 years ago), after I left the army, when we coded an antivirus scanning engine. We built it purely in C to make it compilable and runnable on different desktop and server operating systems, without being aware that we were building a cross-platform product. Today, when you search for the term cross-platform on Google, you find app developers challenged with running their apps on both iOS and Android. The aspiration for a cross-platform code base lies in the economic rationale of write once, run everywhere, instead of developing again and again in each platform's proprietary coding language and standards. Cheaper to develop and easier to maintain.

Sounds easy and good, no? Well, no, even today, after so many years of evolving development tools. The main reason it is not straightforward is the simple fact that each platform, when you go into the details, differs from the others, either in hardware specifications or in operating system capabilities, and at some point you will need a piece of code that is platform-specific.

For example, take iOS and Android: on Android you have the 'back' button and on iOS you don't. To make sure your code behaves "naturally" on Android, you need to add some Android-specific code to handle the 'back' action, while that code will be useless on iOS.
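
As a rough sketch of what such a platform-specific branch looks like in practice (the function name and the platform labels are hypothetical, not a real Android or iOS API), assume mostly shared navigation logic with a single Android-only branch for the system back button:

```python
def back_action(platform: str, nav_stack: list[str]) -> str:
    """Shared 'go back' logic; only one branch is platform-specific."""
    if platform == "android" and len(nav_stack) == 1:
        # Android only: the system back button on the root screen exits the app.
        return "exit-app"
    if len(nav_stack) > 1:
        nav_stack.pop()  # shared behavior: leave the current screen
    return nav_stack[-1]

print(back_action("android", ["home"]))          # exit-app
print(back_action("ios", ["home", "settings"]))  # home
```

The bulk of the function is reusable, but the Android branch still has to live somewhere, and real apps accumulate many such branches.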

Cross-platform tools have evolved quite a bit, for example HTML5-based mobile app development environments. Still, I have never seen a real application built entirely with cross-platform code. There is always a need to tweak something for a specific device or platform; there is no escape from it.

I have always wondered why platform providers (Google, Microsoft, Apple…) never bothered much to support such cross-platform tools; if anything, they seem to make life harder for them. I can understand the rationale of "not helping my competition," though I think at some point the basic fact that no single platform will win all the users sinks in. It may be more productive to apply cooperative strategies rather than only competitive ones. They may indeed lose some developers to other platforms, but at the same time they will win some who switch to theirs, and most importantly it will make developers' lives easier and produce more apps, which ends in a good result for everyone.

QA Guy/Girl Point of View

For the QA team, cross-platform usually means a pain in the neck. First, you need to test across different environments, and life would be so much easier with only one platform. Even supporting a single platform is not easy nowadays due to versioning: iOS is a mild example of the complexity, while Android is catastrophic due to its fragmentation.

The other, more problematic aspect is that developers who work with cross-platform tools somehow shift the responsibility for making sure the results work properly onto the tool itself. As if they did the best they could, complied with whatever was requested of them, and the fact that it does not work is not their responsibility. This state of mind automatically moves the blame to the person who found the bug, hence the QA person. Eventually developers fix whatever is needed, but still, it is not the same dynamic as between a platform-specific developer and a QA person. Maybe it is because the developer cannot practically run all the tests on all the devices before handing over the software, which always leaves some quality gap open.

In general, QA has become highly challenged by the multitude of devices out there, each very different from the others. Previously (a long time ago) you had Microsoft Windows for personal computers and Unix-based servers. Now you have many operating systems, numerous hardware configurations, and an ever-accelerating pace of new OS releases, which does not make life easy, to say the least, for the people who need to ship the software. Now add a cross-platform product to that :)

Product Manager Point of View

The product manager sees cross-platform from a whole different angle, one closer to users' perception. Cross-platform is more about what people do with their devices, when and how they use them, and how the product can adapt itself to the unique device-user context. For example, on a smartphone you might expect "time-wasting" behavior or very efficient, task-oriented behavior for getting something done, whereas tablets tend to be used in more relaxed moments, driving different behavior. The challenge is to really understand how your target audience can and will consume your product on each specific device and platform, and how to adapt each platform-specific version to serve that behavior. Of course, this contradicts the R&D division's basic aspiration to write less platform-specific code.

Marketing Team Point of View

The marketing team sees cross-platform as an opportunity. In a way, they are the only ones who don't see the "burden" and instead enjoy the potential distribution hidden in the rich set of devices out there. More devices, regardless of type, mean more users and consumers, and that means a bigger market. Sometimes each device reflects a specific market segment, which carries the overhead of reaching out to it, as with specific gaming consoles; and sometimes your target market just happens to be diverse in its consumption devices, with users spread across many kinds of devices and platforms, as with smartphone users.

The User Point of View

Users are kings, of course, and they want everything to run everywhere. Today it even seems "not OK" for an application to be available on only one platform; it can read as "laziness" on the provider's part, or a lack of attention to the market. What really spoiled users is the web, which is cross-platform by nature; for users, it is too much to understand why Gmail is available everywhere but their favorite iOS calendar app is not.

And that's OK; they should not be bothered by that, as they are kings.

Consumers to Enterprise – The Investment Rationale Cycle

Today the hottest thing in new startup investments is "enterprise" startups, and for someone old like me it gives a déjà vu kind of feeling. Investments seem to behave cyclically, where the first field of growth is always consumer products. In consumer products, innovation is limited only by imagination. After a phase of massive investment in "consumer," there is a stage where a big portion of the portfolios hits the roadblock of "how the hell do we monetize this and make a big business out of it." And then everyone flocks to "enterprise"-driven innovation, where the money issue is seemingly "solved" and innovation is restricted mostly to the imagination of the customers, not the innovators: a short-sighted and extremely pragmatic kind of imagination, driven by enterprise CIOs. This last shift usually breaks the "spirit" of free-thinking entrepreneurs, which often makes the whole wave end in a bust, and then it's back to step #1!

The dark side of Android fragmentation

One of the main problems with Android for app developers weighing Android vs. iOS is that it is highly fragmented. On iOS you know, almost unconsciously, that you only need to build one version (to keep the example simple) and it will work on all devices; you know Apple does everything to make sure everyone has the latest version and that there is a decent level of backward compatibility. For Android developers things turned out differently: due to the way Android is "openly" distributed, you cannot rest assured that your app will run the same way, or even run at all, on your users' devices. Incompatible Android versions, devices with different capabilities, OEM customizations, and plain third-party OS customizations make each Android device different from the others, and that is usually a bad sign for developers. This infographic says it all.

Android fragmentation has been discussed and acknowledged quite thoroughly in the industry, and that is not what I want to uncover here. The aspect of fragmentation that has been neglected and left out of the discussion, but has no lesser impact on the apps industry, is the variety of screen dimensions, resolutions, and input capabilities. This variety of input and output capabilities does not really affect you if you are developing apps with minimal user interaction, though it is quite hard to come up with such an example in today's tablet/smartphone world. Most apps today are intensively focused on user experience and, needless to say, judged entirely by that experience. Designing and developing a good user experience for a single target device, with known screen and input capabilities, is feasible; take into account even one more device category and you are in serious trouble.

From a philosophical perspective, I think a "natural law" can be suggested for how good an app's user experience will be based on how many target platforms it addresses: the more platforms targeted, the worse the experience becomes.

A few thoughts regarding this dark side of fragmentation:

  • Web technologies ease the pain a bit, since they allow a clean separation of logic and presentation, where the cost of customization for additional platforms is marginal. Of course, for some app categories it is not an option, since the experience has to be so immersive that the browser as a container is too restrictive.
  • Gaming and content-driven apps suffer the most here.
  • Google tries to minimize this by creating guidelines and removing the most problematic customizable "edges" from its user interface libraries, but I think the problem is more deeply rooted than that.
  • If someone feels a déjà vu with Java ME, I got it too :)

One practical suggestion for Google to help developers decide whether to target Android is twofold: first, admit the problem. Second, provide a live decision-making tool that lets developers filter the existing user base by device (or whatever info Google has from activations).
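
A minimal sketch of what such a decision-making tool could compute, with all device names and counts made up purely for illustration: given activation records, find the smallest set of device configurations a developer must test to cover a target share of users.

```python
from collections import Counter

# Hypothetical activation records: (model, os_version, resolution), one per active user.
activations = [
    ("Galaxy S", "2.3", "480x800"),
    ("Galaxy S", "2.3", "480x800"),
    ("Nexus One", "2.2", "480x800"),
    ("Desire HD", "2.2", "480x800"),
    ("Galaxy Tab", "2.2", "600x1024"),
]

def coverage(records, share):
    """Return the smallest set of device configurations covering `share` of users."""
    counts = Counter(records).most_common()  # most popular configurations first
    total = len(records)
    picked, covered = [], 0
    for config, n in counts:
        picked.append(config)
        covered += n
        if covered / total >= share:
            break
    return picked

# The configurations a developer should prioritize to reach 60% of this user base.
print(coverage(activations, 0.6))
```

With real activation data behind it, the same query would let a developer trade off testing effort against user coverage before committing to the platform.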

Will voice replace the touch interfaces on mobiles?

Siri has apparently started a revolution, at least public-relations-wise, since voice activation has been around for quite a while but never seemed to work well. It seems people like to talk to her, and she responds. A few in the industry have written on the impact the new voice interaction paradigm might create: Gigaom discusses the potential loss of mobile ad revenues, and Bloomberg reports on Siri doubling data volumes. Voice indeed seems like a killer interface at first glance, since it is more natural to operate once it works well. Of course, the tolerance for errors is much lower than with touch, and it can really drive you mad, but it seems the technological conditions are set for a good working model.

Still, the question arises whether in the future we will only talk to our devices and never touch them. Before touch we clicked on things, and when touch matured into a well-working technology we embraced it without a second thought. Old Nokia phones (apologies to those reading this who still own one :) now seem almost "ancient," the way dial phones seemed to those who started using touch-tone phones back in the previous century. Voice indeed holds such a promise: we can blurt whatever we want at our phones, and our wishes will be fulfilled automagically. Let's list the cool things we might do with our phones if they were fully voice-activated:

  • Deferred talks: you can actually talk to someone without them being on the other end of the line, and this "talk" will be transferred digitally as a textual message to the other side, either immediately or based on some precondition, for example on a birthday.
  • Activating apps by voice: if apps had a voice-based interface, we could do anything we want just by voice. For example: "Alarm, wake me up tomorrow at 7 am, OK?"
  • Replying to incoming messages by voice, without opening the device, reading the message, clicking reply, tediously typing out the text, and clicking send.
  • Operating the phone's basic functionality: for example, a "silent!" shout at a ringing phone could be something really nice.
  • Authentication by voice patterns.
  • Unlocking the phone by voice: the common check-up we do on phones, where we open the lock screen and see the status of emails, tweets, Facebook, and other dashboard data, could be done with a single phrase like "What's up?"

And on and on…

So it does look promising, but will it replace touch? One of the inherent attributes of touch interfaces and mouse-based graphical interfaces is the ability to interact in two dimensions. Interacting in two dimensions gives direct access to the available data and actions, while voice, due to its serial nature, is limited in this respect: a difference like the one between tape cassettes and CDs, where there is no need to fast-forward. This difference confines voice-based interaction to a much more limited scope, where it cannot replace the rich experience created by visual and touch interaction. Still, there is one area where I am sure it will be a welcome replacement: the serial processes we run on the phone itself through our rich touch interface. For instance, typing text. I hate it, especially on touch phones; I have big fingers, and I wish I could dictate with good accuracy. It does not have to be perfect, since I make enough mistakes typing on my touch keyboard, so I have some tolerance. Maybe a combination of the two would make a perfect match. Another area would be changing modes or states on the phone, where the touch experience has limited value, for example unlocking the phone.

Another major fault of voice interaction is correcting errors, a by-product of serial vs. direct-access interfaces. When you need to fix something you said, you get into trouble, just like in real life with people :)

So what do you think, will voice make us all look back at touch interfaces as old and dirty?