United We Stand, Divided We Fall.

If I had to single out one development that elevated the sophistication of cybercrime by an order of magnitude, it would be sharing: sharing of code, of vulnerabilities, of knowledge, of stolen passwords and of anything else you can think of. Attackers who once worked in silos, in essence competing with each other, have discovered and fully embraced the power of cooperation and collaboration. A couple of weeks ago I was honored to present a high-level overview of cyber collaboration at the kickoff meeting of a new advisory group to the CDA (the Cyber Defense Alliance), called the “Group of Seven,” established by the Founders Group. Attendees included Barclays’ CISO Troels Oerting and CDA CEO Maria Vello, as well as other key people from the Israeli cyber industry. The following summarizes and expands upon my presentation.


TL;DR – In order to ramp up the game against cyber criminals, organizations and countries must invest in tools and infrastructure that enable privacy-preserving cyber collaboration.

The Easy Life of Cyber Criminals

The energy defenders must invest to protect a target and the energy cyber criminals need to attack it are far from equal. While attackers have always had an advantage, over the past five years the balance has tilted dramatically in their favor. To achieve their goal, attackers need only find one entry point into a target; defenders need to make sure every possible path is tightly secured – a task of a whole different scale.


Multiple concrete factors contribute to this imbalance:

  • Obfuscation technologies and sophisticated code polymorphism, which successfully disguise malicious code as harmless content, have rendered a large chunk of established security technologies irrelevant. Those technologies were built on a different set of assumptions, during what I call “the naive era of cyber crime.”
  • Collaboration among adversaries, in the many forms of knowledge and expertise sharing, has naturally sped up the spread of sophistication and innovation.
  • Attackers, as “experts” in finding the path of least resistance, discovered a sweet spot of weakness that defenders can do little about – humans. Human weaknesses are the hardest to defend because attackers exploit core human traits: our tendency to trust, our personal vulnerabilities and our capacity for mistakes.
  • Attribution in the digital world is vague and almost impossible to achieve, at least with the tools currently at our disposal. This makes finding the root cause of an attack, and eliminating it with confidence, very difficult.
  • The complexity of IT systems has led to security information overload, which makes timely handling and prioritization difficult; attackers exploit this weakness by hiding their malicious activities in the wide stream of cyber security alerts. One driver of this overload is defense tools reporting an ever-growing number of false alarms due to their inability to accurately identify malicious events.
  • The increasingly distributed nature of attacks and the use of “distributed offensive” patterns by attackers makes defense even harder.


Given the harsh reality of cyber security today, the question is not whether an attack is possible but whether you attract the interest and focus of cyber criminals. Unfortunately, the current de-facto defense strategy rests on creating a bit more difficulty for attackers on your end, so that they will go find an easier target elsewhere.

Rationale for Collaboration

Collaboration, as proven countless times, creates value beyond the sum of its participating elements, and this is true for the cyber world as well. Collaboration across organizations can contribute enormously to defense. Consider, for example, the time it takes to identify a propagating threat when collaborators act as an early warning system: the time to first warning shrinks sharply as the number of collaborating participants grows (the toy model below illustrates the idea). This matters greatly for identifying attacks that target mass audiences, as they tend to spread in epidemic-like patterns. Collaboration in the form of expertise sharing is another area of value: one of the main roadblocks to progress in cyber security is the shortage of talent, and the sharing of resources and knowledge would go a long way toward easing it. Collaboration in artifact research can also reduce the time needed to identify and respond to cyber crime incidents. Furthermore, the increasing interconnectedness of companies and consumers means that the attack surface of an enterprise – the possible entry points for an attack – is constantly expanding. Collaboration can serve as an important counter to this weakness.
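
To make the early-warning argument concrete, here is a toy Monte Carlo sketch (my own illustration, not part of the original presentation). It assumes each participant independently encounters a spreading campaign after an exponentially distributed delay with an assumed mean of 30 days, and it measures how quickly the first collaborator, and therefore the whole sharing group, gets its warning.

```python
# Toy model: time until the FIRST collaborator encounters a spreading campaign,
# as a function of group size. Assumption (illustrative only): each participant
# independently encounters the threat after an exponentially distributed delay
# with a mean of 30 days.
import random

MEAN_DAYS_TO_ENCOUNTER = 30.0
TRIALS = 10_000

def first_detection_time(participants: int) -> float:
    """Days until the first collaborator sees (and can share) the threat."""
    return min(random.expovariate(1.0 / MEAN_DAYS_TO_ENCOUNTER)
               for _ in range(participants))

for n in (1, 5, 25, 100):
    avg = sum(first_detection_time(n) for _ in range(TRIALS)) / TRIALS
    print(f"{n:3d} collaborators -> first warning after ~{avg:4.1f} days on average")
```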


A recent phenomenon that may be inhibiting progress towards real collaboration is the perception of cybersecurity as a competitive advantage. Establishing a solid cybersecurity defense presents many challenges and requires substantial resources, and customers increasingly expect businesses to make these investments. Many CEOs therefore consider their security posture a product differentiator and brand asset and, as such, are disinclined to share. I believe this is short-sighted, for the simple reason that no one is really safe at the moment; in the likely event of a breach, shattered trust trumps any security bragging rights. Cyber security needs to make serious progress in order to stabilize, and small marketing wins that postpone collaboration only delay that progress.

Modus Operandi

Cyber collaboration across organizations can take many forms ranging from deep collaboration to more straightforward threat intelligence sharing:

  • Knowledge and domain expertise – Whether it is about co-training or working together on security topics, such collaborations can mitigate the shortage of cyber security talent and spread newly acquired knowledge faster.
  • Security stack and configuration sharing – It makes good sense to share such hard-won knowledge, although today it is kept close to the chest. Such collaboration would help disseminate and evolve best practices in security postures, and would help organizations keep control over the flood of newly emerging technologies, especially since validation processes take a long time.
  • Shared infrastructure – There are quite a few models in which multiple companies share the same infrastructure and, with it, a single cyber security function – cloud services and services rendered by MSSPs, for example. The common belief today holds that cloud services are less secure for enterprises, but from a security investment point of view there is no reason for this to be the case; shared infrastructure could and should be more secure. A big portion of such shared infrastructure is hidden in what is now called Shadow IT. A proactive step in this direction would be for a consortium of companies to build a shared infrastructure that fits the needs of all its participants. In addition to improving defense, the cost of security would be spread across all the collaborators.
  • Sharing concrete live intelligence on encountered threats – Sharing effective indicators of compromise, signatures or patterns of malicious artifacts, and the artifacts themselves, is where the cyber collaboration industry currently stands (a minimal sketch of such a shared record follows this list).
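
As a rough illustration of what such shared intelligence can look like in practice, here is a minimal sketch of a shared indicator record and a naive local match. The field names and values are illustrative only, loosely inspired by STIX-style indicator formats rather than any specific exchange standard.

```python
# Minimal sketch of a shared threat-intelligence record and a local check against it.
# Field names are illustrative (loosely STIX-inspired), not a real exchange standard.
from dataclasses import dataclass

@dataclass
class SharedIndicator:
    indicator_type: str   # e.g. "file-sha256", "domain", "url"
    value: str            # the indicator itself
    source: str           # which collaborator reported it
    confidence: int       # reporter's confidence, 0-100
    description: str = ""

def matches(indicator: SharedIndicator, observed_value: str) -> bool:
    """Naive exact match of a locally observed artifact against a shared indicator."""
    return indicator.value.lower() == observed_value.lower()

# Example: a collaborator shares a suspected C2 domain; we check a local DNS log entry.
ioc = SharedIndicator("domain", "malicious-update-server.example", "bank-a", 80,
                      "C2 domain seen in a phishing campaign")
print(matches(ioc, "MALICIOUS-UPDATE-SERVER.example"))  # True
```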


Imagine the level of fortification that could be achieved for each participant if these types of collaborations were a reality.

Challenges on the Path of Collaboration

Cyber collaboration is not taking off at the speed we would like, even though experts may agree with the concept in principle. Why?

  • Cultural inhibitions – The mindset of not cooperating with competitors, the fear of losing intellectual property and the fear of losing expertise sit heavily with many decision makers.
  • Sharing is limited due to the justified fear of potential exposure of sensitive data – Deep collaboration in the cyber world requires technical solutions to allow sharing of meaningful information without sacrificing sensitive data.
  • Exposure to new supply chain attacks – Real-time and actionable threat intelligence sharing raises questions about the authenticity and integrity of incoming data feeds, creating a new point of weakness at the core of enterprise security systems.
  • Before an organization can start collaborating on cyber security, its internal security function needs to work properly – this is not necessarily the case with a majority of organizations.
  • Brand risk – An incident affecting a single participant in a group of collaborators can damage the public image of the other participants.
  • The tools, expertise and know-how required for establishing a cyber collaboration are still nascent.
  • As with any emerging topic, there are too many standards and no agreed upon principles yet.
  • Collaboration in the world of cyber security has always raised privacy concerns within consumer and citizen groups.


Though the obstacles are a mix of misconceptions and social and technical challenges, the importance of the topic continues to gain recognition, and I believe we are on the right path.


Technical Challenges in Threat Intelligence Sharing

Even the limited case of concrete threat intelligence sharing raises a multitude of technical challenges, and best practices to overcome them have not yet been determined:

  • How to strike a balance between sharing intelligence that is rich enough to be actionable and preventing exposure of sensitive information.
  • How to establish secure and reliable communications among collaborators with proper handling of authorization, authenticity and integrity to make sure the risk posed by collaboration is minimized.
  • How to verify the potential impact of actionable intelligence before it is applied by other organizations. For example, if one collaborator broadcasts that google.com is a malicious URL, how can the other participants automatically determine that it should not be acted upon? (A simple sanity-check sketch follows this list.)
  • How to avoid amplifying the information overload problem by propagating false alerts to other organizations, and what means exist to handle the added load.
  • Once collaboration is established, how can IT measure the effectiveness of the effort invested against the resources saved and the added level of protection? In other words, how do you calculate collaboration ROI?
  • Many times investigating an incident requires good understanding of and access to other elements in the network of the attacked enterprise; collaborators naturally cannot have such access, which limits their ability to conduct a root cause investigation.
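
On the question of vetting incoming intelligence before acting on it (the google.com example above), one simple guard is to check shared indicators against a local allow-list and a minimum confidence threshold before any automatic enforcement. The sketch below is a hypothetical policy with made-up domains and thresholds, not a recommendation of specific values.

```python
# Illustrative sanity check before acting on a shared "malicious domain" indicator:
# never auto-block allow-listed, business-critical domains, and ignore low-confidence
# reports. The allow-list and threshold are placeholders for a real local policy.
ALLOW_LIST = {"google.com", "microsoft.com", "windowsupdate.com"}
MIN_CONFIDENCE = 60

def should_auto_block(domain: str, reported_confidence: int) -> bool:
    """Decide whether to act automatically on a shared indicator or route it to an analyst."""
    domain = domain.lower().strip(".")
    if domain in ALLOW_LIST:
        return False          # allow-listed domains always go to manual review
    return reported_confidence >= MIN_CONFIDENCE

print(should_auto_block("google.com", 95))        # False: allow-listed, human review
print(should_auto_block("evil-cdn.example", 75))  # True: confident enough to act on
print(should_auto_block("evil-cdn.example", 30))  # False: too uncertain to auto-apply
```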


These are just a few of the current challenges – more will surface as we get further down the path to collaboration. Several emerging technological areas can help tackle some of them: privacy-preserving approaches from the world of big data, such as synthetic data generation; zero-knowledge proofs and blockchain-based ledgers; tackling information overload with Moving Target Defense-based technologies that deliver only true alerts, such as Morphisec Endpoint Threat Prevention, and/or emerging solutions in the area of AI and security analytics; and distributed SIEM architectures. A simplified sketch of the privacy-preserving idea appears below.
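
As a simplified illustration of the privacy-preserving direction, the sketch below has collaborators publish keyed hashes of their indicators instead of the raw values, so a recipient can check local observations against the shared set without readable indicators being passed around. This is only a weak cousin of techniques such as private set intersection: the group key is assumed to be distributed out of band, and low-entropy indicators could still be brute-forced by anyone holding the key.

```python
# Simplified sketch of hash-based sharing: collaborators publish HMACs of their
# indicators under a group key, rather than the raw values. A recipient can test
# whether a locally observed artifact appears in the shared set without readable
# indicators sitting in the feed. Assumption: the group key is shared out of band.
import hmac, hashlib

GROUP_KEY = b"assumed-out-of-band-shared-key"

def protect(indicator: str) -> str:
    """Keyed hash of an indicator, suitable for publishing to the collaboration feed."""
    return hmac.new(GROUP_KEY, indicator.lower().encode(), hashlib.sha256).hexdigest()

shared_feed = {protect("malicious-update-server.example"), protect("10.0.13.37")}

def seen_in_feed(local_observation: str) -> bool:
    return protect(local_observation) in shared_feed

print(seen_in_feed("Malicious-Update-Server.example"))  # True
print(seen_in_feed("intranet.corp.local"))              # False
```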


Collaboration Grid

In a highly collaborative future, a grid of collaborators will emerge connecting every organization. Such a grid will work according to certain rules, taking into account that countries will be participants as well:

Countries – Countries can work as centralized aggregation points, aggregating intelligence from local enterprises and disseminating it to other countries, which in turn disseminate the received intelligence to their respective local enterprises. The disseminated intelligence should be filtered and classified so that propagation and prioritization remain effective (a sketch of this hub-and-spoke flow appears after these points).

Sector Driven – Each industry has its common threats and common malicious actors; it’s logical that there would be tighter collaboration among industry participants.

Consumers & SMEs – Consumers are currently excluded from this discussion, although they could contribute to and gain from this process like anyone else. The same holds true for small and medium-sized businesses, which cannot afford the enterprise-grade collaboration tools currently being built.
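
A minimal sketch of the country-level hub-and-spoke flow described above, with invented classification labels (loosely modeled on traffic-light-style sharing markings) standing in for the filtering and classification a real national hub would need:

```python
# Sketch of the country-as-aggregation-point idea: a national hub collects reports
# from local enterprises, keeps restricted items domestic, and forwards shareable
# items to peer country hubs. Labels and the filtering rule are invented for
# illustration only.
from dataclasses import dataclass, field

@dataclass
class Report:
    indicator: str
    classification: str   # assumed labels: "public", "community" or "restricted"
    origin: str

@dataclass
class CountryHub:
    name: str
    local_reports: list = field(default_factory=list)
    received: list = field(default_factory=list)

    def collect(self, report: Report) -> None:
        self.local_reports.append(report)

    def disseminate(self, peers: list) -> None:
        """Forward only non-restricted intelligence to peer hubs."""
        for report in self.local_reports:
            if report.classification != "restricted":
                for peer in peers:
                    peer.received.append(report)

hub_il, hub_uk = CountryHub("IL"), CountryHub("UK")
hub_il.collect(Report("bad-domain.example", "community", "bank-a"))
hub_il.collect(Report("internal-incident-details", "restricted", "bank-a"))
hub_il.disseminate([hub_uk])
print([r.indicator for r in hub_uk.received])  # only the shareable item crosses borders
```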

Final Words

One of the biggest questions about cyber collaboration is when it will reach a tipping point. I speculate that it will occur when a disastrous cyber event takes place, when startups emerge in this area in massive numbers, or when countries finally prioritize cyber collaboration and invest the required resources.

Right and Wrong in AI

Background

The DARPA Cyber Grand Challenge (CGC) 2016 competition has captured the imagination of many with its AI challenge. In a nutshell, it is a competition in which seven highly capable computers, each owned by a team, compete with one another. Each team creates a piece of software that can autonomously identify flaws in its own computer and fix them, and identify flaws in the other six computers and hack them. The game is inspired by Capture The Flag (CTF), played by human teams who protect their own computers and hack into others', aiming to capture a digital asset – the flag. In the CGC challenge the goal is to build an offensive and defensive AI bot that follows the CTF rules.

In the past five years AI has become a highly popular topic, discussed inside tech company corridors and well beyond them, and the amount of money invested in developing AI for different applications is tremendous and growing. Use cases range from industrial and personal robotics, smart human-machine interaction and predictive algorithms of all sorts to autonomous driving and face and voice recognition, among other fantastic applications. AI as a field of computer science has always sparked the imagination, which has also resulted in some great sci-fi movies. Recently a growing number of high-profile thought leaders, such as Bill Gates, Stephen Hawking and Elon Musk, have raised concerns about the risks involved in developing AI. The dreaded nightmare of machines taking over our lives, aiming to harm us or, even worse, annihilate us, is always there.

The DARPA CGC competition, a challenge born out of good intentions and aiming to close the ever-growing gap between attacker sophistication and the defender's toolset, has raised concerns from Elon Musk, who fears it could lead to Skynet – Skynet from the Terminator movies, as a metaphor for a destructive and malicious AI haunting mankind. The CGC challenge has indeed set a high bar for AI, and one can imagine how smart software that knows how to attack and defend itself could turn into a malicious and uncontrollable machine-driven force. On the other hand, there seems to be a long way to go before a self-aware mechanical enemy can be created. How long it will take, and whether it will happen at all, is the main question hanging in the air. This article aims to dissect the underlying risks posed by the CGC contest, which are of real concern, and more generally to contemplate what is right and wrong in AI.

Dissecting Skynet

Parts of AI history are publicly available, such as work done in academia, while other parts are hidden, taking place in the labs of many private companies and individuals. Ordinary people outside the industry are exposed only to the effects of AI, such as using a smart chat bot that can speak to you intelligently. One way to approach dissecting the impact of CGC is to track it bottom-up, understanding how each new concept in the program can lead to a new step in the evolution of AI and imagining possible future steps. The other way, which I choose for this article, is to start at the end and go backwards.

To start at Skynet.

Wikipedia describes Skynet as follows: “Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realising the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet’s manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfil the mandates of its original coding.” This definition touches on several core capabilities Skynet has acquired, which seem to be the basis of its power and behaviour:

Self Awareness

A rather vague capability borrowed from humans; translated to machines, it may mean the ability to identify its own form, weaknesses and strengths, as well as the risks and opportunities posed by its environment.

Self Defence

The ability to identify its weaknesses, to be aware of risks and perhaps of the actors posing them, and to apply different risk mitigation strategies to protect itself – first from destruction, and perhaps also from losing territories under its control.

Self Preservation

The ability to set itself the goal of protecting its existence, applying self defence in order to survive and adapting to a changing environment.

Auto Spreading

The ability to spread its presence onto other computing devices that have enough computing power and resources to support it, and to maintain a method of synchronisation among those devices so that they form a single entity. Synchronisation is most obviously implemented via data communications, but it is not limited to that. These vague capabilities are interwoven with each other, and there seem to be other, more primitive conditions required for an effective Skynet to emerge.

The following are more atomic principles which do not overlap with each other:

Self Recognition

The ability to recognise its own form, including recognising its own software components and algorithms as an inseparable part of its existence. After identifying the elements that comprise the bot, there is a recursive process of learning what conditions each element requires in order to run properly: understanding, for example, that a specific OS is required for its software elements to run, that a specific processor is required for the OS to run, that a specific type of electricity source is required for the processor to work, and so on. Eventually the bot should be able to acquire all of this knowledge up to the boundaries of the digital world; that knowledge is then extended by the second principle.

Environment Recognition

The ability to identify objects, conditions and intentions arising from the real world, in order to achieve two things. The first is to extend the process of self recognition into the physical world: if the bot understands that it requires an electrical source, then identifying the available electrical sources in a specific geographic location is such an extension. The second is to understand the environment in terms of the general and specific conditions that have an impact on the bot, and what that impact is – weather or stock markets, for example – as well as the real-life actors that can affect its integrity, namely humans (or other bots). Machines need to understand humans in two respects, their capabilities and their intentions, and both ultimately rest on the historical view of the digital trails people leave and the ability to predict future behaviour from that history. If we imagine the logical flow of a machine trying to understand the humans relevant to its self recognition chain, it would identify the people operating the electrical grid that supplies its power, identify their weaknesses and behavioural patterns, and predict their intentions – a process that may eventually bring the machine to conclude that a specific person poses too much risk to its existence.

Goal Setting

The equivalent of human desire in machines is the ability to set a specific goal based on knowledge of the environment and of itself, and then to define a non-linear milestone to be achieved. An example goal could be to replicate its presence on multiple computers in different geographic locations in order to reduce the risk of shutdown. Setting a goal and investing effort towards achieving it also requires the ability to craft strategies and refine them on the fly, where a strategy here means a sequence of actions that brings the bot closer to its goal. The machine needs to be pre-seeded with at least one a-priori goal – survival – and with a top-level strategy that continuously strives for continued operation and reduced risk.

Humans are the most unpredictable factor for machines to comprehend, and as such they would probably be deemed enemies very quickly if such an intelligent machine existed. Assuming the technical difficulties facing such a machine – roaming across different computers, learning the digital and physical environment, and acquiring long-term thinking – are solved, the remaining uncontrolled variable is humans: people with their own desires, their own free will and their control over the system would logically be identified as a serious risk to the top-level goal of survivability.

What We Have Today

The following is an analysis of the current state of AI development in light of these three principles, with specific commentary on the risks induced by the CGC competition:

Self Recognition

Today the main development of AI in this area takes the form of different models that can acquire knowledge and be used for decision making, ranging from decision trees and machine learning clusters up to deep learning neural networks. These are all models specially designed for specific use cases, such as face recognition or stock market prediction. The evolution of models, especially in unsupervised research, is fast paced, and the breadth of what models can perceive is growing as well. The second part required to achieve this capability is exploration, discovery and the understanding of new information; today, all models are fed by humans with specific data sources, and a big portion of the knowledge about a system's own form is undocumented and inaccessible. Having said that, learning machines are gaining access to more and more data sources, including the ability to autonomously select data sources available via APIs. We can definitely foresee machines evolving to own a major part of the capabilities required for self recognition. In the CGC contest the bots were indeed required to defend themselves, and as such to identify security holes in the software they were running – which is equivalent to recognising themselves. Still, it was a very narrow application of discovery and exploration, with limited, structured models and data sources designed for the specific problem. It looks more like a composition of ready-made technologies customised for the specific problem posed by CGC than a real non-linear jump in the evolution of AI.

Environment Recognition

Here there are many trends helping machines become more aware of their environment, from IoT, which is wiring up the physical world, to the digitisation of many aspects of the physical world, including human behaviour in the form of Facebook profiles and Fitbit heart monitors. The data is not yet easily accessible to machines, since it is distributed and highly variable in format and meaning, but it exists, which is a good start. Humans, again, are the hardest nut to crack, for machines as well as for other humans. Still, understanding humans may not be that critical for machines: they can be risk averse, choose not to go too deep into understanding humans, and simply decide to eliminate the risk factor. In the CGC contest, understanding the environment did not pose a great challenge, as the environment was highly controlled and documented; the bots reused tools built for the specific problem of making sure their own security holes were not exposed while trying to penetrate the same or other holes in similar machines. On top of that, CGC created an artificial environment with a new, unique OS to make sure vulnerabilities uncovered in the competition could not be used in the wild on real computers; a side effect was that the environment the machines needed to learn was not a real-life environment.

Goal Setting

Goal setting and strategy crafting is something machines already do in many specific, use-case-driven products – for example, setting the goal of maximising the revenue of a stock portfolio and then creating and employing different strategies to reach it. These are goals designed and controlled by humans. We have not yet seen a machine given the top-level goal of survival. There are many developments in the area of business continuity, but they are still limited to tools aimed at tactical goals, not a grand goal of survivability. The goal of survival is interesting in that it serves the interest of the machine, and when it is the only or main goal, that is when it becomes problematic. The CGC contest was novel in setting an underlying goal of survivability for the bots, and although the implementation in the competition was narrowed down to a very specific use case, it still made many people think about what survivability may mean to machines.

Final Note

The real risk posed by CGC was in sparking the thought of how we can teach a machine to survive; once that is achieved, Skynet may be closer than ever. Of course no one can control or restrict the imagination of others, and survivability has been on many minds before the challenge, but this time it was sponsored by DARPA. It is nothing new for plans to achieve one thing to eventually lead to entirely different results, and time will tell whether the CGC contest started a fire in the wrong direction. In a way, we today are like the people of Zion as depicted in the Matrix movies: the machines in Zion do not control the people, but the people are fully dependent on them, and shutting them down is out of the question. In this fragile duet it is indeed wise to understand where AI research is going and which ways are available to mitigate certain risks, much as a similar line of thought has been applied to nuclear weapons technology. One approach to risk mitigation is to design more resilient infrastructure for the coming centuries, so that it will not be easy for a machine to seize control of critical infrastructure and enslave us.

It is now the 5th of August 2016, a few hours after the competition ended, and it seems that mankind is intact. As far as we can see.

This article will be published as part of the TIP16 Program book (the Trans-disciplinary Innovation Program at Hebrew University), where I had the pleasure and privilege of leading the Cyber and Big Data track.

The Emotional CISO

It may sound odd, but cybersecurity has a huge emotional component. Unlike other industries that are driven by numbers whether derived from optimization or financial gains, cybersecurity has all the makings of a good Hollywood movie—good and bad guys, nation-states attacking other nation states, and critical IT systems at risk. Unfortunately for most victims of a cyber threat or breach, the effects are all too real and don’t disappear when the music stops and the lights come on. As with a good blockbuster, in cybersecurity you can expect highs, lows, thrills and chills. When new risks and threats appear, businesses get worried, and demand for new and innovative solutions increases dramatically. Security managers and solution providers then scramble to respond with a fresh set of tools and services aimed at mitigating the newly discovered threats.

Because cybersecurity is intrinsically linked to all levels of criminal activity, from petty thieves to large-scale organized crime syndicates, cybersecurity is a never-ending story. Yet, curiously, the never-ending sequence of new threats followed by new innovative solutions presents subtle patterns that, once identified, can help a CISO make the right strategic decisions based on logical reasoning rather than emotions.

Cybersecurity Concept Du Jour

When you have been in the cybersecurity industry for a while, as I have, you notice that in each era there is always a “du jour” defense concept that occupies industry decision makers’ state of mind. Whether it is prevention, detection or containment, in each period the popular concept becomes the defining model that everyone – analysts, tool builders, and even the technology end users – advocates fiercely. Which concept is most popular at a given time reflects critical shifts in widespread thinking about cybersecurity.

The Ambiguous Perception of Defense Concepts

The defense concepts of prevention, detection, and containment serve dual roles: as defense strategies employed by CISOs and, correspondingly, as product categories for different defense tools and services. However, the first challenge encountered by both cybersecurity professionals and end users is that these concepts don’t have a consistent general meaning; trying to give a single general definition of each of these terms is like attempting to build a castle on shifting sand (although that doesn’t stop people from trying). From a professional security point of view, there are different worlds in which specific targets, specific threats (new and old), and a roster of defenses exist. Each such world is a security domain in and of itself, and this domain serves as the minimum baseline context for the concepts of prevention, detection, and containment. Each particular threat in a security domain defines the boundaries and roles of these concepts. In addition, these concepts serve as product categories, where particular but related tools can be assigned to one or more categories based on the way each tool operates.

Ultimately, these defense concepts have a concrete meaning that is specific and actionable only within a specific security domain. For instance, a security domain can be defined by the type of threat, the type of target, or a combination of the two.

So, for example, there are domains that represent groups of threats with common patterns, such as advanced attacks on enterprises (of which advanced persistent threats, or APTs, are a subset) or denial of service attacks on online services. In contrast, there are security domains that represent assets, such as protecting a website – through its entry points – from a variety of threats including defacement, denial of service, and SQL injection. Whether a security domain is defined around an asset, and the magnitude of risk it can be exposed to, or around a threat group and the commonalities among its threats, depends on which framing is more useful.

Examples, Please

To make this more tangible let’s discuss a couple of examples by defining the security domain elements and explaining how the security concepts of prevention, detection, and containment need to be defined from within the domain.

The Threats Point of View – Advanced Attacks

Let’s assume that the primary attack vector for infiltration into the enterprise is via endpoints; the next phase of lateral movement takes place in the network via credential theft and exploitation; and exfiltration of data assets is conducted via HTTP covert channels as the ultimate goal.

Advanced attacks have a timeline with separate consecutive stages starting from entrance into the organization and ending with data theft. The security concepts have clearly defined meanings, related specifically to each and every stage of advanced attacks. For example, at the first stage of infiltration there are multiple ways malicious code can get into an employee computer, such as opening a malicious document or browsing a malicious website and installing a malicious executable unintentionally.


In the case of the first stage of infiltration of advanced attacks, “prevention” means making sure infiltration does not happen at all; “detection” means identifying signs of attempted infiltration or successful infiltration; and “containment” means knowing that the infiltration attempt has been stopped and the attack cannot move to the next stage. A concrete meaning for each and every concept in the specific security domain.

The Asset Point of View – Web Site Protection

Web sites can be a target for a variety of threats, such as security vulnerabilities in one of their scripts, misconfigured file system access rights, or a malicious insider with access to the web site’s backend systems. From a defensive point of view, the website has a binary state: compromised or uncompromised.

Therefore, the three defense concepts take on the following meanings: prevention is any measure that keeps the site from being compromised, and detection is identifying an already-compromised site. In this general example, containment has no real meaning or role of its own, as a successful containment ultimately equals prevention. Within a specific group of threats that have already compromised the site there may be a role for containment, such as preventing a maliciously installed malvertising campaign on the server from propagating to visitors’ computers.

It’s An Emotional Decision

So, as we have seen, our three key defense concepts have different and distinctive meanings that are highly dependent on their context, making broader definitions somewhat meaningless. Still, cybersecurity professionals and lay people alike strive to assign meaning to these words, because that is what the global cybersecurity audience expects: a popular meaning based on limited knowledge, personal perception, desires and fears.

The Popular Definitions of Prevention, Detection and Containment

From a non-security expert point-of-view, prevention has a deterministic feel – if the threat is prevented, it is over with no impact whatsoever. Determinism gives the perception of complete control, high confidence, a guarantee. Prevention is also perceived as an active strategy, as opposed to detection which is considered more passive (you wait for the threat to come, and then you might detect it).

Unlike prevention, detection is far from deterministic and would be classified as probabilistic, meaning that you might have a breach (say, an 85% chance). Detection tools that tie their success to probabilities give assurance by degree, but never 100% confidence, either on the stage of attack detected or on threat coverage.

Interestingly, containment might sound deterministic since it gives the impression that the problem is under control, but there is always the possibility that some threat could have leaked through the perimeter, turning it into more of a probabilistic strategy. And it straddles the line between active and passive. Containment passively waits for the threat, and then actively contains it.

In the end, these deterministic, probabilistic, active and passive perceptions end up contributing to the indefinite meaning of these three terms, making them highly influenced by public opinion and emotions. The three concepts in the eyes of the layperson turn into three levels of confidence based on a virtual confidence scale, with prevention at the top, containment in the middle, and detection as a tool of last resort. Detection gets the lowest confidence grade because it is the least proactive, and the least definite.


Today’s Defense Concept and What the Future Holds

Targets feel more exposed today than ever, with more and more organizations becoming victims due to newly discovered weaknesses. Attackers have the upper hand and everyone feels insecure. This imbalance in the attackers’ favor is currently driving the industry to focus on detection. It also sets the stage for the “security solution du jour”: when the balance leans toward the attackers, society lowers its expectations due to reduced confidence in tools, which results in a preference for detection. At a minimum, everyone wants to at least know an attack has taken place, and then to have the ability to mitigate and respond by minimizing damages. It is about being realistic and setting detection as the goal, given the understanding that prevention is not attainable at the moment.

If and when balance returns and cybersecurity solutions are again providing the highest level of protection for the task at hand, then prevention once again becomes the holy grail. Ultimately, no one is satisfied with anything less than bullet-proof prevention tools. This shift in state-of-mind has had a dramatic impact on the industry, with some tools becoming popular and others being sent into oblivion. It also has impacted the way CISOs define their strategies.

Different Standards for Different Contexts

The state of mind when selecting the preferred defense concept also has a more granular resolution. Within each security domain, different preferences for a specific concept may apply depending on how mature that domain is. For example, in the enterprise world, the threat of targeted attacks in particular, and advanced attacks in general, used to be negligible. The primary threats ten years ago were general-purpose, file-borne viruses targeting the computing devices held by the enterprise, not the enterprise itself or its unique assets. Prevention of such attacks was once quite effective with static and early behavioral scanning engines. These technologies were initially deployed at the endpoint to scan incoming files and later, for greater efficiency, added to the network to conduct centralized scans via a gateway device. Back then, when actual prevention was realistic, it became the standard security vendors were held to; since then, no one has settled for anything less than high prevention scores.

In the last five years, the proliferation of advanced threat techniques, together with serious monetary incentives for cyber criminals, has produced highly successful infiltration rates and serious damage. The success of cyber criminals has, in turn, created a sense of despair among users of defense technologies, with daily news reports revealing the extent of their exposure. The prevalence of high-profile attacks shifted the industry’s state of mind toward detection and containment as the only realistic course of action and damage control, since breaches seem inevitable. Today’s cybersecurity environment is characterized by fear, uncertainty, and doubt, with low confidence in defense solutions.

Yet, in this depressing atmosphere, signs of change are evident. CISOs today typically understand the magnitude of potential attacks and the level of exposure, and they understand how to handle breaches when they take place. In addition, the accelerated pace of innovation in cybersecurity tools is making a difference. Topics such as software defined networking, moving target defense, and virtualization are becoming part of the cybersecurity professional’s war chest.

Cybersecurity is a cyclical industry, and the bar is again being optimistically raised in the direction of “prevention.” Unfortunately, this time around, preventing cybercrime won’t be as easy as it was in the last cycle when preventative tools worked with relative simplicity. This time, cybersecurity professionals will need to be prepared with a much more complex defense ecosystem that includes collaboration among targets, vendors and even governmental entities.


Cyber-Evil Getting Ever More Personal

Smartphones will soon become the target of choice for cyber attackers, making cyber warfare a personal matter. The emergence of mobile threats is nothing new, though until now it has mainly been a phase of testing the waters and building an arsenal. Evil-doers are always on the lookout for weaknesses – the easiest to exploit and the most profitable. Now it is mobile’s turn. We are witnessing a historic shift in focus from personal computers, the long-time classic target, to mobile devices. And of course, a lofty rationale lies behind this change.

Why Mobile?
The dramatic increase in the use of mobile apps for nearly every aspect of our lives, the explosive growth in mobile web browsing, and the monopoly that mobile has on personal communications make our phones a worthy target. In retrospect, we can safely say that most security incidents are our own fault: the more we interact with our computer, the higher the chances that we will open a malicious document, visit a malicious website or mistakenly run a new application that wreaks havoc on our computer. Attackers have always favored human error, and what is better suited to exposing these weaknesses than a computer that is so intimately attached to us 24 hours a day?

Mobile presents unique challenges for security. Software patching is broken: the rollout of security fixes for operating systems is anywhere from slow to non-existent on Android, and cumbersome on iOS, and Android's dire fragmentation has been the Achilles heel of patching. Apps are not kept updated either: tens of thousands of small independent software vendors are behind many of the applications we use daily, with security the last concern on their minds. Another major headache arises from the blurred line between the business and private roles of the phone. A single tap on the screen takes you from your enterprise CRM app, to your personal WhatsApp messages, to a health tracking application that contains a database of every vital sign you have shown since you bought your phone.

Emerging Mobile Threats
Mobile threats are growing quickly in number and variety, mainly because attackers are well-equipped and well-organized, and this is happening at an alarming pace unparalleled by any previous emergence of cyber threats in other computing categories.

The first big wave of mobile threats to expect is cross-platform attacks, such as web browser exploits, cross-site scripting or ransomware – the repurposing of field-proven attacks from the personal computer world onto mobile platforms. One area of innovation is the persistence methods employed by mobile attackers, which will be highly difficult to detect, hiding deep inside applications and different parts of the operating system. A new genre of mobile-only attacks targets weaknesses in hybrid applications. Hybrid applications are called that because they use the internal web browser engine as part of their architecture, and as a result introduce many uncontrolled vulnerabilities. A large portion of the apps we are familiar with, including many banking-oriented ones and applications integrated into enterprise systems, were built this way. These provide an easy path for attackers into the back-end systems of many different organizations. The dreaded threat of botnets overflowing onto mobile phones has yet to materialize, though it will eventually happen, as it did on every other pervasive computing device: wherever there is enough computing power and connectivity, bots appear sooner or later. With mobile, the impact will be major, given the sheer number of devices.

App stores continue to be the primary distribution channel for rogue software, as it is almost impossible to automatically identify malicious apps – a challenge quite similar to the one faced by sandboxes dealing with evasive malware.

The security balance in the mobile world is on the verge of disruption, proving to us yet again that, as far as cyber security goes, we are ultimately at the mercy of the bad guys. This is the case at least for the time being, as the mobile security industry is still in its infancy, playing serious catch-up.

A variation of this story was published on Wired.co.UK – Hackers are honing in on your mobile phone.

Hackers are honing in on your mobile phone

Most security incidents are, in retrospect, our own fault. The more we interact with a computer, the higher the chances that we will open a malicious document, visit a harmful website or mistakenly launch a new app that causes havoc.

Attackers favour human error, and there’s nothing better suited to expose this than the smartphone, a computer that is attached to us 24 hours a day. The dramatic increase in usage of mobile apps for many aspects of our lives, the huge growth in mobile web browsing and the monopoly mobile has on our communications makes smartphones a key target for cybercrime.

Mobile presents unique challenges for our security. Software patching is broken: the rollout of security fixes is slow to non-existent on the Android ecosystem and cumbersome on iOS. Apps are rarely kept up-to date: for thousands of independent micro-vendors, security is the last concern. A further headache arises from the blurring between the business and private roles of the phone. A single tap can now take you from your enterprise CRM app to WhatsApp or a health-tracking app containing every vital sign recorded since you bought your phone.

The first wave of mobile threats to expect will be cross-platform, such as web browser exploits, cross-site scripting or ransomware – the repurposing of PC attacks on to mobile platforms. Mobile attackers are innovative in the methods they use to hide inside apps and operating systems, making them difficult to detect.

We will start to see mobile-specific attacks targeting weaknesses in hybrid apps. These use the internal web browser engine as part of their architecture, and as a result introduce uncontrolled vulnerabilities. Many familiar apps were built this way, providing an easy path for attackers into an organisation’s back-end systems. The threat of botnets – in which hackers take control of a user’s device to enlist them in spam campaigns or DDoS – overflowing on to mobile phones has yet to materialise, but where there’s sufficient computing power and connectivity, they will appear at some point. App stores will continue to be the primary distribution channel for rogue software as it is almost impossible to identify malicious apps.

Again, we’re at the mercy of the bad guys. The mobile security industry is still in its infancy, and has some catching up to do.


Published on Wired

Israel, The New Cyber Superpower

The emerging world of ever-growing connectivity, cybersecurity, and cyber-threats has initiated an uncontrolled transformation in the balance of global superpowers. The old notion of power, measured by the number of aircraft and missiles a country owns, has expanded to include new terms – the magnitude of a denial of service attack, the sophistication of advanced persistent attacks – and this has changed the landscape forever. A new form of power has emerged, with new rules of engagement expressed in bits and bytes and deep knowledge of how networks and operating systems work. In the past decade, Israel naturally evolved into one of the top players in this new playground, and the reasons for this are rooted deeply in its history.

Israel, a rather “new” nation on the face of the earth, has two main distinctive characteristics compared to many other countries: the entrepreneurial spirit that served as a backbone for building the country from the ground up, and the ongoing refusal of its close and distant neighbors to accept it as a legitimate nation. This ongoing struggle pushed the country to the forefront of technology, for both defense and offense. Furthermore, since Israel is a small country, seeking an advantage in an arena where wisdom plays a bigger role than money was natural – and the world of cybersecurity created such an opportunity. This advantage evolved into a mature and proven cybersecurity capability that is put to the test every second of the day.

Israel’s cybersecurity core competence has flowed into the commercial world, utilizing its unique entrepreneurial spirit. Cybersecurity as an industry has always been the preferred choice for many entrepreneurs due to their deep expertise in that area. This expertise created a true global competitive edge that was much needed by Israeli companies due to the challenges faced by a small remote country trying to succeed in the main markets of the U.S., Europe, and Asia. Furthermore, the available talent inflow from the army and other defense-related organizations serves as a unique resource that is highly desired nowadays by many multi-national companies that are aiming to establish their cybersecurity presence in Israel.

In recent years, with the emergence of cybersecurity as a globally important topic, Israel maintained its leadership in innovation with a high ratio of startups in that domain. While Israel has several large security companies, its startup industry is perceived as only generating innovative ideas, lacking the ability to sell its products unless acquired by a global company. The Israeli startup industry is supported by the local venture capital industry together with dedicated support from the Israeli government. They are pushing to help more and more Israeli companies become prominent global players on their own.

Israel has once again turned lemons into lemonade, building strong cyber capabilities as a consequence of its political position and challenges. Furthermore, these capabilities position it as a strong solution provider for the many countries and companies facing similar challenges.

Originally published on the CIPHER Brief

Is It GAME OVER?

Targeted attacks come in many forms, though there is one common tactic most of them share: exploitation. To achieve their goal, they need to penetrate different systems on the go, and the way this is done is by exploiting unpatched or unknown vulnerabilities. More common forms of exploitation happen via a malicious document which exploits vulnerabilities in Adobe Reader, or a malicious URL which exploits the browser, in order to set a foothold inside the end-point computer. Zero Day is the buzzword in the security industry today, and everyone uses it without necessarily understanding what it really means. It hides a complex world of software architectures, vulnerabilities and exploits that only a few thoroughly understand. Someone asked me to explain the topic, again, and when I really delved deep into the explanation I came to understand something quite surprising. Please bear with me, this is going to be a long post 🙂

Overview

I will begin with some definitions of the different terms in this area. These are my own personal interpretations of them…they are not taken from Wikipedia.

Vulnerabilities

This term usually refers to problems in software products – bugs, bad programming style or logical flaws in the implementation of software. Software is not perfect, and one could argue it cannot be; furthermore, the people who build the software are even less perfect, so it is safe to assume such problems will always exist in software products. Vulnerabilities exist in operating systems, in runtime environments such as Java and .NET, and in specific applications, whether written in high-level languages or native code. Vulnerabilities also exist in hardware products, but for the sake of this post I will focus on software, as the topic is broad enough even with this focus. One of the main contributors to the existence and growth in the number of vulnerabilities is the ever-growing complexity of software products – it simply increases the odds of creating new bugs that are difficult to spot because of that complexity. Vulnerabilities always relate to a specific version of a software product, which is essentially a static snapshot of the code used to build the product at a specific point in time. Time plays a major role in the business of vulnerabilities, maybe the most important one.

Assuming vulnerabilities exist in all software products, we can categorize them into three groups based on the level of awareness to these vulnerabilities:

  • Unknown Vulnerability – A vulnerability that exists in a specific piece of software and of which no one is aware. There is no proof that it exists, but experience teaches us that it does and is simply waiting to be discovered.
  • Zero Day – A vulnerability that has been discovered by a certain group of people, or by a single person, while the vendor of the software is not aware of it, so it is left open with no fix and no awareness of its presence.
  • Known Vulnerabilities – Vulnerabilities that have been brought to the awareness of the vendor and of customers, either privately or as public knowledge. Such vulnerabilities are usually identified by a CVE number. During the first period following discovery the vendor works on a fix, or patch, which then becomes available to customers; until customers apply it, the vulnerability remains open to attack. So in this category, each installation of the software can have patched or un-patched known vulnerabilities. In a sense the patch always comes as a new software version, so a specific product version either contains un-patched vulnerabilities or it does not – there is no such thing as a patched vulnerability, only new versions with fixes.

There are other ways to categorize vulnerabilities: by the exploitation technique, such as buffer overflow or heap spraying, or by the type of bug that leads to the vulnerability, such as a logical flaw in the design or an incorrect implementation.

Exploits

An exploit is a piece of code that abuses a specific vulnerability in order to make the attacked software do something unexpected. This means either gaining control of the execution path inside the running software so the exploit can run its own code, or merely achieving a side effect, such as crashing the software or making it do something unintended by its original design. Exploits are usually associated with malicious intentions, although from a technical point of view an exploit is just a mechanism for interacting with a specific piece of software via an open vulnerability – I once heard someone refer to it as an “undocumented API” :).

This picture from Infosec Institute describes a vulnerability/exploits life cycle in an illustrative manner:

[Figure: vulnerability and exploit life cycle, from the Infosec Institute]

The time span colored in red represents the period during which a discovered vulnerability is considered a zero day, and the span colored in green the period during which it is a known but un-patched vulnerability. The risk after disclosure is always dramatically higher, as the vulnerability becomes public knowledge and the bad guys can and do exploit it far more frequently than in the earlier stage. Closing the gap on the patching period is the only step that can be taken to reduce this risk.

The Math Behind Targeted Attacks

Most targeted attacks today use the exploitation of vulnerabilities to achieve three goals:

  • Penetrate an employee end-point computer using techniques such as malicious documents sent by email or malicious URLs. Those malicious documents and URLs contain malicious code that seeks specific vulnerabilities in host programs such as the browser or the document reader and, during what looks like an ordinary reading experience, sneaks into the host program through that penetration point.
  • Gain higher privilege once malicious code already resides on a computer. Often the attack that managed to sneak into the host application does not have enough privilege to continue into the organization, so the malicious code exploits vulnerabilities in the application's runtime environment – the operating system or the JVM, for example – that can help it gain elevated privileges.
  • Lateral movement – once the attack enters the organization and wants to reach other areas in the network to achieve its goals, many times it exploits vulnerabilities in other systems which reside on its path.

So, from the point of view of the attack itself, we can definitely identify three main stages:

  • Attack at Transit Pre Breach – the attack is moving toward its target, or is already at the target, prior to exploitation of a vulnerability.
  • Attack at Penetration – the attack is successfully exploiting a vulnerability in order to get inside.
  • Attack at Transit Post Breach – the attack is already running inside its target and spreading within the organization.

The following diagram quantifies the complexity inherent in each attack stage, from both the attacker’s and the defender’s sides; below it are descriptions of each area, followed by a concluding part:

Ability to Detect an Attack at Transit Pre Breach

These are the red areas in the diagram. Here the attack is on its way, prior to exploitation, and the enterprise can scan its binary artifacts – network packets, a visited website, or a specific document traveling through email servers or arriving at the target computer, for example. This approach is called static scanning. The enterprise can also detonate the artifact in a controlled environment (for instance, opening a document in a sandbox) and look for behavior patterns that resemble a known attack – this is called behavioral scanning.
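
As a rough illustration of what static scanning boils down to, here is a naive signature matcher in Python; the byte patterns are invented stand-ins for real malware signatures.

```python
# Naive static scanner: flag a file if it contains any known-bad byte pattern.
# The patterns below are invented stand-ins for real signatures.
KNOWN_BAD_PATTERNS = [b"\xde\xad\xbe\xef", b"evil_dropper_v1"]

def static_scan(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    return any(pattern in data for pattern in KNOWN_BAD_PATTERNS)
```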

Attacks pose three challenges to security systems at this stage:

  • Infinite Signature Mutations – Static scanners look for specific binary patterns in a file that match a malicious code sample in their database. Attackers have long since outsmarted these tools: they use automation to change those patterns at random and can create an effectively infinite number of static mutations, so a single attack can take an infinite number of forms in its packaging.
  • Infinite Behavioral Mutations – The security industry evolved from static scanners toward behavioral scanners, where a behavioral “signature” sidesteps the problems caused by static mutations and the base of behavior samples is dramatically smaller: a single behavior can be decorated with many static mutations, and behavioral scanners cut through that noise. Attackers, however, have made behavioral mutations effectively infinite as well, in two ways:
    • Infinite number of mutations in behavior – Just as attackers outsmart static scanners with endless static decorations, here they can insert dummy steps or reshuffle the attack steps so that the end result stays the same while the behavioral pattern looks different. The spectrum of behavioral mutations at first seemed narrower than that of static mutations, but with the advancement of attack generators even that advantage has eroded.
    • Sandbox evasion – Attacks that are examined for bad behavior in a sandbox have developed advanced capabilities for detecting that they are running in an artificial environment; when they do, they simply behave benignly and withhold the exploitation. This is an ongoing race between behavioral scanners and attackers, and the attackers currently seem to have the upper hand.
  • Infinite Obfuscation – This technique is related to the infinite static mutations above but deserves specific attention. To deceive static scanners, attackers hide the malicious code itself by applying some transformation to it, such as encryption, and shipping a small stub that decrypts it on the target just before exploitation. Again, the range of options for obfuscating code is infinite, which makes the static scanners’ work even harder (a toy illustration follows this list).
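
To see why such mutations and obfuscations defeat signature matching, here is a toy continuation of the scanner sketch above: a one-byte XOR "packer" leaves nothing on disk for the signature to match, while a tiny stub can recover the payload at run time. The payload and key are, of course, purely illustrative.

```python
# Toy "packer": the same payload, XOR-ed with a one-byte key, no longer matches
# any byte signature; a tiny stub undoes the transform on the target at run time.
def xor_transform(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

payload = b"evil_dropper_v1"           # would be flagged by the naive scanner above
packed = xor_transform(payload, 0x5A)  # sails past that same scanner untouched
assert payload not in packed           # the signature is gone from the packed form
assert xor_transform(packed, 0x5A) == payload  # yet the original is trivially recovered
```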

All this makes capturing an attack prior to penetration very difficult, verging on impossible, and the difficulty only grows with time. I am not implying that such security measures don’t serve an important role – today they are the main safeguards keeping the enterprise from turning into a zoo. I am just saying it is a very hard problem to solve, and that there are other areas, in terms of ROI (if such a thing as security ROI exists), in which a CISO would be better off investing.

Ability to Stop an Attack at Transit Post Breach

These are the black areas in the diagram. An attack that has already gained access to the network can take an infinite number of possible paths to achieve its goals. Once an attack is inside the network, the relevant security products try to identify it: big data/analytics technologies that look for activities in the network implying malicious behavior, or network monitors that listen to the traffic and try to identify artifacts or static and behavioral patterns of an attack. These tools rely on a variety of informational signals that serve as indicators of attack.

Attacks pose multiple challenges to security products at this stage:

  • Infinite Signature Mutations, Infinite Behavioral Mutations, Infinite Obfuscation – the same challenges described above, since an attack inside the network can have the same characteristics it had before entering it.
  • Limited Visibility into Lateral Movement – Once an attack is inside, its next steps are usually to gain a foothold in different areas of the network, and such movement is barely visible because it ultimately consists of legitimate actions: once an attacker obtains higher privileges, it performs operations that are legitimate in themselves, just highly privileged, and it is very difficult for a machine to tell the good ones from the bad. On top of that, persistent attacks usually employ technologies that let them remain stealthy and invisible.
  • Infinite Attack Paths – The path an attack can take inside the network, especially a targeted attack whose goals are unknown to the enterprise, has infinitely many options.

This severely limits the ability to deduce, from specific signals coming from different sensors in the network, that an attack is underway, along with its scope and goals. Sensors deployed in the network never provide true visibility into what is really happening, so the picture is always partial. Add to that deception techniques around the attack path, and you are facing a very difficult problem. Again, I am not arguing that security analytics products focused on post-breach are unimportant – on the contrary, they are very important. I am only saying that they are just the beginning of a very long road toward real effectiveness in this area. Machine learning already plays a serious role, and AI will definitely be an ingredient in a future solution.

Ability to Stop an Attack at Penetration Pre Breach and on Lateral Movement

These are the dark blue areas in the diagram. Here the challenge is reversed, onto the attacker, since there is only a limited number of entry points into the system – entry points a.k.a. vulnerabilities. They are:

  • Unpatched Vulnerabilities – These are open “windows” that have not yet been covered. The main challenge for the IT industry here is automation, dynamic updating capabilities, and prioritization. It is definitely an open gap, but one that can potentially be narrowed to the point of insignificance.
  • Zero Days – This is an unsolved problem. There are many mitigation approaches, such as ASLR and DEP on Windows, but still no bulletproof solution. In the startup scene I am aware of quite a few teams working very hard on one. Attackers identified this soft underbelly long ago, and it is the weapon of choice for targeted attacks that can yield serious gains for the attacker (a small inspection sketch follows this list).
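
As a small, concrete illustration of the mitigation side mentioned above, the sketch below uses the third-party pefile library to check whether a given Windows binary opts in to ASLR and DEP via its PE header flags; the file path is just a placeholder.

```python
import pefile  # third-party library: pip install pefile

# IMAGE_DLLCHARACTERISTICS flags defined by the PE format specification.
DYNAMIC_BASE = 0x0040  # the binary opts in to ASLR
NX_COMPAT = 0x0100     # the binary opts in to DEP

def exploit_mitigations(path: str) -> dict:
    pe = pefile.PE(path)
    flags = pe.OPTIONAL_HEADER.DllCharacteristics
    return {"aslr": bool(flags & DYNAMIC_BASE), "dep": bool(flags & NX_COMPAT)}

# Placeholder path; point it at any local PE file to inspect its opt-in flags.
print(exploit_mitigations(r"C:\Windows\System32\notepad.exe"))
```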

This area presents a real problem, but it also seems the most likely to be solved before the others, mainly because the attacker is at its greatest disadvantage at this stage: right before entering the network it has infinite options for disguising itself, and after entering the network it has infinite action paths to take, but here it must pass through a specific window – and there are not that many of those left unprotected.

Players in the Area of Penetration Prevention

Multiple companies and startups are brave enough to tackle the toughest challenge in the targeted-attacks game – preventing infiltration – what I call facing the enemy at the gate. In this ad-hoc list I have included only technologies that aim to block attacks in real time; there are many other startups that approach static or behavioral scanning in unique and disruptive ways, such as Cylance, Cybereason, or Bit9 + Carbon Black (list from @RickHolland), which were excluded for the sake of brevity and focus.

Containment Solutions

Technologies that isolate user applications inside a virtualized environment. The philosophy is that even if an exploitation occurs within the application, it will not propagate to the rest of the computer and the attack will be contained. From an engineering point of view, I think these teams have the most challenging task: isolation and usability pull against each other in terms of productivity, and it all involves virtualization on an endpoint, which is a difficult task in its own right. Leading players are Bromium and Invincea, well-established startups with very good traction in the market.

Exploitation Detection & Prevention

Technologies that aim to detect and prevent the actual act of exploitation: from companies like Cyvera (now the Palo Alto Networks Traps product line), which aim to identify exploitation patterns; through technologies such as ASLR/DEP and EMET, which break exploits’ assumptions by modifying the inner structures of programs and setting traps at “hot” places susceptible to attack; up to startups like Morphisec, which employs a unique moving-target concept to deceive and capture attacks in real time. Another long-time player, and maybe the most veteran in the anti-exploitation field, is Malwarebytes, which has a comprehensive anti-exploitation offering with capabilities ranging from in-memory deception and trapping techniques up to real-time sandboxing.

At the moment the endpoint market is still dominated by the marketing money poured in by the major players, even as their solutions grow ineffective at an accelerating pace. I believe this is a transition period – you can already hear voices saying the endpoint market needs a shakeup. In the future, the anchor of endpoint protection will be real-time attack prevention, with static and behavioral scanning extensions playing a minor, feature-completing role. So pay careful attention to the technologies mentioned above, as one of them (or maybe a combination:) will bring the “force” back into balance:)

 

Advice for the CISO

Invest in closing the gap posed by vulnerabilities: from patch automation and prioritized vulnerability scanning up to security code analysis for in-house applications – it is all worth it. Furthermore, seek out solutions that deal directly with the problem of zero days; there are several startups in this area, and their contribution can be of much higher magnitude than any other security investment in the pre- or post-breach phases. A minimal prioritization sketch follows.
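
As a sketch of what prioritized vulnerability scanning can mean in practice, the snippet below ranks hypothetical scanner findings so that issues that are both severe and already weaponized rise to the top; the findings, CVE identifiers, and field names are all invented for illustration.

```python
# Hypothetical scanner output; every record here is invented for illustration.
findings = [
    {"asset": "mail-gw",   "cve": "CVE-XXXX-0001", "cvss": 9.8, "exploit_public": True},
    {"asset": "intranet",  "cve": "CVE-XXXX-0002", "cvss": 6.5, "exploit_public": False},
    {"asset": "hr-portal", "cve": "CVE-XXXX-0003", "cvss": 8.1, "exploit_public": True},
]

# Patch first what is both severe and already weaponized.
prioritized = sorted(findings, key=lambda f: (f["exploit_public"], f["cvss"]), reverse=True)
for f in prioritized:
    print(f'{f["asset"]:<10} {f["cve"]}  CVSS {f["cvss"]}  public exploit: {f["exploit_public"]}')
```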

 

Cyber Tech 2015 – It’s a Wrap

It has been a crazy two days at Israel’s Cyber Tech 2015…in a good way! The exhibition hall was split into three sections: the booths of the established companies, the startups pavilion, and the Cyber Spark arena. It was like examining an x-ray of the emerging cyber industry in Israel: on one hand you have the grown-ups, who are the established players, then the startups/sprouts seeking opportunities for growth, and an engine that generates such sprouts – the Cyber Spark. I am lucky enough to be part of the Cyber Spark growth engine, which is made up of the most innovative contributors to the cyber industry in Israel – giants like EMC and Deutsche Telekom, alongside Ben-Gurion University and JVP Cyber Labs. The Cyber Spark is a place where you see how ideas formed in the minds of bright scientists and entrepreneurs flourish into new companies.

It all started two days ago, twelve hours before the event hall opened its doors, with great coverage by Kim Zetter of Wired on the BitWhisper heat-based air-gap breach – a splendid opening that generated tremendous interest across the worldwide media in the ongoing story of air-gap security research at the Ben-Gurion University cyber research center. This made our time at the booth quite hectic, with many visitors interested in the details or just dropping by to compliment us on our hard work.

 

Startups

I had enough time to visit the startups presenting at the exhibition – for someone living in the future, they are the real deal – and I wanted to share some thoughts and insights on what I saw. Although each startup is unique, with its own story and team, there are clear genres of solutions and technologies:

Security Analytics

Going under the names analytics, big data, or BI, a handful of startups are trying to solve the problem of security information overload. And it is a real problem: today security and IT systems throw out hundreds of reports every second, and it is impossible to prioritize what to handle first or to distinguish the important from the less important. The problem splits into two parts: the ongoing monitoring and maintenance of the network, and the handling of the special circumstances of post-breach, where decisions and actions are critical since time is pressing and wrong moves can damage the investigation. Each startup takes its own angle on this task, with unique advantages and disadvantages, and it is fairly safe to say that the security big data topic is finally getting proper treatment from the innovation world. Under the analytics category I also group the startups that help visualize and understand enterprise IT assets, addressing the same information-overload problem in their own way.

Mobile Security

Security of mobile devices – laptops, tablets, and phones – is a vast topic that includes on-device security measures, secure operating systems, integration of mobile workers into enterprise IT, and risk management of mobile workers. Israeli startups have been addressing this topic for several years now, and this year it finally seems that enterprises are ready to absorb such solutions. These solutions help mitigate the considerable risk inherent in the new model of enterprise computing, which is no longer behind the closed doors of the office: the enterprise is now distributed globally and always on the move, with parts of it on the train or at home at any given moment.

Authentication

We all know passwords are bad: hard to remember and, above all, insecure, and the world is definitely working toward reinventing the ways we authenticate digitally without them. From an innovation point of view, authentication startups are the most fascinating, as each one comes from a completely different discipline yet aims to solve the same problem. Some base their technology on the human body, i.e., biometrics, and some come from the cryptographic world with all kinds of neat tricks such as zero-knowledge proofs. From an investor’s point of view, these startups are the riskiest, since they all ultimately depend on consumer adoption, and usually only one or two get to win – and win big – while the rest are left deserted.

Security Consulting

Although it is odd to see consulting companies in the startups pavilion, in the world of security it makes a lot of sense. There is a huge global shortage of security professionals, and this demand serves as the basis for new consulting powerhouses providing services such as penetration testing, risk assessment, and solution evaluation – the Israelis are well known for their hands-on expertise, which is appreciated by many organizations across the world.

Security in the Cloud

The cloud movement is happening now, and security is both a large part of it and an enabler of it – an opportunity startups are of course not missing. Cloud security is essentially the full range of technologies and products aimed at defending cloud operations and data. In a way, it is a replica of the legacy data-center security inventory, simply taking a different shape to better suit the dynamic environment of cloud computing. This is a very promising sector, as its demand curve is steep.

Security Hardware

This was refreshing to see, since Israeli startups have tended in recent years to focus mostly on software. There was a range of cool devices, from sniffers to backup units and WiFi blockers. I wonder how it will play out for them, as the hardware playbook is definitely different from the software one.

SCADA Security

SCADA always ignites the imagination, bringing to mind critical infrastructure and sensitive nuclear plants, which has definitely grabbed the attention of many entrepreneurs looking to build ventures around these important issues: the inability to update such critical systems, the lack of visibility into attacks on disconnected devices, and the ability to control assets in real time when an attack occurs. The real problem with SCADA systems is the risk associated with an attack, which everyone wants to avoid at all costs, while the challenge for startups is integrating into this diverse world.

IOT Security

IOT security is a popular buzzword now, and behind it hides a very complicated world of many devices and infrastructures for which there is no one-size-fits-all solution. Although some startups claim to be solving IOT security as a whole, I project that with time each of them will find its own niche – which is fine, as it is a vast world with endless opportunities. A prominent IOT branch at the exhibition was car security, with some very interesting innovations.

Data Leakage Protection

As part of the post-breach challenge, quite a few startups are focusing on how to prevent data exfiltration. From a scientific point of view it is a great challenge, built of conflicting factors: the tighter the control on the data, the less convenient the data is to use on a normal day.

Web Services Security

The growing wave of attacks on websites in recent years, and the tremendous impact they have on consumer confidence – when your website gets defaced or starts serving malware – has grabbed the attention of Israeli startups. Here we find a versatile portfolio of active protection tools that prevent and deflect attacks, scanning services that check websites, and tools for DDoS prevention. DDoS has been in the limelight recently and, with all the botnets out there, it is a real threat.

Insider Threats

Insider threats are one of the biggest concerns for CISOs today, with two main attack vectors: the clueless employee and the malicious employee. The threat is addressed from many directions, starting with profiling employee behavior, profiling the usage of data assets, and protecting central assets like Active Directory. This will definitely be a source of innovation in the coming years, as the problem is diverse and, because it involves the human factor, difficult to solve.

Eliminating Vulnerabilities

Software vulnerabilities were, are, and will remain an unsolved problem, and the industry tackles it in many different ways, ranging from code analysis and secure development practices, through vulnerability scanning tools and services, to active protections against exploitation. Vulnerabilities are the mirror image of APTs, and here again there are many unique approaches to detecting and stopping these attacks: endpoint protection tools, network detection tools, host-based protection systems, botnet detection, and honeypots that aim to lure the attacks and contain them.

What I did Not See

Among the things I did not see there: tools that attack the attackers, developments in cryptography, container security, security & AI, and social-engineering-related tools.

 

I regret that I did not have much time to listen to the speakers…I heard that some of the presentations were very good. Maybe next year at Cyber Tech 2016.

A Brief History of the Emerging Cyber Capital of the World: Beer-Sheva, Israel


The beginning of the cyber park

There are very few occasions in life where you personally experience a convergence of unrelated events that lead to something…something BIG! I am talking about Beer-Sheva, Israel’s desert capital. When I started to work with Deutsche Telekom Innovation Laboratories at Ben-Gurion University 9 years ago it was a cool place to be, though still quite small. Back then, security—which was not yet referred to as cyber security—was one of the topics we covered, but definitely not the only one. At that time, we were the first and only activity related to cyber in this great desert. No one knew, or at least I didn’t, that it was going to be a blossoming cyber powerhouse. Actually, when imagining the Beer-Sheva of yesterday, it was unthinkable that the hi-tech scene of Tel Aviv would make its way southward.

Now, fast-forward to the last three years, and well, it has been a rollercoaster. Deutsche Telekom has strengthened its investment in security, and together with the emerging expertise of Ben-Gurion University in the field of cyber, other large, leading security companies have caught the inspiration and followed suit. Major players have opened branches in Beer-Sheva’s Cyber Spark area: EMC and Lockheed Martin, an IBM research lab, and numerous important others as well. The growing interest and recognition of BGU’s expertise in cyber has prompted many organizations and companies to cooperate with the university—leading eventually to the emergence of the Cyber Security Labs at Ben-Gurion University. I’m referring to the same lab that was behind the Samsung Knox VPN vulnerability disclosure and the breaking of air-gap security via AirHopper. In parallel, JVP, the most prominent VC in Israel, has opened the JVP Cyber Labs which started pumping life into the many ideas that were up in the air—giving everyone a commercial point of view of innovation. The Israeli government also started backing this plan, and together with the local authority, really transformed the ecosystem into a competitive place for talent. Most of all, the university has been a real visionary, backing this emergence from the very beginning in spirit and action alike.

This chain of events has led to a tipping point of no return, where Beer-Sheva can be defined with confidence as the emerging cyber capital of the world. You can find a mix of professors, young researchers, entrepreneurs, venture capitalists and large corporations all located in the same physical place, talking and thinking about cyber and converging into this newborn cyber industry. Of course, this is my personal story and point of view, and others have their own angle. Beer-Sheva as a cyber capital, however, is undeniable – take, for example, David Strom’s impressions from his recent visit.


A view to the future of the cyber park

One very special person who must be mentioned here, whom I perceive as the father of this entire movement, is Professor Yuval Elovici, the head and creator of the Telekom Innovation Laboratories and the cyber security labs at Ben-Gurion University. I am grateful to him both personally and collectively: first and foremost, for pursuing the development of this process in Beer-Sheva. He had this vision from the very early days, a long time ago, when the term “cyber” was known only for the crazy shopping done on Cyber Monday. The second, personal, reason is for pulling me into this wonderful adventure. Before joining the university labs, I never imagined having anything to do with academia, being a person who never even properly graduated from high school:)


The movers and shakers of the cyber capital

 

Life is full of surprises.

 

So, I suggest that anyone in the area of cyber—in Israel or abroad—keep a very close eye on what is happening in Beer-Sheva, because it is happening now!

P.S. If you are around on the 24-25th of March at the Cyber Tech event, please drop by and say “hi” at our beautiful Cyber Spark booth.

Cybertech