Is It GAME OVER?

Targeted attacks take many forms, though there is one common tactic most of them share: exploitation. To achieve their goal, attackers need to penetrate different systems along the way, and they do so by exploiting unpatched or unknown vulnerabilities. The more common forms of exploitation happen via a malicious document that exploits vulnerabilities in Adobe Reader, or a malicious URL that exploits the browser, in order to gain a foothold on the endpoint computer. Zero day is the buzzword in the security industry today, and everyone uses it without necessarily understanding what it really means. It hides a complex world of software architectures, vulnerabilities, and exploits that only a few thoroughly understand. Someone asked me to explain the topic, again, and when I really delved deep into the explanation I realized something quite surprising. Please bear with me, this is going to be a long post :-)

Overview

I will begin with some definitions of the different terms in the area. These are my own personal interpretations of them…they are not taken from Wikipedia.

Vulnerabilities

This term usually refers to problems in software products – bugs, bad programming style, or logical problems in the implementation of software. Software is not perfect, and one could argue it never can be. Furthermore, the people who build the software are even less perfect, so it is safe to assume such problems will always exist in software products. Vulnerabilities exist in operating systems, in runtime environments such as Java and .Net, and in specific applications, whether written in high-level languages or native code. Vulnerabilities also exist in hardware products, but for the sake of this post I will focus on software, as the topic is broad enough even with this focus. One of the main contributors to the growing number of vulnerabilities is the ever-increasing complexity of software products; complexity raises the odds of creating new bugs, which are then harder to spot precisely because of that complexity.

Vulnerabilities always relate to a specific version of a software product, which is basically a static snapshot of the code used to build the product at a specific point in time. Time plays a major role in the business of vulnerabilities, maybe the most important one. Assuming vulnerabilities exist in all software products, we can categorize them into three groups based on the level of awareness of them:
  • Unknown Vulnerability - A vulnerability which exists in a specific piece of software and of which no one is aware. There is no proof that such a vulnerability exists, but experience teaches us that it does and is just waiting to be discovered.
  • Zero Day - A vulnerability which has been discovered by a certain group of people or a single person, while the vendor of the software is not aware of it, so it is left open, with no fix and no awareness of its presence.
  • Known Vulnerabilities - Vulnerabilities which have been brought to the awareness of the vendor and of customers, either privately or as public knowledge. Such vulnerabilities are usually identified by a CVE number. During the first period following discovery, the vendor works on a fix, or patch, which will become available to customers; until customers update the software with that fix, the vulnerability remains open to attack. In this category, each installation of the software can therefore have patched or unpatched known vulnerabilities. Since a patch always arrives as a new software version, a specific product version either contains an unpatched vulnerability or it does not; strictly speaking, there is no such thing as a patched vulnerability, only new versions with fixes.
There are other ways to categorize vulnerabilities: by exploitation technique, such as buffer overflow or heap spraying, or by the type of bug which leads to the vulnerability, such as a logical flaw in the design or an incorrect implementation.

Exploits

An exploit is a piece of code which abuses a specific vulnerability in order to make the attacked software do something unexpected. That means either gaining control of the execution path inside the running software so the exploit can run its own code, or just achieving a side effect such as crashing the software or causing it to do something unintended by its original design. Exploits are usually highly associated with malicious intentions, although from a technical point of view an exploit is just a mechanism to interact with a specific piece of software via an open vulnerability – I once heard someone refer to it as an "undocumented API" :).
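To make the idea concrete, here is a minimal, hypothetical C sketch of the kind of bug an exploit typically abuses: an unchecked copy into a fixed-size stack buffer. The function and input names are made up for illustration; this shows the vulnerability pattern only, not an actual exploit.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical vulnerable routine: copies attacker-controlled input
 * into a fixed-size stack buffer without checking its length. */
void parse_record(const char *input) {
    char name[32];
    strcpy(name, input);   /* no bounds check: a classic buffer overflow */
    printf("parsed record for %s\n", name);
}

int main(void) {
    /* A benign call works fine... */
    parse_record("alice");
    /* ...but input longer than 32 bytes overwrites adjacent stack memory,
     * including the saved return address, which is exactly the kind of
     * control-flow hijack an exploit is built around. */
    return 0;
}
```

An exploit for such a bug crafts the overly long input so that the overwritten return address points at code of the attacker's choosing; mitigations such as stack canaries, DEP and ASLR exist precisely to make that step harder.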

This picture from the InfoSec Institute illustrates the vulnerability/exploit life cycle:

[Image: zero-day vulnerability/exploit life cycle diagram]

The time span colored in red represents the period during which a discovered vulnerability is considered a zero day, and the span colored in green represents the period during which the vulnerability is known but unpatched. The post-disclosure risk is dramatically higher, as the vulnerability becomes public knowledge and the bad guys can and do exploit it far more frequently than in the earlier stage. Shortening the patching period is the only step that can be taken to reduce this risk.

The Math Behind Targeted Attacks

Most targeted attacks today exploit vulnerabilities to achieve three goals:
  • Penetrate an employee's endpoint computer through techniques such as malicious documents sent by email or malicious URLs. Those malicious documents/URLs contain malicious code which seeks specific vulnerabilities in host programs such as the browser or the document reader, and during a seemingly innocent reading experience the malicious code sneaks into the host program and uses it as a penetration point.
  • Gain higher privilege once malicious code already resides on a computer. Often the attack that managed to sneak into the host application does not have enough privilege to continue its assault on the organization, so the malicious code exploits vulnerabilities in the runtime environment of the application, which can be the operating system or the JVM for example, vulnerabilities which can help it gain elevated privileges.
  • Lateral movement - once the attack is inside the organization and wants to reach other areas in the network to achieve its goals, it often exploits vulnerabilities in other systems that reside on its path.
So, from the point of view of the attack itself, we can definitely identify three main stages:
  • An attack at Transit Pre-Breach - The attack is moving around on its way to the target, and inside the target, prior to exploitation of a vulnerability.
  • An attack at Penetration - The attack is successfully exploiting a vulnerability in order to get inside.
  • An attack at Transit Post-Breach - The attack has started running inside its target and within the organization.
The following diagram quantifies the complexity inherent in each attack stage, from both the attacker's and the defender's side; below the diagram there are descriptions of each area, followed by the conclusion:

Ability to Detect an Attack at Transit Pre-Breach

Those are the red areas in the diagram. Here an attack is on its way, prior to exploitation. "On its way" means the enterprise can scan the binary artifacts of the attack, whether they take the form of network packets, a visited website, or a specific document traveling through email servers or arriving at the target computer, for example. This approach is called static scanning. The enterprise can also emulate the expected behavior of the artifact (for example, opening a document in a sandboxed environment) and try to identify patterns in the behavior of the sandbox environment which resemble a known attack pattern – this is called behavioral scanning. Attacks pose three challenges to security systems at this stage:
  • Infinite Signature Mutations - Static scanners look for specific binary patterns in a file which should match a malicious code sample in their database. Attackers have long since outsmarted these tools: they have automation for changing those signatures randomly, with the ability to create an infinite number of static mutations. So a single attack can take an infinite number of forms in its packaging.
  • Infinite Behavioural Mutations - The security industry's evolution beyond static scanners was towards behavioral scanners, where the "signature" of a behavior eliminates the problems induced by static mutations and the sample base of behaviors is dramatically smaller. A single behavior can be decorated with many static mutations, and behavioral scanners reduce this noise. The challenges posed by the attackers make behavioral mutations infinite in nature as well, and they are twofold:
    • Infinite number of mutations in behaviour - Just as attackers outsmart static scanners by creating an infinite number of static decorations on the attack, here they can add dummy steps or reshuffle the attack steps so that the end result is the same but, from a behavioral-pattern point of view, it presents a different behavior. The spectrum of behavioral mutations seemed at first narrower than that of static mutations, but with the advancement of attack generators even that has been achieved.
    • Sandbox evasion - Attacks which are scanned for bad behavior in a sandboxed environment have developed advanced capabilities to detect whether they are running in an artificial environment, and if they detect that they are, they pretend to be benign and perform no exploitation. This is an ongoing race between behavioral scanners and attackers, and the attackers seem to have the upper hand in the game.
  • Infinite Obfuscation - This technique has been adopted by attackers in a way that connects to the infinite static mutations factor but deserves specific attention. In order to deceive static scanners, attackers hide the malicious code itself by running some transformation on it, such as encryption, and attach a small piece of code which is responsible for decrypting it on the target prior to exploitation. Again, the range of options for obfuscating code is infinite, which makes the static scanners' work even more difficult (see the toy sketch after this list).
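To illustrate why static signatures break down against mutation and obfuscation, here is a minimal, hypothetical C sketch: a naive byte-signature scanner and a trivially XOR-"obfuscated" copy of the same payload that the scanner no longer matches. The payload bytes and the signature are made up for illustration only.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Naive static scanner: report a hit if the known signature bytes appear verbatim. */
static int matches_signature(const uint8_t *data, size_t len,
                             const uint8_t *sig, size_t sig_len) {
    for (size_t i = 0; i + sig_len <= len; i++)
        if (memcmp(data + i, sig, sig_len) == 0)
            return 1;
    return 0;
}

int main(void) {
    /* Made-up "malicious" byte pattern standing in for a real signature. */
    const uint8_t signature[] = {0xDE, 0xAD, 0xBE, 0xEF};
    uint8_t sample[] = {0x90, 0xDE, 0xAD, 0xBE, 0xEF, 0x90};

    printf("original sample flagged: %d\n",
           matches_signature(sample, sizeof sample, signature, sizeof signature));

    /* Trivial obfuscation: XOR every byte with a key. From the attacker's point of
     * view nothing changed (a tiny stub would XOR it back at runtime), but the
     * byte-level signature no longer appears anywhere in the file. */
    for (size_t i = 0; i < sizeof sample; i++)
        sample[i] ^= 0x5A;

    printf("obfuscated sample flagged: %d\n",
           matches_signature(sample, sizeof sample, signature, sizeof signature));
    return 0;
}
```

Real obfuscators and packers are far more elaborate, but the economics are the same: re-encoding is cheap and automatic for the attacker, while the defender's signature database only ever matches the forms it has already seen.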
This makes the challenge of capturing an attack prior to penetration very difficult, bordering on impossible, and it only gets harder with time. I am not by any means implying that such security measures don't serve an important role; today they are the main safeguards keeping the enterprise from turning into a zoo. I am just saying it is a very difficult problem to solve, and that there are other areas with better ROI (if such a thing as security ROI exists) in which a CISO is better off investing.

Ability to Stop an Attack at Transit Post Breach

Those are the black areas in the diagram. An attack which has already gained access to the network can take an infinite number of possible attack paths to achieve its goals. Once an attack is inside the network, the relevant security products try to identify it. Such technologies include big data/analytics platforms which try to identify activities in the network that imply malicious activity, as well as network monitors which listen to the traffic and try to identify artifacts or static or behavioral patterns of an attack. Those tools rely on different informational signals which serve as attack indicators. Attacks pose multiple challenges to security products at this stage:
  • Infinite Signature Mutations, Infinite Behavioural Mutations, Infinite Obfuscation - These are the same challenges as described before, since the attack within the network can have the same characteristics it had before entering the network.
  • Limited Visibility on Lateral Movement - Once an attack is inside, its next steps are usually to get a stronghold in different areas of the network, and such movement is hardly visible as it eventually consists of legitimate actions – once an attacker gains higher privilege, it conducts actions which are considered legitimate, just highly privileged, and it is very difficult for a machine to distinguish the good ones from the bad ones. Add to that the fact that persistent attacks usually use technologies which enable them to remain stealthy and invisible.
  • Infinite Attack Paths - The path an attack can take inside the network is unknown to the enterprise, and especially in a targeted attack, where the goals are also unknown, the options for it are infinite.
This makes the ability to deduce, from specific signals coming from different sensors in the network, that there is an attack, and what its boundaries and goals are, very limited. Sensors deployed on the network never provide true visibility into what's really happening, so the picture is always partial. Add to that deception techniques about the path of the attack and you stumble into a very difficult problem. Again, I am not arguing that security analytics products which focus on post-breach are not important; on the contrary, they are very important. I am just saying this is only the beginning of a very long path towards real effectiveness in that area. Machine learning is already playing a serious role, and AI will definitely be an ingredient in a future solution.

Ability to Stop an Attack at Penetration Pre-Breach and on Lateral Movement

Those are the dark blue areas in the diagram. Here the challenge is reversed onto the attacker, since there is only a limited number of entry points into the system. Entry points, a.k.a. vulnerabilities. Those are:
  • Unpatched Vulnerabilities – These are open "windows" which have not been covered yet. The main challenge here for the IT industry is about automation, dynamic updating capabilities, and prioritization. It is definitely an open gap which can potentially be narrowed down to the point of insignificance.
  • Zero Days – This is an unsolved problem. There are many approaches to it, such as ASLR and DEP on Windows, but still there is no bulletproof solution. On the startup scene, I am aware of quite a few companies working very hard on a solution. Attackers identified this soft underbelly a long time ago, and it is the main weapon of choice for targeted attacks, which can potentially yield serious gains for the attacker.
This area presents a definite problem, but in a way it seems the most likely to be solved earlier than the other areas, mainly because at this stage the attacker is at its greatest disadvantage: right before getting into the network it has infinite options to disguise itself, and after getting in, the action paths it can take are infinite, but here the attacker needs to go through a specific window, and there aren't too many of those out there left unprotected.

Players in the Area of Penetration Prevention

There are multiple companies/startups which are brave enough to tackle the toughest challenge in the targeted attacks game - preventing infiltration - I call it facing the enemy at the gate. In this ad-hoc list I have included only technologies which aim to block attacks in real time - there are many other startups which approach static or behavioral scanning in a unique and disruptive way, such as Cylance, Cybereason or Bit9 + Carbon Black (list from @RickHolland), which were excluded for the sake of brevity and focus.

Containment Solutions

Technologies which isolate user applications within a virtualized environment. The philosophy behind them is that even if the application is exploited, the attack won't propagate to the rest of the computer environment and will be contained. From an engineering point of view, I think these teams have the most challenging task, since isolation and usability pull against each other and hurt productivity, and it all involves virtualization on an endpoint, which is a difficult task on its own. Leading players are Bromium and Invincea, well-established startups with very good traction in the market.

Exploitation Detection & Prevention

Technologies which aim to detect and prevent the actual act of exploitation. These range from companies like Cyvera (now the Palo Alto Networks Traps product line), which aim to identify patterns of exploitation, through technologies such as ASLR/DEP and EMET, which break the assumptions of exploits by modifying the inner structures of programs and setting traps at "hot" places susceptible to attack, up to startups like Morphisec, which employs a unique moving target concept to deceive and capture attacks in real time. Another long-time player, and maybe the most veteran in the anti-exploitation field, is Malwarebytes. They have a comprehensive anti-exploitation offering, with capabilities ranging from in-memory deception and trapping techniques up to real-time sandboxing.
At the moment the endpoint market is still controlled by marketing money poured in by the major players, while their solutions grow ineffective at an accelerating pace. I believe this is a transition period, and you can already hear voices saying the endpoint market needs a shakeup. In the future, the anchor of endpoint protection will be real-time attack prevention, while static and behavioral scanning extensions will play a minor, feature-completing role. So pay careful attention to the technologies mentioned above, as one of them (or maybe a combination :) ) will bring the "force" back into balance :)

Advice for the CISO

Invest in closing the gap posed by vulnerabilities. From patch automation and prioritized vulnerability scanning up to security code analysis for in-house applications, it is all worth it. Furthermore, seek out solutions which deal directly with the problem of zero days; there are several startups in this area, and their contribution can be of much higher magnitude than any other security investment in the pre- or post-breach phases.

Time to Re-think Vulnerabilities Disclosure

Public disclosure of vulnerabilities has always bothered me, and I wasn't able to put a finger on the reason until now. As a person who has been personally involved in vulnerability disclosures, I highly appreciate the contribution security researchers make to awareness, and it is very hard to imagine what the world would be like without disclosures. Still, the way attacks are being crafted today, and their links to such disclosures, got me thinking about whether we are doing it in the best way possible. So I tweeted this and got a lot of "constructive feedback" :) from the team at the Ben-Gurion cyber labs along the lines of "how dare you?"

So I decided to build my argument properly.

Vulnerabilities

The basic fact is that software has vulnerabilities. Software gets more and more complex over time, and this complexity usually invites errors. Some of those errors can be abused by attackers in order to exploit the systems such software is running on. Vulnerabilities split into two groups: the ones the vendor is aware of and the ones which are unknown, and it is unknown how many unknowns are hiding inside each piece of code.

Disclosure

There are many companies, individuals, and organizations which search for vulnerabilities in software, and once they find one they disclose their findings. They disclose at least the mere existence of the vulnerability to the public and to the vendor, and many times they even publish proof-of-concept code which can be used to exploit the found vulnerability. Such disclosure serves two purposes:
  • Making users of the software aware of the problem as soon as possible
  • Making the vendor aware of the problem so it can create and send a fix to their users
After the vendor is aware of the problem, it is their responsibility to notify users formally and then to create an update for the software which fixes the bug.

Timelines

  • Past to Time of Disclosure - The unknown vulnerability waits silently, eager to be discovered.
  • Time of Disclosure to Patch Ready - Everyone knows about the vulnerability, the good guys and the bad guys, and it sits on production systems waiting to be exploited by attackers.
  • Patch Ready to System Fixed - During this period as well, the vulnerability is still there waiting to be exploited.
The following diagram demonstrates those timelines in relation to the ShellShock bug:

[Image taken from http://www.slideshare.net/ibmsecurity/7-ways-to-stay-7-years-ahead-of-the-threat]

Summary

So indeed the disclosure process eventually ends with a fixed system, but there is a long period of time during which systems are vulnerable, and attackers don't need to work hard on uncovering new vulnerabilities since they have the disclosed ones waiting for them. I started thinking about this after I saw this statistic via Tripwire: "About half of the CVEs exploited in 2014 went from publishing to pwn in less than a month" (DBIR, pg. 18). It means that half of the exploits identified during 2014 were based on published CVEs (CVE is a public vulnerability database), and although some may argue that the attackers could have had the same knowledge of those vulnerabilities before they were published, I say that is far-fetched. If I were an attacker, what would be easier than going over the recently published vulnerabilities, finding one that suits my target, and building an attack around it? Needless to say, there are tools, such as Metasploit, which also provide exploit examples for that. Of course, the time window in which to operate is not infinite, as it is in the case of an unknown vulnerability no one knows about, but still, a month or more is enough to get the job done.

Last Words

A new disclosure process should be devised, one which reduces the risk during the period from the time of disclosure up to the time a patch is ready and applied. Otherwise, we are all just helping the attackers while trying to save the world.

Most cyber attacks start with an exploit – I know how to make them go away

Yet another new ransomware with a new, sophisticated approach: http://blog.trendmicro.com/trendlabs-security-intelligence/crypvault-new-crypto-ransomware-encrypts-and-quarantines-files/ Pay attention to the key section in the description of the way it operates: "The malware arrives to affected systems via an email attachment. When users execute the attached malicious JavaScript file, it will download four files from its C&C server:" When users execute the JavaScript file, it means the JavaScript was loaded into the browser application and exploited the browser in order to get in and then start all the heavy lifting. The browser is vulnerable, software is vulnerable; it's a given fact of an imperfect world. I know a startup company called Morphisec which is eliminating those exploits in a very surprising and efficient way. In general, vulnerabilities are considered a chronic disease, but it does not have to be this way. Some smart guys and girls are working on a cure :)

Remember, it all starts with the exploit.

The Emergence of Polymorphic Cyber Defense

Background

Attackers are Stronger Now

The cyber world is witnessing a fast-paced digital arms race between attackers and security defense systems, and 2014 showed everyone that attackers have the upper hand in this match. Attackers are on the rise due to growing financial interest, which motivates a new level of sophisticated attacks that existing defenses are unmatched to combat. The fact that almost everything today is connected to the net, together with the ever-growing complexity of software and hardware, turns everyone and everything into a viable target. For the sake of simplicity, I will focus this post on enterprises as a target for attacks, although the principles described here apply to other domains.

Complexity of Enterprise IT Has Reached a Tipping Point

In recent decades, enterprise IT achieved great architectural milestones thanks to the lowering cost of hardware and the accelerating pace of technology innovation. This transformation made enterprises utterly dependent on their IT foundation, which is composed of a huge number of software packages coming from different vendors, operating systems, and devices. Enterprise IT has also become so complicated that gaining a comprehensive view of all the underlying technologies and systems has become an impossible mission. This new level of complexity takes its toll, and one part of that toll is the inability to effectively protect the enterprise's digital assets. Security tools did not evolve at the same pace as IT infrastructure, and as such their coverage is limited, resulting in a considerable number of "gaps" waiting to be exploited by hackers.

The Way of the Attacker

Attackers today can craft very advanced attacks quite quickly. The internet is full of detailed information on how to craft them, with plenty of malicious code to reuse. Attackers usually look for the path of least resistance to their target, and such paths exist today. Some who review recent APT techniques consider them not sophisticated enough; I would argue it is a matter of laziness, not professionalism – since there are so many easy paths into the enterprise today, why should attackers bother with advanced attacks? And I do not think their level of sophistication has, by any means, reached a ceiling that should make enterprises feel more relaxed.

An attack is composed of software components, and to build one the attacker needs to understand the target systems. Since IT has undergone standardization, learning which systems the target enterprise uses and finding their vulnerabilities is quite easy. For example, on every website an attacker can identify the signature of the type of web server, investigate that server in the lab, and look for common vulnerabilities in that specific software. Even simpler is to look into the CVE database and find existing vulnerabilities which have not been patched on it. Another example is Active Directory (AD), the enterprise application that holds all the organizational information. Today it is quite easy to send a malicious document to an employee; once the document is opened, it exploits the employee's Windows machine and looks for a privilege-escalation vulnerability that provides a path into AD. Even the security products and measures applied at the target enterprise can be identified by the attackers quite easily, who can later bypass them, leaving no trace of the attack. Although organizations always aim to update their systems with the latest security updates and products, there are still two effective windows of opportunity for attackers:
  • From the moment a vulnerability in specific software is disclosed, through the moment a software patch is engineered, to the point in time when the patch is actually applied to the computers running the software. This is the most vulnerable time frame, since the details of the vulnerability are publicly available and there is usually enough time before the target covers this vulnerability, greatly simplifying the job of the attacker. Usually within this time frame attackers can also find example exploitation code on the internet to reuse.
  • Unknown vulnerabilities in the software or enterprise architecture that are identified by attackers and used without any disruption or visibility since the installed security products are not aware of them.
From a historic point of view, the evolution of attacks is usually tightly coupled with the evolution of the security products they aim to bypass and, mainly, with the need to breach specific areas within the target. During my time as VP R&D of Iris Antivirus (20+ years ago) I witnessed a couple of important milestones in this evolution:
  • High-Level Attacks – Malicious code written in a high-level programming language such as Visual Basic or Java, which created a convenient platform for attackers to write PORTABLE attacks that can be modified quite easily since they are written in a high-level language, making virus detection very difficult. These high-level attacks also created, as an unintentional side effect, an efficient DISTRIBUTION channel: malicious code delivered via documents. Today this is the main distribution path for malicious code, via HTML documents, Adobe PDF files or MS Office files.
  • Polymorphic Viruses – Malicious code which hides itself from signature-driven detection tools; only at runtime is the code deciphered and executed. Now imagine a single virus serving as the basis for many variants of "hidden" code and how challenging that can be for a regular AV product. Later on, polymorphism evolved into dynamic selection and execution of the "right" code, where the attack connects to a malicious command and control server with the parameters of the environment and the server returns adaptive malicious code which fits the task at hand. This can be called runtime polymorphism.
Both "innovations" were created to evade the main security paradigm which existed back then, namely anti-viruses looking for specific byte signatures of the malicious code, and both new genres of attacks were very successful in challenging the AVs, because signatures became less deterministic. Another major milestone in the evolution of attacks is the notion of code REUSE to create variants of the same attack. There are development kits in existence which attackers can use as if they were legitimate software developers building something beneficial. The variants phenomenon has competed earnestly with AVs in a cat and mouse race for many years, and still does.

State of Security Products

Over the years, security products for dealing with malicious code have evolved alongside the threats, and the most advanced technology applied to identifying malicious code was, and still is, behavioral analysis. Behavioral analysis means the capability to identify specific code execution patterns, an approach beyond the signature detection paradigm which mainly addresses the challenge of malicious code variants (a toy sketch of behavioral-pattern matching follows the list below). Behavioral analytics can be applied at runtime on a specific machine, tracing the execution of applications, or offline via a sandbox environment such as FireEye. The latest development in behavioral analytics is the addition of predictive capabilities, aiming to predict which future execution patterns reflect malicious behavior and which are benign, in order to stop attacks before any harm is done.

Another branch of security products which aims to deal with unknown malicious code belongs to an entirely new category that mimics the air-gap security concept, referred to as containment. Containment products (there are different approaches with different value propositions, but I am generalizing here) run the code inside an isolated environment, and if something goes wrong, the production environment is left intact because the code was isolated and the attack contained. It is similar to having a 70s mainframe, which did containerization, in your pocket and in a rather seamless manner. And of course, the AVs themselves have evolved quite a bit, while their good old signature detection approach still provides value in identifying well-known and rather simplistic attacks. So, with all these innovations, how are attackers remaining on top?
  1. As I said, it is quite easy to create new variants of malicious code. It can even be automated, making the entire signature detection industry quite irrelevant. The attackers have found a way to counter the signature paradigm by simply generating a huge number of potential malicious signatures.
  2. Attackers are efficient at locating the target's easy-to-access entry points, both due to their knowledge of the systems within the target and the fact that those systems have vulnerabilities. Some attackers work to uncover new vulnerabilities, which the industry terms zero-day attacks. Most attackers, however, simply wait for new exploits to be published and enjoy the window of opportunity until they are patched.
  3. The human factor plays a serious role here, where social engineering and other methods of convincing users to download malicious files are often successful. It is easier to target the CFO with a tempting email carrying a malicious payload than to find your digital path into the accounting server. Usually the CFO has the credentials to those systems, and often there are even Excel copies of all the salaries on their computer, so it is a much less resistant path toward success.
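Going back to behavioral analysis for a moment, here is a minimal, hypothetical C sketch of the idea: a behavioral "signature" is an ordered sequence of high-level actions, so decorating the trace with extra steps does not hide it, while reordering the steps (a behavioral mutation) can still slip past a naive matcher. All event names are made up for illustration.

```c
#include <stdio.h>
#include <string.h>

/* Toy behavioral "signature": a sequence of high-level actions that, observed
 * in this order (not necessarily back to back), is considered suspicious. */
static const char *malicious_pattern[] = {
    "open_document", "spawn_process", "connect_c2", "write_executable"
};
#define PATTERN_LEN (sizeof malicious_pattern / sizeof malicious_pattern[0])

/* Returns 1 if the observed trace contains the pattern as an ordered
 * subsequence; dummy events in between do not hide the behavior. */
static int matches_behavior(const char *trace[], size_t trace_len) {
    size_t next = 0;
    for (size_t i = 0; i < trace_len && next < PATTERN_LEN; i++)
        if (strcmp(trace[i], malicious_pattern[next]) == 0)
            next++;
    return next == PATTERN_LEN;
}

int main(void) {
    /* A trace padded with extra "decoration" steps still matches, which is why a
     * behavioral signature is more robust than a byte signature... */
    const char *decorated[] = {
        "open_document", "sleep", "spawn_process", "read_registry",
        "connect_c2", "write_executable"
    };
    printf("decorated trace flagged: %d\n",
           matches_behavior(decorated, sizeof decorated / sizeof decorated[0]));

    /* ...but an attacker who reorders the steps to reach the same end state
     * (a behavioral mutation) slips past this naive ordered check. */
    const char *mutated[] = {
        "open_document", "connect_c2", "spawn_process", "write_executable"
    };
    printf("reordered trace flagged: %d\n",
           matches_behavior(mutated, sizeof mutated / sizeof mutated[0]));
    return 0;
}
```

Real products use far richer models than an ordered subsequence check, but the sketch shows both the strength over byte signatures and the kind of mutation attackers aim for.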

Enter the Polymorphic Defense Era

An emerging and rather exciting security paradigm that seems to be popping up in Israel and in Silicon Valley is called polymorphic defense. One of the main anchors contributing to successful attacks is the prior knowledge attackers have about the target: which software and systems are used, the network structure, the specific people and their roles, etc. This knowledge serves as a baseline for all targeted attacks across all stages of the attack: penetration, persistence, reconnaissance and the payload itself. All these attack steps, to be effective, require detailed prior knowledge about the target, except for reconnaissance, which complements the external knowledge with dynamically collected internal knowledge.

Polymorphic defense aims to undermine this prior-knowledge foundation and to make attacks much more difficult to craft. The idea of defensive polymorphism has been borrowed from the attacker's toolbox, where it is used to "hide" malicious code from security products. Combining polymorphism with defense simply means changing the "inners" of the target, where which part to change depends on the implementation and its role in attack creation. This is done so that these changes are not visible to attackers, making their prior knowledge irrelevant. Such morphing hides the internals of the target architecture so that only trusted sources, which need them to operate properly, are aware of them. The "poly" part is the cool factor of this approach: changes to the architecture can be made continuously and on the fly, raising the required guesswork by orders of magnitude. With polymorphism in place, attackers cannot build effective, repurposable attacks against the protected area.

This concept can be applied to many areas of security, depending on the specific target systems and architecture, but it is a revolutionary and refreshing defensive concept in the way it changes the economic equation attackers benefit from today. I also like it because, in a way, it is a proactive approach, not a passive one like many other security approaches. Polymorphic defenses usually have the following attributes (a toy sketch follows the list):
  • They are agnostic to the specific attack patterns they cover, which makes them much more resilient.
  • They integrate seamlessly into the environment, since the whole idea is to change inner parts in a way that cannot be made apparent to outsiders.
  • They make reverse engineering and propagation very difficult, due to the "poly" aspect of the solution.
  • There is always a trusted source which serves as the basis for the morphing.
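To make the concept tangible, here is a minimal, hypothetical C sketch (not any vendor's actual mechanism): an internal lookup table whose layout is re-randomized on every run from a per-run secret. A trusted caller that shares the secret always finds the real handler, while an attacker relying on prior knowledge, such as a slot number observed in a lab copy, lands on a decoy.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SLOTS 8

static void sensitive_action(void) { puts("trusted caller reached the real handler"); }
static void decoy_action(void)     { puts("decoy slot: attacker's prior knowledge missed"); }

typedef void (*handler_t)(void);

int main(void) {
    handler_t table[SLOTS];

    /* Per-run secret; in a real system this would come from a trusted source,
     * not from time(). The layout therefore "morphs" on every execution. */
    srand((unsigned)time(NULL));
    int secret_slot = rand() % SLOTS;

    for (int i = 0; i < SLOTS; i++)
        table[i] = decoy_action;           /* every other slot is a decoy/trap */
    table[secret_slot] = sensitive_action; /* the real internal layout this run */

    /* Trusted code derives the slot from the shared secret, so it always works. */
    table[secret_slot]();

    /* The attacker's prior knowledge: "the handler is always in slot 3", which was
     * true in the lab copy but is only right here by a 1-in-8 chance. */
    int attacker_hardcoded_slot = 3;
    table[attacker_hardcoded_slot]();
    return 0;
}
```

The real techniques morph much richer structures (memory layout, page structure, schemas) and without a lucky-guess loophole, but the economics in the sketch are the point: the defender changes cheaply and continuously, while the attacker's carefully gathered prior knowledge goes stale.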

The Emerging Category of Polymorphic Defense

The polymorphic defense companies I am aware of are still startups. Here are a few of them:
  • The first company that comes to mind, which takes polymorphism to the extreme, is Morphisec*, an Israeli startup still in stealth mode. Their innovative approach tackles the problem of software exploitation by continuously morphing the inner structures of running applications, which renders all known and potentially unknown exploits useless. Their future impact on the industry can be tremendous: the end of the mad race of newly discovered software vulnerabilities and software patching, and much-needed peace of mind regarding unknown software vulnerabilities and attacks.
  • Another highly innovative company that applies polymorphism in a very creative manner is Shape Security. They were the first to coin the term polymorphic defense publicly. Their technology "hides" the inner structure of web pages, which can block many problematic attacks, such as CSRF, that rely on specific known structures within the target web pages.
  • Another very cool company, also out of Israel, is CyActive. CyActive fast-forwards the future of malware evolution using bio-inspired algorithms and uses the result as training data for a smart detector which can identify and stop future variants, much like a guard that has been trained on future weapons. Their polymorphic anchor is that they outsmart the attack-variant phenomenon by automatically creating all the possible variants of a malware, thereby increasing detection rates dramatically.
I suppose there are other emerging startups which tackle security problems with polymorphism. If you are aware of any particularly impressive ones, please let me know, as I would love to update this post with more info on them :)

*Disclaimer – I have a financial and personal interest in Morphisec, the company mentioned in this post. Anyone interested in connecting with the company, please do not hesitate to send me an email and I would be happy to engage regarding this matter.

History

The idea of morphing, or randomization, as an effective barrier against attackers can be traced to various academic and commercial developments. To name one commercial example, take the Address Space Layout Randomization (ASLR) concept in operating systems. ASLR aims to deal with attacks that are written to exploit specific addresses in memory, and it breaks that assumption by moving code around in memory in a rather random manner.
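A quick way to see this randomization in practice, assuming a typical Linux or macOS system with ASLR enabled (the default on modern systems): compile the small program below and run it a few times. The printed addresses of the code, a stack variable and a heap allocation change from run to run, which is exactly the assumption a hard-coded-address exploit depends on not changing.

```c
#include <stdio.h>
#include <stdlib.h>

/* Prints the addresses of a code location, a stack variable and a heap
 * allocation. Under ASLR these typically differ between runs, so an exploit
 * that hard-codes any of them will usually miss. */
int main(void) {
    int on_stack = 0;
    void *on_heap = malloc(16);

    printf("code  (main)     : %p\n", (void *)main);
    printf("stack (local var): %p\n", (void *)&on_stack);
    printf("heap  (malloc)   : %p\n", on_heap);

    free(on_heap);
    return 0;
}
```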

The Future

Polymorphic defense is a general theoretical concept which can be applied to many different areas in the IT world, and here are some examples off the top of my head:
  • Networks – Software-defined networking provides a great opportunity for changing the internal network topology to deceive attackers and dynamically contain breaches. This can be big!
  • APIs – API protocols can be polymorphic as well and, as such, prevent malicious actors from masquerading as legitimate parties or mounting man-in-the-middle attacks (see the sketch after this list).
  • Databases – Database structures can be polymorphic too, so that only trusted parties are aware of the dynamic DB schema while others are not.
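As a purely hypothetical illustration of the API bullet above (not a description of any real protocol), here is a small C sketch in which the valid endpoint path is derived from a shared secret and the current time window. A trusted client and the server recompute the same path every few minutes, while a party without the secret cannot predict which endpoint will be valid next; the secret, the rotation interval and the path format are all made up.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

/* Simple FNV-1a hash, used here only to derive a short shared token. */
static uint64_t fnv1a(const char *data, size_t len) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= (uint8_t)data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Derive the current "polymorphic" endpoint path from a shared secret and a
 * 5-minute time window. Both trusted sides recompute this; outsiders cannot. */
static void current_endpoint(const char *secret, char *out, size_t out_len) {
    char buf[128];
    uint64_t window = (uint64_t)time(NULL) / 300;  /* rotates every 5 minutes */
    int n = snprintf(buf, sizeof buf, "%s:%llu", secret, (unsigned long long)window);
    uint64_t token = fnv1a(buf, (size_t)n);
    snprintf(out, out_len, "/api/%016llx/transfer", (unsigned long long)token);
}

int main(void) {
    char endpoint[64];
    /* Hypothetical shared secret distributed out of band to trusted parties. */
    current_endpoint("example-shared-secret", endpoint, sizeof endpoint);
    printf("valid endpoint for this window: %s\n", endpoint);
    printf("an attacker guessing a static path like /api/v1/transfer will miss\n");
    return 0;
}
```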
So, polymorphic defense seems to be a game-changing security trend which can potentially change the balance between the bad guys and the good guys…and ladies too, of course.

UPDATE Feb 11, 2015: On Reddit I got some valid feedback that this is the same as the MTD (Moving Target Defense) concept, and indeed that is right. In my eyes, the main difference is that polymorphism is more generic: it is not specifically about changing location as a means of deception, but also about creating many forms of the same thing to deceive the attackers. But that is just a matter of personal interpretation.

A Tectonic Shift in Superpowers, or What the Sony Hack Uncovered for Everyone Else

The Sony hack has flooded my news feed in recent weeks, with everyone talking about how it was done, why, whom to blame, the trail leading to North Korea and the politics around it. I've been following the story since the first report with an unexplained curiosity, and I was not sure why, since I read about hacks all day long. A word of explanation about my "weird" habit of following hacks continuously: being the CTO of the Ben-Gurion University Cyber Security Labs comes with responsibility, and part of it is staying on top of things :) Later on, the reason for my curiosity became clear to me.

As background, for those who are deep in the security industry it is already well known, although not necessarily spoken out loud, that attackers are pretty far ahead of enterprises in terms of sophistication. The number of reported cyber attacks in the last two years shows a steep upward curve, and if you add to that roughly three times as many unreported incidents, anyone can see it is exploding. And although many criticized Sony for their poor security measures, I don't think they are the ones to blame. They were caught in a game beyond their league. Beyond any enterprise's league. The reasons attackers have become far more successful are:

  • They know how to better disguise their attacks, using form-changing techniques (polymorphism) and others.
  • They know the common weaknesses in enterprise IT quite well. You can install almost any piece of software in your lab and just look for weaknesses all day long.
  • They have more money to pour into learning the specifics of their targets, and thanks to that they build elaborate, targeted attacks. In the case of state-sponsored attacks, the funds are unlimited.
  • Defensive technologies within the enterprise are still dominated by tools invented ten years ago, when attacks were more naive, if that can be said. Today a big wave of new security technologies is emerging which can be much more effective, though it will take enterprises time to adopt them.
So it is fair to say that enterprises are, in a way, sitting ducks for targeted attackers, and I am not exaggerating here. The Sony story was different from others for two main reasons:
  • The attack allegedly originated from, and was backed by, a specific nation. I say allegedly because unless you find the evidence on someone's computers you can't be sure, and even then that person could have been framed by the real attackers. Professionals can quite easily cover their traces, and the attackers here are professionals.
  • The results of the attack are devastating, and their publicity turned them into a nightmare for any CEO on earth. A warning sign to the free world.
And Sony, due to their bad luck, got caught in the middle.

[Image taken from http://www.politico.com/story/2014/12/no-rules-of-cyber-war-113785.html]

The End of Superpowers

From a high-level view, it does not matter whether it was North Korea or not. The fact that such an event happened, in which a state potentially attacked a private company, with consequences that are quite clear and a striking lack of ramifications, opens the path for it to happen again and again, and that is what makes it a game changer. Every nation in the world now understands it has a free ticket to a new playground, with different rules of engagement and, more importantly, a different balance of power.

In the physical world, power has always been attributed to the amount of firepower you've got, and naturally the amount of firepower correlates tightly with the economic strength of the nation. The US is a superpower. Russia is a superpower. In the cyber world these rules do not necessarily apply: a small group of very smart people with very simple, cheap tools can wreak havoc on a target. It is not easy, but it is possible. The attackers are often limited only by their creativity. In the cyber world, size matters less.

Our lifestyle and lifeblood have become dependent on IT; our electricity, water, food, defense, entertainment, finance and almost everything else work only if the underlying IT is functioning properly. Cyber warfare means attacking the physical world by digital means, and the results can be no less devastating than any other type of attack. They can be worse, since IT also presents new single points of failure. So if cyber wars can cause harm like real wars, and size matters less, doesn't that mean the rules of the game have changed forever?

Question of Responsibility

As soon as I heard that North Korea might be responsible for the attack, I understood that Sony was caught in an unfair game, and the big question is about the role of the government in defending the private sector, how and to what extent. Going back again to the physical world: had a missile been launched from North Korea onto Sony's headquarters, the story and the reaction would have been very different and quite predictable. This comparison is valid since the damage such a missile could cause the company is probably smaller from an economic perspective, not taking into account, of course, human casualties. I am not saying cyber attacks can't cause casualties; I am just saying that this one did not. So why is there a difference in the stance of the US government? Why did Sony not ask for help and nationwide defense? The era of cyber warfare removes the clear distinction between criminal acts and nation-level offensive acts, and a new line of thought should emerge.

So what does the future hold for us?

  • A big wave of cyber attacks coming from everywhere on the globe. The "good" results of this attack will surely provide a sign of hope for all the people in the world who have felt militarily inferior. Attackers always go for the weakest links, so we will see more enterprises attacked like Sony, and in more severe ways. A long, complicated, stealthy war.
  • A big wave of security technologies which aim to solve these problems, coming from the private and government sectors. Security startups and established players in a way "enjoy" these developments, as the need for new solutions is rising steeply. I personally know some startups in Israel that can take away the current advantage attackers enjoy, with technologies such as polymorphic cyber defense. I will elaborate on that in a future post, since it deserves one of its own.
  • A long debate about who is responsible for what, and about which measures can be taken in the meantime - cutting off the internet across the globe won't help anyone, since today there are many ways to launch attacks from different geographic locations, so location doesn't matter anymore. It won't be easy to create a solution which is effective on the one hand and does not limit the freedom to communicate on the other.

Meanwhile, you can gaze a bit at the emerging battleground

[Image taken from a live attacks monitor on IPVKing]
