Some Of These Rules Can Be Bent, Others Can Be Broken

Cryptography is a serious topic: a technology built on mathematical foundations that poses an ever-growing challenge for attackers. On November 11th, 2016, Motherboard published a piece about the FBI's ability to break into suspects' locked phones. Contrary to the FBI's continuous complaints about "going dark" in the face of strong encryption, the number of phones they actually managed to break into was quite high. That high success ratio at penetrating locked phones doesn't quite add up - it is not clear what was so special about the devices they could not break into. Similar phone models run the same crypto algorithms, and if there was a way to break into one phone, why couldn't they break into all of them? Maybe the FBI found an easier path into the locked phones than breaking the encryption. Possibly they crafted a piece of code that exploits a vulnerability in the phone OS, maybe a zero-day vulnerability known only to them. Locked smartphones keep some parts of the operating system active even when they are merely turned on, and illegal access, in the form of exploitation of those active areas, can circumvent the encryption altogether. To be honest, I don't know what happened there and this is all speculation, but the story provides a glimpse into the other side, the attacker's point of view, and that is the topic of this post. What an easy life attackers have: they are not bound by the rules of the system they want to break into, and they need to find only one unwatched hole. Defenders, who carry the burden of protecting the whole system, need to make sure every potential hole is covered while staying bound to the system's rules - an asymmetric burden that results in an unfair advantage for attackers.

The Path of Least Resistance

If attackers had an ideology and laws, the governing one would be "Walk the Path of Least Resistance" - it is reflected over and over in their mentality and methods of operation. Wikipedia's definition fits the hacker's state of mind perfectly:
The path of least resistance is the physical or metaphorical pathway that provides the least resistance to forward motion by a given object or entity, among a set of alternative paths.
In the cyber world there are two dominant roles, the defender and the attacker, and both deal with the exact same subject: the existence of an attack on a specific target. I used to think that the two views would be exact opposites of each other, since the subject matter, the attack, is the same and the interests are inversely aligned, but that is not the case. For the sake of argument I will dive into the domain of enterprise security, though the logic serves as a general principle applicable to other security domains.

In the enterprise world the security department, the defender, roughly does two things. First, it needs to know very well the architecture and assets of the system it protects: its structures and its interconnections with other systems and with the external world. Second, it needs to devise defense mechanisms and a strategy that on one hand allow the system to continue functioning and on the other hand eliminate possible entry points and paths that attackers could abuse on their way in. As a side note, achieving this fine balance resembles the mathematical branch of constraint satisfaction problems.

Now let's switch to the other point of view, the attacker's. The attacker needs to find only a single path into the enterprise in order to achieve their goal. No one knows the attacker's actual goal in advance, though it probably fits one of the following categories: theft, extortion, disruption or espionage, and within each category the goals are very specific. The attacker is laser-focused on a specific target, so the learning curve required for building an attack is limited and bounded to that specific interest. For example, the attacker does not need to care about the overall data center network layout if all they want is information about employee salaries, a document that probably resides in the headquarters office. Another big factor in favor of attackers is that some of the possible paths toward the target include the human factor, and humans, as we all know, have flaws, vulnerabilities if you like, and from the attacker's standpoint these weaknesses are proper means for achieving the goal. From all the paths an attacker can theoretically select from, the ones with the highest success ratio and minimal effort are preferable, hence the path of least resistance.

The Most Favorite Path in The Enterprise World

Today the most popular path of least resistance is to infiltrate the enterprise by exploiting human weaknesses, usually in the form of minimal online trust building, where the target employee is eventually led to activate a malicious piece of code, for example by opening an email attachment. The software stack employees have on their computers is quite standard in most organizations: mostly MS-Windows operating systems, the same document processing applications, and the highly popular web browsers. This stack is easily replicated in the attacker's environment and used for finding potential points of infiltration in the form of un-patched vulnerabilities. The easiest way to find a target vulnerability is to review the most recent vulnerabilities uncovered by others and reported as CVEs. There is a window of opportunity for attackers between the time the public is made aware of a new vulnerability and the time an organization actually patches the vulnerable software. Some statistics say that within many organizations this window of opportunity stretches into months, as rolling out patches across an enterprise is painful and slow. Attackers that want to really fly below the radar and reach high success ratios search for zero-day vulnerabilities, or just buy them somewhere. Finding a zero-day is possible since software has become overly complex, with many different technologies embedded in products, which increases the chances that vulnerabilities exist - the patient and persistent attacker will always find their zero-day. Once an attacker acquires that special exploit code, the easier part of the attack path comes into play: finding a person in the organization who will open the malicious document. This method of operation is an order of magnitude easier than learning in detail the organization's internal structures and finding vulnerabilities in proprietary systems such as routers and server applications, where access to the technology is not straightforward. In the recent WannaCry attack we witnessed an even easier path into organizations: a weakness in enterprise computers with an open network vulnerability that can be exploited from the outside without any human intervention. Going back to the case of the locked phones, it is far easier to find a vulnerability in the code of the operating system that runs on the phone than to break the crypto and decrypt the encrypted information.

We Are All Vulnerable

Human vulnerabilities span beyond inter-personal weaknesses such as deceiving someone into opening a malicious attachment. They also exist in the products we design and build, especially in the worlds of hardware and software, where complexity has surpassed humans' ability to comprehend it. Human weaknesses extend to the misconfiguration of systems, one of the easiest and most favored paths for cyber attackers. The world of insider threats is also, many times, based on human weaknesses exploited and extorted by adversaries. Attackers have found their golden path of least resistance, and it always lies on the boundaries of human imperfection. The only way for defenders to handle such inherent weaknesses is to break the path of least resistance down into parts and make the easier parts more difficult. That would force a shift in the attackers' method of operation and send them searching for other easy ways in, where hopefully, over time, it will become harder overall.

Deep into the Rabbit Hole

Infiltrating an organization by inducing an employee to activate malicious code rests on two core weak points: the human factor, which is quite easy to exploit, and the ease of finding a technical vulnerability in the software used by the employee, as described earlier. There are multiple defense approaches addressing the human factor, mostly revolving around training and education, and the expected improvement is linear and slow. Addressing the second, technical weakness is today one of the main lines of business in the world of cyber security: endpoint protection, and more precisely, preventing infiltration.

Tackling The Ease of Finding a Vulnerability

Vulnerability disclosure practices, which serve as the basis for many attacks within the window of opportunity, have been scrutinized for many years, and there is real progress toward the goal of achieving a fine balance between awareness and risk aversion. Still, we are not there yet, since there is no bulletproof way to isolate attackers from this public knowledge. The area of advanced threat intelligence collaboration tools may evolve in that direction, though it is too early to say. It is a tricky matter to solve, as it is everybody's general problem and at the same time nobody's specific problem. The second challenge is the fact that if a vulnerability exists in application X, and there is malicious code that can exploit it, then it will work anywhere application X is installed.
Different Proactive Defense Approaches
There are multiple general approaches towards preventing such an attack from taking place:
Looking for Something
This is the original anti-virus paradigm: searching data for known digital signatures of malicious code. The inspection takes place when data flows through the network, when it is moved around in the memory of the computing device, and at rest when it is persisted as a file (in case it is not a fully in-memory attack). Due to attackers' sophistication with malicious code obfuscation and polymorphism, where infinite variations of digital signatures of the same malicious code can be created, this approach has become less effective. The signatures approach is still highly effective against old threats spreading across the Internet and viruses written by novice attackers. In the layered defense thesis, signatures are the lowest defense line and serve as an initial filter for the noise.
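To make the signature idea concrete, here is a minimal, hypothetical sketch in Python: a toy pattern database matched against a data blob. The names and byte patterns are invented for illustration, they are not real malware signatures, and real engines use far richer formats (offsets, wildcards, section hashes) and optimized matching.

```python
# Illustrative signature database: name -> byte pattern expected inside a sample.
SIGNATURES = {
    "DemoDropper.A": b"\x4d\x5a\x90\x00demo-dropper",
    "DemoMacro.B": b"AutoOpen(); DownloadAndRun(",
}

def scan_bytes(data: bytes) -> list:
    """Return the names of all signatures found in the given data blob."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_file(path: str) -> list:
    """Scan a file at rest, the same way network or memory buffers would be scanned."""
    with open(path, "rb") as f:
        return scan_bytes(f.read())

if __name__ == "__main__":
    sample = b"...header...AutoOpen(); DownloadAndRun('http://example.invalid/x')..."
    print(scan_bytes(sample))   # ['DemoMacro.B']
```

A single flipped byte inside the pattern region would already defeat this kind of exact matching, which is exactly the weakness that obfuscation and polymorphism abuse.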
Looking at Something
Here, instead of looking for the digital fingerprint of a specific virus, the search is for behavioral patterns of the malicious code. Behavioral patterns mean, for example, the unique sequence of system APIs accessed, the functions called, and the frequency of execution of different parts of the virus code. This category was invented quite a long time ago and now enjoys a renaissance thanks to the advanced pattern recognition capabilities of artificial intelligence. The downside of AI in this context is inherent in the way AI works, namely fuzziness. Fuzzy detection leads to false alarms, a phenomenon that adds to the already growing shortage of analysts needed to decide which alarms are real and which are not. The false alarm rates I hear about today are still the majority, in the high double digits, and some vendors solve this by providing full SIEM management behind the scenes, which includes filtering false alarms manually. Another weakness of this approach is the fact that attackers have evolved into mutating the behavior of the attack: creating variations of the virus logic while making sure the result stays the same, variations that go unnoticed by the pattern recognition mechanism - there is a field called adversarial AI which covers this line of thinking. The most serious drawback of this approach is that these mechanisms are blind to in-memory malicious activity, an inherent blindness to a big chunk of the exploitation logic that is, and will always stay, in memory. This blind spot has been identified by attackers and is being abused with fileless attacks and the like. This analysis reflects the current state of AI as integrated and commercialized in the domain of endpoint threat detection. AI has made major advancements recently that have not yet been applied in this cyber domain, developments that could create a totally different impact.
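As a minimal sketch of the behavioral idea, assume the "signature" is an ordered set of suspicious API calls that must appear, in order, somewhere in an observed call trace. The API names, the pattern, and the trace below are illustrative only; real products model far richer features (arguments, frequencies, timing) and increasingly learn the patterns statistically rather than hand-coding them.

```python
# A behavioral "signature": an ordered set of suspicious actions rather than raw bytes.
# The pattern below loosely resembles classic process-injection steps; it is illustrative only.
SUSPICIOUS_SEQUENCE = ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"]

def matches_behavior(observed_calls: list, pattern: list) -> bool:
    """True if `pattern` appears as an ordered subsequence of the observed call trace."""
    it = iter(observed_calls)
    return all(call in it for call in pattern)

trace = ["CreateFile", "OpenProcess", "ReadFile", "VirtualAllocEx",
         "WriteProcessMemory", "Sleep", "CreateRemoteThread"]
print(matches_behavior(trace, SUSPICIOUS_SEQUENCE))   # True: the trace looks suspicious
```

Reordering independent steps, or splitting the work across processes so the observable sequence changes, is precisely the kind of behavioral mutation described above.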
Encapsulation
There is a rising concept in the world of cyber security which aims to tackle the ease of learning a target environment and creating exploits that work on any similar system. The concept is called moving target defense, and it builds on the idea that if the inner parts of a system are known only to its legitimate users, attack attempts by outsiders will be thwarted. It is essentially an encapsulation concept, similar to the one in the object-oriented programming world, where external code cannot access the inner functionality of a module without permission. In cyber security the implementation differs depending on the technical domain, but it preserves the same information hiding principle. This emerging category is highly promising toward the goal of changing the cyber power balance by taking attackers out of their current path of least resistance. Moving target defense innovation exists in different domains of cyber security. In endpoint protection, it touches the heart of the attackers' assumption that the internal structures of applications and the OS stay the same, so their exploit code will work perfectly on the target. The concept is quite simple to understand (and very challenging to implement): continuously move and change the internal structures of the system so that, on one hand, legitimate code continues functioning as designed while, on the other hand, malicious code that makes assumptions about the internal structure fails immediately. This defense paradigm seems highly durable, as it is agnostic to the type of attack.
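Here is a toy model of that idea, just to make the mechanics tangible: legitimate code receives the randomized mapping at load time, while injected code that relies on the well-known names fails immediately. Real implementations morph memory layout and binary structures rather than Python dictionaries; this is only a conceptual sketch with invented names.

```python
import random

# Toy model of moving target defense: the "real" entry points get random names at load
# time, and only trusted code receives the mapping.
ORIGINAL_API = {"read_secret": lambda: "s3cr3t", "send_mail": lambda: "sent"}

def morph(api: dict) -> tuple:
    """Return (morphed_table, trusted_mapping); untrusted code only ever sees the morphed table."""
    mapping = {name: f"fn_{random.getrandbits(32):08x}" for name in api}
    morphed = {mapping[name]: fn for name, fn in api.items()}
    return morphed, mapping

morphed_table, trusted_map = morph(ORIGINAL_API)

# Legitimate caller: resolves through the trusted mapping handed to it at load time.
print(morphed_table[trusted_map["read_secret"]]())             # works as designed

# Injected code: assumes the original, well-known name and breaks immediately.
print(morphed_table.get("read_secret", "unknown symbol -> attack fails"))
```

Because every process (or every run) gets a different mapping, an exploit built and tested in the attacker's lab carries assumptions that simply do not hold on the target.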

Recommendation

The focus of the security industry should be on devising mechanisms that make the current popular path of least resistance not worthwhile for attackers, letting them waste time and energy searching for a new one.

Is It GAME OVER?

Targeted attacks take many forms, though there is one common tactic most of them share: exploitation. To achieve their goal, they need to penetrate different systems along the way, and this is done by exploiting unpatched or unknown vulnerabilities. The more common forms of exploitation happen via a malicious document which exploits vulnerabilities in Adobe Reader, or a malicious URL which exploits the browser, in order to set a foothold inside the endpoint computer. "Zero day" is the buzzword in the security industry today, and everyone uses it without necessarily understanding what it really means. It hides a complex world of software architectures, vulnerabilities, and exploits that only a few thoroughly understand. Someone asked me to explain the topic, again, and when I delved deep into the explanation I realized something quite surprising. Please bear with me, this is going to be a long post :-)

Overview

I will begin with definitions of the different terms in this area. These are my own personal interpretations of them…they are not taken from Wikipedia.

Vulnerabilities

This term usually refers to problems in software products: bugs, bad programming style or logical flaws in the implementation of software. Software is not perfect, and some might argue it never can be. Furthermore, the people who build the software are even less perfect, so it is safe to assume such problems will always exist in software products. Vulnerabilities exist in operating systems, in runtime environments such as Java and .NET, and in specific applications, whether written in high-level languages or native code. Vulnerabilities also exist in hardware products, but for the sake of this post I will focus on software, as the topic is broad enough even with this focus. One of the main contributors to the existence and growth in the number of vulnerabilities is the ever-growing complexity of software products, which increases the odds of creating new bugs that are difficult to spot precisely because of that complexity. Vulnerabilities always relate to a specific version of a software product, which is basically a static snapshot of the code used to build the product at a specific point in time. Time plays a major role in the business of vulnerabilities, maybe the most important one. Assuming vulnerabilities exist in all software products, we can categorize them into three groups based on the level of awareness of them:
  • Unknown Vulnerability - A vulnerability which exists in a specific piece of software but of which no one is aware. There is no proof that it exists, but experience teaches us that it does and that it is just waiting to be discovered.
  • Zero Day - A vulnerability which has been discovered by a certain group of people or a single person while the vendor of the software is not aware of it, so it is left open, without a fix or any awareness of its presence.
  • Known Vulnerabilities - Vulnerabilities which have been brought to the awareness of the vendor and of customers, either privately or as public knowledge. Such vulnerabilities are usually identified by a CVE number. During the first period following discovery, the vendor works on a fix, or a patch, which will become available to customers. Until customers update the software with the fix, the vulnerability remains open to attack. So in this category, each installation of the software can have patched or un-patched known vulnerabilities. In a way, the patch always comes with a new software version, so a specific product version either contains un-patched vulnerabilities or it does not - there is no such thing as a patched vulnerability, there are only new versions with fixes.
There are other ways to categorize vulnerabilities: by the exploitation technique, such as buffer overflow or heap spraying, or by the type of bug which leads to them, such as a logical flaw in the design or an incorrect implementation.

Exploits

An exploit is a piece of code which abuses a specific vulnerability in order to make the attacked software do something unexpected. That means either gaining control of the execution path inside the running software so the exploit can run its own code, or just achieving a side effect such as crashing the software or causing it to do something unintended by its original design. Exploits are usually highly associated with malicious intentions, although from a technical point of view an exploit is just a mechanism to interact with a specific piece of software via an open vulnerability - I once heard someone refer to it as an "undocumented API" :).

This picture from Infosec Institute describes a vulnerability/exploits life cycle in an illustrative manner:


The time span colored in red represents the period during which a found vulnerability is considered a zero day, and the span colored in green the period during which the vulnerability is known but un-patched. The post-disclosure risk is always dramatically higher, as the vulnerability becomes public knowledge and the bad guys can, and do, exploit it at a much higher frequency than in the earlier stage. Closing the gap on the patching period is the only step that can be taken to reduce this risk.
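As a back-of-the-envelope illustration of those two windows, here is a tiny sketch with purely invented dates:

```python
from datetime import date

# Illustrative dates only; every real vulnerability has its own timeline.
discovered    = date(2016, 1, 10)   # found by a researcher or an attacker
disclosed     = date(2016, 3, 1)    # made public, CVE assigned, vendor aware
patch_applied = date(2016, 6, 15)   # enterprise finishes rolling out the fix

zero_day_window  = (disclosed - discovered).days       # the "red" span
unpatched_window = (patch_applied - disclosed).days    # the "green" span

print(f"zero-day exposure: {zero_day_window} days, known-but-unpatched exposure: {unpatched_window} days")
```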

The Math Behind Targeted Attacks

Most targeted attacks today use the exploitation of vulnerabilities to achieve three goals:
  • Penetrate an employee endpoint computer using different techniques, such as malicious documents sent by email or malicious URLs. Those malicious documents/URLs contain malicious code which seeks specific vulnerabilities in host programs such as the browser or the document reader, and during a seemingly innocent reading experience the malicious code sneaks into the host program, using it as a penetration point.
  • Gain higher privileges once malicious code already resides on a computer. Many times the attack that managed to sneak into the host application does not have enough privileges to continue against the organization, so the malicious code exploits vulnerabilities in the runtime environment of the application, which can be the operating system or the JVM for example, vulnerabilities which help it gain elevated privileges.
  • Lateral movement - once the attack enters the organization and wants to reach other areas in the network to achieve its goals, many times it exploits vulnerabilities in other systems which reside on its path.
So, from the point of view of the attack itself, we can definitely identify three main stages:
  • An attack at Transit Pre-Breach - This state means an attack is moving around on its way to the target and in the target prior to exploitation of the vulnerability.
  • An attack at Penetration - This state means an attack is exploiting a vulnerability successfully to get inside.
  • An attack at Transit Post-Breach - This state means an attack has started running inside its target and within the organization.
The following diagram quantifies the complexity inherent in each attack stage from both the attacker's and the defender's side; below the diagram there are descriptions of each area, followed by a concluding part:

Ability to Detect an Attack at Transit Pre-Breach

Those are the red areas in the diagram. Here an attack is on its way, prior to exploitation; "on its way" meaning that the enterprise can scan the binary artifacts of the attack, whether in the form of network packets, a visited website, or a specific document traveling via email servers or arriving at the target computer. This approach is called static scanning. The enterprise can also emulate the expected behavior of the artifact (opening a document in a sandboxed environment, for example) in a controlled environment and try to identify patterns in the behavior of the sandbox that resemble a known attack pattern - this is called behavioral scanning. Attacks pose three challenges to security systems at this stage:
  • Infinite Signature Mutations - Static scanners look for specific binary patterns in a file which should match a malicious code sample in their database. Attackers have long outsmarted these tools: they have automation for changing those signatures randomly, with the ability to create an infinite number of static mutations. So a single attack can take an infinite number of forms in its packaging.
  • Infinite Behavioural Mutations - The security industry's evolution from static scanners was towards behavioral scanners, where the "signature" of a behavior eliminates the problems induced by static mutations and the sample base of behaviors is dramatically smaller. A single behavior can be decorated with many static mutations, and behavioral scanners reduce this noise. The challenges posed by attackers make behavioral mutations infinite in nature as well, and they are two-fold:
    • Infinite number of mutations in behaviour - In the same way that attackers outsmart static scanners by creating an infinite amount of static decorations on the attack, here too they can add dummy steps or reshuffle the attack steps so that the end result stays the same but, from a behavioral pattern point of view, the attack presents a different behavior. The spectrum of behavioral mutations seemed at first narrower than that of static mutations, but with the advancement of attack generators even that has been achieved.
    • Sandbox evasion - Attacks which are scanned for bad behavior in a sandboxed environment have developed advanced capabilities for detecting whether they are running in an artificial environment; if they detect that they are, they pretend to be benign and perform no exploitation. This is an ongoing race between behavioral scanners and attackers, and the attackers seem to have the upper hand.
  • Infinite Obfuscation - This technique connects to the infinite static mutations factor but deserves specific attention. In order to deceive static scanners, attackers hide the malicious code itself by running some transformation on it, such as encryption, and shipping a small piece of code responsible for decrypting it on the target prior to exploitation (see the sketch right after this list). Again, the range of options for obfuscating code is infinite, which makes the static scanners' work even more difficult.
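To make the obfuscation trick concrete, here is a harmless sketch: the payload (a benign string standing in for the malicious logic) is XOR-packed with a random key, so its byte signature changes with every packaging, and a tiny stub restores it only at runtime on the target. The function names and the payload are invented for illustration.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """XOR-encode/decode data with a repeating key (the same function works both ways)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"print('this benign string stands in for the exploit logic')"
key = os.urandom(8)            # a fresh key makes every packaged copy look different
packed = xor(payload, key)     # what a static scanner sees: essentially random bytes

# The small "decryptor stub" shipped alongside the packed blob restores it on target:
assert xor(packed, key) == payload
print(packed[:16].hex(), "...")   # no recognizable byte signature survives the packing
```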
This makes the challenge of capturing an attack prior to penetration very difficult, bordering on impossible, and the difficulty definitely increases over time. I am not by any means implying that such security measures don't serve an important role; today they are the main safeguards keeping the enterprise from turning into a zoo. I am just saying it is a very difficult problem to solve, and that there are other areas, in terms of ROI (if such a thing as security ROI exists), in which a CISO would do better to invest.

Ability to Stop an Attack at Transit Post Breach

Those are the black areas in the diagram. An attack which has already gained access to the network can take an infinite number of possible paths to achieve its goals. Once an attack is inside the network, the relevant security products try to identify it. Such technologies revolve around big data/analytics that try to identify activities in the network implying malicious behavior, or network monitors which listen to the traffic and try to identify artifacts or static behavioral patterns of an attack. These tools rely on different informational signals which serve as attack indicators. Attacks pose multiple challenges to security products at this stage:
  • Infinite Signature Mutations, Infinite Behavioural Mutations, Infinite Obfuscation - these are the same challenges as described before since the attack within the network can have the same characteristics as the ones before entering the network.
  • Limited Visibility on Lateral Movement - Once an attack is inside, its next steps are usually to gain a stronghold in different areas of the network, and such movement is hardly visible as it ultimately consists of legitimate actions: once an attacker gains higher privileges, it conducts actions which are considered legitimate but privileged, and it is very difficult for a machine to separate the good ones from the bad ones. Add on top of that the fact that persistent attacks usually use technologies which enable them to remain stealthy and invisible.
  • Infinite Attack Paths - The path an attack can take inside the network, especially considering that in a targeted attack the attacker's goals are unknown to the enterprise, has infinite options.
This makes the ability to deduce that there is an attack, its boundaries, and its goals from specific signals coming from different sensors in the network very limited. Sensors deployed on the network never provide true visibility into what is really happening, so the picture is always partial. Add to that deception techniques about the path of attack and you stumble into a very difficult problem. Again, I am not arguing that security analytics products which focus on post-breach are not important; on the contrary, they are very important. I am just saying this is only the beginning of a very long path toward real effectiveness in that area. Machine learning is already playing a serious role, and AI will definitely be an ingredient in a future solution.

Ability to Stop an Attack at Penetration Pre-Breach and on Lateral Movement

Those are the dark blue areas in the diagram. Here the challenge is reversed, working against the attacker, since there is only a limited number of entry points into the system: entry points, a.k.a. vulnerabilities. Those are:
  • Unpatched Vulnerabilities – These are open "windows" which have not been covered yet. The main challenge here for the IT industry is automation, dynamic updating capabilities, and prioritization (a small prioritization sketch follows after this list). It is definitely an open gap which can potentially be narrowed down to the point of insignificance.
  • Zero Days – This is an unsolved problem. There are many approaches to it, such as ASLR and DEP on Windows, but still there is no bulletproof solution. In the startup scene, I am aware of quite a few companies working very hard on a solution. Attackers identified this soft belly a long time ago, and it is the main weapon of choice for targeted attacks, which can potentially yield serious gains for the attacker.
This area presents a definite problem, but in a way it seems the most likely to be solved earlier than the other areas, mainly because at this stage the attacker is at its greatest disadvantage: right before getting into the network it has infinite options for disguising itself, and after getting in the action paths it can take are infinite, but here the attacker needs to go through a specific window, and there aren't too many of those left unprotected.
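As a small illustration of the prioritization part of closing the un-patched gap, here is a sketch that ranks open findings by severity and exposure. The CVE identifiers, fields, and weights are invented for the example; a real program would feed this from a vulnerability scanner and an asset inventory.

```python
# Illustrative inventory of open (un-patched) findings; identifiers and scores are made up.
findings = [
    {"cve": "CVE-XXXX-0001", "cvss": 9.8, "internet_facing": True,  "exploit_public": True},
    {"cve": "CVE-XXXX-0002", "cvss": 6.5, "internet_facing": False, "exploit_public": True},
    {"cve": "CVE-XXXX-0003", "cvss": 8.1, "internet_facing": True,  "exploit_public": False},
]

def priority(finding: dict) -> float:
    """Simple weighting: base severity, amplified when the asset is exposed or exploit code is public."""
    score = finding["cvss"]
    if finding["internet_facing"]:
        score *= 1.5
    if finding["exploit_public"]:
        score *= 1.5
    return score

for finding in sorted(findings, key=priority, reverse=True):
    print(finding["cve"], round(priority(finding), 1))
```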

Players in the Area of Penetration Prevention

There are multiple companies/startups which are brave enough to tackle the toughest challenge in the targeted attacks game, preventing infiltration - I call it facing the enemy at the gate. In this ad-hoc list I have included only technologies which aim to block attacks in real time; there are many other startups which approach static or behavioral scanning in a unique and disruptive way, such as Cylance and CyberReason or Bit9 + Carbon Black (list from @RickHolland), which were excluded for the sake of brevity and focus.

Containment Solutions

Technologies which isolate the user applications within a virtualized environment. The philosophy is that even if there is an exploitation in the application, it won't propagate to the computer environment, and the attack will be contained. From an engineering point of view, I think these guys have the most challenging task, as the balance between isolation and usability works against productivity, and it all involves virtualization on an endpoint, which is a difficult task on its own. Leading players are Bromium and Invincea, well-established startups with very good traction in the market.

Exploitation Detection & Prevention

Technologies which aim to detect and prevent the actual act of exploitation. These range from companies like Cyvera (now the Palo Alto Networks Traps product line), which aim to identify patterns of exploitation, through technologies such as ASLR/DEP and EMET, which aim to break the assumptions of exploits by modifying the inner structures of programs and setting traps at "hot" places susceptible to attacks, up to startups like Morphisec, which employs a unique moving target concept to deceive and capture attacks in real time. Another long-time player, and maybe the most veteran in the anti-exploitation field, is MalwareBytes. They have a comprehensive anti-exploitation offering with capabilities ranging from in-memory deception and trapping techniques up to real-time sandboxing.
At the moment the endpoint market is still controlled by marketing money poured in by the major players, while their solutions grow ineffective at an accelerating pace. I believe this is a transition period, and you can already hear voices saying the endpoint market needs a shakeup. In the future, the anchor of endpoint protection will be real-time attack prevention, with static and behavioral scanning extensions playing a minor, feature-completion role. So pay careful attention to the technologies mentioned above, as one of them (or maybe a combination:) will bring the "force" back into balance:)

Advice for the CISO

Invest in closing the gap posed by vulnerabilities. From patch automation and prioritized vulnerability scanning up to security code analysis for in-house applications, it is all worth it. Furthermore, seek out solutions which deal directly with the problem of zero days; there are several startups in this area, and their contribution can be of much higher magnitude than any other security investment in the post- or pre-breach phases.

Exploit in the Wild, Caught Red-Handed

Imagine a futuristic security technology that can stop any exploit at the exact moment of exploitation, regardless of the way the exploit was built, its evasion techniques, or any mutation it has or could be imagined to have. A technology truly agnostic to any form of attack: the attack prevented and its attacker caught red-handed at the exact point in time of the exploit... Sounds dreamy, no? For the guys at the stealth startup Morphisec it's a daily reality. So I decided to convince the team in the malware analysis lab to share some of their findings from today, and I have to brag about it a bit:)
 

Exploit Analysis

The target software is Adobe Flash, and the vulnerability is CVE-2015-0359 (Flash up to 17.0.0.134). Today the team got a fresh sample which was uploaded to VirusTotal 21 hours ago! At the moment we received it from VirusTotal, the scan results showed that no security tool on the market detected it except for McAfee GW Edition, which generally identified its malicious activity.
 
The guys at Morphisec love samples like these because they allow them to test the product against what is considered a zero-day, or at least an unknown attack. Within an hour, the CVE/vulnerability exploited by the attack and the method of exploitation were already identified.
 

Technical Analysis

Morphisec prevents the attack when it starts to look for the Flash module address (which would later be used to find gadgets). The vulnerability allows the attacker to modify the size of a single array (out of many sequentially allocated arrays of size 0x3fe). The size of one array (at index [401]) is modified to 0x40000001 so that it covers the entire memory space. The first double word in this array points to a read-only section inside the Flash module. The attacker uses this address as the starting point of an iteration dedicated to an MZ search (MZ indicates the start of the library); after the leaked read-only pointer is aligned to a 64k boundary, each search step is 64k long. Once the attacker finds the MZ, it validates the NT signature of the module, gets the code base pointer and size, and from that point the attack searches for gadgets in the code of the Flash module.
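For readers who want to visualize that module-discovery step, here is a simplified Python model: given one leaked pointer into a loaded module, walk backward in 64 KB steps until the MZ header is found and the PE (NT) signature confirms it. The memory layout, base address, and offsets below are a toy reconstruction for illustration, not the actual exploit code or the real Flash process layout.

```python
import struct

STEP = 0x10000  # modules are loaded on 64 KB boundaries on Windows

def build_fake_memory():
    """Build a toy flat memory image with one fake module and return (memory, base, leaked_ptr)."""
    module = bytearray(0x30000)
    module[0:2] = b"MZ"                                  # DOS header magic
    e_lfanew = 0x80
    module[0x3C:0x40] = struct.pack("<I", e_lfanew)      # offset of the PE header
    module[e_lfanew:e_lfanew + 4] = b"PE\x00\x00"        # the 'NT' signature
    base = 0x040000                                      # 64 KB-aligned load address
    memory = bytes(base) + bytes(module) + bytes(0x10000)
    leaked = base + 0x21234                              # pointer into a read-only section
    return memory, base, leaked

def find_module_base(memory: bytes, leaked_ptr: int) -> int:
    addr = leaked_ptr & ~(STEP - 1)                      # align the leaked pointer down to 64 KB
    while addr >= 0:
        if memory[addr:addr + 2] == b"MZ":
            e_lfanew = struct.unpack_from("<I", memory, addr + 0x3C)[0]
            if memory[addr + e_lfanew:addr + e_lfanew + 4] == b"PE\x00\x00":
                return addr                              # code base found; the gadget hunt starts here
        addr -= STEP
    raise ValueError("module base not found")

memory, base, leaked = build_fake_memory()
assert find_module_base(memory, leaked) == base
print(hex(find_module_base(memory, leaked)))             # 0x40000 in this toy layout
```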
Morphisec’s technology not only stopped the attack at the first step of exploitation, it also identified the targeted vulnerability and the method of exploitation as part of its amazing real-time forensic capability. All of this was done instantly, in memory, at the binary level, without any decompilation!
I imagine that pretty soon the other security products will add the signature of this sample to their databases so it can be properly detected. Nevertheless, the situation remains that each new mutation of the same attack leaves the common security arsenal "blind" to it, which is not very efficient. Gladly, Morphisec is changing this reality! I know that while a startup is still in stealth mode and there is no public information, such comparisons are a bit "unfair" to the other technologies on the market, but still… I just had to mention it:)
 
P.S. Pretty soon we will start sharing more details about the Morphisec technology, so stay tuned. Follow us on Twitter @morphisec for more updates.

Time to Re-think Vulnerabilities Disclosure

Public disclosure of vulnerabilities has always bothered me, and I wasn't able to put a finger on the reason until now. As a person who has been personally involved in vulnerability disclosure, I highly appreciate the contribution of security researchers to awareness, and it is very hard to imagine what the world would be like without disclosures. Still, the way attacks are crafted today and their links to such disclosures got me thinking about whether we are doing it in the best way possible. So I tweeted this and got a lot of "constructive feedback":) from the team at the cyber labs at Ben-Gurion - how do I dare?

So I decided to build my argument properly.

Vulnerabilities

The basic fact is that software has vulnerabilities. Software gets more and more complex over time, and this complexity usually invites errors. Some of those errors can be abused by attackers in order to exploit the systems such software runs on. Vulnerabilities split into two groups: the ones the vendor is aware of and the ones that are unknown, and it is unknown how many unknowns hide inside each piece of code.

Disclosure

There are many companies, individuals, and organizations which search for vulnerabilities in software, and once they find one they disclose their findings. They disclose at least the mere existence of the vulnerability to the public and the vendor, and many times they even publish proof-of-concept code that can be used to exploit the found vulnerability. Such disclosure serves two purposes:
  • Making users of the software aware of the problem as soon as possible
  • Making the vendor aware of the problem so it can create and send a fix to their users
After the vendor is aware of the problem, it is their responsibility to notify users formally and to create an update for the software which fixes the bug.

Timelines

  • Past to Time of Disclosure - The unknown vulnerability waits silently, eager to be discovered.
  • Time of Disclosure to Patch is Ready - Everyone knows about the vulnerability, the good guys and the bad guys, and it sits on production systems waiting to be exploited by attackers.
  • Patch Ready to System is Fixed - During this period as well, the vulnerability is still there waiting to be exploited.

The following diagram demonstrates those timelines in relation to the ShellShock bug (image taken from http://www.slideshare.net/ibmsecurity/7-ways-to-stay-7-years-ahead-of-the-threat).

Summary

So indeed the disclosure process eventually ends with a fixed system, but there is a long period of time during which systems are vulnerable and attackers don't need to work hard on uncovering new vulnerabilities, since they have the disclosed ones waiting for them. I started thinking about this after I saw this statistic via Tripwire: "About half of the CVEs exploited in 2014 went from publishing to pwn in less than a month" (DBIR, pg. 18). It means that half of the exploits identified during 2014 were based on published CVEs (CVE is a public vulnerability database), and although some may argue that attackers could have had the same knowledge of those vulnerabilities before publication, I say that is far-fetched. If I were an attacker, what would be easier than going over the recently published vulnerabilities, finding one suitable for my target, and building an attack around it? Needless to say, there are tools, such as Metasploit, which also provide working examples. Of course, the time window in which to operate is not infinite, as it is in the case of an unknown vulnerability no one knows about, but still, a month or more is enough to get the job done.

Last Words

A new disclosure process should be devised, one which reduces the risk level during the period from the time of disclosure up to the time a patch is ready and applied. Otherwise, we are all just helping the attackers while trying to save the world.

Most cyber attacks start with an exploit – I know how to make them go away

Yet another new ransomware with a new sophisticated approach: http://blog.trendmicro.com/trendlabs-security-intelligence/crypvault-new-crypto-ransomware-encrypts-and-quarantines-files/. Note that the key section in the description of the way it operates is: "The malware arrives to affected systems via an email attachment. When users execute the attached malicious JavaScript file, it will download four files from its C&C server." When users execute the JavaScript file, it means the JavaScript was loaded into the browser application and exploited the browser in order to get in and then start all the heavy lifting. The browser is vulnerable, software is vulnerable; it's a given fact of an imperfect world. I know a startup company called Morphisec which is eliminating those exploits in a very surprising and efficient way. In general, vulnerabilities are considered a chronic disease, but it does not have to be this way. Some smart guys and girls are working on a cure:)

Remember, it all starts with the exploit.

Taming The Security Weakest Link(s)

Overview

The security level of a computerized system is only as good as the security level of its weakest links. If one part is secure and tightened properly while other parts are compromised, then your whole system is compromised, and the compromised parts become your weakest links. The weakest link fits well with the attackers' mindset, which always looks for the path of least resistance to their goal. Third parties in computing present an intrinsic security risk for CISOs and, in general, for any person responsible for the overall security of a system. It is a security risk that is often overlooked due to a lack of understanding and is not taken into account in the overall risk assessment beyond a mere mention. To clarify, "third party" refers to all other entities that are integrated into yours, hardware and software alike, as well as people who have access to your system and are not under your control.

A simple real-life example can make it less theoretical: let's say you are building a simple piece of software running on Linux. You use the system C library, which in this case plays the 3rd-party role. If the C library has vulnerabilities, then your software has vulnerabilities, and even if you make your own software bulletproof, that won't remove the risks associated with the C library, which becomes your software's weakest link. Zooming out from our imaginary piece of software, you probably already understand that the 3rd-party problem is much bigger than that, as your software also relies on the operating system, other installed 3rd-party libraries, the hardware itself, networking services, and the list goes on and on. I am not trying to be pessimistic, but this is how it works. In this post, for the sake of simplicity, I will focus on application-integration-driven weakest links, and not on other 3rd parties such as reusable code, non-employees, and others.

Application Integration as a Baseline for 3rd Parties

Application integration has been one of the greatest trends ever in the software industry, enabling the buildup of complex systems based on existing systems and products. Such integration takes many forms depending on the specific context in which it is implemented.

Mobile World

In the mobile world, for example, integration mainly serves ease of use, where apps are integrated into one another by means of sharing or delegation of duty, such as integrating the camera into an image editing app. iOS has come a long way in this direction with native FB and Twitter integration, as well as native sharing capabilities, and Android was built from the ground up for such integration with its activity-driven architecture.

Enterprise Systems

In the context of enterprise systems, integration is the lifeblood of business processes, and it takes two main forms: one-to-one, such as software X "talking" to software Y via a software or network API, and many-to-many, such as software applications "talking" to a middleware which in turn "talks" to other software applications.

Personal Computers

In the context of a specific computer system, there is also the local integration scenario which is based on OS native capabilities such as ActiveX/OLE or dynamic linking to other libraries – such integration usually serves code reuse, ease of use and information sharing.

Web Services

In the context of disparate web-based services, the one-to-one API integration paradigm is the main path for building great services fast.

All In All

Of course, the world is not as homogeneous as depicted above. Within the mentioned contexts you can find different forms of integration, usually depending on the software vendors and the existing platforms.

Integration Semantics

Each integration is based on specific semantics. These semantics are imposed by the interfaces each party exposes to the other. REST APIs, for example, provide a rather straightforward way to understand the semantics, since the interfaces are highly descriptive. The semantics usually dictate the range of actions that can be taken by each party in the integration tango, and the protocol itself enforces those semantics. Native forms of integration between applications are a bit messier than network-based APIs, as there is less ability to enforce the semantics, allowing exploits such as those based on ActiveX integration on Windows, which has been the basis for quite a few attacks.

The semantics of integration also include the phase of establishing trust between the integrated parties, and again, this varies quite a bit in implementation within each context. It ranges from a zero-trust case with fully public APIs, such as consuming an RSS feed or running a Google search in an Incognito browser, up to a full authentication chain with embedded session tokens. In the mobile world, where the aim of integration is to increase ease of use, the trust level is quite low: the mobile trust scheme is based mainly on the fact that both integrated applications reside on the same device, such as in the case of sharing, where any app can ask to share via other apps and gets an on-the-fly integration into the sharing apps. The second prominent mobile use case for establishing trust is based on a permission request mechanism. For example, when an app tries to connect to your Facebook app on the phone, the permission request is verified independently from within the FB app, and once approved, the trusted relationship is kept by means of a persisted security token. Based on certain guidelines, some apps do expire those security tokens, but they last for an extended period. In mobile, the balance keeps shifting between maintaining security and annoying the user with too many permission questions.
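To illustrate the persisted-token idea in that permission flow, here is a hedged sketch of issuing and verifying an HMAC-signed token after a one-time approval. It is not the mechanism of any specific platform; the secret, scope, and expiry policy are invented for the example.

```python
import base64, hashlib, hmac, time

SECRET = b"integration-provider-secret"   # held by the app that grants access (illustrative)

def issue_token(requesting_app: str, scope: str, ttl_seconds: int = 90 * 24 * 3600) -> str:
    """Issued once, after the user approves the permission request; persisted by the caller."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{requesting_app}|{scope}|{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str) -> bool:
    """Later calls present the persisted token; no new permission prompt is needed."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    if not hmac.compare_digest(sig, hmac.new(SECRET, payload, hashlib.sha256).hexdigest()):
        return False                                   # tampered with or forged
    expires = int(payload.decode().rsplit("|", 1)[1])
    return time.time() < expires                       # expired tokens require a fresh approval

token = issue_token("photo-editor", "share:photos")
print(verify_token(token))                             # True until the token expires
```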

Attack Vectors In Application Integration

Abuse of My Interfaces

Behind every integration interface there is a piece of software which implements the exposed capabilities, and as with all software, it is safe to assume there are vulnerabilities in it just waiting to be discovered and exploited. So the mere act of exposing integration interfaces from your software poses a risk.

Man In the Middle

Every communication between two integrated parties can be attacked using a man in the middle (MitM). A MitM can intercept the communications, but also alter them, either to disrupt the communication or to exploit a vulnerability on either side of the integration. Of course, secure protocols such as SSL can reduce that risk, but not eliminate it.

Malicious Party

Since we don't have control over the integrated party, it is very difficult to assume it has not been taken over by a malicious actor, which can then do all kinds of things: exploit my vulnerabilities, exploit the data channel by sending harmful or destructive data, or disrupt my service with denial-of-service attacks. Another risk of a malicious, or attacked, party concerns availability: with tight integration, your availability often depends strongly on the integrated parties' availability. The risk posed by a malicious party is amplified by the fact that trust is already established, and a trusted party often receives wider access to resources and functionality than a non-trusted one, so the potential for abuse is higher.

Guidelines for Mitigation

There are two challenges in mitigating 3rd-party risks: the first is visibility, which is the easier one to achieve, and the second is deciding what to do about each identified risk, since we don't have full control over the supply chain. The first step is to gain an understanding of which 3rd parties your software relies upon. This is not easy, as you may have visibility only into the first level of integrated parties; in a way this is a recursive problem, but still, the majority of the integrations can be listed. For each integration point, you want to understand the interfaces, the method of integration (e.g., over the network, ActiveX), and finally the method of establishing trust. Once you have this list, create a table with four columns:
  • CONTROL - How much control you have over the 3rd party implementation.
  • CONFIDENCE - Confidence in 3rd party security measures.
  • IMPACT - Risk level associated with potential abuse of my interfaces.
  • TRUST – The trust level required to be established between the integrated parties before communicating with each other.
These four parameters serve as the basis for an overall risk score, where the weight of each parameter should be assigned at your discretion and based on your judgment. Once you have such a list and you've calculated the overall risk for each 3rd party, simply sort it by risk score, and there you have a prioritized list for taming the weakest links. Once you know your priorities, there are things you can do yourself, and there are actions only the owners of the 3rd-party components can take, so you will need some cooperation. Everything that is in your control, namely the security of your end of the integration and the trust level imposed between the parties (assuming you have control of the trust chain and you are not the consumer party in the integration), should be tightened up. For example, reducing the exposure of your interfaces toward your system is in your control, as is the patch level of dependent software components. MitM risk can be reduced dramatically by establishing a good trust mechanism and implementing secure communications, but not completely mitigated. And lastly, taking care of problems within an uncontrolled 3rd party is a matter of specifics which can't be elaborated upon theoretically.
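As an illustration of turning those four columns into a sortable score, here is a small sketch. The 1-to-5 scale, the weights, the scoring direction, and the example third parties are all placeholders to be replaced by your own judgment.

```python
# Illustrative scoring of the four columns above, on a 1 (low) to 5 (high) scale.
# Weights and the direction of each factor are a matter of judgment, as noted in the text.
WEIGHTS = {"control": 0.2, "confidence": 0.3, "impact": 0.3, "trust": 0.2}

def risk_score(control: int, confidence: int, impact: int, trust: int) -> float:
    """High control and confidence reduce risk; high impact and high granted trust increase it."""
    return round(
        WEIGHTS["control"] * (6 - control)
        + WEIGHTS["confidence"] * (6 - confidence)
        + WEIGHTS["impact"] * impact
        + WEIGHTS["trust"] * trust, 2)

third_parties = {
    "system C library":      risk_score(control=1, confidence=3, impact=5, trust=5),
    "payment REST API":      risk_score(control=2, confidence=4, impact=4, trust=3),
    "mobile sharing intent": risk_score(control=1, confidence=2, impact=2, trust=2),
}
for name, score in sorted(third_parties.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:4.2f}  {name}")
```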

Summary

The topic of 3rd-party security risks is too large to be covered by a single post, and as seen, within each specific context the implications vary dramatically. In a way, it is a problem which cannot be solved 100%, due to the lack of full control over the 3rd parties and the lack of visibility into the full implementation chain of the 3rd-party systems. To make it even more complicated, consider that you are only aware of your 3rd parties, and your 3rd parties also have 3rd parties, which in turn also have 3rd parties, and on and on, so you cannot be fully secure! Still, there is a lot to do even if there is no clear path to 100% security, and we all know that the harder we make it for attackers, the costlier it is for them, which does wonders to weaken their motivation. Stay safe!

The Emergence of Polymorphic Cyber Defense

Background

Attackers are Stronger Now

The cyber world is witnessing a fast-paced digital arms race between attackers and security defense systems, and 2014 showed everyone that attackers have the upper hand in this match. Attackers are on the rise due to their growing financial interest, which motivates a new level of sophisticated attacks that existing defenses are unmatched to combat. The fact that almost everything today is connected to the net, together with the ever-growing complexity of software and hardware, turns everyone and everything into viable targets. For the sake of simplicity, I will focus this post on enterprises as a target for attacks, although the principles described here apply to other domains.

Complexity of Enterprise: IT has Reached a Tipping Point

In recent decades, enterprise IT achieved great architectural milestones thanks to the lowering costs of hardware and the accelerating pace of technology innovation. This transformation has made enterprises utterly dependent on their IT foundation, which is composed of a huge number of software packages from different vendors, operating systems, and devices. Enterprise IT has also become so complicated that gaining a comprehensive view of all the underlying technologies and systems is an impossible mission. This new level of complexity has its toll, and one part of it is the inability to effectively protect the enterprise's digital assets. Security tools did not evolve at the same pace as IT infrastructure, and as such their coverage is limited, resulting in a considerable number of "gaps" waiting to be exploited by hackers.

The Way of the Attacker

Attackers today can craft very advanced attacks quite quickly. The Internet is full of detailed information on how to craft them, with plenty of malicious code to reuse. Attackers usually look for the path of least resistance to their target, and such paths exist today. After reviewing recent APT techniques, some consider them not sophisticated enough; I would argue that it is a matter of laziness, not professionalism - since there are so many easy paths into the enterprise today, why bother with advanced attacks? And I do not think their level of sophistication has, by any means, reached a ceiling that should make enterprises feel more relaxed.

An attack is composed of software components, and to build one the attacker needs to understand the target systems. Since IT has undergone standardization, learning which systems the target enterprise uses and finding their vulnerabilities is quite easy. For example, on every website an attacker can identify the signature of the type of web server, investigate it in the lab, and look for common vulnerabilities in that specific software. Even simpler is to look in the CVE database and find existing vulnerabilities which have not yet been patched on it. Another example is the Active Directory (AD), the enterprise application that holds all the organizational information. Today it is quite easy to send a malicious document to an employee; once the document is opened, it exploits the employee's Windows machine and looks for a privileged path into AD. Even the security products and measures applied at the target enterprise can be identified by attackers quite easily and later bypassed, leaving no trace of the attack. Although organizations always aim to update their systems with the latest security updates and products, there are still two effective windows of opportunity for attackers:
  • From the moment a vulnerability in specific software is disclosed, through the moment a software patch is engineered, to the point in time at which the patch is applied to the specific computers running the software. This is the most vulnerable time frame, since the details of the vulnerability are publicly available and there is usually enough time before the target covers the vulnerability, which greatly simplifies the attacker's job. Usually within this time frame attackers can also find example exploitation code on the internet for reuse.
  • Unknown vulnerabilities in the software or enterprise architecture that are identified by attackers and used without any disruption or visibility since the installed security products are not aware of them.
From a historic point of view, the evolution of attacks is usually tightly coupled with the evolution of the security products they aim to bypass, and mainly with the need to breach specific areas within the target. During my time as VP R&D for Iris Antivirus (20+ years ago) I witnessed a couple of important milestones in this evolution:

High-Level Attacks - Malicious code written in a high-level programming language such as Visual Basic or Java, which created a convenient platform for attackers to write PORTABLE attacks that can be modified quite easily, since high-level code makes virus detection very difficult. These attacks also created, as an unintentional side effect, an efficient DISTRIBUTION channel for malicious code delivered via documents. Today this is the main distribution path for malicious code, via HTML documents, Adobe PDF files or MS Office files.

Polymorphic Viruses - Malicious code that hides itself from signature-driven detection tools; only at runtime is the code deciphered and executed. Now imagine a single virus serving as the basis for so many variants of "hidden" code and how challenging that is for a regular AV product. Later on, polymorphism evolved into dynamic selection and execution of the "right" code, where the attack connects to a malicious command and control server with the parameters of the environment and the server returns adaptive malicious code that fits the task at hand. This can be called runtime polymorphism.

Both "innovations" were created to evade the main security paradigm which existed back then, that of anti-viruses looking for specific byte signatures of the malicious code, and both new genres of attacks were very successful in challenging the AVs, because signatures became far less deterministic. Another major milestone in the evolution of attacks is the notion of code REUSE to create variants of the same attack. There are development kits in existence which attackers can use as if they were legitimate software developers building something beneficial. The variants phenomenon has competed with AVs in a cat-and-mouse race for many years, and still does.

State of Security Products

Over the years, malicious-code-related security products have evolved alongside the threats, and the most advanced technology applied to identifying malicious code was, and still is, behavioral analysis. Behavioral analysis is the capability to identify specific code execution patterns. It complements the signature-detection paradigm and mainly addresses the challenge of malicious code variants. Behavioral analytics can be applied at runtime on a specific machine, tracing the execution of applications, or offline via a sandbox environment such as FireEye. The latest development in behavioral analytics is the addition of predictive capabilities, aiming to predict which future execution patterns reflect malicious behavior and which are benign, so attacks can be stopped before any harm is done. Another branch of security products aimed at dealing with unknown malicious code belongs to an entirely new category that mimics the air-gap security concept, referred to as containment. Containment products (there are different approaches with different value propositions, but I am generalizing here) run the code inside an isolated environment, and if something goes wrong the production environment is left intact because the attack was contained within the isolated one. It is like having a 1970s mainframe, which did containerization, in your pocket, and in a rather seamless manner. And of course the AVs themselves have evolved quite a bit, while their good old signature-detection approach still provides value in identifying well-known and rather simplistic attacks. So, with all these innovations, how are attackers remaining on top?
  1. As I said, it is quite easy to create new variants of malicious code. It can even be automated, making the entire signature-detection industry quite irrelevant. Attackers have found a way to counter the signature paradigm simply by generating a large number of variants, each with a different signature (see the toy sketch after this list).
  2. Attackers are efficient at locating the target's easy-to-access entry points, both because they know which systems the target runs and because those systems have vulnerabilities. Some attackers work to uncover new vulnerabilities, the zero-days; most, however, simply wait for new exploits to be published and enjoy the window of opportunity until a patch is applied.
  3. The human factor plays a serious role here: social engineering and other methods of convincing users to download malicious files are often successful. It is easier to target the CFO with a tempting email carrying a malicious payload than to find a digital path into the accounting server. Usually the CFO has credentials to those systems, and often there are even Excel copies of all the salaries on their computer, so it is a much less resistant path toward success.
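To make the contrast between signatures and behavior concrete, here is a toy sketch with entirely hypothetical byte patterns and rules: a classic byte signature misses a trivially re-encoded variant of the same payload, while a simple behavioral rule keyed on the order of actions still fires.

```python
# Toy illustration with hypothetical markers and rules, not a real detector:
# a byte signature misses a trivially re-encoded variant, while a simple
# behavioral rule keyed on the sequence of observed actions still matches.

SIGNATURE = b"evil-marker"          # what a classic signature scanner looks for

def signature_scan(blob):
    return SIGNATURE in blob

def xor_encode(blob, key=0x5A):
    """One-line 'variant': same logic, completely different bytes."""
    return bytes(b ^ key for b in blob)

# A behavioral rule: flag any trace that writes an executable, then sets an
# autorun entry, then connects out, regardless of the bytes involved.
SUSPICIOUS_SEQUENCE = ["write_executable", "set_autorun_key", "connect_out"]

def behavioral_match(trace, pattern=SUSPICIOUS_SEQUENCE):
    """True if the pattern appears in order (not necessarily adjacently)."""
    i = 0
    for step in trace:
        if step == pattern[i]:
            i += 1
            if i == len(pattern):
                return True
    return False

payload = b"...evil-marker..."
variant = xor_encode(payload)

print(signature_scan(payload))   # True  - the original is caught
print(signature_scan(variant))   # False - same behavior, new "signature"
print(behavioral_match(["open_doc", "write_executable",
                        "set_autorun_key", "connect_out"]))  # True
```

This is, of course, a caricature; real behavioral engines trace system calls and score whole execution graphs, but the economics it illustrates are exactly the ones described in the list above.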

Enter the Polymorphic Defense Era

An emerging and rather exciting security paradigm that seems to be popping up in Israel and Silicon Valley is called polymorphic defense. One of the main anchors of successful attacks is the prior knowledge attackers have about the target: which software and systems are used, the network structure, the specific people and their roles, and so on. This knowledge serves as a baseline for all targeted attacks across all the stages of an attack: penetration, persistence, reconnaissance and the payload itself. All of these attack steps, to be effective, require detailed prior knowledge about the target, except for reconnaissance, which complements the external knowledge with dynamically collected internal knowledge. Polymorphic defense aims to undermine this foundation of prior knowledge and make attacks much more difficult to craft. The idea of defensive polymorphism is borrowed from the attacker's toolbox, where it is used to "hide" malicious code from security products. Combining polymorphism with defense simply means changing the "inners" of the target, where the part that changes depends on the implementation and on its role in attack creation. These changes are not visible to attackers, making their prior knowledge irrelevant. Such morphing hides the internals of the target architecture so that only trusted sources are aware of them and can operate properly. The "poly" part is the cool factor of this approach: changes to the architecture can be made continuously and on the fly, raising the required guesswork by orders of magnitude. With polymorphism in place, attackers cannot build effective, repurposable attacks against the protected area. This concept can be applied to many areas of security depending on the specific target systems and architecture, and it is a revolutionary and refreshing defensive concept in the way it changes the economic equation that attackers benefit from today. I also like it because, in a way, it is a proactive approach and not a passive one like many other security approaches. Polymorphic defenses usually have the following attributes (a toy sketch of the idea follows the list):
  • Solutions are agnostic to the specific attack patterns they cover, which makes them much more resilient.
  • Integration into the environment is seamless, since the whole idea is to change inner parts in a way that is not apparent to external parties.
  • Reverse engineering and propagation become very difficult, thanks to the "poly" aspect of the solution.
  • There is always a trusted source, which serves as the basis for the morphism.
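Here is a toy sketch of the general idea, not any vendor's implementation: the real field names of a web form are remapped to fresh random aliases for each session, and only a trusted server-side resolver knows the mapping, so an attack script written against the "known" field names no longer lines up with what the page actually serves.

```python
# Toy sketch of polymorphic defense (not any vendor's implementation):
# real form-field names are remapped to random per-session aliases.
# Only the trusted resolver knows the mapping, so a script written
# against the "known" names ("username", "password") simply misses.
import secrets

REAL_FIELDS = ["username", "password", "csrf_token"]

def morph_fields(fields):
    """Return (alias_map, reverse_map) with fresh random aliases."""
    alias_map = {name: "f_" + secrets.token_hex(4) for name in fields}
    reverse_map = {alias: name for name, alias in alias_map.items()}
    return alias_map, reverse_map

def render_form(alias_map):
    # What the outside world sees: only the morphed names.
    return "".join(f'<input name="{alias}">' for alias in alias_map.values())

def resolve_submission(posted, reverse_map):
    # What the trusted side does: translate aliases back to real names.
    return {reverse_map[k]: v for k, v in posted.items() if k in reverse_map}

aliases, reverse = morph_fields(REAL_FIELDS)
print(render_form(aliases))
# An automated attack posting to the hard-coded names resolves to nothing:
print(resolve_submission({"username": "admin", "password": "guess"}, reverse))  # {}
```

The same trick generalizes: whatever the attacker's prior knowledge assumes to be stable, morph it, and keep the mapping only on the trusted side.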

The Emerging Category of Polymorphic Defense

The polymorphic defense companies I am aware of are still startups. Here are a few of them:
  • The first company that comes to mind, which takes polymorphism to the extreme, is Morphisec*, an Israeli startup still in stealth mode. Their innovative approach tackles the problem of software exploitation by continuously morphing the inner structures of running applications, which renders known, and potentially unknown, exploits useless. Their future impact on the industry could be tremendous: the end of the mad race between newly discovered software vulnerabilities and software patching, and much-needed peace of mind regarding unknown software vulnerabilities and attacks.
  • Another highly innovative company that applies polymorphism in a very creative manner is Shape Security. They were the first to use the term polymorphic defense publicly. Their technology "hides" the inner structure of web pages, which can block many problematic attacks, such as CSRF, that rely on specific, known structures within the target web pages.
  • Another very cool company, also out of Israel, is CyActive. CyActive fast-forwards the evolution of malware using bio-inspired algorithms and uses the result as training data for a smart detector that can identify and stop future variants, much like a guard trained on future weapons. Their polymorphic anchor is that they outsmart the attack-variants phenomenon by automatically creating the possible variants of a piece of malware in advance, and by that they increase detection rates dramatically.
I suppose there are other emerging startups that tackle security problems with polymorphism. If you are aware of any particularly impressive ones, please let me know, as I would love to update this post with more info on them. *Disclaimer – I have a financial and personal interest in Morphisec, the company mentioned in the post. Anyone interested in connecting with the company, please do not hesitate to send me an email and I would be happy to engage on the matter.

History

The idea of morphing or randomization as an effective barrier against attackers can be traced to various academic and commercial developments. To name one commercial example, take the Address Space Layout Randomization (ASLR) concept in operating systems. ASLR targets attacks that are written to exploit specific addresses in memory, and it breaks that assumption by moving code around in memory in a rather random manner.
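A quick way to see the effect, assuming a Linux machine with glibc and ASLR enabled: the sketch below prints the load address of libc's printf, and running it twice typically prints two different addresses, which is exactly the prior knowledge an address-dependent exploit loses.

```python
# Observe ASLR in action (assumes Linux with glibc and ASLR enabled):
# the address of libc's printf changes on every run of the process,
# so an exploit hard-coded against one address misses on the next.
import ctypes

libc = ctypes.CDLL("libc.so.6")
printf_addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
print(hex(printf_addr))
# Run the script twice; with ASLR on, the two printed addresses differ.
```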

The Future

Polymorphic defense is a general theoretical concept which can be applied to many different areas in the IT world, and here are some examples off the top of my head:
  • Networks – Software-defined networking provides a great opportunity to change the internal network topology on the fly, deceiving attackers and dynamically containing breaches. This can be big!
  • APIs – API protocols can be polymorphic as well, and as such prevent malicious actors from masquerading as legitimate parties or mounting man-in-the-middle attacks (see the sketch after this list).
  • Databases – Database structures can be polymorphic too, so that only trusted parties are aware of the dynamic DB schema and others are not.
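As an illustration of the API bullet above (a sketch under assumed conventions, not a real or standard protocol), the endpoint path itself can be derived from a shared secret and the current time window, so only parties holding the secret can even name the endpoint they want to call.

```python
# Illustrative sketch for the API idea above (not a real or standard protocol):
# derive the current endpoint path from a shared secret and a time window,
# so only trusted parties can compute the path that is live right now.
import hashlib
import hmac
import time

SHARED_SECRET = b"rotate-me"   # hypothetical secret provisioned to trusted clients
WINDOW_SECONDS = 300           # the path changes every five minutes

def current_path(logical_name, at=None):
    window = int((time.time() if at is None else at) // WINDOW_SECONDS)
    digest = hmac.new(SHARED_SECRET,
                      f"{logical_name}:{window}".encode(),
                      hashlib.sha256).hexdigest()[:16]
    return f"/api/{digest}"

# Trusted client and server compute the same path; an outsider who only knows
# the logical API name ("transfer") has no stable endpoint to target.
print(current_path("transfer"))
```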
So polymorphic defense seems to be a game-changing security trend, one that can potentially change the balance between the bad guys and the good guys, and ladies too, of course.
UPDATE Feb 11, 2015: On Reddit I got some valid feedback that this is the same as the MTD concept, Moving Target Defense, and indeed that is right. In my eyes, the main difference is that polymorphism is more generic: it is not specifically about changing location as a means of deception but about creating many forms of the same thing to deceive attackers, though that is just a matter of personal interpretation.

To Disclose or Not to Disclose, That is The Security Researcher Question

Microsoft and Google are bashing each other over the zero-day exploit in Windows 8.1 that Google disclosed last week following a 90-day grace period. Disclosure is a broad term when speaking about vulnerabilities and exploits: you can disclose to the public the fact that a vulnerability exists, and you can also disclose how to exploit it, with example source code. There is a big difference between merely telling the world about a vulnerability and releasing the tool to exploit it, and that difference is the level of risk each alternative creates. In reality, most attacks are based on exploits that have been reported but not yet patched. Disclosing exploit code before a patch is ready to protect the vulnerable software is, in a way, helping the attackers. Of course, the main intention is to help the security officers who want to know where the vulnerability is and how to mitigate it temporarily, but we should not forget that public information also falls into the hands of attackers. Since I have been in Google's position in the past, with the KNOX vulnerability we uncovered at the cyber security labs @ Ben-Gurion University, I can understand them. It is not an easy decision: on one hand you cannot hide such information from the public, while on the other hand you know for sure that the bad guys are just waiting for such "holes" to exploit. With time I came to understand a few more realities:

  • Even if a company issues a software patch, the risk is not gone: the time window from the moment a patch is ready up to the time it is actually applied on systems can be quite long, and during that time the vulnerability remains available for exploitation.
  • Sometimes vulnerabilities uncover serious issues in the design of the software, and solving them may not be a matter of days. A small temporary fix can of course be issued, but a proper, well-thought-out patch that takes into account many different versions and interconnected systems can take much longer to devise.
  • There is a need for an authority to manage the whole life cycle of exploit disclosure, patching and deployment, one that devises a well-accepted policy rather than a one-sided policy such as the one Google Project Zero set. If the ultimate intention is to increase security, it will not work without the collaboration of software vendors.
I am not privy to the details, but I truly believe Google acted here out of professionalism and not out of political motives against Microsoft.

Google Releases Windows 8.1 Exploit Code – After 90 Days Warning to Microsoft

Google Project Zero debuted with the aim of tackling the vulnerabilities problem by identifying zero-day vulnerabilities, notifying the company that owns the software and giving it 90 days to solve the problem. After 90 days they publish the exploit, and they have just done exactly that to Microsoft.

I remember, quite a while ago, when we decided at the cyber labs at Ben-Gurion University to adopt such a policy following our discovery of a vulnerability in Samsung KNOX. The KNOX vulnerability eventually turned into a Google Android vulnerability, with the help of some political juggling between the two companies. We disclosed the exploit to Google on January 17th, 2014 and were notified that a patch was ready on February 27th, so their fast response makes it reasonable to expect others to deliver the same level of service. I will not go into how long it takes such a patch to actually reach users' devices, but expecting a patch to be delivered within 90 days is a good start. We eventually did not release the exploit code because we understood it would take some time until users were protected by the patch, and since the vulnerability was quite serious (a VPN bypass) we decided not to disclose it.

Disclosing the exploit too early is a double-edged sword: on one hand you want the good guys to understand the problem in depth, while on the other hand you hand a weapon to the bad guys, and it is well known that published exploits are heavily used by attackers who rely on the time window between the publication of a patch and its application on systems.

Anyway, I think Project Zero is a good step forward for the security industry!

 
