The Emotional CISO

It may sound odd, but cybersecurity has a huge emotional component. Unlike other industries that are driven by numbers, whether derived from optimization or financial gains, cybersecurity has all the makings of a good Hollywood movie—good guys and bad guys, nation-states attacking other nation-states, and critical IT systems at risk. Unfortunately for most victims of a cyber threat or breach, the effects are all too real and don’t disappear when the music stops and the lights come on. As with a good blockbuster, in cybersecurity you can expect highs, lows, thrills and chills. When new risks and threats appear, businesses get worried, and demand for new and innovative solutions increases dramatically. Security managers and solution providers then scramble to respond with a fresh set of tools and services aimed at mitigating the newly discovered threats.

Because cybersecurity is intrinsically linked to all levels of criminal activity—from petty thieves to large-scale organized crime syndicates—cybersecurity is a never-ending story. Yet, curiously, the never-ending sequence of new threats followed by new, innovative solutions presents subtle patterns that, once identified, can help a CISO make the right strategic decisions based on logical reasoning rather than emotions.

Cybersecurity Concept Du Jour

When you’ve been in the cybersecurity industry for a while like I have, you notice that each era has a “du jour” defense concept that occupies industry decision makers’ state of mind. Whether it is prevention, detection, or containment, in each period the popular concept becomes the defining model that everyone—analysts, tool builders, and even the technology end users—advocates fiercely. Which concept is most popular at any given time reflects critical shifts in widespread thinking about cybersecurity.

The Ambiguous Perception of Defense Concepts

The defense concepts of prevention, detection, and containment serve dual roles: as defense strategies employed by CISOs and, correspondingly, as product categories for different defense tools and services. However, the first challenge encountered by both cybersecurity professionals and end users is that these concepts don’t have a consistent general meaning; trying to give a single general definition of each of these terms is like attempting to build a castle on shifting sand (although that doesn’t stop people from trying). From a professional security point of view, there are different worlds in which specific targets, specific threats (new and old), and a roster of defenses exist. Each such world is a security domain in and of itself, and this domain serves as the minimum baseline context for the concepts of prevention, detection, and containment. Each particular threat in a security domain defines the boundaries and roles of these concepts. In addition, these concepts serve as product categories, where particular but related tools can be assigned to one or more categories based on the way the tool operates.

Ultimately, these defense concepts have a concrete meaning that is specific and actionable only within a specific security domain. For instance, a security domain can be defined by the type of threat, the type of target, or a combination of the two.

So, for example, there are domains that represent groups of threats with common patterns, such as advanced attacks on enterprises (of which advanced persistent threats, or APTs, are a subset) or denial-of-service attacks on online services. In contrast, there are security domains that represent assets, such as protecting a website through its entry points from a variety of threats, including defacement, denial of service, and SQL injection. The determining factor in defining a security domain is either the asset – and the magnitude of risk it is exposed to – or the threat group and the commonalities among its threats.

Examples, Please

To make this more tangible let’s discuss a couple of examples by defining the security domain elements and explaining how the security concepts of prevention, detection, and containment need to be defined from within the domain.

The Threat Point of View – Advanced Attacks

Let’s assume that the primary attack vector for infiltration into the enterprise is via endpoints; the next phase of lateral movement takes place in the network via credential theft and exploitation; and exfiltration of data assets is conducted via HTTP covert channels as the ultimate goal.

Advanced attacks have a timeline with separate consecutive stages, starting from entrance into the organization and ending with data theft. The security concepts have clearly defined meanings related specifically to each and every stage of an advanced attack. For example, at the first stage of infiltration there are multiple ways malicious code can get onto an employee’s computer, such as opening a malicious document or browsing a malicious website and unintentionally installing a malicious executable.


In the case of the first stage of infiltration of advanced attacks, “prevention” means making sure infiltration does not happen at all; “detection” means identifying signs of attempted or successful infiltration; and “containment” means ensuring that the infiltration attempt has been stopped and the attack cannot move to the next stage. Each concept thus has a concrete meaning within this specific security domain.
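
To make the "concrete meaning per stage" point tangible, here is a minimal sketch that maps the stages named earlier (infiltration, lateral movement, exfiltration) to what prevention, detection, and containment mean at each stage. The stage names come from the text; the exact wording of each meaning is my own illustrative choice, not a standard taxonomy.

```python
# Illustrative only: defense concepts acquire meaning per attack stage, not globally.
ADVANCED_ATTACK_DOMAIN = {
    "infiltration": {
        "prevention":  "block the malicious document/website from ever running code on the endpoint",
        "detection":   "identify signs of an attempted or successful infiltration",
        "containment": "stop the infiltration so the attack cannot reach the next stage",
    },
    "lateral_movement": {
        "prevention":  "block credential theft and exploitation inside the network",
        "detection":   "spot anomalous use of credentials between hosts",
        "containment": "isolate compromised hosts so the attacker cannot spread further",
    },
    "exfiltration": {
        "prevention":  "block HTTP covert channels leaving the network",
        "detection":   "identify data flowing out over covert channels",
        "containment": "cut the channel before significant data is lost",
    },
}

def meaning(stage: str, concept: str) -> str:
    """Return what a defense concept means at a given stage of this domain."""
    return ADVANCED_ATTACK_DOMAIN[stage][concept]

if __name__ == "__main__":
    print(meaning("infiltration", "containment"))
```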

The Asset Point of View – Web Site Protection

Websites can be a target for a variety of different threats, such as security vulnerabilities in one of their scripts, misconfigured file system access rights, or a malicious insider with access to the website’s backend systems. From a defensive point of view, the website has a binary state: compromised or uncompromised.

Therefore, the three defense concepts take on the following meanings: prevention is any measure that keeps the site from being compromised, and detection is identifying an already-compromised site. In this general example, containment has no real meaning or role, since a successful containment ultimately equals prevention. Within a specific group of threats that have already compromised the site, there may still be a role for containment, such as preventing a maliciously installed malvertising campaign on the server from propagating to visitors’ computers.

It’s An Emotional Decision

So, as we have seen, our three key defense concepts have different and distinctive meanings that are highly dependent on their context, making broader definitions somewhat meaningless. Still, cybersecurity professionals and laypeople alike strive to assign meaning to these words, because that is what the global cybersecurity audience expects: a popular meaning based on limited knowledge, personal perception, desires, and fears.

The Popular Definitions of Prevention, Detection and Containment

From a non-security expert point-of-view, prevention has a deterministic feel – if the threat is prevented, it is over with no impact whatsoever. Determinism gives the perception of complete control, high confidence, a guarantee. Prevention is also perceived as an active strategy, as opposed to detection which is considered more passive (you wait for the threat to come, and then you might detect it).

Unlike prevention, detection is far from deterministic and is better classified as probabilistic, meaning a breach might be caught (say, with an 85% chance). Detection tools that tie their success to probabilities give assurance by degree, but never 100% confidence, either in the stage of attack detected or in threat coverage.
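
To see why "assurance by degree" never turns into a guarantee, here is a toy calculation. The layer names and detection rates are invented for illustration, and the layers are assumed independent, which real tools are not: stacking probabilistic detection layers raises confidence but never reaches 100%.

```python
# Toy model: probability that at least one detection layer catches the threat,
# assuming (unrealistically) independent layers with invented detection rates.
layers = {"email gateway": 0.70, "endpoint agent": 0.85, "network analytics": 0.60}

miss_probability = 1.0
for name, p_detect in layers.items():
    miss_probability *= (1.0 - p_detect)

print(f"Chance at least one layer detects: {1.0 - miss_probability:.1%}")
print(f"Chance the threat slips through:   {miss_probability:.1%}")
# ~98.2% detected, ~1.8% missed -- better, but never a guarantee.
```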

Interestingly, containment might sound deterministic since it gives the impression that the problem is under control, but there is always the possibility that some threat could have leaked through the perimeter, turning it into more of a probabilistic strategy. And it straddles the line between active and passive. Containment passively waits for the threat, and then actively contains it.

In the end, these deterministic, probabilistic, active and passive perceptions end up contributing to the indefinite meaning of these three terms, making them highly influenced by public opinion and emotions. The three concepts in the eyes of the layperson turn into three levels of confidence based on a virtual confidence scale, with prevention at the top, containment in the middle, and detection as a tool of last resort. Detection gets the lowest confidence grade because it is the least proactive, and the least definite.


Today’s Defense Concept and What the Future Holds

Targets feel more exposed today than ever, with more and more organizations becoming victims due to newly discovered weaknesses. Attackers have the upper hand and everyone feels insecure. This imbalance in favor of attackers is currently driving the industry to focus on detection. It also sets the stage for the “security solution du jour” – when the balance leans toward the attackers, society lowers its expectations due to reduced confidence in tools, which results in a preference for detection. At a minimum, everyone wants to at least know an attack has taken place, and then they want the ability to mitigate and respond by minimizing damages. It is about being realistic and setting detection as the goal, with the understanding that prevention is not attainable at the moment.

If and when balance returns and cybersecurity solutions are again providing the highest level of protection for the task at hand, then prevention once again becomes the holy grail. Ultimately, no one is satisfied with anything less than bullet-proof prevention tools. This shift in state-of-mind has had a dramatic impact on the industry, with some tools becoming popular and others being sent into oblivion. It also has impacted the way CISOs define their strategies.

Different Standards for Different Contexts

The state of mind when selecting the preferred defense concept also has a more granular resolution. Within each security domain, different preferences for a specific concept may apply depending on how mature that domain is. For example, in the enterprise world, the threat of targeted attacks in particular, and advanced attacks in general, used to be negligible. The primary threats ten years ago were general-purpose file-borne viruses targeting the computing devices held by the enterprise, not the enterprise itself or its unique assets. Prevention of such attacks was once quite effective with static and early versions of behavioral scanning engines. These technologies were initially deployed at the endpoint for scanning incoming files and later, for greater efficiency, added to the network to conduct a centralized scan via a gateway device. Back then, when actual prevention was realistic, it became the standard security vendors were held to; since then, no one has settled for anything less than high prevention scores.

In the last five years, the proliferation of advanced threat techniques, together with serious monetary incentives for cyber criminals, has produced highly successful infiltration rates and serious damages. The success of cyber criminals has, in turn, created a sense of despair among users of defense technologies, with daily news reports revealing the extent of their exposure. The prevalence of high-profile attacks shifted the industry’s state of mind toward detection and containment as the only realistic course of action and damage control, since breaches seem inevitable. Today’s cybersecurity environment is saturated with fear, uncertainty, and doubt, and confidence in defense solutions is low.

Yet, in this depressing atmosphere, signs of change are evident. CISOs today typically understand the magnitude of potential attacks and the level of exposure, and they understand how to handle breaches when they take place. In addition, the accelerated pace of innovation in cybersecurity tools is making a difference. Topics such as software-defined networking, moving target defense, and virtualization are becoming part of the cybersecurity professional’s arsenal.

Cybersecurity is a cyclical industry, and the bar is again being optimistically raised in the direction of “prevention.” Unfortunately, this time around, preventing cybercrime won’t be as easy as it was in the last cycle when preventative tools worked with relative simplicity. This time, cybersecurity professionals will need to be prepared with a much more complex defense ecosystem that includes collaboration among targets, vendors and even governmental entities.

 

Is It GAME OVER?

Targeted attacks come in many forms, though there is one common tactic most of them share: exploitation. To achieve their goal, they need to penetrate different systems along the way, and this is done by exploiting unpatched or unknown vulnerabilities. The more common forms of exploitation happen via a malicious document which exploits vulnerabilities in Adobe Reader, or a malicious URL which exploits the browser, in order to set a foothold inside the endpoint computer. “Zero day” is the buzzword in the security industry today, and everyone uses it without necessarily understanding what it really means. It hides a complex world of software architectures, vulnerabilities, and exploits that only a few thoroughly understand. Someone asked me to explain the topic, again, and when I delved deep into the explanation I realized something quite surprising. Please bear with me, this is going to be a long post 🙂

Overview

I will begin with some definitions of the different terms in this area. These are my own personal interpretations—they are not taken from Wikipedia.

Vulnerabilities

This term usually refers to problems in software products – bugs, bad programming style, or logical problems in the implementation of software. Software is not perfect, and one could argue that it never can be. Furthermore, the people who build the software are even less perfect—so it is safe to assume such problems will always exist in software products. Vulnerabilities exist in operating systems, runtime environments such as Java and .NET, and specific applications, whether they are written in high-level languages or native code. Vulnerabilities also exist in hardware products, but for the sake of this post I will focus on software, as the topic is broad enough even with this focus. One of the main contributors to the existence and growth of vulnerabilities is the ever-growing complexity of software products—it simply increases the odds of creating new bugs which are difficult to spot precisely because of that complexity. A vulnerability always relates to a specific version of a software product, which is basically a static snapshot of the code used to build the product at a specific point in time. Time plays a major role in the business of vulnerabilities, maybe the most important one.

Assuming vulnerabilities exist in all software products, we can categorize them into three groups based on the level of awareness of them:

  • Unknown Vulnerability – A vulnerability which exists in a specific piece of software and of which no one is aware. There is no proof that a particular one exists, but experience teaches us that such vulnerabilities do exist and are just waiting to be discovered.
  • Zero Day – A vulnerability which has been discovered by a certain group of people, or a single person, while the vendor of the software is not aware of it, so it is left open, with no fix and no awareness of its presence.
  • Known Vulnerabilities – Vulnerabilities which have been brought to the awareness of the vendor and of customers, either privately or as public knowledge. Such vulnerabilities are usually identified by a CVE number. During the first period following discovery, the vendor works on a fix, or patch, which then becomes available to customers. Until customers update the software with the fix, the vulnerability remains open to attack. So in this category, each installation of the software can have patched or unpatched known vulnerabilities. In a way, the patch always comes as a new software version, so a specific product version either contains unpatched vulnerabilities or it does not – there is no such thing as a patched vulnerability, only new versions with fixes.

There are other ways to categorize vulnerabilities: by the exploitation technique, such as buffer overflow or heap spraying, or by the type of bug which leads to the vulnerability, such as a logical flaw in the design or an incorrect implementation.
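
The three awareness categories above can be captured in a small sketch. The class, field names, product name, and CVE id are my own illustrative placeholders; the one property worth noticing is that there is no "patched vulnerability" state, only a version that contains the fix.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Awareness(Enum):
    UNKNOWN = auto()   # nobody is aware it exists yet
    ZERO_DAY = auto()  # known to some group, but not to the vendor
    KNOWN = auto()     # disclosed, usually tracked by a CVE number

@dataclass
class Vulnerability:
    product: str
    version: str                    # a vulnerability is tied to a specific product version
    awareness: Awareness
    cve_id: Optional[str] = None
    fixed_in: Optional[str] = None  # the version that removes it; nothing is "patched in place"

    def exposes(self, installed_version: str) -> bool:
        """Naive check: an installation is exposed unless it runs the version with the fix."""
        return self.fixed_in is None or installed_version != self.fixed_in

# Example: a known, already-fixed flaw still exposes hosts running the old version.
v = Vulnerability("ExampleReader", "10.1", Awareness.KNOWN, "CVE-0000-00000", fixed_in="10.2")
print(v.exposes("10.1"))  # True  -> an unpatched known vulnerability
print(v.exposes("10.2"))  # False -> a new version with the fix
```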

Exploits

An exploit is a piece of code which abuses a specific vulnerability in order to make the attacked software do something unexpected. This means either gaining control of the execution path inside the running software, so the exploit can run its own code, or just achieving a side effect, such as crashing the software or causing it to do something unintended by its original design. Exploits are usually strongly associated with malicious intentions, although from a technical point of view an exploit is just a mechanism for interacting with a specific piece of software via an open vulnerability – I once heard someone refer to it as an “undocumented API” :).

This picture from the Infosec Institute describes the vulnerability/exploit life cycle in an illustrative manner:

[Figure: vulnerability/exploit life cycle – the zero-day period is shown in red, the known-but-unpatched period in green]

The time span colored in red represents the period during which a discovered vulnerability is considered a zero day, and the span colored in green the period during which it is a known but unpatched vulnerability. The post-disclosure risk is dramatically higher, since the vulnerability becomes public knowledge and the bad guys can and do exploit it at a higher frequency than in the earlier stage. Shortening the patching period is the only step which can be taken to reduce this risk.
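
The timeline boils down to simple date arithmetic. A minimal sketch of the two risk windows follows; the dates are entirely made up for illustration.

```python
from datetime import date

# Invented dates for illustration only.
discovered  = date(2015, 1, 10)   # found by someone, vendor unaware -> zero-day window opens
disclosed   = date(2015, 3, 1)    # made public / reported, CVE assigned
patch_ready = date(2015, 3, 20)   # vendor ships a fix
deployed    = date(2015, 5, 5)    # the fix is finally applied on your systems

zero_day_window  = (disclosed - discovered).days   # the red span in the figure
unpatched_window = (deployed - disclosed).days     # the green span: public and still open

print(f"Zero-day window:  {zero_day_window} days")
print(f"Unpatched window: {unpatched_window} days (the part only the defender can shorten)")
```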

The Math Behind Targeted Attacks

Most targeted attacks today use the exploitation of vulnerabilities to achieve three goals:

  • Penetrate an employee endpoint computer via different techniques, such as malicious documents sent by email or malicious URLs. Those malicious documents/URLs contain malicious code which seeks specific vulnerabilities in host programs such as the browser or the document reader. And, during what seems like an innocent reading experience, the malicious code is able to sneak into the host program, establishing a penetration point.
  • Gain higher privileges once malicious code already resides on a computer. Often the code that sneaked into the host application does not have enough privileges to continue the attack into the organization, so it exploits vulnerabilities in the application’s runtime environment—the operating system or the JVM, for example—that help it gain elevated privileges.
  • Lateral movement – once the attack has entered the organization and wants to reach other areas of the network to achieve its goals, it often exploits vulnerabilities in other systems that lie along its path.

So, from the point of view of the attack itself, we can definitely identify three main stages:

  • Attack at Transit Pre Breach – An attack is in transit toward the target, or already at the target, prior to exploitation of a vulnerability.
  • Attack at Penetration – An attack is exploiting a vulnerability successfully in order to get inside.
  • Attack at Transit Post Breach – An attack has started running inside its target and within the organization.

The following diagram quantifies the complexity inherent in each attack stage, from both the attacker’s and the defender’s side; below it are descriptions of each area, followed by a conclusion:

Ability to Detect an Attack at Transit Pre Breach

Those are the red areas in the diagram. Here an attack is on its way, prior to exploitation; “on its way” means the enterprise can scan the binary artifacts of the attack—network packets, a visited website, or a document traveling via email servers or arriving at the target computer, for example. This approach is called static scanning. The enterprise can also open the artifact in a controlled, sandboxed environment (opening a document, for instance), emulate its expected behavior, and try to identify patterns that resemble a known attack pattern – this is called behavioral scanning.

Attacks pose three challenges towards security systems at this stage:

  • Infinite Signature Mutations – Static scanners look for specific binary patterns in a file which should match a malicious code sample in their database. Attackers have long since outsmarted these tools: they have automation for changing those signatures randomly, with the ability to create an infinite number of static mutations. So a single attack can take an infinite number of forms in its packaging (a toy illustration of this appears right after this list).
  • Infinite Behavioural Mutations – The security industry’s evolution beyond static scanners was toward behavioral scanners, where the “signature” of a behavior eliminates the problems induced by static mutations and the sample base of behaviors is dramatically smaller. A single behavior can be decorated with many static mutations, and behavioral scanners reduce this noise. The challenges posed by attackers make behavioral mutations effectively infinite as well, in two ways:
    • Infinite number of mutations in behaviour – Just as attackers outsmart static scanners by creating an infinite number of static decorations on the attack, here they can add dummy steps or reshuffle the attack steps so that the end result is the same but, from a behavioral pattern point of view, it presents a different behavior. The spectrum of behavioral mutations at first seemed narrower than that of static mutations, but with the advancement of attack generators even that barrier has fallen.
    • Sandbox evasion – Attacks which are scanned for bad behavior in a sandboxed environment have developed advanced capabilities to detect whether they are running in an artificial environment; if they detect that they are, they pretend to be benign and perform no exploitation. This is an ongoing race between behavioral scanners and attackers, and the attackers currently seem to have the upper hand.
  • Infinite Obfuscation – This technique connects to the infinite static mutations factor but deserves specific attention. In order to deceive static scanners, attackers hide the malicious code itself by applying a transformation to it, such as encryption, along with a small piece of code responsible for decrypting it on the target prior to exploitation. Again, the range of options for obfuscating code is infinite, which makes the static scanners’ work even more difficult.
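
Here is the toy illustration of the “infinite signature mutations” point: a static scanner that matches exact byte signatures (hashes here, for brevity) is defeated by the most trivial mutation of a benign stand-in “payload”. Everything in this sketch is made up for demonstration; no real malware or real signature format is involved.

```python
import hashlib
import os

def signature(artifact: bytes) -> str:
    """A toy 'static signature': just the SHA-256 of the bytes."""
    return hashlib.sha256(artifact).hexdigest()

# Pretend this benign string is a known-bad sample in the scanner's database.
known_bad = b"harmless stand-in payload"
signature_db = {signature(known_bad)}

def static_scan(artifact: bytes) -> bool:
    return signature(artifact) in signature_db

# The attacker's 'mutation engine': append random junk bytes. The behavior is
# unchanged, but every variant gets a brand-new signature the database has never seen.
for i in range(3):
    mutated = known_bad + os.urandom(8)
    print(f"variant {i}: detected = {static_scan(mutated)}")

print(f"original : detected = {static_scan(known_bad)}")
```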

This makes the challenge of capturing an attack prior to penetration very difficult, bordering on impossible, and it only grows harder with time. I am by no means implying that such security measures don’t serve an important role – today they are the main safeguards keeping the enterprise from turning into a zoo. I am just saying it is a very difficult problem to solve, and that there are other areas, in terms of ROI (if such a thing as security ROI exists), in which a CISO is better off investing.

Ability to Stop an Attack at Transit Post Breach

Those are the black areas in the diagram. An attack which has already gained access to the network can take an infinite number of possible attack paths to achieve its goals. Once an attack is inside the network, the relevant security products try to identify it. Such technologies revolve around big data/analytics which try to identify activities in the network that imply malicious behavior, or network monitors which listen to the traffic and try to identify artifacts or static behavioral patterns of an attack. These tools rely on different informational signals which serve as attack indicators.

Attacks pose multiple challenges towards security products at this stage:

  • Infinite Signature Mutations, Infinite Behavioural Mutations, Infinite Obfuscation – these are the same challenges as described before, since the attack inside the network can have the same characteristics it had before entering the network.
  • Limited Visibility on Lateral Movement – Once an attack is inside, its next steps are usually to gain a stronghold in different areas of the network, and such movement is barely visible because it eventually consists of legitimate actions – once an attacker gains higher privileges, it conducts actions which are considered legitimate, merely highly privileged, and it is very difficult for a machine to distinguish the good ones from the bad. Add to that the fact that persistent attacks usually use technologies which enable them to remain stealthy and invisible.
  • Infinite Attack Paths – The path an attack can take inside the network—especially a targeted attack, whose goals are unknown to the enterprise—has infinite options.

This makes the ability to deduce, from specific signals coming from different sensors in the network, that there is an attack—let alone its boundaries and goals—very limited. Sensors deployed on the network never provide true visibility into what is really happening, so the picture is always partial. Add to that deception techniques about the path of attack, and you stumble into a very difficult problem. Again, I am not arguing that security analytics products which focus on post-breach are unimportant; on the contrary, they are very important. I am just saying this is only the beginning of a very long path toward real effectiveness in that area. Machine learning is already playing a serious role, and AI will definitely be an ingredient in a future solution.

Ability to Stop an Attack at Penetration Pre Breach and on Lateral Movement

Those are the dark blue areas in the diagram. Here the challenge is reversed, onto the attacker, since there is only a limited number of entry points into the system – entry points, a.k.a. vulnerabilities. These are:

  • Unpatched Vulnerabilities – These are open “windows” which have not been covered yet. The main challenge here for the IT industry is automation, dynamic updating capabilities, and prioritization. It is definitely an open gap, but one which can potentially be narrowed to the point of insignificance.
  • Zero Days – This is an unsolved problem. There are many approaches to it, such as ASLR and DEP on Windows, but still no bulletproof solution. In the startup scene I am aware of quite a few companies working very hard on one. Attackers identified this soft underbelly a long time ago, and it is the main weapon of choice for targeted attacks, which can potentially yield serious gains for the attacker.

This area presents a definite problem, but in a way it seems the most likely to be solved before the others, mainly because the attacker at this stage is at its greatest disadvantage: right before it gets into the network it has infinite options for disguising itself, and after it gets into the network the action paths it can take are infinite. Here, however, the attacker needs to go through a specific window, and there aren’t too many of those left unprotected.
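
To make the ASLR idea mentioned above concrete, here is a POSIX-only peek (a sketch, not a security control): it prints the address at which a libc function happens to be loaded. On a system with ASLR enabled the address changes between separate runs, which is exactly the assumption-breaking that exploit writers have to fight.

```python
import ctypes

# POSIX-only sketch: load the symbols already present in the current process.
libc = ctypes.CDLL(None)

# The address where printf happens to live in this particular run.
printf_addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
print(f"printf is loaded at {hex(printf_addr)}")

# Run the script several times: with ASLR enabled the address differs on each run,
# so an exploit cannot hard-code where library code (e.g., ROP gadgets) will be found.
```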

Players in the Area of Penetration Prevention

There are multiple companies/startups which are brave enough to tackle the toughest challenge in the targeted attacks game – preventing infiltration – I call it facing the enemy at the gate. In this ad-hoc list I have included only technologies which aim to block attacks in real time – there are many other startups which approach static or behavioral scanning in a unique and disruptive way, such as Cylance and Cybereason, or Bit9 + Carbon Black (list from @RickHolland), which were excluded for the sake of brevity and focus.

Containment Solutions

Technologies which isolate user applications within a virtualized environment. The philosophy behind them is that even if there is an exploitation in the application, it won’t propagate to the computer environment and the attack will be contained. From an engineering point of view I think these guys have the most challenging task, since the balance between isolation and usability is inversely correlated with productivity, and it all involves virtualization on an endpoint, which is a difficult task on its own. Leading players are Bromium and Invincea, well-established startups with very good traction in the market.

Exploitation Detection & Prevention

Technologies which aim to detect and prevent the actual act of exploitation: from companies like Cyvera (now the Palo Alto Networks Traps product line) which aim to identify patterns of exploitation, through technologies such as ASLR/DEP and EMET which break the assumptions of exploits by modifying the inner structures of programs and setting traps at “hot” places susceptible to attack, up to startups like Morphisec which employ a unique moving target concept to deceive and capture attacks in real time. Another long-time player, and maybe the most veteran in the anti-exploitation field, is Malwarebytes. They have a comprehensive anti-exploitation offering, with capabilities ranging from in-memory deception and trapping techniques up to real-time sandboxing.

At the moment the endpoint market is still controlled by the marketing money poured in by the major players, even as their solutions grow ineffective at an accelerating pace. I believe this is a transition period, and you can already hear voices saying the endpoint market needs a shakeup. In the future the anchor of endpoint protection will be real-time attack prevention, with static and behavioral scanning extensions playing a minor, feature-completing role. So pay careful attention to the technologies mentioned above, as one of them (or maybe a combination :)) will bring the “force” back into balance :)

 

Advice for the CISO

Invest in closing the gap posed by vulnerabilities. From patch automation and prioritized vulnerability scanning up to security code analysis for in-house applications—it is all worth it. Furthermore, seek out solutions which deal directly with the problem of zero days; there are several startups in this area, and their contribution can be of far greater magnitude than any other security investment in the post- or pre-breach phases.
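
As a small illustration of “prioritized vulnerability scanning”, here is a sketch that ranks open findings by a naive risk score (severity weighted by asset criticality, boosted when a public exploit exists). The CVE ids, data, and weighting are invented placeholders, so treat it as a starting point rather than a methodology.

```python
# Invented findings; in practice these would come from your scanner's export.
findings = [
    {"cve": "CVE-AAAA-0001", "cvss": 9.8, "asset_criticality": 3, "public_exploit": True},
    {"cve": "CVE-AAAA-0002", "cvss": 7.5, "asset_criticality": 5, "public_exploit": False},
    {"cve": "CVE-AAAA-0003", "cvss": 5.3, "asset_criticality": 5, "public_exploit": True},
    {"cve": "CVE-AAAA-0004", "cvss": 9.1, "asset_criticality": 1, "public_exploit": False},
]

def risk_score(finding: dict) -> float:
    """Naive prioritization: severity x criticality, boosted if an exploit is public."""
    boost = 1.5 if finding["public_exploit"] else 1.0
    return finding["cvss"] * finding["asset_criticality"] * boost

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f['cve']}: patch priority score {risk_score(f):.1f}")
```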

 

Time to Re-think Vulnerability Disclosure

Public disclosure of vulnerabilities has always bothered me, and I wasn’t able to put a finger on the reason until now. As someone who has been personally involved in vulnerability disclosure, I highly appreciate the contribution of security researchers to awareness, and it is very hard to imagine what the world would be like without disclosures. Still, the way attacks are crafted today, and their links to such disclosures, got me thinking about whether we are doing this in the best way possible. So I tweeted this and got a lot of “constructive feedback” :) from the team at the cyber labs at Ben-Gurion, along the lines of “how dare I?”

 

So I decided to build my argument right.

Vulnerabilities

The basic fact is that software has vulnerabilities. Software gets more and more complex over time, and this complexity usually invites errors. Some of those errors can be abused by attackers in order to exploit the systems such software is running on. Vulnerabilities split into two groups: the ones the vendor is aware of and the ones that are unknown. And it is unknown how many unknowns there are inside each piece of code.

Disclosure

There are many companies, individuals and organisations which search for vulnerabilities in software, and once they find one they disclose their findings. They disclose at least the mere existence of the vulnerability to the public and the vendor, and many times they even publish proof-of-concept code which can be used to exploit the found vulnerability. Such disclosure serves two purposes:

  • Making users of the software aware of the problem as soon as possible
  • Making the vendor aware of the problem so it can create and ship a fix to its users

After the vendor is aware of the problem, it is its responsibility to notify the users formally and then to create an update for the software which fixes the bug.

Timelines

Past to Time of Disclosure – The unknown vulnerability waits silently, eager to be discovered.

Time of Disclosure to Patch is Ready – Everyone knows about the vulnerability, the good and the bad guys alike, and it now sits on production systems waiting to be exploited by attackers.

Patch Ready to System is Fixed – During this time period, too, the vulnerability is still there waiting to be exploited.

The following diagram demonstrates those timelines in relation to the ShellShock bug:


Image taken from http://www.slideshare.net/ibmsecurity/7-ways-to-stay-7-years-ahead-of-the-threat

 

Summary

So indeed the disclosure process eventually ends with a fixed system, but there is a long period of time during which systems are vulnerable, and attackers don’t need to work hard at uncovering new vulnerabilities since they have the disclosed ones waiting for them.

I got to thinking about this after I saw this statistic via Tripwire:

“About half of the CVEs exploited in 2014 went from publish to pwn in less than a month” (DBIR, pg. 18).

This statistic means that half of the exploits identified during 2014 were based on published CVEs (CVE is a public vulnerability database), and although some may argue that the attackers could have had the same knowledge of those vulnerabilities before they were published, I say that is far-fetched. If I were an attacker, what would be easier than going over the recently published vulnerabilities, finding one that is suitable for my target, and building an attack around it? Needless to say, there are tools, such as Metasploit, which even provide examples for that. Of course the time window in which to operate is not infinite, as it is in the case of an unknown vulnerability that no one knows about, but still, a month or more is enough to get the job done.

Last Words

A new disclosure process should be devised, one that reduces the risk level from the time of disclosure up to the time a patch is ready and applied. Otherwise we are all just helping the attackers while trying to save the world.


Most cyber attacks start with an exploit – I know how to make them go away

Yet another new ransomware with a new, sophisticated approach: http://blog.trendmicro.com/trendlabs-security-intelligence/crypvault-new-crypto-ransomware-encrypts-and-quarantines-files/

Note that the key part of the description of the way it operates is: “The malware arrives to affected systems via an email attachment. When users execute the attached malicious JavaScript file, it will download four files from its C&C server:”

When users execute the JavaScript file, it means the JavaScript was loaded into the browser application and exploited the browser in order to get in and then start all the heavy lifting. The browser is vulnerable, software is vulnerable; it’s a given fact of an imperfect world.

I know a startup company called Morphisec which is eliminating those exploits in a very surprising and efficient way.

In general, vulnerabilities are considered a chronic disease, but it does not have to be this way. Some smart guys and girls are working on a cure :)

Remember, it all starts with the exploit.

    No One is Liable for My Stolen Personal Information

    The main victims in any data breach are actually the people – the customers whose personal information has been stolen – and oddly they don’t get the attention they deserve. Questions like what the impact of the theft was on me as a customer, what I can do about it, and whether I deserve some compensation are rarely dealt with publicly.

    Customers face several key problems when their data is stolen, questions such as:

    • Was my data stolen at all? Even if there was a breach, it is not clear whether my specific data was taken. Also, the multitude of places where my personal information resides makes it impossible to track whether, and from where, my data has been stolen.
    • What pieces of information about me were stolen, and by whom? I deserve to know who did it more than anyone else, mainly because of the next bullet.
    • What risks am I facing now, after the breach? In the case of a stolen password that is reused in other services, I can go and change it manually, but when my social security number is stolen, what does that mean for me?
    • Whom can I contact at the breached company to answer such questions?
    • And, most important, was my data protected properly?

    The main point here is that companies are not obligated, either legally or socially, to be transparent about how they protect their customers’ data. The lack of transparency and standards for how to protect data creates an automatic lack of liability and serious confusion for customers. In other areas, such as preserving customer privacy and terms of service, the protocol between a company and its customers is quite standardized, and although not enforced by regulation it still has substance to it. Companies publish their terms of service (TOS) and privacy policy (PP), and both sides rely on these statements. The recent breaches at Slack and JPMorgan are great examples of the poor state of customer data protection – in one case they decided to implement two-factor authentication only afterwards, and I am not sure why they didn’t do it before; in the second case the two-factor authentication was missing in action. Again, these are just two examples which represent the norm across most companies in the world.

    And what if each company adopted a customer data protection policy (CDPP), an open one, where such a document would specify clearly on the company website what kind of data it collects and stores and what security measures it applies to protect it? From a security point of view such information cannot really cause harm, since attackers have better ways to learn about the internals of the network, and from a customer relationship point of view it is a must.

    Such a CDPP statement can include:

    • The customer data elements collected and stored
    • How it is protected against malicious employees
    • How it is protected from third parties which may have access to the data
    • How it is protected when it is stored and when it is moving inside the wires
    • How the company is expected to communicate with its customers when a breach happens – who is the contact person?
    • To what extent the company is liable for stolen data


    Such a document could dramatically increase the confidence level for us, the customers, before choosing to work with a specific company, and it could serve as a basis for innovation in tools which aggregate and manage such information.
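
    To show how a CDPP could become machine-readable – and therefore something tools could aggregate and compare – here is a sketch of the statement as a simple structured record. The company name, field names, and values are all hypothetical inventions of mine, not any existing standard.

```python
import json

# Hypothetical, machine-readable CDPP; every field name here is illustrative only.
cdpp = {
    "company": "ExampleCorp",
    "data_collected": ["email", "password_hash", "payment_token", "purchase_history"],
    "protection": {
        "at_rest": "AES-256 encryption, keys held in an HSM",
        "in_transit": "TLS 1.2+",
        "against_insiders": "role-based access, audit logging, least privilege",
        "third_party_access": "contractual controls plus scoped API tokens",
    },
    "breach_communication": {
        "contact": "security@example.com",
        "notification_within_hours": 72,
    },
    "liability": "limited compensation as described in the TOS",
}

# Publishing something like this at a well-known URL would let tools compare companies.
print(json.dumps(cdpp, indent=2))
```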

    Breaching The Air-Gap with Heat

    Researcher Mordechai Guri, guided by Prof. Yuval Elovici, has uncovered a new method to breach air-gapped systems. Our last finding on air-gap security was published in August 2014, using a method called AirHopper which utilizes FM waves for data exfiltration. The new research initiative, termed BitWhisper, is part of the ongoing research on the topic of air-gap security at the Cyber Security Research Center at Ben-Gurion University. BitWhisper is a demonstration of a covert bi-directional communication channel between two nearby air-gapped computers communicating via heat. The method bridges the air gap between the two physically adjacent, compromised computers by using their heat emissions and built-in thermal sensors to communicate.
    The following video presents a proof of concept demonstration:

    The scenario of two adjacent computers is very prevalent in many organizations, where two computers sit on a single desk, one connected to the internal network and the other connected to the Internet. The method demonstrated can serve both for leaking small amounts of data and for command and control. Guri, who was recently selected to receive a 2015-2016 IBM PhD Fellowship Award, was also the lead researcher on the AirHopper finding.
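
    For intuition only, here is a toy simulation of the kind of thermal signaling described above: the "sender" nudges the temperature up for a 1 and lets it fall for a 0, and the "receiver" thresholds noisy readings of its own sensor. The numbers, thresholds, and timing are entirely made up and have nothing to do with the actual research parameters.

```python
import random

random.seed(7)

def simulate_received_temps(bits, baseline=35.0, delta=1.5, noise=0.2):
    """Toy model: the sender's CPU load shifts the neighbor's sensor by ~delta deg C per bit."""
    return [baseline + (delta if b else 0.0) + random.uniform(-noise, noise) for b in bits]

def decode(temps, baseline=35.0, delta=1.5):
    threshold = baseline + delta / 2  # halfway between "idle" and "heated"
    return [1 if t > threshold else 0 for t in temps]

message = [1, 0, 1, 1, 0, 0, 1, 0]
readings = simulate_received_temps(message)
print("sent:   ", message)
print("decoded:", decode(readings))
```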

    The full research paper will be published on the cyber research center blog soon, so stay tuned. Deeper coverage can be found on Wired. Journalists looking to cover the story and gain early access to the full research paper can contact me, Dudu Mimran, at dudu@dudumimran.com

     

    Distributed Cyber Warfare

    One of the core problems with cyber criminals and attackers is the lack of a clear target. Cyber attacks are digital in nature, and as such they are not tied to a specific geography, organization, or person – tracing them back to their source is non-deterministic and ambiguous. In a way it reminds me of real-life terrorism as an effective distributed warfare model which is also difficult to mitigate. The known military doctrines always assumed a clear target, and in a way they are no longer relevant against terrorism. The terrorists take advantage of the concept of distributed entities, where attacks can hit anything, anytime, and can originate from anywhere on the planet using an unknown form of attack. A very fuzzy target. The ways countries tackle terrorism rely mostly on intelligence gathering, while the best intelligence is unfortunately created following a specific attack. After an attack it is quite easy to find out the identity of the attackers, which eventually leads to a source and a motivation – this information leads to more focused intelligence which helps prevent future attacks. In the cyber arena the situation is much worse, since even after actual attacks take place it is almost impossible to reliably trace the specific sources and attribute them to an organization or a person.

     

    It is a clear example of how a strong concept like distributed activity can be used for malicious purposes and I am pretty sure it will play out again and again in favour of attackers in future attack scenarios.

    Taming The Security Weakest Link(s)

    Overview

    The security level of a computerised system is only as good as the security level of its weakest links. If one part is secure and tightened properly but other parts are compromised, then your whole system is compromised, and the compromised parts become your weakest links. The weakest link fits well with the attackers’ mindset, which always looks for the path of least resistance to the goal. Third parties present an intrinsic security risk for CISOs and, in general, for any person responsible for the overall security of a system – a risk that is often overlooked due to a lack of understanding, and is not taken into account in the overall risk assessment beyond a mere mention. To clarify, third party refers to all other entities that are integrated into yours, which can be hardware and software, as well as people who have access to your system and are not under your control.

    A simple real-life example can make it less theoretical: let’s say you are building a simple piece of software running on Linux. You use the system C library, which in this case plays the 3rd-party role. If the C library has vulnerabilities—then your own software has vulnerabilities. And even if you make your software bulletproof, that still won’t remove the risks associated with the C library, which becomes your software’s weakest link.

    Zooming out from our imaginary piece of software, you probably already understand that the problem of the 3rd party is much bigger than what was previously mentioned, as your software also relies on the operating system, other installed 3rd-party libraries, the hardware itself, networking services, and the list goes on and on. I am not trying to be pessimistic, but this is how it actually works.
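
    To make that recursion tangible in one concrete ecosystem, here is a sketch (standard library only) that walks the declared dependencies of an installed Python package: every name it prints is a 3rd party you implicitly trust, and each of those has its own list. The example package is an assumption; swap in one you actually have installed.

```python
import re
from importlib import metadata

def transitive_deps(package: str, seen=None) -> set:
    """Collect the declared (transitive) dependencies of an installed package."""
    seen = set() if seen is None else seen
    try:
        requirements = metadata.requires(package) or []
    except metadata.PackageNotFoundError:
        return seen  # not installed locally; we can't see deeper from here
    for req in requirements:
        # Requirement strings look like "urllib3 (>=1.21.1) ; extra == 'socks'".
        match = re.match(r"[A-Za-z0-9._-]+", req)
        if not match:
            continue
        name = match.group(0)
        if name not in seen:
            seen.add(name)
            transitive_deps(name, seen)
    return seen

# Example with a commonly installed package; empty output just means it isn't installed.
print(sorted(transitive_deps("requests")))
```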

    In this post, I will focus on application integration-driven weakest links for the sake of simplicity, and not on other 3rd parties such as reusable code, non-employees and others.

     

    Application Integration as a Baseline for 3rd Parties

    Application integration has been one of the greatest trends ever in the software industry, enabling the buildup of complex systems based on existing systems and products. Such integration takes many forms depending on the specific context in which it is implemented.

    In the mobile world, for example, integration mainly serves ease of use, where apps are integrated into one another by means of sharing or delegation of duty, such as integrating the camera into an image-editing app—iOS has come a long way in this direction with native Facebook and Twitter integration, as well as native sharing capabilities. Android was built from the ground up for such integration, with its activity-driven architecture.


    In the context of enterprise systems, integration is the lifeblood of business processes, and it takes two main forms: one-to-one, such as software X “talking” to software Y via a software or network API, and many-to-many, such as software applications “talking” to a middleware which in turn “talks” to other software applications.


    In the context of a specific computer system, there is also the local integration scenario which is based on OS native capabilities such as ActiveX/OLE or dynamic linking to other libraries – such integration usually serves code reuse, ease of use and information sharing.


    In the context of disparate web-based services, the one-to-one API integration paradigm is definitely the main path for building great services fast.


    Of course, the world is not as homogeneous as depicted above. Within the mentioned contexts you can find different forms of integration, usually depending on the software vendors and existing platforms.

     

    Integration Semantics

    Each integration is based on specific semantics. These semantics are imposed by the interfaces each party exposes to the other party. REST APIs, for example, provide a rather straightforward approach to understanding the semantics, since the interfaces are highly descriptive. The semantics usually dictate the range of actions that can be taken by each party in the integration tango, and the protocol itself enforces those semantics. Native forms of integration between applications are a bit messier than network-based APIs, with less capability to enforce the semantics, allowing exploits such as in the case of ActiveX integration on Windows, which has been the basis for quite a few attacks. The semantics of integration also include the phase of establishing trust between the integrated parties, and again, this varies quite a bit in terms of implementation within each context: from a zero-trust case with fully public APIs, such as consuming an RSS feed or running a search on Google in an Incognito browser, up to a full authentication chain with embedded session tokens.

    In the mobile world, where the aim of integration is to increase ease of use, the trust level is quite low: the mobile trust scheme is based mainly on the fact that both integrated applications reside on the same device, such as in the case of sharing, where any app can ask for sharing via other apps and gets an on-the-fly integration into the sharing apps. The second prominent use case in mobile for establishing trust is based on a permission request mechanism. For example, when an app tries to connect to your Facebook app on the phone, the permission request mechanism verifies the request independently from within the FB app, and once approved, the trusted relationship is maintained by means of a persisted security token. Based on some guidelines, some apps do expire those security tokens, but they definitely last for an extended period of time. With mobile, the balance remains between maintaining security and annoying the user with too many permission questions.

     

    Attack Vectors In Application Integration

    Abuse of My Interfaces

    Behind every integration interface there is a piece of software which implements the exposed capabilities, and as with all software, it is safe to assume that there are vulnerabilities just waiting to be discovered and exploited. So the mere act of opening integration interfaces in your software poses a risk.

    Man In the Middle

    Every communication between two integrated parties can be attacked by means of a man in the middle (MitM). A MitM can first intercept the communications, but can also alter them, either to disrupt the communications or to exploit a vulnerability on either side of the integration. Of course, there are secure protocols such as SSL/TLS which can reduce that risk, but not eliminate it.

    Malicious Party

    Since we don’t have control over the integrated party, it is very difficult to assume it has not been taken over by a malicious actor, which can then do all kinds of things: exploit my vulnerabilities, exploit the data channel by sending harmful or destructive data, or disrupt my service with denial-of-service attacks. Another risk of a malicious, or attacked, party concerns availability: with tight integration, your availability often depends strongly on the integrated party’s availability. The risk posed by a malicious party is amplified by the fact that trust has already been established, and a trusted party often receives wider access to resources and functionality than a non-trusted party, so the potential for abuse is higher.

     

    Guidelines for Mitigation

    There are two challenges in mitigating 3rd-party risks: the first is visibility, which is the easier to achieve, and the second is what to do about each identified risk, since we don’t have full control over the supply chain. The first step is to gain an understanding of which 3rd parties your software relies upon. This is not easy, as you may have visibility only into the first level of integrated parties—in a way this is a recursive problem—but the majority of the integrations can still be listed. For each integration point, it is worth understanding the interfaces, the method of integration (e.g. over the network, ActiveX), and finally the trust-establishment method. Once you have this list, you should create a table with four columns:

    • CONTROL – How much control you have over the 3rd party implementation.
    • CONFIDENCE – Confidence in 3rd party security measures.
    • IMPACT – Risk level associated with potential abuse of my interfaces.
    • TRUST – The trust level required to be established between the integrated parties prior to communicating with each other.

    These four parameters serve as a basis for creating an overall risk score, where the weight of each parameter is assigned at your own discretion and based on your own judgment. Once you have such a list and have calculated the overall risk for each 3rd party, simply sort it by risk score and you have a list of priorities for taming the weakest links.
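
    A minimal sketch of that scoring in code follows. The third parties, the 1-to-5 ratings (higher meaning worse), and the weights are all invented placeholders; the point is only the mechanics of weighting and sorting.

```python
# Hypothetical 3rd parties rated 1 (best) to 5 (worst) on each parameter.
third_parties = [
    {"name": "payment-gateway API",  "control": 4, "confidence": 2, "impact": 5, "trust": 4},
    {"name": "analytics SDK",        "control": 5, "confidence": 3, "impact": 2, "trust": 1},
    {"name": "legacy ActiveX addon", "control": 3, "confidence": 5, "impact": 4, "trust": 5},
]

# Weights are a matter of judgment; these are arbitrary example values that sum to 1.
weights = {"control": 0.2, "confidence": 0.3, "impact": 0.3, "trust": 0.2}

def risk(party: dict) -> float:
    """Weighted sum of the four parameters from the table above."""
    return sum(party[param] * w for param, w in weights.items())

for party in sorted(third_parties, key=risk, reverse=True):
    print(f"{party['name']}: risk score {risk(party):.2f}")
```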

     

    Once you know your priorities, there are things you can do yourself, and there are actions only the owners of the 3rd-party components can take, so you need some cooperation. Everything that is in your control—the security of your end of the integration and the trust level imposed between the parties (assuming you control the trust chain and are not merely the consumer party in the integration)—should be tightened up. For example, reducing the exposure of your interfaces toward your system is in your control, as is the patch level of dependent software components. MitM risk can be reduced dramatically by establishing a good trust mechanism and implementing secure communications, but it cannot be completely mitigated. And lastly, taking care of problems within an uncontrolled 3rd party is a matter of specifics which cannot be elaborated upon theoretically.

     

    Summary

    The topic of 3rd-party security risks is definitely too large to be covered in a single post, and as we have seen, the implications vary dramatically within each specific context. In a way, it is a problem which cannot be solved 100%, due to the lack of full control over the 3rd parties and the lack of visibility into the full implementation chain of the 3rd-party systems. To make it even more complicated, consider that you are only aware of your own 3rd parties, and your 3rd parties also have 3rd parties—which in turn have 3rd parties…and on and on…so you can never really be fully secure! Still, there is a lot to do even if there is no clear path to 100% security, and we all know that the harder we make it for attackers, the costlier it is for them, which does wonders to weaken their motivation.

    Stay safe!

    The Emergence of Polymorphic Cyber Defense

    Background

    Attackers are Stronger Now

    The cyber world is witnessing a fast-paced digital arms race between attackers and security defense systems, and 2014 showed everyone that attackers have the upper hand in this match. Attackers are on the rise due to their growing financial interest—motivating a new level of sophisticated attacks that existing defenses are ill-equipped to combat. The fact that almost everything today is connected to the net, together with the ever-growing complexity of software and hardware, turns everyone and everything into a viable target.

    For the sake of simplicity, I will focus this post on enterprises as a target for attacks, although the principles described here are applicable to other domains.

    Complexity of Enterprise: IT has Reached a Tipping Point

    In recent decades, enterprise IT has achieved great architectural milestones thanks to the falling cost of hardware and the accelerating pace of technology innovation. This transformation has made enterprises utterly dependent on their IT foundation, which is composed of a huge number of software packages coming from different vendors, operating systems, and devices. Enterprise IT has also become so complicated that gaining a comprehensive view of all the underlying technologies and systems has become an impossible mission. This new level of complexity takes its toll, and one of the costs is the inability to effectively protect the enterprise’s digital assets. Security tools did not evolve at the same pace as IT infrastructure and, as such, their coverage is limited—leaving a considerable number of “gaps” waiting to be exploited by hackers.

    The Way of the Attacker

    Attackers today are able to craft very advanced attacks quite easily. The Internet is full of detailed information on how to craft them, with plenty of malicious code to reuse. Attackers usually look for the path of least resistance to their target, and such paths exist today. After reviewing recent APT techniques, some consider them not sophisticated enough; I would argue that this is a matter of laziness, not professionalism—since today there are so many easy paths into the enterprise, why should attackers bother with advanced attacks? And I do not think their level of sophistication has, by any means, reached a ceiling that should make enterprises feel more relaxed.

    An attack is composed of software components, and to build one the attacker needs to understand the target systems. Since IT has undergone standardization, learning which systems the target enterprise uses and finding their vulnerabilities is quite easy. For example, on every website an attacker can identify the signature of the web server type and then investigate it in the lab, looking for common vulnerabilities in that specific software. Even simpler is to look in the CVE database and find existing vulnerabilities which have not yet been patched. Another example is Active Directory (AD), an enterprise application that holds all the organizational information. Today it is quite easy to send a malicious document to an employee which, once opened, exploits the employee’s Windows machine and looks for a privilege-escalation path into AD. Even the security products and measures applied at the target enterprise can be identified by the attackers quite easily and later bypassed, leaving no trace of the attack. Although organizations always aim to keep their systems current with the latest security updates and products, there are still two effective windows of opportunity for attackers:

• The window between the public disclosure of a vulnerability in specific software, the engineering of a patch, and the moment that patch is actually applied to the computers running the software. This is the most vulnerable time frame: the details of the vulnerability are publicly available, and there is usually ample time before the target closes the hole, which greatly simplifies the attacker’s job. Within this time frame attackers can often also find example exploit code on the Internet to reuse.
• Unknown vulnerabilities in the software or enterprise architecture that are identified by attackers and exploited without any disruption or visibility, since the installed security products are not aware of them.
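
To make the fingerprinting example above concrete, here is a minimal sketch of how trivially a web server advertises its identity. It is only an illustration: the target URL is a placeholder, and the script assumes outbound HTTP access.

```python
# Minimal sketch: read the "signature" a web server advertises.
from urllib.request import urlopen

def server_banner(url: str) -> str:
    """Return the Server header a site exposes (if any)."""
    with urlopen(url, timeout=5) as resp:
        return resp.headers.get("Server", "<not disclosed>")

if __name__ == "__main__":
    # A banner such as "Apache/2.4.41 (Ubuntu)" can then be checked
    # against public CVE listings for known, unpatched vulnerabilities.
    print(server_banner("https://example.com"))  # placeholder URL
```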

Historically, the evolution of attacks has been tightly coupled with the evolution of the security products they aim to bypass, and with the need to breach specific areas within the target. During my time as VP R&D at Iris Antivirus (20+ years ago), I witnessed a couple of important milestones in this evolution:

High-Level Attacks – Malicious code written in a high-level programming language such as Visual Basic or Java gave attackers a convenient platform for writing PORTABLE attacks that could be modified quite easily, precisely because they were written in a high-level language, making virus detection very difficult. As an unintentional side effect, the Visual Basic attacks also created an efficient DISTRIBUTION channel: malicious code delivered via documents. Today this is still the main distribution path for malicious code, whether via HTML documents, Adobe PDF files, or MS Office files.

Polymorphic Viruses – Malicious code that hides itself from signature-driven detection tools; only at runtime is the code deciphered and executed. Imagine a single virus serving as the basis for countless variants of “hidden” code, and how challenging that is for a regular AV product. Later, polymorphism evolved into dynamic selection and execution of the “right” code: the attack connects to a malicious command-and-control server, reports the parameters of the environment, and the server returns adaptive malicious code that fits the task at hand. This can be called runtime polymorphism.

Both “innovations” were created to evade the dominant security paradigm of the time: anti-viruses looking for specific byte signatures of malicious code. Both new genres of attack were very successful in challenging the AVs, because signatures became far less deterministic. Another major milestone in the evolution of attacks is the notion of code REUSE to create variants of the same attack. Development kits exist that attackers can use as if they were legitimate software developers building something beneficial. The variants phenomenon has competed earnestly with the AVs in a cat-and-mouse race for many years, and still does.

    State of Security Products

Over the years, security products for malicious code have evolved alongside the threats. The most advanced technology applied to identifying malicious code was, and still is, behavioral analysis: the capability to identify specific code execution patterns. It is an alternative to the signature detection paradigm and mainly addresses the challenge of malicious code variants. Behavioral analysis can be applied at runtime on a specific machine, tracing the execution of applications, or offline in a sandbox environment such as FireEye. The latest development in behavioral analytics is the addition of predictive capabilities, which aim to predict whether a future execution path reflects malicious or benign behavior in order to stop attacks before any harm is done.

Another branch of security products, aimed at unknown malicious code, belongs to an entirely new category that mimics the air-gap security concept and is referred to as containment. Containment products (there are different approaches with different value propositions, but I am generalizing here) actually run the code inside an isolated environment; if something goes wrong, the production environment is left intact because the code was isolated and the attack contained. It is like having a 1970s mainframe that does containerization, but in your pocket and in a rather seamless manner. And of course the AVs themselves have evolved quite a bit, while their good ol’ signature detection approach still provides value in identifying well-known and rather simplistic attacks.
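
To illustrate what “identifying execution patterns” means in the simplest possible terms, here is a toy sketch. It is my own illustration, not any vendor’s engine; the event names and the single rule are invented for the example.

```python
# Toy behavioral analysis: flag a process whose recorded event trace
# contains a suspicious sequence of actions, regardless of what the
# code's bytes (and hence its signature) look like.
SUSPICIOUS_SEQUENCE = ["open_document", "spawn_shell", "connect_remote_host"]

def looks_malicious(trace: list[str]) -> bool:
    """Return True if the suspicious sequence appears, in order, in the trace."""
    position = 0
    for event in trace:
        if event == SUSPICIOUS_SEQUENCE[position]:
            position += 1
            if position == len(SUSPICIOUS_SEQUENCE):
                return True
    return False

if __name__ == "__main__":
    benign = ["open_document", "render_page", "close_document"]
    dropper = ["open_document", "spawn_shell", "write_file", "connect_remote_host"]
    print(looks_malicious(benign))   # False
    print(looks_malicious(dropper))  # True
```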

    So, with all these innovations, how are attackers remaining on top?

1. As I said, it is quite easy to create new variants of malicious code. It can even be automated, making the entire signature detection industry quite irrelevant. Attackers counter the signature paradigm simply by generating an overwhelming number of variants, each with a new signature (see the sketch after this list).
2. Attackers are efficient at locating a target’s easy-to-access entry points, thanks both to their knowledge of the systems inside the target and to the fact that those systems have vulnerabilities. Some attackers work to uncover new vulnerabilities, which the industry terms zero-day attacks. Most attackers, however, simply wait for new exploits to be published and enjoy the window of opportunity until they are patched.
3. The human factor plays a serious role here: social engineering and other methods of convincing users to download malicious files are often successful. It is easier to target the CFO with a tempting email carrying a malicious payload than to find your digital path into the accounting server. Usually the CFO has credentials to those systems, and often there are even Excel copies of all the salaries on their computer, so it is definitely a path of much less resistance toward success.
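
As a tiny illustration of point 1, the sketch below shows why byte-level signatures struggle with variants: a one-character tweak to a harmless stand-in “payload” produces a completely different hash, so a signature derived from one variant says nothing about the next.

```python
# Why exact signatures are brittle: a single-byte change in the payload
# yields an unrelated digest. The "payload" is a harmless stand-in string.
import hashlib

original = b"harmless stand-in for some malicious payload"
variant = original.replace(b"payload", b"payl0ad")  # one-character tweak

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(variant).hexdigest())
# The digests share nothing, so a signature built from the first
# sample tells the scanner nothing about the second.
```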

     

    Enter the Polymorphic Defense Era


An emerging and rather exciting security paradigm, which seems to be popping up in Israel and Silicon Valley, is called polymorphic defense. One of the main anchors of successful attacks is the prior knowledge attackers have about the target: which software and systems are used, the network structure, the specific people and their roles, and so on. This knowledge serves as a baseline for all targeted attacks across every stage of an attack: penetration, persistence, reconnaissance, and the payload itself. To be effective, all these attack steps require detailed prior knowledge about the target, except for reconnaissance, which complements the external knowledge with dynamically collected internal knowledge. Polymorphic defense aims to undermine this foundation of prior knowledge and make attacks much more difficult to craft.

The idea of defensive polymorphism is borrowed from the attacker’s toolbox, where it is used to “hide” malicious code from security products. Combining polymorphism with defense simply means changing the “inners” of the target; which part to change depends on the implementation and its role in attack creation. These changes are not visible to attackers, making their prior knowledge irrelevant. Such morphing hides the internals of the target architecture so that only trusted sources, which need them to operate properly, are aware of them. The “poly” part is the cool factor of this approach: changes to the architecture can be made continuously and on-the-fly, raising the attacker’s guesswork by orders of magnitude. With polymorphism in place, attackers cannot build effective, re-purposable attacks against the protected area. The concept can be applied to many areas of security, depending on the specific target systems and architecture, but it is definitely a revolutionary and refreshing defensive concept in the way it changes the economic equation that attackers benefit from today. I also like it because, in a way, it is a proactive approach rather than a passive one like many other security measures.
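
As a toy illustration of the idea (my own sketch, not any vendor’s product), the snippet below serves web form fields under per-session random aliases. The field names and the flow are invented for the example, and the server-side mapping plays the role of the trusted source.

```python
# Toy polymorphic defense for web forms: scripted attacks that rely on
# knowing field names ("username", "password") break, while the trusted
# server keeps the per-session mapping needed to translate back.
import secrets

REAL_FIELDS = ["username", "password", "csrf_token"]

def morph_fields() -> dict[str, str]:
    """Create a fresh random alias for every real field name."""
    return {real: "f_" + secrets.token_hex(8) for real in REAL_FIELDS}

def demorph(submitted: dict[str, str], mapping: dict[str, str]) -> dict[str, str]:
    """Translate aliased form data back to the real field names."""
    alias_to_real = {alias: real for real, alias in mapping.items()}
    return {alias_to_real[k]: v for k, v in submitted.items() if k in alias_to_real}

if __name__ == "__main__":
    mapping = morph_fields()        # new aliases for this session
    print(mapping)                  # what an attacker's script would have to guess
    form = {mapping["username"]: "alice", mapping["password"]: "s3cret"}
    print(demorph(form, mapping))   # the trusted side recovers the real names
```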

    Polymorphic defenses usually have the following attributes:

• Solutions are agnostic to specific attack patterns, which makes them much more resilient.
• Integration into the environment is seamless, since the whole idea is to change inner parts in ways that are not apparent to external parties.
• Reverse engineering and propagation become very difficult, thanks to the “poly” aspect of the solution.
• There is always a trusted source, which serves as the basis for the morphism.

    The Emerging Category of Polymorphic Defense

The polymorphic defense companies I am aware of are still startups. Here are a few of them:

• The first company that comes to mind, and one that really takes polymorphism to the extreme, is Morphisec*, an Israeli startup still in stealth mode. Their innovative approach tackles the problem of software exploitation by continuously morphing the inner structures of running applications, which renders known, and potentially unknown, exploits useless. Their future impact on the industry could be tremendous: the end of the mad race between newly discovered software vulnerabilities and software patching, and much-needed peace of mind regarding unknown software vulnerabilities and attacks.
• Another highly innovative company that applies polymorphism in a very creative manner is Shape Security. They were the first to publicly coin the term polymorphic defense. Their technology “hides” the inner structure of web pages, which can block many problematic attacks, such as CSRF, that rely on specific known structures within the target web pages.
• Another very cool company, also out of Israel, is CyActive. CyActive fast-forwards the future of malware evolution using bio-inspired algorithms and uses the output as training data for a smart detector that can identify and stop future variants, much like a guard trained on future weapons. Their polymorphic anchor is that they outsmart the phenomenon of attack variants by automatically generating the possible variants of a piece of malware, dramatically increasing the detection rate.

I suppose there are other emerging startups that tackle security problems with polymorphism. If you are aware of any particularly impressive ones, please let me know, as I would love to update this post with more info on them. :)

    *Disclaimer – I have a financial and personal interest in Morphisec, the company mentioned in the post. Anyone interested in connecting with the company, please do not hesitate to send me an email and I would be happy to engage regarding this matter.

    History

The idea of morphism, or randomization, as an effective way to raise a serious barrier for attackers can be traced to various academic and commercial developments. To name one commercial example, take the Address Space Layout Randomization (ASLR) concept in operating systems. ASLR targets attacks written to exploit specific memory addresses; it breaks that assumption by placing code in memory at randomized locations, so a hard-coded address is unlikely to point where the attacker expects.
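
ASLR is easy to observe directly. The snippet below assumes a Linux machine with ASLR enabled and glibc available as libc.so.6; it prints the load address of a libc symbol, which changes from run to run.

```python
# Observe ASLR: the address of a libc function differs between runs,
# so an exploit hard-coded to one address misses on the next run.
# Assumes Linux with ASLR enabled and glibc named "libc.so.6".
import ctypes

libc = ctypes.CDLL("libc.so.6")
printf_addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
print(hex(printf_addr))  # run the script twice and compare the output
```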

    The Future

    Polymorphic defense is a general theoretical concept which can be applied to many different areas in the IT world, and here are some examples off the top of my head:

• Networks – Software-defined networking provides a great opportunity to change the inner network topology on the fly, deceiving attackers and dynamically containing breaches. This can be big!
• APIs – API protocols can be polymorphic as well, preventing malicious actors from masquerading as legitimate parties or mounting man-in-the-middle attacks.
• Databases – Database structures can be polymorphic too, so that only trusted parties are aware of the dynamic DB schema (a small sketch follows this list).
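
As a toy sketch of the database idea (my own illustration, with invented table and column names), physical column names can be randomized per deployment while a trusted mapping resolves logical names, so anything hard-coded against the “real” schema simply fails.

```python
# Toy polymorphic DB schema: randomized physical column names, resolved
# only through the trusted mapping below. Table/column names are invented.
import secrets
import sqlite3

LOGICAL_COLUMNS = ["employee", "salary"]
SCHEMA_MAP = {name: "c_" + secrets.token_hex(6) for name in LOGICAL_COLUMNS}

conn = sqlite3.connect(":memory:")
conn.execute(
    f"CREATE TABLE payroll ({SCHEMA_MAP['employee']} TEXT, {SCHEMA_MAP['salary']} INTEGER)"
)
conn.execute("INSERT INTO payroll VALUES (?, ?)", ("alice", 100000))

# Trusted code queries through the mapping; a query hard-coded to a
# column literally named "salary" would fail against this schema.
row = conn.execute(
    f"SELECT {SCHEMA_MAP['employee']}, {SCHEMA_MAP['salary']} FROM payroll"
).fetchone()
print(row)
```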

So, polymorphic defense definitely seems to be a game-changing security trend, one that can potentially change the balance between the bad guys and the good guys…and ladies too, of course.

     

UPDATE Feb 11, 2015: On Reddit I got some valid feedback that this is the same as the MTD concept, Moving Target Defense, and indeed that is right. In my eyes the main difference is that polymorphism is more generic: it is not specifically about changing location as a means of deception, but about creating many forms of the same thing in order to deceive attackers. But that is just a matter of personal interpretation.

To Disclose or Not to Disclose, That Is the Security Researcher’s Question

Microsoft and Google are bashing each other over the zero-day vulnerability in Windows 8.1 that Google disclosed last week after a 90-day grace period. Disclosure is a broad term when speaking about vulnerabilities and exploits: you can disclose to the public the fact that a vulnerability exists, or you can also disclose how to exploit it, complete with example source code. There is a big difference between merely telling the world about a vulnerability and releasing a tool to exploit it, and that difference is the level of risk each alternative creates. In reality, most attacks are based on exploits that have been reported but not yet patched. Disclosing exploit code before a patch is ready to protect the vulnerable software is, in a way, helping the attackers. Of course, the main intention is to help security officers, who want to know where the vulnerability is and how to mitigate it temporarily, but we should not forget that public information also falls into the hands of attackers.

Since I have been in Google’s position in the past, with the KNOX vulnerability we uncovered at the cyber security labs @ Ben-Gurion University, I can understand them. It is not an easy decision: on the one hand you can’t hide such information from the public, while on the other hand you know for sure that the bad guys are just waiting for such “holes” to exploit. Over time I came to understand a few more realities:

• Even if a company issues a software patch, the risk is not gone: the window from the moment a patch is ready to the time it is actually applied on systems can be quite long, and during that time the vulnerability remains available for exploitation.
• Sometimes vulnerabilities uncover serious issues in the design of the software, and solving them may not be a matter of days. A small temporary fix can be issued, of course, but a proper, well-thought-out patch that takes into account many different versions and interconnected systems can take much longer to devise.
• There is a need for an authority to manage the whole disclosure, patching, and deployment life cycle, one that devises a widely accepted policy rather than a one-sided policy such as the one Google’s Project Zero devised. If the ultimate intention is to increase security, it will not work without the collaboration of software vendors.

I am not privy to the details, but I truly believe Google acted here out of professionalism and not for political reasons against Microsoft.