Thoughts on the Russian Intervention in the US Elections. Allegedly.

I got a call last night asking whether I wanted to come on the morning show on TV and talk about Google's recent findings of alleged Russian-sponsored political advertising - advertising that could have impacted the results of the last US elections, joining similar discoveries on Facebook and Twitter, and now Microsoft is also looking for clues. At first I wanted to say, what is there to say about it? But still I agreed, as a recent hobby of mine is being a guest on TV shows:) So this event got me reading about the subject quite a bit late at night and early this morning to be well prepared, and the discussion was good - a bit light, as expected from a morning show, but informative enough for its viewers. What struck me later, while contemplating the actual findings, is the significant vulnerability uncovered in this incident, the mere exploitation of that weakness by the Russians (allegedly) and the hazardous path technology has taken us down in recent decades while changing human behavior.

The Russian Intervention Theory

To summarize it: there are political forces and citizens in the United States who are worried about the depth of Russian intervention in the elections, and part of that concern is whether social networks and digital mediums were exploited via digital advertising, and to what extent. The findings so far show that advertising campaigns costing tens of thousands of dollars were launched via organizations that seem to be tied to the Russians, and these findings span the most prominent social networks and search engines. The public does not yet know the nature of the advertising on each platform, who is behind these adverts and whether there was any cooperation between the advertisers and the people behind Trump's campaign. This lack of information, and especially the unknown nature of the suspicious adverts, leads to many theories, and although my mind is full of crazy ideas it seems that sharing them would only push the truth further away. So I won't do that. The nature of the adverts is the most important piece of the puzzle, since based on their content and variation patterns one can deduce whether they played a synergistic role with Trump's campaign and what the thinking behind them was - especially since the campaigns that were discovered are strangely uniform, budget-wise, across all the advertising networks. As the story unfolds we will become wiser.

How To Tackle This Threat

This phenomenon is of concern to any democracy on the planet whose citizens spend enough time on digital mediums such as Facebook, and there are some ways to improve the situation:


Advertising networks make their money from adverts. The core competence of these companies is to know who you are and to promote commercial offerings in the most seamless way. Advertisements of a political nature, with no commercial offering behind them, abuse this targeting and delivery mechanism to control the mindset of people. The same happens in advertisements on television, but on TV there is control over such content. There is no rational reason why digital advertising networks should get a free pass to let anyone broadcast any message on their networks without accountability in the case of non-commercial offerings. These networks were not built for brainwashing, and the customers - us - deserve a high level of transparency here, which should be supervised and enforced by the regulator. So if an advert is not of a commercial nature, it should be emphasized that it is an advert (many times adverts blend so well with the content that even identifying them is a difficult task), along with the source of funding for the advert and a link to the funder's website. If the advertising networks team up to define a code of ethics which is self-enforced among them, maybe regulation is not needed. At the moment we, the users, are misled and hurt by the way their service is rendered.
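To make the transparency idea concrete, here is a minimal sketch of the kind of check an advertising network could run before serving a non-commercial advert. All field names here are hypothetical, invented for the illustration.

```python
def missing_disclosures(advert: dict) -> list:
    """Return the required transparency fields the advert lacks."""
    if advert.get("category") == "commercial":
        return []  # commercial offers follow the existing rules
    required = ["ad_label_shown", "funding_source", "funder_website"]
    return [field for field in required if not advert.get(field)]

ad = {"category": "political", "ad_label_shown": True}
print(missing_disclosures(ad))  # → ['funding_source', 'funder_website']
```

An advert failing this check would be blocked, or at least flagged for review, before reaching anyone's feed.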


The primary advertising networks (FB, Google, Twitter, Microsoft) have vast machine learning capabilities, and they should employ these to identify anomalies. Assuming regulation is in place, whether governmental or self-imposed, there will be groups that try to exploit the rules, and here comes the role of technology in identifying deviations from them - whether it is identifying the source of funding of a campaign automatically and alerting on anomalies in real time, or identifying automated strategies such as brute-force A/B testing done by an army of bots. Invest in technology to make sure everyone is complying with the house rules. Part of such an effort is opening up the data about advertisers and campaigns of non-commercial products to the public, allowing third-party companies to work on identifying such anomalies and to innovate in parallel to the advertising networks. The same goes for other elements in the networks that can be abused, such as Facebook pages.
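A real anomaly-detection system would use rich ML features, but the core idea can be shown in a toy form: flag campaigns whose spend dwarfs the network-wide baseline. The campaign names, numbers and threshold below are all invented for the sketch.

```python
import statistics

def flag_anomalous_campaigns(daily_spend: dict, factor: float = 10.0) -> list:
    """Flag campaigns whose daily spend far exceeds the network-wide median."""
    median = statistics.median(daily_spend.values())
    return [name for name, spend in daily_spend.items() if spend > factor * median]

spend = {
    "local_bakery": 120.0,
    "shoe_brand": 95.0,
    "travel_deals": 110.0,
    "mystery_political_ads": 48000.0,  # invented outlier
}
print(flag_anomalous_campaigns(spend))  # → ['mystery_political_ads']
```

In practice the features would include targeting patterns, creative variation rates and funding trails, not just spend, but the principle of alerting on deviation from a learned baseline is the same.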

Last Thoughts on the Incident

  • How come no one identified the adverts in real time during the elections? I would imagine there were complaints about specific ads, so how come no complaint escalated into more in-depth research of a specific campaign? Maybe there is too much reliance on the bots which manage the self-service workflow of such advertising tools - the dark side of automation.
  • Looking for digital signs that the Russians coordinated this campaign with the Trump campaign seems far-fetched to me. The whole idea of a parallel campaign is separation, and synchronization, if it took place, was probably done verbally without any digital traces.
  • The mapping of the demographic database allegedly created by Cambridge Analytica onto the targeting taxonomy of Facebook, for example, is an extremely powerful tool for A/B testing via microtargeting. A perfect cost-efficient tool for mind control.
  • Why does everyone assume that the Russians are in favor of Trump? No one raises the option that maybe the Russians had a different intention, or that perhaps it was not them at all. It reminds me a lot of the fruitless efforts to attribute cyber attacks.
More thoughts on the weaknesses of systems, and what can be done about them, in a future post.

Some Of These Rules Can Be Bent, Others Can Be Broken

Cryptography is a serious topic - a technology with a mathematical foundation that poses an ever-growing challenge to attackers. On November 11th 2016 Motherboard wrote a piece about the FBI's ability to break into suspects' locked phones. Contrary to the FBI's continuous complaints about "going dark" in the face of strong encryption, the actual number of phones they were able to break into was quite high. Such a high success ratio at penetrating locked phones doesn't quite make sense - it is not clear what was so special about the devices they could not break into. Logically, similar phone models have the same crypto algorithms, and if there was a way to break into one phone, how come they could not break into all of them? Maybe the FBI found an easier path into the locked phones than breaking encryption. Possibly they crafted a piece of code that exploits a vulnerability in the phone OS, maybe a zero-day vulnerability known only to them. Locked smartphones keep some parts of the operating system active even when they are merely turned on, and illegal access, in the form of exploitation of those active areas, can circumvent the encryption altogether. To be honest, I don't know what happened there and it is all just speculation, though the story provides a glimpse into the other side - the attacker's point of view - and that is the topic of this post. What an easy life attackers have: they are not bound by the rules of the system they want to break into, and they need to seek only one unwatched hole. Defenders, who carry the burden of protecting the whole system, need to make sure every potential hole is covered while staying bound to the system's rules - an asymmetric burden that results in an unfair advantage for attackers.

The Path of Least Resistance

If attackers had an ideology and laws, the governing one would be "Walk the Path of Least Resistance" - it is reflected over and over again in their mentality and methods of operation. Wikipedia's explanation fits the hacker's state of mind perfectly:
The path of least resistance is the physical or metaphorical pathway that provides the least resistance to forward motion by a given object or entity, among a set of alternative paths.
In the cyber world there are two dominant roles, the defender and the attacker, and both deal with exactly the same subject - the mere existence of an attack on a specific target. I used to think that the views of both sides would be exact opposites, as the subject matter, the attack, is the same and the interests are reversely aligned, but that is not the case. For the sake of argument I will dive deep into the domain of enterprise security, though the logic serves as a general principle applicable to other security domains.
In the enterprise world the security department, the defender, roughly does two things. First, they need to know very well the architecture and assets of the system they should protect: its structures and its interconnections with other systems as well as with the external world. Second, they need to devise defense mechanisms and a strategy that on one hand allow the system to continue functioning and on the other hand eliminate possible entry points and paths that can be abused by attackers on their way in. As a side note, achieving this fine balance resembles the mathematical branch of constraint satisfaction problems.
Now let's switch to the other point of view - the attacker. The attacker needs only to find a single path into the enterprise in order to achieve its goal. No one knows the attacker's actual goal, but it probably fits one of the following categories: theft, extortion, disruption or espionage, and within each category the goals are very specific. So the attacker is laser-focused on a specific target, and the learning curve required for building an attack is limited and bounded to that specific interest. For example, the attacker does not need to care about the overall data center network layout if it only wants the information about employees' salaries, where such a document probably resides in the headquarters office.
Another big factor in favor of attackers is that some of the possible paths toward the target include the human factor. And humans, as we all know, have flaws - vulnerabilities, if you like - and from the attacker's standpoint these weaknesses are proper means for achieving the goal. Of all the paths an attacker can theoretically select from, the ones with the highest success ratio and minimal effort are preferable - hence the path of least resistance.

The Favorite Path in the Enterprise World

Today the most popular path of least resistance is to infiltrate the enterprise by exploiting human weaknesses, usually in the form of minimal online trust building in which the target employee is eventually induced to activate a malicious piece of code, for example by opening an email attachment. The software stack employees have on their computers is quite standard in most organizations: mostly MS-Windows operating systems, the same document processing applications and the highly popular web browsers. This stack is easily replicated in the attacker's environment and used for finding potential points of infiltration in the form of un-patched vulnerabilities. The easiest way to find a target vulnerability is to review the most recent vulnerabilities uncovered by others and reported as CVEs. There is a window of opportunity for attackers between the time the public is made aware of a new vulnerability and the actual time an organization patches the vulnerable software. Some statistics say that in many organizations this window of opportunity can stretch into months, as rolling out patches across an enterprise is painful and slow. Attackers that want to really fly below the radar and reach high success ratios search for zero-day vulnerabilities, or just buy them somewhere. Finding a zero-day is possible, as software has become overly complex, with many different technologies embedded in products, which increases the chances for vulnerabilities to exist - the patient and persistent attacker will always find its zero-day. Once an attacker acquires that special exploit code, the easier part of the attack path comes into play - finding a person in the organization who will open the malicious document. This method of operation is orders of magnitude easier than learning in detail the organization's internal structures and finding vulnerabilities in proprietary systems such as routers and server applications, where access to the technology is not straightforward. In the recent WannaCry attack we witnessed an even easier path into organizations: a weakness in enterprise computers with an open network vulnerability that can be exploited from the outside without human intervention. Going back to the case of the locked phones, it is far easier to find a vulnerability in the code of the operating system running on the phone than to break the crypto and decrypt the encrypted information.

We Are All Vulnerable

Human vulnerabilities span beyond interpersonal weaknesses such as deceiving someone into opening a malicious attachment. They also exist in the products we design and build, especially in the worlds of hardware and software, where complexity has surpassed humans' ability to comprehend. Human weakness extends to the misconfiguration of systems, one of the easiest and most favored paths for cyber attackers, and the world of insider threats is many times based on human weaknesses exploited and extorted by adversaries as well. Attackers have found their golden path of least resistance, and it always runs along the boundaries of human imperfection. The only way for defenders to handle such inherent weaknesses is to break the path of least resistance into parts and make the easier parts more difficult. That would shift the attackers' method of operation and send them searching for other easy ways in, which hopefully will become harder overall with time.

Deep into the Rabbit Hole

Infiltrating an organization by inducing an employee to activate malicious code rests on two core weak points: the human factor, which is quite easy to exploit, and the ease of finding a technical vulnerability in the software used by the employee, as described earlier. There are multiple defense approaches addressing the human factor, mostly revolving around training and education, where the expected improvement is linear and slow. Addressing the second, technical weakness is today one of the main lines of business in the world of cyber security - endpoint protection, and more precisely, preventing infiltration.

Tackling The Ease of Finding a Vulnerability

Vulnerability disclosure practices, which serve as the basis for many attacks within the window of opportunity, have been scrutinized for many years, and there is real progress toward the goal of a fine balance between awareness and risk aversion. Still, it is not there yet, since there is no bulletproof way to isolate attackers from this public knowledge. It could be that the area of advanced threat intelligence collaboration tools will evolve in that direction, though it is too early to say. It is a tricky matter to solve, as it is everybody's general problem and at the same time nobody's specific problem. The second challenge is the fact that if a vulnerability exists in application X, and there is malicious code that can exploit it, then it will work anywhere application X is installed.
Different Proactive Defense Approaches
There are multiple general approaches towards preventing such an attack from taking place:
Looking for Something
This is the original paradigm of anti-viruses: searching for known digital signatures of malicious code in data. This inspection takes place when data flows over the network, when it is moved around in the memory of the computing device, and at rest, when it is persisted as a file (in case it is not a full in-memory attack). Due to attackers' sophistication with malicious code obfuscation and polymorphism, where infinite variations of digital signatures of the same malicious code can be created, this approach has become less effective. The signatures approach is still highly effective against old threats spreading across the Internet and viruses written by novice attackers. In the layered defense thesis, signatures are the lowest defense line and serve as an initial filter for the noise.
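At its simplest, the signatures approach reduces to comparing a digest of the data against a database of known-bad digests. This is a bare-bones sketch; real engines also match byte patterns at specific offsets, and the "sample" here is invented.

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of known malicious samples.
SIGNATURE_DB = {hashlib.sha256(b"known-malicious-sample").hexdigest()}

def is_known_malicious(payload: bytes) -> bool:
    """Match the payload's digest against the known-bad signatures."""
    return hashlib.sha256(payload).hexdigest() in SIGNATURE_DB

print(is_known_malicious(b"known-malicious-sample"))  # → True
print(is_known_malicious(b"an innocent document"))    # → False
```

The weakness is visible right in the code: flip a single byte of the sample and the digest, hence the signature, no longer matches - which is exactly what obfuscation and polymorphism exploit.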
Looking at Something
Here, instead of looking for the digital fingerprint of a specific virus, the search is for behavioral patterns of the malicious code. Behavioral patterns means, for example, the unique sequence of system APIs accessed, the functions called and the frequencies of execution of different parts of the virus code. This category, invented quite a long time ago, enjoys a renaissance thanks to the advanced pattern recognition capabilities of artificial intelligence. The downside of AI in this context is inherent in the way AI works: fuzziness. Fuzzy detection leads to false alarms, a phenomenon that adds to the already growing shortage of analysts required to decide which alarms are true and which are not. The portion of false alarms I hear about today is still in the high double digits, and some vendors solve this problem by providing full SIEM management behind the scenes, which includes filtering false alarms manually. Another weakness of this approach is that attackers have evolved to mutate the behavior of the attack - creating variations on the virus logic, while making sure the result stays the same, that go unnoticed by the pattern recognition mechanism; there is a field called Adversarial AI which covers this line of thinking. The most serious drawback of this approach is that these mechanisms are blind to in-memory malicious activities - an inherent blindness to a big chunk of the exploitation logic that is, and will always stay, in-memory. This blindness is a sweet spot identified by attackers and is being abused with fileless attacks and the like. This analysis reflects the current state of AI as integrated and commercialized in the domain of endpoint threat detection; AI has had major advancements recently which have not yet been implemented in this cyber domain - developments that could create a totally different impact.
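The "sequence of system APIs" idea can be illustrated in miniature: represent a program run as its trace of API calls and flag runs containing call n-grams seen in known-malicious behavior. The trace and the choice of trigrams are invented for the sketch; the flagged sequence itself is a classic Windows process-injection pattern.

```python
MALICIOUS_TRIGRAMS = {
    # a classic process-injection call sequence, used here as the "pattern"
    ("OpenProcess", "WriteProcessMemory", "CreateRemoteThread"),
}

def trigrams(calls):
    """All consecutive 3-call windows in a trace."""
    return {tuple(calls[i:i + 3]) for i in range(len(calls) - 2)}

def looks_malicious(call_trace):
    return bool(trigrams(call_trace) & MALICIOUS_TRIGRAMS)

trace = ["NtQuerySystemInformation", "OpenProcess",
         "WriteProcessMemory", "CreateRemoteThread", "Sleep"]
print(looks_malicious(trace))  # → True
```

The mutation weakness described above is also visible here: insert one harmless call into the middle of the sequence and the exact trigram no longer matches, which is why real systems lean on fuzzier, learned representations - and inherit the false-alarm problem.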
There is a rising concept in the world of cyber security which aims to tackle the ease of learning the target environment and creating exploits that work on any similar system. The concept is called moving target defense, and it pledges that if the inner parts of a system are known only to the legitimate system users, it will thwart any attack attempt by outsiders. It is essentially an encapsulation concept, similar to the one in the object-oriented programming world, where external functionality cannot access the inner functionality of a module without permission. In cyber security the implementation differs based on the technical domain, but it preserves the same information hiding theory. This emerging category is highly promising toward the goal of changing the cyber power balance by taking attackers out of the current path of least resistance. Moving target defense innovation exists in different domains of cyber security. In endpoint protection it touches the heart of the attackers' assumption that the internal structures of applications and the OS stay the same, so their exploit code will work perfectly on the target. The concept here is quite simple to understand (and very challenging to implement): continuously move around and change the internal structures of the system, so that on one hand the legitimate internal code continues functioning as designated, while on the other hand malicious code relying on assumptions about the internal structure fails immediately. This defense paradigm seems highly durable, as it is agnostic to the type of attack.
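A toy analogy can make the moving target idea tangible: each run, shuffle where the internal "functions" live. Legitimate code resolves them by name through a private map and keeps working; an "exploit" that hardcoded last run's slot fails. This is purely illustrative, not how any real product implements it.

```python
import random

def build_randomized_table(functions: dict):
    """Shuffle which slot each internal function occupies this run."""
    slots = list(range(len(functions)))
    random.shuffle(slots)
    table = {}
    name_to_slot = {}
    for slot, (name, fn) in zip(slots, functions.items()):
        table[slot] = fn
        name_to_slot[name] = slot  # only legitimate code receives this map
    return table, name_to_slot

funcs = {"open_file": lambda: "opened", "send_data": lambda: "sent"}
table, name_to_slot = build_randomized_table(funcs)

# Legitimate caller: resolves the slot by name every run, so it survives the shuffle.
print(table[name_to_slot["send_data"]]())  # → sent
# An "exploit" that hardcoded a slot number from a previous run may now
# hit the wrong function, or none at all.
```

Real-world relatives of this idea, such as address space layout randomization, apply the same principle to memory layout rather than a lookup table.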


The focus of the security industry should be on devising mechanisms that make the current popular path of least resistance not worthwhile, forcing attackers to waste time and energy searching for a new one.

Searching Under the Flashlight of the Recent WannaCry Attack

Some random thoughts about the WannaCry attack:


The propagation of the WannaCry attack was massive, mostly due to the fact that it infected computers via SMB1, an old Windows file-sharing network protocol. Some security experts complained that ransomware has been massive for two years already and this event is just one big hype wave, though I think there is a difference here, and it is the magnitude of propagation. There is a big difference between attack distribution that relies solely on people unintentionally clicking on a malicious link or document and this attack's propagation pattern. This is the first attack, as far as I remember, that propagated both across the Internet and inside organizations using the same single vulnerability - a very efficient propagation scheme, apparently.


The attack unveiled the explosive number of computers globally that are outdated and unpatched. Some are outdated because patches did not exist - for example, Windows XP no longer has active update support. The rest of the victims were not up to date with the latest patches because, truth be told, it is highly cumbersome to constantly keep computers up to date. Keeping everything patched in an organization eventually reduces productivity, as there are many disruptions to work - for instance, many applications running on an old system stop working when the underlying operating system is updated. I heard of a large organization that was hurt deeply by the attack, and not because the ransomware hit them: they had to stop working for a full day across the organization because the security updates delivered by the IT department ironically made all the computers unusable.
Another thing to take into account is the magnitude of a vulnerability. The magnitude of a vulnerability correlates tightly with its prevalence and the ease of accessing it. The EternalBlue vulnerability has massive magnitude, as it is apparently highly prevalent. It is the first time, I think, that an exploit for a vulnerability feels like a weapon. Maybe it is time to create some dynamic risk ranking for vulnerabilities, beyond the rigid CVE classification. Vulnerabilities by definition are software bugs, and there are different classes of software: operating systems, and within the operating systems category drivers, kernel and user-mode processes; within the kernel there are different areas such as the networking stack, display drivers, interprocess mechanisms, etc. Besides operating systems there are user applications, as well as user services - pieces of software that provide services in the background to user applications. A vulnerability can reside in each of those areas, and fixing a vulnerability, or protecting against its exploitation, has a whole different magnitude of complexity in each. For example, kernel vulnerabilities are the hardest to fix compared to vulnerabilities in user applications, and correspondingly their impact once exploited is always measurably more severe, in terms of what an attacker can do post-exploitation, due to the level of freedom that software class allows.
The massive impact of WannaCry was not due to the sophistication of its ransomware component - it was due to the SMB1 vulnerability, which turned out to be highly prevalent. Actually, the ransomware itself was quite naive in the way it operated. The funny turn of events was that many advanced defense products did not capture the attack, since they assume some level of sophistication, while plain signature-based anti-viruses, which search for digital signatures, were quite effective. This case reinforces the layered defense thesis: signatures are here to stay and should be layered with more advanced defense tools. As for the sheer luck we had with this naive ransomware, just imagine what would have happened if the payload of the attack had been at least as sophisticated as other advanced attacks we see nowadays. It could have been devastating, and unfortunately we are not out of danger yet, as it can still happen - this attack was a lesson not only for defenders but also for attackers.
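The "dynamic risk ranking" floated above could, in its simplest form, score a vulnerability by how widespread the affected software is, how easily it can be reached, and how privileged the software class is. All the weights and example numbers below are invented for the sketch.

```python
# Privilege weight per software class, following the taxonomy above (weights invented).
CLASS_WEIGHT = {"kernel": 3.0, "driver": 2.5, "user_service": 2.0, "user_app": 1.0}

def risk_score(prevalence_pct: float, remotely_reachable: bool, software_class: str) -> float:
    """Rank a vulnerability by prevalence, reachability and software class."""
    reach = 2.0 if remotely_reachable else 1.0
    return prevalence_pct * reach * CLASS_WEIGHT[software_class]

# An EternalBlue-like profile: very widespread, remotely reachable, kernel-level code.
print(risk_score(80, True, "kernel"))    # → 480.0
# A niche bug in a local user application.
print(risk_score(5, False, "user_app"))  # → 5.0
```

The point is not these particular weights but that prevalence and reachability change over time, so the ranking should be recomputed dynamically rather than frozen at disclosure, as a CVE entry is.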


Very quickly, law enforcement authorities found the target bitcoin accounts used for collecting the ransom and started watching for someone to withdraw the money. The amount of money collected was quite low even though the distribution was massive, and some attribute this to the amateurish ransomware backend, which, as I read, in some cases would not even decrypt the files after payment. The successful distribution did something the attackers did not take into account: it gave the campaign high visibility. It is quite obvious that such a mass-scale attack would wake up all the law enforcement authorities to search for the money, which makes withdrawing it impossible.

Final Thoughts

Something about this attack does not make sense. On one hand the distribution was highly successful, at a magnitude not seen before for such attacks, while at the same time the payload, the ransomware itself, was naive, the monetization scheme was not planned properly, and even the backend for collecting money and decrypting user files was unstable. So either it was a demonstration of power and not really a ransomware campaign - like launching a ballistic missile toward the ocean - or it was just a real amateur attacker. Another thought is that I don't yet have a solid recommendation on how to be better prepared for next time. There is a multitude of open vulnerabilities out there, some with patches available and some without, and even if you patch like crazy it still won't provide a full guarantee. Of course, my baseline must-do recommendation is to use advanced prevention security products and do automatic patching. A final thought is that a discussion should start about regulatory intervention in the level of protection in the private sector. I can really see the effectiveness of mandatory security provisions required of organizations, similar to what is done in the accounting world - very similar to getting vaccinated. The private sector, and especially small and medium-sized businesses, is currently highly vulnerable.


Targeted attacks take many forms, though there is one common tactic most of them share: exploitation. To achieve their goal, attackers need to penetrate different systems along the way, and this is done by exploiting unpatched or unknown vulnerabilities. The more common forms of exploitation happen via a malicious document which exploits vulnerabilities in Adobe Reader, or a malicious URL which exploits the browser, in order to set a foothold inside the endpoint computer. Zero Day is the buzzword of the security industry today, and everyone uses it without necessarily understanding what it really means. It indeed hides a complex world of software architectures, vulnerabilities and exploits that only a few thoroughly understand. Someone asked me to explain the topic, again, and when I really delved deep into the explanation I came to comprehend something quite surprising. Please bear with me, this is going to be a long post :-)


I will begin with some definitions of the different terms in the area. These are my own personal interpretations of them…they are not taken from Wikipedia.


Vulnerability

This term usually refers to problems in software products: bugs, bad programming style or logical problems in the implementation of software. Software is not perfect, and maybe someone can argue that it cannot be. Furthermore, the people who build the software are even less perfect, so it is safe to assume such problems will always exist in software products. Vulnerabilities exist in operating systems, in runtime environments such as Java and .Net, and in specific applications, whether written in high-level languages or native code. Vulnerabilities also exist in hardware products, but for the sake of this post I will focus on software, as the topic is broad enough even with this focus. One of the main contributors to the existence and growth in the number of vulnerabilities is the ever-growing complexity of software products - it simply increases the odds of creating new bugs which are difficult to spot due to that complexity. A vulnerability always relates to a specific version of a software product, which is basically a static snapshot of the code used to build the product at a specific point in time. Time plays a major role in the business of vulnerabilities, maybe the most important one. Assuming vulnerabilities exist in all software products, we can categorize them into three groups based on the level of awareness of them:
  • Unknown Vulnerability - A vulnerability which exists in a specific piece of software and of which no one is aware. There is no proof that such a vulnerability exists, but experience teaches us that it does and is just waiting to be discovered.
  • Zero Day - A vulnerability which has been discovered by a certain group of people, or a single person, while the vendor of the software is not aware of it, so it is left open, with no fix and no awareness of its presence.
  • Known Vulnerabilities - Vulnerabilities which have been brought to the awareness of the vendor and of customers, either privately or as public knowledge. Such vulnerabilities are usually identified by a CVE number. During the first period following discovery, the vendor works on a fix, or a patch, which will become available to customers; until customers update the software with the fix, the vulnerability remains open to attacks. So in this category, each respective installation of the software can have patched or un-patched known vulnerabilities. In a way, the patch always comes as a new software version, so a specific product version either contains un-patched vulnerabilities or it doesn't - there is no such thing as a patched vulnerability, there are only new versions with fixes.
There are other ways to categorize vulnerabilities: based on the exploitation technique, such as buffer overflow or heap spraying, or based on the type of bug which led to the vulnerability, such as a logical flaw in the design or a wrong implementation.
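The point that there is "no such thing as a patched vulnerability, only new versions with fixes" reduces, in code, to a simple version comparison. The version numbers here are illustrative.

```python
def parse(version: str) -> tuple:
    """'2.4.1' -> (2, 4, 1), enabling component-wise comparison."""
    return tuple(int(part) for part in version.split("."))

def is_exposed(installed: str, first_fixed: str) -> bool:
    """True while the installed version predates the version carrying the fix."""
    return parse(installed) < parse(first_fixed)

print(is_exposed("2.4.1", "2.4.3"))  # → True  (still running a vulnerable build)
print(is_exposed("2.4.3", "2.4.3"))  # → False (the fix is on board)
```

This is why inventory of installed versions is the backbone of vulnerability management: exposure to a known vulnerability is entirely determined by which version each machine runs.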


Exploit

A piece of code which abuses a specific vulnerability in order to make the attacked software do something unexpected: either gaining control of the execution path inside the running software so the exploit can run its own code, or just achieving a side effect such as crashing the software or causing it to do something unintended by its original design. Exploits are usually highly associated with malicious intentions, although from a technical point of view an exploit is just a mechanism to interact with a specific piece of software via an open vulnerability - I once heard someone refer to it as an "undocumented API" :).

This picture from the Infosec Institute describes the vulnerability/exploit life cycle in an illustrative manner:


The time span colored in red marks the period during which a discovered vulnerability is considered a zero day, and the span colored in green marks the period during which it is known but un-patched. The post-disclosure risk is always dramatically higher, since the vulnerability becomes public knowledge and the bad guys can and do exploit it far more frequently than in the earlier stage. Shortening the patching period is the only step that can be taken to reduce this risk.
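The two risk windows in that life cycle are simple date arithmetic. All the milestone dates below are hypothetical, invented for illustration – they do not describe any real CVE:

```python
from datetime import date

# Hypothetical milestones for a single vulnerability (illustrative only).
found_by_attacker = date(2017, 1, 10)   # zero-day window opens
public_disclosure = date(2017, 3, 1)    # vendor and public learn of it
patch_released    = date(2017, 3, 20)   # fix ships in a new version
patch_applied     = date(2017, 5, 15)   # this enterprise actually updates

zero_day_window  = (public_disclosure - found_by_attacker).days  # red span
unpatched_window = (patch_applied - public_disclosure).days      # green span

print(zero_day_window, unpatched_window)  # 50 75
```

Note that the green window depends on the customer, not the vendor: the patch was ready after 19 days, but the system stayed exposed for 75.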

The Math Behind Targeted Attacks

Most targeted attacks today use the exploitation of vulnerabilities to achieve three goals:
  • Penetrate an employee's end-point computer using techniques such as malicious documents sent by email or malicious URLs. Those documents/URLs contain malicious code which targets specific vulnerabilities in host programs such as the browser or the document reader, and during a seemingly innocent reading experience the malicious code sneaks into the host program – the penetration point.
  • Gain higher privilege once malicious code already resides on a computer. Often the code that sneaked into the host application does not have enough privilege to continue the attack on the organization, so it exploits vulnerabilities in the application's runtime environment – the operating system or the JVM, for example – to gain elevated privileges.
  • Lateral movement - once the attack is inside the organization and wants to reach other areas of the network to achieve its goals, it often exploits vulnerabilities in other systems along its path.
So, from the point of view of the attack itself, we can identify three main stages:
  • An attack at Transit Pre-Breach - The attack is on its way to the target, or already at the target, prior to exploitation of a vulnerability.
  • An attack at Penetration - The attack is exploiting a vulnerability successfully to get inside.
  • An attack at Transit Post-Breach - The attack has started running inside its target and within the organization.
The following diagram quantifies the complexity inherent in each attack stage, from both the attacker's and the defender's side; below the diagram are descriptions of each area, followed by a concluding part:

Ability to Detect an Attack at Transit Pre-Breach

Those are the red areas in the diagram. Here an attack is on its way, prior to exploitation. "On its way" means the enterprise can scan the binary artifacts of the attack – network packets, a visited website, or a document traveling via email servers or arriving at the target computer, for example. This approach is called static scanning. The enterprise can also open the artifact in a controlled environment (e.g., opening a document in a sandbox) and try to identify behavior that resembles a known attack pattern – this is called behavioral scanning. Attacks pose three challenges to security systems at this stage:
  • Infinite Signature Mutations - Static scanners look for specific binary patterns in a file which match a malicious code sample in their database. Attackers have long since outsmarted these tools: they have automation for changing those signatures randomly, with the ability to create an infinite number of static mutations. A single attack can therefore take an infinite number of forms in its packaging.
  • Infinite Behavioural Mutations - The security industry's evolution beyond static scanners was toward behavioral scanners, where a "signature" of behavior eliminates the problems induced by static mutations and the sample base of behaviors is dramatically smaller. A single behavior can be decorated with many static mutations, and behavioral scanners cut through this noise. The challenges attackers pose here make behavioral mutations effectively infinite as well, and they are two-fold:
    • Infinite number of mutations in behaviour - Just as attackers outsmart static scanners by creating an infinite number of static decorations on an attack, here they can insert dummy steps or reshuffle the attack steps so that the attack produces the same result while presenting a different behavioral pattern. The spectrum of behavioral mutations seemed at first narrower than that of static mutations, but with the advancement of attack generators even that has been achieved.
    • Sandbox evasion - Attacks which are scanned for bad behavior in a sandboxed environment have developed advanced capabilities to detect whether they are running in an artificial environment, and if they detect one they pretend to be benign, which means no exploitation takes place. This is an ongoing race between behavioral scanners and attackers, and the attackers currently seem to have the upper hand.
  • Infinite Obfuscation - This technique is related to the infinite static mutations factor but deserves specific attention. To deceive static scanners, attackers hide the malicious code itself by running a transformation on it, such as encryption, and shipping a small piece of code that decrypts it on the target just prior to exploitation. The range of options for obfuscating code is again infinite, which makes the static scanners' work even harder.
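Both the static-mutation and obfuscation points above can be demonstrated in a few lines. This is a benign sketch of my own: the "payload" is an arbitrary byte string, the signature is a hash, and the obfuscation is a toy single-byte XOR – real attacks use far stronger transformations, but the principle is identical.

```python
import hashlib

payload = b"malicious-payload-bytes"
mutated = payload + b"\x00"  # attacker appends one junk byte

def sig(b: bytes) -> str:
    """A naive static signature: a hash of the exact bytes."""
    return hashlib.sha256(b).hexdigest()

# One meaningless byte is enough to break an exact-match signature.
print(sig(payload) == sig(mutated))  # False

# Trivial obfuscation: XOR with a one-byte key. The scanner sees neither
# the original bytes nor anything resembling them on the wire.
key = 0x5A
obfuscated = bytes(b ^ key for b in payload)

# The small stub shipped with the attack reverses it on target,
# just before exploitation.
restored = bytes(b ^ key for b in obfuscated)
print(restored == payload)  # True
```

Since the mutation and the key can both vary freely, the space of byte-level disguises for one identical attack really is unbounded, which is exactly the defender's problem described above.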
This makes the challenge of capturing an attack prior to penetration very difficult, bordering on impossible, and it only gets harder with time. I am not by any means implying such security measures don't serve an important role – today they are the main safeguards keeping the enterprise from turning into a zoo. I am just saying it is a very difficult problem to solve, and that there are other areas with better ROI (if such a thing as security ROI exists) for a CISO to invest in.

Ability to Stop an Attack at Transit Post Breach

Those are the black areas in the diagram. An attack which has already gained access to the network can take an infinite number of possible paths to achieve its goals. Once an attack is inside the network, the relevant security products try to identify it. Such technologies include big data/analytics which try to identify network activities that imply malicious activity, and network monitors which listen to the traffic and try to identify artifacts or static and behavioral patterns of an attack. These tools rely on different informational signals which serve as attack indicators. Attacks pose multiple challenges to security products at this stage:
  • Infinite Signature Mutations, Infinite Behavioural Mutations, Infinite Obfuscation - these are the same challenges as described before since the attack within the network can have the same characteristics as the ones before entering the network.
  • Limited Visibility on Lateral Movement - Once an attack is inside, its next steps are usually to get a stronghold in different areas of the network, and such movement is hardly visible because it eventually consists of legitimate actions – once an attacker gains higher privilege, it conducts actions which are legitimate but highly privileged, and it is very difficult for a machine to separate the good ones from the bad. Add to that the fact that persistent attacks usually use technologies which keep them stealthy and invisible.
  • Infinite Attack Paths - The path an attack can take inside the network, especially in a targeted attack whose goals are unknown to the enterprise, has infinite options.
This makes the ability to deduce, from specific signals coming from different sensors in the network, that there is an attack – let alone its boundaries and goals – very limited. Sensors deployed on the network never provide true visibility into what's really happening, so the picture is always partial. Add deception techniques about the path of attack and you stumble into a very difficult problem. Again, I am not arguing that security analytics products which focus on post-breach are unimportant; on the contrary, they are very important. It is just the beginning of a very long path towards real effectiveness in this area. Machine learning is already playing a serious role, and AI will definitely be an ingredient in a future solution.
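To make the partial-signal problem concrete, here is a deliberately toy analytic of my own invention: flag process launches whose executable is rare across the fleet. The event data is hypothetical, and note how the approach embodies the limitation above – rarity is not malice, and a privileged attacker using legitimate tools would score as normal.

```python
from collections import Counter

# Hypothetical process-launch events: (host, executable path).
events = [
    ("host1", "C:\\Windows\\explorer.exe"),
    ("host2", "C:\\Windows\\explorer.exe"),
    ("host3", "C:\\Windows\\explorer.exe"),
    ("host1", "C:\\Users\\bob\\AppData\\tmp\\x.exe"),  # rare -> suspicious?
]

freq = Counter(path for _, path in events)
total = len(events)

# Flag events whose executable appears in under 50% of observations.
suspicious = [(h, p) for h, p in events if freq[p] / total < 0.5]
print(suspicious)  # [('host1', 'C:\\Users\\bob\\AppData\\tmp\\x.exe')]
```

Real analytics layer many such weak signals, but each one inherits the same gap: the sensor only sees what it sees, and legitimate-looking actions pass through.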

Ability to Stop an Attack at Penetration Pre-Breach and on Lateral Movement

Those are the dark blue areas in the diagram. Here the challenge is reversed, to the attacker's disadvantage: there is only a limited number of entry points into the system. Entry points a.k.a. vulnerabilities. Those are:
  • Unpatched Vulnerabilities – These are open "windows" which have not been closed yet. The main challenge here for the IT industry is automation, dynamic updating capabilities, and prioritization. It is definitely an open gap which can potentially be narrowed down to the point of insignificance.
  • Zero Days – This is an unsolved problem. There are many approaches to it, such as ASLR and DEP on Windows, but still no bulletproof solution. In the startup scene, I am aware of quite a few companies working very hard on one. Attackers identified this soft belly a long time ago, and it is the weapon of choice for targeted attacks, where it can yield serious gains for the attacker.
This area presents a definite problem, but it seems the most likely to be solved earlier than the others, mainly because the attacker is at its greatest disadvantage at this stage: before getting into the network it has infinite options to disguise itself, and after getting in, the action paths it can take are infinite – but here it must go through a specific window, and there aren't too many of those left unprotected.
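The prioritization challenge from the un-patched bullet above can be sketched in a few lines. The findings list is entirely hypothetical (placeholder CVE IDs, invented scores), and the ranking rule – internet-exposed first, then by severity – is one simple heuristic among many, not a standard:

```python
# Hypothetical known-unpatched findings: (cve_id, cvss_score, internet_exposed).
findings = [
    ("CVE-0000-0001", 9.8, True),
    ("CVE-0000-0002", 5.3, False),
    ("CVE-0000-0003", 7.5, True),
    ("CVE-0000-0004", 9.1, False),
]

# Naive prioritization: internet-exposed findings first, then by severity.
# sorted() is ascending, so we negate: False (exposed) sorts before True,
# and -score puts higher severity first.
ranked = sorted(findings, key=lambda f: (not f[2], -f[1]))

for cve, score, exposed in ranked:
    print(cve, score, "internet-facing" if exposed else "internal")
```

Even this toy version shows why a raw severity sort is not enough: the 9.1 internal finding ranks below a 7.5 internet-facing one.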

Players in the Area of Penetration Prevention

There are multiple companies/startups which are brave enough to tackle the toughest challenge in the targeted attacks game - preventing infiltration - I call it facing the enemy at the gate. In this ad-hoc list I have included only technologies which aim to block attacks in real time - there are many other startups which approach static or behavioral scanning in a unique and disruptive way, such as Cylance, Cybereason, or Bit9 + Carbon Black (list from @RickHolland), which were excluded for the sake of brevity and focus.

Containment Solutions

Technologies which isolate user applications within a virtualized environment. The philosophy is that even if an exploitation occurs in the application, it won't propagate to the computer environment and the attack will be contained. From an engineering point of view, I think these teams have the most challenging task: isolation and usability pull in opposite directions where productivity is concerned, and it all involves virtualization on an end-point, which is a difficult task on its own. Leading players are Bromium and Invincea, well-established startups with very good traction in the market.

Exploitation Detection & Prevention

Technologies which aim to detect and prevent the actual act of exploitation. They range from companies like Cyvera (now Palo Alto Networks' Traps product line) which aim to identify patterns of exploitation, through technologies such as ASLR/DEP and EMET which break the assumptions of exploits by modifying the inner structures of programs and setting traps at "hot" places susceptible to attack, up to startups like Morphisec which employs a unique moving-target concept to deceive and capture attacks in real time. Another long-time player, and maybe the most veteran in the anti-exploitation field, is Malwarebytes. They have a comprehensive anti-exploitation offering, with capabilities ranging from in-memory deception and trapping techniques up to real-time sandboxing.
At the moment the endpoint market is still controlled by the marketing money poured in by the major players, whose solutions are growing ineffective at an accelerating pace. I believe it is a transition period, and you can already hear voices saying the endpoint market needs a shakeup. In the future, the anchor of endpoint protection will be real-time attack prevention, with static and behavioral scanning playing a minor, feature-completion role. So pay careful attention to the technologies mentioned above, as one of them (or maybe a combination:) will bring the "force" back into balance:)

Advice for the CISO

Invest in closing the gap posed by vulnerabilities. From patch automation and prioritized vulnerability scanning up to security code analysis for in-house applications – it is all worth it. Furthermore, seek out solutions which deal directly with the problem of zero days; there are several startups in this area, and their contribution can have a much higher magnitude than any other security investment in the post- or pre-breach phases.

Time to Re-think Vulnerabilities Disclosure

Public disclosure of vulnerabilities has always bothered me, and I wasn't able to put a finger on the reason until now. As someone who has been personally involved in vulnerability disclosure, I highly appreciate the contribution security researchers make to awareness, and it is very hard to imagine what the world would be like without disclosures. Still, the way attacks are crafted today and their links to such disclosures got me thinking about whether we are doing it in the best way possible. So I tweeted this and got a lot of "constructive feedback":) from the team at the cyber labs at Ben-Gurion, along the lines of how do I dare?

So I decided to build my argument properly.

Vulnerabilities

The basic fact is that software has vulnerabilities. Software gets more and more complex over time, and this complexity invites errors. Some of those errors can be abused by attackers to exploit the systems the software is running on. Vulnerabilities split into two groups: the ones the vendor is aware of and the ones which are unknown – and it is unknown how many unknowns are hiding inside each piece of code.

Disclosure

There are many companies, individuals, and organizations which search for vulnerabilities in software, and once they find one they disclose their findings. They disclose at least the mere existence of the vulnerability to the public and the vendor, and many times they even publish proof-of-concept code which can be used to exploit the found vulnerability. Such disclosure serves two purposes:
  • Making users of the software aware of the problem as soon as possible
  • Making the vendor aware of the problem so it can create and send a fix to their users
After the vendor is aware of the problem, it is their responsibility to notify users formally and then to create an update for the software which fixes the bug.

Timelines

Past to Time of Disclosure - The unknown vulnerability waits silently, eager to be discovered.
Time of Disclosure to Patch is Ready - Everyone knows about the vulnerability, the good guys and the bad, and it sits on production systems waiting to be exploited by attackers.
Patch Ready to System is Fixed - During this period too, the vulnerability is still there waiting to be exploited.

The following diagram demonstrates those timelines in relation to the ShellShock bug.

Summary

So indeed the disclosure process eventually ends with a fixed system, but there is a long period of time in which systems are vulnerable and attackers don't need to work hard on uncovering new vulnerabilities, since they have the disclosed one waiting for them. I got thinking about this after I saw this stat via Tripwire: "About half of the CVEs exploited in 2014 went from publishing to pwn in less than a month" (DBIR, pg. 18). This means that half of the exploits identified during 2014 were based on published CVEs (CVE is a public vulnerability database), and although some may argue that the attackers could have had the same knowledge of those vulnerabilities before they were published, I say that is far-fetched. If I were an attacker, what would be easier than going over the recently published vulnerabilities, finding one suitable for my target, and building an attack around it? Needless to say, there are tools which even provide examples for that, such as Metasploit. Of course, the time window is not infinite, as it is in the case of an unknown vulnerability no one knows about, but still, a month or more is enough to get the job done.
Last Words

A new disclosure process should be devised, one which reduces the risk level from the time of disclosure until a patch is ready and applied. Otherwise, we are all just helping the attackers while trying to save the world.

Most cyber attacks start with an exploit – I know how to make them go away

Yet another new ransomware with a new sophisticated approach. Pay attention that the key section in the description of the way it operates is: "The malware arrives to affected systems via an email attachment. When users execute the attached malicious JavaScript file, it will download four files from its C&C server." When users execute the JavaScript file, the JavaScript is loaded into the browser application and exploits the browser in order to get in and then start all the heavy lifting. The browser is vulnerable, software is vulnerable – it's a given fact of an imperfect world. I know a startup company called Morphisec which is eliminating those exploits in a very surprising and efficient way. In general, vulnerabilities are considered a chronic disease, but it does not have to be this way. Some smart guys and girls are working on a cure:)

Remember, it all starts with the exploit.

No One is Liable for My Stolen Personal Information

The main victims of any data breach are actually the people – the customers whose personal information has been stolen – and oddly, they don't get the attention they deserve. Questions like what the impact of the theft is on me as a customer, what I can do about it, and whether I deserve some compensation are rarely dealt with publicly. Customers face several key problems when their data is stolen, questions such as:

  • Was my data stolen at all? Even if there was a breach, it is not clear whether my specific data was taken. Also, the multitude of places where my personal information resides makes it impossible to track whether and from where my data was stolen.
  • What pieces of information about me were stolen, and by whom? I deserve to know this more than anyone else, mainly because of the next bullet.
  • What risks am I facing now, after the breach? In the case of a stolen password that is reused in other services, I can go and change it manually, but when my social security number is stolen, what does that mean for me?
  • Whom can I contact in the breached company to answer such questions?
  • And most important: was my data protected properly?
The main point here is that companies are not obligated, either legally or socially, to be transparent about how they protect their customers' data. This lack of transparency and of standards for how to protect data creates an automatic lack of liability and serious confusion for customers. In other areas, such as preserving customer privacy and terms of service, the protocol between a company and its customers is quite standardized, and although not enforced by regulation, it has substance to it. Companies publish their terms of service (TOS) and privacy policy (PP), and both sides rely on these statements. The recent breaches at Slack and JPMorgan are great examples of the poor state of customer data protection - in one case they decided to implement two-factor authentication only afterwards, and I am not sure why they didn't do it before, and in the second case two-factor authentication was missing in action. These are just two examples which represent the norm across most companies in the world. What if each company adopted a customer data protection policy (CDPP) - an open one - where such a document would specify clearly, on the company website, what kind of data it collects and stores and what security measures it applies to protect it? From a security point of view such information cannot really cause harm, since attackers have better ways to learn about the internals of the network, and from a customer relationship point of view, it is a must. Such a CDPP statement can include:
  • The customer data elements collected and stored
  • How it is protected against malicious employees
  • How it is protected from third parties which may have access to the data
  • How it is protected when it is stored and when it is moving inside the wires
  • How the company is expected to communicate with customers when a breach happens - who is the contact person?
  • To what extent the company is liable for stolen data
Such a document could dramatically increase the confidence level for us, the customers, when selecting a company to work with, and could serve as a basis for innovation in tools which aggregate and manage such information.
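To show that such a CDPP really could be aggregated and compared by tools, here is a hypothetical machine-readable sketch. Every field name and value below is invented for illustration – the CDPP is the author's proposal, not an existing standard:

```python
import json

# Hypothetical CDPP document mirroring the bullets above; field names and
# values are illustrative only, not an existing schema.
cdpp = {
    "data_collected": ["email", "hashed_password", "purchase_history"],
    "insider_threat_controls": ["role-based access", "audit logging"],
    "third_party_access": {
        "processors": ["payment provider"],
        "contractual_controls": True,
    },
    "encryption": {"at_rest": "AES-256", "in_transit": "TLS 1.2+"},
    "breach_contact": "security@example.com",
    "liability": "limited to direct damages, see terms of service",
}

# Published as JSON on the company website, it becomes trivially
# machine-readable for comparison and aggregation tools.
print(json.dumps(cdpp, indent=2))
```

A customer-facing tool could then fetch such documents from many companies and rank them by, say, whether encryption at rest is declared at all.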
