Cryptography is a serious topic: a technology built on a mathematical foundation that poses an ever-growing challenge for attackers. On November 11th, 2016, Motherboard published a piece about the FBI's ability to break into suspects' locked phones. Contrary to the FBI's constant complaints about "going dark" in the face of strong encryption, the number of phones they managed to break into was relatively high. Such a high success ratio doesn't quite add up – it is not clear what was special about the devices they failed to break into. Similar phone models run the same crypto algorithms, and if there was a way to break into one phone, why couldn't they break into all of them? Maybe the FBI found an easier path into the locked phones than breaking encryption. Possibly they crafted a piece of code that exploits a vulnerability in the phone OS, perhaps a zero-day known only to them. Even a locked smartphone keeps parts of its operating system active, and illicit access to those active areas via exploitation can circumvent the encryption altogether. I don't know what actually happened there, and this is all speculation, but the story provides a glimpse into the other side, the attacker's point of view, and that is the topic of this post. Attackers have an easy life: they are not bound by the rules of the system they want to break into, and they need to find only one unwatched hole. Defenders, who carry the burden of protecting the whole system, must make sure every potential hole is covered while staying bound to the system's rules – an asymmetric burden that results in an unfair advantage for attackers.
The Path of Least Resistance
If attackers had an ideology and laws, the governing one would be "walk the path of least resistance" – it is reflected over and over in their mentality and methods of operation.
Wikipedia's explanation fits the hacker's state of mind perfectly:
The path of least resistance is the physical or metaphorical pathway that provides the least resistance to forward motion by a given object or entity, among a set of alternative paths.
In the cyber world there are two dominant roles, the defender and the attacker, and both deal with exactly the same subject matter: the existence of an attack on a specific target. I used to think that the two sides' views would be exact opposites of each other, since the subject – the attack – is the same and their interests are inversely aligned, but that is not the case. For the sake of argument I will dive into the domain of enterprise security, though the logic serves as a general principle applicable to other security domains. In the enterprise world the security department, the defender, does roughly two things. First, it needs to know the architecture and assets of the system it protects very well: its structures, its interconnections with other systems, and its links to the external world. Second, it needs to devise defense mechanisms and strategies that allow the system to keep functioning while eliminating possible entry points and paths that attackers could abuse on their way in. As a side note, achieving this fine balance resembles the mathematical branch of constraint satisfaction problems. Now let's switch to the other point of view, the attacker's: the attacker needs to find only a single path into the enterprise in order to achieve their goal. No one knows the attacker's actual goal in advance, but it probably fits one of the following categories: theft, extortion, disruption, or espionage. Within each category the goals are very specific, so the attacker is laser-focused on a specific target, and the learning curve required for building an attack is limited and bounded to that specific interest. For example, an attacker who wants only the information about employee salaries – a document that probably resides in the headquarters office – does not need to care about the overall data center network layout.
Another big factor in favor of attackers is that some of the possible paths toward the target include the human factor. And humans, as we all know, have flaws – vulnerabilities, if you like – and from the attacker's standpoint these weaknesses are perfectly good means for achieving the goal. Of all the paths an attacker could theoretically select, the ones with the highest success ratio and minimal effort are preferable – hence the path of least resistance.
The Most Favorite Path in The Enterprise World
Today the most popular path of least resistance is to infiltrate the enterprise by exploiting human weaknesses, usually in the form of minimal online trust building where the target employee is eventually induced to activate a malicious piece of code, for example by opening an email attachment. The software stack employees have on their computers is quite standard across most organizations: mostly MS-Windows operating systems, the same document processing applications, and the highly popular web browsers. This stack is easily replicated in the attacker's environment and used to find potential points of infiltration in the form of unpatched vulnerabilities. The easiest way to find a suitable vulnerability is to review the most recent vulnerabilities uncovered by others and reported as CVEs. There is a window of opportunity for attackers between the time the public is made aware of a new vulnerability and the time an organization actually patches the vulnerable software. Some statistics say that in many organizations this window of opportunity can stretch into months, as rolling out patches across an enterprise is painful and slow. Attackers who want to really fly below the radar and reach high success ratios search for zero-day vulnerabilities, or simply buy them somewhere. Finding a zero-day is possible because software has become overly complex, with many different technologies embedded in products, which increases the chances that vulnerabilities exist – the patient and persistent attacker will always find their zero-day. Once an attacker acquires that special exploit code, the easier part of the attack path comes into play: finding a person in the organization who will open the malicious document. This method of operation is orders of magnitude easier than learning the organization's internal structures in detail and finding vulnerabilities in proprietary systems such as routers and server applications, where access to the technology is not straightforward. In the recent WannaCry attack we witnessed an even easier path into organizations: a weakness in enterprise computers with an open network vulnerability that could be exploited from the outside without any human intervention.
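The "window of opportunity" described above can be made concrete with a small sketch. The helper below is illustrative only – the machine names and dates are invented – and it simply measures, per machine, how many days a publicly disclosed vulnerability stayed unpatched and therefore exploitable:

```python
from datetime import date
from typing import Optional

def exposure_days(disclosed: date, patched: Optional[date], today: date) -> int:
    """Days between public CVE disclosure and the actual patch rollout.

    If the machine is still unpatched, the window is still open and is
    measured up to today. A patch applied before disclosure yields 0.
    """
    end = patched if patched is not None else today
    return max((end - disclosed).days, 0)

# Invented example fleet: disclosure on 2017-03-14, audited on 2017-07-01.
fleet = {
    "workstation-042": exposure_days(date(2017, 3, 14), date(2017, 6, 2), date(2017, 7, 1)),
    "workstation-117": exposure_days(date(2017, 3, 14), None, date(2017, 7, 1)),  # never patched
}
```

Summing or averaging such per-machine windows is one way a defender could quantify how far patch rollout lags behind disclosure across the estate.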
Going back to the case of the locked phones: it is far easier to find a vulnerability in the code of the operating system that runs on the phone than to break the crypto and decrypt the encrypted information.
We Are All Vulnerable
Human vulnerabilities span beyond inter-personal weaknesses such as deceiving someone into opening a malicious attachment. They also exist in the products we design and build, especially in the worlds of hardware and software, where complexity has surpassed humans' ability to comprehend it. Human weakness extends to the misconfiguration of systems, one of the easiest and most favored paths for cyber attackers. The world of insider threats, too, is frequently based on human weaknesses exploited and extorted by adversaries. Attackers have found their golden path of least resistance, and it always runs along the boundaries of human imperfection.
The only way for defenders to handle such inherent weaknesses is to break the path of least resistance into parts and make the easier parts more difficult. That would shift the attackers' method of operation and send them searching for other easy ways in, which will hopefully become harder overall over time.
Deep into the Rabbit Hole
Infiltrating an organization by inducing an employee to activate malicious code rests on two core weaknesses: the human factor, which is quite easy to exploit, and the ease of finding a technical vulnerability in the software used by the employee, as described earlier. There are multiple defense approaches addressing the human factor, mostly revolving around training and education, and the expected improvement is linear and slow. Addressing the second, technical weakness is today one of the main lines of business in the world of cybersecurity: endpoint protection, and more precisely preventing infiltration.
Tackling The Ease of Finding a Vulnerability
Vulnerability disclosure practices, which serve as the basis for many attacks in the window of opportunity, have been scrutinized for many years, and there is real progress toward a fine balance between awareness and risk aversion. Still, we are not there yet, since there is no bulletproof way to isolate attackers from this public knowledge. The area of advanced threat intelligence collaboration tools may evolve in that direction, though it is too early to say. It is a tricky matter to solve, as it is everybody's general problem and at the same time nobody's specific problem. The second challenge is that if a vulnerability exists in application X, and there is malicious code that can exploit it, that code will work anywhere application X is installed.
Different Proactive Defense Approaches
There are multiple general approaches towards preventing such an attack from taking place:
Looking for Something
This is the original paradigm of anti-viruses: searching data for known digital signatures of malicious code. This inspection takes place when data flows through the network, when it is moved around in the computing device's memory, and at rest when it is persisted as a file (unless it is a fully in-memory attack). Due to attackers' sophistication with code obfuscation and polymorphism, where infinite variations of digital signatures of the same malicious code can be created, this approach has become less effective. The signature approach remains highly effective against old threats spreading across the Internet or viruses written by novice attackers. In the layered defense thesis, signatures are the lowest defense line and serve as an initial filter for the noise.
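A minimal sketch of the signature idea, with an invented payload standing in for real malware: a real engine matches byte patterns at many offsets and against large signature databases, but the core lookup – and its brittleness against polymorphism – can be shown with a whole-payload hash:

```python
import hashlib

# Invented "known bad" set: in practice this would be a large signature
# database shipped by the anti-virus vendor.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious-test-payload").hexdigest(),
}

def is_known_malicious(data: bytes) -> bool:
    """Return True if the payload's hash matches a known signature."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256

# The exact payload is caught...
assert is_known_malicious(b"malicious-test-payload")
# ...but flipping a single byte (trivial polymorphism) evades the signature.
assert not is_known_malicious(b"malicious-test-payloae")
```

The last two lines are the whole story of why obfuscation and polymorphism eroded this approach: any mutation that changes even one byte produces a different hash.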
Looking at Something
Here, instead of looking for the digital fingerprint of a specific virus, the search is for behavioral patterns of the malicious code. Behavioral patterns mean, for example, the unique sequence of system APIs accessed, the functions called, and the execution frequencies of different parts of the virus's code.
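As a hedged sketch of what "a unique sequence of system APIs" might mean in practice, the toy detector below flags a process whose observed call trace contains a suspicious ordered subsequence. The pattern shown resembles a classic Windows process-injection chain, but both the pattern and the trace here are illustrative, not taken from a real product:

```python
from typing import Iterable, List

# Illustrative behavioral signature: these API names, in this order,
# are characteristic of code injecting itself into another process.
SUSPICIOUS_PATTERN: List[str] = [
    "OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread",
]

def contains_subsequence(trace: Iterable[str], pattern: List[str]) -> bool:
    """True if `pattern` appears in order (not necessarily contiguously) in `trace`."""
    it = iter(trace)
    # `call in it` advances the iterator, so order is enforced.
    return all(call in it for call in pattern)

# Invented call trace with benign calls interleaved among the suspicious ones.
trace = [
    "NtQuerySystemInformation", "OpenProcess", "VirtualAllocEx",
    "ReadFile", "WriteProcessMemory", "CreateRemoteThread", "CloseHandle",
]
```

Note how easily this also illustrates the approach's weakness discussed below: an attacker who reorders the calls or routes them through a different API achieves the same effect while the pattern match fails.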
This category, invented quite a long time ago, is enjoying a renaissance thanks to the advanced pattern recognition capabilities of artificial intelligence. The downside of AI in this context is inherent in the way AI works: fuzziness. Fuzzy detection leads to false alarms, a phenomenon that aggravates the already growing shortage of analysts needed to decide which alarms are real. The false-alarm rates I hear about today still represent the majority of alerts, in the high double digits, and some vendors solve this problem by providing full SIEM management behind the scenes that includes filtering out false alarms manually.
Another weakness of this approach is that attackers have evolved to mutate the behavior of the attack, creating variations on the virus's logic while making sure the end result stays the same – variations that go unnoticed by the pattern recognition mechanism. There is a field called adversarial AI which covers this line of thinking. The most serious drawback of this approach is that these mechanisms are blind to in-memory malicious activities: an inherent blindness to a big chunk of the exploitation logic that is, and will always stay, in memory. This blind spot has been identified by attackers and is being abused with file-less attacks and the like.
This analysis reflects the current state of AI as integrated and commercialized in the cybersecurity domain of endpoint threat detection. AI has made major advances recently that have not yet been implemented in this cyber domain – developments that could create a totally different impact.
There is a rising concept in the world of cybersecurity which aims to tackle the ease of learning a target environment and creating exploits that work on any similar system. The concept is called moving target defense, and its premise is that if the inner parts of a system are known only to the system's legitimate users, any attack attempt by outsiders will be thwarted. It is essentially an encapsulation concept, similar to the one in the object-oriented programming world, where external functionality cannot access the inner workings of a module without permission. In cybersecurity the implementation differs according to the technical domain, but it preserves the same information-hiding principle. This emerging category is highly promising toward the goal of changing the cyber power balance by taking attackers off their current path of least resistance. Moving target defense innovation exists in different domains of cybersecurity. In endpoint protection it strikes at the heart of the attacker's assumption that the internal structures of applications and the OS stay the same and that their exploit code will work perfectly on the target. The concept is quite simple to understand (though very challenging to implement): continuously move around and change the internal structures of the system so that legitimate code keeps functioning as designed, while malicious code that makes assumptions about the internal structure fails immediately. This defense paradigm seems highly durable, as it is agnostic to the type of attack.
The focus of the security industry should be on devising mechanisms that make the current popular path of least resistance not worthwhile, forcing attackers to waste time and energy searching for a new one.