Blog Posts

Will Artificial Intelligence Lead to a Metaphorical Reconstruction of The Tower of Babel?

The story of the Tower of Babel (or Babylon) has always fascinated me: God felt seriously threatened by humans, but only if they all spoke the same language. To prevent that, God confused the languages spoken by the people building the tower and scattered them across the earth. Regardless of one's religious beliefs about whether it actually happened, the underlying idea, that humans grow in power when they interconnect, is intriguing, and we live in times when this truth is evident. Writing, print, the Internet, email, messaging, globalization and social networks all connect humans, and these connections dramatically increase humanity's competence on many frontiers. The very development of science and technology can be attributed to communication among people; as Isaac Newton once put it, we are "standing on the shoulders of giants". Still, our spoken languages differ, and although English has become a de facto language for doing business in many parts of the world, many languages persist across the globe and the communication barrier remains. History has also seen multiple efforts to create a unified language, such as Esperanto, which eventually did not catch on. Getting everyone to speak the same language seems almost impossible: language is taught at a very early age, so changing it requires a level of synchronization, cooperation and motivation that does not exist. Even taking into account the recent, highly impressive developments in natural language processing achieving real-time translation, the presence of the medium will always interfere: a channel in the middle creates conversion overhead and loss of context and meaning.

Artificial intelligence may be on a path to change that, reverting the story of the Tower of Babel. Several emerging fields in AI have the potential to merge into a platform for communicating with others without going through the process of lingual expression and recognition:

Avatar to Avatar

One direction this may take is that our avatar, our digital residual image on some cloud, will be able to communicate with other avatars in a unified, language-agnostic way. Google, Facebook and Amazon are building complex profiling technologies aimed at understanding the user's needs, wishes and intentions. Currently, they do that in order to optimize their services. Adding to these capabilities a means of expressing intentions and desires on one side, and understanding them on the other, can lead to the avatar-to-avatar communication paradigm. It will take a long time until these avatars reflect our true selves in real time, but many communications can take place even before that. As an example, say my avatar knows what I want for my birthday, and my birthday is coming soon. My friend's avatar can ask my avatar at any point in time what I want to get for my birthday, and my avatar can respond in a highly relevant manner.

Direct Connection

The second path is in line with Elon Musk's Neuralink concept or Facebook's brain-integration idea. Here the brain-to-world connectors will be able not only to output our thoughts to the external world in digital form but also to receive other people's thoughts and translate them back for our brain: brain-to-world-to-brain. The interim translation format matters, though an efficient one can probably be achieved. One caveat in this direction is the assumption that our brain is structured in an agnostic manner, based on abstract concepts rather than on the constructs of the actual language a person used to learn about the world. If each brain's wiring is subjective to the individual's constructs of understanding, digesting others' thoughts will be impossible.

Final Thought

A big difference versus the times of Babylon is that there are so many more humans today than back then, which makes the potential of such wiring explosive.

Softbank eating the world

Softbank acquired Boston Dynamics, the maker of four-legged robots, alongside the secretive Schaft, a maker of two-legged robots. Softbank, the perpetual acquirer of emerging leaders, has entered a foray into artificial life, diluting its stakes in media and communications and setting a stronghold across the full supply chain of artificial life. The chain starts with chipsets, where ARM was acquired, though a quarter of the holdings were later divested since Google (with its TPU) and others have shown that specialized processors for artificial life are no longer the stronghold of giants such as Intel. The next move was acquiring a significant stake in Nvidia. Nvidia is the leader in general-purpose AI processing, but more interesting for Softbank are its themed vertical endeavors, such as the package for autonomous driving. These moves set a firm stance at the two ends of the supply chain: the processors and the final products. That lays down a perfect position for creating a Tesla-like company (through holdings) that can own the newly emerging segment of artificial creatures. It remains to be seen what the initial market for these creatures will be, consumer or defense, though Softbank's position in the chipset domain will let it make money either way. The big question is what the next big acquisition target in AI would be. It has to be a major anchor in the supply chain, right in between the chipsets and the final products, and such an acquisition will reveal the ultimate intention regarding which artificial creatures we will see first coming into reality. A specialized communications infrastructure for talking to the creatures efficiently (maybe their satellite activity?) as well as some cloud processing framework would make sense.
P.S. The shift from media into AI is a good hint as to which market has already matured and which one is emerging.
P.S. What does the fact that Alphabet sold Boston Dynamics say about Alphabet?
P.S. I am curious to see Softbank's stance towards patents in the world of AI.

Random Thoughts About Mary Meeker’s Internet Trends 2017 Presentation

Random thoughts regarding Mary Meeker's Internet Trends 2017 report:

Slide #5

The main question that popped into my mind was: where are the rest of the people? Today there are 3.4B internet users, while the world has a population of 7.5B. It could be interesting to see who the other, non-digital 4 billion humans are: interesting for understanding the growth potential of the internet user base (by the difficulty of penetrating the different remaining segments) as well as for identifying unique social patterns in general. Understanding the social demographics of the 3.4B connected people would also be valuable as a baseline for the rest of the statistics in the presentation. Another interesting fact is that global smartphone shipments grew by 3% while the smartphone installed base grew by 12% - that gap represents the pace of the slowdown in global smartphone market growth and can be used as a predictor for the coming years.

Slide #7

It is interesting to see that iOS market share in the smartphone world follows patterns similar to the Mac's in the PC world. In smartphones, Apple's market share is a bit higher than its PC market share but carries similar proportions.

Slide #13

The way ad spending fills the gap against time spent in each medium nicely follows the physical law of conservation of mass. Print out, mobile in.

Slide #17

Measuring advertising ROI is still a challenge even now that advertising channels have become fully digital - a symptom of the offline/online divide in conversion tracking, which has not been bridged yet.

Slide #18

It seems there is a connection between the massive popularity of ad blockers on mobile and the advertising potential on mobile. If so, the suggested potential cannot be fulfilled because of ad blockers and users' low tolerance for ads on mobile, which is perhaps the reason ad blockers are so popular on mobile in the first place.

Slide #25

99% accurate tracking is phenomenal, though the question is whether it can scale as a business model: will a big enough audience opt in to such tracking, and what will be done about the battery drain resulting from it? This hyper-monitoring, if achieved on a global scale, will become an interesting privacy and regulation debate.

Slide #47

Amazon Echo numbers are still small regardless of the hype level. It would be fascinating to see the level of usage of skills. The number of skills is very impressive but maybe misleading (many see a resemblance to the hyper-growth in apps). The growth in the apps world was not only in the number of apps created but also in the explosive growth of usage (downloads, purchases) - here we see only the inventory.

Slide #48

This, of course, is a serious turning point in the world of user interfaces and will be reflected in many areas, not only in home assistants.

Slide #81

2.4B gamers?!? The fine print says you need to play a game at least once in three months to count, which is not a gamer by my definition.

Slide #181

Do these numbers include shadow IT in the cloud, or do they reflect concrete usage of cloud resources by the enterprise? There is a big difference between an organization deploying data-center workloads into the cloud and using a product that is behind the scenes partially hosted in the cloud, such as Salesforce - a totally different state of mind in terms of overcoming cloud inhibitions.

Slide #183

The reduction in concerns about data security in the cloud is a good sign of maturity and adoption. The cloud can be as secure as any data-center application, and even more so, though many are still afraid of that uncertainty.

Slide #190

The reason cloud applications are categorized as not enterprise-ready is not necessarily their security weakness. The adoption of cloud products inside the enterprise follows other paths, such as the level of integration with other systems, customization fit to the specific industry, etc.

Slide #191

The reason for the weaponization of spam is simply the higher revenue potential for spam botnet operators. Sending plain spam can earn you money; sending malware can make you much more.

Slide #347

It is remarkable to see that the founders of the largest tech companies are second- and third-generation immigrants. That's all for now.

The Not So Peculiar Case of A Diamond in The Rough

IBM's stock was hit severely in recent months, mostly due to disappointment with the latest earnings report. It wasn't a real disappointment, but IBM had built up expectations around its ongoing turnaround, and the recent earnings announcement poured cold water on the growing enthusiasm. This post is about IBM's story, but it carries a moral which applies to many other companies going through disruption in their industry. IBM is an enormous business with many product lines, intellectual property reserves, large customer/partner ecosystems and a big pile of cash. IBM has been disrupted in the recent decade by various megatrends including cloud, mobile computing, software as a service and others. IBM started a turnaround which became visible to the investor community at the beginning of 2016, a significant change executed quite efficiently across different product lines. This disruption found many other tech companies unprepared as well - a classic tech disruption, where new entrants need to focus only on next-generation products while established players play catch-up. A seemingly unfair situation where the big players carry the burden of what was defined as fresh and innovative not so long ago. IBM's turnaround was about refocusing on cognitive computing, a.k.a. AI, and although it is being executed very professionally, the shackles of the past prevent IBM from pleasing the impatient investor community.

Can Every Business Turn Around?

A turnaround, or a pivot as it is called in the startup world, means changing the business plan of an existing enterprise towards a new market or audience requiring a different set of skills, products or technologies. Pivoting in the startup world is a special case of a general business turnaround. In a nutshell, every business at any point in time owns a set of offerings (products/technologies) and cash reserves. Each offering has customers, prospects and partners, and carries the costs incurred in creating and delivering it to the market. In an industry which is not being disrupted, the equation of success is quite simple: the money you make on sales of your offerings should be higher than the attached costs. In the early phases of new market creation, it makes sense to wait for that equation to come into play by investing more cash in building the right product as well as establishing excellent access to the market. Disruption is first spotted when it becomes hard to grow at the same or a higher rate, and a fundamental change to the offerings is required, such as rebuilding an offering from scratch. This happens when new entrants/startups have an economic advantage in entering the market, or when they create a new, overlapping market. When a market is in its early days of disruption, the large enterprises mostly watch and hope for the new trends to fade away. Once the winds of change blow too strong, new thinking is required.

A Disruption is Happening - Now What

Once the changes in the market ring the alarm bell at the top floors, management can take one or more of the following courses of actions:
  • Buy into the trend by acquiring technologies/products/teams/early market footprints. The challenges in this course are an efficient absorption of the acquired assets as well as an adaptation of the existing operations towards a new direction based on the newly acquired capabilities.
  • Create a new line of products and technologies in-house from scratch realigning existing operations into a dual mode of operation - maintaining the old vs. building the new. Dual offerings that co-exist until a successful internal transfer of leadership to the new product lines take place.
  • Build/Invest in a new external entity that is set to create the future offering in a detached manner. The ultimate and contradicting goal of the new business is to eventually cannibalize the existing product lines towards leadership in the market. A controlled competitor.
Each path creates a multitude of opportunities and challenges. Eventually, a gameplan should be devised based on the particular posture of the company and the target market supply chain.

Contemplating About A Turnaround

From a bird's-eye view, all forms of turnaround share common patterns. Every turnaround has costs: direct costs of the investment in new products and technologies, as well as indirect costs created by the organizational transformation - expenses incurred on top of keeping the existing business lines healthy and growing. These additional costs are taken from cash reserves or covered by new capital raised from investors. Either way, it is a limited pool of capital, which requires a well-balanced yet aggressive plan with almost no room for mistakes. Any mistake will hurt either the innovation efforts or the margins of the current lines of business, and for public companies neither is forgivable. Time is also critical here, and fast execution is key. If mistakes happen, the path can turn into a slippery slope very quickly. Besides the financial challenges of running a successful turnaround, there is a multitude of psychological, emotional and organizational issues hanging in the air. First and foremost is the feeling of loss around sunk costs. Usually, before a turnaround is grasped, there are many efforts to revive existing business lines with different investments, such as linear evolution of products, reorganizations, rebranding and new partnerships. These cost a lot of money, and by the time the understanding that it is not going to work finally sinks in, the burden of sunk costs has grown very fast. The second big issue is the impact of a turnaround on the organizational chart. People tend not to like changes and turnarounds. Top management is hyper-motivated thanks to the optimistic change consultants, but the employees who make up the hierarchies do not necessarily see the full picture, nor care about it. It comes down to every single individual who is part of the change: their thoughts about the impact on their career as well as their personal preferences and aspirations.
Spreading the change across the organization is a kind of black magic, and the ones who know how to do it are very rare. The key to a successful organizational change is to have change agents coming from within, not letting the change be driven by the consultants, who are anyway perceived as overnight guests. The third strategic concern is the underlying fear of cannibalization. Many times the successful path of a turnaround means death to existing business lines, and getting support for that across the board is problematic.

Should IBM Divest?

A tough question for an outsider like me, and I guess pretty challenging even for an insider. My view is that IBM has reached a firm stance in AI, a position that is becoming more challenging to maintain over time. AI has in magnitude more potential than the rest of the business, and these unique assets should be freed from the burden of the other lines of business. IBM should maintain strategic connections to the other divisions, as they are probably the best distribution channels for those cognitive capabilities.

The Private Case of Startup Pivots

A pivot in a startup is tricky and risky. First, there is the psychological barrier of admitting that the direction is wrong; contradicting the general atmosphere of boundless startup optimism is a challenge. On top of that, there will always be enough naysayers complaining that there is not sufficient proof that the startup is indeed headed in the wrong direction, not to mention the disbelievers who will require evidence before committing to a new direction. It is quite difficult to rationalize plans when decision making is anyway full of intuition with minimal history to rely on. Since the history of many startups is quite limited and their existence at the early stages depends on cash infusions, the act of pivoting, even if right and justified, is often a killer. There aren't many people in general who have the mental flexibility for a pivot, and you need everyone in the startup on board. The very few pivots I have seen succeed did well thanks to incredible leadership which made everyone follow it half blindly - a leap of faith. Food for thought: how come we rarely see disruptors buying established, disrupted players to gain fast market footprint?

Artificial Intelligence Is Going to Kill Patents

The patent system never got along well with software inventions. Software is too fluid for patenting, a system built long ago for inventions with inherent physical aspects. From the physical point of view, software is perceived as a big pile of bits organized in some manner. In recent years the patent system was bent to cope with software by adding artificial language to patent applications linking the invention to physical computing components, such as storage or a CPU, so that it could be approved by the patent office. But that is a patch, not evolution.

The Age of Algorithms

Fast forward to today, where AI has become the main innovation frontier: the world of intellectual property is about to be disrupted as well, and let me elaborate. Artificial intelligence, although a big buzzword, comes down in the details to algorithms. Algorithms are probably the most complicated form of software, as they are composed of base structures and functions dictated by the genre of the algorithm, such as neural networks, but they also include a data component. Whether it is the training data or the accumulated knowledge, the data eventually becomes part of the logic - a functional extension of the basic algorithm. That makes AI in its final form an even less comprehensible piece of software. Many times it is difficult to explain how a live algorithm works, even for its own developers. So technically speaking, patenting an algorithm is in magnitude more complicated. As a side effect of this complexity, there is a problem with the desire to publish an algorithm in the form of a patent. An algorithm is like a secret sauce, and no one wants to reveal their secret sauce to the public, since others can copy it quite easily without worrying about litigation. For the sake of example, let's assume someone copies the personalization algorithm of Facebook; since that algorithm works behind the scenes, it will be difficult up to impossible to prove that someone copied it. The observed results of an algorithm can be achieved in many different ways, and we are exposed only to the results of an algorithm, not to its implementation. The same goes for the concept of prior art: how can someone prove that no one has implemented that algorithm before? To summarize, algorithms are inherently difficult to patent, and no one wants to expose them via the patent system, as they are indefensible there.
So if we are going into a future where most of the innovation is in algorithms, then the value of patents will diminish dramatically as fewer patents are created. I personally believe we are headed into a highly proprietary world, where the race will be driven not by ownership of intellectual property but by the ability to create competitive intellectual property that works.

Some Of These Rules Can Be Bent, Others Can Be Broken

Cryptography is a serious topic: a technology based on mathematical foundations, posing an ever-growing challenge for attackers. On November 11th, 2016, Motherboard wrote a piece about the FBI's ability to break into suspects' locked phones. Contrary to the FBI's continuous complaints about "going dark" with strong encryption, the actual number of phones they were able to break into was quite high. Such a high success ratio of penetrating locked phones in some way doesn't make sense; it is not clear what was so special about the devices they could not break into. Logically, similar phone models have the same crypto algorithms, and if there was a way to break into one phone, how come they could not break into all of them? Maybe the FBI found an easier path to the locked phones than breaking encryption. Possibly they crafted a piece of code that exploits a vulnerability in the phone OS, maybe a zero-day vulnerability known only to them. Locked smartphones keep some parts of the operating system active even when they are merely turned on, and illegal access to those active areas, in the form of exploitation, can circumvent the encryption altogether. To be honest, I don't know what happened there, and it is all just speculation, though this story provides a glimpse into the other side, the attacker's point of view, and that is the topic of this post. What an easy life attackers have: they are not bound by the rules of the system they want to break into, and they need to seek only one unwatched hole. Defenders, who carry the burden of protecting the whole system, need to make sure every potential hole is covered while staying bound to the system's rules - an asymmetric burden that results in an unfair advantage for attackers.

The Path of Least Resistance

If attackers had an ideology and laws, the governing one would be "Walk the Path of Least Resistance" - it is reflected over and over again in their mentality and methods of operation. Wikipedia's explanation fits the hacker's state of mind perfectly:
The path of least resistance is the physical or metaphorical pathway that provides the least resistance to forward motion by a given object or entity, among a set of alternative paths.
In the cyber world there are two dominant roles, the defender and the attacker, and both deal with the exact same subject matter: the mere existence of an attack on a specific target. I used to think that the views of both sides would be exact opposites of each other, since the subject matter, the attack, is the same and the interests are inversely aligned, but that is not the case. For the sake of argument, I will dive into the domain of enterprise security, while the logic serves as a general principle applicable to other security domains. In the enterprise world, the enterprise security department, the defender, roughly does two things. First, it needs to know very well the architecture and assets of the system it protects: its structures and its interconnections with other systems as well as with the external world. Second, it needs to devise defense mechanisms and a strategy that, on the one hand, allow the system to continue functioning while, on the other hand, eliminating possible entry points and paths that attackers can abuse on their way in. As a side note, achieving this fine balance resembles the mathematical branch of constraint satisfaction problems. Now let's switch to the other point of view, the attacker's. The attacker only needs to find a single path into the enterprise in order to achieve its goal. No one knows the actual goal of the attacker, but it probably fits one of the following categories: theft, extortion, disruption or espionage. Within each category, the goals are very specific, so the attacker is laser-focused on a specific target, and the learning curve required for building an attack is limited and bounded by that specific interest. For example, the attacker does not need to care about the overall data-center network layout if it only wants the information about employees' salaries, a document which probably resides in the headquarters office.
Another big factor in favor of attackers is that some of the possible paths toward the target include the human factor. And humans, as we all know, have flaws, vulnerabilities if you like, and from the attacker's standpoint these weaknesses are proper means for achieving the goal. Of all the possible paths an attacker can theoretically select from, the ones with the highest success ratio and minimal effort are preferable - hence the path of least resistance.

The Most Favorite Path in The Enterprise World

Today the most popular path of least resistance is to infiltrate the enterprise by exploiting human weaknesses, usually in the form of minimal online trust building where the target employee is eventually made to activate a malicious piece of code, for example by opening an email attachment. The software stack employees have on their computers is quite standard in most organizations: mostly MS-Windows operating systems, the same document processing applications, and the highly popular web browsers. This stack is easily replicated in the attacker's environment and used for finding potential points of infiltration in the form of unpatched vulnerabilities. The easiest way to find a target vulnerability is to review the most recent vulnerabilities uncovered by others and reported as CVEs. There is a window of opportunity for attackers between the time the public is made aware of a new vulnerability and the time an organization actually patches the vulnerable software. Some statistics say that within many organizations this window of opportunity can stretch into months, as rolling out patches across an enterprise is painful and slow. Attackers that really want to fly below the radar and reach high success ratios search for zero-day vulnerabilities, or just buy them somewhere. Finding a zero-day is possible, as software has become overly complex, with many different technologies embedded in products, which eventually increases the chances for vulnerabilities to exist - the patient and persistent attacker will always find its zero-day. Once an attacker acquires that special exploit code, the easier part of the attack path comes into play: finding a person in the organization who will open the malicious document. This method of operation is in magnitude easier than learning in detail the organization's internal structures and finding vulnerabilities in proprietary systems such as routers and server applications, where access to the technology is not straightforward. In the recent WannaCry attack we witnessed an even easier path into an organization, using a weakness in enterprise computers with an open network vulnerability that can be exploited from the outside without human intervention. Going back to the case of the locked phones, it is way easier to find a vulnerability in the code of the operating system that runs on the phone than to break the crypto and decrypt the encrypted information.
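The window of opportunity described above can be sketched as a tiny calculation; the dates below are invented purely for illustration:

```python
from datetime import date

def exposure_window_days(disclosed: date, patched: date) -> int:
    """Days a host stays exploitable after a vulnerability becomes public."""
    return max((patched - disclosed).days, 0)

# Hypothetical fleet: the CVE is published with a fix in March, but the
# enterprise finishes its patch rollout only in June.
disclosed = date(2017, 3, 14)
patched = date(2017, 6, 2)
print(exposure_window_days(disclosed, patched))  # → 80
```

Eighty days in which a freely published exploit for a widely deployed software stack works against every unpatched machine, which is exactly why this path is so attractive.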

We Are All Vulnerable

Human vulnerabilities span beyond interpersonal weaknesses such as deceiving someone into opening a malicious attachment. They also exist in the products we design and build, especially in hardware and software whose complexity has surpassed human comprehension. Human weakness extends to the misconfiguration of systems, one of the easiest and most favored paths for cyber attackers. The world of insider threats is also often based on human weaknesses exploited and extorted by adversaries. Attackers have found their golden path of least resistance, and it always runs along the boundaries of human imperfection. The only way for defenders to handle such inherent weaknesses is to break the path of least resistance into parts and make the easier parts more difficult. That would shift attackers' method of operation and send them searching for other easy ways in, which will hopefully become harder overall with time.

Deep into the Rabbit Hole

Infiltrating an organization by inducing an employee to activate malicious code rests on two core weak points: the human factor, which is quite easy to exploit, and the ease of finding a technical vulnerability in the software used by the employee, as described earlier. Multiple defense approaches address the human factor, mostly revolving around training and education, and the expected improvement is linear and slow. Addressing the second, technical weakness is today one of the main lines of business in the world of cyber security: endpoint protection, and more precisely, preventing infiltration.

Tackling The Ease of Finding a Vulnerability

Vulnerability disclosure practices, which serve as the basis for many attacks within the window of opportunity, have been scrutinized for many years, and there is real progress towards the goal of achieving a fine balance between awareness and risk aversion. Still, we are not there yet, since there is no bulletproof way to isolate attackers from this public knowledge. It could be that the area of advanced threat-intelligence collaboration tools will evolve in that direction, though it is too early to say. It is a tricky matter to solve, as it is everybody's general problem and at the same time nobody's specific problem. The second challenge is the fact that if a vulnerability exists in application X, and there is malicious code that can exploit it, then the exploit will work anywhere application X is installed.
Different Proactive Defense Approaches

There are multiple general approaches towards preventing such an attack from taking place:

Looking for Something

This is the original paradigm of anti-viruses: searching data for the known digital signatures of malicious code. The inspection takes place when data flows through the network, when it is moved around in the memory of the computing device, and at rest, when it is persisted as a file (in case it is not a fully in-memory attack). Due to attackers' sophistication with malicious code obfuscation and polymorphism, where infinite variations of the digital signature of the same malicious code can be created, this approach has become less effective. The signature approach remains highly effective against old threats spreading across the Internet and viruses written by novice attackers. In the layered-defense thesis, signatures are the lowest defense line and serve as an initial filter for the noise.
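A minimal sketch of this signature paradigm, with an invented signature database (real engines use far richer formats, such as YARA rules):

```python
# Toy signature database: names and byte patterns are invented for illustration.
KNOWN_SIGNATURES = {
    "fake dropper stub": b"\xde\xad\xbe\xef",
    "fake macro payload": b"AutoOpenEvil",
}

def scan(data: bytes) -> list:
    """Return the names of all known signatures found in the data blob."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

sample = b"MZ\x90\x00 ... \xde\xad\xbe\xef ... payload"
print(scan(sample))  # → ['fake dropper stub']
```

Note how fragile the approach is: mutate a single byte of the embedded pattern (polymorphism) and `scan` returns nothing, which is exactly the weakness obfuscation exploits.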
Looking at Something
Here, instead of looking for the digital fingerprint of a specific virus, the search is for behavioral patterns of malicious code: for example, the unique sequence of system APIs accessed, the functions called, and the execution frequencies of different parts of the virus. This category, invented quite a long time ago, is enjoying a renaissance thanks to the advanced pattern-recognition capabilities of artificial intelligence. The downside of AI in this context is inherent in the way AI works, and that is fuzziness. Fuzzy detection leads to false alarms, a phenomenon that compounds the already growing shortage of analysts needed to decide which alarms are real and which are not. The false-alarm rates I hear about today are still the majority, in the high double digits, and some vendors work around this by providing full SIEM management behind the scenes that includes filtering false alarms manually.

Another weakness of this approach is that attackers have evolved to mutate the behavior of the attack, creating variations on the virus's logic while keeping the end result the same, variations that go unnoticed by the pattern-recognition mechanism; there is a field called adversarial AI which covers this line of thinking. The most serious drawback is that these mechanisms are blind to in-memory malicious activity, an inherent blindness to a big chunk of the exploitation logic that is and will always stay in memory. This blind spot has been identified by attackers and is being abused, for instance with fileless attacks. This analysis reflects the current state of AI as integrated and commercialized in endpoint threat detection; AI has made major advances recently that have not yet been applied in this cyber domain, developments that could create a totally different impact.
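A minimal way to picture behavioral matching is n-grams over a process's API call trace. The trigram patterns below are illustrative stand-ins for a learned model (the first is the classic remote-injection sequence on Windows); a real product would score fuzzily rather than match exactly, which is precisely where the false alarms discussed above come from.

```python
# Illustrative sketch, not a product: flag a process whose system-API
# call trace contains a known-suspicious trigram of consecutive calls.
SUSPICIOUS_TRIGRAMS = {
    # Classic code-injection sequence on Windows:
    ("VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"),
}

def looks_malicious(api_trace: list) -> bool:
    """Return True if any three consecutive calls match a suspicious pattern."""
    trigrams = zip(api_trace, api_trace[1:], api_trace[2:])
    return any(t in SUSPICIOUS_TRIGRAMS for t in trigrams)
```

Note how an attacker mutating behavior, say by inserting a harmless call between `WriteProcessMemory` and `CreateRemoteThread`, slips past this exact-match version; that is the adversarial-AI cat-and-mouse game in miniature.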
There is a rising concept in the world of cyber security that aims to tackle the ease of learning the target environment and creating exploits that work on any similar system. The concept is called moving target defense, and it rests on the premise that if the inner workings of a system are known only to its legitimate users, any attack attempt by outsiders will be thwarted. It is essentially an encapsulation concept, similar to the one in the object-oriented programming world where external code cannot access the internals of a module without permission. In cyber security the implementation differs depending on the technical domain, but the same information-hiding principle is preserved. This emerging category is highly promising toward the goal of changing the cyber power balance by taking attackers off the current path of least resistance. Moving target defense innovation exists in different domains of cyber security. In endpoint protection, it strikes at the heart of the attackers' assumption that the internal structures of applications and the OS stay the same, so that their exploit code will work perfectly on the target. The concept is quite simple to understand (and very challenging to implement): continuously move and change the internal structures of the system so that, on one hand, legitimate code keeps functioning as designed, while on the other hand, malicious code that relies on assumptions about the internal structure fails immediately. This defense paradigm seems highly durable, as it is agnostic to the type of attack.
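A toy model makes the idea concrete. Below, a pretend "system" places its functions at randomized slots on every boot (the names and layout are entirely made up for illustration). Legitimate code resolves locations through the system's own lookup table and always works; an exploit that hardcodes an offset learned from one machine breaks on the next boot.

```python
import random

def boot_system(seed=None):
    """Simulate booting a system whose internal layout is re-randomized.

    Returns (lookup_table, memory): legitimate code uses the table,
    while an attacker only sees raw memory slots.
    """
    rng = random.Random(seed)
    names = ["open", "read", "write", "exec"]
    slots = list(range(len(names)))
    rng.shuffle(slots)                      # the "moving" part
    table = dict(zip(names, slots))
    memory = [None] * len(names)
    for name, slot in table.items():
        memory[slot] = name
    return table, memory

table, memory = boot_system()
legit = memory[table["exec"]]   # legitimate lookup: always finds "exec"
attacker_guess = memory[3]      # hardcoded offset: wrong on most layouts
```

ASLR is the most familiar real-world relative of this idea; the paragraph above describes pushing the same randomization much deeper into application and OS internals.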


The focus of the security industry should be on devising mechanisms that make the current popular path of least resistance not worthwhile, forcing attackers to waste time and energy searching for a new one.

Searching Under the Flashlight of the Recent WannaCry Attack

Some random thoughts about the WannaCry attack


The propagation of the WannaCry attack was massive, mostly because it infected computers via SMB1, an old Windows file-sharing network protocol. Some security experts complained that ransomware has already been massive for two years and that this event is just one big hype wave, but I think there is a difference here, and it is the magnitude of propagation. There is a big difference between an attack whose distribution relies solely on people unintentionally clicking a malicious link or document and this attack's propagation pattern. This is the first attack I can remember that propagates both across the Internet and inside organizations using the same single vulnerability. A very efficient propagation scheme, apparently.


The attack unveiled the explosive number of computers globally that are outdated and unpatched. Some are outdated because patches did not exist; for example, Windows XP no longer has active update support. The rest of the victims were not up to date with the latest patches because, truth be told, it is highly cumbersome to keep computers constantly updated. Keeping everything patched in an organization eventually reduces productivity, as there are many disruptions to work; for instance, many applications running on an old system stop working when the underlying operating system is updated. I heard of a large organization that was hurt deeply by the attack, and not because the ransomware hit them: they had to stop working for a full day across the organization because the security updates delivered by the IT department ironically made all the computers unusable.

Another thing to take into account is the magnitude of a vulnerability, which correlates tightly with its prevalence and the ease of accessing it. The EternalBlue vulnerability has massive magnitude, as it is apparently highly prevalent. It is the first time, I think, that an exploit for a vulnerability feels like a weapon. Maybe it is time to create a dynamic risk ranking for vulnerabilities beyond the rigid CVE classification. Vulnerabilities are by definition software bugs, and there are different classes of software: operating systems, and within that category drivers, the kernel, and user-mode processes; within the kernel itself there are different areas such as the networking stack, display drivers, and interprocess mechanisms. Besides operating systems there are user applications, as well as user services, which are pieces of software that provide services in the background to user applications.
A vulnerability can reside in any of those areas, and fixing it, or protecting against its exploitation, carries a whole different magnitude of complexity in each. For example, kernel vulnerabilities are the hardest to fix compared with vulnerabilities in user applications. Correspondingly, their impact once exploited is measurably more severe in terms of what an attacker can do post-exploitation, due to the level of freedom that software class allows. The massive impact of WannaCry was not due to the sophistication of its ransomware component; it was due to the SMB1 vulnerability, which turned out to be highly prevalent. Actually, the ransomware itself was quite naive in the way it operated. In a funny turn of events, many advanced defense products did not catch the attack, since they assume some level of sophistication, while plain signature-based anti-viruses were quite effective. This case reinforces the layered-defense thesis: signatures are here to stay and should be layered with more advanced defense tools. As for the sheer luck we had with this naive ransomware, just imagine what would have happened if the payload had been at least as sophisticated as other advanced attacks we see nowadays. It could have been devastating, and unfortunately we are not out of danger yet, as it can still happen; this attack was a lesson not only for defenders but also for attackers.
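The dynamic risk ranking floated above could start very simply: weight a static severity score (such as a CVSS base score) by how prevalent the vulnerable software is and how reachable the flaw is. The weighting formula and the EternalBlue-like numbers below are my own hypothetical illustration, not an established metric.

```python
def dynamic_risk(cvss_base: float, prevalence: float, exposure: float) -> float:
    """Hypothetical dynamic risk score (illustration only).

    cvss_base:  0-10 static severity score.
    prevalence: 0-1, estimated fraction of the install base that is vulnerable.
    exposure:   0-1, reachability (1.0 for a network-facing service like SMB,
                lower for a locally exploitable bug).
    """
    return round(cvss_base * (0.5 + 0.5 * prevalence) * exposure, 2)

# Same static severity, very different dynamic risk:
worm_like = dynamic_risk(8.1, 0.9, 1.0)    # widespread, network-facing
obscure   = dynamic_risk(8.1, 0.05, 0.2)   # rare, local-only
```

The point of the sketch is only that prevalence and exposure should move a vulnerability's rank over time, which a static classification cannot capture.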


Very quickly, law enforcement authorities found the target bitcoin accounts used for collecting the ransom and started watching for anyone withdrawing the money. The amount collected was quite low even though the distribution was massive; some attribute this to the amateurish ransomware backend, which, as I read, in some cases would not even decrypt the files after payment. The successful distribution also did something the attackers did not take into account: it gave the campaign very high visibility. It is quite obvious that such a mass-scale attack would wake up all law enforcement authorities to track the money, which makes withdrawing it nearly impossible.

Final Thoughts

Something about this attack does not make sense: on one hand, the distribution was highly successful, on a scale not seen before for such attacks, while at the same time the payload, the ransomware itself, was naive, the monetization scheme was not planned properly, and even the backend for collecting money and decrypting user files was unstable. So either it was a demonstration of power rather than a real ransomware campaign, like launching a ballistic missile toward the ocean, or simply the work of an amateur attacker. Another thought is that I do not yet have a solid recommendation on how to be better prepared for the next time. There is a multitude of open vulnerabilities out there, some with patches available and some without, and even if you patch like crazy it still will not provide a full guarantee. Of course, my baseline recommendation is to use advanced prevention security products and to patch automatically. A final thought is that a discussion should start about regulatory intervention in the private sector's level of protection. I can really see the effectiveness of mandatory security provisions required from organizations, similar to what is done in the accounting world, and very similar to getting vaccinated. The private sector, and especially small and medium-size businesses, is currently highly vulnerable.
