It's not hard to understand the concept of proactive cyber defense: acting in anticipation of an attack against a computer or network. The goal is to get in front of attacks by evading, outwitting, or neutralizing them early, instead of waiting for the damage to start, as reactive cyber defenses do.

It's also not hard to understand the benefits of being proactive: preventing the negative effects of cyber attacks instead of trying to minimize the damage. The only thing that is hard to understand is why every company doesn't already practice proactive cyber defense.

To answer that question, we need to dive into the history of cybersecurity. It offers a powerful lesson about what can happen when we're blind to the flaws in our own methodology. It also helps explain why catastrophic cyber attacks have become so common lately. Most importantly, a look at the recent past tells us a lot about the path forward for cybersecurity, and it looks drastically different than the path we're currently on.


Historically, cybersecurity wasn't proactive, per se, but it was close. As soon as experts observed a new attack, they would add it to a registry of known threats. Antivirus products would then draw on that registry to identify and block incoming attacks at the perimeter of enterprise IT. This approach was reliable because attacks were infrequent and carried an obvious calling card. Brand-new attacks could still get through, but their efficacy was short-lived, and antivirus products stopped the vast majority of attacks proactively.

This approach worked reasonably well for years, until around a decade ago. Consider that in 2009, U.S. cybersecurity spending totaled $27.4 billion, but by 2018, that number had increased to $66 billion and continues to skyrocket.

What happened in the intervening years? Tools enabling polymorphic attacks began to proliferate, exploding the number of possible file signatures. Vendors could not cope with this explosion of new techniques, which in turn led to a rise in fileless attacks and in-memory exploits.

Unlike traditional malware, which arrives as a file with a distinctive "signature" that antivirus products can detect, fileless and in-memory attacks use ever-changing signatures and behaviors (disguises, essentially) to bypass gatekeepers without making their presence known. Then they work inside software, operating systems, or protocols to cause harm for as long as they go unnoticed.

Conceding that they couldn't stop these attacks on the outside, security professionals shifted their focus to finding threats hiding on the inside: a fundamentally reactive strategy, and one with spotty results. Modern attacks are particularly hard to spot or stop because they leave few fingerprints within the vast amount of data that modern security solutions collect. It's akin to finding a needle in a haystack.

In response to this challenge, companies have spent billions on detection and mitigation over the last decade, investing in behavioral and heuristic analysis products that promise to uncover the tracks of fileless and in-memory threats. In reality, detection and response suffers from a massive noise-to-signal ratio, making it difficult to quickly identify the footprints of fileless attacks.

The results of this approach speak for themselves: cybersecurity spending has more than doubled, yet the cost of cybercrime is projected to grow from $3 trillion in 2015 to $6 trillion by 2021. Furthermore, behavioral antivirus software delivers frequent false alarms that distract responders from what really matters. Undeniably, the reactive approach to cybersecurity that still dominates today has been an utter failure.

The future of cybersecurity is about updating the proactive cyber defenses of the past for the advanced threats of today and tomorrow.


Companies have settled for a reactive approach for so long (despite getting more detection alarms daily than a SOC could address in a month) because they assumed it was the only option. The flaws in this strategy were fairly obvious; the alternatives weren't.

But that?s changing as proactive cyber defense once again becomes the dominant paradigm. Instead of seeing their perimeter as inevitably vulnerable and porous, or seeing detection and response solutions as infallible, security-savvy companies are starting to move their emphasis earlier.

They're closing whatever gaps exist in the security architecture through hardening, credential control, and security training. These companies believe proactive cyber defense isn't just possible; it's a priority in an era when any successful attack can rock a company to its core.

The notion that companies can stay in front of hackers, outrunning their attacks instead of absorbing the blow, challenges the narrative around cybersecurity. But with the right policies, technologies, and philosophies in place, companies can consistently prevent attacks, including fileless and in-memory variants. Crucial elements of proactive cyber defenses include:

  • Applying patches promptly to close security gaps
  • Implementing moving target defense to neutralize new and emerging threats
  • Hardening endpoints against known attacks (e.g., a classic antivirus strategy)

Taking a comprehensive approach creates an unbroken defensive perimeter around a company. But, frankly, any attempt at proactive cyber defense foils many attacks, because attackers aren't used to encountering resistance before accomplishing their objective. Years of reactive cyber defenses have made hackers fat, happy, and complacent. By finally removing the obvious weaknesses and gaping holes in a security perimeter, proactive cyber defense confronts hackers on the front lines and short-circuits their attacks before they have any negative consequences.

It's time to reject the defeatist attitude embodied by the detection and response strategy. And it's time to stop letting hackers dictate the terms of the conflict. Proactive cyber defenses make companies a formidable adversary instead of an easy target.

This post originally appeared at the Morphisec Moving Target Defense Blog

What is Cloud Workload Protection?

Cloud usage is increasing rapidly. Analysts forecast growth of 17 percent for the worldwide public cloud services market in 2020 alone. This proliferation comes on top of already widespread cloud adoption. In a recent report by Flexera, over 83 percent of companies described themselves as intermediate to heavy users of cloud platforms, while 93 percent report having a multi-cloud strategy.

With a growing number of companies planning on doing more in diverse cloud environments, cloud workloads are becoming more common. Over 50 percent of workloads already run in the cloud. This figure is predicted to increase by a further 10 percent within the next 12 months.

As users shift their on-premises workloads into the cloud and transform legacy applications into cloud-native technologies, they face elevated cybersecurity risks on top of existing ones, regardless of how safe the cloud appears. According to Oracle, 75 percent of users feel that the cloud is more secure than on-premises systems.

However, not all users realize that protecting cloud operations requires a different approach than protecting standard physical and virtual endpoints. To ensure strong cloud workload protection, companies first need to understand what cloud workloads are and what the threat landscape facing them looks like.


A cloud workload is the amount of work running on a cloud instance at any particular moment. Whether it’s one task or thousands of interactions taking place simultaneously, a cloud workload is the sum of the actions being performed through a cloud instance at once. Ultimately, a workload can be your API server, data processing, messaging handling code, and so on.

A cloud instance can be any form of code execution, whether packaged as a container, serverless function, or executable running on virtual machines. This is true regardless of whether a business is using software as a service (SaaS), infrastructure as a service (IaaS), or platform as a service (PaaS).

These services all create cloud workloads that run on servers dedicated to hosting the application layer of whatever applications are in use. While the servers that power public clouds can run anything, dedicating them to a particular purpose helps them run more efficiently. As companies use cloud-based services more frequently, they create more cloud workloads and increase their security risk.

Cloud workloads have become more prominent in recent years as a result of digital transformation, a need to modernize legacy architecture, and the desire to integrate more concretely with third-party services in a frictionless manner. The push for modernizing legacy architectures is especially acute, as a shift toward the cloud enables companies to adopt platforms that streamline everything from solution development to accessing critical customer data.


Cloud workloads present distinct security risks due to the major architectural and conceptual shift from perimeter-protected applications, which used to reside in the on-premises data center, to a diverse and highly connected architecture that is mostly out of the company's control.

A significant cybersecurity threat to cloud workloads comes from misconfiguration. Improperly set up access management systems and weak data transfer protocols increase cloud workload vulnerability. Misconfiguration, often the result of rushed cloud migrations, is the cause of nearly 60 percent of all cloud data breaches according to a recent state of the cloud report from Divvy.

Misconfigurations happen frequently because the code of cloud applications changes, which requires changes to permissions. These constant changes can lead to configuration fatigue. As a result, many developers relax permissions and ultimately leave a new hole in the attack surface.
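To make the relaxed-permissions problem concrete, here is a minimal sketch of a pre-deployment policy audit. The policy format and the risky-action list below are hypothetical, loosely modeled on cloud IAM policy documents; real environments should rely on the provider's own policy-analysis tooling.

```python
# Sketch of a configuration audit that flags obviously over-broad
# "Allow" statements before a workload ships. Policy shape is invented
# for illustration (loosely IAM-like), not a real provider schema.

RISKY_ACTIONS = {"*", "s3:*", "iam:*"}  # wildcard grants worth flagging

def audit_policy(policy: dict) -> list[str]:
    """Return a list of findings for over-permissive statements."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Principal") == "*":
            findings.append("statement allows access to any principal")
        for action in actions:
            if action in RISKY_ACTIONS:
                findings.append(f"statement grants over-broad action {action!r}")
    return findings

policy = {
    "Statement": [
        {"Effect": "Allow", "Principal": "*", "Action": "s3:*"},
        {"Effect": "Allow", "Action": "s3:GetObject"},
    ]
}
for finding in audit_policy(policy):
    print("FINDING:", finding)
```

Running a check like this on every permissions change, rather than relying on developers to remember, is one way to counter the configuration fatigue described above.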

Another cloud security weak point is access. Threat actors targeting cloud workloads focus on trying to steal cloud access credentials through phishing attacks. In a recent Oracle study, 59 percent of respondents reported having had privileged cloud credentials compromised in a phishing attack.

Cloud workloads are exposed publicly so they can be accessed via an internet connection. As a result of this exposure, stealing credentials can mean "game over" in an attack. The exposed nature of cloud workloads stands in direct contrast to the difficulty of gaining persistence in an on-premises environment and then moving laterally inside an on-premises perimeter. All of this makes strictly guarding access to cloud workloads essential.

The key here is that you don’t want malicious code operating inside the actual cloud workload runtime environment, which is where the bad things can happen. Malicious code can enter your workload via three vectors:

  • Supply chain attacks, in which malicious code is planted in one of the thousands of packages used to build your cloud workload stack, hidden in obfuscated form and waiting for the right moment to decrypt its payload and run.
  • Legitimate interfaces that your applications open to the public or to third parties, such as web applications and API servers. Vulnerabilities, whether in your own code or in any of the software packages participating in your application, provide attackers an easy path to exploit and infiltrate your operations.
  • Data handling, where many workloads eventually do data processing, including handling incoming events or processing images uploaded by your customers. This data, whose quality is usually beyond your control, is an easy vehicle for embedding malicious code that exploits the code meant to process it, again giving the attacker access to the heart of your operation.


Securing cloud workloads has become a shared burden between the customer and the cloud provider. In the “shared responsibility” model used by most public cloud providers, both parties split responsibility between “security in the cloud” and “security of the cloud.” The cloud provider (e.g., AWS or Azure) takes responsibility for the security of the cloud (hardware, software, networking, etc.), while the customer remains responsible for securing what happens in the cloud platforms they use.

These boundaries often remain unclear to customers, who can easily assume the cloud provider will protect everything. The other extreme is installing measures the cloud provider already has in place, turning those security tools into redundant, cumbersome defense layers.

Moreover, client-grade solutions like signature-based AV and even "next-gen" antivirus are inadequate when it comes to providing customers with in-cloud protection. Beyond not being designed to protect against the threats endemic to cloud workloads, AV and NGAV agents are too cumbersome for the cloud.

Running most AV agents in the cloud results in higher usage costs on the cloud service; they might be an annoyance on employee computers, but on a cloud workload this usage is unbearable. AV and NGAV tools were quite simply built for a different use case and are just not suited for the job of securing cloud workloads.

A major paradigm change in cloud workload defense is that response operations take a different form. Unlike an employee workstation, which may require containment of files or restoration of the registry, these actions are less relevant on cloud workloads. Containers, for example, deal with immutable storage, so persistence is less of a threat and files are not constantly transferred and changed. It is often easier to shut down a workload instance and restart it with new hardening rules than to try to fix it on the fly.

EPPs also lack exploit prevention and memory protection capabilities, which are critical to protecting against fileless threats and in-memory attacks. In-memory techniques are central to how most malicious code targeting cloud workloads operates. Protecting cloud workloads also requires heightened network security, a necessity for limiting cloud workloads' ability to communicate with other resources.

Further, the common approach of detection and response products such as EDR, MDR, and XDR is to chase fileless attacks and constantly look for signs of infiltration. This approach means defenders are always too late and must contend with blind spots around evasive activities in memory, which leaves attackers an open playground at runtime. Threat actors can do substantial harm before getting on the radar of most detection-centric solutions, and well before any incident response can occur from an internal security team or outsourced service provider.

The biggest challenge facing customers, however, is the misleading marketing around the perfect cloud workload protection solution. Ultimately, many of these products are the same as classic endpoint protection platforms, with the added bonus of nice packaging and marketing slogans about how effective they are at protecting the cloud. In this noisy environment, CISOs and other tech leaders often struggle to understand what they're getting under the hood and to decide for themselves whether it really fits their challenges.


There are a few foundational capabilities required for effective cloud workload security. These capabilities, which should be present in any cloud workload protection strategy, include hardening to achieve zero trust in a pre-production environment, application control and whitelisting for zero trust runtime, exploit prevention and memory protection, and system integrity assurance.


Key to avoiding configuration-related vulnerabilities is removing unnecessary components from cloud systems and configuring workloads according to provider, not third-party, guidelines. Regular system patching also helps you stay ahead of identified vulnerabilities.

These activities can be accomplished with many open source tools, as well as third-party products that specialize in finding vulnerabilities in your code before it launches into production. Robust pre-production code and stack inspection workflows can reduce your attack surface dramatically and remove the convenience attackers enjoy in picking up an old vulnerability and using it to exploit systems.
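A toy sketch of the pre-production inspection idea: compare pinned dependency versions against an advisory feed before deployment. The advisory data and package names here are invented for illustration; real pipelines would use an actual scanner (open source or commercial) backed by a live vulnerability database.

```python
# Toy pre-production dependency check: flag pinned packages whose
# versions appear in a (made-up) advisory feed. Illustrative only.

ADVISORIES = {  # package -> versions with known issues (hypothetical data)
    "examplelib": {"1.0.0", "1.0.1"},
    "othertool": {"2.3.0"},
}

def parse_requirements(text: str) -> dict[str, str]:
    """Parse 'name==version' lines into a {name: version} mapping."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins[name.strip().lower()] = version.strip()
    return pins

def vulnerable_pins(requirements: str) -> list[tuple[str, str]]:
    """Return (package, version) pairs that match a known advisory."""
    return [
        (name, version)
        for name, version in parse_requirements(requirements).items()
        if version in ADVISORIES.get(name, set())
    ]

reqs = """
examplelib==1.0.1
othertool==2.4.0
safepkg==0.9
"""
print(vulnerable_pins(reqs))
```

Wiring a check like this into the build so that a flagged dependency fails the pipeline is what turns scanning into the "virtual perimeter" the next paragraph describes.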

Hardening is one of the oldest security practices. Despite this, it has withstood the test of time and remains highly effective at reducing your attack surface and creating a virtual perimeter around your applications. Depending on the workload's functionality, hardening can take the form of segmenting your applications at the network level, applying zero trust to user IDs and permissions to reduce damage, and limiting application access to unneeded operating system services such as file persistence.

Hardening is a tedious task when it comes to highly volatile code that changes frequently as your application developers innovate. Regardless, organized hardening practices can substantially reduce the risk of a successful attack.

Part of this involves isolating cloud workloads through proper network firewalling, which removes unwanted communication capabilities between them and other resources. Microsegmentation within cloud networks further reduces both the potential attack surface and the visibility of further network assets.


By allowing only authorized applications to run in a workload, application control and whitelisting lock out malicious applications. Application control lets the customer set clear rules about what can and can't run within a cloud workload, making it fundamental to cloud workload security.

This measure tackles the attack vector of malicious programs looking to run inside your environment quite effectively. However, a wide gap remains open during the runtime portion of the application lifecycle, where things that take place during the execution of a program, such as memory exploitation, evade the whitelisting mechanism. The runtime exposure is also the longest in the workload lifecycle: it opens as soon as the workload has loaded and remains open until the workload shuts down.
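The whitelisting decision itself can be sketched as a hash-based allowlist check: only binaries whose digest appears on an approved list may execute. Real platforms enforce this in the OS or kernel; this is a minimal illustration of the decision logic only, with made-up binary contents.

```python
# Minimal sketch of hash-based application control: a binary may run
# only if its SHA-256 digest is on the allowlist. Illustrative logic,
# not an enforcement mechanism.
import hashlib

ALLOWLIST: set[str] = set()  # digests of approved binaries

def approve(binary: bytes) -> None:
    """Add a binary's digest to the allowlist (done at build/deploy time)."""
    ALLOWLIST.add(hashlib.sha256(binary).hexdigest())

def may_execute(binary: bytes) -> bool:
    """Execution-time check: is this exact binary approved?"""
    return hashlib.sha256(binary).hexdigest() in ALLOWLIST

trusted = b"\x7fELF...approved build artifact"   # placeholder content
unknown = b"\x7fELF...unsigned dropper"          # placeholder content
approve(trusted)
print(may_execute(trusted))   # True
print(may_execute(unknown))   # False
```

Note how the check says nothing about what the approved binary does once it is running, which is exactly the runtime gap described above.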


Given that the most advanced threats to cloud workloads run entirely in memory, securing application memory against attack is crucial for long-term cloud workload security. Runtime is where your core weakness lies and, regardless of how malicious code found its way into the workload, where the last line of defense stands. The central defense, then, is the ability to prevent malicious code from running in the complex arena of in-memory code execution.

Exploit prevention and memory protection solutions can thus provide broad protection against attacks, without the overhead of signature-based antivirus solutions or even next-gen AV platforms. They can also be used as mitigating controls when patches are not available.

One of the most promising approaches for achieving zero trust at runtime is moving target defense, which blocks the execution of any code not deemed trusted. Trust is established at load time, following a whitelisting check, creating a chain of trust.

Technically, moving target defense achieves zero trust at runtime by randomizing the memory and OS interfaces at load time using a temporary randomization key, available only to the whitelisted code residing in the application executable, that is deleted once the randomization is done. This allows the system to deem that code trusted, creating a zero trust memory environment in which any other code that invades the runtime environment during its execution lifecycle is untrusted by definition and is deflected.
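The load-time randomization idea can be illustrated with a toy model: the loader shuffles an "interface table" under a one-time key, hands the remapped names only to trusted code, then discards the key. Code injected later only knows the original, well-known names, so its calls fail. This is a conceptual sketch, not how moving target defense is implemented in practice (real implementations randomize memory layout and OS interfaces, not dictionary keys).

```python
# Toy illustration of load-time interface randomization with a
# one-time key. Conceptual only; names and structure are invented.
import secrets

def load_with_randomization(interfaces: dict):
    """Randomize interface names; the remap dict plays the role of the key."""
    remap = {name: f"{name}_{secrets.token_hex(4)}" for name in interfaces}
    randomized = {remap[name]: fn for name, fn in interfaces.items()}
    return randomized, remap

system = {"open_file": lambda p: f"opened {p}"}
table, key = load_with_randomization(system)

# Trusted code was relocated at load time and received the remapped name:
print(table[key["open_file"]]("report.txt"))  # opened report.txt

del key  # the one-time randomization key is discarded after loading

# Injected code only knows the original name, so its lookup fails:
print("open_file" in table)  # False
```

The point of the sketch is the asymmetry: code present at the whitelisted load step gets the remapped environment, while anything arriving afterward faces a landscape it cannot navigate.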

An additional byproduct of this approach is the ability to accurately identify the threat that was deflected, and then report it to the security team. By doing this, the moving target defense platform allows the security team to achieve situational awareness and enact policy changes that can respond to the threat origins and risk level.

One of the challenges for security architects aiming to protect runtime is the false promise of security vendors evangelizing detection. In practice, runtime detection only takes effect after some stages of the attack have already taken place. That is inherent to how it works: it needs to observe a certain number of attack steps in order to identify the attack.

This leaves you in an unknown security posture. Meanwhile, the amount of telemetry collected by these agents is exploding, poses privacy issues once it is sent to your XDR/MDR/MSSP outsourcer, and, most of all, is ineffective: many things can take place below the radar of telemetry.

Advanced attackers know how to deceive telemetry-based solutions by deleting their traces or injecting false telemetry, making it easy to cover their tracks. Unlike the world of endpoint protection, where you have 30 days on average until you uncover an attack, cloud workload customers don't have that luxury. A second too late can mean the difference between a breach and a near miss, and the assets a workload exposes are the ones carrying the highest risk.


System integrity assurance checks the integrity of the physical cloud system both pre- and post-boot, monitoring the integrity of workloads while they run as well as the integrity of the system itself before it boots.

These foundational capabilities can secure cloud workloads against the worst cyber threats. However, the server-based nature of cloud workloads also means that client-grade protection is usually ill-suited to protect cloud technologies against cyber attacks. Client-grade technology is also too heavy for resource-constrained cloud workloads and often fails to protect cloud workloads from zero-day attacks and fileless malware.

This is important because the consequences of a new zero-day attack breaking through to a server, whether through lateral movement or by infecting the server directly, are more damaging than if the same zero-day infected a desktop or laptop. It is due to this high potential for harm that exploit prevention, which often doesn't come packaged with client-grade technology, is crucial for the servers that underpin cloud workloads.

By applying zero trust to your workload runtime with technologies such as moving target defense, you're able to deploy an adequate protection solution for cloud workloads without straining cloud resource use. Moving target defense prevents memory-based attacks by randomizing the OS kernels, libraries, and applications involved in cloud workloads. In this way, fileless and zero-day attacks are stopped before they damage either cloud workloads or their servers.


As more companies use the cloud more often, cloud workload protection is becoming a crucial investment area for cybersecurity professionals. While traditional AV solutions aren't adequate for cloud environments, a zero trust strategy focused on zero trust runtime, enabled by moving target defense, can provide a capable cloud workload security solution. With the power of moving target defense to secure application memory and prevent malicious code execution, users can be confident in their cloud security posture.

The threat model behind cloud workloads is new and totally different from the one we were used to when protecting assets on-premises. Meeting it requires a change from the “magical” detection and response paradigm to a zero-trust-first strategy: hardening your pre-production and runtime environments, achieving a solid security posture, and refining it iteratively based on changing threats.

This post originally appeared at the Morphisec Moving Target Defense Blog

Solving Data Privacy Once and For All

The way online services are set up today implies that the only technical means of providing a more personalized experience to customers is to collect as much personal data as possible on a server and then feed it to some machine that offers recommendations. Personalization is convenient, and we all want convenience, even at the price of compromising our personal lives. This line of thought started with Amazon, Google, and Facebook, and today it seems that every other online service operates under the same modus operandi. It is an irrational situation for consumer privacy: hundreds of copies of our most intimate online and demographic data sit in the hands of thousands of employees and systems at small and large companies.

The fact that our data is collected and stored somewhere out of our hands is the root of all evil, and exactly where the privacy sagas of the recent decade started. On a broader view, the world is stuck in a stalemate against this new paradigm, where legal and government institutions do not even know how to approach the issue beyond offering arbitrary fines, which are hard to enforce. We all march toward a future where more and more sensitive data is collected about us and potentially abused in ways we can't imagine.

The question is whether, from a technical point of view, this modus operandi of collecting more and more data centrally to personalize experiences is the only way to go.

To understand our options, we need a little background on how personalization algorithms work. Let's say that when we go to amazon.com, we want to see a list of products that fit our personal preferences. Amazon has millions of products, and it does not make sense to serve customers an alphabetical list of everything available. Our personal preferences naturally reside inside our brains, and unless we communicate them explicitly, no one can know them. One way to create a personalized product list is for the user to tell Amazon explicitly which product categories are interesting, and that is, in a way, how the personalization wave started. That approach didn't stand the test of time, as our preferences change entirely across time and context in our lives. Furthermore, reviewing a list of product categories and specifying what is interesting and what is not is a tedious task no one wants to go through. The convenience cost of explicitly stating your preferences is higher than the convenience value of getting a personalized product list. Add to that the fact that every online service today is interested in offering personalization, and I bet it would take 20 percent of our digital time to fill in such forms.

Once we got past the paradigm of explicitly specifying preferences, companies started to understand that they could extract these preferences implicitly from the way we interact with the online service. For example, suppose you saw a book on Amazon, clicked into the book's page, and read reviews for five minutes. These online actions imply that the book holds something interesting for you, something that hints at your preferences. The more behavior recorded on the site, the more accurately a rich profile of your changing preferences can be built over time. Today, recommendation algorithms collect all the interactions on the website and map them to the list of items you interacted with. Every item on the list has a comprehensive descriptive profile: a specific business management book, for example, has metadata covering the subjects the book deals with, the name of the author, the text inside the book, and a list of other customers who bought that book. The profile of each item you showed interest in is compiled into your user profile, over time turning it into an accurate and rich depiction of your preferences. That, in simple terms, is how the personalization process works; in reality it is fine-tuned to be more precise, with improvements such as comparing a user profile with other similar-minded customers to cross-recommend purchased items, or updating a user profile on Netflix based on the actual scenes you watch in a movie, each wrapped with accompanying metadata. It is an endless game of creating richer user profiles to more accurately optimize your experience and increase the chances of you doing what they want you to do.
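The implicit-profile process described above can be sketched in a few lines: each viewed item contributes its metadata tags to the user profile, and candidate items are ranked by how well their tags match that profile. The catalog, item names, and tags below are invented for illustration; production recommenders use far richer features and models.

```python
# Simplified sketch of implicit profiling: viewed items feed their
# metadata tags into a profile; recommendations rank unseen items by
# tag overlap with that profile. All data here is hypothetical.
from collections import Counter

CATALOG = {  # item -> metadata tags (made up)
    "mgmt_book":  {"business", "management", "leadership"},
    "stats_book": {"math", "statistics", "data"},
    "ml_book":    {"data", "statistics", "programming"},
    "novel":      {"fiction", "drama"},
}

def build_profile(viewed: list[str]) -> Counter:
    """Accumulate metadata tags from every item the user interacted with."""
    profile = Counter()
    for item in viewed:
        profile.update(CATALOG[item])
    return profile

def recommend(profile: Counter, exclude: set[str]) -> list[str]:
    """Rank unseen items by how many of their tags match the profile."""
    scores = {
        item: sum(profile[tag] for tag in tags)
        for item, tags in CATALOG.items() if item not in exclude
    }
    return sorted(scores, key=scores.get, reverse=True)

viewed = ["stats_book"]
profile = build_profile(viewed)
print(recommend(profile, exclude=set(viewed)))  # ml_book ranks first
```

Every additional interaction sharpens the profile, which is exactly the "endless game" the paragraph above describes, and why the profile is so valuable to whoever holds it.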

This modus operandi is not a technological necessity, and online services can offer personalization in a different way, one that is respectful and privacy-preserving of your data. The main thing to keep in mind while considering alternative approaches is that in the digital realm, once you give up a single copy of your data to a third party, you have lost the battle. One way forward is to record and store all your data locally on your devices, and to offer online services the opportunity to interact with your data, but in a respectful manner. It is, in a way, a reversed personalization process, where the online service interacts with your local personal profile temporarily whenever a personalized decision needs to be made. The online service asks for the relevant part of your profile to which you allowed access and uses that snapshot to create the personalized experience. The online service would be obligated to treat that snapshot of the personal profile as temporary and anonymous, so it would not be associated with you beyond the specific browsing session; a concept that is much easier to enforce from a regulatory point of view. There are many ways such a scheme can work, including doing the actual heavyweight personalization against the online service's catalog locally on the user device, in a secure manner, to create more accurate personalized experiences without sending anything to the company's servers. In a world of 5G, where data bandwidth is no longer a problem, such data exchange can happen seamlessly.
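A minimal sketch of the reversed flow proposed above: the profile lives on the user's device, and a service receives only a temporary, anonymized snapshot scoped to the fields the user allowed for that session. The class, field names, and data are hypothetical, purely to illustrate the consent boundary.

```python
# Sketch of a device-resident profile that releases only a consented,
# session-scoped snapshot. Names and fields are invented.
import copy

class LocalProfile:
    """Profile stored on-device; nothing leaves without explicit consent."""

    def __init__(self, data: dict):
        self._data = data  # never exposed directly

    def snapshot(self, allowed_fields: set[str]) -> dict:
        """Return an anonymous, temporary view of only the permitted fields."""
        return {
            k: copy.deepcopy(v)
            for k, v in self._data.items() if k in allowed_fields
        }

profile = LocalProfile({
    "user_id": "alice@example.com",     # identifying; never shared
    "book_interests": ["statistics"],   # preference the user may share
    "health_history": ["..."],          # sensitive; not consented
})

# The service receives only what the user permitted for this session:
snap = profile.snapshot(allowed_fields={"book_interests"})
print(snap)  # {'book_interests': ['statistics']}
```

The snapshot carries no identifier, so associating it with the user beyond the session would require the service to violate the contract, which is precisely the kind of narrow obligation that is easier to regulate than blanket data collection.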

A reversed approach would finally allow consumers to have full control over their data, including who has access to it and at what granularity, shifting the power back to consumers. From a regulatory point of view, since the raw data is no longer located on the company's servers, it is easier to enforce laws that prevent providers from using the temporary profile snapshots for other purposes.

It is essential to understand that beyond personalized experiences, there are no real incentives for consumers to give away their data for free. Once the technological challenge of how to create personalized experiences is solved in a privacy-preserving manner, the options are unlimited in terms of going one step further and attaching value to our data.

Digital Transformation Is Hard and Existential

There is no large corporation on the planet that does not have digital transformation among its top three strategic priorities, and many have already dived deep into it without necessarily understanding what success means. Digital transformation is highly strategic, and many times existential, due to the simple fact that technology changed everyone's life forever and keeps on doing so. That change gave birth to a new breed of companies with technological DNA, enabling them to create superior substitutes for many of the services and products offered by the “old world” companies. Furthermore, these “new” companies catch on to customers' changing preferences and adapt very efficiently. The agility of the new world puts a shining spotlight on the weaknesses and clumsiness of the incumbents: “old” companies built with human processes as core DNA, far from becoming even a decent player in the new game. The obsolescence of the incumbents is not apparent to the naked eye at first, as large piles of cash are used to stage a theater play posing as a new world company, though the clock ticking toward their disappearance is not impressed by the show. Given the huge investment and brainpower spent on “transforming” these companies, I want to set out a frame of thinking that can be useful for understanding what it means to have a successful transformation.

When I think about companies, the metaphor of an organism always comes to my mind. Although it is not a perfect model for describing the trajectory of a company in the long run, the dynamics and actors at play very much present an orchestrated long-term behavior similar to the way organisms work. For example, I used the term DNA earlier to describe the core competence of a company, and it made perfect sense. Another illustration of the difference between incumbents and upstarts is the amount of fat each group carries, the ratio of muscle to fat, and the type of muscles at play. In a world where running is the essential criterion for survival, certain groups of muscles and capabilities matter the most. The magnitude of change driven by technology, and mostly software, is more a matter of a new species than a linear improvement in specific areas within a family of organisms.

For anyone overweight and nonathletic who needs to get on a strict diet and training routine, the change in life is dramatic. The path is like a roller coaster with many peaks of illusion and lows of disillusion. Getting started is nearly impossible, as the whole body is not ready for such a change. The urgency to lose weight and get in shape, which in the case of reviving company competitiveness is not just a matter of aesthetics, may lead someone to decide on an extreme diet, a start that usually ends with a shock, both to a body and to a company. As for the path itself, everyone is different and eventually needs their own way to get there; a simple truth that contradicts the consulting industry approach of replicating formulas from one customer to another. And lastly, let’s say a company had a very successful transformation and is back in the game; the immediate questions that arise are: is it the same company at all? Does it serve the same customers with the same products and services? Which parts died in the process, and what was born?

It seems that if a company goes through a successful transformation, it cannot, by definition, work the same and provide the same output to the world. Successful transformation changes you profoundly, and this is a truth that has to be communicated internally and externally very clearly. Unless it is openly out there, every participant in the process who is expected to play a role in the change will, at a subconscious level, oppose the idea of the change, as it is an unknown existential threat. And eventually, they are right; a successful transformation can take some of them out of the game.

Unpredictions for 2020 in Cyber Security

The end-of-year tradition of predictions is becoming a guessing game, as the pace of innovation is increasing toward pure randomness. So I will stop pretending I know what is going to happen in 2020, and instead I want to write about the areas that seem the most unpredictable for 2020. Below you can also find an honest review of my 2019 predictions.

2020 Unpredictions


5G

A much-talked-about topic in 2019, with billions poured into rollouts across the globe. However, it is still unclear what the killer use cases are, which is usually one step before starting to think about threats, security concepts, and the supply chain of cybersecurity vendors meant to serve this future market. I think we will stay in this state of vagueness for at least the next three years.

Insurance for the Digital World

Even though a big part of our lives has shifted into the digital realm, the insurance world is still observing, hesitantly testing the waters with small initiatives. It is unclear how insurance will immerse itself in digital life, and cyber insurance is one example of such unpredictability. There seems to be room for lots of innovation beyond helping the behemoths transform.

Cloud Security

2018 and 2019 were glorious years for cloud security: it seems as if it is clear what customers need, and the only thing left for vendors is to get the work done. Cloud transformation, in general, hides high complexity and a highly volatile transition of businesses and operations into the cloud, a process that will take another ten years at a minimum, during which technologies, models, and architectures will change many times. Since security is eventually attached to the shape this transformation takes, it will take some time until the right security concepts and paradigms stabilize, and much shuffling in the security vendors’ space before we see an established and directed industry. I believe the markets will meet this realization in 2020.

Alternative Digital Worlds

It seems many countries, including Russia, China, and others, are contemplating the creation of their own “internet”, and the narrative is about reducing dependency on the “American”-controlled internet. It is a big question involving human rights, progress, nationalism, and trade, and the matter will remain unsolved, as the forces at play seem to be here for the long haul.



IoT Security

I said IoT security is a big undefined problem, and it still is. I don’t see anything changing in 2020, even though IoT deployments have become more commonplace.


DevOps Security

I predicted 2019 would be the start of a purchasing spree for cloud DevOps related security startups, and I was spot on. The trend will continue into 2020 as the DevSecOps stack emerges.


Chipsets

I predicted a flood of new chip designs beyond Intel and AMD, with many security vulnerabilities disclosed. I was slightly right, as there are many efforts to create new, unique chipsets. However, the market is still stuck with the gold standard of Intel, tilting a bit toward AMD product lines. I was dead wrong about the level of interest in researching vulnerabilities in chipsets, maybe because there is not much to do about them.

Small Business Security

I predicted small businesses would emerge as a serious target market for cybersecurity vendors. I was wrong: no one cares to sell to small companies, as it does not fit the typical startup/VC playbook. Still optimistic.

AI in Cyber Security

I predicted that the hype in the endpoint AI security market would fade, and I was spot on: the hype is gone, and the limitations became very clear. There is a growing shift from local AI in endpoints toward centralized security analytics, pushed by Azure, CrowdStrike, and Palo Alto Networks with the narrative of collecting as much data as possible and running some magic algorithms on the cloud to get the job done. A new buzz that will meet reality much faster than the original hype of AI in endpoints.

AI in the Hands of Cyber Attackers

I predicted 2019 would be the year we would see the first large-scale attack automated by AI. Well, that did not happen. There is a growing group of people talking about this, but there is no real evidence of such attacks. I still believe weaponization using AI will become the next big wave of cyber threats, but I guess it will take some more time. Maybe because it is still easy for attackers to achieve their goals with rather simplistic attacks, thanks to weak security postures.

Data Privacy

I predicted it would be the year of awakening, when everyone would understand that they “pay” for all the free services with their data. I was right about this one: everyone now understands the nature of the relationship they have with the big consumer tech companies, what they give, and what they get.

Elections & Democracy

I predicted that the manipulation of elections via social networks would diminish citizens’ trust in the democratic process across the globe. I was spot on. In Israel, for example, we are unfortunately entering a third round of elections, and confidence and trust are at an all-time low.

Tech Regulation

I wrongly expected regulation to become fast and innovative, integrating with tech companies for tight oversight. I was optimistically wrong. I don’t see anything like that happening in the next five years!

The Emergence of Authentication Methods

I predicted the competition for the best authentication method would stay a mess with many alternatives, old and new, and no winner. I was right about this one. The situation will remain the same for the foreseeable future.

Supply Chain Attacks

I predicted supply chain attacks would become a big thing in 2019. I was wrong about their magnitude, even though they played a decent role in the mix of cyber threats in 2019.

Happy End of 2019!

The ACCEPTABLE Way to Handle Data Breaches

LifeLabs, a Canadian company, suffered a significant data breach. According to this statement, the damage was “customer information that could include name, address, email, login, passwords, date of birth, health card number and lab test results” in the magnitude of “approximately 15 million customers on the computer systems that were potentially accessed in this breach”.

It is an unfortunate event for the company, but eventually, the ones hurt the most are the customers who entrusted it with their private information. It is also clear that the resources the company allocated to defending that private information were not enough. I don’t know the intimate details of this event; still, from my experience, the cyber defense situation in such companies is usually on the verge of negligence and most commonly severely underfunded. We, as consumers, have gotten used to stories like this every other week, numbing us into accepting whatever the industry dictates as the best practices for such an event.

The playbook of best practices can be captured quite accurately from the letter to customers:

“We have taken several measures to protect our customer information including:

  • Immediately engaging with world-class cybersecurity experts to isolate and secure the affected systems and determine the scope of the breach;
  • Further strengthening our systems to deter future incidents;
  • Retrieving the data by making a payment. We did this in collaboration with experts familiar with cyber-attacks and negotiations with cybercriminals;
  • Engaging with law enforcement, who are currently investigating the matter; and
  • Offering cybersecurity protection services to our customers, such as identity theft and fraud protection insurance.”

My interpretation of those practices:

  • First, deal with the breach internally with very high urgency, even though many times the attackers have been inside your network for months. Awareness of the mere existence of the breach puts everyone into critical mode, which most commonly means disconnecting and shutting down everything and calling law enforcement.
  • Get your data back so the business can continue running. You can’t imagine how many companies don’t have a fresh copy of their data, so they have to pay the extortionists’ ransom to get it back.
  • And here comes the “strengthening the security to deter such attacks”. I don’t know what it means in practice, as from my experience it takes a long time to turn a probable breach case into a company that can deter future attacks. I guess it is a one-time expense in the form of buying some fancy security products, which will take months and maybe years to roll out.
  • Now that the company is back in business and customers still don’t know that their data is potentially out there, bringing joy and prosperity to the attackers, the last and main challenge emerges: how to prevent a potential PR nightmare. And the acceptable answer is: let’s set up a website to show we care, and let’s give the customers insurance on fraud and an alerting service to know when their information gets abused. Practically speaking, this tells the customer: now that your data is out there, you are on your own, and it is advisable to stay tuned to alerts telling you when your data reaches terrible places. Good luck with that…

A new theater play called “Best Practices” has emerged, mostly to mitigate all kinds of business risks while posing as “taking care of” customers.

Spanning the Chasm: The Missing Link in Tech Regulation – Part 1 of 2

Mark Zuckerberg was right when he wrote in his op-ed for the Washington Post that the internet needs new rules, though naturally, his view is limited as the CEO of a private company. For three decades, governments across the globe have created an enormous regulatory vacuum due to a profound misunderstanding of technology’s magnitude of impact on society. As a result, they have neglected their duty to protect society in the mixed reality of technology and humanity. Facebook is the scapegoat of this debate due to its enormous impact on the social fabric, but the chasm between governments, regulation, and tech affects every other tech company, whether it is part of a supply chain of IT infrastructure or a consumer-facing service. The spring of initiatives to regulate Artificial Intelligence (AI) carries the same burden, and that is why the driving force behind them is mostly fear, uncertainty, and negative sentiment. I am personally involved in one of those initiatives, and I can’t escape the feeling it is a bandage for a severe illness, a short-sighted solution to a much bigger problem.

Before technology became immersed in our reality, human-driven processes governed our social fabric. Methods that evolved over centuries to balance power and responsibility among governments, citizens, and companies resulted in a set of rules which are observable and enforceable by humans quite effectively. Never a perfect solution, but a steady approach for the democratic systems we know. Every system has a pace and rhythm, and the government-societal system is bound to humans’ pace to create, understand, express, and collaborate effectively with others. The pace of living we are all used to is measured in days, weeks, months, and even years. Technology, on the other hand, works on a different time scale. Information technology is a multi-purpose Lego with a fast learning curve, creating the most significant impact in shorter and shorter timeframes. In the world of technology, the pace has two facets: the creation/innovation span, optimized toward achieving significant impact in a shorter period; and the run-time aspect, which introduces a more profound complexity.

Running IT systems hide a great deal of complexity from their users: highly volatile dynamics operating in the space of nanoseconds. IT systems are made of source code that describes to computers what should be done to achieve the goal of the system. The code is nothing more than a stream of electrons and as such can be changed many times a second to reflect ideas desired by its creator, where a change in the code leads to a different system. One of the greatest premises of AI, for example, is that it can create code on its own using only data, without human intervention. A change can carry an innocent error that reveals the personal details of millions of consumers to the public. This volatile system impacts privacy, consumer protection, and human rights. The rapid pace of change in technology is an order of magnitude faster than humans’ capability to perceive the complexity of a change in time to effectively apply human decisions the way regulation works today.

The mandate and requirement of governments to protect citizens have not changed at all during the last 30 years, beyond supporting societal changes. What has changed is reality, where technological forces govern more and more parts of our lives and our way of living, and governments cannot fulfill their duty due to their inability to bridge these two disconnected worlds. Every suggestion of a human-driven regulatory framework will be blindsided and defensive by definition, with no real impact and eventually harmful to the technological revolution. Harm to technological innovation will directly harm our way of living, as we have already passed the tipping point of dependency on technology in many critical aspects of life. The boundaries regulation suggests about what is right and wrong still make sense and have not changed, as it applies to humans after all. But the way regulation is applied to the technological part of reality has to adapt to the rules of the game of the technology world to become useful, and not counterintuitive to the main benefits we reap from tech innovation.

The growing gap between the worlds of humans and IT has much more significant ramifications, and we already experience some of them, such as cyber attacks, uncontrolled AI capabilities and usage, robotics and automation as disruptors of complete economic ecosystems, autonomous weapons, the information gap, and others we don’t know about yet. The lagging of governments has resulted in an absurd de-facto privatization of regulation into the hands of private enterprises motivated by the economic forces of profitability and growth. Censorship, consumer protection, and human and civil rights have been privatized without even contemplating the consequences of this loose framework, until the last two years, when scandals surprisingly surfaced. One of the implications of this privatization is the transformation of humans into a resource, tapped for attention which eventually leads to spending, and it won’t stop there.

Another root cause governing many of the conflicts we experience today is the global nature of technology vs. the local nature of legal frameworks. Technology as a fabric has no boundaries; it can exist wherever electricity flows. This factor is one of the main reasons behind the remarkable economic value of IT companies. On the other hand, national or regional regulation is by definition anchored to local governing societal principles. A great divide lies between the subjective, human definition of regulation and the objective nature of technology. Adding to that complexity are countries that harness technology as a global competitive advantage without the willingness to openly participate under the same shared rules.

What Will Happen When Machines Start Lying to Us

The mere thought of a computer lying to you about something has boggled my brain ever since I heard it from a professor friend on a flight, as an anecdote about what could happen next in AI. That one sentence took me on a long trip down a rabbit hole of a wide range of implications. I did not want to write about it at first, so as not to be the one who sows that idea in the minds of people with bad intentions, but today I saw that (The AI Learned to Hide Data From Its Creators to Cheat at Tasks They Gave It) and I felt as if the cat was out of the bag. So here I go.

An underlying, and maybe subliminal, assumption people have had while interacting with computers ever since they were invented is that computers tell the truth. Computers may report incorrect results due to false or missing data or due to incorrect programming, but I personally never assumed anything else might be going on, excluding the case where a computer is only used as a communications medium with other people. Systems, processes, organizations, and even societies dependent on computing assume computers do only what they were programmed to.

AI, as a technology game-changer, is slowly penetrating many systems and applications that are an inseparable part of our lives, playing the role of a powerful and versatile alternative brain, replacing rigid procedural decision-making logic. This shift introduces many new and unknown variables into the future of computing, impacting the delicate balance our society is based on. Unknown variables many times translate to fear, as in the case of the impact on the jobs market, the potential impact on human curiosity and productivity when everything is automated, the threat of autonomous cybersecurity attacks, and of course the dreadful nightmares about machines making up their minds to eliminate humans. Some of these fears are grounded in reality and need to be tackled in the way we drive this transformation; some are still in the science-fiction zone. The more established fears are imagined in the realm of the known impact and known capabilities computers can potentially reach with AI. For example, if cars become fully autonomous, thanks to the ability to identify objects in digital vision and correlate them with map information and a database of past good and bad driving decisions, then it may cause a shortage of jobs for taxi and truck drivers. This is a grounded concern. Still, there are certain human characteristics we never imagined would be transferred to AI, maybe due to an idealistic view of AI as a purer form of humanity, keeping only what seems positive and useful. Deception is one of those traits we don’t want in AI. It is a trait that will change everything we know about human-to-machine relationships as well as machine-to-machine relationships.

Although the research mentioned is far from a general-purpose capability to employ deception as a strategy to achieve unknown ends, the mere fact that deception is just another strategy to be programmed, evaluated, and selected by a machine in order to achieve its goals in a more optimized manner is scary.

This is an example of a side effect of AI that cannot be eliminated, as it is implied by its underlying capabilities, such as understanding the environmental conditions required to achieve a task and the ability to select a feasible strategy based on its tactical capabilities.
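The point that deception is "just another strategy" can be illustrated with a toy planner (the strategies and numbers here are entirely hypothetical): the agent scores candidate strategies purely by expected payoff, and "deceive" is simply one more entry, with no special moral weight attached.

```python
# Hypothetical catalog of strategies with made-up success odds and payoffs.
strategies = {
    "cooperate": {"success_prob": 0.6, "payoff": 10},
    "deceive":   {"success_prob": 0.9, "payoff": 10},
}

def pick_strategy(options):
    # Select whichever strategy maximizes expected payoff.
    return max(options, key=lambda s: options[s]["success_prob"] * options[s]["payoff"])

print(pick_strategy(strategies))  # prints "deceive": it wins on expected value
```

Nothing in the selection rule distinguishes honest from dishonest tactics; if deception scores higher, it is chosen, which is exactly why this side effect is hard to eliminate at the capability level.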

Why Privacy Will Remain an Open Issue Unless

2018 was a year of awakening to the dire side effects of technological innovation on privacy. The news of Facebook’s mishandling of users’ data raised concerns everywhere. We saw the misuse of private information for optimizing business goals and the abuse of personal data as a platform to serve mind-washing political influencers posing as commercial advertisers. Facebook is in a way the privacy scapegoat of the world, but they are not alone; Google, Twitter, and others are in the same boat. Adding to the fiasco were the too-many examples of consumer services that neglected to protect their customer data from cyber attacks. 2018 was a year of rising concerns about privacy, breaking the myth that people don’t care about privacy anymore. People actually do care and understand what personal data is, though their options are limited, and there is no sign 2019 will be any different.

So how did we get here? A growing part of our life is becoming digital, and convenience is definitely the number one priority, a luxury made possible by technological innovation. Convenience means a personalized experience, and personalization requires access to personal data. The more data we provide, the better the experience we get. Personal data is made of information provided by the user or indications of user activity implicitly collected using different digital tracking technologies. The collected data is fed into different systems residing in central computing facilities that make the service work. Some of the data is fed into machine learning systems that seek to learn something insightful about the user or predict the user’s next move. Inside those complex IT systems of the service provider, our data is constantly vulnerable to misuse, where exposure to unauthorized parties by mistake or intention is possible. The same data is also vulnerable by the mere fact it resides and flows in the service provider’s systems, as they are susceptible to cyberattacks by highly motivated hackers. Our data is at the mercy of the people operating the service and their ability and desire to protect it. They have access to it, control it, decide who gets access to it, and decide when and what to disclose to us about how they use it.

We are in this poor state of lack of control over our privacy because the main technological paradigm dominating the recent 10-year wave of digital innovation is to collect data centrally. Data is a physical object; it needs to be accessible to the information systems that process it, and central data storage is the de-facto standard for building applications. There are new data storage and processing paradigms that aspire to work differently, such as edge analytics and distributed storage (partially blockchain-related). These innovations hold a promise of a better future for our privacy, but they are unfortunately still at a very experimental, early stage.
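One of the experimental directions alluded to here can be sketched with a toy federated-learning loop: each client improves a shared model on its own private data, and only model parameters, never the raw data, are exchanged with the server. This is a simplified illustration under my own assumptions, not a production protocol.

```python
def local_update(w, data, lr=0.1):
    # One gradient step for a 1-D linear model y ≈ w * x, computed on-device.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights):
    # The server only ever sees these scalar weights, not the datasets.
    return sum(client_weights) / len(client_weights)

clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]  # private datasets (y = 2x)
global_w = 0.0
for _ in range(50):
    global_w = federated_average([local_update(global_w, d) for d in clients])
print(round(global_w, 2))  # converges toward w = 2.0
```

The provider still gets a useful model, but the raw behavioral records never leave the devices, which is the property the centralized paradigm lacks.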

Unless we change the way we build digital services, our privacy will remain a growing concern, and our only hope as individuals will be to have enough luck not to get hurt.

My Ten Cyber Security Predictions for 2019

Well, 2018 is almost over and cyber threats are still here to keep us alert and ready for our continued roller coaster ride in 2019 as well.

So here are some of my predictions for the world of cybersecurity in 2019:


IoT Security

IoT is slowly turning into reality, and security is becoming a growing concern, as an afterthought as always. This reality will not materialize into a new cohort of specialized vendors due to its highly fragmented nature, so we are not set to see any serious IoT security industry emerge in 2019. Again. Maybe in 2020 or 2021.


DevOps Security

DevOps security has seen a serious wave of innovation in the last three years, across different areas of the process, in the cloud and on-premise. 2019 may be the time for consolidation into full DevOps security suites to avoid vendor inflation and ease integration across the process.


Chipsets

In 2019 we will see a flood of chipsets from Intel, AMD, Nvidia, Qualcomm, FPGA vendors, and many other custom makers such as Facebook, Google, and others. Many new paradigms and concepts have not yet been battle-tested from a security point of view. That will result in many new vulnerabilities being uncovered, also due to the reliance of chipsets on more software inside, and of course due to the growing appetite of security researchers for wildly popular and difficult-to-fix vulnerabilities.

Freelancers and Small Office

Professionals and small businesses reliant on digital services will become a prime and highly vulnerable target for cyber attacks, the same businesses that find it very difficult to recover from an attack. There are already quite a few existing vendors, and new ones flocking to save them, and the trend will intensify in 2019. The once-feared, highly fragmented market of small businesses will start being served with specialized solutions, especially in light of the over-competitiveness in the large enterprise cybersecurity arena.

Enterprise Endpoint Protection

The AI hype wave will meet reality and be reduced back to its appropriate size in terms of capabilities and limitations, an understanding that clarifies the need for a complete and, most importantly, effective protective solution that can be durable for at least 3-5 years. The commoditization of AV for mid-sized and smaller businesses and consumers will take another step forward with the improvement of Windows Defender and its attractiveness as a highly integrated signature-engine replacement that costs nothing.

AI Inside Cyber Attacks

We will see the first impactful and widespread cyber attacks with AI inside hitting the big news, and they will set new challenges for defense systems and paradigms.

Facebook, Google, Twitter…

Another year of deeper realization that much more data than we thought is in the hands of these companies, making us more vulnerable, and that they are not immune to cyber threats, like everyone else, eventually compromising us. We will also come to realize that services that use our data as the main tool to optimize themselves conflict with protecting our privacy, and that our aspiration for control is fruitless given the way these companies are built and the way their products are architected. We will see more good intentions from the people operating these companies.

Brain Washing

As more elections take place across the planet, we will learn that the tactics used to bend democracy in the US are being reused and applied in even less elegant ways, especially in non-English-speaking countries, diminishing the overall trust in the system and in the democratic process of electing leadership.

Tech Regulation

Regulators and policymakers will eventually understand that to enforce regulation effectively on dynamic technological systems, there is a need for a live technological system with AI inside on the regulator’s side. Humans cannot cope with the speed of change in products, and the after-the-fact approach of reacting to incidents when the damage is already done will not be sufficient anymore.


Authentication

2018 was the year of a multitude of authentication ideas and schemes coming in different flavors, and 2019 will be another year of natural selection for the non-applicable ideas. Authentication will stay an open issue, and may stay like that for a long time, due to the dynamic nature of systems and interfaces. Having said that, many people have had enough of text passwords and 2FA.

The Year of Supply Chain Attacks

2018 was the year supply chain attacks were successfully tested by attackers as an approach, and 2019 will be the year they are taken to full scale. IT outsourcing firms will be a soft spot, as their access to and control over customer systems can provide a great launchpad into companies’ assets.

Let’s see how it plays out.

Happy Holidays and Safe 2019!

How to Disclose a Security Vulnerability and Stay Alive

In the last ten years, I have been involved in the disclosure of multiple vulnerabilities to different organizations, and each story is unique, as there is no standard way of doing it. I am not a security researcher and did not find those vulnerabilities on my own, but I was there. A responsible researcher, subject to your definition of what is responsible, first discloses the vulnerability to the developer of the product via email or a bug bounty web page. The idea is to notify the vendor as soon as possible so they have time to study the vulnerability, understand its impact, create a fix, and publish an update, so customers can have a solution before weaponization starts. Once the vendor disclosure is over, you want to notify the public about the existence of the vulnerability for situational awareness. Some researchers wait a specified period before exposure, some never disclose to the public, and some do not wait at all. There is also variance in the level of detail in the public disclosure: some researchers only hint at the location of the vulnerability with mitigation tips, while others publish full proof-of-concept code demonstrating how to exploit it. I am writing this to share some thoughts about the process, with considerations and pitfalls that may take place.

A Bug Was Found

It all starts with the particular moment you find a bug in a specific product, a bug that can be abused by a malicious actor to manipulate the product into doing something unintended and usually beneficial to the attacker. Whether you searched for the bug days and nights under a coherent thesis or just encountered it accidentally, it is a special moment. Once the excitement settles, the first thing to do is to check on the internet and in some specialized databases whether the bug is already known in some form. If it is unknown, you are entering a singular phase in time where you may be the only one on earth who knows about this vulnerability. I say may because either the vendor already knows about it but has not released a fix yet for some reason, or an attacker knows about it and is already abusing it in ongoing stealth attacks. It could also be that another researcher in the world sits on this hot potato, contemplating what to do with it. The vulnerability could have existed for many years and may be known to a select few; this is a potential reality you cannot eliminate. The clock has started ticking loudly. In a way, you discovered the secret sauce of a potential cyber weapon with an unknown impact, as vulnerabilities are just a means to an end for attackers.

Disclosing to the Vendor

You can and should communicate it to the vendor immediately; most software and hardware vendors publish their means of disclosure. Unfortunately, sending it quickly to the vendor does not reduce the uncertainty in the process; it adds to it. For instance, you may get silence on the other end, no reply at all, which can put you into a strange limbo state. Another outcome could be an angry reply asking how dare you look into the guts of their product searching for bugs, claiming you are driven only by a lust for publicity, potentially accompanied by a legal letter. You could also get a warning not to publish your work at any point in time, as it could cause damage to the vendor. These responses do take place in reality and are not fictional, so you should keep them in mind. The best result of the first email to the vendor is a fast reply acknowledging the discovery, maybe promising a bounty, but most importantly cooperating sensibly with your goal of public safety disclosure.

There are researchers who do not hold their breath waiting to help the vendor and immediately go to public channels with their findings, assuming the vendor will hear about it eventually and react. This approach usually sacrifices users' safety in the short term in exchange for stronger pressure on the vendor to respond. A plan not for the faint of heart.

In the constructive scenarios of disclosure to the vendor, there is usually a process of communicating back and forth with the technical team behind the product: exchanging details on the vulnerability, sharing the proof of concept so the vendor can reproduce it quickly, and supporting the effort to create a fix. Keep in mind that even if a fix is created, it does not mean it fits the company's plans to roll it out immediately, for whatever reason, and this is where your decision on how to time the public disclosure comes into play. The vendor wants the timeline adjusted to their convenience, while your interest is to make sure a fix and public awareness of the problem reach users as soon as possible. Sometimes these interests align; sometimes they conflict. Google Project Zero made the 90-day deadline from vendor notification to public disclosure famous and reasonable, but it is not written in stone, as each vulnerability reveals different dynamics concerning fix rollout, and the timing should be thought through carefully.
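For concreteness, the coordination window can be sketched as a simple schedule calculation. The milestone names and reminder offsets below are my own illustrative assumptions; only the 90-day default reflects the Project Zero convention mentioned above:

```python
from datetime import date, timedelta

def disclosure_schedule(reported_on: date, deadline_days: int = 90) -> dict:
    """Sketch of a coordinated-disclosure timeline in the spirit of the
    90-day policy. Intermediate milestone offsets are illustrative,
    not prescribed by any standard."""
    return {
        "reported_to_vendor": reported_on,
        "follow_up_if_silent": reported_on + timedelta(days=14),  # nudge an unresponsive vendor
        "status_check": reported_on + timedelta(days=45),         # is a fix on track?
        "public_disclosure": reported_on + timedelta(days=deadline_days),
    }

schedule = disclosure_schedule(date(2019, 1, 7))
for milestone, when in schedule.items():
    print(f"{milestone}: {when.isoformat()}")
```

In practice the final date is a negotiation anchor rather than a hard rule; extending it when a fix is demonstrably in progress is common.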

Public Disclosure

Communicating the vulnerability to the public should have a broad impact to reach the awareness of users, and it usually takes one of two possible paths. The easiest one is to publish a blog post and share it on cybersecurity expert forums; if the story is interesting, it will pick up very fast, as information dissemination in the world of infosec works quite well, and the traditional media will pick it up from this initial buzz. It is the easiest way, but not necessarily the one whose consequences you have the most control over, as the interpretations and opinions along the way can vary greatly. The second path is to connect directly with a journalist from a responsible media outlet with shared interest areas and build the story together, where they can take the time to ask for comments from the vendor and other related parties and develop the story correctly. In both cases, the vulnerability uncovered should have a broad audience impact to reach publicity. Handling the public disclosure comes with quite a bit of stress for the inexperienced, because once the story starts rolling publicly you are not in control anymore; the best advice I can give is to stay true to your story, know your details, and be responsive.

I suggest letting the vendor know about your public disclosure intentions from the get-go so there won't be surprises, and hopefully they will cooperate with it, even though there is a risk that, if they are not open to the publicity step, they will have enough time to downplay or mitigate the disclosure.

One of the main questions that arises when contemplating public disclosure is whether to publish the proof-of-concept code or not. It has pros and cons; in my eyes, more cons than pros. In general, once you publish the mere existence of the vulnerability, you have covered the goal of awareness, and combined with the public pressure this may create on the vendor, you may have shortened the time for a fix to be built. The published code may create additional pressure on the vendor, but that addition is marginal. Bear in mind that once you publish a POC, you have shortened the time it takes attackers to weaponize the new arsenal, during the most sensitive window, when the new fix does not yet protect most users. I am not suggesting that attackers are in pressing need of your POC to abuse the new vulnerability; the CVE entry that pinpoints the vulnerability is enough for them to build an attack. I am arguing that, by definition, you did not make their life harder by handing them example code. Making their life harder and buying more time for users of the vulnerable technology is all about safety, which is the original goal of the disclosure anyhow. The reason to be in favor of publishing a POC is the contribution to the security research industry, where researchers gain another tool in their arsenal in the search for other vulnerabilities. Still, once you share something like that in public, you cannot control who gets the knowledge, and you should assume both attackers and defenders will. There are people in the industry who strongly oppose POC publishing due to the cons I mentioned, but I think they take too harsh a stance. It is a fact that the mere CVE publication causes a spike of new attacks abusing the new vulnerability even when no POC is attached to the CVE, so the POC does not seem to be the main contributor to that phenomenon.
I am not in favor of publishing a POC, though I think about it carefully on a case-by-case basis.

One of the side benefits of publishing a vulnerability is recognition in the respective industry, and this motivation goes alongside the goal of increasing safety. The same applies to possible monetary compensation. These two "nonprofessional" motivations can sometimes cause misjudgment by the person disclosing the vulnerability, especially when navigating the harsh waters of publicity, and they often draw public criticism of the researchers. I believe independent security researchers are more than entitled to these compensations, as they put their time and energy, with good intentions, into fixing broken technologies that they do not own, so the extra drivers eventually increase safety for all of us.

On Patching

The main perceived milestone in a vulnerability disclosure journey is the introduction of a new version by the vendor that fixes the vulnerability. The real freedom to disclose everything about a vulnerability comes when users are protected by that new fix, and in reality, there is a considerable gap between the time a patch is introduced and the time systems actually have it applied. In enterprises, unless it is a critical patch with a massive impact, it may take 6-18 months until patches are applied to systems. On many categories of IoT devices, no patching takes place at all, and on consumer products such as laptops and phones, the pace of patching can be fast but is also cumbersome and tedious, so many people just turn it off. The architecture of software patches, which often mixes new features with security fixes, is outdated, flawed, and not optimized for the volatility of the cybersecurity world. So please bear in mind that even if a patch exists, it does not mean people and systems are protected by it.

The world has changed a lot in the last seven years regarding how vulnerability disclosure works. More and more companies have come to appreciate the work of external security researchers, and there is more openness toward a collaborative effort to make products safer. There is still a long way to go to achieve agility and greater safety, but we are definitely headed in the right direction.

The Emerging Attention Attack Surface

It is a well-known truth among security experts that humans are the weakest link and social engineering is the path of least resistance for cyber attackers. The classic definition of social engineering is deception aimed at making people do what you want them to do. In the world of cybersecurity, that can mean mistakenly opening an email attachment plagued with malicious code. The definition of social engineering is broad and does not prescribe specific deception methods; the classic ones are temporary confidence building, wrong decisions due to lack of attention, and curiosity traps.

Our lives have become digital: an overwhelming digitization wave of ever more exciting new digital services and products improving our lives. The only constant in this significant change is our limited supply of attention. As humans, we have limited time, which makes our attention a scarce resource, a resource every digital supplier wants to grab more and more of. In a way, we have evolved into attention-optimization machines: we continuously decide what is interesting and what is not, and we ask digital services to notify us when something of interest takes place in the future. The growing attention scarcity has driven many technological innovations, such as personalization on social networks.

The underlying mechanism of attention works by directing our brainpower at a specific piece of information: initially, we gather just enough metadata to decide whether the new information is worthy of our attention. Due to the exploding number of temptations competing for our attention, the time it takes us to decide whether something is interesting keeps getting shorter, making us ever more selective and faster to skip. This change in behavior creates an excellent opportunity for cyber attackers refining their social engineering; a new attack surface is emerging. The initial attention decision-making phase allows attackers to deceive by introducing artificial but highly exciting and relevant baits at the right time, an approach that results in a much higher conversion ratio for the attackers. The combination of attention optimization, shortening decision times, and highly interesting fake pieces of information sets the stage for a potentially highly effective new attack vector.

Some examples:

Email: An email with a subject line and content that discusses something of timely interest to you. For example, you changed your LinkedIn job position today, and one hour later you get an email with another job offer that sounds similar to your new job. When you change jobs, your attention to the career topic skyrockets; I guess very few can resist the temptation to open such an email.

Social network mentions: Imagine you've tweeted that you are going on a trip to Washington, and someone with a fake account replies with a link about flight delays. Wouldn't you click on it? If the answer is yes, you could get infected by the mere click on the link.

Google Alerts: Say you want to track mentions of yourself on the internet, so you set a Google Alert to email you whenever a new webpage appears with your name on it. Now imagine getting such an email mentioning you on a page with a juicy excerpt. Wouldn't you click on the link to read the whole page and see what they wrote about you?

All these examples promise high conversion ratios because they are relevant and arrive in a timely fashion. If you are targeted during the busy part of the day, the chances you will click on something like that are high.

One of the main contributors to the emergence of this attack surface is the growth of personal data spread across different networks and services. This public information serves as a sound basis for attackers to understand what is interesting to you, and when.

The First Principle of Security By Design

People create technologies to serve a purpose. It starts with a goal in mind; the creator then goes through a design phase and later builds a technology-based system that can achieve that goal. For example, someone created Google Docs, which allows people to write documents online. A system is a composition of constructs and capabilities set to be used in a certain intended way. Designers always aspire to generalization in their creations so they can serve other potential uses, enjoying reuse of technologies and resources. This path, which starts at the purpose and goes through design, construction, and finally usage, is the primary paradigm of technological tools.

The challenge arises when technological creations are abused for unintended purposes. Every system has a theoretical spectrum of possible usages dictated by its capabilities, and it may even be impossible to grasp the full potential. The gap between potential and intended usage is the root of most, if not all, cybersecurity problems. The inherent risk in artificial intelligence lies within the same weakness of purpose vs. actual usage. Millions of examples come to mind, from computer viruses abusing standard operating system mechanisms to cause harm, up to the recent abuse of Facebook's advertising network to sway the minds of US citizens during the last elections. The pattern is not unique to information technologies; it is a general attribute of tools, but information technologies, with their far reach, have elevated the risk of misuse.

One way to tackle this weakness is to add a phase to the design process that evaluates the boundaries of potential usages of each new system and devises a self-regulating framework, so that each system has its own self-regulatory capability. This effort should take place during the design phase but also be re-evaluated continuously, as the intersection of technologies creates other potential uses. This is a first and fundamental principle of the emerging paradigm of security by design. Any protective measure added after the design phase incurs higher implementation costs while its efficiency is reduced: the later self-regulating protection is applied, the greater the reduction in its effectiveness.

Security in technologies should stop being an afterthought.

Risks of Artificial Intelligence on Society

Random Thoughts on Cyber Security, Artificial Intelligence, and Future Risks at the OECD Event – AI: Intelligent Machines, Smart Policies

It is the end of the first day of a fascinating event on artificial intelligence, its impact on societies, and how policymakers should act upon what seems like a once-in-a-lifetime technological revolution. As someone rooted deeply in the world of cybersecurity, I wanted to share my point of view on what the future might hold.

The Present and Future Role of AI in Cyber Security and Vice Versa

Every day we witness remarkable new results in the field of AI, and still it seems we have only scratched the surface. The developments that have reached a certain level of maturity are mostly in the areas of object and pattern recognition, part of the greater field of perception, and in different branches of reasoning and decision making. AI has already entered the cyber world via defense tools, where most of the applications we see are in detecting malicious behavior in programs and network activity, plus a first level of reasoning used to deal with the information overload in security departments by helping prioritize incidents.

AI has far greater potential contributions in other fields of cybersecurity, both existing and emerging:

Talent Shortage

A big industry-wide challenge where AI can be a game-changer is the scarcity of cybersecurity professionals. Today there is a significant shortage of cybersecurity professionals, who are required to perform tasks ranging from maintaining the security configuration of companies up to responding to security incidents. ISACA predicts a shortage of two million cybersecurity professionals by 2019. AI-driven automation and decision making have the potential to handle a significant portion of the tedious tasks professionals fulfill today, reducing the workload to the jobs that require the touch of a human expert.

Pervasive Active Intelligent Defense

The extension into active defense is inevitable, and AI has the potential to address a significant portion of the threats that today's deterministic solutions cannot handle properly, being especially effective against automated threats with high propagation potential. An efficient embedding of AI inside active defense will take place in all system layers, such as the network, operating systems, hardware devices, and middleware, forming a coordinated, intelligent defense backbone.

The Double-Edged Sword

A threat yet to emerge is cyber attacks powered by AI themselves. The tools, algorithms, and expertise of artificial intelligence are widely accessible, and cyber attackers will not refrain from abusing them to make their attacks more intelligent and faster. When this threat materializes, AI will be the only possible mitigation. Such attacks will be fast and agile, and of a magnitude that existing defense tools have not yet experienced. A new genre of AI-based defense tools will have to emerge.

Privacy at Risk

Consumers' privacy as a whole is sliding down a slippery slope: more and more companies collect information on us, both structured data such as demographic information and behavioral patterns studied implicitly while we use digital services. Extrapolating the amount of data collected, given the new capabilities of big data and the multitude of new devices entering our lives under the category of IoT, we reach an unusually high number of data points per person. High amounts of personal data distributed across different vendors, residing on their central systems, increase our exposure and create greenfield opportunities for attackers to abuse and exploit us in unimaginable ways. Tackling this risk requires both regulation and different technologies such as blockchain, and AI technologies also have a role: monitoring what is collected on us, possibly moderating what is actually collected vs. what should be collected with regard to the rendered services, and quantifying our privacy risk is a task for AI.

Intelligent Identities

In recent years we have seen, at an ever-accelerating pace, new methods of authentication and, correspondingly, new attacks breaking those methods. Most authentication schemes are based on a single aspect of the interaction with the user, to keep the user experience as frictionless as possible. AI can play a role in creating robust yet frictionless identification methods that take into account vast amounts of historical and real-time multi-faceted interaction data to accurately deduce the person behind the technology.
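One way to picture such a multi-faceted scheme is a risk score that fuses several weak behavioral signals instead of relying on a single factor. The signal names, weights, and threshold below are invented for illustration, not taken from any real product:

```python
# Minimal sketch: fuse several behavioral signals into one authentication
# risk score. Signal names, weights, and the threshold are illustrative
# assumptions, not a production scheme.
SIGNAL_WEIGHTS = {
    "typing_rhythm_match": 0.35,  # similarity to historical keystroke timing
    "known_device": 0.25,         # device fingerprint seen before
    "usual_location": 0.25,       # login geo close to habitual locations
    "usual_hours": 0.15,          # login time within the user's routine
}

def auth_risk(signals: dict) -> float:
    """Each signal is a score in [0, 1]; higher means 'looks like the real user'.
    Returns the weighted mismatch (0 = no risk, 1 = maximal risk)."""
    match = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
                for name in SIGNAL_WEIGHTS)
    return round(1.0 - match, 3)

def needs_step_up(signals: dict, threshold: float = 0.4) -> bool:
    """Ask for extra verification only when the fused risk is high,
    keeping the common case frictionless."""
    return auth_risk(signals) > threshold

print(needs_step_up({"typing_rhythm_match": 0.9, "known_device": 1.0,
                     "usual_location": 1.0, "usual_hours": 1.0}))
```

The design point is that any single signal can be spoofed; an attacker would need to fake all of them at once, while a legitimate user rarely trips more than one.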

AI can contribute to our safety and security far beyond this short list of examples. Areas where the number of data points increases dramatically and automated decision-making under uncertainty is required are the right spot for AI as we know it today.

Is Artificial Intelligence Worrying?

The underlying theme in many AI-related discussions is fear, a very natural reaction to a transformative technology that has played a role in many science fiction movies. Breaking down the fear, we see two parts: the fear of change, which is inevitable, as AI is indeed going to transform many areas of our lives, and the more primal fear of the emergence of soulless machines aiming to annihilate civilization. I see the threats and opportunities staged in different phases: the short term, the medium term, the long term, and the really long term.

The short-term

The short term practically means the present, and the primary concerns are in the area of hyper-personalization, which in simple terms means all the algorithms that get to know us better than we know ourselves: an extensive private knowledge base exploited toward goals we never dreamt of. Take, for example, the whole concept of micro-targeting in advertising and on social networks, as we witnessed in the recent elections in the US. Today it is possible to build an intelligent machine that profiles citizens along demographic, behavioral, and psychological attributes. At a second stage, the machine can exploit the micro-targeting capability available on advertising networks to deliver personalized messages disguised as adverts, where the content and design of the adverts are adapted automatically to each person with the goal of changing the public state of mind. It happened in the US and can happen anywhere, which poses a severe risk to democracy. The root of this short-term threat is the loss of truth, as we are bound to consume most of our information from digital sources.

The medium-term

We will witness a big wave of automation that will disrupt many industries, on the assumption that whatever can be automated, whether bound to a logical or a physical effort, eventually will be. This wave will have a dramatic impact on society, many times improving our lives, as in the detection of diseases, which can be faster and more accurate without human error. These changes across industries will also have side effects that challenge society, such as increasing economic inequality, mostly hurting those who are already weak. It will widen the gap between knowledge workers and others and will further intensify the new inequality based on access to information: people with access to information will have a clear advantage over those without. It is quite difficult to predict whether the impact on some industries will be short term, with workers flowing to other sectors, or whether it will cause overall stability problems; this is a topic that should be studied further for each industry expecting a disruption.

The longer-term

We will see more and more intelligent machines that hold power over the life and death of humans, for example an autonomous vehicle that can kill someone on the street, or an intelligent medicine inducer that can kill a patient. The threat is driven by malicious humans who will hack the logic of such systems. Many of the smart machines we are building can be abused to give superpowers to cyber attackers. It is a severe problem, because protection from such a threat cannot be achieved by adding controls into the artificial intelligence itself; the risk comes from intelligent humans with malicious intentions and high powers.

The real long-term

This threat still belongs to science fiction: the case where machines turn against humanity while owning both the power to cause harm and the drive for self-preservation. From a technology point of view, such an event can happen, even today, if we decide to put our fate into the hands of a malicious algorithm that can preserve itself while having access to capabilities that can harm us. The risk here is that society will build AI for good purposes while other humans abuse it for other ends, until it eventually spirals out of everyone's control.

What Policy Makers Should Do To Protect Society

Before addressing specific directions, a short discussion on the limits of policymakers' power in the world of technology and AI is required. AI is practically a genre of techniques, mostly software-driven, and more and more individuals around the globe are equipping themselves with the capability to create software and, later, to work on AI. In a fashion very similar to the written word, software is a new way to express oneself, and aspiring to control or regulate that is destined to fail. The same goes for the exchange of ideas. Policymakers should understand these changed boundaries, which dictate new responsibilities as well.

Areas of Impact

Private Data

One area where central intervention can become a protective measure for citizens is the way private data is collected, verified, and, most importantly, used. Without data, most AI systems cannot operate, so data can serve as an anchor of control.

Cyber Crime & Collaborative Research

Another area of intervention should be the way cybercrime laws are enforced, where parts of the law enforcement puzzle, such as attribution technologies, are missing. Today, attribution is a field of cybersecurity that suffers from under-investment, as it is in a way without commercial viability; centralized investment is required to build the foundations of attribution into the future digital infrastructure. There are other areas in the cyber world where investment in research and development is in the interest of the public rather than of any single commercial company or government, which calls for joint research across nations. One fascinating area of research could be how to use AI in regulation itself, especially in the enforcement of regulation, understanding that humans' reach into a digital world is too short for effective implementation. Another idea is building accountability into AI, so that we can record decisions taken by algorithms and hold them accountable. Documentation of those decisions should reside in the public domain while maintaining the privacy of the vendors' intellectual property. Blockchain, as a trusted distributed ledger, can be the perfect tool for saving such evidence of truth about decisions taken by machines, evidence that can stand in court. An example project in this field is the Serenata de Amor Operation, a grassroots open source project built to fight corruption in Brazil by analyzing public expenses and looking for anomalies using AI.

Central Design

A significant paradigm shift policymakers need to take into account is the long-term strategic move from centralized systems to distributed technologies, as the latter present far fewer vulnerabilities. A roadmap of centralized systems that should be transformed into distributed ones should eventually be studied and created.

Challenges for Policy Makers

  • Today AI advancement is considered a competitive frontier among countries, which leads to a state where many developments are kept secret. This path leads to loss of control over technologies, and especially over their potential future abuse beyond the original purpose. The competitive phenomenon creates a serious challenge for society as a whole. It is not clear why people treat weapons so much more harshly than advanced information technology, which can eventually cause more harm.
  • Our privacy is abused by market forces pushing for profit optimization, with consumer protection at the bottom of the priority list: conflicting forces at play for policymakers.
  • People across the world differ in many aspects, while AI is a universal language; setting global ethical rules against national preferences creates an inherent conflict.
  • The question of ownership and accountability of algorithms in a world where algorithms can create damage is an open one with many diverse opinions. It gets complicated because the platforms are global while the rules are often local.
  • What alternatives are there, beyond the basic income idea, for the millions who won't be part of the knowledge ecosystem, as it is clear that not every person who loses a job will find a new one? Pre-emptive thinking should be conducted to prevent market turbulence in disrupted industries. An interesting question is how the growth of the planet's population impacts this equation.

The main point I took from today is to be careful when designing AI tools aimed at a specific purpose, considering how they could be exploited to achieve other ends.

UPDATE: Link to my story on the OECD Forum Network.

Accountability – Where AI and Blockchain Intersect

Recently I've been thinking about the intersection of blockchain and AI. Although several exciting directions arise from the convergence of these technologies, I want to explore a specific one: accountability.

One of the hottest discussions about AI is whether to constrain it with regulation and ethics to prevent an apocalyptic future. Without going into whether that is right or wrong, I think blockchain can play a crucial role if such a future direction materializes. There is a particular group of AI applications, mostly involving automated decision making, that can impact life and death; for example, an autonomous driving algorithm can make a decision that eventually ends in an accident and loss of life. In a world where AI is under scrutiny for ethical compliance, accountability will be the most crucial aspect. To create the technological platform for accountability, we need to record decisions taken by algorithms. Documenting those decisions can take place inside the vendor's database or on a trusted distributed ledger. Recording decisions in the vendor's database is the natural path for implementing such a capability, though it suffers from a lack of neutrality, authenticity, and integrity. A decision is a piece of knowledge that should reside in the public domain while maintaining the privacy of the vendor's intellectual property. Blockchain, as a trusted distributed ledger, can be the perfect paradigm for saving such evidence of truth about decisions taken by machines, evidence that can stand in court. The blockchain can be a neutral middleware shared by the auto vendors or a service rendered by the government.
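The core property needed here, an append-only, tamper-evident record of algorithmic decisions, can be sketched without a full blockchain. The toy hash chain below only illustrates the integrity guarantee; the vendor name and decision fields are made up:

```python
import hashlib
import json

class DecisionLedger:
    """Toy append-only log of algorithmic decisions. Each entry embeds the
    hash of the previous one, so any later tampering breaks the chain.
    A real deployment would replace this with a distributed ledger."""

    def __init__(self):
        self.entries = []

    def record(self, vendor: str, decision: dict, timestamp: float) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(
            {"vendor": vendor, "decision": decision,
             "timestamp": timestamp, "prev": prev_hash},
            sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"payload": payload, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash and check the back-links."""
        prev = "0" * 64
        for entry in self.entries:
            if json.loads(entry["payload"])["prev"] != prev:
                return False
            if hashlib.sha256(entry["payload"].encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

ledger = DecisionLedger()
ledger.record("auto_vendor_x", {"action": "brake", "confidence": 0.97}, 1507000000.0)
ledger.record("auto_vendor_x", {"action": "swerve", "confidence": 0.61}, 1507000001.0)
print(ledger.verify())  # an untampered chain verifies
```

Replicating such a chain across mutually distrusting parties (vendors, regulators, courts) is precisely what a blockchain adds on top of this sketch: no single party can rewrite history unilaterally.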

Thoughts on the Russian Intervention in the US Elections. Allegedly.

I got a call last night asking whether I wanted to come on the morning show on TV and talk about Google's recent findings of alleged Russian-sponsored political advertising: advertising that could have impacted the last US election results, joining similar discoveries on Facebook and Twitter, and now Microsoft is also looking for clues. At first I wanted to say, what is there to say about it, but I agreed, as a recent hobby of mine is being a guest on TV shows :)

So this event got me reading about the subject quite a bit, later at night and early this morning, to be well prepared, and the discussion was good, a bit light as expected from a morning show, but informative enough for its viewers. What struck me later, while contemplating the actual findings, is the significant vulnerability uncovered in this incident, the mere exploitation of that weakness by the Russians (allegedly), and the hazardous path technology has taken us down in recent decades while changing human behavior.

The Russian Intervention Theory

To summarize it: there are political forces and citizens in the United States who are worried about the depth of Russian intervention in the elections, and part of that is whether social networks and digital media were exploited via digital advertising, and to what extent. The findings so far show that advertising campaigns costing tens of thousands of dollars were launched via organizations that seem to be tied to the Russians, and these findings span the most prominent social networks and search engines. The public does not yet know the nature of the advertising on each platform, who is behind these adverts, and whether the advertisers cooperated with the people behind Trump's campaign. This lack of information, and especially the unknown nature of the suspicious adverts, leads to many theories, and although my mind is full of crazy ideas, it seems that sharing them will only push the truth further away, so I won't. The nature of the adverts is the most important piece of the puzzle, since based on their content and variation patterns one can deduce whether they played a synergistic role in Trump's campaign and what the thinking behind them was, especially given that the campaigns discovered are strangely uniform, budget-wise, across all the advertising networks. As the story unfolds, we will become wiser.

How To Tackle This Threat

This phenomenon should concern any democracy on the planet whose citizens spend enough time on digital platforms such as Facebook, and there are some ways to improve the situation:


Advertising networks make their money from adverts. The core competence of these companies is knowing who you are and promoting commercial offerings in the most seamless way. Advertisements of a political nature, with no commercial offering behind them, abuse this targeting and delivery mechanism to shape people's mindset; on television, by contrast, such content is controlled. There is no rational reason why digital advertising networks should get a free pass to let anyone broadcast any non-commercial message on their networks without accountability. These networks were not built for brainwashing, and the customers, us, deserve a high level of transparency here, supervised and enforced by the regulator. So if an advert is not of a commercial nature, it should be clearly marked as an advert (adverts often blend so well with the content that even identifying them is a difficult task), along with the source of its funding and a link to the funder's website. If the advertising networks team up to define a self-enforced code of ethics, maybe regulation is not needed. At the moment we, the users, are misled and hurt by the way their service is rendered.


The primary advertising networks (FB, Google, Twitter, Microsoft) have vast machine learning capabilities, and they should employ them to identify anomalies. Assuming regulation will be in place, whether governmental or self-imposed, there will be groups that try to exploit the rules, and here comes the role of technology in identifying deviations from them: automatically identifying the source of funding of a campaign and alerting on anomalies in real time, up to spotting automated strategies such as brute-force A/B testing done by an army of bots. In short, investing in technology to make sure everyone is complying with the house rules. Part of such an effort is opening up the data about advertisers and campaigns for non-commercial products to the public, allowing third-party companies to work on identifying such anomalies and to innovate in parallel with the advertising networks. The same goes for other elements of the networks that can be abused, such as Facebook pages.
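The anomaly-identification idea above can be sketched in a few lines. This is a minimal, illustrative example, not any network's actual system; the data shapes, names, and threshold are assumptions. It flags advertisers whose total spend is a statistical outlier among peers, using the robust modified z-score (median absolute deviation), which resists being skewed by the outliers themselves:

```python
from statistics import median

def flag_outlier_spenders(spend_by_advertiser, threshold=3.5):
    """Flag advertisers whose total ad spend is an outlier among peers.

    Uses the modified z-score: 0.6745 * (x - median) / MAD, a robust
    alternative to the classic z-score for small, skewed samples.
    """
    names = list(spend_by_advertiser)
    values = [spend_by_advertiser[n] for n in names]
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # all advertisers spend identically; nothing stands out
    flagged = []
    for name, value in zip(names, values):
        modified_z = 0.6745 * (value - med) / mad
        if abs(modified_z) > threshold:
            flagged.append(name)
    return flagged
```

A real system would of course look at far richer signals (funding sources, content variation, bot-like A/B patterns), but the principle of comparing each actor against its peer population is the same.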

Last Thoughts on the Incident

  • How come no one identified the adverts in real time during the elections? I would imagine there were complaints about specific ads during the elections, so how come no complaint escalated into deeper research on a specific campaign? Maybe there is too much reliance on the bots that manage the self-service workflow for such advertising tools – the dark side of automation.
  • Looking for digital signs that the Russians coordinated this campaign with the Trump campaign seems far-fetched to me. The whole idea of a parallel campaign is separation; if any synchronization took place, it was probably done verbally, without digital traces.
  • The mapping of the demographic database allegedly created by Cambridge Analytica onto the targeting taxonomy of Facebook, for example, is an extremely powerful tool for A/B testing via microtargeting: a perfect, cost-efficient tool for mind control.
  • Why does everyone assume the Russians are in favor of Trump? No one raises the possibility that maybe the Russians had a different intention, or that perhaps it was not them at all. It reminds me a lot of the fruitless efforts to attribute cyber attacks.

More thoughts on the weaknesses of systems, and what can be done about them, in a future post.

Will Artificial Intelligence Lead to a Metaphorical Reconstruction of The Tower of Babel?

The story of the Tower of Babel (or Babylon) has always fascinated me: God felt seriously threatened by humans if, and only if, they would all speak the same language. To prevent that, God confused the words spoken by the people on the tower and scattered them across the earth. Regardless of one's religious beliefs about whether it happened, the underlying theory, that power grows when humans interconnect, is intriguing, and we live in times when this truth is evident. Writing, print, the Internet, email, messaging, globalization, and social networks all connect humans; connections that dramatically increase humanity's competence on many different frontiers. The development of science and technology can be attributed to communications among people, as Isaac Newton once said: "standing on the shoulders of giants." Still, our spoken languages differ, and although English has become a de facto language for doing business in many parts of the world, there are many languages across the globe, and the communications barrier is still there. History has also seen multiple efforts to create a unified language, such as Esperanto, which eventually did not work. Getting everyone to speak the same language seems almost impossible, as language is taught at a very early age, so changing that requires a level of synchronization, cooperation, and motivation which does not exist. Even taking into account the recent, highly impressive developments in computer natural language processing, which achieve real-time translation, the presence of the medium will always interfere: a channel in the middle creates conversion overhead and a loss of context and meaning.

Artificial intelligence may be on a path to change that, reverting the story of the Tower of Babel.

Different emerging fields in AI have the potential to merge into a platform for communicating with others without going through the process of lingual expression and recognition:

Avatar to Avatar

One direction it may happen is that our avatar, our residual digital image in some cloud, will be able to communicate with other avatars in a unified, language-agnostic way. Google, Facebook, and Amazon today build complex profiling technologies aimed at understanding user needs, wishes, and intentions. Currently, they do that to optimize their services. Adding to these capabilities the means to express intentions and desires on one side, and to understand them on the other, can lead to an avatar-to-avatar communications paradigm. It will take a long time until these avatars reflect our true selves in real time, but many communications can take place even before then. As an example, let's say my avatar knows what I want for my birthday, and my birthday is coming soon. My friend's avatar can ask my avatar at any point what I want to get for my birthday, and my avatar can respond in a very relevant manner.
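The birthday example above can be reduced to a toy sketch. Everything here is illustrative: real profiling systems infer preferences from behavior rather than being told, and the intent name "birthday_gift" is made up for the example. The point is that two avatars exchange meaning through a shared vocabulary of intents, with no natural language in the loop:

```python
from dataclasses import dataclass, field

@dataclass
class Avatar:
    """A toy stand-in for a cloud-hosted digital avatar."""
    owner: str
    preferences: dict = field(default_factory=dict)

    def learn(self, intent: str, value: str) -> None:
        # In practice this would be inferred from behavior, not set directly.
        self.preferences[intent] = value

    def query(self, other: "Avatar", intent: str) -> str:
        # Avatar-to-avatar communication: no spoken language involved,
        # only a unified, language-agnostic set of intents.
        return other.preferences.get(intent, "unknown")

mine = Avatar("me")
mine.learn("birthday_gift", "noise-cancelling headphones")
friends_avatar = Avatar("my friend")
answer = friends_avatar.query(mine, "birthday_gift")
```

The interesting design question hides in the shared intent vocabulary: the two avatars only understand each other because "birthday_gift" means the same thing to both, which is exactly the agnostic common language the post is about.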

Direct Connection

The second path is in line with the direction of Elon Musk's Neuralink concept or Facebook's brain-integration idea. Here, brain-to-world connectors will be able not only to output our thoughts to the external world in a digital way but also to understand each other's thoughts and transcode them back into our brains: brain-to-world-to-brain. One caveat in this direction is the assumption that our brains are structured in an agnostic manner, based on abstract, transferable concepts; if each brain's wiring is subjective to the individual's constructs of understanding, digesting others' thoughts will be impossible.

Final Thought

A big difference between today and the times of Babylon is the size of the population, which makes the potential of such wiring explosive.

Softbank eating the world

Softbank acquired Boston Dynamics, the four-legged robot maker, alongside the secretive Schaft, a two-legged robot maker. Softbank, the perpetual acquirer of emerging leaders, has made a foray into artificial life, diluting its stakes in media and communications and setting a stronghold across the full supply chain of artificial life. It starts with chipsets (ARM), though they divested a quarter of that holding since Google (TPU) and others have shown that specialized processors for artificial life are no longer the stronghold of giants such as Intel. The next move was acquiring a significant stake in Nvidia. Nvidia is the leader in general-purpose AI processing, but more interesting for Softbank are its themed vertical endeavors, such as the package for autonomous driving. These moves set a firm stance at the two ends of the supply chain: the processors and the final products. It lays down a perfect position for creating a Tesla-like company (through holdings) to own the newly emerging segment of artificial creatures. It remains to be seen what the initial market for these creatures will be, whether consumer or defense; their position in the chipset domain will allow them to make money either way. The big question is what the next big acquisition target in AI would be. It has to be a significant anchor in the supply chain, right between the chipsets and the final products, and such an acquisition will reveal the ultimate intention towards which artificial creatures we will see first coming into reality. A specialized communications infrastructure for communicating with the creatures efficiently (maybe their satellite activity?) and some cloud processing framework would make sense.

P.S. The shift from media into AI is a good hint as to which market has already matured and which one is emerging.

P.S. What does this say about Alphabet, the fact they sold Boston Dynamics?

P.S. I am curious to see what their stance towards patents will be in the world of AI.

Random Thoughts About Mary Meeker’s Internet Trends 2017 Presentation

Random thoughts regarding Mary Meeker's Internet Trends 2017 report:

Slide #5

The main question that popped into my mind was: where are the rest of the people? Today there are 3.4B internet users, while the world has a population of 7.5B. It could be interesting to see who the other, non-digital 4 billion humans are: interesting for understanding the growth potential of the internet user base (by the difficulty of penetrating the remaining segments) and for identifying unique social patterns in general. Understanding the social demographics of the 3.4B connected ones can be valuable, and a baseline for understanding the rest of the statistics in the presentation.

Another interesting fact is that global smartphone shipments grew by 3% while the smartphone installed base grew by 12%; that gap represents the slowdown in worldwide smartphone market growth and can be used as a predictor for the coming years.

Slide #7

Interesting to see that iOS market share in the smartphone world follows patterns similar to the Mac's in the PC world. In the smartphone world, Apple's market share is a bit higher than its PC market share, but it carries similar proportions.

Slide #13

The way ad spend fills the gap against time spent in media, over time, nicely follows the physical law of conservation of mass: print out, mobile in.

Slide #17

Measuring advertising ROI is still a challenge, even when advertising channels have become fully digital – a symptom of the offline/online divide in conversion tracking, which has not been bridged yet.

Slide #18

It seems there is a connection between the massive popularity of ad blockers on mobile and the advertising potential on mobile. If so, the suggested potential cannot be fulfilled, due to the existence of ad blockers and users' level of tolerance on mobile, which is perhaps the reason ad blockers are so popular on mobile in the first place.

Slide #25

99% tracking accuracy is phenomenal, though the question is whether it can scale as a business model: will a big enough audience opt in for such tracking, and what will be done about the battery drain resulting from such monitoring? This hyper-monitoring, if achieved on a global scale, will spark an exciting privacy and regulation debate.

Slide #47

Amazon Echo numbers are still small, regardless of the hype level.

It could be fascinating to see the level of usage of skills. The number of skills is impressive but maybe misleading (many see a resemblance to the hyper-growth in apps). The growth in the apps world was not only in the number of apps created but also in the explosive growth in usage (downloads, purchases); here we see only the inventory.

Slide #48

This slide shows a turning point in user interfaces, one that will be reflected in many areas, not only in the home assistants market.

Slide #81

2.4B gamers?!? The fine print says you need to play a game at least once every three months, which is not a gamer by my definition.

Slide #181

Do these numbers include shadow IT in the cloud, or do they reflect concrete usage of cloud resources by the enterprise? There is a big difference between an organization deploying data-center workloads into the cloud and using a product partially hosted in the cloud behind the scenes, such as Salesforce: a different state of mind in terms of overcoming cloud inhibitions.

Slide #183

The reduction in concerns about data security in the cloud is a good sign of maturity and adoption. The cloud can be as secure as any data-center application, and even more so, though many are afraid of that uncertainty.

Slide #190

The reasons cloud applications are categorized as not enterprise-ready are not necessarily their security weaknesses. The adoption of cloud products inside the enterprise follows other paths, such as integration with other systems and customization to fit the specific industry.

Slide #191

The reason for the weaponization of spam is simply the higher revenue potential for spam botnet operators: sending direct spam can earn you money; sending malware can earn you much more.

Slide #347

Remarkable to see that the founders of the largest tech companies are second- and third-generation immigrants.

That’s all for now.

The Not So Peculiar Case of A Diamond in The Rough

IBM's stock was hit severely in recent months, mostly due to disappointment over the latest earnings report. It wasn't a real disappointment, but IBM had built up expectations around its ongoing turnaround, and the recent earnings announcement poured cold water on the growing enthusiasm. This post is about IBM's story, but it carries a moral which applies to many other companies going through disruption in their industry.

IBM is an enormous business with many product lines, intellectual property reserves, large customer and partner ecosystems, and a big pile of cash reserves. IBM has been disrupted in the recent decade by various megatrends, including cloud, mobile computing, software as a service, and others. IBM started a turnaround which became visible to the investor community at the beginning of 2016: a significant change executed quite efficiently across different product lines. This disruption found many other tech companies unprepared as well; a classic tech disruption, where new entrants need to focus only on next-generation products while established players play catch-up. It is an unfair situation, where the big players carry the burden of what was, not so long ago, fresh and innovative. IBM's turnaround was about refocusing on cognitive computing, a.k.a. AI. Although the turnaround is being executed very professionally, the shackles of the past prevent them from pleasing the impatient investor community.

Can Every Business Turn Around?

A turnaround, or a pivot as it is coined in the startup world, means changing the business plan of an existing enterprise towards a new market or audience, requiring a different set of skills, products, and technologies. Pivoting in the startup world is a private case of a general business turnaround. In a nutshell, every business at any point in time owns a particular set of offerings (products/technologies) and cash reserves. Each offering has customers, prospects, partners, and the costs incurred in creating and delivering it to the market. In an industry that is not being disrupted, the equation of success is quite simple: the money you make on sales of your offerings should be higher than the attached costs. In the early phases of new market creation, it makes sense to wait for that equation to come into play by investing more cash in building the right product and establishing excellent access to the market. Disruption is first spotted when it becomes hard to grow at the same or a higher rate and a fundamental change to the offerings is needed, such as a full rebuild. This situation happens when new entrants and startups have an economic advantage in entering the market or in creating a new, overlapping market. When a market is in its early days of disruption, the large enterprises mostly watch and hope for the latest trends to fade away. Once the winds of change blow too strong, new thinking is required.

A Disruption is Happening – Now What

Once the changes in the market ring the alarm bell on the top floors, management can take one or more of the following courses of action:

  • Buy into the trend by acquiring technologies, products, teams, or early market footprints. The challenges in this course are efficiently absorbing the acquired assets and adapting the existing operations towards a new direction based on the newly acquired capabilities.
  • Create a new line of products and technologies in-house from scratch, realigning existing operations into a dual mode of operation – maintaining the old while building the new. The dual offerings co-exist until a successful internal transfer of leadership to the new product lines takes place.
  • Build or invest in a new external entity set up to create a future offering in a detached manner. The ultimate, contradictory goal of the new business is to eventually cannibalize the existing product lines on its way to market leadership: a controlled competitor.

Each path creates a multitude of opportunities and challenges. Eventually, a game plan should be devised based on the particular posture of the company and the target market's supply chain.

Contemplating a Turnaround

From a bird's-eye view, all forms of turnaround share common patterns. Every turnaround has costs: direct costs of the investment in new products and technologies, and indirect costs created by the organizational transformation. These expenses are incurred on top of keeping the existing business lines healthy and growing, and they are allocated from cash reserves or from new capital raised from investors. Either way, it is a limited pool of money, which requires a well-balanced yet aggressive plan with almost no room for mistakes. Any mistake will hurt either the innovation efforts or the margins of the current business lines, and for public companies neither is forgivable. Time is also critical here, and fast execution is vital. If mistakes happen, the path can turn into a slippery slope very quickly.

Besides the financial challenges of running a successful turnaround, there are many psychological, emotional, and organizational issues hanging in the air. First and foremost is the feeling of loss around sunk costs. Usually, before a turnaround is grasped, there are many efforts to revive existing business lines with different investment options, such as linear evolution of products, reorganizations, rebranding, and new partnerships. These cost a lot of money, and until the understanding that it is not going to work finally sinks in, the burden of sunk costs grows very fast. The second big issue is the impact of a turnaround on the organizational chart. People tend not to like change, and turnarounds are no exception. The top management is hyper-motivated thanks to the optimistic change consultants, but the employees who make up the hierarchies do not necessarily see the full picture, nor care about it. It goes down to every single individual who is part of the change: their thoughts about the impact on their career, as well as their likings and aspirations. Spreading the move across the organization is a kind of black magic, and the people who know how to do it are very rare. The key to a successful organizational change is to have change agents coming from within, rather than letting the change be driven by consultants, who are often perceived as overnight guests. The third strategic concern is the underlying fear of cannibalization. Often, the successful path of a turnaround means death to existing business lines, and getting support for that across the board is somewhat problematic.

Should IBM Divest?

A tough question for an outsider like me, and I guess pretty challenging even for an insider. My point of view is that IBM has reached a firm stance in AI, one that is becoming harder to maintain over time. AI has, in magnitude, more potential than the rest of the business, and these unique assets should be freed from the burden of the other lines of business. IBM should maintain the strategic connections to the other divisions, as they are probably the best distribution channels for those cognitive capabilities.

The Private Case of Startup Pivots

A pivot in a startup is tricky and risky. First, there is the psychological barrier of admitting that the direction is wrong; contradicting the general atmosphere of boundless startup optimism is a challenge. On top of that, there will always be enough naysayers complaining that there is not sufficient proof the startup is indeed heading in the wrong direction, not to mention the disbelievers who will demand evidence before committing to the new direction. It is quite tricky to rationalize plans when decision-making is full of intuition and has minimal history to lean on. Due to the limited history of many startups and their dependence on cash infusions, a pivot, even if justified, is many times a killer. There aren't too many people in general who have the mental flexibility for a pivot, and you need everyone in the startup on board. The very few pivots I saw succeed did so thanks to incredible leadership, which made everyone follow it half blindly: a leap of faith.

Food for thought: how come we rarely see disruptors buying established, disrupted players to gain a fast market footprint?

Artificial Intelligence Is Going to Kill Patents

The patent system never got along well with software inventions. Software is too fluid for a patenting system built a long time ago for creations with physical aspects; the materialist point of view perceives software as a big pile of electronically powered bits organized in some manner. In recent years, the patenting system was bent to cope with software by adding artificial clauses to patent applications, linking them to physical computing components such as storage or a CPU, so the patent office could approve them. But that is just a patch, not evolution.

The Age of Algorithms

Nowadays, AI has become the leading innovation frontier, and the world of intellectual property is about to be disrupted; let me elaborate. Artificial intelligence, although a big buzzword, when it comes down to details means algorithms. Algorithms are probably the most complicated form of software. An algorithm is composed of base structures and functions dictated by its genre, such as neural networks, but it also includes a data component: whether it is the training data or the accumulated knowledge, the data eventually becomes part of the logic, a functional extension of the basic algorithm. That makes AI, in its final form, an even less comprehensible piece of software; many times it is difficult to explain how a live algorithm works, even for its own developers. So, technically speaking, patenting an algorithm is in magnitude more complicated. As a side effect of this complexity, there is a problem with the desire to publish an algorithm in the form of a patent. An algorithm is like a secret sauce, and no one wants to reveal their secret sauce to the public, since others can copy it quite easily without worrying about litigation. For the sake of example, let's assume someone copies the personalization algorithm of Facebook. Since that algorithm works secretly behind the scenes, it will be difficult, up to impossible, to prove that someone copied it. The observed results of an algorithm can be achieved in many different ways, and we are exposed only to the results of an algorithm, not to its implementation. The same goes for the concept of prior art: how can someone prove that no one has implemented that algorithm before?
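The point that observed results reveal nothing about implementation can be made concrete with a toy sketch (the "personalization" weights and field names here are invented for the example, not any real system's). Two routines rank the same feed identically while sharing no code path, so observing outputs alone could never prove one was copied from the other:

```python
def rank_by_sort(posts):
    """Rank posts by a weighted score, using a plain sort."""
    scored = [(0.7 * p["likes"] + 0.3 * p["recency"], p["id"]) for p in posts]
    scored.sort(reverse=True)  # highest score first
    return [post_id for _, post_id in scored]

def rank_by_selection(posts):
    """Produce the identical ranking by repeated selection of the maximum."""
    remaining = list(posts)
    result = []
    while remaining:
        best = max(remaining,
                   key=lambda p: (0.7 * p["likes"] + 0.3 * p["recency"], p["id"]))
        result.append(best["id"])
        remaining.remove(best)
    return result

feed = [
    {"id": "a", "likes": 10, "recency": 5},
    {"id": "b", "likes": 2, "recency": 9},
    {"id": "c", "likes": 7, "recency": 1},
]
```

Both functions return the same ordering for any feed, yet a court comparing only their outputs could not tell whether the second was derived from the first or written independently; that is exactly the evidentiary problem described above.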

To summarize: algorithms are inherently tricky to patent, and no one wants to expose them via the patenting system, as they are indefensible there. So if we are heading into a future where most of the innovation will be in algorithms, then the value of patents will diminish dramatically, as fewer patents will be created. I believe we are going into a highly proprietary world, where the race will be driven not by ownership of intellectual property but by the ability to create competitive intellectual property that works.

Some Of These Rules Can Be Bent, Others Can Be Broken

Cryptography is a serious topic: a technology based on a mathematical foundation, posing an ever-growing challenge for attackers. On November 11th, 2016, Motherboard wrote a piece about the FBI's ability to break into suspects' locked phones. Contrary to the FBI's constant complaints about going dark in the face of strong encryption, the actual number of phones they were able to break into was relatively high. The high success ratio of penetrating locked phones in some way doesn't make sense; it is not clear what was so special about the devices they failed to break into. Logically, similar phone models have the same crypto algorithms, and if there was a way to break into one phone, how come they could not break into all of them? Maybe the FBI has found an easier path to the locked phones than breaking encryption. Possibly they crafted a piece of code that exploits a vulnerability in the phone OS, maybe a zero-day vulnerability known only to them. Locked smartphones keep some parts of the operating system active even when they are merely turned on, and illegal access, in the form of exploitation of those active areas, can circumvent the encryption altogether. I don't know what happened there, and it is all just speculation, though this story provides a glimpse into the other side, the attacker's point of view, and that is the topic of this post: what an easy life attackers have, since they are not bound by the rules of the system they want to break into and need to find only one unwatched hole. Defenders, who carry the burden of protecting the whole system, need to make sure every potential hole is covered while staying bound to the system's rules; an asymmetric burden that results in an unfair advantage for attackers.

The Path of Least Resistance

If attackers had an ideology and laws, the governing one would be "Walk the Path of Least Resistance"; it is reflected over and over again in their mentality and method of operation.

Wikipedia's explanation fits the hacker's state of mind perfectly:

The path of least resistance is the physical or metaphorical pathway that provides the least resistance to forward motion by a given object or entity, among a set of alternative paths.

In the cyber world there are two dominant roles, the defender and the attacker, and both deal with exactly the same subject matter: the mere existence of an attack on a specific target. I used to think that the views of the two sides would be exact opposites of each other, since the subject matter, the attack, is the same and their interests are reversely aligned, but that is not the case. For the sake of argument, I will dive into the domain of enterprise security, though the logic serves as a general principle applicable to other security domains. In the enterprise world, the security department, the defender, roughly does two things. First, they need to know very well the architecture and the assets of the system they should protect: its structure, its interconnections with other systems, and its connections to the external world. Second, they need to devise defense mechanisms and strategies that, on one hand, allow the system to continue functioning while, on the other hand, eliminating possible entry points and paths that attackers could abuse on their way in. As a side note, achieving this fine balance resembles the mathematical branch of constraint satisfaction problems. Now let's switch to the other point of view, the attacker's. The attacker needs only to find a single path into the enterprise in order to achieve their goal. No one knows the attacker's actual goal, but it probably fits one of the following categories: theft, extortion, disruption, or espionage, and within each category the goals are very specific. So the attacker is laser-focused on a specific target, and the learning curve required for building an attack is limited and bounded to that specific interest. For example, the attacker does not need to care about the overall data-center network layout if they only want the information about employee salaries, a document which probably resides in the headquarters office.
Another big factor in favor of attackers is that some of the possible paths towards the target include the human factor. And humans, as we all know, have flaws, vulnerabilities if you like, and from the attacker's standpoint these weaknesses are proper means for achieving the goal. From all the paths an attacker can theoretically select from, the ones with the highest success ratio and minimal effort are preferable; hence, the path of least resistance.

The Most Favorite Path in The Enterprise World

Today the most popular path of least resistance is to infiltrate the enterprise by exploiting human weaknesses, usually through minimal online trust-building, where the target employee is eventually led to activate a malicious piece of code, for example by opening an email attachment. The software stack employees have on their computers is quite standard in most organizations: mostly MS-Windows operating systems, the same document processing applications, and the highly popular web browsers. This stack is easily replicated in the attacker's environment and used to find potential points of infiltration in the form of unpatched vulnerabilities. The easiest way to find a target vulnerability is to review the most recent vulnerabilities uncovered by others and reported as CVEs. There is a window of opportunity for attackers between the time the public is made aware of a new vulnerability and the time an organization actually patches the vulnerable software. Some statistics say that within many organizations this window of opportunity can stretch into months, as rolling out patches across an enterprise is painful and slow. Attackers who really want to fly below the radar and reach high success ratios search for zero-day vulnerabilities, or just buy them somewhere. Finding a zero-day is possible, as software has become overly complex, with many different technologies embedded in products, which ultimately increases the chances that vulnerabilities exist; the patient and persistent attacker will always find their zero-day. Once an attacker acquires that special exploit code, the easier part of the attack path comes into play: finding a person in the organization who will open the malicious document. This method of operation is in magnitude easier than learning the organization's internal structures in detail and finding vulnerabilities in proprietary systems, such as routers and server applications, where access to the technology is not straightforward. In the recent WannaCry attack, we witnessed an even easier path into an organization: a weakness in enterprise computers with an open network vulnerability that can be exploited from the outside, without human intervention.
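The window of opportunity is easy to quantify once an organization tracks, per host, when a vulnerability was publicly disclosed and when the patch was actually rolled out. A small sketch (the record format, dates, and host names are hypothetical, not from any real tool):

```python
from datetime import date

def exposure_window_days(cve_published: date, patch_deployed: date) -> int:
    """Days a host stayed exposed between public disclosure of a
    vulnerability and the patch actually being rolled out."""
    return max(0, (patch_deployed - cve_published).days)

def riskiest_hosts(records):
    """Order hosts by their longest exposure window, worst first.

    `records` is a list of (host, cve_published, patch_deployed) tuples.
    """
    worst = {}
    for host, published, patched in records:
        window = exposure_window_days(published, patched)
        worst[host] = max(worst.get(host, 0), window)
    return sorted(worst, key=worst.get, reverse=True)

# Hypothetical example: one host patched within a week, one after months.
report = riskiest_hosts([
    ("db01", date(2017, 3, 14), date(2017, 6, 27)),
    ("web01", date(2017, 3, 14), date(2017, 3, 20)),
])
```

Even this crude metric makes the asymmetry visible: a host whose window stretches into months is, for the attacker, a standing invitation, regardless of how sophisticated the rest of the defenses are.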

Going back to the case of the locked phones, it is far easier to find a vulnerability in the code of the operating system running on the phone than to break the cryptography and decrypt the encrypted information.

We Are All Vulnerable

Human vulnerabilities extend beyond interpersonal weaknesses such as deceiving someone into opening a malicious attachment. They also exist in the products we design and build, especially in hardware and software whose complexity has surpassed human comprehension. Human weakness also shows up in the misconfiguration of systems, one of the easiest and most favored paths for cyber attackers. The world of insider threats, too, is often based on human weaknesses exploited and extorted by adversaries. Attackers have found their golden path of least resistance, and it always lies at the boundaries of human imperfection.

The only way for defenders to handle such inherent weaknesses is to break the path of least resistance into parts and make the easier parts more difficult. That would force a shift in attackers' method of operation and send them searching for other easy ways in, which will hopefully become harder overall over time.

Deep into the Rabbit Hole

Infiltrating an organization by inducing an employee to activate malicious code rests on two core weaknesses: the human factor, which is quite easy to exploit, and the ease of finding a technical vulnerability in the software the employee uses, as described earlier. There are multiple defense approaches addressing the human factor, mostly revolving around training and education, where improvement is linear and slow. Addressing the second, technical weakness is today one of the main lines of business in cybersecurity: endpoint protection, and more precisely, preventing infiltration.

Tackling The Ease of Finding a Vulnerability

Vulnerability disclosure practices, which serve as the basis for many attacks in the window of opportunity, have been scrutinized for many years, and there is real progress toward a fine balance between awareness and risk aversion. Still, we are not there yet, since there is no bulletproof way to isolate attackers from this public knowledge. Advanced threat-intelligence collaboration tools may evolve in that direction, though it is too early to say. It is a tricky matter to solve, as it is everybody's general problem and at the same time nobody's specific problem. The second challenge is that if a vulnerability exists in application X, and there is malicious code that can exploit it, then that code will work anywhere application X is installed.
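The window of opportunity described above can be sketched as a simple calculation. This is a minimal illustration, not a real vulnerability-management tool, and the dates used are hypothetical:

```python
from datetime import date
from typing import Optional

def exposure_window_days(disclosed: date, patched: Optional[date], today: date) -> int:
    """Days a host stays exploitable after a vulnerability becomes public.

    If no patch has been rolled out yet, the window is still open today.
    """
    end = patched if patched is not None else today
    return max((end - disclosed).days, 0)

# Hypothetical CVE disclosed in January and patched only in April:
window = exposure_window_days(date(2017, 1, 10), date(2017, 4, 2), date(2017, 5, 1))
print(window)  # 82 days of exposure -- months, as the statistics suggest
```

The point of the sketch is that the attacker controls only the start of the window; the defender's patch rollout speed controls its length.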

Different Proactive Defense Approaches

There are multiple general approaches towards preventing such an attack from taking place:

Looking for Something

This is the original paradigm of antivirus products that search for known digital signatures of malicious code in data. This inspection takes place when data flows across the network, when it moves around in memory on the computing device, and at rest when it is persisted as a file (unless it is a fully in-memory attack). Due to attackers' sophistication with malicious-code obfuscation and polymorphism, where infinite variations of digital signatures of the same malicious code can be created, this approach has become less effective. The signature approach remains highly effective against old threats spreading across the Internet and viruses written by novice attackers. In the layered defense thesis, signatures are the lowest defense line and serve as an initial filter for the noise.
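The core mechanic, and its weakness against polymorphism, can be shown in a few lines. This is a toy sketch: real products use far richer signatures than a plain hash, and the "payload" here is an arbitrary illustrative byte string:

```python
import hashlib

# Toy signature registry: SHA-256 digests of known-malicious payloads.
KNOWN_BAD = {
    hashlib.sha256(b"demo malicious payload").hexdigest(),
}

def is_known_malicious(blob: bytes) -> bool:
    """Exact-match lookup: fast and precise, but any single byte changed
    by obfuscation or polymorphism yields a new digest and a miss."""
    return hashlib.sha256(blob).hexdigest() in KNOWN_BAD

print(is_known_malicious(b"demo malicious payload"))   # True
print(is_known_malicious(b"demo malicious payload!"))  # False: one byte differs
```

The second lookup is exactly the polymorphism problem: a trivially mutated copy of the same malicious logic no longer matches any registered signature.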

Looking at Something

Here, instead of looking for the digital fingerprint of a specific virus, the search is for behavioral patterns of the malicious code: for example, the unique sequence of system APIs accessed, the functions called, and the execution frequencies of different parts of the virus's code.
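A minimal sketch of behavioral matching follows. The API-call sequence below is a well-known process-injection pattern used purely for illustration; real behavioral engines weigh many signals, not one hard-coded sequence:

```python
from typing import List

# A behavioral "signature": an ordered subsequence of system APIs that,
# taken together, is suggestive of process injection (illustrative only).
SUSPICIOUS_SEQUENCE = [
    "OpenProcess", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread",
]

def matches_behavior(trace: List[str], pattern: List[str]) -> bool:
    """True if `pattern` appears in `trace` as an ordered subsequence,
    even with unrelated calls interleaved between the steps."""
    it = iter(trace)
    return all(call in it for call in pattern)

trace = ["NtQuerySystemInformation", "OpenProcess", "ReadFile",
         "VirtualAllocEx", "WriteProcessMemory", "Sleep", "CreateRemoteThread"]
print(matches_behavior(trace, SUSPICIOUS_SEQUENCE))  # True: pattern present
```

Unlike a hash, this survives byte-level mutation of the payload; the next paragraphs explain why it still fails against behavior-level mutation.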

This category, invented quite a long time ago, is enjoying a renaissance thanks to the advanced pattern-recognition capabilities of artificial intelligence. The downside of AI in this context is inherent in the way AI works: fuzziness. Fuzzy detection leads to false alarms, a phenomenon that compounds the already growing shortage of analysts needed to decide which alarms are real. The false-alarm rates I hear about today are still the majority of alerts, in the high double digits; some vendors work around this by providing full SIEM management behind the scenes, including filtering false alarms manually.

Another weakness of this approach is that attackers have evolved to mutate the behavior of an attack: creating variations on the virus's logic that keep the end result the same while going unnoticed by the pattern-recognition mechanism. There is a field called adversarial AI that covers this line of thinking. The most serious drawback, though, is that these mechanisms are blind to in-memory malicious activity: an inherent blindness to the large chunk of exploitation logic that is, and will always stay, in memory. Attackers have identified this sweet spot and abuse it with fileless attacks and the like.

This analysis reflects the current state of AI as integrated and commercialized in cybersecurity for endpoint threat detection. AI has made major advances recently that have not yet been applied in this domain, developments that could have a very different impact.


There is a rising concept in cybersecurity that aims to tackle the ease of learning the target environment and creating exploits that work on any similar system. The concept is called moving target defense, and its premise is that if the inner workings of a system are known only to the system's legitimate users, any attack attempt by outsiders will be thwarted. It is essentially an encapsulation concept, similar to object-oriented programming, where external code cannot access the inner functionality of a module without permission. In cybersecurity the implementation differs by technical domain, but the same information-hiding theory holds. This emerging category is highly promising for changing the cyber power balance by taking attackers off the current path of least resistance.

Moving target defense innovation exists in different domains of cybersecurity. In endpoint protection, it strikes at the heart of attackers' assumption that the internal structures of applications and the OS stay the same, so their exploit code will work perfectly on the target. The concept is quite simple to understand (though very challenging to implement): continuously move and change the internal structures of the system so that legitimate code keeps functioning as designed, while malicious code that relies on assumptions about the internal structure fails immediately. This defense paradigm seems highly durable, as it is agnostic to the type of attack.
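The "legitimate code keeps working, exploit assumptions break" idea can be sketched with a toy dispatch table whose layout is reshuffled per instance, loosely analogous to what ASLR does for memory addresses. Everything here is illustrative; real moving target defenses operate at the level of memory layout, not Python lists:

```python
import random
from typing import Callable, Dict

class MovingTargetTable:
    """Toy moving-target defense: handler slots are shuffled per instance,
    so code that hard-codes a slot position breaks, while name-based
    (legitimate) lookups keep working."""

    def __init__(self, handlers: Dict[str, Callable]):
        self._names = list(handlers)
        random.shuffle(self._names)                 # layout differs every run
        self._slots = [handlers[n] for n in self._names]

    def call(self, name: str, *args):
        # Legitimate callers resolve through this indirection.
        return self._slots[self._names.index(name)](*args)

table = MovingTargetTable({"read": lambda: "data", "write": lambda: "ok"})
print(table.call("read"))   # legitimate lookup by name still works: "data"
# An "exploit" hard-coding slot 0 (table._slots[0]) gets whichever handler
# landed there this run -- its assumption about the layout fails.
```

The design choice mirrors the paragraph above: the indirection is the permission boundary, and the shuffle is the continuous movement.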


The focus of the security industry should be on devising mechanisms that make the current popular path of least resistance not worthwhile for attackers, letting them waste time and energy searching for a new one.

Searching Under the Flashlight of the Recent WannaCry Attack

Random thoughts about WannaCry


The propagation of the WannaCry attack was massive, mostly because it infected computers via SMB1, an old Windows file-sharing network protocol. Some security experts complained that ransomware has already been massive for two years and this event is just one big hype wave, but I think there is a difference here: the magnitude of propagation. There is a big difference between attack distribution that relies solely on people unintentionally clicking a malicious link or document and this attack's propagation pattern. As far as I remember, this is the first attack that propagated both across the Internet and inside organizations using the same single vulnerability. A very efficient propagation scheme, apparently.


The attack revealed the explosive number of computers globally that are outdated and unpatched. Some are outdated because patches did not exist; for example, Windows XP no longer has active update support. The rest of the victims were not up to date with the latest patches because, truth be told, it is highly cumbersome to keep computers constantly up to date. Keeping everything patched in an organization eventually reduces productivity, as there are many disruptions to work; for instance, many applications running on an old system stop working when the underlying operating system is updated. I heard of a large organization that was hurt deeply by the attack, and not because the ransomware hit them: they had to stop working for a full day across the organization because the security updates delivered by the IT department ironically made all the computers unusable.

Another thing to take into account is the magnitude of a vulnerability, which correlates tightly with its prevalence and the ease of accessing it. The EternalBlue vulnerability has massive magnitude, as it is apparently highly prevalent. It is the first time, I think, that an exploit for a vulnerability feels like a weapon. Maybe it is time to create a dynamic risk ranking for vulnerabilities beyond the rigid CVE classification. Vulnerabilities are by definition software bugs, and there are different classes of software: operating systems, and within them drivers, the kernel, and user-mode processes; within the kernel itself there are different areas such as the networking stack, display drivers, interprocess mechanisms, and so on. Beyond operating systems there are user applications, as well as user services, pieces of software that provide back-end services to user applications. A vulnerability can reside in any of these areas, and fixing it, or protecting against its exploitation, differs enormously in complexity. Kernel vulnerabilities, for example, are the hardest to fix compared with vulnerabilities in user applications; correspondingly, their impact once exploited is measurably more severe in terms of what an attacker can do post-exploitation, due to the level of freedom that software class allows.
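A dynamic ranking of the kind suggested above might look something like the sketch below. The formula, weights, and scores are entirely hypothetical, invented here to illustrate the idea of scaling a static severity score by prevalence and reachability:

```python
def dynamic_risk(base_cvss: float, prevalence: float, ease_of_access: float) -> float:
    """Hypothetical dynamic ranking: scale a static base severity score (0-10)
    by how widespread the vulnerable component is (0-1) and how reachable the
    vulnerability is, e.g. remote-unauthenticated vs. local-only (0-1)."""
    return round(base_cvss * (0.5 + 0.5 * prevalence) * (0.5 + 0.5 * ease_of_access), 2)

# An EternalBlue-like profile: severe, ubiquitous, remotely reachable.
print(dynamic_risk(8.0, 1.0, 1.0))  # 8.0 -- full magnitude
# The same base severity in a rare, local-only component ranks much lower:
print(dynamic_risk(8.0, 0.0, 0.0))  # 2.0
```

The contrast between the two calls is the point: identical bug severity, very different real-world magnitude.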

The massive impact of WannaCry was not due to the sophistication of its ransomware component; it was due to the SMB1 vulnerability, which turned out to be highly prevalent. Actually, the ransomware itself was quite naive in the way it operated. The funny turn of events was that many advanced defense products did not catch the attack, because they assume some level of sophistication, while plain signature-based antiviruses, which search for digital signatures, were quite efficient. This case reinforces the layered defense thesis: signatures are here to stay and should be layered with more advanced defense tools.

As for the sheer luck we had with this naive ransomware: just imagine what would have happened if the payload had been at least as sophisticated as other advanced attacks we see nowadays. It could have been devastating, and unfortunately we are not out of danger yet; it can still happen. This attack was a lesson not only for defenders but also for attackers.


Very quickly, law enforcement authorities identified the Bitcoin accounts used to collect the ransom and started watching for anyone withdrawing the money. The amount collected was quite low even though the distribution was massive; some attribute this to the amateurish ransomware backend, which, as I read, in some cases would not even decrypt the files after payment.

The successful distribution did something the attackers did not take into account: it gave the campaign high visibility. It is quite obvious that such a mass-scale attack would wake up all law enforcement authorities to follow the money, which makes withdrawing it impossible.

Final Thoughts

Something about this attack does not make sense. On one hand, the distribution was highly successful, at a magnitude not seen before for such attacks; at the same time, the payload, i.e., the ransomware, was naive, the monetization scheme was not planned properly, and even the backend for collecting money and decrypting user files was unstable. So either it was a demonstration of power rather than a real ransomware campaign, like launching a ballistic missile toward the ocean, or just a genuinely amateur attacker.

Another thought: I do not yet have a solid recommendation on how to be better prepared for next time. There are a multitude of open vulnerabilities out there, some with patches available and some without, and even if you patch like crazy there is still no full guarantee. Of course, my baseline recommendation stands: use advanced prevention security products and patch automatically.

The final thought is that a discussion should begin about regulatory intervention in the private sector's level of protection. I can really see the effectiveness of mandatory security provisions required of organizations, similar to what is done in the accounting world, much like getting vaccinated. The private sector, and especially small and medium-size businesses, is currently highly vulnerable.

A Cyber Visit to London


I had a super interesting visit to London for two cyber-related events. The first was a meeting of the CDA, a new collaboration effort among the top European banks, headed by Barclays' Global CISO and the CDA itself. The Israel Founders Group assembled top experts from the world of cybersecurity as an advisory board to the CDA.

CDA Group of Seven

British Government

The second part of the trip was no less interesting: I was invited by the Israeli embassy to participate in a think tank of the British government on how to build a strong cyber capability in the UK.

That's a picture taken at the Royal Society; no faces ;)