Unpredictions for 2020 in Cyber Security
The end-of-year tradition of predictions is becoming a guessing game, as the pace of innovation approaches pure randomness. So I will stop pretending I know what is going to happen in 2020 and instead write about the areas that seem the most unpredictable for 2020. Below you can also find an honest review of my 2019 predictions.
5G – A much-talked-about topic in 2019, with billions poured into rollouts across the globe. However, it is still unclear what the killer use cases are, which is usually one step before starting to think about threats, security concepts, and the supply chain of cybersecurity vendors meant to serve this future market. I think we will stay in this state of vagueness for at least the next three years.
Insurance for the Digital World – Even though a big part of our lives has shifted into the digital realm, the insurance world is still observing, hesitantly testing the waters with small initiatives. It is unclear how insurance will immerse itself in digital life, and cyber insurance is one example of such unpredictability. There seems to be room for plenty of innovation beyond helping the behemoths transform.
Cloud Security – 2018 and 2019 were glorious years for cloud security – it seems as if it is clear what customers need, and the only thing left for vendors is to get the work done. Cloud transformation in general hides high complexity and a highly volatile transition of businesses and operations into the cloud, a process that will take another ten years at a minimum, during which technologies, models, and architectures will change many times. Since security is eventually attached to the shape this transformation takes, it will take time until the right security concepts and paradigms stabilize – expect much shuffling in the security vendors’ space before we see an established and directed industry. I believe the markets will come to this realization in 2020.
Alternative Digital Worlds – Many countries, including Russia, China, and others, are contemplating the creation of their own “internet,” and the narrative is about reducing dependency on the “American”-controlled internet. It is a big question involving human rights, progress, nationalism, and trade, and the matter will remain unsolved as the forces at play seem to be here for the long haul.
2019 predictions review
IoT – I said IoT security is a big undefined problem, and it still is. I don’t see anything changing in 2020 even though IoT deployments have become more commonplace.
DevSecOps – I predicted 2019 would be the start of a purchasing spree for cloud DevOps related security startups, and I was spot on. The trend will continue into 2020 as the DevSecOps stack is emerging.
Chipsets – I predicted a flood of new chip designs beyond Intel and AMD, with many security vulnerabilities disclosed. I was slightly right, as there are many efforts to create new, unique chipsets. However, the market is still stuck with the gold standard of Intel, tilting a bit toward AMD product lines. I was dead wrong about the level of interest in researching vulnerabilities in chipsets, maybe because there is not much anyone can do about them.
Small Business Security – I predicted small businesses would emerge as a serious target market for cybersecurity vendors. I was wrong: no one cares to sell to small companies, as it does not fit the typical startup/VC playbook. Still optimistic.
AI in Cyber Security – I predicted that the hype in the endpoint AI security market would fade, and I was spot on – the hype is gone, and the limitations became very clear. There is a growing shift from local AI in endpoints toward centralized security analytics, pushed by Azure, CrowdStrike, and Palo Alto Networks with a narrative of collecting as much data as possible and running some magic algorithms in the cloud to get the job done – a new buzz that will meet reality much faster than the original hype of AI in endpoints.
AI in the Hands of Cyber Attackers – I predicted 2019 would be the year we see the first large-scale attack automated by AI. Well, that did not happen. There is a growing group of people talking about this, but there is no real evidence of such attacks. I still believe weaponization using AI will become the next big wave of cyber threats, but I guess it will take some more time. Maybe that is because attackers can still achieve any goal with rather simplistic attacks, thanks to weak security postures.
Data Privacy – I predicted it would be the year of awakening, when everyone would understand that they “pay” for all the free services with their data. I was right about this one – everyone now knows the nature of the relationship they have with the big consumer tech companies: what they give, and what they get.
Elections & Democracy – I predicted that manipulation of elections via social networks would diminish citizens’ trust in the democratic process across the globe. I was spot on – in Israel, for example, we are unfortunately entering the third round of elections, and confidence and trust are at an all-time low.
Tech Regulation – I expected regulation to be fast and innovative and to integrate with tech companies for tight oversight. I was optimistically wrong. I don’t see anything like that happening in the next five years!
The Emergence of Authentication Methods – I predicted the competition for the best authentication method would stay a mess with many alternatives, old and new, and no winner. I was right about this one. The situation will remain the same for the foreseeable future.
Supply Chain Attacks – I predicted supply chain attacks would become a big thing in 2019, and I was wrong about the magnitude of supply chain attacks even though they played a decent role in the mix of cyber threats in 2019.
Happy End of 2019 🥳🎉
LifeLabs, a Canadian company, suffered a significant data breach. According to this statement, the damage was “customer information that could include name, address, email, login, passwords, date of birth, health card number and lab test results” in the magnitude of “approximately 15 million customers on the computer systems that were potentially accessed in this breach”.
It is an unfortunate event for the company, but eventually, the ones hurt the most are the customers who entrusted them with their private information. It is also clear that the resources this company allocated to defending that private information were not enough. I don’t know the intimate details of the event. Still, from my experience, the cyber defense posture in such companies is usually on the verge of negligence and most commonly severely underfunded. We, as consumers, have gotten used to stories like this every other week, numbing us into accepting whatever the industry dictates as the best practices for such an event.
The playbook of best practices can be captured quite accurately from the letter to customers:
“We have taken several measures to protect our customer information including:
- Immediately engaging with world-class cyber security experts to isolate and secure the affected systems and determine the scope of the breach;
- Further strengthening our systems to deter future incidents;
- Retrieving the data by making a payment. We did this in collaboration with experts familiar with cyber-attacks and negotiations with cyber criminals;
- Engaging with law enforcement, who are currently investigating the matter; and
- Offering cyber security protection services to our customers, such as identity theft and fraud protection insurance.”
My interpretation of those practices:
- First, deal with the breach internally with very high urgency, even though many times the attackers have been inside your network for months; it is the awareness of the breach’s mere existence that puts everyone into critical mode. This most commonly means disconnecting and shutting down everything and calling law enforcement.
- Get your data back so the business can continue running – you can’t imagine how many companies don’t have a fresh copy of their data, so they have to pay the extortionists’ ransom to get it back.
- And here comes “strengthening the security to deter such attacks” – I don’t know what that means in practice, as from my experience it takes a long time to turn a probable breach case of a company into something that can deter future attacks. I guess it is a one-time expense in the form of buying some fancy security products, which will take months and maybe years to roll out.
- Now that the company is back in business and customers still don’t know that their data is potentially out there, bringing joy and prosperity to the attackers, the last and main challenge emerges: how to prevent a potential PR nightmare. The acceptable answer is: let’s set up a website to show we care, and let’s give the customers fraud insurance and an alerting service so they know when their information gets abused. Practically, this tells the customer: now that your data is out there, you are on your own, and it is advisable to stay tuned to alerts telling you when your data reaches terrible places. Good luck with that…
A new theatre play called “Best Practices” has emerged, mostly to mitigate all kinds of business risks while posing as “taking care of” customers.
Mark Zuckerberg was right when he wrote in his op-ed in the Washington Post that the internet needs new rules – though naturally, his view is limited, as the CEO of a private company. For three decades, governments across the globe have created an enormous regulatory vacuum due to a profound misunderstanding of the magnitude of technology’s impact on society. As a result, they neglected their duty to protect society in the mixed reality of technology and humanity. Facebook is the scapegoat of this debate due to its enormous impact on the social fabric, but the chasm between governments, regulation, and tech affects every other tech company, whether it is part of the supply chain of IT infrastructure or a consumer-facing service. The spring of initiatives to regulate Artificial Intelligence (AI) carries the same burden, which is why the driving force behind them is mostly fear, uncertainty, and negative sentiment. I am personally involved in one of those initiatives, and I can’t escape the feeling that it is a bandage for a severe illness, a short-sighted solution to a much bigger problem.
Before technology became immersed in our reality, human-driven processes governed our social fabric. Methods that evolved over centuries to balance power and responsibility among governments, citizens, and companies resulted in a set of rules which are observable and enforceable by humans quite effectively – never a perfect solution, but a steady approach for the democratic systems we know. Every system has a pace and rhythm, and the government-societal system is bound to humans’ pace to create, understand, express, and collaborate effectively with others. The pace of living we all got used to is measured in days, weeks, months, and even years. Technology, on the other hand, works on a different time scale. Information technology is a multi-purpose Lego with a fast learning curve, creating the most significant impact in a shorter and shorter timeframe. In the world of technology, pace has two facets: the creation/innovation span, optimized to achieve significant impact in an ever shorter period; and the runtime aspect, which introduces a more profound complexity.
Running IT systems hide a great deal of complexity from their users – highly volatile dynamics operating in the space of nanoseconds. IT systems are made of source code that describes to computers what should be done to achieve the goal of the system. The code is nothing more than a stream of electrons, and as such can be changed many times a second to reflect the ideas of its creator, where a change in the code yields a different system. One of the greatest premises of AI, for example, is that it can create code on its own using only data, without human intervention. A single change can carry an innocent error that reveals the personal details of millions of consumers to the public. This volatile system impacts privacy, consumer protection, and human rights. The rapid pace of technological change is an order of magnitude faster than humans’ ability to perceive the complexity of a change in time to apply human decisions effectively, the way regulation works today.
The mandate for, and requirement of, governments to protect citizens has not changed at all during the last 30 years, beyond supporting societal changes. What has changed is reality, where technological forces govern more and more parts of our lives and our way of living, and governments cannot fulfill their duty due to their inability to bridge these two disconnected worlds. Every suggestion of a human-driven regulatory framework will be blindsided and defensive by definition, with no real impact and eventually harmful to the technological revolution. Harm to technological innovation will directly harm our way of living, as we have already passed the tipping point of dependency on technology in many critical aspects of life. The boundaries regulation suggests about what is right and wrong still make sense and have not changed, as they apply to humans after all. But the way regulation is applied to the technological part of reality has to adapt to the rules of the game of the technology world to become useful, and not counterintuitive to the main benefits we reap from tech innovation.
The growing gap between the worlds of humans and IT has much more significant ramifications, and we already experience some of them: cyber attacks, uncontrolled AI capabilities and usage, robotics and automation as disruptors of complete economic ecosystems, autonomous weapons, the information gap, and others we don’t know about yet. The lagging of governments has resulted in an absurd de-facto privatization of regulation into the hands of private enterprises motivated by the economic forces of profitability and growth. Censorship, consumer protection, and human and civil rights have been privatized without anyone contemplating the consequences of this loose framework, until scandals surfaced over the last two years. One of the implications of this privatization is the transformation of humans into a resource, tapped for attention that eventually leads to spending – and it won’t stop there.
Another root cause governing many of the conflicts we experience today is the global nature of technology vs. the local nature of legal frameworks. Technology as a fabric has no boundaries; it can exist wherever electricity flows. This factor is one of the main reasons behind the remarkable economic value of IT companies. On the other hand, national or regional regulation is by definition anchored to local governing societal principles. A great divide lies between the subjective, human definition of regulation and the objective nature of technology. Adding to that complexity are countries that harness technology as a global competitive advantage without the willingness to openly participate under the same shared rules.
The mere thought of a computer lying to you about something has boggled my brain ever since I heard it from a professor friend on a flight, as an anecdote about what could happen next in AI. That one sentence took me down a rabbit hole of a wide range of implications. I did not want to write about it at first, so as not to be the one who sows that idea in the minds of people with bad intentions, but today I saw this (The AI Learned to Hide Data From Its Creators to Cheat at Tasks They Gave It) and felt as if the cat was out of the bag. So here I go.
An underlying and maybe subliminal assumption people have had while interacting with computers, ever since they were invented, is that computers tell the truth. Computers may report incorrect results due to false or missing data or incorrect programming, but I personally never assumed anything else might be going on – excluding the case where a computer is only used as a communications medium with other people. Systems, processes, organizations, and even societies dependent on computing assume computers do only what they were programmed to do.
AI, as a technology game changer, is slowly penetrating many systems and applications that are an inseparable part of our lives, playing the role of a powerful and versatile alternative brain, replacing rigid procedural decision-making logic. This shift introduces many new and unknown variables into the future of computing, impacting the delicate balance our society is based on. Unknown variables often translate into fear, as in the case of the impact on the job market, the potential impact on human curiosity and productivity when everything is automated, the threat of autonomous cyber attacks, and of course the dreadful nightmares about machines making up their minds to eliminate humans. Some of these fears are grounded in reality and need to be tackled in the way we drive this transformation; some are still in the science fiction zone. The more established fears live in the realm of the known impact and known capabilities computers can potentially reach with AI. For example, if cars become fully autonomous thanks to the ability to identify objects with digital vision and correlate them with map information and a database of past good and bad driving decisions, then that may cause a shortage of jobs for taxi and truck drivers – a grounded concern. Still, there are certain human characteristics we never imagined would be transferred to AI, maybe due to an idealistic view of AI as a purer form of humanity, keeping only what seems positive and useful. Deception is one of those traits we don’t want in AI. It is a trait that will change everything we know about human-to-machine relationships, as well as machine-to-machine relationships.
Although the research mentioned is far from a general-purpose capability to employ deception as a strategy to achieve unknown ends, the mere fact that deception is just another strategy to be programmed, evaluated, and selected by a machine in order to achieve its goals more optimally is scary.
This is an example of a side effect of AI that cannot be eliminated, as it is implied by AI’s underlying capabilities: understanding the environmental conditions required to achieve a task, and selecting a feasible strategy based on its tactical capabilities.
2018 was a year of awakening to the dire side effects of technological innovation on privacy. The news of Facebook’s mishandling of users’ data raised concerns everywhere. We saw misuse of private information to optimize business goals and abuse of personal data as a platform to serve mind-washing political influencers posing as commercial advertisers. Facebook is, in a way, the world’s privacy scapegoat, but they are not alone – Google, Twitter, and others are in the same boat. Adding to the fiasco were the many examples of consumer services that neglected to protect their customers’ data from cyber attacks. 2018 was a year of rising concerns about privacy, breaking the myth that people don’t care about privacy anymore. People actually do care and understand what personal data is, though their options are limited, and there is no sign 2019 will be any different.
So how did we get here? A growing part of our lives is becoming digital, and convenience is definitely the number one priority – a luxury made possible by technological innovation. Convenience means a personalized experience, and personalization requires access to personal data. The more data we provide, the better the experience we get. Personal data consists of information provided by the user or indications of user activity implicitly collected using different digital tracking technologies. The collected data is fed into different systems residing in central computing facilities which make the service work. Some of the data is fed into machine learning systems which seek to learn something insightful about the user or predict the user’s next move. Inside those complex IT systems of the service provider, our data is constantly vulnerable to misuse, where exposure to unauthorized parties, by mistake or intention, is possible. The same data is also vulnerable by the mere fact that it resides and flows in the service provider’s systems, as those are susceptible to attacks by highly motivated hackers. Our data is at the mercy of the people operating the service and their ability and desire to protect it. They have access to it, control it, decide who does and does not get access to it, and decide when and what to disclose to us about how they use it.
We are in this poor state of lack of control over our privacy because the main technological paradigm dominating the last ten years’ wave of digital innovation is to collect data centrally. Data is a physical object: it needs to be accessible to the information systems which process it, and central data storage is the de-facto standard for building applications. There are new data storage and processing paradigms which aspire to work differently, such as edge analytics and distributed storage (partially blockchain-related). These innovations hold a promise of a better future for our privacy, but unfortunately they are still at a very early, experimental stage.
Unless we change the way we build digital services, our privacy will remain a growing concern, and our only hope as individuals will be to have enough luck not to get hurt.
Well, 2018 is almost over and cyber threats are still here to keep us alert and ready for our continued roller coaster ride in 2019 as well.
So here are some of my predictions for the world of cybersecurity 2019:
IoT is slowly turning into reality, and security is becoming a growing concern – as an afterthought, as always. This reality will not materialize into a new cohort of specialized vendors due to its highly fragmented nature. So we are not set to see any serious IoT security industry emerge in 2019. Again. Maybe in 2020 or 2021.
DevOps security has seen a serious wave of innovation in the last three years, across different areas of the process as well as in the cloud and on-premise. 2019 may be the time for consolidation into full DevOps security suites, to avoid vendor inflation and ease integration across the process.
In 2019 we will see a flood of chipsets from Intel, AMD, Nvidia, Qualcomm, FPGA makers, and many custom makers such as Facebook, Google, and others – many new paradigms and concepts which have not yet been battle-tested from a security point of view. That will result in many new vulnerabilities being uncovered, also due to chipsets’ reliance on more software inside, and of course due to the growing appetite of security researchers to uncover wildly popular and difficult-to-fix vulnerabilities.
Freelancers and Small Office
Professionals and small businesses reliant on digital services will become a prime and highly vulnerable target for cyber attacks – the same businesses which find it very difficult to recover from an attack. There are already quite a few existing vendors, and new ones flocking to save them, and the trend will intensify in 2019. The once-feared, highly fragmented market of small businesses will start being served with specialized solutions, especially in light of the intense competition in the large-enterprise cybersecurity arena.
Enterprise Endpoint Protection
The AI hype wave will meet reality and be reduced back to its appropriate size in terms of capabilities and limitations – an understanding that clarifies the need for a complete and, most importantly, effective protective solution that can be durable for at least 3-5 years. The commoditization of AV in mid-sized to smaller businesses and among consumers will take another step forward with the improvement of Windows Defender and its attractiveness as a highly integrated signature-engine replacement which costs nothing.
AI Inside Cyber Attacks
We will see the first impactful and widespread cyber attacks with AI inside hitting the big news, and they will set new challenges for defense systems and paradigms.
Facebook, Google, Twitter…
Another year of deeper realization that much more data than we thought is in the hands of these companies, making us more vulnerable, and that they are not immune to cyber threats, like everyone else, eventually compromising us. We will also come to realize that services which use our data as the main tool to optimize their offering are in conflict with protecting our privacy, and that our aspiration for control is fruitless given the way these companies are built and the way their products are architected. We will see more good intentions from the people operating these companies.
As more elections take place across the planet, we will learn that the tactics used to bend democracy in the US will be reused and applied in even less elegant ways, especially in non-English-speaking countries, diminishing overall trust in the system and in the democratic process of electing leadership.
Regulators and policymakers will eventually understand that in order to enforce regulation effectively on dynamic technological systems, there is a need for a live technological system, with AI inside, on the regulator’s side. Humans cannot cope with the speed of change in products, and the after-the-fact approach of reacting to incidents when the damage is already done will no longer be sufficient.
2018 was the year of a multitude of authentication ideas and schemes coming in different flavors, and 2019 will be another year of natural selection for the non-applicable ideas. Authentication will stay an open issue, and may stay that way for a long time due to the dynamic nature of systems and interfaces. Having said that, many people have really had enough of text passwords and 2FA.
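To make one of those flavors concrete: time-based one-time passwords (TOTP, RFC 6238) are the scheme behind most authenticator apps used for 2FA. Below is a minimal sketch of the algorithm in Python using only the standard library; the function name and parameters are mine for illustration, not taken from any particular library.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", unix_time // step)  # 64-bit big-endian time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, time 59 -> "94287082" (8 digits).
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

The dynamic nature of the scheme (a new code every 30 seconds) is exactly what makes it both more robust than a static password and still phishable, which is part of why the natural selection among authentication schemes continues.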
The Year of Supply Chain Attacks
2018 was the year supply chain attacks were successfully tested by attackers as an approach, and 2019 will be the year they are taken to full scale. IT outsourcing will be a soft spot, as outsourcers’ access to and control over customers’ systems can provide a great launchpad into companies’ assets.
Let’s see how it plays out.
Happy Holidays and Safe 2019!
Over the last ten years, I was involved in disclosing multiple vulnerabilities to different organizations, and each story is unique, as there is no standard way of doing it. I am not a security researcher and did not find those vulnerabilities on my own, but I was there. A responsible researcher – subject to your definition of what is responsible – first discloses the vulnerability to the developer of the product via email or a bug bounty web page. The idea is to notify the vendor as soon as possible so they have time to study the vulnerability, understand its impact, create a fix, and publish an update, so customers have a solution before weaponization starts. Once the vendor disclosure is over, you want to notify the public about the existence of the vulnerability for situational awareness. Some researchers wait a specified period before exposure, some never disclose to the public, and some do not wait at all. There is also variance in the level of detail in the public disclosure: some researchers only hint at the location of the vulnerability with mitigation tips, while others publish full proof-of-concept code demonstrating how to exploit it. I am writing this to share some thoughts about the process, with considerations and pitfalls that may arise.
A Bug Was Found
It all starts with the particular moment when you find a bug in a specific product – a bug which can be abused by a malicious actor to manipulate the product into doing something unintended and usually beneficial to the attacker. Whether you searched for the bug day and night under a coherent thesis or just encountered it accidentally, it is a special moment. Once the excitement settles, the first thing to do is to check on the internet, and in some specialized databases, whether the bug is already known in some form. If it is unknown, you are entering a singular phase in time where you may be the only one on earth who knows about this vulnerability. I say “may” because the vendor might already know about it but not yet have released a fix for some reason, or an attacker might know about it and already be abusing it in active, ongoing stealth attacks. It could also be that another researcher in the world is sitting on this hot potato, contemplating what to do with it. The vulnerability could have existed for many years and could already be known to a select few; this is a potential reality you cannot eliminate. The clock has started ticking loudly. In a way, you have discovered the secret sauce of a potential cyber weapon with an unknown impact, as vulnerabilities are just a means to an end for attackers.
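The “is it already known” check can be partly scripted. As a hedged sketch of the filtering step: the record shape and the sample entries below are hypothetical stand-ins for a real feed such as the NVD’s, not its actual schema, and in practice you would download the records rather than hard-code them.

```python
# Sketch: filter CVE-style records for a product keyword before assuming
# a bug is novel. The records here are invented for illustration; a real
# workflow would fetch them from a vulnerability database feed.

def find_known_cves(records, keyword):
    """Return IDs of records whose description mentions the keyword."""
    keyword = keyword.lower()
    hits = []
    for rec in records:
        desc = rec.get("description", "").lower()
        if keyword in desc:
            hits.append(rec["id"])
    return hits

# Hypothetical records standing in for a downloaded feed.
sample = [
    {"id": "CVE-2019-0001", "description": "Buffer overflow in ExampleCam firmware"},
    {"id": "CVE-2019-0002", "description": "SQL injection in OtherProduct admin panel"},
]

print(find_known_cves(sample, "examplecam"))  # ['CVE-2019-0001']
```

Keyword matching is only a first pass, of course; confirming that an existing CVE actually covers your bug still takes a human reading the advisories.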
Disclosing to the Vendor
You can and should communicate it to the vendor immediately; most software/hardware vendors publish the means for disclosure. Unfortunately, sending it quickly to the vendor does not reduce the uncertainty in the process – it adds to it. For instance, you can get silence on the other end and no reply from the vendor, which can put you into a strange limbo state. Another outcome could be an angry reply asking how dare you look into the guts of their product searching for bugs and claiming you are driven only by publicity lust – a response potentially accompanied by a legal letter. You could also get a warning not to publish your work at any point in time, as it could damage the vendor. These responses do take place in reality and are not fictional, so you should keep these options in mind. The best result of the first email to the vendor is a fast reply acknowledging the discovery, maybe promising a bounty, but most importantly cooperating sensibly with your public-safety disclosure goal.
There are researchers who do not hold their breath helping the vendor and immediately go to public channels with their findings, assuming the vendor will hear about it eventually and react. This approach most of the time sacrifices users’ safety in the short term in exchange for stronger pressure on the vendor to respond. A plan not for the faint of heart.
In the constructive scenarios of vendor disclosure, there is usually a process of communicating back and forth with the technical team behind the product: exchanging details on the vulnerability, sharing the proof of concept so the vendor can reproduce it quickly, and supporting the effort to create a fix. Keep in mind that even if a fix is created, it does not mean it fits the company’s plans to roll it out immediately, for whatever reason, and this is where your decision on how to time the public disclosure comes into play. The vendor wants the timeline adjusted to their convenience, while your interest is to make sure a fix, and public awareness of the problem, is available to users as soon as possible – sometimes aligned interests, but sometimes conflicting. Google Project Zero made the 90-day deadline a famous and reasonable period from vendor to public disclosure, but it is not written in stone, as each vulnerability reveals different dynamics concerning fix rollout, and it should be thought through carefully.
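The deadline arithmetic itself is trivial, but writing it down helps when negotiating extensions with a vendor. A minimal sketch of a Project-Zero-style 90-day schedule, with invented dates for illustration:

```python
from datetime import date, timedelta

# 90 days is Project Zero's well-known default window, not a universal rule.
DISCLOSURE_WINDOW = timedelta(days=90)

def public_disclosure_date(vendor_notified: date,
                           window: timedelta = DISCLOSURE_WINDOW) -> date:
    """Earliest planned public disclosure, absent an agreed extension."""
    return vendor_notified + window

# Hypothetical notification date for illustration.
notified = date(2019, 1, 15)
print(public_disclosure_date(notified))  # 2019-04-15
```

In a real negotiation the window shifts with fix-rollout realities (patch Tuesdays, firmware update cycles, active exploitation), which is exactly why the 90 days should be a starting point rather than dogma.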
Communicating the vulnerability to the public should have a broad enough impact to reach the awareness of users, and it usually takes one of two possible paths. The easiest one is to publish a blog post and share it on some cybersecurity expert forums; if the story is interesting, it will pick up very fast, as information dissemination in the world of infosec works quite well – the traditional media will pick it up from this initial buzz. It is the easiest way, but not necessarily the one whose consequences you have the most control over, as the interpretations and opinions along the way can vary greatly. The second path is to connect directly with a journalist from a responsible media outlet with shared interest areas and build a story together, where they can take the time to ask for comments from the vendor and other related parties and develop the story correctly. In both cases, the vulnerability uncovered should have a broad audience impact to gain publicity. Handling the public disclosure comes with quite a bit of stress for the inexperienced, since once the story starts rolling publicly, you are not in control anymore; the only thing left for you – and my best advice – is to stay true to your story, know your details, and be responsive.
I suggest letting the vendor know about your public disclosure intentions from the get-go so there won’t be surprises, and hopefully they will cooperate with it, even though there is a risk they will have enough time to downplay or mitigate the disclosure if they are not open to the publicity step.
One of the main questions that arise when contemplating public disclosure is whether to publish the proof-of-concept code or not. It has pros and cons; in my eyes, more cons than pros. In general, once you publish your research finding on the mere existence of the vulnerability, you have covered the goal of awareness, and combined with the public pressure it may create on the vendor, you may have shortened the time for a fix to be built. The published code may create more pressure on the vendor, but the addition is marginal. Bear in mind that once you publish a POC, you have shortened the time for attackers to weaponize their attacks with the new arsenal during the most sensitive window, when the new fix does not yet protect most users. I am not suggesting that attackers are in pressing need of your POC to abuse the new vulnerability; the CVE entry which pinpoints the vulnerability is enough for them to build an attack. I am arguing that, by definition, you did not make their life harder by handing them example code. Making their life harder and buying more time for users of the vulnerable technology is all about safety, which is the original goal of the disclosure anyhow. The reason to favor publishing a POC is the contribution to the security research industry, where researchers gain another tool in their arsenal in the search for other vulnerabilities. Still, once you share something like that in public, you cannot control who gets this knowledge and who does not, and you should assume both attackers and defenders will. There are people in the industry who strongly oppose POC publishing due to the cons I mentioned, but I think they take too harsh a stance. It is a fact that the mere CVE publication causes a spike of new attacks abusing the new vulnerability even in cases where a POC is not available, so the POC does not seem to be the main contributor to that phenomenon.
I am not in favor of publishing a POC, though I weigh the decision carefully on a case-by-case basis.
One of the side benefits of publishing a vulnerability is recognition in the respective industry, and this motivation goes alongside the goal of increasing safety. The same applies to possible monetary compensation. These two “nonprofessional” motivations can sometimes cause misjudgment for the person disclosing the vulnerability, especially when navigating the rough waters of publicity, and they often draw public criticism of the researchers. I believe independent security researchers are more than entitled to these compensations: they put their time and energy, with good intentions, into fixing broken technologies they do not own, so the extra drivers eventually increase safety for all of us.
The main perceived milestone during a vulnerability disclosure journey is the introduction of a new version by the vendor that fixes the vulnerability. The real freedom to disclose everything about a vulnerability comes when users are protected by that fix, and in reality there is a considerable gap between the time a patch is introduced and the time systems have it applied. In enterprises, unless it is a critical patch with a massive impact, it may take 6-18 months until patches are applied. On many categories of IoT devices no patching takes place at all, and on consumer products such as laptops and phones the pace of patching can be fast, but it is also cumbersome and tedious, so many people just turn it off. The architecture of software patches, which often mixes new features with security fixes, is outdated, flawed, and not optimized for the volatility of the cybersecurity world. So please bear in mind that even if a patch exists, it does not mean people and systems are protected by it.
The world has changed a lot in the last seven years regarding how vulnerability disclosure works. More and more companies have come to appreciate the work of external security researchers, and there is more openness to a collaborative effort to make products safer. There is still a long way to go to achieve agility and more safety, but we are definitely headed in the right direction.
It is a well-known truth among security experts that humans are the weakest link, and social engineering is the path of least resistance for cyber attackers. The classic definition of social engineering is deception aimed at making people do what you want them to do; in the world of cybersecurity, that can mean mistakenly opening an email attachment laced with malicious code. The definition is broad and does not enumerate specific deception methods; the classic ones are temporary confidence building, wrong decisions due to lack of attention, and curiosity traps.
Our lives have become digital: an overwhelming digitization wave, with ever more exciting digital services and products improving our lives. The only constant in this significant change is our limited supply of attention. As humans, we have limited time, and so our attention is a scarce resource, one that every digital supplier wants to grab more and more of. In a way, we have evolved into attention optimization machines: we continuously decide what is interesting and what is not, and we can even ask digital services to notify us when something of interest takes place in the future. The growing attention scarcity has driven many technological innovations, such as personalization on social networks. The underlying mechanism of attention works by directing our brainpower at a specific piece of information, where initially we gather just enough metadata to decide whether the new information is worthy of our attention. Due to the exploding number of temptations for our attention, the time it takes us to decide whether something is interesting keeps getting shorter, which makes us much more selective and faster in deciding whether to skip. This change in behavior creates an excellent opportunity for cyber attackers who refine their ways in social engineering; a new attack surface is emerging. The initial attention decision-making phase allows attackers to deceive by introducing artificial but highly exciting and relevant baits at the right time, an approach that results in a much higher conversion ratio for the attackers. The combination of attention optimization, shortening decision times, and highly interesting fake pieces of information sets the stage for a new, potentially highly effective attack vector.
Email – An email with a subject line and content that discusses something of timely interest to you. For example, you changed your LinkedIn job position today, and an hour later you get an email with another job offer that sounds similar to your new job. When you change jobs, your attention to the career topic skyrockets; I guess very few can resist the temptation to open such an email.
Social Network Mentions – Imagine you’ve tweeted that you are going on a trip to Washington, and someone with a fake account replies with a link about flight delays. Wouldn’t you click on it? If the answer is yes, you could get infected by merely clicking the link.
Google Alerts – Say you want to track mentions of yourself on the internet, so you set a Google Alert to email you whenever a new webpage appears with your name on it. Now imagine getting such an email mentioning you on a page with a juicy excerpt. Wouldn’t you click the link to read the whole page and see what they wrote about you?
All these examples promise high conversion ratios because they are relevant and arrive in a timely fashion. If you are targeted during the busy part of the day, the chances you will click on something like that are high.
One of the main contributors to the emergence of this attack surface is the growth in personal data spread across different networks and services. This public information gives attackers a solid basis for understanding what is interesting to you and when.
People create technologies to serve a purpose. It starts with a goal in mind; the creator then goes through a design phase and later builds a technology-based system that can achieve that goal. For example, someone created Google Docs, which allows people to write documents online. A system is a composition of constructs and capabilities set to be used in a certain intended way. Designers always aspire to generalization in their creations so they can serve other potential uses, enjoying reuse of technologies and resources. This path, which starts at the purpose and continues through design, construction, and usage, is the primary paradigm of technological tools.
The challenge arises when technological creations are abused for unintended purposes. Every system has a theoretical spectrum of possible usages dictated by its capabilities, and it may be impossible to grasp its full potential. The gap between potential and intended usage is the root of most, if not all, cybersecurity problems, and the inherent risk in artificial intelligence lies within the same weakness of purpose vs. actual usage. Millions of examples come to mind, from computer viruses abusing standard operating system mechanisms to cause harm, up to the recent abuse of Facebook’s advertising network to control the minds of US citizens during the last elections. The pattern is not unique to technology alone; it is a general attribute of tools, though information technologies, with their far reach, have elevated the risk of misuse.
One way to tackle this weakness is to add a phase to the design process that evaluates the boundaries of potential usages of each new system and devises a self-regulating framework, so that each system has its own self-regulatory capability. This effort should take place during the design phase but also be evaluated continuously, as the intersection of technologies creates other potential uses. It is a first and fundamental principle in the emerging paradigm of security by design. Any protective measure added after the design phase will incur higher implementation costs while its efficiency is reduced; the later a self-regulating protection is applied, the greater the reduction in its effectiveness.
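The self-regulation idea can be made concrete with a minimal sketch: the system declares its intended capabilities up front, and every requested operation is checked against that declaration before it runs. The class and capability names here are hypothetical illustrations, not an established framework.

```python
class UnintendedUseError(Exception):
    """Raised when an operation falls outside the system's declared purpose."""


class SelfRegulatingSystem:
    def __init__(self, intended_capabilities):
        # The design-time declaration of purpose: a fixed set of allowed uses.
        self._intended = frozenset(intended_capabilities)

    def perform(self, capability, action, *args):
        # Gate every operation against the declared purpose before execution.
        if capability not in self._intended:
            raise UnintendedUseError(
                f"'{capability}' is outside the declared purpose")
        return action(*args)


# Example: a document editor intended only to edit and share documents.
editor = SelfRegulatingSystem({"edit_document", "share_document"})
editor.perform("edit_document", lambda text: text.upper(), "hello")
```

An attempt such as `editor.perform("send_bulk_email", ...)` would raise `UnintendedUseError`: the unintended use is rejected by the system itself rather than by an external control bolted on later.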
Security in technologies should stop being an afterthought.
Random Thoughts on Cyber Security, Artificial Intelligence, and Future Risks at the OECD Event – AI: Intelligent Machines, Smart Policies
It is the end of the first day of a fascinating event on artificial intelligence, its impact on societies, and how policymakers should act upon what seems like a once-in-a-lifetime technological revolution. As someone rooted deeply in the world of cybersecurity, I wanted to share my point of view on what the future might hold.
The Present and Future Role of AI in Cyber Security and Vice Versa
Every day we witness remarkable new results in the field of AI, and still it seems we have only scratched the surface. Developments that have reached a certain level of maturity can be seen mostly in the areas of object and pattern recognition, which are part of the greater field of perception, and in different branches of reasoning and decision making. AI has already entered the cyber world via defense tools, where most of the applications we see are in detecting malicious behavior in programs and network activity, and in a first level of reasoning used to deal with the information overload in security departments, helping prioritize incidents.
AI has far more potential contributions in other fields of cybersecurity, both existing and emerging:
A big industry-wide challenge where AI can be a game changer relates to the scarcity of cybersecurity professionals. Today there is a significant shortage of cybersecurity professionals, who are required to perform tasks ranging from maintaining the security configuration in companies to responding to security incidents. ISACA predicts a shortage of two million cybersecurity professionals by 2019. AI-driven automation and decision making have the potential to handle a significant portion of the tedious tasks professionals fulfill today, reducing the volume of jobs to those which require the touch of a human expert.
Pervasive Active Intelligent Defense
The extension into active defense is inevitable, as AI has the potential to address a significant portion of the threats that deterministic solutions cannot handle properly today. It will be most effective against automated threats with high propagation potential. An efficient embedding of AI inside active defense will take place in all system layers, such as the network, operating systems, hardware devices, and middleware, forming a coordinated, intelligent defense backbone.
The Double-Edged Sword
A yet-to-emerge threat will be cyber attacks which are themselves powered by AI. The tools, algorithms, and expertise of artificial intelligence are widely accessible, and cyber attackers will not refrain from abusing them to make their attacks more intelligent and faster. When this threat materializes, AI will be the only possible mitigation. Such attacks will be fast, agile, and of a magnitude that existing defense tools have not yet experienced. A new genre of AI-based defense tools will have to emerge.
Privacy at Risk
Consumers’ privacy as a whole is sliding down a slippery slope, with more and more companies collecting information on us: structured data such as demographic information, as well as behavioral patterns studied implicitly while we use digital services. Extrapolating the amount of data collected, given the new capabilities of big data and the multitude of new devices entering our lives under the category of IoT, we reach an unusually high number of data points per person. High amounts of personal data distributed across different vendors, residing on their central systems, increase our exposure and create greenfield opportunities for attackers to abuse and exploit us in unimaginable ways. Tackling this risk requires both regulation and the use of different technologies such as blockchain, while AI technologies also have a role. The ability to monitor what is collected on us, possibly moderating what is actually collected vs. what should be collected with regard to rendered services, and quantifying our privacy risk, is a task for AI.
In recent years we have seen, at an ever-accelerating pace, new methods of authentication and, correspondingly, new attacks breaking those methods. Most authentication schemes are based on a single aspect of interaction with the user, to keep the user experience as frictionless as possible. AI can play a role in creating robust yet frictionless identification methods which take into account vast amounts of historical and real-time multifaceted interaction data to accurately deduce the person behind the technology.
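The core idea can be sketched very simply: compare a session’s interaction features against the user’s historical baseline and flag sessions that deviate too far. This is a hedged toy illustration; the feature names, the per-user statistics, and the use of a plain z-score are assumptions, and a real system would use far richer signals and models.

```python
def zscore(value, mean, std):
    """How many standard deviations a value sits from its historical mean."""
    return abs(value - mean) / std if std else 0.0

def session_risk(history, current):
    """Average z-score of the current session across all known features."""
    scores = [zscore(current[f], mean, std)
              for f, (mean, std) in history.items()]
    return sum(scores) / len(scores)

# Hypothetical per-user baseline: feature -> (mean, std)
history = {"typing_interval_ms": (180.0, 20.0), "login_hour": (9.0, 1.5)}
normal_session = {"typing_interval_ms": 185.0, "login_hour": 10.0}
odd_session = {"typing_interval_ms": 320.0, "login_hour": 3.0}
```

Here the normal session scores well under a risk of 1.0 while the odd session (unusual typing rhythm at 3 a.m.) scores far above it, so only the latter would trigger an extra authentication step, keeping the common case frictionless.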
AI can contribute to our safety and security far beyond this short list of examples. Areas where the number of data points increases dramatically and automated decision making under uncertainty is required are the sweet spot for AI as we know it today.
Is Artificial Intelligence Worrying?
The underlying theme in many AI-related discussions is fear, a very natural reaction to a transformative technology that has played a role in many science fiction movies. Breaking down the horror, we see two parts: the fear of change, which is inevitable as AI is indeed going to transform many areas of our lives, and the more primal fear of the emergence of soulless machines aiming to annihilate civilization. I see the threats and opportunities staged into different phases: the short term, the medium term, the long term, and the really long term.
The short term practically means the present, and the primary concerns are in the area of hyper-personalization, which in simple terms means all the algorithms that get to know us better than we know ourselves: an extensive private knowledge base exploited towards goals we never dreamt of. Take, for example, the whole concept of micro-targeting on advertising and social networks, as we witnessed in the recent elections in the US. Today it is possible to build an intelligent machine that profiles citizens for demographic, behavioral, and psychological attributes. At a second stage, the machine can exploit the micro-targeting capability available on advertising networks to deliver personalized messages disguised as adverts, where the content and design of the adverts are adapted automatically to each person with the goal of changing the public state of mind. It happened in the US and can happen anywhere, which poses a severe risk to democracy. The root of this short-term threat is the loss of truth, as we are bound to consume most of our information from digital sources.
We will witness a big wave of automation which will disrupt many industries, on the assumption that whatever can be automated, whether it involves logical or physical effort, eventually will be. This wave will have a dramatic impact on society, many times improving our lives, as in the detection of diseases, which can become faster and more accurate without human error. These changes across industries will also have side effects that challenge society, such as increasing economic inequality, mostly hurting those who are already weak. It will widen the gap between knowledge workers and others, and will further intensify a new inequality based on access to information: people with access to information will have a clear advantage over those without. It is quite difficult to predict whether the impact in some industries will be short term, with workers flowing to other sectors, or whether it will cause overall stability problems; it is a topic that should be studied further for each industry expecting disruption.
The longer term
We will see more and more intelligent machines that hold power over the life and death of humans: an autonomous vehicle that can kill someone on the street, or an intelligent medicine inducer that can kill a patient. The threat is driven by malicious humans who will hack the logic of such systems. Many of the smart machines we are building can be abused to give superpowers to cyber attackers. It is a severe problem, as protection from such threats cannot be achieved by adding controls to the artificial intelligence itself; the risk comes from intelligent humans with malicious intentions and great power.
The real long-term
This threat still belongs to science fiction: the case where machines turn against humanity while owning the power to cause harm and to self-preserve. From a technology point of view, such an event could happen even today, if we decided to put our fate into the hands of a malicious algorithm that can preserve itself while having access to capabilities that can harm us. The risk here is that society will build AI for good purposes while other humans abuse it for other purposes, which will eventually spiral out of everyone’s control.
What Policy Makers Should Do To Protect Society
Before addressing specific directions, a short discussion of the limits of policymakers’ power in the world of technology and AI is required. AI is practically a genre of techniques, mostly software-driven, and more and more individuals around the globe are equipping themselves with the capability to create software and, later, to work on AI. In a fashion very similar to the written word, software is the new way to express oneself, and aspiring to control or regulate that is destined to fail. The same goes for the exchange of ideas. Policymakers should understand these changed boundaries, which dictate new responsibilities as well.
Areas of Impact
One area where central intervention can become a protective measure for citizens is the way private data is collected, verified, and, most importantly, used. Without data most AI systems cannot operate, so it can serve as an anchor of control.
Cyber Crime & Collaborative Research
Another area of intervention should be the way cybercrime is policed, where there are missing pieces in the puzzle of law enforcement, such as attribution technologies. Today, attribution is a field of cybersecurity that suffers from under-investment, as it is, in a way, without commercial viability. Centralized investment is required to build the foundations of attribution into the future digital infrastructure. There are other areas in the cyber world where investment in research and development is in the interest of the public rather than any single commercial company or government, which calls for joint research across nations. One fascinating area of research could be how to use AI in regulation itself, especially in its enforcement, given that humans’ reach in a digital world is too short for effective implementation. Another idea is building accountability into AI, so that we can record decisions taken by algorithms and hold them accountable. Documentation of those decisions should reside in the public domain while maintaining the privacy of the vendors’ intellectual property. Blockchain, as a trusted distributed ledger, can be the perfect tool for saving such evidence of truth about decisions taken by machines, evidence that can stand in court. An example project in this field is the Serenata de Amor Operation, a grassroots open source project built to fight corruption in Brazil by using AI to analyze public expenses and look for anomalies.
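The accountability idea, recording algorithmic decisions in a tamper-evident log, can be sketched with a simple hash chain: each record embeds the hash of the previous one, so altering any past decision invalidates everything after it. This is a minimal illustration, not a full blockchain; a real deployment would anchor such records in an actual distributed ledger.

```python
import hashlib
import json

def _digest(record):
    # Canonical JSON so the same record always hashes identically.
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

class DecisionLedger:
    def __init__(self):
        self.chain = []

    def record(self, decision):
        """Append a decision, chaining it to the previous entry's hash."""
        prev = self.chain[-1]["hash"] if self.chain else "genesis"
        entry = {"decision": decision, "prev": prev}
        entry["hash"] = _digest({"decision": decision, "prev": prev})
        self.chain.append(entry)

    def verify(self):
        """Recompute every hash; any tampering breaks the chain."""
        prev = "genesis"
        for e in self.chain:
            if e["prev"] != prev or e["hash"] != _digest(
                    {"decision": e["decision"], "prev": e["prev"]}):
                return False
            prev = e["hash"]
        return True
```

With such a structure, the log of machine decisions can be published while any retroactive edit to a past decision is immediately detectable, which is exactly the property court-grade evidence would need.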
A significant paradigm shift policymakers need to take into account is the long strategic change from centralized systems to distributed technologies, as the latter present far fewer vulnerabilities. A roadmap of centralized systems that should be transformed into distributed ones should eventually be studied and created.
Challenges for Policy Makers
- Today AI advancement is considered a competitive frontier among countries, and this leads to a state in which many developments are kept secret. This path leads to loss of control over technologies, and especially over their potential future abuse beyond the original purpose. The competitive phenomenon creates a serious challenge for society as a whole. It is not clear why people treat weapons so much more harshly than advanced information technology, which can eventually cause more harm.
- Our privacy is abused by market forces pushing for profit optimization, where consumer protection sits at the bottom of the priority list. These are conflicting forces at play for policymakers.
- People across the world differ in many aspects, while AI is a universal language; setting global ethical rules vs. national preferences creates an inherent conflict.
- The question of ownership and accountability of algorithms, in a world where algorithms can cause damage, is an open one with many diverse opinions. It gets complicated since the platforms are global while the rules are often local.
- What alternatives are there, beyond the basic income idea, for the millions who won’t be part of the knowledge ecosystem, given that not every person who loses a job will find a new one? Pre-emptive thinking should be conducted to prevent market turbulence in disrupted industries. An interesting question is how the growth in the planet’s population impacts this equation.
The main point I took from today is to be careful when designing AI tools designated for a specific purpose, considering how they might be exploited to achieve other ends.
UPDATE: Link to my story on the OECD Forum Network.