The way online services are set up today implies that the only technical means to provide a more personalized experience to customers is to collect as much personal data as possible into a server and then feed it into some machine that offers recommendations. Personalization is convenience, and we all want convenience, even at the price of compromising our personal lives. This line of thought started with Amazon, Google, and Facebook, and today it seems that every other online service operates under the same modus operandi. It is an irrational situation in terms of consumer privacy: hundreds of copies of our most intimate online and demographic data sit in the hands of thousands of employees and systems in small and large companies.
The fact that our data is collected and stored somewhere out of our hands is the root of all evil – exactly where the privacy sagas of the recent decade started. From a broader view, the world is stuck in a stalemate against this new paradigm, where legal and government institutions do not even know how to approach the issue beyond handing out arbitrary fines, which are hard to enforce. We all march towards a future where more and more sensitive data is collected about us and potentially abused in ways we can’t imagine.
The question is whether, from a technical point of view, this modus operandi of collecting more and more data centrally to personalize experiences is the only way to go.
To understand our options, we need a little background on how personalization algorithms work. Let’s say that when we go to amazon.com, we want to see a list of products that fits our personal preferences. Amazon has millions of products, and it does not make sense to serve customers an alphabetical list of everything available. Our personal preferences naturally reside inside our brains, and unless we communicate them explicitly, no one can know about them. One way to create a personalized product list is for the user to tell Amazon explicitly which product categories are interesting, and that is, in a way, how the personalization wave started. That approach didn’t stand the test of time, as our preferences change entirely across time and across contexts in our lives. Furthermore, reviewing a list of product categories and specifying what is interesting and what is not is a tedious task no one wants to go through. The convenience cost of explicitly stating your preferences is higher than the convenience value of getting a personalized product list. Add to that the fact that every online service today is interested in offering personalization – I bet it would take 20% of our digital time to fill in such forms.
Once we got over the paradigm of explicitly specifying preferences, companies started to understand that they could extract these preferences implicitly from the way we interact with the online service. For example, you see a book on Amazon, click through to the book’s page, and spend five minutes reading reviews. These online actions can imply that the book holds something interesting for you, something that hints at your preferences. The more behavior recorded on the site, the more accurately they can build a rich profile of your changing preferences over time. Today, recommendation algorithms collect all the interactions on the website and map them to the list of items you interacted with. Every item on the list has a comprehensive descriptive profile – for example, a specific business management book has metadata covering the subjects the book deals with, the name of the author, the text inside the book, and a list of other customers who bought it. The profile of each item you showed interest in is compiled into your user profile, over time turning your personal profile into an accurate and rich depiction of your preferences. That, in simple terms, is how a personalization process works; in reality it is fine-tuned to be more precise, with improvements such as comparing a user profile with those of other “similarly minded” customers to cross-recommend items they bought, or updating a user profile on Netflix based on the actual scenes you watch in a movie, each scene wrapped with its own metadata. An endless game of creating richer user profiles to optimize your experience more accurately and increase the chances of you doing what they want you to do.
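To make the implicit-profiling mechanism above concrete, here is a minimal sketch in Python. The catalog, item names, and tag-counting scheme are all hypothetical simplifications; real recommenders use far richer item metadata and learned models, but the core idea of folding weighted interactions into a preference profile is the same:

```python
from collections import Counter

# Hypothetical catalog: each item carries descriptive metadata tags.
CATALOG = {
    "book_biz_mgmt": {"business", "management", "leadership"},
    "book_sci_fi":   {"fiction", "space", "adventure"},
    "novel_space":   {"fiction", "space", "drama"},
}

def update_profile(profile, item_id, weight=1.0):
    """Fold an observed interaction (a click, dwell time, a purchase)
    into the user's tag-weighted preference profile."""
    for tag in CATALOG[item_id]:
        profile[tag] += weight
    return profile

def recommend(profile, seen, top_n=1):
    """Score unseen items by how much their tags overlap the profile."""
    scores = {
        item: sum(profile[tag] for tag in tags)
        for item, tags in CATALOG.items() if item not in seen
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

profile = Counter()
# Five minutes spent on a sci-fi book's page counts as a strong signal.
update_profile(profile, "book_sci_fi", weight=5.0)
print(recommend(profile, seen={"book_sci_fi"}))  # → ['novel_space']
```

The weight per interaction type is where the real tuning happens: a purchase would count far more than a page view.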
This modus operandi is not a necessity from a technological point of view; online services could offer personalization in a different way, one that is respectful and privacy-preserving of your data. The main thing to keep in mind while thinking about alternative approaches is that in the digital realm, once you give up a single copy of your data into the hands of a third party, you have lost the battle. One way forward is to record and store all your data locally on your devices, and to offer online services the opportunity to interact with your data, but in a respectful manner. It is, in a way, a reversed personalization process, where the online service interacts with your local personal profile temporarily whenever a personalized decision needs to be taken. The online service asks for the relevant part of your profile to which you allowed access and uses that snapshot to create the personalized experience. The online service would be obligated to treat that snapshot of the personal profile as temporary and anonymous, so it would not be associated with you beyond the specific browsing session – a concept that is much easier to enforce from a regulatory point of view. There are many different ways in which such a scheme can work, including running the actual heavyweight personalization process against the online service’s catalog locally on the user’s device, in a secure manner, to create more accurate personalized experiences without sending anything to the company’s servers. In a world of 5G, where data bandwidth is not a problem anymore, such data exchange can happen seamlessly.
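The reversed flow described above could be sketched roughly as follows. The class and field names (`LocalProfile`, `snapshot`, `expires_at`) are purely illustrative assumptions, not an existing protocol; the point is that the full profile stays on the device, and the service only ever receives a scoped, anonymous, expiring slice:

```python
import time

class LocalProfile:
    """User-side store: the full profile never leaves the device."""

    def __init__(self, data):
        # e.g. {"books": {...}, "health": {...}} - scopes the user controls
        self._data = data

    def snapshot(self, scope, ttl_seconds=300):
        """Release only the user-approved slice, tagged with an expiry
        the service is obligated to honour (and a regulator can audit)."""
        if scope not in self._data:
            raise PermissionError(f"no consent for scope '{scope}'")
        return {
            "scope": scope,
            "data": dict(self._data[scope]),          # a copy, not a reference
            "expires_at": time.time() + ttl_seconds,  # temporary by contract
            "anonymous": True,                        # no user identifier attached
        }

profile = LocalProfile({"books": {"fiction": 5.0, "space": 5.0}})
snap = profile.snapshot("books")   # OK: the user consented to this scope
# profile.snapshot("health")       # would raise PermissionError
```

Enforcement of the expiry on the service side is the hard part, of course; the scheme shifts that from a technical impossibility (data already copied forever) to an auditable contractual obligation.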
A reversed approach would finally allow consumers to have full control over their data, control over who has access to data, and at what granularity – shifting the power back to consumers. From a regulatory point of view, since the raw data is not located on the company’s servers anymore, it is easier to enforce laws that prevent providers from using the temporary profile snapshots for other purposes.
It is essential to understand that beyond personalized experiences, there are no real incentives for consumers to give away their data for free. Once the technological challenge of how to create personalized experiences is solved in a privacy-preserving manner, the options are unlimited in terms of going one step further and attaching value to our data.
There is no large corporation on the planet that does not have digital transformation as one of its top three strategic priorities, and many have already deep-dived into it without necessarily understanding what success means. Digital transformation is highly strategic, and many times existential, due to the simple fact that technology has changed everyone’s life forever and keeps doing so. That change gave birth to a new breed of companies with technological DNA, enabling them to create superior substitutes for many of the services and products catered by the “old” world companies. Furthermore, these “new” companies catch up on customers’ changing preferences and adapt very efficiently. The agility of the new world puts a shining spotlight on the weaknesses and clumsiness of the incumbents: “old” companies built with human processes as their core DNA, far from becoming even decent players in the new game. The obsoleteness of the incumbents is not apparent to the naked eye at first, as large piles of cash are used to stage a theater play posing as a new-world company, though the clock to their disappearance is not impressed by the show and continues ticking. Still, you see huge investment and brainpower spent on “transforming” these companies, and I want to set out a frame of thinking that can be useful for understanding what it means to have a successful transformation.
When I think about companies, the metaphor of an organism always comes to my mind. Although it is not a perfect model for describing the trajectory of a company in the long run, the dynamics and actors at play very much present an orchestrated long-term behavior similar to the way organisms work. For example, I used the term DNA earlier to describe the core competence of a company, and it made perfect sense. Another illustration of the difference between incumbents and upstarts is the amount of fat each group carries, the ratio of muscle to fat, and the type of muscles at play. In a world where running is the essential criterion for survival, certain groups of muscles and capabilities matter the most. The magnitude of change driven by technology, and mostly by software, is more a matter of a new species than a linear improvement in specific areas within a family of organisms.
For anyone overweight and nonathletic who needs to get on a strict diet and training routine, the change in life is dramatic. The path is like a roller coaster with many peaks and lows of illusion and disillusion. Getting started is nearly impossible, as the whole body is not ready for such a change. The urgency to lose weight and get in shape – which, in the case of reviving a company’s competitiveness, is not just for the sake of aesthetics – may lead someone to decide on an extreme diet, a start that usually ends in shock, both for a body and for a company. As for the path itself, everyone is different and eventually needs their own way to get there – a simple truth that contradicts the consulting industry’s approach of replicating formulas from one customer to another. And lastly, let’s say a company has had a very successful transformation and is back in the game – the immediate questions that arise are: is it the same company at all? Does it serve the same customers with the same products and services? Which parts died in the process, and what was born?
It seems that if a company goes through a successful transformation, it cannot, by definition, work the same way and provide the same output to the world. Successful transformation changes you profoundly, and this is a truth that has to be communicated internally and externally very clearly. Without it being openly out there, every participant in the process who is expected to play a role in the change will, at a subconscious level, oppose the idea of the change as an unknown existential threat. And eventually, they are right; the change can take some of them out of the game as part of a successful transformation.
The end-of-year tradition of predictions is becoming a guessing game as the pace of innovation accelerates towards pure randomness. So I will stop pretending I know what is going to happen in 2020, and instead write about the areas that seem the most unpredictable for 2020. Below that, you can find an honest review of my 2019 predictions.
5G – A much-talked-about topic in 2019, with billions poured into rollouts across the globe. However, it is still unclear what the killer use-cases are, which is usually one step before starting to think about threats, security concepts, and the supply chain of cybersecurity vendors meant to serve this future market. I think we will stay in this state of vagueness for at least the next three years.
Insurance for the Digital World – Even though a big part of our lives has shifted into the digital realm, the insurance world is still observing and hesitantly testing the waters with small initiatives. It is unclear how insurance will immerse itself into digital life, and cyber insurance is one example of such unpredictability. There seems to be room for lots of innovation beyond helping the behemoths transform.
Cloud Security – 2018 and 2019 were glorious years for cloud security: it seems as if it is clear what customers need, and the only thing left for vendors is to get the work done. Cloud transformation, in general, hides a high complexity and a highly volatile transition of businesses and operations into the cloud – a process that will take another ten years at a minimum, during which technologies, models, and architectures will change many times. Since security is eventually attached to the shape this transformation takes, it will take some time until the right security concepts and paradigms stabilize, with much shuffling in the security vendors’ space before we see an established and directed industry. I believe the markets will come to this realization in 2020.
Alternative Digital Worlds – It seems many countries are contemplating the creation of their own “internet” including countries such as Russia, China, and others, and the narrative is about reducing dependency on the “American” controlled internet. It is a big question involving human rights, progress, nationalism, trade, and the matter will remain unsolved as the forces at play seem to be here for the long haul.
2019 predictions review
IoT – I said IoT security is a big undefined problem, and it still is. I don’t see anything changing in 2020 even though IoT deployments have become more commonplace.
DevSecOps – I predicted 2019 would be the start of a purchasing spree for cloud DevOps related security startups, and I was spot on. The trend will continue into 2020 as the DevSecOps stack is emerging.
Chipsets – I predicted a flood of new chip designs beyond Intel and AMD, with many security vulnerabilities disclosed. I was slightly right, as there are many efforts to create new, unique chipsets; however, the market is still stuck with the gold standard of Intel, tilting a bit towards AMD product lines. I was dead wrong about the level of interest in researching vulnerabilities in chipsets, maybe because there is not much to do about them.
Small Business Security – I predicted small businesses would emerge as a serious target market for cybersecurity vendors. I was wrong as no one cares to sell to small companies as it does not correspond to the typical startup/VC playbook. Still optimistic.
AI in Cyber Security – I predicted that the hype in the endpoint AI security market would fade, and I was spot on – the hype is gone, and the limitations became very clear. There is a growing shift from local AI in endpoints towards centralized security analytics, pushed by Azure, CrowdStrike, and Palo Alto Networks with the narrative of collecting as much data as possible and running some magic algorithms on the cloud to get the job done – a new buzz that will meet reality much faster than the original hype of AI in endpoints.
AI in the Hands of Cyber Attackers – I predicted 2019 would be the year we see the first large-scale attack automated by AI. Well, that did not happen. There is a growing group of people talking about this, but there is no real evidence of such attacks. I still believe weaponization using AI will become the next big wave of cyber threats, but I guess it will take some more time – maybe because attackers can still achieve any goal with rather simplistic attacks, thanks to weak security postures.
Data Privacy – I predicted it would be the year of awakening, when everyone would understand that they “pay” for all the free services with their data. I was right about this one – everyone now knows the nature of the relationship they have with the big consumer tech companies, what they give, and what they get.
Elections & Democracy – I predicted that manipulations of elections via social networks would diminish citizens’ trust in the democratic process across the globe. I was spot on – in Israel, for example, we are unfortunately entering a third round of elections, and confidence and trust are at an all-time low.
Tech Regulation – I wrongly expected regulation to be fast and innovative and that it would integrate with tech companies for tight oversight. I was optimistically wrong. I don’t see anything like that happening in the next five years!
The Emergence of Authentication Methods – I predicted the competition for the best authentication method would stay a mess with many alternatives, old and new, and no winner. I was right about this one. The situation will remain the same for the foreseeable future.
Supply Chain Attacks – I predicted supply chain attacks would become a big thing in 2019, and I was wrong about the magnitude of supply chain attacks even though they played a decent role in the mix of cyber threats in 2019.
Happy End of 2019 🥳🎉
LifeLabs, a Canadian company, suffered a significant data breach. According to this statement, the damage was “customer information that could include name, address, email, login, passwords, date of birth, health card number and lab test results” in the magnitude of “approximately 15 million customers on the computer systems that were potentially accessed in this breach”.
It is an unfortunate event for the company, but eventually, the ones hurt the most are the customers who entrusted it with their private information. It is also clear that the resources the company allocated to defend that private information were not enough. I don’t know the intimate details of this event; still, from my experience, the cyber defense situation in such companies is usually on the verge of negligence and most commonly severely underfunded. We, as consumers, have gotten used to stories like this every other week, numbing us into accepting whatever the industry dictates as the best practices for such an event.
The playbook of best practices can be captured quite accurately from the letter to customers:
“We have taken several measures to protect our customer information including:
- Immediately engaging with world-class cyber security experts to isolate and secure the affected systems and determine the scope of the breach;
- Further strengthening our systems to deter future incidents;
- Retrieving the data by making a payment. We did this in collaboration with experts familiar with cyber-attacks and negotiations with cyber criminals;
- Engaging with law enforcement, who are currently investigating the matter; and
- Offering cyber security protection services to our customers, such as identity theft and fraud protection insurance.”
My interpretation of those practices:
- First, deal with the breach internally with very high urgency, even though many times the attackers have been inside your network for months. Awareness of the mere existence of the breach puts everyone into critical mode, which most commonly means disconnecting and shutting down everything and calling law enforcement.
- Get your data back so the business can continue running – you can’t imagine how many companies don’t have a fresh copy of their data, so they have to pay the extortionists the ransom to get their data back.
- And here comes the “strengthening the security to deter such attacks” – I don’t know what it means in practice, as from my experience it takes a long time to turn a company from a probable breach case into something that can deter future attacks. I guess it is a one-time expense in the form of buying some fancy security products, which will take months and maybe years to roll out.
- Now that the company is back in business and customers still don’t know that their data is potentially out there, bringing joy and prosperity to the attackers, the last and main challenge emerges: how to prevent a potential PR nightmare. The acceptable answer is: let’s set up a website to show we care, and let’s give the customers fraud insurance and an alerting service so they know when their information gets abused. This practically says to the customer: now that your data is out there, you are on your own, and it is advisable to stay tuned to alerts telling you when your data reaches terrible places. Good luck with that…
A new theatre play called “Best Practices” emerged mostly to mitigate all kinds of business risks while posing as “taking care of” customers.
Mark Zuckerberg was right when he wrote in his op-ed in the Washington Post that the internet needs new rules – though naturally, his view is limited, as he is the CEO of a private company. For three decades, governments across the globe have created an enormous regulatory vacuum due to a profound misunderstanding of the magnitude of technology’s impact on society. As a result, they have neglected their duty to protect society in the mixed reality of technology and humanity. Facebook is the scapegoat of this debate due to its enormous impact on the social fabric, but the chasm between governments, regulation, and tech affects every other tech company, whether it is part of a supply chain of IT infrastructure or a consumer-facing service. The spring of initiatives to regulate Artificial Intelligence (AI) carries the same burden, and that is why the driving force behind them is mostly fear, uncertainty, and negative sentiment. I am personally involved in one of those initiatives, and I can’t escape the feeling that it is a bandage for a severe illness, a short-sighted solution to a much bigger problem.
Before technology became immersed in our reality, human-driven processes governed our social fabric. These methods evolved over centuries to balance power and responsibility among governments, citizens, and companies, resulting in a set of rules which are observable and enforceable by humans quite effectively – never a perfect solution, but a steady approach for the democratic systems we know. Every system has a pace and rhythm, and the government-societal system is bound to humans’ pace to create, understand, express, and collaborate effectively with others. The pace of living we are all used to is measured in days, weeks, months, and even years. Technology, on the other hand, works on a different time scale. Information technology is a multi-purpose Lego with a fast learning curve, creating the most significant impact in a shorter and shorter timeframe. In the world of technology, the pace has two facets: the creation/innovation span, optimized to achieve significant impact in a shorter period; and the run-time aspect, which introduces a more profound complexity.
Running IT systems hide a great deal of complexity from their users – highly volatile dynamics operating in the space of nanoseconds. IT systems are made of source code that describes to computers what should be done in order to achieve the goal of the system. The code is nothing more than a stream of electrons, and as such it can be changed many times a second to reflect ideas desired by its creator, where a change in the code leads to a different system. One of the greatest premises of AI, for example, is that it can create code on its own, using only data and without human intervention. A change can carry an innocent error that reveals the personal details of millions of consumers to the public. This volatile system impacts privacy, consumer protection, and human rights. The rapid pace of technological change is an order of magnitude faster than humans’ capability to perceive the complexity of a change in time to apply human decisions effectively, the way regulation works today.
The mandate for, and requirement of, governments to protect citizens has not changed at all during the last 30 years, beyond supporting societal changes. What has changed is reality, where technological forces govern more and more parts of our lives and our way of living, and governments cannot fulfill their duty due to their inability to bridge these two disconnected worlds. Every suggestion of a human-driven regulatory framework will be blindsided and defensive by definition, with no real impact and eventually harmful to the technological revolution. Harm to technological innovation will directly harm our way of living, as we have already passed the tipping point of dependency on technology in many critical aspects of life. The boundaries regulation suggests for what is right and wrong still make sense and have not changed, as they apply to humans after all. But the way regulation is applied to the technological part of reality has to adapt to the rules of the game of the technology world to become useful, and not counter-intuitive to the main benefits we reap from tech innovation.
The growing gap between the worlds of humans and IT has much more significant ramifications, and we already experience some of them, such as cyber attacks, uncontrolled AI capabilities and usage, robotics and automation as disruptors of complete economic ecosystems, autonomous weapons, the information gap, and others we don’t know about yet. The lagging of governments has resulted in an absurd de-facto privatization of regulation into the hands of private enterprises motivated by the economic forces of profitability and growth. Censorship, consumer protection, and human and civilian rights have been privatized without anyone even contemplating the consequences of this loose framework, until scandals surprisingly surfaced over the last two years. One of the implications of this privatization is the transformation of humans into a resource, tapped for attention which eventually leads to spending – and it won’t stop here.
Another root cause that governs many of the conflicts we experience today is the global nature of technology vs. the local nature of legal frameworks. Technology as a fabric has no boundaries; it can exist wherever electricity flows. This factor is one of the main reasons behind the remarkable economic value of IT companies. On the other hand, national or regional regulation is by definition anchored to local governing societal principles. A great divide lies between the subjective, human definition of regulation and the objective nature of technology. Adding to that complexity are countries that harness technology as a global competitive advantage without the willingness to openly participate under the same shared rules.
The mere thought of a computer lying to you about something has boggled my brain ever since I heard it from a professor friend on a flight, as an anecdote about what could happen next in AI. That one sentence took me on a long trip down a rabbit hole of a wide range of implications. I did not want to write about it at first, so as not to be the one who sows that idea in the minds of people with bad intentions, but today I saw this (The AI Learned to Hide Data From Its Creators to Cheat at Tasks They Gave It) and I felt as if the cat was out of the bag. So here I go.
An underlying, and maybe subliminal, assumption people have had while interacting with computers ever since they were invented is that computers tell the truth. Computers may report incorrect results due to false or missing data or due to incorrect programming, but I personally never assumed anything else might be going on – excluding the case where a computer is only used as a communications medium with other people. Systems, processes, organizations, and even societies dependent on computing assume computers do only what they were programmed to do.
AI as a technology game changer is slowly penetrating many systems and applications that are an inseparable part of our lives, playing the role of a powerful and versatile alternative brain, replacing rigid procedural decision-making logic. This shift introduces a lot of new and unknown variables into the future of computing, impacting the delicate balance our society is based on. Unknown variables many times translate to fear, as in the case of the impact on the jobs market, the potential impact on human curiosity and productivity when everything is automated, the threat of autonomous cybersecurity attacks, and of course the dreadful nightmares about machines making up their minds to eliminate humans. Some of these fears are grounded in reality and need to be tackled in the way we drive this transformation; some are still in the science-fiction zone. The more established fears are imagined in the realm of the known impact and known capabilities computers can potentially reach with AI. For example, if cars become fully autonomous thanks to the ability to identify objects in digital vision and correlate them with map information and a database of past good and bad driving decisions, that may cause a shortage of jobs for taxi and truck drivers. This is a grounded concern. Still, there are certain human characteristics we never imagined would be transferred to AI, maybe due to an idealistic view of AI as a purer form of humanity, keeping only what seems positive and useful. Deception is one of those traits we don’t want in AI. It is a trait that will change everything we know about human-to-machine relationships, as well as machine-to-machine relationships.
Although the research mentioned is far from a general-purpose capability to employ deception as a strategy to achieve unknown ends, the mere fact that deception is just another strategy to be programmed, evaluated, and selected by a machine in order to achieve its goals in a more optimized manner is scary.
This is an example of a side effect of AI that cannot be eliminated, as it is implied by its underlying capabilities, such as understanding the environmental conditions required to achieve a task and the ability to select a feasible strategy based on its tactical capabilities.
2018 was a year of awakening to the dire side effects of technological innovation on privacy. The news of Facebook’s mishandling of users’ data raised concerns everywhere. We saw misuse of private information for optimizing business goals and abuse of personal data as a platform to serve mind-washing political influencers posing as commercial advertisers. Facebook is, in a way, the privacy scapegoat of the world, but they are not alone: Google, Twitter, and others are in the same boat. Adding to the fiasco were the too many examples of consumer services that neglected to protect their customer data from cyber attacks. 2018 was a year of rising concerns about privacy, breaking the myth that people don’t care about privacy anymore. People actually do care and understand what personal data is, though their options are limited, and there is no sign 2019 will be any different.
So how did we get here? A growing part of our life is becoming digital, and convenience is definitely the number one priority – a luxury made possible by technological innovation. Convenience means a personalized experience, and personalization requires access to personal data. The more data we provide, the better the experience we get. Personal data is made of information provided by the user or indications of user activity implicitly collected using different digital tracking technologies. The collected data is fed into different systems residing in central computing facilities which make the service work. Some of the data is fed into machine learning systems that seek to learn something insightful about the user or predict the user’s next move. Inside those complex IT systems of the service provider, our data is constantly vulnerable to misuse, with exposure to unauthorized parties possible by mistake or by intention. The same data is also vulnerable by the mere fact that it resides and flows in the service provider’s systems, as they are susceptible to cyber attacks by highly motivated hackers. Our data is at the mercy of the people operating the service and their ability and desire to protect it. They have access to it, control it, decide who gets access to it, and decide when and what to disclose to us about how they use it.
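As a toy illustration of the centralized collection pattern described above (all names hypothetical), every implicitly tracked event lands in one provider-side store, which is exactly what makes that store so attractive to attackers and so prone to misuse:

```python
from dataclasses import dataclass, field

@dataclass
class TrackingEvent:
    user_id: str             # pseudonymous ID from a cookie or device fingerprint
    action: str              # "page_view", "click", "purchase", ...
    target: str              # what was interacted with
    dwell_seconds: float = 0.0

@dataclass
class CentralStore:
    """Provider-side store: every event from every user ends up here,
    available to anyone (or anything) with access to these systems."""
    events: list = field(default_factory=list)

    def ingest(self, event: TrackingEvent) -> None:
        self.events.append(event)

    def history(self, user_id: str) -> list:
        # A full behavioral trail per user, reconstructable at any time.
        return [e for e in self.events if e.user_id == user_id]

store = CentralStore()
store.ingest(TrackingEvent("cookie_42", "page_view", "book_sci_fi", 300.0))
store.ingest(TrackingEvent("cookie_42", "click", "buy_button"))
print(len(store.history("cookie_42")))  # → 2
```

The point of the sketch is the asymmetry: the user generates the events, but only the provider can read, join, or leak the accumulated history.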
We are in this poor state of lack of control over our privacy because the main technological paradigm dominating the recent ten-year wave of digital innovation is to collect data centrally. Data is a physical object: it needs to be accessible to the information systems that process it, and central data storage is the de-facto standard for building applications. There are new data storage and processing paradigms that aspire to work differently, such as edge analytics and distributed storage (partially blockchain related). These innovations hold a promise of a better future for our privacy, but unfortunately they are still at a very early, experimental stage.
Unless we change the way we build digital services, our privacy will remain a growing concern, and our only hope as individuals will be to have enough luck not to get hurt.
Well, 2018 is almost over, and cyber threats are still here to keep us alert and ready for the roller coaster ride to continue in 2019.
So here are some of my predictions for the world of cybersecurity 2019:
IoT is slowly turning into reality, and security is becoming a growing concern, treated as an afterthought as always. Due to the market's highly fragmented nature, this reality will not materialize into a new cohort of specialized vendors, so we are not set to see any serious IoT security industry emerge in 2019. Again. Maybe in 2020 or 2021.
DevOps security has seen a serious wave of innovation in the last three years, across different stages of the process as well as in the cloud and on-premises. 2019 may be the time for consolidation into full DevOps security suites, to counter vendor inflation and ease integration across the process.
In 2019 we will see a flood of chipsets from Intel, AMD, Nvidia, Qualcomm, FPGA vendors, and custom makers such as Facebook, Google, and others. Many new paradigms and concepts have not yet been battle-tested from a security point of view, and that will result in many newly uncovered vulnerabilities: partly because chipsets rely on more and more software inside, and of course because of security researchers' growing appetite for wildly popular, difficult-to-fix vulnerabilities.
Freelancers and Small Offices
Professionals and small businesses reliant on digital services will become a prime and highly vulnerable target for cyber attacks, and these are the same businesses that find it very difficult to recover from one. Quite a few existing vendors, and new ones, are already flocking to save them, and the trend will intensify in 2019. The once-feared, highly fragmented market of small businesses will start being served with specialized solutions, especially in light of the over-competitiveness in the large-enterprise cybersecurity arena.
Enterprise Endpoint Protection
The AI hype wave will come down to earth and be reduced back to its appropriate size in terms of capabilities and limitations. This understanding will clarify the need for a complete and, most importantly, effective protective solution that can stay durable for at least 3-5 years. Commoditization of AV in mid-size to smaller businesses and among consumers will take another step forward with the improvement of Windows Defender and its attractiveness as a highly integrated signature-engine replacement that costs nothing.
AI Inside Cyber Attacks
We will see the first impactful and widespread cyber attacks with AI inside hitting the big news, and they will set new challenges for defense systems and paradigms.
Facebook, Google, Twitter…
Another year of deeper realization that much more of our data than we thought is in the hands of these companies, making us more vulnerable, and that they are not immune to cyber threats, like everyone else, eventually compromising us. We will also come to realize that services which use our data as the main tool to optimize themselves are in conflict with protecting our privacy, and that our aspiration for control is fruitless given the way these companies are built and the way their products are architected. We will see more good intentions from the people operating these companies.
As more elections take place in different countries across the planet, we will learn that the tactics used to bend democracy in the US will be reused and applied in even less elegant ways, especially in non-English-speaking countries, diminishing overall trust in the system and in the democratic process of electing leadership.
Regulators and policymakers will eventually understand that in order to enforce regulation effectively on dynamic technological systems, the regulator needs a live technological system with AI inside as well. Humans cannot cope with the speed of change in products, and the after-the-fact approach of reacting to incidents once the damage is already done will no longer be sufficient.
2018 was the year of a multitude of authentication ideas and schemes in different flavors, and 2019 will be another year of natural selection for the non-applicable ones. Authentication will stay an open issue, and may stay that way for a long time, due to the dynamic nature of systems and interfaces. Having said that, many people have really had enough of text passwords and 2FA.
The Year of Supply Chain Attacks
2018 was the year supply chain attacks were successfully tested by attackers as an approach, and 2019 will be the year they are taken to full scale. IT outsourcing firms will be a soft spot, as their access to and control over customer systems can provide a great launchpad into companies' assets.
Let’s see how it plays out.
Happy Holidays and Safe 2019!
Over the past ten years, I was involved in disclosing multiple vulnerabilities to different organizations, and each story is unique, as there is no standard way of doing it. I am not a security researcher and did not find those vulnerabilities on my own, but I was there. A responsible researcher (subject to your definition of what is responsible) first discloses the vulnerability to the developer of the product via email or a bug bounty web page. The idea is to notify the vendor as soon as possible so they have time to study the vulnerability, understand its impact, create a fix, and publish an update, so customers have a solution before weaponization starts. Once the vendor disclosure is over, you want to notify the public about the existence of the vulnerability for situational awareness. Some researchers wait a specified period before exposure, some never disclose to the public, and some do not wait at all. There is also variance in the level of detail in the public disclosure: some researchers only hint at the location of the vulnerability along with mitigation tips, while others publish full proof-of-concept code demonstrating how to exploit it. I am writing this to share some thoughts about the process, with considerations and pitfalls that may take place.
A Bug Was Found
It all starts with the particular moment when you find a bug in a specific product: a bug that can be abused by a malicious actor to manipulate the product into doing something unintended and usually beneficial to the attacker. Whether you searched for the bug day and night under a coherent thesis or just encountered it accidentally, it is a special moment. Once the excitement settles, the first thing to do is to check on the internet, and in some specialized databases, whether the bug is already known in some form. If it is unknown, then you are entering a singular phase in time where you may be the only one on earth who knows about this vulnerability. I say may, because either the vendor already knows about it but has not released a fix yet for some reason, or an attacker knows about it and is already abusing it in active, ongoing stealth attacks. It could also be that another researcher in the world is sitting on this hot potato, contemplating what to do with it. The vulnerability you found could have existed for many years and could be known to a select few; this is a potential reality you cannot eliminate. The clock has started ticking loudly. In a way, you have discovered the secret sauce of a potential cyber weapon with an unknown impact, as vulnerabilities are just a means to an end for attackers.
Disclosing to the Vendor
You can and should communicate it to the vendor immediately; most software and hardware vendors publish the means for disclosure. Unfortunately, sending it quickly to the vendor does not reduce the uncertainty in the process; it adds to it. For instance, you can get silence on the other end of the line with no reply from the vendor, which can put you into a strange limbo state. Another outcome could be an angry reply asking how dare you look into the guts of their product searching for bugs, claiming you are driven only by publicity lust, a response potentially accompanied by a legal letter. You could also get a warning not to publish your work at any point in time, as it could cause damage to the vendor. These responses do take place in reality and are not fictional, so you should keep them in mind. The best result of the first email to the vendor is a fast reply acknowledging the discovery, maybe promising a bounty, but most importantly cooperating sensibly with your goal of public safety disclosure.
There are researchers who do not hold their breath helping the vendor and immediately go to public channels with their findings, assuming the vendor will hear about it eventually and react. Most of the time this approach sacrifices users' safety in the short term for the sake of stronger pressure on the vendor to respond. A plan not for the faint of heart.
In the constructive disclosure scenarios, there is usually a process of communicating back and forth with the technical team behind the product: exchanging details on the vulnerability, sharing the proof of concept so the vendor can reproduce it quickly, and supporting the effort to create a fix. Keep in mind that even if a fix is created, it does not mean it fits the company's plans to roll it out immediately, for whatever reason, and this is where your decision on how to time the public disclosure comes into play. The vendor wants the timeline adjusted to their convenience, while your interest is to make sure a fix, and public awareness of the problem, reaches users as soon as possible. These interests are sometimes aligned and sometimes conflicted. Google Project Zero made 90 days famous as a reasonable period from vendor disclosure to public disclosure, but it is not written in stone: each vulnerability reveals different dynamics concerning fix rollout, and the timeline should be thought through carefully.
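The timeline arithmetic behind a coordinated disclosure is simple but worth getting right when negotiating dates with a vendor. A minimal sketch: the 90-day default follows the Project Zero convention mentioned above, while the helper's name and its extension parameter are my own assumptions, not any standard tool.

```python
from datetime import date, timedelta

# Hypothetical helper: the 90-day default follows the well-known
# Project Zero convention; the extension policy is an assumption.

def disclosure_schedule(reported_on, grace_days=90, extension_days=0):
    """Return the key dates of a coordinated disclosure timeline."""
    deadline = reported_on + timedelta(days=grace_days)
    return {
        "reported": reported_on,
        "vendor_deadline": deadline,
        # An agreed grace extension pushes only the public date out.
        "public_disclosure": deadline + timedelta(days=extension_days),
    }

schedule = disclosure_schedule(date(2018, 10, 1))
print(schedule["vendor_deadline"])  # → 2018-12-30
```

Computing the dates explicitly, rather than agreeing on a vague "in a few months", keeps both sides honest about when the public-disclosure clock runs out.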
Communicating the vulnerability to the public should have broad impact to reach the awareness of users, and it usually takes one of two paths. The easiest is to publish a blog post and share it on some cybersecurity expert forums; if the story is interesting it will pick up very fast, since information dissemination in the world of infosec works quite well, and the traditional media will pick it up from that initial buzz. It is the easiest way, but not necessarily the one over whose consequences you have the most control, as the interpretations and opinions along the way can vary greatly. The second path is to connect directly with a journalist from a responsible media outlet with shared interest areas and build the story together, where they can take the time to ask for comments from the vendor and other related parties and develop the story correctly. Handling the public disclosure comes with quite a bit of stress for the inexperienced: once the story starts rolling publicly, you are not in control anymore, and my best advice is to stay true to your story, know your details, and be responsive.
I suggest letting the vendor know about your public disclosure intentions from the get-go so there won't be surprises, and hopefully they will cooperate, even though there is a risk that, if they are not open to the publicity step, they will have enough time to downplay or mitigate the disclosure.
One of the main questions that arises when contemplating public disclosure is whether to publish the proof-of-concept code or not. It has pros and cons; in my eyes, more cons than pros. In general, once you publish your research finding of the mere existence of the vulnerability, you have covered the goal of awareness, and combined with the public pressure it may create on the vendor, you may have shortened the time for a fix to be built. The published code may create more pressure on the vendor, but the addition is marginal. Bear in mind that once you publish a POC, you have shortened the time for attackers to weaponize their attacks with the new arsenal, during the most sensitive window when the new fix does not yet protect most users. I am not suggesting that attackers are in pressing need of your POC to abuse the new vulnerability; the CVE entry that pinpoints the vulnerability is enough for them to build an attack. I am arguing that, by definition, you did not make their life harder by giving them example code. Making their life harder and buying more time for users of the vulnerable technology is all about safety, which is the original goal of the disclosure anyhow. The reason to favor publishing a POC is the contribution to the security research industry, where researchers gain another tool in their arsenal for finding other vulnerabilities. Still, once you share something like that in public, you cannot control who gets the knowledge and who does not, and you should assume both attackers and defenders will. There are people in the industry who strongly oppose POC publishing due to the cons I mentioned, but I think they take too harsh a stance. It is a fact that the mere CVE publication causes a spike of new attacks abusing the new vulnerability even in cases where a POC is not available, so the POC does not seem to be the main contributor to that phenomenon.
I am not in favor of publishing a POC, though I think about it carefully on a case-by-case basis.
One of the side benefits of publishing a vulnerability is recognition in the respective industry, and this motivation goes alongside the goal of increasing safety. The same applies to possible monetary compensation. These two "nonprofessional" motivations can sometimes cause misjudgment by the person disclosing the vulnerability, especially when navigating the harsh waters of publicity, and they often draw public criticism of researchers. I believe independent security researchers are more than entitled to these compensations: they put their time and energy, with good intentions, into fixing broken technologies they do not own, so the extra drivers eventually increase safety for all of us.
The main perceived milestone in a vulnerability disclosure journey is the introduction of a new version by the vendor that fixes the vulnerability. The real freedom to disclose everything about the vulnerability comes when users are protected by that fix, and in reality there is a considerable gap between the time a new patch is introduced and the time systems have it applied. In enterprises, unless it is a critical patch with massive impact, it may take 6-18 months until patches are applied to systems. On many categories of IoT devices, no patching takes place at all, and on consumer products such as laptops and phones the pace of patching can be fast, but it is also cumbersome and tedious, and many people just turn it off. The architecture of software patches, which often mix new features with security fixes, is outdated, flawed, and not optimized for the volatility of the cybersecurity world. So please bear in mind that even if a patch exists, it does not mean people and systems are protected by it.
The world has changed a lot in the past seven years regarding how vulnerability disclosure works. More and more companies have come to appreciate the work of external security researchers, and there is more openness to a collaborative effort to make products safer. There is still a long way to go to achieve agility and more safety, but we are definitely headed in the right direction.
It is a well-known truth among security experts that humans are the weakest link, and social engineering is the path of least resistance for cyber attackers. The classic definition of social engineering is deception aimed at making people do what you want them to do; in the world of cybersecurity, that can mean mistakenly opening an email attachment plagued with malicious code. The definition is broad and does not prescribe the deception methods; the classic ones are temporary confidence building, exploiting wrong decisions made due to lack of attention, and curiosity traps.
Our lives have become digital: an overwhelming digitization wave with ever more exciting new digital services and products improving our lives. The only constant in this significant change is our limited supply of attention. As humans we have limited time, and due to that our attention is a scarce resource, a resource every digital supplier wants to grab more and more of. In a way, we have evolved into attention optimization machines: we continuously decide what is interesting and what is not, and furthermore we can ask digital services to notify us when something of interest happens in the future. The growing scarcity of attention drove many technological innovations, such as personalization on social networks. The underlying mechanism of attention works by directing our brainpower at a specific piece of information, where initially we gather just enough metadata to decide whether the new information is worthy of our attention. Due to the exploding number of temptations, the time it takes us to decide whether something is interesting keeps getting shorter, which makes us much more selective and faster to decide whether to skip. This change in behavior creates an excellent opportunity for cyber attackers who refine their social engineering; a new attack surface is emerging. The initial attention decision-making phase allows attackers to deceive by introducing artificial but highly exciting and relevant baits at the right time, an approach that yields a much higher conversion ratio for the attackers. The combination of attention optimization, shortening decision times, and highly interesting fake pieces of information sets the stage for a potentially highly effective new attack vector.
Email – An email with a subject line and content discussing something of timely interest to you. For example, you changed your LinkedIn job position today, and an hour later you get an email with another job offer that sounds similar to your new job. When you change jobs, your attention to the career topic skyrockets; I guess very few can resist the temptation to open such an email.
Social Network Mentions – Imagine you've tweeted that you are going on a trip to Washington, and someone with a fake account replies with a link about flight delays. Wouldn't you click on it? If the answer is yes, you could get infected by the mere click on the link.
Google Alerts – Say you want to track mentions of yourself on the internet, so you set up a Google Alert to email you whenever a new webpage appears with your name on it. Now imagine getting such an email mentioning you on a page with a juicy excerpt. Wouldn't you click the link to read the whole page and see what they wrote about you?
All these examples promise high conversion ratios because they are relevant and arrive in a timely fashion. If you are targeted during a busy part of the day, the chances you will click on something like that are high.
One of the main contributors to the emergence of this attack surface is the growth of personal data spread across different networks and services. This public information serves as a sound basis for attackers to understand what is interesting to you, and when.