Category Archive For "blockchain"
2018 was a year of awakening to the dire side effects of technological innovation on privacy. The news of Facebook’s mishandling of users’ data raised concerns everywhere. We saw misuse of private information for optimizing business goals and abuse of personal data as a platform serving manipulative political influencers posing as commercial advertisers. Facebook is in a way the world’s privacy scapegoat, but they are not alone; Google, Twitter, and others are in the same boat. Adding to the fiasco were the many consumer services which neglected to protect their customer data from cyber attacks. 2018 was a year of rising concerns about privacy, breaking the myth that people don’t care about privacy anymore. People actually do care and understand what personal data is, though their options are limited, and there is no sign 2019 will be any different.
So how did we get here? A growing part of our life is becoming digital, and convenience is definitely the number one priority, a luxury made possible by technological innovation. Convenience means a personalized experience, and personalization requires access to personal data. The more data we provide, the better the experience we get. Personal data consists of information provided by the user or indications of user activity collected implicitly using different digital tracking technologies. The collected data is fed into different systems residing in central computing facilities which make the service work. Some of the data is fed into machine learning systems which seek to learn something insightful about the user or predict the user’s next move. Inside the complex IT systems of the service provider, our data is constantly vulnerable to misuse, where exposure to unauthorized parties, by mistake or by intention, is possible. The same data is also vulnerable by the mere fact that it resides and flows in the service provider’s systems, which are susceptible to attack by highly motivated hackers. Our data is at the mercy of the people operating the service and their ability and desire to protect it. They have access to it, control it, decide who gets access to it, and decide when and what to disclose to us about how they use it.
We arrived at this poor state of control over our privacy because the main technological paradigm dominating the past decade’s wave of digital innovation is to collect data centrally. Data is a physical asset: it must be accessible to the information systems which process it, and central data storage is the de-facto standard for building applications. There are new data storage and processing paradigms which aspire to work differently, such as edge analytics and distributed storage (partially blockchain related). These innovations hold the promise of a better future for our privacy, but unfortunately they are still at a very early, experimental stage.
Unless we change the way we build digital services, our privacy will remain a growing concern, and our only hope as individuals will be to have enough luck not to get hurt.
Random Thoughts on Cyber Security, Artificial Intelligence, and Future Risks at the OECD Event – AI: Intelligent Machines, Smart Policies
It is the end of the first day of a fascinating event on artificial intelligence, its impact on societies, and how policymakers should act upon what seems like a once-in-a-lifetime technological revolution. As someone rooted deeply in the world of cybersecurity, I wanted to share my point of view on what the future might hold.
The Present and Future Role of AI in Cyber Security and Vice Versa
Every day we witness remarkable new results in the field of AI, and still it seems we have only scratched the surface. Developments which have reached a certain level of maturity can be seen mostly in the areas of object and pattern recognition, part of the greater field of perception, and in different branches of reasoning and decision making. AI has already entered the cyber world via defense tools, where most of the applications we see are in the fields of detecting malicious behavior in programs and network activity, and in the first level of reasoning used to deal with the information overload in security departments, helping prioritize incidents.
AI has far greater potential contributions in other fields of cybersecurity, both existing and emerging:
A big industry-wide challenge where AI can be a game changer is the scarcity of cybersecurity professionals. Today there is a significant shortage of cybersecurity professionals, who are required to perform tasks ranging from maintaining the security configuration in companies to responding to security incidents. ISACA predicts a shortage of two million cybersecurity professionals by 2019. AI-driven automation and decision making have the potential to handle a significant portion of the tedious tasks professionals fulfill today, reducing the workload to the jobs which require the touch of a human expert.
Pervasive Active Intelligent Defense
The extension into active defense is inevitable; AI has the potential to address a significant portion of the threats that today’s deterministic solutions can’t handle properly, and it is mostly effective against automated threats with high propagation potential. An efficient embedding of AI inside active defense will take place in all system layers, such as the network, operating systems, hardware devices, and middleware, forming a coordinated, intelligent defense backbone.
The Double-Edged Sword
A yet-to-emerge threat will be cyber attacks which are themselves powered by AI. The tools, algorithms, and expertise of the artificial intelligence world are widely accessible, and cyber attackers will not refrain from abusing them to make their attacks more intelligent and faster. When this threat materializes, AI will be the only possible mitigation. Such attacks will be fast, agile, and of a magnitude that existing defense tools have not experienced yet. A new genre of AI-based defense tools will have to emerge.
Privacy at Risk
Consumer privacy as a whole is on a slippery slope, with more and more companies collecting information on us: structured data such as demographic information, and behavioral patterns studied implicitly while we use digital services. Extrapolating the amount of data collected, given the new capabilities of big data and the multitude of new devices entering our life under the category of IoT, we reach an unusually high number of data points per person. High amounts of personal data distributed across different vendors, residing on their central systems, increase our exposure and create greenfield opportunities for attackers to abuse and exploit us in unimaginable ways. Tackling this risk requires both regulation and different technologies such as blockchain, and AI technologies also have a role. The ability to monitor what is collected on us, possibly moderating what is actually collected vs. what should be collected for the rendered services, and quantifying our privacy risk, is a task for AI.
In recent years we have seen, at an ever-accelerating pace, new methods of authentication and, in correspondence, new attacks breaking those methods. Most authentication schemes are based on a single aspect of interaction with the user to keep the user experience as frictionless as possible. AI can play a role in creating robust and frictionless identification methods which take into account vast amounts of historical and real-time multi-faceted interaction data to accurately deduce the person behind the technology.
AI can contribute to our safety and security in the future far beyond this short list of examples. Areas where the number of data points increases dramatically, and where automated decision making under uncertainty is required, are the sweet spot for AI as we know it today.
Is Artificial Intelligence Worrying?
The underlying theme in many AI-related discussions is fear, a very natural reaction to a transformative technology which has played a role in many science fiction movies. Breaking down the horror, we see two parts: the fear of change, which is inevitable as AI is indeed going to transform many areas of our lives, and the more primal fear of the emergence of soulless machines aiming to annihilate civilization. I see the threats and opportunities staged into different phases: the short term, the medium term, the long term, and the really long term.
The short term practically means the present, and the primary concerns are in the area of hyper-personalization, which in simple terms means all the algorithms that get to know us better than we know ourselves: an extensive private knowledge base that is exploited towards goals we never dreamt of. Take, for example, the whole concept of micro-targeting in advertising and social networks, as we witnessed in the recent elections in the US. Today it is possible to build an intelligent machine that profiles citizens for demographic, behavioral, and psychological attributes. At a second stage, the machine can exploit the micro-targeting capability available on the advertising networks to deliver personalized messages disguised as adverts, where the content and design of the adverts are adapted automatically to each person with the goal of changing the public state of mind. It happened in the US and can happen anywhere, which poses a severe risk for democracy. The root of this short-term threat resides in the loss of truth, as we are bound to consume most of our information from digital sources.
In the medium term, we will witness a big wave of automation which will disrupt many industries, on the assumption that whatever can be automated, whether it is bound to a logical or a physical effort, eventually will be. This wave will have a dramatic impact on society, in many cases improving our lives, as with the detection of diseases, which can become faster and more accurate, without human error. These changes across industries will also have side effects which will challenge society, such as increasing economic inequality, mostly hurting those who are already weak. It will widen the gap between knowledge workers and others and will further intensify the new inequality based on access to information: people with access to information will have a clear advantage over those without. It is quite difficult to predict whether the impact on some industries will be short term, with workers flowing to other sectors, or whether it will cause overall stability problems; it is a topic that should be studied further for each industry expecting a disruption.
The longer term
We will see more and more intelligent machines that hold power over human life and death: an autonomous car can kill someone on the street, and an intelligent medicine inducer can kill a patient. The threat is driven by malicious humans who will hack the logic of such systems. Many of the smart machines we are building can be abused to give superpowers to cyber attackers. It is a severe problem, as protection from such threats cannot be achieved by adding controls into the artificial intelligence itself; the risk comes from intelligent humans with malicious intentions and high capabilities.
The real long-term
This threat still belongs to science fiction: the case where machines turn against humanity while owning the power to cause harm and to preserve themselves. From the technology point of view, such an event could happen, even today, if we decide to put our fate into the hands of a malicious algorithm that can preserve itself while having access to capabilities that can harm us. The risk here is that society will build AI for good purposes while other humans abuse it for other purposes, which will eventually spiral out of everyone’s control.
What Policy Makers Should Do To Protect Society
Before addressing some specific directions, a short discussion is required on the limits of policymakers’ power in the world of technology and AI. AI is practically a genre of techniques, mostly software driven, and more and more individuals around the globe are equipping themselves with the capability to create software and, later, to work on AI. In a fashion very similar to the written word, software is the new way to express oneself, and aspiring to control or regulate that is destined to fail. The same goes for the exchange of ideas. Policymakers should understand these new, changed boundaries, which dictate new responsibilities as well.
Areas of Impact
One area where central intervention can become a protective measure for citizens is the way private data is collected, verified, and, most importantly, used. Without data, most AI systems cannot operate, and it can be an anchor of control.
Cyber Crime & Collaborative Research
Another area of intervention should be the way laws against cybercrime are enforced, where there are missing parts in the puzzle of law enforcement, such as attribution technologies. Today, attribution is a field of cybersecurity that suffers from under-investment, as it is in a way without commercial viability. Centralized investment is required to build the foundations of attribution into the future digital infrastructure. There are other areas in the cyber world where investment in research and development is in the interest of the public, not of a single commercial company or government, which calls for joint research across nations. One fascinating area of research could be how to use AI in regulation itself, especially the enforcement of regulation, understanding that humans’ reach in a digital world is too short for effective implementation. Another idea is building accountability into AI, where we will be able to record decisions taken by algorithms and hold them accountable. Documentation of those decisions should reside in the public domain while maintaining the privacy of the vendors’ intellectual property. Blockchain, as a trusted distributed ledger, can be the perfect tool for saving such evidence of truth about decisions taken by machines, evidence that can stand in court. An example project in this field is Operation Serenata de Amor, a grassroots open-source project built to fight corruption in Brazil by analyzing public expenses and looking for anomalies using AI.
A significant paradigm shift policymakers need to take into account is the long-term strategic change from centralized systems to distributed technologies, as the latter present far fewer vulnerabilities. A roadmap of centralized systems that should be transformed into distributed ones should eventually be studied and created.
Challenges for Policy Makers
- Today AI advancement is considered a competitive frontier among countries, and this leads to a state where many developments are kept secret. This path leads to a loss of control over technologies and especially over their potential future abuse beyond the original purpose. The competitive phenomenon creates a serious challenge for society as a whole. It is not clear why people treat weapons orders of magnitude more harshly than advanced information technology, which can eventually cause more harm.
- Our privacy is abused by market forces pushing for profit optimization where consumer protection is at the bottom of priorities. Conflicting forces at play for policymakers.
- People across the world are different in many aspects while AI is a universal language and setting global ethical rules vs. national preferences creates an inherent conflict.
- The question of ownership of and accountability for algorithms, in a world where algorithms can create damage, is an open one with many diverse opinions. It gets complicated since the platforms are global while the rules are often local.
- What alternatives are there, beyond the basic income idea, for the millions that won’t be part of the knowledge ecosystem? It is clear that not every person who loses a job will find a new one. Pre-emptive thinking should be conducted to prevent market turbulence in disrupted industries. An interesting question is how growth of the planet’s population impacts this equation.
The main point I took from today is to be careful when designing AI tools designated for a specific purpose, considering how they can be exploited to achieve other ends.
UPDATE: Link to my story on the OECD Forum Network.
Recently I’ve been thinking about the intersection of blockchain and AI, and although several exciting directions arise from the intersection of these technologies, I want to explore one direction here.
One of the hottest discussions in AI is whether to constrain AI with regulation and ethics to prevent an apocalyptic future. Without going into whether it is right or wrong to do so, I think blockchain can play a crucial role if such a future direction materializes. There is a particular group of AI applications, mostly involving automated decision making, which can impact life and death. For example, an autonomous driving algorithm can take a decision that eventually ends in an accident and loss of life. In a world where AI is enforced to comply with ethics, accountability will be the most crucial aspect of it. To create the technological platform for accountability, we need to be able to record decisions taken by algorithms. Documenting those decisions can take place inside the vendor’s database or in a trusted distributed ledger. Recording decisions in the vendor’s database is the natural path for implementing such a capability, though it suffers from a lack of neutrality, authenticity, and integrity. In a way, such a decision is a piece of knowledge that should reside in the public domain while maintaining the privacy of the vendor’s intellectual property. Blockchain, as a trusted distributed ledger, can be the perfect paradigm for saving such evidence of truth about decisions taken by machines, evidence that can stand in court.
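To make the idea concrete, here is a minimal, hypothetical sketch of such a tamper-evident decision log. Every detail here (class name, fields, the example vendor and decisions) is my own illustration, not an existing system: each record links to the hash of the previous one, so any later alteration of a recorded decision breaks the chain, which is the core property an accountability ledger would rely on. In a real deployment only the hashes would go on a public blockchain, keeping the vendor’s proprietary details private.

```python
import hashlib
import json
import time

class DecisionLedger:
    """A toy hash-chained log of algorithmic decisions (illustrative only)."""

    def __init__(self):
        self.records = []

    def record(self, vendor, decision):
        # Link each record to the previous record's hash (genesis uses zeros).
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        payload = {
            "vendor": vendor,
            "decision": decision,       # e.g. "brake", "swerve-left"
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Canonical JSON so the hash is reproducible.
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(payload)
        return payload["hash"]

    def verify(self):
        # Recompute every hash and check the chain; tampering is detected.
        prev = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

Changing any recorded decision after the fact invalidates every subsequent link, so `verify()` fails, which is exactly the kind of integrity a vendor-controlled database cannot offer on its own.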
The question is whether such blockchain will be a neutral middleware shared by the auto vendors or a service rendered by the government.
If I had to single out an individual development that elevated the sophistication of cybercrime by order of magnitude, it would be sharing. Code sharing, vulnerabilities sharing, knowledge sharing, stolen passwords and anything else one can think of. Attackers that once worked in silos, in essence competing, have discovered and fully embraced the power of cooperation and collaboration. I was honored to present a high-level overview on the topic of cyber collaboration a couple of weeks ago at the kickoff meeting of a new advisory group to the CDA (the Cyber Defense Alliance), called the “Group of Seven” established by the Founders Group. Attendees included Barclays’ CISO Troels Oerting and CDA CEO Maria Vello as well as other key people from the Israeli cyber industry. The following summarizes and expands upon my presentation.
TL;DR – to ramp up the game against cybercriminals, organizations, and countries must invest in tools and infrastructure that enable privacy-preserving cyber collaboration.
The Easy Life of Cyber Criminals
The amount of energy defenders must invest in protection, vs. the energy cybercriminals need to attack a target, is far from equal. While attackers have always had an advantage, over the past five years the balance has tilted dramatically in their favor. Attackers need only find one entry point into a target to achieve their goal; defenders need to make sure every possible path is tightly secured, a task of a whole different scale.
Multiple concrete factors contribute to this imbalance:
- Obfuscation technologies and sophisticated code polymorphism that successfully disguise malicious code as harmless content have rendered a large chunk of established security technologies irrelevant: technologies built with a different set of assumptions during what I call “the naive era of cybercrime.”
- Collaboration among adversaries, in the many forms of knowledge and expertise sharing, naturally sped up the spread of sophistication and innovation.
- Attackers, as “experts” in finding the path of least resistance to their goals, discovered a sweet spot of weakness that defenders can do little about: humans. Human flaws are the hardest to defend against, as attackers exploit core human traits such as trust, personal vulnerabilities, and the tendency to make mistakes.
- Attribution in the digital world is vague and almost impossible to achieve, at least with the tools currently at our disposal. This makes it nearly impossible to find the source of an attack and eliminate it with confidence.
- The complexity of IT systems leads to security information overload which makes appropriate handling and prioritization difficult; attackers exploit this weakness by disguising their malicious activities in the vast stream of cybersecurity alerts. One of the drivers for this information overload is defense tools reporting an ever growing amount of false alarms due to their inability to identify malicious events accurately.
- The increasingly distributed nature of attacks and the use of “distributed offensive” patterns by attackers makes the defense even harder.
Given the harsh reality of the world of cybersecurity today, it is not a question of whether an attack is possible; it is just a matter of the interest and focus of cybercriminals. Unfortunately, the current de-facto defense strategy rests on making things a bit harder for attackers on your end, so that they will find an easier target elsewhere.
Rationale for Collaboration
Collaboration, as proven countless times, creates value beyond the sum of the participating elements, and this applies to the cyber world as well. Collaboration across organizations can contribute enormously to defense. For example, consider the time it takes to identify the propagation of threats in an early-warning system: the period decreases sharply as the number of collaborating participants grows. Identifying attacks targeting mass audiences quickly is highly important, as they tend to spread in epidemic-like patterns. Collaboration in the form of expertise sharing is another area of value; one of the main roadblocks to progress in cybersecurity is the shortage of talent, and the exchange of resources and knowledge would go a long way in helping. Collaboration in artifact research can also reduce the time to identify and respond to cybercrime incidents. Furthermore, the increasing interconnectedness between companies as well as consumers means that the attack surface of an enterprise, the possible entry points for an attack, is continually expanding. Collaboration can serve as an essential counter to this weakness.
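The early-warning benefit can be illustrated with a toy model of my own (the numbers and the independence assumption are illustrative, not from any real deployment): if each of N collaborating organizations independently spots a spreading threat with probability p on any given day, and the first detection warns everyone, the expected time to first detection follows a geometric distribution.

```python
# Toy model: N independent observers, each detecting a spreading threat
# with probability p per day. Detection by any one observer warns all,
# so the chance nobody detects on a given day is (1 - p) ** N, and the
# expected days until the first detection is 1 / (1 - (1 - p) ** N).

def expected_days_to_first_detection(n_participants, p_daily):
    p_any = 1 - (1 - p_daily) ** n_participants
    return 1 / p_any

# With an assumed 5% daily detection chance per organization:
for n in (1, 5, 25, 100):
    days = expected_days_to_first_detection(n, 0.05)
    print(f"{n:>3} collaborators -> ~{days:.2f} days to first warning")
```

A lone organization waits 20 days in expectation, while 25 collaborators cut that to under a day and a half: the probability that a threat goes unnoticed shrinks exponentially in the number of participants.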
A recent phenomenon that may be inhibiting progress towards real collaboration is the perception of cybersecurity as a competitive advantage. Establishing a robust cybersecurity defense presents many challenges and requires substantial resources, and customers increasingly expect businesses to make these investments. Many CEOs consider their security posture as a product differentiator and brand asset and, as such, are disinclined to share. I believe this to be short-sighted due to the simple fact that no-one is safe at the moment; broken trust trumps any security bragging rights in the likely event of a breach. Cybersecurity needs to progress seriously to stabilize, and I don’t think there is value in small marketing wins which only postpone development in the form of collaboration.
Cyber collaboration across organizations can take many forms ranging from deep collaboration to more straightforward threat intelligence sharing:
- Knowledge and domain expertise – Whether it is about co-training or working together on security topics, such partnerships can mitigate the shortage of cybersecurity talent and spread newly acquired knowledge faster.
- Security stack and configuration sharing – It makes good sense to share such acquired knowledge, which is now kept close to the chest. Such collaboration would help disseminate and evolve best practices in security postures as well as help gain control over the flood of new emerging technologies, especially as validation processes take extended periods.
- Shared infrastructure – There are quite a few models in which multiple companies can share the same infrastructure with a single cybersecurity function, for example, cloud services and services rendered by MSSPs. While the current common belief holds that cloud services are less secure for enterprises, from a security-investment point of view there is no reason for this to be the case; they could and should be better. A large portion of such shared infrastructure is invisible and is referred to today as Shadow IT. A proactive step in this direction would be a consortium of companies building a shared infrastructure which fits the needs of all its participants. In addition to improving the defense, the cost of security is shared by all the collaborators.
- Sharing real vital intelligence on encountered threats – Sharing useful indicators of compromise, signatures or patterns of malicious artifacts and the artifacts themselves is the current state of the cyber collaboration industry.
Imagine the level of fortification that could be achieved for each participant if these types of collaborations were a reality.
Challenges on the Path of Collaboration
Cyber collaboration is not taking off at the speed we would like, even though experts may agree with the concept in principle. Why?
- Cultural inhibitions – The state of mind of not cooperating with competition, the fear of losing intellectual property and the fear of losing expertise sits heavily with many decision makers.
- Sharing is almost non-existent due to the justified fear of potential exposure of sensitive data – Deep collaboration in the cyber world requires technical solutions that allow the exchange of meaningful information without sacrificing sensitive data.
- Exposure to new supply chain attacks – Real-time and actionable threat intelligence sharing raises questions about the authenticity and integrity of incoming data feeds, creating a new weak point at the core of enterprise security systems.
- Before an organization can start collaborating on cybersecurity, its internal security function needs to work correctly – this is not necessarily the case with a majority of organizations.
- The brand can be put at risk, as the impact on a single participant in a group of collaborators can damage the public image of the other participants.
- The tools, expertise, and know-how required for establishing a cyber collaboration are still nascent.
- As with any emerging topic, there are too many standards and no agreed-upon principles yet.
- Collaboration in the world of cyber security has always raised privacy concerns within consumer and citizen groups.
Though there is a mix of misconceptions and social and technical challenges, the importance of the topic continues to gain recognition, and I believe we are on the right path.
Technical Challenges in Threat Intelligence Sharing
Even the limited case of real threat intelligence sharing raises a multitude of technical difficulties, and best practices to overcome them are not ready yet. For example:
- How to achieve a balance between sharing actionable intelligence, which must be extensive to be actionable, and preventing the exposure of sensitive information.
- How to establish secure and reliable communications among collaborators with proper handling of authorization, authenticity, and integrity to reduce the risk posed by collaboration.
- How to validate the potential impact of actionable intelligence before it is applied by other organizations. For example, if one collaborator broadcasts that google.com is a malicious URL, how can the other participants automatically identify, in a sea of URLs, that it is not something to act upon?
- How do we make sure we don’t amplify the information-overload problem by sharing false alerts with other organizations, and what means do we have to handle the load?
- In an established collaboration, how can IT measure the effectiveness of the effort required vs. the resources saved and the added protection? How do you calculate collaboration ROI?
- Investigating an incident often requires a good understanding of, and access to, other elements in the network of the attacked enterprise; collaborators naturally cannot have such access, which limits their ability to conduct a root-cause investigation.
These are just a few of the current challenges; more will surface as we get further down the path to collaboration. Several emerging technological areas can help tackle some of these problems: privacy-preserving approaches from the world of big data, such as synthetic data generation and zero-knowledge proofs (as used in some blockchains); tackling information overload with Moving Target Defense-based technologies that deliver only accurate alerts, such as Morphisec Endpoint Threat Prevention, along with emerging solutions in the area of AI and security analytics; and distributed SIEM architectures.
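One of the challenges above, the "google.com is malicious" broadcast, admits a simple first line of defense. The sketch below is a hypothetical illustration of mine (the allowlist contents, threshold, and action names are all invented for the example): vet incoming shared indicators against a local allowlist of known-good domains, and require corroboration from multiple independent collaborators before acting automatically.

```python
# Hypothetical triage policy for incoming shared threat indicators.
# Known-good domains are never auto-blocked; unknown indicators are
# auto-blocked only when independently reported by enough collaborators.

ALLOWLIST = {"google.com", "microsoft.com", "github.com"}  # illustrative
MIN_REPORTERS = 2  # corroboration threshold before automatic action

def triage_indicator(indicator, reporters):
    """Decide the automatic action for a shared indicator of compromise.

    indicator: the reported domain/URL; reporters: set of collaborator
    IDs that independently reported it.
    """
    if indicator in ALLOWLIST:
        # A claim against a high-reputation domain goes to a human.
        return "quarantine-for-review"
    if len(reporters) >= MIN_REPORTERS:
        # Independently corroborated by multiple collaborators.
        return "auto-block"
    # A single uncorroborated report: watch, but don't block yet.
    return "monitor"

print(triage_indicator("google.com", {"org-a", "org-b"}))    # quarantine-for-review
print(triage_indicator("evil.example", {"org-a", "org-b"}))  # auto-block
print(triage_indicator("evil.example", {"org-a"}))           # monitor
```

This does not solve authenticity or integrity of the feed itself, but it shows how a receiving organization can keep a single bad broadcast from poisoning its automated defenses.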
In a highly collaborative future, a network of collaborators will appear connecting every organization. Such a system will work according to specific rules, taking into account that countries will be participants as well:
Countries – Countries can act as centralized aggregation points, aggregating intelligence from local enterprises and disseminating it to other countries, which, in turn, will distribute the received data to their respective local businesses. There should be some filtering on the type of intelligence to be disseminated, and added classification, so that propagation and prioritization will be useful.
Sector Driven – Each industry has its common threats and famous malicious actors; it’s logical that there would be tighter collaboration among industry participants.
Consumers & SMEs – Consumers are the ones excluded from this discussion although they could contribute and gain from this process like anyone else. The same holds for small to medium-sized businesses, which cannot afford the enterprise-grade collaboration tools currently being built.
One of the biggest questions about cyber collaboration is when it will reach a tipping point. I speculate that it will occur when an unfortunate cyber event takes place, or when startups emerge in a massive number in this area or when countries finally prioritize cyber collaboration and invest the required resources.