The Emerging Attention Attack Surface

It is well known among security experts that humans are the weakest link and that social engineering is the path of least resistance for cyber attackers. The classic definition of social engineering is deception aimed at making people do what you want them to do; in the world of cybersecurity, that can mean mistakenly opening an email attachment laden with malicious code. The definition is broad and does not specify particular deception methods, but the classic ones are temporary confidence building, decisions made in a lapse of attention, and curiosity traps.

Our lives have become digital: an overwhelming digitization wave, with ever more exciting new digital services and products promising to make our lives better. The only constant in this big change is our limited supply of attention. As humans we are limited by time, and because of that our attention is a scarce resource, one that every digital supplier wants to grab more and more of. In a way, we have evolved into attention optimization machines: we continuously decide what is interesting and what is not, and we can even ask digital services to notify us when something of interest takes place in the future. The growing scarcity of attention has driven many technological innovations, such as personalization on social networks. The underlying mechanism of attention works by directing our brainpower at a specific piece of information, where we initially gather just enough metadata to decide whether the new information is worthy of our attention. Due to the exploding number of temptations competing for our attention, the time it takes us to decide whether something is interesting keeps getting shorter, which makes us much more selective and much faster to decide whether to skip.

This change in behavior creates an excellent opportunity for cyber attackers who are refining their social engineering methods; a new attack surface is emerging. The initial attention decision-making phase allows attackers to deceive by introducing fake but highly interesting and relevant baits at just the right time, an approach that yields a much higher conversion ratio for the attackers. The combination of attention optimization, shortening decision times, and highly interesting fake pieces of information sets the stage for a new attack vector that is likely to become quite popular. Some examples:

  • Email - An email whose subject line and content discuss something of timely interest to you. For example, you changed your LinkedIn job position today, and an hour later you receive an email with another job offer that sounds similar to your new role. When you change jobs, your attention to the career topic skyrockets, and I suspect very few people would resist the temptation to open such an email.
  • Social Network Mentions - Imagine you tweeted that you are going on a trip to Washington, and someone with a fake account replies with a link claiming that flights are delayed. Wouldn't you click on such a link? If the answer is yes, that mere click could infect you.
  • Google Alerts - Say you want to track mentions of yourself on the internet, so you set a Google Alert to email you whenever a new webpage appears with your name on it. Now imagine receiving such an email, mentioning you on a page with a juicy excerpt. Wouldn't you click the link to read the whole page and see what they wrote about you?
All these examples can achieve high conversion ratios because they are relevant and arrive in a timely fashion; if they are also targeted at the busy part of your day, the chance that you will just click is high. One of the main contributors to the emergence of this attack surface is the growth of personal data spread across different networks and services. This public information gives attackers a sound basis for understanding what interests you, and when.
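To make the timing mechanic concrete, here is a minimal threat-modeling sketch in Python; the signal types, lure templates, and the four-hour freshness window are all hypothetical, chosen only to illustrate how mechanically public signals can translate into timely lures:

```python
# Hypothetical threat-modeling sketch: public signals map almost
# mechanically to lures that arrive while attention on a topic peaks.
# No real APIs are used; the signals below are simulated.
LURE_TEMPLATES = {
    "job_change": "New opportunity similar to your role at {detail}",
    "travel_post": "Alert: flight delays for your trip to {detail}",
    "name_mention": "You were mentioned on {detail} - see what they wrote",
}

public_signals = [
    {"type": "job_change", "detail": "Acme Corp", "hours_ago": 1},
    {"type": "travel_post", "detail": "Washington", "hours_ago": 3},
]

# The attacker's only real work is timing: strike while the topic
# is still "hot" in the target's attention queue.
for signal in public_signals:
    if signal["hours_ago"] <= 4:
        subject = LURE_TEMPLATES[signal["type"]].format(detail=signal["detail"])
        print(f"Send lure now: {subject!r}")
```

The defensive takeaway is the same as the offensive one: the less public, machine-readable signal we emit about our current focus of attention, the less raw material such timed baits have to work with.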

The First Principle of Security By Design

People create technologies to serve a purpose. It starts with a goal in mind; the creator then goes through a design phase and later builds a technology-based system that can achieve that goal. For example, someone created Google Docs so that people could write documents online. A system is a composition of constructs and capabilities that can be used in a certain intended way. Designers always aspire to generality in their creations so they can serve other potential uses, enjoying reuse of technologies and resources. This path, starting from a purpose and proceeding through design, construction, and, later on, usage, is the primary paradigm of technological tools.

The challenge arises when technological creations are used for other purposes: abused for unintended ones. Every system has a theoretical spectrum of possible usages dictated by its capabilities, and it may even be impossible to grasp the full potential. The gap between potential and intended usage is the root of most, if not all, cybersecurity problems. The inherent risk in artificial intelligence lies within this same weakness of purpose versus actual usage. Millions of examples come to mind, from computer viruses abusing standard operating system mechanisms to do harm, up to the recent abuse of Facebook's advertising network to manipulate the minds of US citizens during the last elections. The pattern is not unique to information technologies; it is a general attribute of tools, though information technologies, with their far reach, have elevated the risk of misuse.

One way to tackle this weakness is to add a phase to the design process that evaluates the boundaries of potential usages of each new system and devises a self-regulating framework: each system would carry its own self-regulatory capability. This effort should take place during the design phase but also be evaluated continuously, as the intersection of technologies creates other potential uses. This is a first and basic principle in the emerging paradigm of security by design. Any protective measure added after the design phase incurs higher implementation costs and reduced efficiency; the later a self-regulating protection is applied, the greater the reduction in its effectiveness. Security in technologies should stop being an afterthought.
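As a toy illustration of such a self-regulating capability (the class, limits, and numbers below are hypothetical, a sketch rather than a prescribed design), a system can declare its intended usage envelope at design time and refuse calls that fall outside it:

```python
# Hypothetical sketch: a capability that encodes its intended usage
# envelope at design time and refuses calls outside it.
import time

class BulkExporter:
    """Exports records; intended use: small, occasional backups."""

    MAX_RECORDS_PER_CALL = 1_000      # design-time bound on scope
    MIN_SECONDS_BETWEEN_CALLS = 60    # design-time bound on rate

    def __init__(self):
        self._last_call = None

    def export(self, record_ids):
        now = time.monotonic()
        # Self-regulation: reject usage patterns outside the envelope,
        # e.g. mass scraping disguised as a "backup".
        if len(record_ids) > self.MAX_RECORDS_PER_CALL:
            raise PermissionError("export exceeds intended scope")
        if self._last_call is not None and now - self._last_call < self.MIN_SECONDS_BETWEEN_CALLS:
            raise PermissionError("export exceeds intended rate")
        self._last_call = now
        return [f"record-{rid}" for rid in record_ids]

exporter = BulkExporter()
print(len(exporter.export(range(500))))   # intended use: fine
try:
    exporter.export(range(500))           # immediate repeat: refused
except PermissionError as err:
    print("blocked:", err)
```

The point is that the bounds are part of the design itself, not a control bolted on afterwards, which is exactly where a retrofitted protection would lose most of its effectiveness.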

Risks of Artificial Intelligence on Society

Random Thoughts on Cyber Security, Artificial Intelligence, and Future Risks at the OECD Event - AI: Intelligent Machines, Smart Policies

It is the end of the first day of a fascinating event on artificial intelligence, its impact on societies, and how policymakers should act upon what seems like a once-in-a-lifetime technological revolution. As someone rooted deeply in the world of cybersecurity, I wanted to share my point of view on what the future might hold.

The Present and Future Role of AI in Cyber Security and Vice Versa

Every day we witness remarkable new results in the field of AI, and still it seems we have only scratched the surface. The developments that have reached a certain level of maturity are mostly in object and pattern recognition, which belong to the greater field of perception, and in different branches of reasoning and decision making. AI has already entered the cyber world via defense tools, where most of the applications we see are in detecting malicious behavior in programs and network activity, and in the first level of reasoning used to deal with the information overload in security departments, helping prioritize incidents. AI has a far greater potential contribution in other fields of cybersecurity, both existing and emerging:

Talent Shortage

A big industry-wide challenge where AI can be a game changer is the scarcity of cybersecurity professionals. Today there is a significant shortage of cybersecurity professionals, who are required for tasks ranging from maintaining the security configuration of companies up to responding to security incidents. ISACA predicts a shortage of two million cybersecurity professionals by 2019. AI-driven automation and decision making have the potential to handle a significant portion of the tedious tasks professionals perform today, reducing the workload to the jobs that require the touch of a human expert.
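A minimal sketch of what such automation could look like, assuming hypothetical alert features and a scikit-learn classifier (this is an illustration of the idea, not a reference implementation):

```python
# Hypothetical sketch: triaging security alerts with a simple classifier
# so human analysts only review the highest-risk ones.
from sklearn.ensemble import RandomForestClassifier

# Each alert is reduced to numeric features; these names are illustrative:
# [severity, asset_criticality, failed_logins_last_hour, is_known_ioc]
historical_alerts = [
    [3, 5, 12, 1],
    [1, 1, 0, 0],
    [4, 4, 3, 1],
    [2, 1, 1, 0],
]
analyst_verdicts = [1, 0, 1, 0]  # 1 = real incident, 0 = noise

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(historical_alerts, analyst_verdicts)

new_alert = [[4, 5, 7, 1]]
risk = model.predict_proba(new_alert)[0][1]
if risk > 0.8:
    print(f"Escalate to analyst (risk={risk:.2f})")
else:
    print(f"Auto-archive (risk={risk:.2f})")
```

The model learns from past analyst verdicts, so the scarce experts spend their time only on alerts the automation cannot confidently dismiss.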

Pervasive Active Intelligent Defense

The extension into active defense is inevitable, and AI has the potential to address a significant portion of the threats that deterministic solutions cannot handle properly today; it should be most effective against automated threats with high propagation potential. An efficient embedding of AI inside active defense would take place in all system layers, such as the network, operating systems, hardware devices, and middleware, forming a coordinated, intelligent defense backbone.

The Double-Edged Sword

A threat yet to emerge will be cyber attacks that are themselves powered by AI. The tools, algorithms, and expertise of the artificial intelligence world are widely accessible, and cyber attackers will not refrain from abusing them to make their attacks more intelligent and faster. When this threat materializes, AI will be the only viable mitigation. Such attacks will be fast and agile, and of a magnitude that existing defense tools have not yet experienced. A new genre of AI-based defense tools will have to emerge.

Privacy at Risk

Consumer privacy as a whole is sliding down a slippery slope: more and more companies collect information on us, both structured data such as demographics and behavioral patterns studied implicitly while we use digital services. Extrapolate from the amount of data already collected, the new capabilities of big data, and the multitude of new devices entering our lives under the category of IoT, and we reach an unusually high number of data points per person. Large amounts of personal data distributed across different vendors, residing on their central systems, increase our exposure and create greenfield opportunities for attackers to abuse and exploit us in unimaginable ways. Tackling this risk requires both regulation and the use of different technologies such as blockchain, and AI technologies also have a role: monitoring what is collected on us, possibly moderating what is actually collected versus what should be collected for the services rendered, and quantifying our privacy risk are tasks for AI.
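As a rough sketch of what quantifying that risk might look like (the data categories, sensitivity weights, and counts below are entirely illustrative assumptions):

```python
# Hypothetical sketch: a personal privacy-risk score computed from
# what each vendor holds. Categories and weights are illustrative.
SENSITIVITY = {"demographic": 1, "behavioral": 3, "location": 4, "health": 5}

holdings = {
    "social-network": {"demographic": 40, "behavioral": 900, "location": 200},
    "fitness-app": {"health": 300, "location": 500},
}

for vendor, data in holdings.items():
    score = sum(SENSITIVITY[cat] * count for cat, count in data.items())
    print(f"{vendor}: risk score {score}")

total = sum(
    SENSITIVITY[cat] * count
    for data in holdings.values()
    for cat, count in data.items()
)
print("total exposure:", total)
```

A real system would need AI precisely because the inputs, what each vendor actually holds on us, are not laid out this neatly; inferring them is the hard part.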

Intelligent Identities

In recent years we have seen, at an ever-accelerating pace, new methods of authentication and, correspondingly, new attacks breaking those methods. Most authentication schemes are based on a single aspect of interaction with the user, to keep the user experience as frictionless as possible. AI can play a role in creating robust yet frictionless identification methods that take into account vast amounts of historical and real-time, multi-faceted interaction data to accurately deduce the person behind the device. AI can contribute to our safety and security far beyond this short list of examples: areas where the number of data points increases dramatically and automated decision-making under uncertainty is required are the right spot for AI as we know it today.
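Returning to the identity example, here is a hedged sketch of the idea, assuming hypothetical interaction features and scikit-learn's IsolationForest as the anomaly detector:

```python
# Hypothetical sketch: scoring a login session against a user's
# historical interaction patterns with an anomaly detector.
from sklearn.ensemble import IsolationForest

# Illustrative per-session features:
# [typing_speed_cpm, mouse_speed_px_s, login_hour, session_minutes]
past_sessions = [
    [210, 340, 9, 45],
    [205, 355, 10, 50],
    [198, 330, 9, 40],
    [215, 345, 11, 55],
    [202, 338, 10, 48],
]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(past_sessions)

candidate = [[90, 700, 3, 5]]  # very different behavior, at 3 AM
if detector.predict(candidate)[0] == -1:
    print("Anomalous session: require step-up authentication")
else:
    print("Session consistent with history: stay frictionless")
```

A real system would combine far more facets, but the principle, scoring each session against the person's own history rather than a single static factor, stays the same.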

Is Artificial Intelligence Worrying?

The underlying theme in many AI-related discussions is fear, a very natural reaction to a transformative technology that has played a role in many science fiction movies. Breaking down the fear, we see two parts: fear of change, which is inevitable since AI is indeed going to transform many areas of our lives, and the more primal fear of the emergence of soulless machines aiming to annihilate civilization. I see the threats and opportunities staged in phases: the short term, the medium term, the long term, and the really long term.

The short term

The short term practically means the present, and the primary concerns are in the area of hyper-personalization, which in simple terms means all the algorithms that get to know us better than we know ourselves: an extensive private knowledge base that can be exploited toward goals we never dreamt of. Take, for example, the whole concept of microtargeting on advertising and social networks, as we witnessed in the recent elections in the US. Today it is possible to build an intelligent machine that profiles citizens by demographic, behavioral, and psychological attributes. At a second stage, the machine can exploit the microtargeting capability available on the advertising networks to deliver personalized messages disguised as adverts, where both the content and the design of the adverts are adapted automatically to each person, with the goal of changing the public state of mind. It happened in the US and can happen anywhere, which poses a severe risk to democracy. The root of this short-term threat is the loss of truth, as we are bound to consume most of our information from digital sources.

The medium term

We will witness a big wave of automation that will disrupt many industries, on the assumption that whatever can be automated, whether it involves logical or physical effort, eventually will be. This wave will have a dramatic impact on society, in many cases improving our lives, as with disease detection that is faster, more accurate, and free of human error. These changes across industries will also have side effects that challenge society, such as increased economic inequality that mostly hurts those who are already weak. It will widen the gap between knowledge workers and everyone else, and will further intensify the new inequality based on access to information: people with access to information will have a clear advantage over those without. It is quite difficult to predict whether the impact on some industries will be short-term, with workers flowing to other sectors, or whether it will cause overall stability problems; this topic should be studied further for each industry expecting a disruption.

The long term

We will see more and more intelligent machines that hold the power of life and death over humans, whether an autonomous car that can kill someone on the street or an intelligent medicine dispenser that can kill a patient. The threat is driven by malicious humans who will hack the logic of such systems. Many of the smart machines we are building can be abused to give superpowers to cyber attackers. It is a severe problem, because protection against this threat cannot be achieved merely by adding controls to the artificial intelligence: the risk comes from intelligent humans with malicious intentions and considerable capabilities.

The really long term

This threat still belongs to science fiction: the case where machines turn against humanity while owning both the power to cause harm and the drive for self-preservation. From a technology point of view, such an event could happen, even today, if we decided to put our fate into the hands of a malicious algorithm that can preserve itself while having access to capabilities that can harm us. The risk is that society will build AI for good purposes while other humans abuse it for other purposes, until it eventually spirals out of everyone's control.

What Policy Makers Should Do To Protect Society

Before addressing specific directions, a short discussion is required on the limits of policymakers' power in the world of technology and AI. AI is in practice a genre of techniques, mostly software driven, and more and more individuals around the globe are equipping themselves with the capability to create software and, later, to work on AI. Much like the written word, software is a new means of self-expression, and aspiring to control or regulate it is destined to fail; the same goes for the exchange of ideas. Policymakers should understand these changed boundaries, which dictate new responsibilities as well.

Areas of Impact

Private Data

One area where central intervention can become a protective measure for citizens is the way private data is collected, verified, and, most importantly, used. Without data, most AI systems cannot operate, so data can serve as an anchor of control.

Cyber Crime & Collaborative Research

Another area of intervention should be the way laws against cybercrime are enforced, where there are missing pieces in the law-enforcement puzzle, such as attribution technologies. Today, attribution is a field of cybersecurity that suffers from under-investment, as it has, in a way, no commercial viability; centralized investment is required to build the foundations of attribution into the future digital infrastructure. There are other areas of the cyber world where investment in research and development is in the interest of the public rather than any single commercial company or government, which calls for joint research across nations.

One fascinating area of research could be how to use AI in regulation itself, especially in the enforcement of regulation, understanding that human reach in a digital world is too short for effective implementation. Another idea is building accountability into AI, so that we can record the decisions taken by algorithms and hold them accountable. The documented decisions should reside in the public domain while preserving the privacy of the vendors' intellectual property. Blockchain, as a trusted distributed ledger, could be the perfect tool for saving such evidence of truth about decisions taken by machines, evidence that can stand in court. An example project in this field is Operation Serenata de Amor, a grassroots open-source project built to fight corruption in Brazil by using AI to analyze public expenses and look for anomalies.
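A minimal sketch of the accountability idea above, assuming a simplified record format and a plain hash chain standing in for a full blockchain ledger:

```python
# Hypothetical sketch: a hash-chained log of algorithmic decisions.
# Each entry commits to the previous one, so later tampering is
# detectable, similar in spirit to a blockchain ledger.
import hashlib
import json
import time

chain = []

def record_decision(algorithm_id: str, decision: str) -> dict:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "algorithm_id": algorithm_id,
        "decision": decision,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry

record_decision("loan-scoring-v2", "application 1123 rejected")
record_decision("loan-scoring-v2", "application 1124 approved")

# Verification: recompute each hash and check the links.
for i, entry in enumerate(chain):
    body = {k: v for k, v in entry.items() if k != "hash"}
    payload = json.dumps(body, sort_keys=True).encode()
    assert entry["hash"] == hashlib.sha256(payload).hexdigest()
    if i > 0:
        assert entry["prev_hash"] == chain[i - 1]["hash"]
print("ledger intact")
```

Note what the record does and does not expose: the decision and its timestamp are public, while the model internals, the vendor's intellectual property, never enter the ledger.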

Central Design

A significant paradigm shift policymakers need to take into account is the long strategic transition from centralized systems to distributed technologies, as the latter present far fewer vulnerabilities. A roadmap of centralized systems that should be transformed into distributed ones should eventually be studied and created.

Challenges for Policy Makers

  • Today AI advancement is considered a competitive frontier among countries, which leads to many developments being kept secret. This path leads to a loss of control over technologies, and especially over their potential future abuse beyond the original purpose. The competitive dynamic creates a serious challenge for society as a whole: it is not clear why people treat weapons orders of magnitude more harshly than advanced information technologies, which can eventually cause more harm.
  • Our privacy is abused by market forces pushing for profit optimization, with consumer protection at the bottom of the priority list. These are conflicting forces that policymakers must balance.
  • People across the world differ in many aspects, while AI is a universal language; setting global ethical rules against national preferences creates an inherent conflict.
  • The question of ownership of and accountability for algorithms, in a world where algorithms can cause damage, is an open one with many diverse opinions. It gets complicated because the platforms are global while the rules are often local.
  • What alternatives exist beyond the basic-income idea for the millions who will not be part of the knowledge ecosystem? It is clear that not every person who loses a job will find a new one. Pre-emptive thinking should be conducted to prevent market turbulence in disrupted industries. An interesting question is how the growth of the planet's population impacts this equation.
The main point I took from today is to be careful when designing AI tools intended for a specific purpose, and to consider how they could be exploited to achieve other ends. UPDATE: Link to my story on the OECD Forum Network.

Thoughts on the Russian Intervention in the US Elections. Allegedly.

I got a call last night asking whether I wanted to come to the morning show on TV and talk about Google's recent findings of alleged Russian-sponsored political advertising: advertising that could have impacted the results of the last US elections, joining similar discoveries on Facebook and Twitter, and now Microsoft is also looking for clues. At first I wanted to say, what is there to say about it? But I agreed, as a recent hobby of mine is being a guest on TV shows :) So this event got me reading quite a bit about the subject late at night and early this morning to be well prepared, and the discussion was good; a bit light, as expected from a morning show, but informative enough for its viewers. What struck me later, while contemplating the actual findings, was the significant vulnerability uncovered in this incident, the mere exploitation of that weakness by the Russians (allegedly), and the hazardous path technology has taken us down in recent decades while changing human behavior.

The Russian Intervention Theory

To summarize: there are political forces and citizens in the United States who are worried about the depth of Russian intervention in the elections, and in particular about whether, and to what extent, social networks and other digital media were exploited via digital advertising. The findings so far show that advertising campaigns costing tens of thousands of dollars were launched via organizations that seem to be tied to the Russians, and these findings span the most prominent social networks and search engines. The public does not yet know the nature of the advertising on each platform, who is behind the adverts, or whether the advertisers cooperated with the people behind Trump's campaign. This lack of information, and especially of the nature of the suspicious adverts, invites many theories, and although my mind is full of crazy ideas, it seems that sharing them would only push the truth further away, so I won't. The nature of the adverts is the most important piece of the puzzle, since from their content and variation patterns one could deduce whether they played a synergistic role with Trump's campaign and what the thinking behind them was, especially given that the campaigns discovered so far are strangely uniform, budget-wise, across all the advertising networks. As the story unfolds, we will become wiser.

How To Tackle This Threat

This phenomenon should concern any democracy on the planet whose citizens spend significant time on digital media such as Facebook, and there are some ways to improve the situation:

Regulation

Advertising networks make their money from adverts. The core competence of these companies is knowing who you are and promoting commercial offerings in the most seamless way possible. Advertisements of a political nature, with no commercial offering behind them, abuse this targeting and delivery mechanism to control people's state of mind. The same happens with advertisements on television, but on TV such content is controlled. There is no rational reason why digital advertising networks should get a free pass to let anyone broadcast any message without accountability in the case of non-commercial offerings. These networks were not built for brainwashing, and the customers, us, deserve a high level of transparency here, supervised and enforced by the regulator. If an advert is not of a commercial nature, it should be clearly marked as an advert (adverts often blend so well with the content that even identifying them is difficult), along with the source of its funding and a link to the funder's website. If the advertising networks team up to define a code of ethics that they enforce among themselves, maybe regulation is not needed. At the moment we, the users, are misled and hurt by the way their service is rendered.

Intelligence

The primary advertising networks (Facebook, Google, Twitter, Microsoft) have vast machine learning capabilities, and they should employ them to identify anomalies. Assuming regulation is put in place, whether governmental or self-imposed, there will be groups that try to game the rules, and here technology has a role in identifying deviations from them: from automatically identifying the source of funding of a campaign and alerting on anomalies in real time, up to identifying automated strategies such as brute-force A/B testing carried out by an army of bots. It means investing in technology to make sure everyone is complying with the house rules. Part of such an effort would be opening up data about the advertisers and campaigns of non-commercial products to the public, allowing third-party companies to work on identifying such anomalies and to innovate in parallel with the advertising networks. The same goes for other elements of the networks that can be abused, such as Facebook pages.
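As a hedged sketch of one such heuristic (the ad records, similarity cutoff, and flagging threshold below are all hypothetical), a network could flag funders that launch unusually many near-identical ad variants, one possible signature of brute-force A/B testing:

```python
# Hypothetical sketch: flagging funders that launch many near-duplicate
# ad variants, a possible signature of bot-driven brute-force A/B testing.
from collections import defaultdict
from difflib import SequenceMatcher

ads = [
    {"funder": "acme-pac", "text": "Candidate X will ruin the economy"},
    {"funder": "acme-pac", "text": "Candidate X will wreck the economy"},
    {"funder": "acme-pac", "text": "Candidate X will destroy the economy"},
    {"funder": "local-shop", "text": "50% off shoes this weekend"},
]

SIMILARITY_CUTOFF = 0.8   # text similarity to count as a variant pair
PAIR_THRESHOLD = 2        # flag a funder at this many similar pairs

by_funder = defaultdict(list)
for ad in ads:
    by_funder[ad["funder"]].append(ad["text"])

for funder, texts in by_funder.items():
    similar_pairs = sum(
        1
        for i in range(len(texts))
        for j in range(i + 1, len(texts))
        if SequenceMatcher(None, texts[i], texts[j]).ratio() > SIMILARITY_CUTOFF
    )
    if similar_pairs >= PAIR_THRESHOLD:
        print(f"Flag {funder}: {similar_pairs} near-duplicate ad pairs for review")
```

A production system would add more dimensions, such as launch timing, budget patterns, and funding sources, but even this simple pairwise comparison surfaces the mass-variant pattern.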

Last Thoughts on the Incident

  • How come no one identified the adverts in real time during the elections? I would imagine there were complaints about specific ads during the elections; how come no complaint escalated into more in-depth research on a specific campaign? Maybe there is too much reliance on the bots that manage the self-service workflow of such advertising tools - the dark side of automation.
  • Looking for digital signs that the Russians cooperated with the Trump campaign seems far-fetched to me. The whole idea of a parallel campaign is separation; if synchronization took place, it was probably done verbally, without any digital traces.
  • The mapping of the demographic database allegedly created by Cambridge Analytica onto the targeting taxonomy of Facebook, for example, is an extremely powerful tool for A/B testing via microtargeting. A perfect, cost-efficient tool for mind control.
  • Why does everyone assume the Russians were in favor of Trump? No one raises the option that maybe the Russians had a different intention, or that perhaps it was not them at all. It reminds me a lot of the fruitless efforts to attribute cyber attacks.
More thoughts on the weaknesses of these systems, and what can be done about them, in a future post.
