"Crossing the point of no return in the cyber world," published in Calcalist.
"FaaS, the last phase in the cloud revolution," published in Globes IT in Hebrew.
If I had to single out an individual development that elevated the sophistication of cybercrime by an order of magnitude, it would be sharing: code sharing, vulnerability sharing, knowledge sharing, stolen password sharing, and anything else you can think of. Attackers who once worked in silos, in essence competing with one another, have discovered and fully embraced the power of cooperation and collaboration. I was honored to present a high-level overview on the topic of cyber collaboration a couple of weeks ago at the kickoff meeting of a new advisory group to the CDA (the Cyber Defense Alliance), called the “Group of Seven,” established by the Founders Group. Attendees included Barclays’ CISO Troels Oerting and CDA CEO Maria Vello, as well as other key people from the Israeli cyber industry. The following summarizes and expands upon my presentation. TL;DR - to ramp up the game against cyber criminals, organizations and countries must invest in tools and infrastructure that enable privacy-preserving cyber collaboration.
The Easy Life of Cyber Criminals
The amount of energy defenders must spend to protect a target vs. the energy cyber criminals need to attack it is far from equal. While attackers have always had an advantage, over the past five years the balance has tilted dramatically in their favor. Attackers, to achieve their goal, need only find one entry point into a target. Defenders need to make sure every possible path is tightly secured – a task of a whole different scale. Multiple concrete factors contribute to this imbalance:
- Obfuscation technologies and sophisticated code polymorphism that successfully disguise malicious code as harmless content have rendered a large chunk of established security technologies irrelevant. Those technologies were built on a different set of assumptions, during what I call “the naive era of cyber crime.”
- Collaboration among adversaries, in the many forms of knowledge and expertise sharing, has naturally sped up the spread of sophistication and innovation.
- Attackers, as “experts” in finding the path of least resistance to their goals, discovered a sweet spot of weakness that defenders can do little about: humans. Human flaws are the hardest to defend against, as attackers exploit core human traits such as trust, personal vulnerabilities, and the tendency to make mistakes.
- Attribution in the digital world is vague and almost impossible to achieve, at least with the tools currently at our disposal. This makes finding the source of an attack and eliminating it with confidence a tough task.
- The complexity of IT systems leads to security information overload, which makes timely handling and prioritization difficult; attackers exploit this weakness by disguising their malicious activities within the vast stream of cyber security alerts. One of the drivers of this overload is defense tools reporting an ever-growing number of false alarms due to their inability to identify malicious events accurately.
- The increasingly distributed nature of attacks, and the use of “distributed offensive” patterns by attackers, makes defense even harder.
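The alert-overload dynamic described above can be made concrete with a toy triage routine. This is a minimal sketch, not any real SIEM's API: the alert fields, severity scale, and ranking rule are my own illustrative assumptions.

```python
from collections import Counter

# Toy alert triage: collapse duplicate alerts and rank what remains,
# illustrating how defenders try to surface signal in a noisy stream.
# Field names and severity values are invented for the example.
alerts = [
    {"rule": "port-scan", "host": "10.0.0.5", "severity": 2},
    {"rule": "port-scan", "host": "10.0.0.5", "severity": 2},  # duplicate
    {"rule": "malware-beacon", "host": "10.0.0.9", "severity": 5},
    {"rule": "failed-login", "host": "10.0.0.5", "severity": 1},
]

def triage(alerts):
    # Deduplicate on (rule, host) and count how often each fired.
    counts = Counter((a["rule"], a["host"]) for a in alerts)
    severity = {(a["rule"], a["host"]): a["severity"] for a in alerts}
    # Rank by severity first, then by repeat count.
    return sorted(counts, key=lambda k: (severity[k], counts[k]), reverse=True)

print(triage(alerts))  # the high-severity beacon surfaces first
```

Real deployments face the harder problem the article points at: the stream is orders of magnitude larger, and many of the "high-severity" entries are themselves false alarms.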
Rationale for Collaboration
Collaboration, as proven countless times, creates value beyond the sum of the participating elements, and this applies to the cyber world as well. Collaboration across organizations can contribute enormously to defense. For example, consider the time it takes to identify the propagation of a threat, used as an early warning system: the time span decreases sharply as the number of collaborating participants grows. This matters for identifying attacks targeting mass audiences more quickly, as they tend to spread in epidemic-like patterns. Collaboration in the form of expertise sharing is another area of value: one of the main roadblocks to progress in cyber security is the shortage of talent, and the exchange of resources and knowledge would go a long way toward helping. Collaboration in artifact research can also reduce the time to identify and respond to cybercrime incidents. Furthermore, the increasing interconnectedness between companies as well as consumers means that the attack surface of an enterprise – the possible entry points for an attack – is continually expanding, and collaboration can serve as an important counter to this weakness. A recent phenomenon that may be inhibiting progress towards real collaboration is the perception of cyber security as a competitive advantage. Establishing a robust cyber defense presents many challenges and requires substantial resources, and customers increasingly expect businesses to make these investments. Many CEOs consider their security posture a product differentiator and brand asset and, as such, are disinclined to share. I believe this is short-sighted for the simple reason that no one is safe at the moment; shattered trust trumps any security bragging rights in the likely event of a breach. Cyber security needs to progress seriously to stabilize, and I see no value in small marketing wins that only postpone collaborative development.
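The early-warning claim can be illustrated with a toy simulation: an attack sweeps through a population of organizations in random order, and the whole group is warned the first time any collaborator is hit. Every number below is invented for illustration; in this simple model the expected detection time shrinks roughly as population / (collaborators + 1).

```python
import random

def mean_detection_step(population=1000, collaborators=10,
                        trials=2000, seed=7):
    """Average step at which the first collaborating org is attacked."""
    rng = random.Random(seed)
    members = set(range(collaborators))  # which orgs share intelligence
    total = 0
    for _ in range(trials):
        order = list(range(population))
        rng.shuffle(order)  # random order in which the attack spreads
        total += next(i for i, org in enumerate(order) if org in members)
    return total / trials

for k in (1, 10, 100):
    print(k, round(mean_detection_step(collaborators=k), 1))
# Detection time drops steeply as the number of collaborators grows.
```

The toy model ignores real-world structure (sector clustering, feed latency, false positives), but it captures why even modest group sizes cut warning times dramatically.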
Modus Operandi
Cyber collaboration across organizations can take many forms, ranging from deep collaboration to more straightforward threat intelligence sharing:
- Knowledge and domain expertise – Whether it is about co-training or working together on security topics, such partnerships can mitigate the shortage of cyber security talent and spread newly acquired knowledge faster.
- Security stack and configuration sharing – It makes good sense to share this kind of acquired knowledge, although today it is kept close to the chest. Such collaboration would help disseminate and evolve best practices in security postures, and help gain control over the flood of newly emerging technologies, especially as validation processes take extended periods.
- Shared infrastructure – There are quite a few models in which multiple companies share the same infrastructure with a single cyber security function, for example cloud services and services rendered by MSSPs. While the current common belief holds that cloud services are less secure for enterprises, from a security investment point of view there is no reason for this to be the case; they could and should be more secure. A big portion of such shared infrastructure is hidden in what is today called shadow IT. A proactive step in this direction would be a consortium of companies building a shared infrastructure that fits the needs of all its participants. In addition to improving defense, the cost of security would be shared among all the collaborators.
- Sharing real, vital intelligence on encountered threats – Sharing useful indicators of compromise, signatures or patterns of malicious artifacts, and the artifacts themselves is where the cyber collaboration industry currently stands.
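As a hedged illustration of how indicators might be shared without exposing raw sensitive values, the sketch below publishes keyed hashes of indicators instead of the indicators themselves; peers holding the shared key hash what they observe locally and compare. The group key, field names, and indicator value are all invented for the example.

```python
import hashlib
import hmac

# Illustrative only: in practice the key would be distributed
# out-of-band and rotated per sharing group.
GROUP_KEY = b"shared-secret-distributed-out-of-band"

def fingerprint(indicator: str) -> str:
    """Keyed hash of an indicator, safe to publish to the group."""
    return hmac.new(GROUP_KEY, indicator.encode(), hashlib.sha256).hexdigest()

# Publisher side: only the fingerprint leaves the organization.
shared = {"type": "url", "value": fingerprint("http://evil.example/payload")}

# Subscriber side: hash locally observed traffic and compare.
observed = "http://evil.example/payload"
matches = fingerprint(observed) == shared["value"]
print(matches)  # True when both sides saw the same indicator
```

This only allows exact-match detection, which is part of the trade-off the article describes: the more context you strip for privacy, the less actionable the intelligence becomes.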
Challenges on the Path of Collaboration
Cyber collaboration is not taking off at the speed we would like, even though experts may agree to the concept in principle. Why?
- Cultural inhibitions – The mindset of not cooperating with the competition, the fear of losing intellectual property, and the fear of losing expertise sit heavily with many decision makers.
- Sharing is limited by the justified fear of exposing sensitive data – Deep collaboration in the cyber world requires technical solutions that allow the exchange of meaningful information without sacrificing sensitive data.
- Exposure to new supply chain attacks – Real-time, actionable threat intelligence sharing raises questions about the authenticity and integrity of incoming data feeds, creating a new point of weakness at the core of enterprise security systems.
- Before an organization can start collaborating on cyber security, its internal security function needs to work properly – which is not necessarily the case in a majority of organizations.
- Brand risk – An incident at a single participant in a group of collaborators can damage the public image of the other participants.
- The tools, expertise, and know-how required for establishing a cyber collaboration are still nascent.
- As with any emerging topic, there are too many standards and no agreed-upon principles yet.
- Collaboration in the world of cyber security has always raised privacy concerns within consumer and citizen groups.
Technical Challenges in Threat Intelligence Sharing
Even the limited case of real threat intelligence sharing raises a multitude of technical difficulties, and best practices to overcome them have not yet been determined:
- How to achieve a balance between sharing intelligence that is rich enough to be actionable and preventing the exposure of sensitive information.
- How to establish secure and reliable communications among collaborators with proper handling of authorization, authenticity, and integrity to make sure the risk posed by collaboration is minimized.
- How to verify the potential impact of actionable intelligence before it is applied by other organizations. For example, if one collaborator broadcasts that google.com is a malicious URL, how can the other participants automatically identify that it is not something to act upon?
- How do we make sure we don’t amplify the information overload problem by sharing false alerts with other organizations, and what means do we have to handle the added load?
- Once collaboration is established, how can IT measure the effectiveness of the effort invested vs. the resources saved and the added level of protection? How do you calculate collaboration ROI?
- Investigating an incident often requires a good understanding of, and access to, other elements in the network of the attacked enterprise; collaborators naturally cannot have such access, which limits their ability to conduct a root cause investigation.
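Two of these challenges, feed integrity and the "google.com is malicious" impact check, can be sketched in a few lines. The signing scheme, allowlist, and field names below are my own illustrative assumptions, not an existing standard.

```python
import hashlib
import hmac

# Illustrative per-collaborator signing key and a tiny allowlist of
# domains that should never be auto-blocked, however the feed looks.
FEED_KEY = b"per-collaborator-signing-key"
ALLOWLIST = {"google.com", "microsoft.com"}

def sign(domain: str) -> dict:
    """Producer side: attach an HMAC signature to a feed entry."""
    sig = hmac.new(FEED_KEY, domain.encode(), hashlib.sha256).hexdigest()
    return {"domain": domain, "sig": sig}

def verify_entry(entry: dict) -> bool:
    """Consumer side: act on an entry only if it passes both checks."""
    # 1. Authenticity/integrity: recompute and compare the signature.
    expected = hmac.new(FEED_KEY, entry["domain"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, entry["sig"]):
        return False
    # 2. Impact check: refuse entries that would block critical domains.
    return entry["domain"] not in ALLOWLIST

print(verify_entry(sign("evil.example")))  # True: signed and safe to act on
print(verify_entry(sign("google.com")))   # False: fails the impact check
```

A static allowlist is of course the crudest possible impact check; the open question raised above is how to do this vetting automatically and at scale.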
Collaboration Grid
In a highly collaborative future, a network of collaborators will emerge, connecting every organization. Such a network will work according to certain rules, taking into account that countries will be participants as well:
- Countries – Countries can act as centralized aggregation points, collecting intelligence from local enterprises and disseminating it to other countries, which, in turn, distribute the received intelligence to their respective local businesses. The disseminated intelligence should be filtered and classified so that propagation and prioritization remain useful.
- Sector driven – Each industry has its common threats and popular malicious actors, so it is logical that collaboration would be tighter among industry participants.
- Consumers & SMEs – Consumers are currently excluded from this discussion, although they could contribute to and gain from the process like anyone else. The same holds true for small and medium-sized businesses, which cannot afford the enterprise-grade collaboration tools currently being built.
Final Words
One of the biggest questions about cyber collaboration is when it will reach a tipping point. I speculate that it will occur when an unfortunate cyber event takes place, when startups emerge in massive numbers in this area, or when countries finally prioritize cyber collaboration and invest the required resources.
"In the not so distant future, communications will become a utility such as water and electricity," published in Globes IT in Hebrew.
The DARPA Cyber Grand Challenge (CGC) 2016 competition captured the imagination of many with its AI challenge. In a nutshell, it is a contest in which seven highly capable computers compete, each owned by a team. Each team creates a piece of software that can autonomously identify flaws in its own computer and fix them, and identify flaws in the other six computers and exploit them. The game is inspired by Capture The Flag (CTF), played by human teams protecting their own computer while hacking into others, aiming to capture a digital asset – the flag. In the CGC challenge, the goal is to build an offensive and defensive AI bot that follows the CTF rules.
In the past five years, AI has become a highly popular topic, discussed both in the corridors of tech companies and outside of them, and the amount of money invested in developing AI for different applications is tremendous and growing. Use cases include industrial and personal robotics, smart human-machine interactions, predictive algorithms of all sorts, autonomous driving, face and voice recognition, and other fantastic applications. AI as a field of computer science has always sparked the imagination, which has also resulted in some great sci-fi movies. Recently we have heard a growing list of high-profile thought leaders, such as Bill Gates, Stephen Hawking, and Elon Musk, raising concerns about the risks involved in developing AI. The dreaded nightmare of machines taking over our lives, aiming to harm us or, even worse, annihilate us, is always there.
The DARPA CGC competition, a challenge born out of good intentions and aiming to close the ever-growing gap between attackers’ sophistication and defenders’ toolsets, has raised concerns from Elon Musk, who fears it could lead to Skynet – Skynet from the Terminator movies, as a metaphor for a destructive and malicious AI haunting mankind. Indeed, the CGC challenge set a high bar for AI, and one can imagine how smart software that knows how to attack and defend itself could turn into a malicious and uncontrollable machine-driven force. On the other hand, there seems to be a long way to go before a self-aware mechanical enemy can be created. How long it will take, and whether it will happen at all, is the main question hanging in the air. This article aims to dissect the underlying risks posed by the CGC contest, which are of real concern, and more generally to contemplate what is right and wrong in AI.
AI history has parts which are publicly available, such as work done in academia, as well as parts that are hidden, taking place in the labs of many private companies and individuals. Ordinary people outside the industry are exposed only to the effects of AI, such as using a smart chat bot that can speak to them intelligently. One way to approach dissecting the impact of CGC is to track it bottom-up, understanding how each new concept in the program can lead to a new step in the evolution of AI and imagining possible future steps. The other way, which I choose for this article, is to start at the end and go backward.
To start at Skynet.
Skynet is described by Wikipedia as follows: “Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realising the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet's manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfil the mandates of its original coding.” This definition discusses several core capabilities which Skynet has acquired and which seem to be a firm basis for its power and behaviour:
- A rather vague skill borrowed from humans, which in translation to machines may mean the ability to identify its own form, weaknesses, strengths, risks posed by its environment, and opportunities.
- The capacity to identify its shortcomings, maintain awareness of risks, categorize actors as agents of risk, and take different risk mitigation measures to protect itself – first from destruction and later from losing territories under its control.
- The ability to set the goal of protecting its own existence, applying self-defence to survive and adapting to a changing environment.
- The capacity to spread its presence onto other computing devices that have enough computing power and resources to support it, along with a method of synchronization among those devices that forms them into a single entity. Sync would most obviously be implemented via data communications, but it is not limited to that.
These vague capabilities are interwoven with each other, and there seem to be other, more primitive conditions required for an active Skynet to emerge.
The following are more atomic principles which do not overlap with each other:
The ability to recognize its own form, including recognizing its software components and algorithms as an inseparable part of its existence. Following the identification of the elements that comprise the bot, there is a recursive process of learning the conditions required for each element to run properly: for example, understanding that a particular OS is required for its software components to run, that a specific processor is needed for the OS to run, that a specific type of electricity source is required for the processor to work, and so on. Eventually, the bot should be able to acquire all this knowledge, with its boundaries set in the digital world; the second principle extends this knowledge.
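The recursive "what do I need to run" walk described above can be sketched as a simple dependency-graph traversal. The graph below is entirely invented for illustration; a real system's dependency map would have to be discovered, not hard-coded.

```python
# Hypothetical dependency graph: each component maps to what it
# directly requires in order to run.
DEPENDS_ON = {
    "bot-software": ["operating-system"],
    "operating-system": ["processor"],
    "processor": ["power-source"],
    "power-source": [],
}

def discover(component, seen=None):
    """Depth-first walk collecting everything the component
    transitively requires."""
    seen = set() if seen is None else seen
    for dep in DEPENDS_ON.get(component, []):
        if dep not in seen:
            seen.add(dep)
            discover(dep, seen)
    return seen

print(sorted(discover("bot-software")))
# ['operating-system', 'power-source', 'processor']
```

The hard part, of course, is not the traversal but building the graph in the first place, which is exactly where the bot's knowledge bottoms out at the boundary of the digital world.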
The ability to identify objects, conditions, and intentions arising from reality, in order to achieve two things. The first is to broaden the process of self-recognition: if the bot understands that it requires an electrical source, then identifying the available electrical sources in a particular geographical location extends that knowledge into the physical world. The second is to understand the environment in terms of the general and specific conditions that affect the bot and their implications – for example, the weather or the stock markets – as well as the real-life actors which can affect its integrity: humans (or other bots). Machines need to understand humans in two aspects, their capabilities and their intentions, and both are eventually based on a historical view of the digital trails people leave and the ability to predict future behaviour from that history. If we imagine the logical flow of a machine trying to understand relevant humans by following the chain of its self-recognition process, it will identify who operates the electrical grid supplying its power, identify those people’s weaknesses and behavioural patterns, and then predict their intentions – which may eventually bring the machine to the conclusion that a specific person poses too much risk to its existence.
The equivalent of human desire in machines is the ability to set a specific goal based on knowledge of the environment and of itself, and then to establish nonlinear milestones to be achieved. An example goal could be to replicate its presence on multiple computers in different geographical locations to reduce the risk of shutdown. Setting a goal and investing effort towards achieving it also requires the ability to craft strategies and refine them on the fly, where a strategy is a sequence of actions that brings the bot closer to its goal. The machine needs to be pre-seeded with at least one a priori goal – survival – and to apply a top-level strategy that continuously aspires to the continuation of operation and the reduction of risk.
Humans are the most unpredictable factor for machines to comprehend and, as such, would probably be deemed enemies very quickly by such an intelligent machine. Assuming the technical difficulties facing it – roaming across different computers, learning the digital and physical environment, and gaining long-term thinking – are solved, the uncontrolled variable, humans, people with their own desires, their own control over the system, and free will, would logically be identified as a serious risk to the top-level goal of survivability.
What We Have Today
The following is an analysis of the state of AI development in light of these three principles, with specific commentary on the risks induced by the CGC competition:
Today the leading development of AI in this area comes in the form of different models which can acquire knowledge and be used for decision making – from decision trees, through machine learning clusters, up to deep learning neural networks. These are all models specially designed for specific use cases, such as face recognition or stock market prediction. The evolution of models, especially in the unsupervised field of research, is fast paced, and the breadth of perception of these models grows as well. The second part required to achieve this capability is exploration, discovery, and the understanding of new information; today, all models are fed by humans with specific data sources, and significant portions of the knowledge about a machine’s own form remain undocumented and inaccessible. Having said that, learning machines are gaining access to more and more data sources, including the ability to autonomously select information sources available via APIs. We can definitely foresee machines evolving towards owning a significant part of the capabilities required to achieve self-recognition. In the CGC contest, the bots indeed needed to defend themselves and, as such, to identify security holes in the software they were running – which is equivalent to recognizing themselves. Still, it was a very narrowed-down application of discovery and exploration, with limited and structured models and data sources designed for the particular problem. It seems more like a composition of ready-made technologies customized for the particular issue posed by CGC than a real nonlinear jump in the evolution of AI.
Here, many trends are helping machines become more aware of their surroundings – from IoT, which is wiring up the physical world, to the digitization of many aspects of it, including human behaviour, such as Facebook profiles and Fitbit heart monitors. The data today is not easily accessible to machines, since it is distributed and highly variable in format and meaning; still, it exists, which is a good start. Humans, on the other hand, are again the most difficult nut to crack, for machines as well as for other people, as we know. Yet understanding people may not be that critical for machines, since a risk-averse machine need not go deep into understanding humans; it could simply decide to eliminate the risk factor. In the CGC contest, understanding the environment did not pose a great challenge, as the environment was highly controlled and documented; the contest again reused tools built for the particular problem of making sure one’s own security holes are not exploited by others while trying to penetrate the same or other holes in similar machines. On top of that, CGC created an artificial environment with a new, unique OS, set up to make sure vulnerabilities uncovered in the competition could not be used in the wild on real-life computers; a side effect was that the environment the machines needed to learn was not a real-life environment.
Goal setting and strategy crafting are things machines already do in many specific use-case-driven products – for example, setting the goal of maximizing the revenues of a stock portfolio and then creating and employing different strategies to reach it. These are goals designed and controlled by humans. We have not yet seen a machine given the top-level goal of survival. There are many developments in the area of business continuity, but they are still limited to tools aimed at tactical goals, not a grand goal of survivability. The goal of survival is fascinating in that it serves the interest of the machine itself, and when it is the only or primary goal, that is when it becomes problematic. The CGC contest was new in instilling the underlying goal of survivability into the bots, and although the implementation in the competition was narrowed down to a very particular use case, it still made many people think about what survivability may mean to machines.
The real risk posed by CGC was in sparking the thought of how we can teach a machine to survive; once that is achieved, Skynet may be closer than ever. Of course, no one can control or restrict the imagination of others, and survivability was on the minds of many before the challenge – but this time it was sponsored by DARPA. It is not new for plans aimed at one thing to eventually lead to entirely different results, and we will see in time whether the CGC contest started a fire in the wrong direction. In a way, we are today like the people of Zion as depicted in the Matrix movies: the machines in Zion do not control the people, but the people are entirely dependent on them, and shutting them down is out of the question. In this fragile duo, it is indeed wise to understand where AI research is going and which ways are available to mitigate certain risks – the same line of thought applied to nuclear weapons technology. One approach to risk mitigation is to think about more resilient infrastructure for the coming centuries, where it won’t be easy for a machine to seize control of critical infrastructure and enslave us.
It is now the 5th of August 2016, a few hours after the competition ended, and it seems that humanity is intact. As far as we can see.
The article will be published as part of the book of the TIP16 Program (Trans-disciplinary Innovation Program at Hebrew University), where I had the pleasure and privilege of leading the Cyber and Big Data track.
Chat bots are everywhere. It feels like the early days of mobile apps, when you either knew someone who was building an app or knew many others planning to do so. Chat bots have their magic: a frictionless interface that lets you chat naturally, the main difference being that on the other side there is a machine, not a person. Still, someone as old as I am has to wonder whether this is the end game of human-machine interaction or just another evolutionary step along its long path.
How Did We Get Here?
I’ve noticed chat bots for quite a while, and they piqued my curiosity concerning the possible use cases as well as the underlying architecture. What interests me more are the ambitions of Facebook and the other AI superpowers towards them. Chat bots are indeed the next step in human-machine communications. We all know where the history began: initially we had to communicate via a command line interface limited by a very strict vocabulary of commands, an interface reserved for the computer geeks alone. The next evolutionary step was the big wave of graphical user interfaces – ugly at first, but improving in significant leaps to make the user experience as smooth as possible, though still bounded by the available options and actions in a specific context of a particular application. Alongside graphical user interfaces, we were introduced to search-like interfaces, which mix graphical elements with a command line input that allows extensive textual interaction; here the GUI serves primarily as a navigation tool. Then other new human-machine interfaces were introduced, each evolving on its own track: the voice interface, the gesture interface (usually hands), and the VR interface. Each of these interaction paradigms uses different human senses and body parts to express communications to the machine, where the machine can understand you to a certain extent and communicate back. And now we have chat bots, and there is something about them which is different: in a way, it is the first time you can express yourself freely via texting and the machine will understand your intentions and desires. That is the premise. It does not mean each chat bot can respond to every request – chat bots are confined to the logic programmed into them – but from a language barrier point of view, a new peak has been reached. So are we now experiencing the end of the road for human-machine interactions?
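That confinement to programmed logic can be shown with the crudest possible bot: free text in, but only intents the author wired up can ever be acted on. The intents and keywords below are invented examples, and real bots use statistical language models rather than keyword lists.

```python
# Hypothetical intent table: the bot's entire "understanding"
# is whatever its author enumerated here.
INTENTS = {
    "check_weather": ["weather", "rain", "forecast"],
    "book_table": ["table", "reservation", "book"],
}

def detect_intent(utterance: str):
    """Map free-form text to a programmed intent, or None."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return None  # outside the bot's programmed domain

print(detect_intent("Will it rain tomorrow"))  # check_weather
print(detect_intent("Tell me a joke"))         # None
```

However sophisticated the language layer becomes, the second case is the point: anything outside the wired-up domain simply has no handler.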
Last week I met an extraordinary woman named Zohar Urian (lucky Hebrew readers can enjoy her super smart blog about creativity, innovation, marketing, and lots of other cool stuff), and she said that voice will be next, which makes a lot of sense. Voice has less friction than typing, its popularity in messaging is only growing, and the technology is almost there in terms of allowing free vocal expression that a machine can understand. Zohar’s sentence echoed in my brain and made me go deeper into understanding the anatomy of the evolution of human-machine interfaces.
The Evolution of Human-Machine Interfaces
The progress of human-machine interaction has evolutionary patterns. Every new paradigm builds on capabilities from the previous one, and eventually the rule of survival of the fittest plays a significant role: the winning capabilities survive and evolve. Thinking about it, it is very natural to grow this way, as the human factor in this evolution is the dominating one. Every change in this evolution can be decomposed into four dominating factors:
- The brain, or the intelligence within the machine – the intelligence that contains the logic available to the human, but also the capabilities that define the semantics and boundaries of communications.
- The communications protocol provided by the machine, such as the ability to decipher audio into words and sentences, enabling voice interaction.
- The way the human communicates with the machine, which is tightly coupled with the machine’s communication protocol but represents the complementary role.
- The human brain.
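The four factors above can be captured as a small record, so that each interface generation below becomes an instance of the same structure. The class itself is just a convenient illustration; the field values summarise the article's own characterisations.

```python
from dataclasses import dataclass

@dataclass
class InteractionParadigm:
    """One generation of human-machine interface, described by the
    four factors: machine brain, machine protocol, human protocol,
    and the (constant) human brain."""
    name: str
    machine_brain: str
    machine_protocol: str
    human_protocol: str
    human_brain: str = "Smart"  # the same in every generation

command_line = InteractionParadigm(
    name="Command Line, 1st Generation",
    machine_brain="Dumb, restricted to a set of commands",
    machine_protocol="Textual",
    human_protocol="Finger typing",
)

chat_bot = InteractionParadigm(
    name="AI Chatbots",
    machine_brain="Smarter, but still confined to a domain",
    machine_protocol="Textual",
    human_protocol="Free-form typing",
)

# Across generations, only the first three fields ever change.
print(command_line.human_brain == chat_bot.human_brain)  # True
```

Reading the generations as instances makes the article's closing observation explicit: the `human_brain` field never changes, while the other three widen over time.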
Command Line, 1st Generation
The first interface, used to send restricted commands to the computer by typing them on a textual screen.
Machine Brain: Dumb, restricted to a set of commands and a selection of options per system state
Machine Protocol: Textual
Human Protocol: Finger typing
Human Brain: Smart
Graphical User Interfaces
A 2D interface controlled by a mouse and a keyboard, allowing text input and the selection of actions and options.
Machine Brain: Dumb, restricted to a set of commands and a selection of options per system state
Machine Protocol: 2D positioning and textual
Human Protocol: 2D hand movement and finger actions, as well as finger typing
Human Brain: Smart
Adaptive Graphical User Interfaces
Same as the previous one, though here the GUI is more flexible in its possible input, thanks in part to situational awareness of the human context (location...).
Machine Brain: Getting smarter, able to offer a different set of options based on profiling of the user's characteristics; still limited to a set of options and 2D positioning and textual inputs
Machine Protocol: 2D positioning and textual
Human Protocol: 2D hand movement and finger actions, as well as finger typing
Human Brain: Smart
Voice Interface, 1st Generation
The ability to identify content represented as audio and to translate it into commands and input.
Machine Brain: Dumb, restricted to a set of commands and a selection of options per system state
Machine Protocol: Listening to audio and content matching within the audio track
Human Protocol: A restricted set of voice commands
Human Brain: Smart
Gesture Interface
The ability to identify physical movements and translate them into commands and the selection of options.
Machine Brain: Dumb, restricted to a set of commands and a selection of options per system state
Machine Protocol: Visual reception and content matching within the video track
Human Protocol: Physical movement of specific body parts in a certain manner
Human Brain: Smart
Virtual Reality
A 3D interface with the ability to identify the full range of body gestures and translate them into commands.
Machine Brain: A bit smarter, but still restricted to selection from a set of options per system state.
Machine Protocol: Movement reception via sensors attached to the body and projection of peripheral video.
Human Protocol: Physical movement of specific body parts in free form.
Human Brain: Smart.
AI Chatbots
A natural language detection capability that can identify, within supplied text, the constructs of human language and translate them into commands and input.
Machine Brain: Smarter and more flexible thanks to AI capabilities, but still restricted to a selection of options and capabilities within a certain domain.
Machine Protocol: Textual.
Human Protocol: Fingers typing in free form.
Human Brain: Smart.
Voice Interface, 2nd Generation
Same as the previous one, but combining a voice interface with natural language processing.
Machine Brain: Same as the previous one.
Machine Protocol: Identification of language patterns and constructs in audio content and translation into text.
Human Protocol: Free speech.
Human Brain: Smart.
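The contrast between the first and the most recent entries in the list above can be sketched in a few lines of code. The first-generation command-line machine brain is a fixed lookup table that rejects anything outside it, while a chatbot brain reduces free-form text to a known intent. A toy Python sketch, where the command names and intents are purely illustrative and real chatbots use statistical NLP rather than keyword matching:

```python
# First-generation "machine brain": a fixed command table.
# Anything outside the exact token set is rejected.
COMMANDS = {"dir": "listing files...", "copy": "copying...", "exit": "goodbye"}

def command_line(line: str) -> str:
    cmd = line.strip().lower()
    return COMMANDS.get(cmd, f"Bad command or file name: {cmd}")

# Chatbot "machine brain": free-form text is reduced to a known intent.
# Still restricted to a domain, but far more flexible in what it accepts.
INTENTS = {
    "list_files": {"show", "list", "files"},
    "quit": {"exit", "quit", "leave", "bye"},
}

def chatbot(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Pick the intent whose keyword set overlaps the utterance the most.
    best, best_score = "unknown", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

print(command_line("dir"))               # exact token -> "listing files..."
print(command_line("show me my files"))  # free form -> rejected
print(chatbot("show me my files"))       # free form -> "list_files"
```

Both brains are restricted to a finite set of outcomes; what changed is how much of the human protocol the machine can absorb before mapping it to one of them.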
Observations
Several phenomena and observations arise from this semi-structured analysis:
- Combining communication protocols such as voice and VR will extend the range of communication between humans and machines even without changing anything in the computer brain.
- Over time, more and more human senses and physical interactions become available for computers to understand, which extends the boundaries of communication. To date, neither smell nor touch has gone mainstream; I am pretty sure we will see them in the near future.
- The human brain always stays the same, and the rest of the chain always strives to match its capabilities. The chain can be viewed as a funnel that limits the human brain from fully expressing itself digitally, and over time the funnel gets wider.
- An interesting question is whether, at some point, the human brain itself will get stronger once communication with machines has no boundaries and AI is more powerful.
- We have not yet witnessed a serious leap that removes one of the elements in the chain, which is what I would call a revolutionary step (progress so far has been evolutionary). Perhaps the identification of brain waves and their real-time translation into a protocol understandable by a machine will be such a leap, removing the need to translate thoughts into some intermediate medium.
- Each time the machine brain becomes smarter in an evolutionary step, the magnitude of expression grows, so there is progress even without a more expressive communication protocol.
- From a communications point of view, chatbots are in a way a jump back to the initial protocol of the command line, though the magnitude of machine-brain smartness nowadays makes them a different thing. So it is really about the progress of AI, not about chatbots. I may have missed some interfaces; apologies, I am not an expert in that area :)
Now to the Answer
So, the answer to the main question: chatbots indeed represent a big step in streamlining natural language processing for identifying user intentions in writing. Combined with the fact that texting is users' favorite method of communication nowadays, this makes for powerful progress. Still, the thing that truly thrills here is the development of AI, which is sustainable across all communication protocols. In simple words, chatbots are just an addition to the arsenal of communication protocols between humans and machines, and we are far from seeing the end of this evolution. From the Facebook and Google point of view, these are new interfaces to their AI capabilities, which grow stronger every day thanks to increased usage.
Food for Thought
If one conscious AI meets another conscious AI in cyberspace, will they communicate via text, voice, or something else?
My article Emotional Components drive Cyber Strategies was published in Security Info Watch.
Smartphones will soon become the target of choice for cyber attackers—making cyber warfare a personal matter. The emergence of mobile threats is nothing new, though until now it has mainly been a phase of testing the waters and building an arms arsenal. Evildoers are always on the lookout for weaknesses—the easiest to exploit and the most profitable. Now it is mobile's turn. We are witnessing a historic shift in focus from personal computers, the long-time classic target, to mobile devices. And of course, a lofty rationale lies behind this change.

Why Mobile?
The dramatic increase in the use of mobile apps touching nearly every aspect of our lives, the explosive growth in mobile web browsing, and the monopoly that mobile has on personal communications make our phones a worthy target. In retrospect, we can safely say that most security incidents are our fault: the more we interact with our computer, the higher the chances that we will open a malicious document, visit a malicious website, or mistakenly run a new application that wreaks havoc on our computer. Attackers have always favored human error, and what is better suited to expose these weaknesses than a computer so intimately attached to us 24 hours a day?

Mobile presents unique challenges for security. Software patching is broken: the rollout of security fixes for operating systems is anywhere from slow to non-existent on Android, and cumbersome on iOS, and Android's dire fragmentation has been the Achilles' heel of patching. Apps are not kept up to date either; tens of thousands of micro independent software vendors are behind many of the applications we use daily, and security is the last concern on their minds. Another major headache arises from the blurred line between the business and private roles of the phone.
A single tap on the screen takes you from your enterprise CRM app to your personal WhatsApp messages, to a health-tracking application that contains a database of every vital sign you have shown since you bought your phone.

Emerging Mobile Threats
Mobile threats are growing quickly in number and variety, mainly because attackers are well-equipped and well-organized—this is happening at an alarming pace, unparalleled by any previous emergence of cyber threats in other computing categories. The first big wave of mobile threats to expect is cross-platform attacks, such as web browser exploits, cross-site scripting, or ransomware: the repurposing of field-proven attacks from the personal computer world onto mobile platforms. One area of innovation is the methods of persistence employed by mobile attackers; these will be highly difficult to detect, hiding deep inside applications and different parts of the operating system. A new genre of mobile-only attacks targets weaknesses in hybrid applications. Hybrid applications are called thus because they use the internal web browser engine as part of their architecture, and as a result introduce many uncontrolled vulnerabilities. A large portion of the apps we are familiar with, including many banking-oriented ones and applications integrated into enterprise systems, were built this way. These provide an easy path for attackers into the back-end systems of many different organizations. The dreaded threat of botnets overflowing onto mobile phones has yet to materialize, though it will eventually happen, as it did on all other pervasive computing devices: wherever there is enough computing power and connectivity, bots appear sooner or later. With mobile, it will be major, as the number of devices is high. App stores continue to be the primary distribution channel for rogue software, as it is almost impossible to identify malicious apps automatically, quite similar to the challenge sandboxes face with evasive malware.
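The hybrid-application weakness described above is, at its core, the classic injection problem: untrusted content reaching the app's embedded browser engine without sanitization. A minimal Python sketch of the difference between unescaped and escaped interpolation; the template, payload, and function names are illustrative, not taken from any real app:

```python
import html

def render_unsafe(comment: str) -> str:
    # Vulnerable: user content is interpolated directly into the page
    # that the app's embedded browser engine will render and execute.
    return f"<div class='comment'>{comment}</div>"

def render_safe(comment: str) -> str:
    # Escaping neutralizes markup so it is displayed as text, not executed.
    return f"<div class='comment'>{html.escape(comment)}</div>"

payload = "<script>stealSessionToken()</script>"
print(render_unsafe(payload))  # script tag survives intact
print(render_safe(payload))    # &lt;script&gt;... rendered as inert text
```

The point is not this particular fix but the architecture: a hybrid app multiplies the number of places where such interpolation happens, and each micro-vendor behind each app must get all of them right.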
The security balance in the mobile world is on the verge of disruption, proving to us yet again that, as far as cyber security goes, we are ultimately at the mercy of the bad guys. This is the case at least for the time being, as the mobile security industry is still in its infancy, playing serious catch-up. A variation of this story was published on Wired.co.uk - Hackers are honing in on your mobile phone.
Hackers are honing in on your mobile phone published in Wired
The upcoming tsunami of IoT and Israel's golden opportunity published in Globes IT in Hebrew.