People create technologies to serve a purpose. The creator starts with a goal in mind, goes through a design phase, and then builds a technology-based system that can achieve that goal. For example, someone created Google Docs to allow people to write documents online. A system is a composition of constructs and capabilities intended to be used in a certain way. Designers always aspire to generalization in their creations so they can serve other potential uses, enjoying the reuse of technologies and resources. This path, which starts with a purpose and proceeds through design, construction, and eventually usage, is the primary paradigm of technological tools.
The challenge arises when technological creations are abused for unintended purposes. Every system has a theoretical spectrum of possible usages dictated by its capabilities, and it may even be impossible to grasp the full potential. The gap between potential and intended usages is the root of most, if not all, cybersecurity problems. The inherent risk in artificial intelligence lies in the same weakness of purpose vs. actual usage. Millions of examples come to mind, from computer viruses abusing standard operating system mechanisms to do harm, up to the recent abuse of Facebook's advertising network to control the minds of US citizens during the last elections. The pattern is not unique to technologies alone; it is a general attribute of tools, though information technologies, with their far reach, have elevated the risk of misuse.
One way to tackle this weakness is to add a phase to the design process that evaluates the boundaries of each new system's potential usages and devises a self-regulating framework, so that each system has its own self-regulatory capability. This effort should take place during the design phase but also be revisited continuously, as the intersection of technologies creates other potential uses. It is a first and fundamental principle in the emerging paradigm of security by design. Any protective measure added after the design phase will incur higher implementation costs while its efficiency is reduced; the later a self-regulating protection is applied, the greater the reduction in its effectiveness.
Security in technologies should stop being an afterthought.
Random Thoughts on Cyber Security, Artificial Intelligence, and Future Risks at the OECD Event – AI: Intelligent Machines, Smart Policies
It is the end of the first day of a fascinating event on artificial intelligence, its impact on societies, and how policymakers should act upon what seems like a once-in-a-lifetime technological revolution. As someone rooted deeply in the world of cybersecurity, I wanted to share my point of view on what the future might hold.
The Present and Future Role of AI in Cyber Security and Vice Versa
Every day we witness remarkable new results in the field of AI, and still it seems we have only scratched the surface. Developments that have reached a certain level of maturity can be seen mostly in object and pattern recognition, part of the greater field of perception, and in different branches of reasoning and decision making. AI has already entered the cyber world via defense tools, where most of the applications we see detect malicious behavior in programs and network activity, and a first level of reasoning is used to deal with the information overload in security departments by helping prioritize incidents.
AI has far greater potential contributions in other fields of cybersecurity, both existing and emerging:
A big industry-wide challenge where AI can be a game changer is the scarcity of cybersecurity professionals. Today there is a significant shortage of cybersecurity professionals, who are required to perform tasks ranging from maintaining companies' security configurations to responding to security incidents. ISACA predicts a shortage of two million cybersecurity professionals by 2019. AI-driven automation and decision making have the potential to handle a significant portion of the tedious tasks professionals fulfill today, reducing the workload to the jobs that require the touch of a human expert.
Pervasive Active Intelligent Defense
The extension into active defense is inevitable, and AI has the potential to address a significant portion of the threats that today's deterministic solutions can't handle properly, most effectively against automated threats with high propagation potential. An efficient embedding of AI inside active defense will take place in all system layers, such as the network, operating systems, hardware devices, and middleware, forming a coordinated, intelligent defense backbone.
The Double-Edged Sword
A threat yet to emerge will be cyber attacks that are themselves powered by AI. The tools, algorithms, and expertise of artificial intelligence are widely accessible, and cyber attackers will not refrain from abusing them to make their attacks more intelligent and faster. When this threat materializes, AI will be the only possible mitigation: such attacks will be fast, agile, and of a magnitude that existing defense tools have not yet experienced. A new genre of AI-based defense tools will have to emerge.
Privacy at Risk
Consumers' privacy as a whole is sliding down a slippery slope as more and more companies collect information on us: structured data such as demographic information, and behavioral patterns studied implicitly while we use digital services. Extrapolating the amount of data collected, combined with the new capabilities of big data and the multitude of new IoT devices entering our lives, we reach an unusually high number of data points per person. Large amounts of personal data distributed across different vendors, residing on their central systems, increase our exposure and create greenfield opportunities for attackers to abuse and exploit us in unimaginable ways. Tackling this risk requires both regulation and the use of technologies such as blockchain, and AI technologies also have a role: the ability to monitor what is collected on us, possibly moderating what is actually collected vs. what should be collected with regard to the rendered services, and quantifying our privacy risk, is a task for AI.
In recent years we have seen, at an ever-accelerating pace, new methods of authentication and, in correspondence, new attacks breaking those methods. Most authentication schemes are based on a single aspect of interaction with the user in order to keep the user experience as frictionless as possible. AI can play a role in creating robust and frictionless identification methods that take into account vast amounts of historical and real-time multi-faceted interaction data to accurately deduce the person behind the technology.
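As a toy illustration of the idea (everything here is hypothetical: the feature, the thresholds, and the data), even a profile built from a single behavioral signal, inter-keystroke timing, can separate a typical session from a bot-like one. A real system would fuse many such signals with far richer models:

```python
import statistics

def build_profile(keystroke_intervals):
    """Build a per-user profile from historical inter-keystroke timings (ms)."""
    return {
        "mean": statistics.mean(keystroke_intervals),
        "stdev": statistics.stdev(keystroke_intervals),
    }

def matches_profile(profile, session_intervals, z_threshold=3.0):
    """Return True if the session's average timing is within the user's norm."""
    session_mean = statistics.mean(session_intervals)
    z = abs(session_mean - profile["mean"]) / profile["stdev"]
    return z < z_threshold

# Hypothetical historical data for one user (milliseconds between keystrokes)
history = [110, 120, 115, 130, 125, 118, 122, 128, 119, 121]
profile = build_profile(history)

print(matches_profile(profile, [117, 123, 120, 126]))  # True: typical session
print(matches_profile(profile, [40, 45, 42, 38]))      # False: bot-like, too fast
```

The check runs silently in the background, which is exactly what keeps the extra authentication factor frictionless.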
AI can contribute to our safety and security in the future far beyond this short list of examples. Areas where the number of data points increases dramatically and automated decision-making under uncertainty is required are the right spots for AI as we know it today.
Is Artificial Intelligence Worrying?
The underlying theme in many AI-related discussions is fear, a very natural reaction to a transformative technology that has played a role in many science fiction movies. Breaking down the horror, we see two parts: the fear of change, which is inevitable as AI is indeed going to transform many areas of our lives, and the more primal fear of the emergence of soulless machines aiming to annihilate civilization. I see the threats and opportunities staged into different phases: the short term, the medium term, the long term, and the really long term.
The short term practically means the present, and the primary concerns are in the area of hyper-personalization, which in simple terms means all the algorithms that get to know us better than we know ourselves: an extensive private knowledge base exploited toward goals we never dreamt of. Take, for example, the whole concept of microtargeting in advertising and social networks, as we witnessed in the recent elections in the US. Today it is possible to build an intelligent machine that profiles citizens by demographic, behavioral, and psychological attributes. At a second stage, the machine can exploit the micro-targeting capability available on advertising networks to deliver personalized messages disguised as adverts, where the content and design of the adverts are adapted automatically to each person with the goal of changing the public state of mind. It happened in the US and can happen anywhere, which poses a severe risk to democracy. The root of this short-term threat resides in the loss of truth, as we are bound to consume most of our information from digital sources.
We will witness a big wave of automation that will disrupt many industries, on the assumption that whatever can be automated, whether it is bound to a logical or a physical effort, will eventually be automated. This wave will have a dramatic impact on society, many times improving our lives, as in the detection of diseases, which can be faster and more accurate without human error. These changes across industries will also have side effects that challenge society, such as increasing economic inequality, mostly hurting those who are already weak. It will widen the gap between knowledge workers and others and will further intensify the new inequality based on access to information: people with access to information will have a clear advantage over those who don't. It is quite difficult to predict whether the impact on some industries will be short-term, with workers flowing to other sectors, or whether it will cause overall stability problems; it is a topic that should be studied further for each industry expecting a disruption.
The Longer Term
We will see more and more intelligent machines that hold the power over human life and death: an autonomous car that can kill someone on the street, or an intelligent medicine-dispensing device that can kill a patient. The threat is driven by malicious humans who will hack the logic of such systems. Many of the smart machines we are building can be abused to give superpowers to cyber attackers. It is a severe problem, as protection from such a threat cannot be achieved by adding controls into the artificial intelligence itself; the risk comes from intelligent humans with malicious intentions and high capabilities.
The Really Long Term
This threat still belongs to science fiction: the case where machines turn against humanity while owning the power to cause harm and to self-preserve. From a technological point of view, such an event could happen, even today, if we decided to put our fate in the hands of a malicious algorithm that can preserve itself while having access to capabilities that can harm us. The risk here is that society will build AI for good purposes while other humans abuse it for other purposes, which will eventually spiral out of everyone's control.
What Policy Makers Should Do To Protect Society
Before addressing specific directions, a short discussion of the limits of policymakers' power in the world of technology and AI is required. AI is practically a genre of techniques, mostly software-driven, and more and more individuals around the globe are equipping themselves with the capability to create software and, later, to work on AI. In a very similar fashion to the written word, software is the new way to express oneself, and aspiring to control or regulate that is destined to fail. The same goes for the exchange of ideas. Policymakers should understand these changed boundaries, which dictate new responsibilities as well.
Areas of Impact
One area where central intervention can become a protective measure for citizens is the way private data is collected, verified, and, most importantly, used. Without data, most AI systems cannot operate, so data can be an anchor of control.
Cyber Crime & Collaborative Research
Another area of intervention should be the way cybercrime is handled by law enforcement, where there are missing parts in the puzzle, such as attribution technologies. Today, attribution is a field of cybersecurity that suffers from under-investment, as it is in a way without commercial viability. Centralized investment is required to build the foundations of attribution into the future digital infrastructure. There are other areas in the cyber world where investment in research and development is in the interest of the public rather than of a single commercial company or government, which calls for joint research across nations. One fascinating area of research could be how to use AI in regulation itself, especially in the enforcement of regulation, understanding that humans' reach in a digital world is too short for effective implementation. Another idea is building accountability into AI, where we will be able to record decisions taken by algorithms and hold them accountable. Documentation of those decisions should reside in the public domain while maintaining the privacy of the vendors' intellectual property. Blockchain, as a trusted distributed ledger, can be the perfect tool for saving such evidence of truth about decisions taken by machines, evidence that can stand in court. An example project in this field is Operation Serenata de Amor, a grassroots open-source project built to fight corruption in Brazil by using AI to analyze public expenses and look for anomalies.
A significant paradigm shift policymakers need to take into account is the long strategic move from centralized systems to distributed technologies, as the latter present far fewer vulnerabilities. A roadmap of centralized systems that should be transformed into distributed ones should eventually be studied and created.
Challenges for Policy Makers
- Today AI advancement is considered a competitive frontier among countries, and this leads to a state where many developments are kept secret. This path leads to a loss of control over technologies and especially over their potential future abuse beyond the original purpose. The competitive phenomenon creates a serious challenge for society as a whole. It is not clear why people treat weapons so much more harshly than advanced information technology, which can eventually cause more harm.
- Our privacy is abused by market forces pushing for profit optimization, where consumer protection is at the bottom of the priorities: conflicting forces at play for policymakers.
- People across the world differ in many aspects, while AI is a universal language; setting global ethical rules vs. national preferences creates an inherent conflict.
- The question of ownership of, and accountability for, algorithms in a world where algorithms can create damage is an open one with many diverse opinions. It gets complicated since the platforms are global while the rules are often local.
- What alternatives are there beyond the basic income idea for the millions who won't be part of the knowledge ecosystem, as it is clear that not every person who loses a job will find a new one? Pre-emptive thinking should be conducted to prevent market turbulence in disrupted industries. An interesting question is how the growth of the planet's population impacts this equation.
The main point I took from today is to be careful when designing AI tools designated for a specific purpose, and to consider how they can be exploited to achieve other ends.
UPDATE: Link to my story on the OECD Forum Network.
Recently I’ve been thinking about the intersection of blockchain and AI, and although several exciting directions arise from the intersection of these technologies, I want to explore one direction here.
One of the hottest discussions in AI is whether to constrain AI with regulation and ethics to prevent an apocalyptic future. Without going into whether it is right or wrong to do so, I think blockchain can play a crucial role if such a future direction materializes. There is a particular group of AI applications, mostly involving automated decision making, that can impact life and death. For example, an autonomous driving algorithm can take a decision that eventually ends in an accident and loss of life. In a world where AI is enforced to comply with ethics, accountability will be its most crucial aspect. To create the technological platform for accountability, we need to be able to record decisions taken by algorithms. Documenting those decisions can take place inside the vendor's database or in a trusted distributed ledger. Recording decisions in the vendor's database is the natural path for implementing such a capability, though it suffers from a lack of neutrality, authenticity, and integrity. In a way, such a decision is a piece of knowledge that should reside in the public domain while maintaining the privacy of the vendor's intellectual property. Blockchain, as a trusted distributed ledger, can be the perfect paradigm for saving such evidence of truth about decisions taken by machines, evidence that can stand in court.
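A minimal sketch of what such a record could look like, using a hash-chained, append-only log as a simplified stand-in for a real distributed ledger (the vendor name and decisions below are invented for illustration):

```python
import hashlib
import json

class DecisionLedger:
    """A tamper-evident, append-only log of algorithmic decisions.

    Each entry embeds the hash of the previous one, so rewriting history
    breaks the chain. A real blockchain would add distribution and consensus
    on top of this core property."""

    def __init__(self):
        self.entries = []

    def record(self, vendor, decision, context):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"vendor": vendor, "decision": decision,
                "context": context, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        """Recompute every hash; any edit to past entries is detected."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("vendor", "decision", "context", "prev")}
            if e["prev"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

ledger = DecisionLedger()
ledger.record("acme-autodrive", "emergency_brake", {"speed_kmh": 87})
ledger.record("acme-autodrive", "swerve_left", {"speed_kmh": 85})
print(ledger.verify())                      # True: chain intact
ledger.entries[0]["decision"] = "continue"  # tamper with history
print(ledger.verify())                      # False: tampering detected
```

Note that only the hashes need to live in the public domain; the decision bodies themselves could stay private to the vendor and be revealed selectively, which is one way to reconcile public accountability with intellectual property.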
The question is whether such a blockchain will be a neutral middleware shared by the auto vendors or a service rendered by the government.
I got a call last night asking whether I wanted to come on the morning show on TV and talk about Google’s recent findings of allegedly Russian-sponsored political advertising, advertising that could have impacted the last US election results, joining similar discoveries on Facebook and Twitter, with Microsoft now also looking for clues. At first I wanted to say, what is there to say about it, but still I agreed, as a recent hobby of mine is being a guest on TV shows :)
So this event got me reading about the subject quite a bit late at night and early this morning to be well prepared, and the discussion was good, a bit light as expected from a morning show but informative enough for its viewers. What struck me later on, while contemplating the actual findings, was the significant vulnerability uncovered in this incident, the mere exploitation of that weakness by the Russians (allegedly), and the hazardous path technology has taken us down in recent decades while changing human behavior.
The Russian Intervention Theory
To summarize it: there are political forces and citizens in the United States who are worried about the depth of Russian intervention in the elections, and part of that is whether social networks and digital mediums were exploited via digital advertising, and to what extent. The findings so far show that advertising campaigns costing tens of thousands of dollars were launched via organizations that seem to be tied to the Russians, and these findings span the most prominent social networks and search engines. The public does not yet know the nature of the advertising on each platform, who is behind these adverts, and whether the advertisers cooperated with the people behind Trump’s campaign. This lack of information, and especially the unknown nature of the suspicious adverts, leads to many theories, and although my mind is full of crazy ideas, it seems that sharing them would only push the truth further away, so I won’t. The nature of the adverts is the most important piece of the puzzle, since based on their content and variation patterns one can deduce whether they played a synergistic role with Trump’s campaign and what the thinking behind them was, especially given that the discovered campaigns are strangely uniform in budget across all the advertising networks. As the story unfolds, we will become wiser.
How To Tackle This Threat
This phenomenon is of concern to any democracy on the planet whose citizens spend enough time on digital mediums such as Facebook, and there are some ways to improve the situation:
Advertising networks make their money from adverts. The core competence of these companies is to know who you are and to promote commercial offerings in the most seamless way. Advertisements of a political nature, with no commercial offering behind them, abuse this targeting and delivery mechanism to control the mindset of people. The same happens in advertisements on television, but on TV there is control over such content. There is no rational reason why digital advertising networks should get a free pass to let anyone broadcast any message on their networks without any accountability in the case of non-commercial offerings. These networks were not built for brainwashing, and the customers, us, deserve a high level of transparency here, supervised and enforced by the regulator. So if there is an advert that is not of a commercial nature, it should be emphasized that it is an advert (many times adverts blend so well with the content that even identifying them is a difficult task), along with the source of funding for the advert and a link to the funder's website. If the advertising networks team up to define a self-enforced code of ethics among themselves, maybe regulation is not needed. At the moment we, the users, are misled and hurt by the way their service is rendered.
The primary advertising networks (Facebook, Google, Twitter, Microsoft) have vast machine learning capabilities, and they should employ them to identify anomalies. Assuming regulation will be in place, whether governmental or self-imposed, there will be groups that try to exploit the rules, and here comes the role of technology in identifying deviations from them: whether identifying the source of funding of a campaign automatically and alerting on such anomalies in real time, or identifying automated strategies such as brute-force A/B testing done by an army of bots. Investing in technology is how to make sure everyone is complying with the house rules. Part of such an effort is opening up the data about advertisers and campaigns for non-commercial products to the public, allowing third-party companies to work on identifying such anomalies and to innovate in parallel to the advertising networks. The same goes for other elements of the networks that can be abused, such as Facebook pages.
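As a toy sketch of the anomaly-identification idea (the campaign data and threshold are hypothetical, and production systems would use far richer models than a simple z-score screen), a first pass could flag campaigns whose spend deviates sharply from the population norm:

```python
import statistics

def flag_anomalous_campaigns(campaigns, z_threshold=2.5):
    """Flag campaigns whose daily spend deviates sharply from the norm."""
    spends = [c["daily_spend"] for c in campaigns]
    mean = statistics.mean(spends)
    stdev = statistics.stdev(spends)
    return [c["id"] for c in campaigns
            if abs(c["daily_spend"] - mean) / stdev > z_threshold]

# Hypothetical data: ten ordinary campaigns and one heavy spender
campaigns = [{"id": f"c{i}", "daily_spend": 100} for i in range(10)]
campaigns.append({"id": "x1", "daily_spend": 5000})

print(flag_anomalous_campaigns(campaigns))  # ['x1']
```

Flagged campaigns would then go to a human reviewer or a deeper model checking funding sources, content variation patterns, and bot-like behavior.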
Last Thoughts on the Incident
- How come no one identified the adverts in real time during the elections? I would imagine there were complaints about specific ads, so how come no complaint escalated into deeper research on a specific campaign? Maybe there is too much reliance on the bots that manage the self-service workflow of such advertising tools – the dark side of automation.
- Looking for digital signs that the Russians coordinated this campaign with the Trump campaign seems far-fetched to me. The whole idea of a parallel campaign is separation; synchronization, if it took place, was probably done verbally, without any digital traces.
- Mapping the demographic database allegedly created by Cambridge Analytica onto the targeting taxonomy of Facebook, for example, yields an extremely powerful tool for A/B testing via microtargeting: a perfect, cost-efficient tool for mind control.
- Why does everyone assume the Russians are in favor of Trump? No one raises the possibility that maybe the Russians had a different intention, or perhaps it was not them at all. It reminds me a lot of the fruitless efforts to attribute cyber attacks.
More thoughts on the weaknesses of systems, and what can be done about them, in a future post.
The story of the Tower of Babel (or Babylon) has always fascinated me: God felt seriously threatened by humans if only they would all speak the same language, and to prevent that, God confused the words spoken by the people on the tower and scattered them across the earth. Regardless of one's personal religious beliefs about whether it happened, the underlying theory of growing power when humans interconnect is intriguing, and we live at a time when this truth is evident. Writing, print, the Internet, email, messaging, globalization, and social networks all connect humans, connections which dramatically increase humanity's competence on many different frontiers. The development of science and technology can be attributed to communication among people; as Isaac Newton once said, we are "standing on the shoulders of giants." Still, our spoken languages are different, and although English has become a de facto language for doing business in many parts of the world, there are many languages across the globe, and the communication barrier is still there. History has also seen multiple efforts to create a unified language, such as Esperanto, which did not work out eventually. Getting everyone to speak the same language seems almost impossible, as language is taught at a very early age, and changing that requires a level of synchronization, cooperation, and motivation that does not exist. Even taking into account the recent, highly impressive developments in natural language processing, which achieve real-time translation, the presence of the medium will always interfere: a channel in the middle creates conversion overhead and loss of context and meaning.
Artificial intelligence may be on a path to change that, reverting the story of the Tower of Babel.
Different emerging fields in AI have the potential to merge and turn into a platform used for communicating with others without going through the process of lingual expression and recognition:
Avatar to Avatar
One direction this may take is that our avatar, our residual digital image in some cloud, will be able to communicate with other avatars in a unified, language-agnostic way. Google, Facebook, and Amazon build complex profiling technologies today, aimed at understanding users' needs, wishes, and intentions; currently they do so to optimize their services. Adding to these capabilities a means of expressing intentions and desires on one side, and understanding capabilities on the other, can lead to the avatar-to-avatar communication paradigm. It will take a long time until these avatars reflect our true selves in real time, but many communications can take place even before that. As an example, let's say my avatar knows what I want for my birthday, and my birthday is coming soon. My friend's avatar can ask my avatar at any point in time what I want to get for my birthday, and my avatar can respond in a very relevant manner.
The second path is in line with Elon Musk's Neuralink concept or Facebook's brain-integration idea. Here the brain-to-world connectors will be able not only to output our thoughts to the external world in a digital way but also to understand each other's thoughts and transcode them back into our brains: brain-to-world-to-brain. One caveat in this direction is the assumption that our brains are structured in an agnostic manner based on abstract, transferable concepts; if each brain's wiring is subjective to the individual's constructs of understanding, the digestion of others' thoughts will be impossible.
A big difference between today and the times of Babylon is the size of the population, which makes the potential of such wiring explosive.
Softbank acquired Boston Dynamics, the maker of four-legged robots, alongside the secretive Schaft, a maker of two-legged robots. Softbank, the perpetual acquirer of emerging leaders, has entered a foray into artificial life, diluting its stakes in media and communications and setting a stronghold across the full supply chain of artificial life. The chain starts with chipsets, where ARM was acquired; a quarter of that holding was later divested, since Google (with its TPU) and others have shown that specialized processors for artificial life are no longer the stronghold of giants such as Intel. The next move was acquiring a significant stake in Nvidia. Nvidia is the leader in general-purpose AI processing, but more interesting for Softbank are its themed vertical endeavors, such as the package for autonomous driving. These moves set a firm stance at the two ends of the supply chain: the processors and the final products. It lays down a perfect position for creating a Tesla-like company (through holdings) that can own the newly emerging segment of artificial creatures. It remains to be seen what the initial market for these creatures will be, whether consumer or defense; their position in the chipset domain will allow them to make money either way. The big question is what the next big acquisition target in AI will be. It has to be a significant anchor in the supply chain, right between the chipsets and the final products, and such an acquisition will reveal the ultimate intention as to which artificial creatures we will see first coming into reality. A specialized communications infrastructure for talking to the creatures efficiently (maybe their satellite activity?) as well as some cloud processing framework would make sense.
P.S. The shift from media into AI is a good hint as to which market has already matured and which one is emerging.
P.S. What does this say about Alphabet, the fact they sold Boston Dynamics?
P.S. I am curious to see their stance toward patents in the world of AI.
Random thoughts regarding Mary Meeker’s Internet Trends 2017 report:
The main question that popped into my mind was: where are the rest of the people? Today there are 3.4B internet users, while the world has a population of 7.5B. It could be interesting to see who the other, non-digital 4 billion humans are, both for understanding the growth potential of the internet user base (by the level of difficulty of penetrating the different remaining segments) and for identifying unique social patterns in general. Understanding the social demographics of the 3.4B connected ones would be valuable as well, as a baseline for understanding the rest of the statistics in the presentation.
Another interesting fact is that global smartphone shipments grew by 3% while the installed base grew by 12%; that gap represents the pace of the slowdown in global smartphone market growth and can be used as a predictor for the next few years.
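To see why this gap predicts a slowdown, consider a toy projection (all numbers below are hypothetical, chosen only to reproduce a ~12% first-year base growth): as long as shipments grow more slowly than the installed base, the ratio of shipments to base shrinks, so base growth must decelerate year after year.

```python
def project(base, shipments, shipments_growth, retirement_rate, years):
    """Project installed-base growth when shipments grow slower than the base.

    Each year: new base = old base + units shipped - units retired."""
    rates = []
    for _ in range(years):
        retirements = retirement_rate * base
        new_base = base + shipments - retirements
        rates.append((new_base - base) / base)
        base = new_base
        shipments *= 1 + shipments_growth
    return rates

# Hypothetical figures, in billions of units
rates = project(base=2.5, shipments=0.45, shipments_growth=0.03,
                retirement_rate=0.06, years=5)
print([round(r, 3) for r in rates])  # growth decelerates every year
```

Under these assumptions the base growth rate falls steadily from 12% toward the shipments growth rate, which is the mechanical slowdown the report's gap hints at.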
It is interesting to see that iOS market share in the smartphone world follows a pattern similar to the Mac's in the PC world. In the smartphone world Apple's market share is a bit higher than its PC market share, but it carries similar proportions.
The way ad spending fills the gap with time spent per medium over the years nicely follows the physical law of conservation of mass: print out, mobile in.
Measuring advertising ROI is still a challenge even when advertising channels have become fully digital – a symptom of the offline/online divide in conversion tracking, which has not been bridged yet.
It seems there is a connection between the massive popularity of ad blockers on mobile and the advertising potential on mobile: if so, the suggested potential cannot be fulfilled due to ad blockers and users' low tolerance for ads on mobile, which is perhaps the reason ad blockers are so popular on mobile in the first place.
99% accurate tracking is phenomenal, though the question is whether it can scale as a business model: will a big enough audience opt in for such tracking, and what will be done about the battery drain resulting from it? This hyper-monitoring, if achieved on a global scale, will become an interesting privacy and regulation debate.
Amazon Echo numbers are still small regardless of the hype level.
It could be fascinating to see the level of usage of skills. The number of skills is very impressive but may be misleading (many see a resemblance to the hyper-growth in apps). The increase in the apps world was not only in the number of apps created but also in the explosive growth in usage (downloads, purchases); here we see only the inventory.
This, of course, is a serious turning point in the world of user interfaces and will be reflected in many areas, not only in home assistants.
2.4B gamers?!? The fine print says you only need to play a game at least once every three months, which is not a gamer by my definition.
Do these numbers include shadow IT in the cloud, or do they reflect concrete usage of cloud resources by the enterprise? There is a big difference between an organization deploying data-center workloads into the cloud and using a product, such as Salesforce, that is partially hosted in the cloud behind the scenes. These represent totally different states of mind in terms of overcoming cloud inhibitions.
The reduction in concerns about data security in the cloud is a good sign of maturity and adoption. The cloud can be as secure as any data-center application, and often much more so, though many are still afraid of that uncertainty.
The reasons cloud applications are categorized as not enterprise-ready are not necessarily security weaknesses. Adoption of cloud products inside the enterprise follows other paths, such as the level of integration with other systems and the customization fit to the specific industry.
The reason for the weaponization of spam is simply the higher revenue potential for spam botnet operators. Sending plain spam can earn you money; sending malware can make you much more.
Remarkable to see that the founders of the largest tech companies are second- and third-generation immigrants.
That’s all for now.
IBM's stock was hit severely in the past month, mostly due to disappointment with the latest earnings report. It wasn't a real disappointment, but IBM had built up expectations around its ongoing turnaround, and the recent earnings announcement poured cold water on the growing enthusiasm. This post is about IBM's story, but it carries a moral that applies to many other companies going through disruption in their industry.
IBM is an enormous business with many product lines, intellectual-property reserves, large customer/partner ecosystems, and a big pile of cash. IBM has been disrupted over the past decade by various megatrends, including cloud, mobile computing, and software as a service. IBM started a turnaround that became visible to the investor community at the beginning of 2016, a significant change executed quite efficiently across different product lines. This disruption found many other tech companies unprepared: a classic tech disruption where new entrants need to focus only on next-generation products while established players play catch-up. An unfair situation where the big players carry the burden of what was, not so long ago, fresh and innovative. IBM's turnaround was about refocusing on cognitive computing, a.k.a. AI, and although the turnaround is being executed very professionally, the shackles of the past prevent the company from pleasing the impatient investor community.
Can Every Business Turn Around?
A turnaround, or a pivot as it is coined in the startup world, means changing the business plan of an existing enterprise toward a new market or audience, requiring a different set of skills, products, and technologies. Pivoting in the startup world is a special case of a general business turnaround. In a nutshell, every business at any point in time owns a set of offerings (products/technologies) and cash reserves. Each offering has customers, prospects, partners, and the costs incurred in creating and delivering the offering to the market. In an industry that is not being disrupted, the equation of success is quite simple: the money you make on sales of your offerings should exceed the attached costs. In the early phases of new-market creation, it makes sense to wait for that equation to come into play by investing more cash in building the right product as well as establishing excellent access to the market. Disruption is first spotted when it becomes hard to grow at the same or a higher rate and a fundamental change to the offerings, such as a full rebuild, is needed. This happens when new entrants have an economic advantage in entering the market or create a new overlapping market. When a market is in its early days of disruption, the large enterprises mostly watch and hope for the latest trends to fade away. Once the winds of change blow too strong, new thinking is required.
A Disruption is Happening – Now What
Once the changes in the market ring the alarm bell at the top floors, management can take one or more of the following courses of actions:
- Buy into the trend by acquiring technologies/products/teams/early market footprints. The challenges in this course are an efficient absorption of the acquired assets as well as an adaptation of the existing operations towards a new direction based on the newly acquired capabilities.
- Create a new line of products and technologies in-house from scratch, realigning existing operations into a dual mode of operation: maintaining the old while building the new. The dual offerings co-exist until a successful internal transfer of leadership to the new product lines takes place.
- Build/Invest in a new external entity that is set to create the future offering in a detached manner. The ultimate and contradicting goal of the new business is to eventually cannibalize the existing product lines towards leadership in the market — a controlled competitor.
Each path creates a multitude of opportunities and challenges. Eventually, a gameplan should be devised based on the particular posture of the company and the target market supply chain.
Contemplating About A Turnaround
From a bird's-eye view, all forms of turnaround share common patterns. Every turnaround has costs: direct costs of the investment in new products and technologies, as well as indirect costs created by the organizational transformation. These expenses are incurred on top of keeping the existing business lines healthy and growing, and they are allocated from cash reserves or new capital raised from investors. Either way, it is a limited pool of capital, which requires a well-balanced yet aggressive plan with almost no room for mistakes. Any mistake will hurt either the innovation efforts or the margins of the current lines of business, and for public companies neither is forgivable. Time is also of the essence, and fast execution is critical. If mistakes happen, the path can turn into a slippery slope very quickly.
Besides the financial challenges of running a successful turnaround, there is a multitude of psychological, emotional, and organizational issues hanging in the air. First and foremost is the feeling of loss around sunk costs. Usually, before the need for a turnaround is grasped, there are many efforts to revive existing business lines through different investments, such as linear evolution of products, reorganizations, rebranding, and new partnerships. These cost a lot of money, and by the time the understanding that they are not going to work finally sinks in, the burden of sunk costs has grown very fast. The second big issue is the impact of a turnaround on the organizational chart. People tend not to like changes and turnarounds. Top management is hyper-motivated thanks to the optimistic change consultants, but the employees who make up the hierarchies do not necessarily see the full picture, nor do they care about it. It comes down to every single individual who is part of the change: their thoughts about the impact on their career as well as their likes and aspirations. Spreading the move across the organization is a kind of black magic, and the ones who know how to do it are very rare. The key to a successful organizational change is to have change agents coming from within, and not to let the change be driven by the consultants, who are often perceived as overnight guests. The third strategic concern is the underlying fear of cannibalization. Often the successful path of a turnaround is death to existing business lines, and getting support for that across the board is somewhat problematic.
Should IBM Divest?
A tough question for an outsider like me, and I guess pretty challenging even for an insider. My point of view is that IBM has reached a firm stance in AI, a position that is becoming more challenging to maintain over time. AI has in magnitude more potential than the rest of the business, and these unique assets should be freed from the burden of the other lines of business. IBM should maintain strategic connections to the other divisions, as they are probably the best distribution channels for those cognitive capabilities.
The Private Case of Startup Pivots
A pivot in a startup is tricky and risky. First, there is the psychological barrier of admitting that the direction is wrong; contradicting the general atmosphere of boundless startup optimism is a challenge. On top of that, there will always be enough naysayers complaining that there is not sufficient proof the startup is indeed headed in the wrong direction, not to mention the disbelievers who will demand evidence before committing to the new direction. It is quite tricky to rationalize plans when decision-making is in any case full of intuition and backed by minimal history. Due to the limited history of many startups and their dependence on cash infusions, a pivot, even if justified, is often a killer. There aren't many people in general who have the mental flexibility for a pivot, and you need everyone in the startup on board. The very few successful pivots I have seen did well thanks to incredible leadership that made everyone follow it half blindly: a leap of faith.
Food for thought – How come we rarely see disruptors buying established disrupted players to gain fast market footprint?
The patent system never got along well with software inventions. Software is too fluid for a patenting system that was built long ago for inventions with physical aspects. The materialist point of view perceives software as a big pile of electronically powered bits organized in some manner. In recent years the patenting system was bent to cope with software: patent applications were padded with artificial linkage to physical computing components such as storage or CPU so the patent office could approve them. But that is a patch, not evolution.
The Age of Algorithms
Fast forward to today, when AI has become the main innovation frontier: the world of intellectual property is about to be disrupted as well. Let me elaborate. Artificial intelligence, although a big buzzword, when it comes down to details means algorithms. Algorithms are probably the most complicated form of software. They are composed of base structures and functions dictated by the genre of the algorithm, such as neural networks, but they also include a data component. Whether it is the training data or the accumulated knowledge, the data eventually becomes part of the logic, a functional extension of the basic algorithm. That makes AI in its final form an even less comprehensible piece of software; often it is difficult to explain how a live algorithm works, even for the developers of the algorithm themselves. So, technically speaking, patenting an algorithm is in magnitude more complicated. A side effect of this complexity is reluctance to publish an algorithm in the form of a patent. An algorithm is like a secret sauce, and no one wants to reveal their secret sauce to the public, since others can copy it quite easily without worrying about litigation. For the sake of example, let's assume someone copies Facebook's personalization algorithm. Since that algorithm works secretly behind the scenes, it would be difficult, if not impossible, to prove that someone copied it. The observed results of an algorithm can be achieved in many different ways, and we are exposed only to the results of an algorithm, not to its implementation. The same goes for the concept of prior art: how can someone prove that no one has implemented that algorithm before?
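To make the observability point concrete, here is a hypothetical sketch (the functions and data are invented for illustration, not anyone's real algorithm): two independently written "personalization" rankers with different internals that are indistinguishable from their outputs alone.

```python
# Hypothetical example: two different implementations of a "personalization"
# ranker. Their observable outputs are identical, so outputs alone can
# never prove that one implementation was copied from the other.

def rank_a(items):
    # implementation A: built-in sort by descending score
    return sorted(items, key=lambda it: it["score"], reverse=True)

def rank_b(items):
    # implementation B: repeated selection of the current best item
    pool = list(items)
    ranked = []
    while pool:
        best = max(pool, key=lambda it: it["score"])
        pool.remove(best)
        ranked.append(best)
    return ranked

feed = [{"id": "x", "score": 0.2}, {"id": "y", "score": 0.9}, {"id": "z", "score": 0.5}]
assert rank_a(feed) == rank_b(feed)  # same observed behavior, different code
print([it["id"] for it in rank_a(feed)])  # ['y', 'z', 'x']
```

An outside observer sees only the ranked feed; nothing in it reveals which of the two code paths produced it, which is exactly why both infringement and prior art are so hard to establish for algorithms.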
To summarize: algorithms are inherently tricky to patent, and no one wants to expose them via the patenting system since such patents are indefensible. If we are heading into a future where most innovation happens in algorithms, the value of patents will diminish dramatically as fewer patents are created. I believe we are going into a highly proprietary world where the race will be driven not by ownership of intellectual property but by the ability to create competitive intellectual property that works.