Will the number of apps ever stop growing?

I am a big fan of apps! Both as an apps developer and as a smartphone user since way before the days it was even called a smartphone. I own several phones with all possible operating systems and never miss a chance to install any new app I encounter. I may be a major factor in the total 2011 downloads number in app stores:)

Following this self-proclaiming manifest, and having established credibility as someone who knows something about apps, I want to go back to the question in the headline. Sarah Perez's story at the end of 2011, "A Web of Apps," started with the following lines: "It is remarkable to think that we're in the early days of the app era when there are already close to 600,000 iOS applications and nearly 400,000 on Android (source: Distimo)." For me, these lines assume a priori that the number of apps will keep growing a lot, to well beyond 1,000,000 apps – a common notion nowadays.

I think that everyone who is somehow related to the apps industry assumes, whether publicly or silently in their "heart," that this number will grow; otherwise, there is not much of an industry in it, is there? Of course, one can argue that each app can "fatten" up and make more money on its own, which means growing vertically rather than horizontally. In my eyes, vertical growth after a period of hyper horizontal growth is usually a sign of the beginning of a market saturation phase.

To put it in the right perspective, I personally do feel it will grow much more (I have to, I am highly invested in this assumption:) so let's not start pessimistically:). Even considering the mere fact that there are so many people not yet using smartphones and many audiences not yet being addressed, we are ok, aren't we?

To answer the question above, I want to see if we can establish our beliefs on some rationale that is at least discussable. Or in other words, something to help the ones who are highly invested in it get a good night's sleep.

I think mobile apps and their acceptance represent a major breakthrough in the computing world, and this happened only thanks to the fact that people actually "met" these apps and discovered their existence (thanks, Apple, for creating the first effective apps distribution channel). The convergence of "capable" mobile computing devices, good-enough network connectivity, and dynamically loadable small functional units called apps actually created something amazing – the ability to "upgrade" yourself instantly. It always reminds me of the scene from The Matrix where Neo loaded up the Kung Fu learning software and in a minute was a Kung Fu master. I know we are not there yet, but the metaphor has been established. The ability to load new functionality on demand onto a computing device – which is actually your avatar, since it goes everywhere with you – and then to be able to operate it is quite a leap from the user's perspective.

So if we follow this line of thought then we can predict two trends in the apps world for the near future:

1. Apps will become narrower in functionality over time. We are actually witnessing this trend already: every day you can see a new app serving a very narrow purpose. This does not mean there won't be a place for big "Photoshop"- or "Office"-like packages, which are more like a wholesaler in a box; we will see those, but they will not be the majority, not even close. This trend will occur simply because people have shorter and shorter attention spans and more and more specific needs, which can be answered efficiently only by something narrow enough and simple enough to learn and operate immediately. In general, I put the "blame" for this trend on the low bandwidth we have available today for communicating with our phones, and unless someone invents a direct injection of new functionality into our brains, we are stuck with the growing trend of simpler apps. Just to put it in context, narrow apps do not mean "dummy" apps. Actually, the narrowing of apps happens in terms of explicit functionality, while there is an expansion in the implicit dimension of making the app adaptive to the specific context of the user, which enables easier operation of the suggested functionality. This adaptation may require more technological investment in the app than other explicit features. For example, Siri has very simple and narrow explicit functionality (she just listens and talks back, with one button) while behind the scenes lies a huge technological effort.

2. Thanks to the growing number of sensors and interfaces on mobile devices (and I know of a few more exciting developments in this area) as well as better connectivity options, we will witness more and more human needs becoming addressable by apps. For example, the accelerometer now assists runners with pace calculations and effort tracking, something that was not possible before this sensor was available.

These two trends point towards a very clear growth horizon that lies along the axis of the diverse set of unique and shared human needs. If we add to this equation locality, languages, gender, age, culture, religion, and other social grouping criteria, then we run into a very big number. A number big enough to take us, in the very near future, to the point where we stop counting how many apps are out there.

One question I still haven't got an answer for is in the title of the original story that got me started, "A Web of Apps." The question is whether we can draw a comparison to the growth of the World Wide Web. I know it is not necessarily what Sarah Perez intended to discuss in her story, which was more about potential apps discovery by connecting apps. Still, there is much talk about whether apps will grow at the same pace as websites did. Websites grew, and keep growing, mainly along the axis of topical interest or knowledge areas, and from a gut feeling it seems to represent a much bigger growth axis. Maybe more on this in a later post.

What do you think? Will it grow forever?

My New iPad 2 is no Faster than my Good Old iPad 1!

I have been enjoying my first iPad for the last year, and a few weeks ago I got a new one, the iPad 2. I knew I should not expect too many new features from it, except for better speed and camera support. Indeed, it felt very fast – very fast in comparison to my old first iPad. And then I got a weird feeling about the improvement, as if someone had cheated me. Actually, it was not faster at all in comparison to my first iPad, if I compare it with the speed of the first iPad on the day I bought it and unwrapped it from the box, before installing stuff and working with it. Indeed, my first iPad became so slow that the second one seems like a miracle, but this is just a fix and not a real improvement!

So… eventually, I understood that the iPad product is going through an evolutionary path (and I guess the iPhone too) similar to the one the Windows/Intel duo, a.k.a. Wintel, went through in recent decades. For the ones who don't understand what I am talking about: the Wintel duo was actually a rat race. Every once in a while new hardware would come out, new and shiny and seemingly very fast, and then all the developers of the apps running on it would see that there is "room" for more features and complicated changes. The developers made their software better, and then, magically, the same "new" device would become slower and slower. So painfully slow that a new version of the same device seems like a true hero with its speed improvements. But this is just an illusion: the new version just fixes the speed problems incurred by all the software that was "added" to it during its lifetime, and eventually every version of the hardware improves the whole product just enough to reach the same initial starting point in terms of speed.

For years it was a duo conducted by MS Windows and Intel, and now it seems the same effect is happening on the iPad. It seems that once a status quo of speed is set by some new category of hardware (like the iPad, iPhone, or PCs initially, when they were launched), it will never be improved dramatically across versions; actually, the improvements from one version to the next will be just enough to reach the status quo again. Ok, maybe I am exaggerating, but not by much.

Another thought: MS Windows and Intel were always suspected of some kind of duopoly, coordinating their acts, but now it is clear these are just market forces, since on the iPad, apps are being developed by disconnected 3rd party companies.

The mysterious thing to me in this behavior is the human angle. Why, when a new product is invented/built, such as the first iPhone, is there always a serious leap in the capabilities of the product, while later versions are always constrained to some "magical" boundary of improvement? Is it a matter of market demand and competition forcing the companies into small improvements, or some kind of framing created by the mere product definition – a framing that is vague enough to be broken only when a new product is devised?

2010: The Decade of Content Discovery

The last decade, 2000-2009, flourished with new content creation tools: blogging, tweets, videos, personal pages/profiles, and many others. One thing that did not keep up with the speed of innovation on the content creation side is content discovery tools.

We are still mainly using Google's search results interface to find interesting stuff. There were a few attempts at visualizing things differently, but none of them prevailed. The feeling that something is missing always strikes me when I try to look for some info on the web – if I dive shallow on Google, it seems like the topic has never been discussed before, but once you persist (and waste some good time on the way), things start to pop up.

The problem of content discovery lies in the physical limitations of people's perception combined with the set of tools available today. Search results or category-based navigation are effective only when they work on small datasets. Once the dataset grows beyond some limit, they just don't help. You can see it while looking for an app in the iPhone App Store or when you try to catch up with tweets or news feeds.

I propose we focus this new decade on Content Discovery:) Enough with new data; give me something to use what already exists, since I do not care for more content that is not "accessible" to me.

My first days on Twitter

I have had my Twitter account for quite a while but never really tweeted. I guess I was part of the million idle accounts out there. I did not find time to blog, so automatically I considered tweeting something I wouldn't have time for either. Last week I started tweeting, and it is very nice. I enjoy it. Same as blogging but faster, shorter, and more in sync with the so many things that happen and go through your head during the day.

Two weeks ago I was at #140conf, Jeff Pulver's Twitter and social media conference, which was great (and I think it is the reason I got curious enough about Twitter to start tweeting). The first topic at the conf was the human needs satisfied by tweeting, and the main theory was about loneliness and how Twitter breaks it. I have to admit that as a new tweeter with fewer than 100 followers I still get lonely:) and I understand why the presenters, who have more than a quarter of a million followers, do not feel this way.

Anyway, for now, Twitter is for me a way to clear my mind of thoughts and ideas I would never pursue further and would like to garbage collect (sorry for the geeky term). A way to help someone, especially someone with a technical question on a topic I know something about – everyone is welcome to try – helping quickly with no strings attached. A way to pass time on my laptop when work is done and the only thing left is reading stuff on the web, which can be quite tiring after a while. A way to bookmark stuff (I have a private account for it:). A way to not feel left out of social networks, since I am not a "good" user most of the time. A way to eavesdrop on others' talks – something considered impolite in the real world but most welcome on Twitter. A way to write stuff without proofreading, as I do now for this post.

For now, I enjoy it.

Easily develop cool UI in native client applications

For a long time, I was contemplating the best strategy for client application development, whether mobile clients or desktop client applications. The problem with native client application development is usually the difficulty of building the UI and applying changes to it over time. Since I have done both web development and client development, I am accustomed to the web world's ease of UI creation and of applying changes to it. In web development, all you have to do is have a good designer create something very cool for you, turn it into XHTML/CSS/JS, and you have got your UI ready. If you have changes, just modify the HTML/CSS and you are done. For native clients, it is a whole different story. Although UI frameworks and libraries have evolved greatly over time, programming the UI in a structural or object-oriented 3rd generation language is still slow and cumbersome, needless to say when it comes to changes. And I have to add that it never comes out as lovely as web-based designs.

Adobe Air has introduced a new programming pattern with its ability to make a web application behave like a native client application while all the UI hassle is taken away. Basically, it provides a window frame with a special web browser inside (I guess WebKit), and within the web view you can have all your beautiful web stuff, while integration with the logic and general native application capabilities is easy. This development pattern can be applied to many other development frameworks, including Qt (which has embedded WebKit capabilities), Android Java, the Windows Mobile IE view, and others.

The idea is to have all your UI in local resource files (for high response times), of course based on web standards – XHTML/CSS/JS – and to connect the browser events to your core logic. Most of the platforms allow you to expose logic into the browser via JavaScript functions. This approach can provide the best of both worlds: a highly responsive UI with all native application capabilities, while keeping an easy way to create and modify the UI look and feel.
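To make the wiring concrete, here is a minimal sketch of the bridge pattern (in Python, standing in for whatever host language your platform uses). The class and method names are invented for illustration, not any real framework's API; in a real app, the two dispatch methods would be driven by the embedded browser engine rather than called directly.

```python
# Hypothetical sketch of the native<->web-view bridge pattern.
# In a real implementation the platform's embedded browser (WebKit in
# Adobe Air/Qt, a WebView on Android, etc.) drives js_call and fire.

class WebViewBridge:
    """Connects the HTML/JS UI to native core logic."""

    def __init__(self):
        self._handlers = {}  # UI event name -> native handler
        self._exposed = {}   # function name -> native function callable from JS

    def expose(self, name, func):
        """Make a native function callable from the page's JavaScript."""
        self._exposed[name] = func

    def on(self, event, handler):
        """Route a UI event (e.g. a button click) to native logic."""
        self._handlers[event] = handler

    # In a real web view these two are invoked by the browser engine.
    def js_call(self, name, *args):
        return self._exposed[name](*args)

    def fire(self, event, payload=None):
        return self._handlers[event](payload)


# Core logic stays native; the look and feel stays in XHTML/CSS/JS.
bridge = WebViewBridge()
bridge.expose("get_version", lambda: "1.0")
bridge.on("save_clicked", lambda data: f"saved {data!r}")

print(bridge.js_call("get_version"))       # the page asks the native side
print(bridge.fire("save_clicked", "note.txt"))  # a click reaches core logic
```

The point of the design is the thin, explicit seam: everything crossing between the web view and the native side goes through one registry, so the UI can be redesigned freely as long as the event and function names stay stable.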

Another cool thing you get with this approach is ease of platform porting, since the UI, being based on web standards, is already compatible with other platforms.

Wikipedia for Patents?

Recently I have been dealing a lot with patents, and I have to say this is not easy! Patents, although claimed to be written in English, are most of the time just cryptic. It is almost impossible to do an effective patent search, and even when you get results, decrypting what is written there is an impossible task. In the field of information retrieval, patents are, I guess, considered something very difficult to crack, and I have heard of many companies trying to solve the puzzle in different ways. I even heard Thomson has a company that takes every new patent and applies new descriptive metadata to it so it can at least be retrieved. I also heard the same company charges $400 an hour to use their search engine!!

The only way I can see the information retrieval problem being solved is by adding crowd wisdom to the patent base, the same way Wikipedia works. A web application that allowed you to annotate and add comments/discussions to patents would be very helpful. I know annotating patents might not be most people's favorite pastime, but I have to admit I would personally contribute the things I learned from the cryptic patents I had to read and understand, for the greater good. There might be some privacy issue here, but anonymity can solve it, or even some rewarding marketplace could do a good job here as well.

Does anyone know of such a product?

Machine Operated Web Applications

Software applications have two main perspectives: the external perspective, where interfaces to the external world are defined and consumed, and the internal perspective, where an internal structure enables and supports the external interface. Let me elaborate on this:

The internal perspective shows the building blocks and layers within the application, allowing specific data flow and processing. To further simplify things, let's take an example from the real world: an actual building. We can describe it in a technical and physical manner detailing the concrete, foundations, electricity tunnels, air conditioning, and so on. The external perspective of the building is the apartments' look and feel, the available services for tenants, the views from the windows, the paint color, the type of cupboard handle in the kitchen, and so on – in general, all the things people experience when they interface with the building. So if we go back to the software story, the external perspective is the application UI, data feeds, APIs, and other entities external parties come in touch with.

Twitter, with the approach taken in its API, has created something special which can be the basis for a new development paradigm on the web, and that is MOA – Machine Operated Applications. As background, Twitter provides one unified external interface, which is its API, and its website UI seems to be built mostly on this API (I might be exaggerating here, and they may have capabilities in their UI not available in the API, but the major functions are available in the API). The same API is available for others to consume and operate. The API is split into two parts: the first part is a general services API, which includes search and other user-agnostic services, very similar to the services other companies provide in their APIs. The second part is the more important one, and that is the user-driven API, where all data and actions available on Twitter's web application for a specific user are available via the API itself. This model allowed the huge surge in the number of applications built on Twitter in no time – a developer community that many companies would die for, while Twitter did it in a snap.

To describe the model I see for Twitter in a visual way:

[ Twitter infrastructure ] – Internal perspective which is hidden
[ Twitter API ] – Unified external perspective for the product
[ [ Twitter UI ] or [3rd party applications]] – API consumers and operators having equal rights to the external interface (3rd party apps are limited in the number of queries per hour but it is not too serious if you consider that the limitation is per user and not global)

So how does all this long and tiring story relate to MOA – machine operated applications? Well, once an application can be fully operated and consumed via a formal API (which is, after all, an Application Programming Interface), then robots can use it too. And I do not mean today's web robots that harvest data, aggregate data, and do some kind of analysis. I am talking about robots that would work autonomously on behalf of real users or companies and use the product in a meaningful way. For example, I can imagine a robot that will be my social network expander: it will use data from different areas to understand my interests and current network and will expand it automatically by following new people on Twitter. Following someone is a meaningful action in the virtual and the real world, and once a bot is smart enough to do so, things will change. Twitter, with its API approach, allows this evolution.
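As a thought experiment, such a "network expander" bot could look like the sketch below. The API client here is an in-memory stand-in with invented method names – not Twitter's actual endpoints – but a real bot would call the service's user-driven API with the user's own credentials, which is exactly what the MOA model makes possible.

```python
# Illustrative sketch of a "machine operated application" bot.
# FakeApiClient is a hypothetical stand-in for a user-driven API like
# Twitter's; its method names are invented for illustration only.

class FakeApiClient:
    """In-memory stand-in for a real user-driven API."""

    def __init__(self, friends, suggestions):
        self.friends = set(friends)     # accounts the user already follows
        self.suggestions = suggestions  # account -> set of interest tags

    def interests_of(self, account):
        return self.suggestions.get(account, set())

    def follow(self, account):
        self.friends.add(account)


def expand_network(api, my_interests, candidates, min_overlap=2):
    """Autonomously follow candidates whose interests overlap the user's."""
    followed = []
    for account in candidates:
        if account in api.friends:
            continue  # following is a meaningful action; never repeat it
        overlap = my_interests & api.interests_of(account)
        if len(overlap) >= min_overlap:
            api.follow(account)
            followed.append(account)
    return followed


api = FakeApiClient(
    friends=["@alice"],
    suggestions={"@bob": {"apps", "mobile", "ux"}, "@carol": {"cooking"}},
)
print(expand_network(api, {"apps", "mobile"}, ["@alice", "@bob", "@carol"]))
```

The interesting design point is that the bot needs nothing the human user does not have: because the UI and 3rd parties consume the same API, the robot is just another operator with equal rights to the external interface.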

A note: the split of perspectives is similar to the way strategy in companies can be viewed, where a company has the internal perspective of operational capability and the external perspective of market share, brand recognition, distribution channels, and more.

The web is changing

I have been reading about the whereabouts of News Corp., Google, and Microsoft over the recent two weeks, and I noticed something weird happening here but could not put my finger on it. For those who do not know the storyline, here is a short description posted on Hitwise today:

Two weeks ago we posted on Rupert Murdoch’s threat to block Google from Indexing News Corp. content. While at first it seemed as though Murdoch was merely posturing with hypotheticals, reports continue to indicate that News Corp. is seriously considering choosing Bing as the exclusive “indexer” of their news content.

via weblogs.hitwise.com

At first, I thought Mr. Murdoch was playing tricks on Google, but when Microsoft entered the picture with its proposal to News Corp. to exclusively allow indexing of their sites on Bing only, things got clearer. I am not talking about the unsurprising tactic from Microsoft's arsenal but about a totally different thing.

The new phenomenon here is the change of balance between publishers and Google. The status quo until today was that everyone just wished Google would index their websites, and the more the merrier. Indexing meant traffic, which added up to more revenue from advertising. Industries have been created on this raw desire to be indexed on Google – for example, SEO and SEM, where millions of dollars have been poured in. News Corp., as a big website with big assets, understood that it is no less important to Google than Google is to it.

I am not sure whether Bill Gates got it and talked to News Corp. or the other way around (though it smells like Bill's way of thinking), but something has changed here.

Now that this has happened, we can contemplate a few directions. For example, what will happen if other websites follow suit and deindex themselves from Google? And is this happening because Google is no longer the main hub for getting users to websites, with social bookmarking and tiny URLs on Twitter filling the gap? Is this the reason Google is developing operating systems – to grab hold of users while they know they are losing ground in the pure web market?

I think this is a serious topic for Google to think about.


Is Web 3.0 The Right Name for The Next Internet Uphill?

I get to see here and there the term '3.0' used in reference to the next internet/technology revolution, and somehow it does not feel right to me. I am not sure about this, but for me, the coined term '2.0' was a metaphor belonging to the concept of software versioning. If the first internet era, where infrastructure was established, is called '1.0', implying the first version of a product, then what we had recently was a '2.0', where the product, hence the internet, became more streamlined towards users in terms of services, ease of use, and diversity. As in software, there are always further versions such as 3.0 and 4.0 etc., but none of these compares to the unique characteristics '1.0' and '2.0' have.

In software, the 1.0 usually includes basic infrastructure for enabling core functionality as well as a diverse set of features coming from the delirious minds of the developers and maybe potential customers. It is usually very spread out and less focused, though very appealing thanks to the creative sense of it. The 2.0 is usually an organized effort to address the needs of specific audiences after getting real feedback and real experience with the 1.0 version. I believe the internet did behave in correspondence with this lifecycle during the last two uphills.

If we try to envision the next uphill (assuming there will be one, hopefully), then it can come in two flavors (or more, I have to be humble): the '3.0' style, where the product/internet becomes less creative and spread out while maturing current capabilities and extending them to address different users' needs more perfectly. The second option is to have a spinoff of the '2.0' product series and actually create something new, which can and should be called '1.0' again.

I personally prefer option 2:)

The Web Crawls Silently into the Desktop

Recently I got deeply interested in rich Internet technologies such as Adobe Air and Microsoft Silverlight, and it is hard not to see the trend of returning to good old desktop applications with one big twist – the web included. These rich desktop applications are naturally integrated into the web, with its rich services and content, while enjoying the UI breakthroughs achieved by browsers and site designers. It is great to see unique and smooth UI concepts being delivered across different platforms without being restricted to each platform's local UI structural constraints.

Although these are just the early days of Adobe Air and others, the trend of providing users with a new rich and broad user experience is exciting.

More to come… (and who can say it is not an exciting time in the tech world:)

P.S. I am mainly talking about Adobe Air since it seems to be the major platform (for now) to catch on with users and developers. Having said that, I have to mention Silverlight's Out of the Browser experience (a horrible name, as only Microsoft knows how to invent), JavaFX (which they promise will make a big comeback this year), Titanium (I think mostly enterprise), and all the SSBs (Single Site Browsers) and local browser enhancers (Google Gears).

UPDATE: Just saw the article Adobe AIR Turns Web Developers into Desktop Developers, which is an excellent perspective. A mass of professionals can now cross the barrier thanks to these web-desktop technologies.

Everyone is focused now on revenues and efficiency, as opposed to last year's efforts?

The end of the year is full of posts about how all startups and CEOs (now, after the market meltdown) are going to be focused in 2009 on revenues, efficiency, listening to customers, making better products, and more…

Just the other day I read "Some startup CEOs' New Year's resolutions," where most resolutions sound like boilerplate stuff. It is not that I don't appreciate efficiency and revenues, don't get me wrong, but still one has to ask: what was the focus last year? I understand the pressures these companies are under, especially from some of their investors who would like to see results (what are results when you are trying to build up to something?), but these responses seem to only satisfy eager investor ears and nothing more.

In general, every time the market goes either up or down, new mantras are invented – or reused. Usually, when the market goes up, you are required as an entrepreneur to make sure you plan for the long strategic term, have a scalable technology that can cope with potentially huge market adoption, and build a solid and endurable team and company infrastructure, among other things. Actions that usually take a long time, cost a lot, and do not bring near-term financial results. When the market is down, then you should forget about all that nonsense (just previously advised) and refocus on making money and spending less.

Although the downtrend creates suggestions that seem logical, I am not sure new companies being created – if one considers a new company as a growing living entity – can be bent towards people's wishes just based on the current weather or market trend. There are companies or products with no revenues by nature – Twitter, up to this moment – and there are those that require heavy investment in R&D. Sometimes, changing the strategic goals of the venture at a certain point in time can mean the end of its life. Everyone wants a cash cow eventually, but how can you milk a calf that was just born? Or raise it on minimal food?

RSS based ranking or maybe a new protocol is needed?

RSS is a widely adopted protocol for transmitting changes within blogs, and it solves a big problem people had in tracking content changes effectively. RSS does a perfect job of providing updates to content based on time of change, but it still lacks support for other criteria for ordering changes.

At first, RSS was used solely to provide lists of recent changes, whether for blogs or other content management systems such as wikis. Now it seems that RSS is used more broadly, to provide access to recent lists of data items that differ from one another not only by time of change but also, for example, by order of importance.

Take Techmeme, for example: they actually provide an ordered list of stories (blog posts or news) where the order is based on importance. Consuming Techmeme via RSS does not unveil the full picture of importance within Techmeme, since date of publication is the only criterion RSS readers know to consider during display. Alongside Techmeme we can put any search engine results delivered via RSS, bids for an auction, and more…

Does anyone have an idea for a protocol or maybe some trick to reuse RSS while maintaining true order of items during presentation?
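One possible trick, sketched below under the assumption that the feed publisher cooperates: carry an importance score in a namespaced extension element and have the reader sort by it instead of by date. The `rank` element and its namespace here are invented for illustration – standard RSS readers would simply ignore the unknown element and fall back to date order.

```python
# Sketch of reordering RSS items by a hypothetical importance extension.
# The "rank" element and its namespace are invented for illustration;
# readers unaware of the extension would ignore it harmlessly.
import xml.etree.ElementTree as ET

NS = {"x": "http://example.org/rank"}  # hypothetical extension namespace

FEED = """<rss version="2.0" xmlns:x="http://example.org/rank">
  <channel>
    <item><title>Minor update</title><x:rank>3</x:rank></item>
    <item><title>Top story</title><x:rank>1</x:rank></item>
    <item><title>Also notable</title><x:rank>2</x:rank></item>
  </channel>
</rss>"""

def titles_by_rank(feed_xml):
    """Return item titles ordered by the publisher-assigned importance."""
    root = ET.fromstring(feed_xml)
    items = root.findall(".//item")
    items.sort(key=lambda i: int(i.find("x:rank", NS).text))
    return [i.findtext("title") for i in items]

print(titles_by_rank(FEED))  # importance order, not publication order
```

Because RSS 2.0 explicitly allows namespaced extension elements, this kind of overlay keeps the feed valid for existing readers while letting importance-aware ones display the true order.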

Thoughts on application development and setup in Windows vs. Linux

After many long years of development for both MS Windows platforms and Linux platforms, and especially lots of frustration in recent days trying to install/uninstall software on my WinXP to solve a problem, I have a few conclusions on proprietary vs. open source development.

One of the nice things about development in the Microsoft world (or at least it seems so, until you get into trouble) is that everything wraps up so nicely, as if you were in a candy store. There are very nice tools for development, sophisticated mechanisms for code reuse such as DLLs and libraries, and well-documented APIs and examples. Microsoft, as the sole owner of the MS Windows platform, has created a complete ecosystem of dev tools to enable you as a developer to rapidly develop your own applications. One simple MS-Windows application usually relies on many, many dependencies – other installed products, service packs, specific operating systems, DLLs – and when you wrap your application as a setup package, you usually rely on these to exist on the target computer and to be (hopefully) compliant with your specific application requirements. Packaging all dependencies into your application is not practical; some do package unique extensions that are required, but most just deliver their own app, relying on Microsoft to handle dependencies.

This mechanism works very nicely in a controlled environment where you fully control the setup of your test machines, but once you go into the wild, where the variety of configuration combinations is enormous, things usually break. Especially after a few installs/uninstalls of a few apps, where each one leaves its trail of broken dependencies. Actually, as I see it, an MS-Windows PC that has received a normal "dosage" of installs/uninstalls becomes chaotic in terms of what actually resides on it. Here the lack of awareness of what your application actually needs and depends on works against you.

In the Linux/open source world, when you develop, you usually know what you are depending on, and the approach is more minimalistic in terms of expecting components to already reside on the target computer, unless it is a specific runtime platform like Perl or Python. This approach makes your dev/packaging process seem at first more "dirty" and requires hands-on experience, but the end result is a far more stable deployment than what you could achieve in the MS world. Usually, installing/uninstalling software on Linux does not affect the rest of the system, and the instructions on how to make your software work can be very clear. This is attributed to the way Linux and open source in general have been developed, by different independent minds who found an effective integration approach, unlike the MS "integrative" one.

I am not sure whether it is MS's goal to achieve a perfect automatic/transparent platform, but still, eventually, the more hands-on approach seems to work better. This opens up some thoughts on App Store-like approaches, for the iPhone for example, where a similarly simplistic approach of handling deployment automatically is being taken. Currently, on the App Store frontier, with the low number of software packages and the low complexity of available apps, the problem does not seem to exist, but time will tell how it will deal with complexity similar to the MS variety.

That is also why people like web apps so much: no "apparent" setup happens, though this environment is also going through changes that make it "stickier" to the target platforms, such as Google Gears and native code implementations. Actually, within browsers, setup happens on demand and new code deployment happens automatically, while dependencies are minor since the browser automatically separates dependencies between domains.

What does Google's browser mean to me?

Google having their own browser is a move I did not anticipate, and it is actually a brilliant idea in terms of OS replacement for other proprietary operating systems, namely Microsoft's. I think it will actually be very successful for two reasons:

– being open source

– being powered by a web state of mind (and no one is as web-minded as Google)

The fact that it is open source, I think, spells trouble for IE: having one proprietary browser and one open-source browser (Mozilla) is one thing; having two major players with open-source browsers against one proprietary browser makes the proprietary one look bad.

As for myself, I think it means a big change in terms of web-based applications and their bright future as the dominant development platform for new products and services. I find it hard to rationalize developing platform-specific applications now, when cross-platforming them is so easy.

Anyway, I am very glad about this. As for Chrome, I played with it a bit this morning and after a while it froze my laptop, but I guess these are only early-stage problems.


Google is the 21st Century Mainframe!

All the big guys are rushing these days to launch as many s as possible to "captivate" web surfers in their "club". In a dramatic and maybe slightly panicked response to 's threats and 's renovated website, started launching an application a day. It doesn't matter anymore what it is; as long as it is new and does something at all, it should be launched – that seems to be their highest strategic guideline.

Google is becoming one central computing center that does everything, in a fashion very similar to the 60's–70's state of mind. Of course, we don't see the mainframe's green characters anymore, but still: one web address, one state of mind, and one business culture to provide everything for everyone!

I do think Microsoft is doing a good job at scaring Google with its large announcements (which I think is the main, and maybe only, competitive tactic and tool MS has to deploy against Google right now). They have goaded Google into forgetting their undisputed "rule" in information retrieval via a search engine, and into rushing for growth somewhere else, where they are fresh. Of course, in a similar line of thought, besides launching new applications daily, there are the low-cost s of every piece of web 2.0 technology, just to enrich the weapons arsenal.

I think that Google, Microsoft, and Yahoo should leave aside the "consolidation" textbook everyone else in the industry has embraced (, , MS) and give some space for innovation to bloom and flourish outside their factories. Everything really innovative about "web 2.0", and even the term itself, was invented during the exact time Microsoft and Yahoo, like many others, had stopped believing in the net's potential. Now that they believe in it again, they kill it softly with their warm hugs.

Innovation is not just a matter of funding and clever R&D processes. It is MAINLY a matter of different points of view, brought by different people who can still believe there is a place for their dream. Knowing you have to fight MS, Google, or Yahoo for your first 100K users scares me, and every new entrepreneur as well.

Google's Aspired Hegemony

After writing yesterday about the launch of at Should Google Lead the Web Development Tools Market?, I realized that Google has changed profoundly from what they were at first.

At the beginning, Google was an by really making the "matter" accessible to everyone. They have contributed immensely to making the web a useful and enjoyable place to be.

Ever since Google raised their head towards direct competition with (the notoriously centralistic company), they have become more and more like what they taught us to hate – their opponent. Google, with their ever-growing spree of applications launched almost every week in recent years, is becoming a kind of large web application development powerhouse with a very large userbase, and less of a web hub, as they used to be. Killing us softly with their features:)

We can try to attribute this change to the perceived rules of engagement at the big pond, where Microsoft "had" to create sophisticated "lock-in" methods at the level (apps tightly coupled with dev tools that are tightly coupled with operating systems) in order to maintain leadership. But still, I can't forget that the new competition between Google and Microsoft for dominance of the web takes place on a competitive landscape Google created with their own hands. The web as a new competitive landscape is profoundly different than the personal/business computing landscape, most of all for the distributed nature of the web. User "lock-in" may not be the smartest way to maintain dominance in a distributed world.

Google seems to follow Microsoft by adding rather centralistic products such as the recent Google Pages Beta launch (a web authoring and hosting application), , or even . I humbly think that a better way for Google to maintain their distinction from Microsoft in their own playground would be to increase their reach into more content, as well as other new enabling capabilities such as , as you would expect from a distributed-world platform leader, and not by adding a few more tools. This rush of new applications, not to mention ambitions over operating systems, may lead only to marginal benefit both for users and for Google, in comparison to what they have done before.

The legendary "lock-in" strategy, the holy grail of the personal and organizational computing world, has many good reasons to be a "holy grail" in a world that is dramatically less diverse than the web. But it does not have to be automatically the right answer for creating platform leadership on the web. The highly distributed nature of the web has always embraced loosely coupled innovation (open source) in the relationship between vendor and user, and creating one central point of service on the web just doesn't fit in.

I think that by keeping to this wrong trail, Google will end up as another closed community portal (AOL?) which does many things but is not meaningful (except to their shareholders).

Just a thought – it seems that the new fight over the web in the next decade will be over the "lock-in" mentality, between the state of mind of the open-source world and that of the proprietary world (Microsoft, Google, and many others).

P.S. This centralistic approach takes me back 15 years, to the time I was in the army as part of an IBM mainframe applications team, where centrality was the main and only way of thinking.

A Product Roadmap in a Feed

Strategic Board was initially an idea about a new / tool for enterprises in the . Since then many things have changed, including our concept and , and probably the only permanent things here are me and the name Strategic Board itself:)

One of the building blocks a competitive intelligence tool requires in order to be effective is comparisons, and more specifically s. Product comparisons, whether comparing the of different products, conducting a , or building a table, are done manually, with many errors accumulated in the information-collection process. Information is collected ad hoc: via rumours, web sites, Google cache pages:), some lost presentation of a competitor, and other "creative" ways.

/capabilities in a feed is a concept I have been contemplating as a viable future direction for product manufacturers in the . Let's imagine that every company (, , , , and others) provides several RSS feeds named after every product they manufacture. For example, feeds such as:
product_xxx_pricing.rss: pricing updates (date – price change – XML price tags – description)
product_xxx_features.rss: product feature releases (date – feature description – affected module – maybe some taxonomy identifier)
product_xxx_roadmap.rss: planned product releases (similar to the features feed but with future dates)
product_xxx_issues.rss: bugs and problems in a product (this kind of feed already exists in several forms)
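To make the idea concrete, here is a minimal sketch, using only the Python standard library, of what a pricing feed like the hypothetical `product_xxx_pricing.rss` above could contain. The `price` tag beyond core RSS elements is an assumption, exactly the kind of agreed-upon extension the idea would require.

```python
# Build a minimal RSS 2.0 pricing feed for a product, one <item> per
# price change, with a custom <price> element carrying the structured value.
import xml.etree.ElementTree as ET

def pricing_feed(product, updates):
    """updates: list of (date, price, description) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"{product} pricing updates"
    for date, price, description in updates:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f"{product} price change"
        ET.SubElement(item, "pubDate").text = date
        ET.SubElement(item, "price").text = str(price)  # custom, non-standard tag
        ET.SubElement(item, "description").text = description
    return ET.tostring(rss, encoding="unicode")

print(pricing_feed("product_xxx", [("2005-03-01", 99.0, "List price cut")]))
```

A consumer on the other end can parse the `price` element back out and feed it straight into a comparison table, with no manual collection step.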

Needless to say what this kind of information pool could do to automate and improve many processes that exist, or don't yet exist, today (comparison shopping, competitive analysis, customer service/support, etc.), and of course it will make many s happy:)

The point I want to deal with here is the objections that immediately arise from the high level of transparency required from a manufacturer to do so. The main fear is that your competitor will know where you are right now in terms of development and what your future plans are, potentially enabling them to devise a preventing and winning .

All along the history of commerce, this fear has been the border line for openness, and it has always been broken to serve the higher level of communication required between a company and its constituencies (customers, suppliers, investors, employees). We can see a similar scenario in , who expose their guts to the and the public eye, and still it does not kill them. I believe we are dealing with the same story here all over again, and that it is just a matter of a strong industry leader taking up the torch to show the advantages (wake up, IBM).

The strategic fear in this case is much more realistic when you hide the details and stay on the watch; it really does not exist when you are open about what you have to offer. In other words, the fear itself is what frightens, and nothing else.

Being open about your value proposition takes the sting out of the fear equation and creates a new level of comfort between the company and its customers that is unparalleled by what your competitors have to offer.

If I assume that I have a good argument for opening up the so-called "secretive" plans and details of what a company has to offer, then here is a very small list of good things that can emerge from this move (and I believe many other good minds can think of much better ideas):

1) Online and at all levels – regardless of whether the product you want to evaluate is a small mini camera or a , getting an objective comparison instantly can be invaluable.

2) Deep stock analysis – having the history of pricing, features, and roadmap can give a very strong picture of who and what is behind the corporate curtains.

3) Online and vendor-neutral product configurators that let you build a product from different vendors, together with price optimizations that give you, as a buyer, awareness of your real options.

4) An ever-stronger feedback loop between enthusiastic/disappointed users and the vendor, thanks to this information flow. Imagine the use of on a post about a specific product's pricing/features, which is also integrated into the company's sales/support/marketing modules.

5) applications to help achieve a greater understanding of an industry's acceleration rate or downfall.

6) Vendor s that track vendors' deliverables vs. promises.

And much much more!
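As an illustration of item (3) above, with such feeds in hand a vendor-neutral configurator reduces to a small optimization: pick, for each component slot, the cheapest compatible offer across vendors. Vendors, parts, and prices below are invented.

```python
# Toy vendor-neutral configurator: for each slot, choose the cheapest
# offer published in the vendors' (hypothetical) pricing feeds.

OFFERS = {
    "cpu":    [("VendorA", 210), ("VendorB", 185)],
    "memory": [("VendorA", 95),  ("VendorC", 120)],
    "disk":   [("VendorB", 60),  ("VendorC", 55)],
}

def cheapest_build(offers):
    """Return ({slot: (vendor, price)}, total_price) for the cheapest mix."""
    build, total = {}, 0
    for slot, options in offers.items():
        vendor, price = min(options, key=lambda o: o[1])
        build[slot] = (vendor, price)
        total += price
    return build, total

build, total = cheapest_build(OFFERS)
print(build, total)  # mixes VendorB/VendorA/VendorC, total 335
```

The point is that once pricing is machine-readable, this kind of buyer-side optimization becomes a few lines of code rather than a research project.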

My opinion is that most objections are only perceived problems, and the level of transparency that exists de facto in an industry today is not as rational as we wish to think it is. Companies usually become open about what their competitors did a few weeks ago, just to level up, without giving it much thought. See what is doing to .

As for Strategic Board, we plan on doing that (at least features vs. roadmap feeds), and I will update on the availability of the feeds in this blog.


Can Microsoft afford to ignore Linux?

Microsoft completed the acquisition of Sybari, their new antivirus and anti-spyware line of business – The Windows Observer: Antivirus, Anti-Spyware Strategy Moves Forward for Microsoft.

One line from the news caught my eye as something that makes immediate common sense but may not be right strategically after all: “Not surprisingly, Microsoft will discontinue new sales of Sybari’s products for the ( and ) and operating systems. It will, however, continue to sell and support Sybari software running on ‘s platform; the Notes installed base is predominantly -based.”

The reporter’s common sense, as well as Microsoft’s, led to the almost automatic decision for Microsoft to discontinue the Linux product line and keep only the Windows-based products alive. Common sense asks why a company like Microsoft should support the endorsement of Linux, a direct rival to its core product – the operating system.

Still, following the same line of thought, a question arises: if Microsoft stops supporting this Linux-based product line, will that affect in any way the worldwide adoption and endorsement of Linux by other vendors and users? The only thing a move like that achieves is a statement of PR and market positioning that they don't believe in a viable future for the Linux platform.

Let’s imagine the crazy scenario where Microsoft keeps this product line alive and even invests some more resources in it. What would be the benefits and downsides of a move like that?
1) Gain domain expertise and intimate acquaintance with Linux developers and its most important users. Linux does and will exist regardless of the decision to discontinue this product line, and while years ago a move like that could have killed the “unborn child”, today it is more a matter of acknowledging and getting to know your growing “illegal son”.
2) It can help them understand better the economics of Linux enterprise and consumer users as well as keep a close eye on its adoption patterns.
3) Provide much friendlier positioning to enterprise buyers and consumers, who already know that MS-Windows is no longer the only alternative for running applications; an example alternative is the application environment.
4) Yesterday I read the post How Microsoft Lost the API War on the “” blog, which discusses thoroughly how the longstanding fortress of the Microsoft operating system and its API lock-in strategy is eroding. It might be just an expert opinion that Microsoft executives will not hold true, but still, it has a lot of common sense in it. This can be an opportunity for Microsoft to fit into the new computing landscape, which is evidently mixed by nature.

As for the downsides:
1) The stock may suffer temporarily from Microsoft’s move, which openly admits Linux is here to stay.
2) The internal pride and enterprise-wide goal of keeping the title of the ubiquitous operating system will vanish. This is a matter of cultural change.

I personally think it is time for Microsoft to acknowledge the Linux paradigm shift, wisely, and stop pretending it does not exist.

Update: see complete coverage of the move at internetnews.

Single Sign-On for News Sites?

Many news sites require a username and a password, which is understandable in terms of specific business-model requirements. Still, the burden on news readers, who are required to register and maintain account information for each individual site, becomes a real problem. Especially considering the huge amount of cross-linking the blogosphere offers to online news sites.

I think that a central service providing single sign-on for these sites would be very popular. At least for me:) and a few colleagues of mine as well. A different approach could be to integrate this capability into news aggregators, who serve the links to those news sites.

I would love to assist, if I can, anyone who cares to do something about this.

Web-based Apps Offline Capability

It seems that web-based applications can today accomplish the most extensive and complex tasks that were previously possible only with locally installed software. One aspect that has not been addressed by either or Firefox, the leading web browsing software, is working offline. Although Microsoft has mentioned it in the past under the hat of the Smart Client architecture, current products do not show any sign of support.

Offline capability is not trivial for browsers to implement, due to the unique needs of each application and the inability to apply a generic approach that supports all those different needs.

Once implemented, it will remove many barriers to network computing and enable full productivity over the web. R&D and IT maintenance costs will be lower, and vendor lock-in with installed software will be removed, or at least weakened. This futuristic scenario is the dream of many vendors who wish to play and win based on the quality of their software and features, and not on sunk-cost decisions.

A hybrid solution to this problem could work by downloading a reduced copy of the web application locally (it could be downloaded by the browser as part of the current “work offline” implementation). When the user is disconnected from the net, the web browser communicates with the local version of the application, which has limited but complete functionality. The offline “copy” of the application would be developed by the web application developers. Once connected back to the net, the browser can transmit the accumulated changes to the site as part of the startup phase (implemented in a secure manner, of course), where inconsistencies and confirmations can be displayed to the connected user.
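The sync step in this hybrid scheme can be sketched as a small change queue: edits made against the local copy are replayed on reconnect, and anything that also changed on the server side is flagged for the user to confirm. Class and method names are illustrative, not an actual browser API.

```python
# Toy model of the offline/online reconciliation described above.

class OfflineStore:
    def __init__(self, server_state):
        self.server = dict(server_state)  # stands in for the remote web app
        self.base = dict(server_state)    # snapshot taken when going offline
        self.pending = []                 # edits queued while disconnected

    def edit_offline(self, key, value):
        """Record a change made against the local reduced copy."""
        self.pending.append((key, value))

    def reconnect(self):
        """Replay queued edits; return keys that changed on both sides."""
        conflicts = []
        for key, value in self.pending:
            if self.server.get(key) != self.base.get(key):
                conflicts.append(key)     # server moved on: ask the user
            else:
                self.server[key] = value  # safe to apply automatically
        self.pending.clear()
        return conflicts

store = OfflineStore({"doc": "v1"})
store.edit_offline("doc", "v2")
print(store.reconnect())    # [] – no conflict, change applied
print(store.server["doc"])  # v2
```

The conflict list is exactly the "inconsistencies and confirmation" screen mentioned above: everything else merges silently.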

This can be implemented easily within current browser frameworks and without many incompatibility issues.

If done by , who tend to implement new capabilities faster, a very strong competitive edge will emerge.

Suggested Innovation in Structured Feed Publishing and Aggregation

Yesterday I wrote, in Structured Corporate Feeds?, about the news that Microsoft opened their tech support knowledgebase via RSS, with a new concept of structured RSS, and I thought to elaborate on it further to make the idea more useful.

From an infrastructure-tools perspective, RSS feeds today enable an efficient mechanism for detecting changes in distributed content. They mainly serve personal publishing: blogging tools on the publishers' end, news-reading tools on the readers' end (and, of course, other aggregation and indexing services that serve information identification and classification purposes).

Exposing corporate systems such as the corporate tech support knowledgebase via RSS creates a new pattern of information transfer that has several unique attributes:

1) The corporate information is transformed from a structured format (the corporate database system) into a stream of plain-text updates with one structured attribute: the date of publication.

2) The extensive meaning embedded in the corporate structure is lost due to this transformation.

3) Relationships between pieces of information that existed in the corporate system are lost as well.

Even in the simple case of knowledge base updates: if we consider the experience of someone reading this information via the knowledge base browser Microsoft provides, the experience itself is structured, even if that is not very noticeable, while reading it via a plain news reader that presents it as plain text loses information and capabilities along the way. Capabilities such as archiving, classification, and search were embedded in the same information while it resided in the corporate system.

This loss of structure is maybe not very dramatic at this stage of the , where only a fraction of corporate information systems are tunneled via RSS. If this practice is adopted widely, then restructuring behavior on the other end will, and should, emerge. The transformation from structured content into plain text and back into a structured format at the end (possibly a different level of restructuring, controlled by the publisher) can be implemented using many methods. One such method is a mapping of fields that would be part of the XML content and re-interpreted on the other end.
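The field-mapping method in the last sentence can be sketched as follows: the publisher embeds the structured fields as extra XML elements inside each item, and the consumer re-interprets them into a record using an agreed mapping. The field names (`kb_id`, `product`, `severity`) are hypothetical.

```python
# Restore structure on the consuming end: pull agreed-upon custom fields
# out of an RSS item and rebuild the record the corporate system had.
import xml.etree.ElementTree as ET

FIELD_MAP = ["kb_id", "product", "severity"]  # mapping agreed with publisher

def item_to_record(item_xml):
    item = ET.fromstring(item_xml)
    record = {"title": item.findtext("title"),
              "date": item.findtext("pubDate")}
    for field in FIELD_MAP:
        record[field] = item.findtext(field)  # re-interpret structured field
    return record

item = """<item>
  <title>Fix for printing bug</title>
  <pubDate>2005-02-10</pubDate>
  <kb_id>KB123456</kb_id>
  <product>Office</product>
  <severity>low</severity>
</item>"""

print(item_to_record(item)["kb_id"])  # KB123456
```

With the record restored, archiving, classification, and search work again on the receiving end, which is exactly what the plain-text stream had lost.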

Different uses of RSS and corporate systems that I can think of are: CRM sales figures to partners and vice versa, Investor relations info to stock markets and stock information consumers, Updates of product features to partners and customers, Security fixes to customers, Press releases to the public, All PRM implementations (Partnership Relationship Management), Internal operational indicators to corporate performance systems and more…

RSS can also be elevated to include binary code updates, and that can be a platform for many new s as well.

As a final note, I think that the basic nature of RSS feeds, being served in a machine-readable format, creates many opportunities for innovation beyond basic change detection and aggregation mechanisms, including innovation such as automatic language translation, for example.

If anyone is interested in digging into one of these concepts and making something useful out of it, I would love to assist if I can.

Is Microsoft the Big Bad Wolf?

I write often about strategies and competition related to Microsoft, and I wanted to clarify why I relate to this company so much, and why my advice is not on their side, especially when it comes to competition with new ventures:

1) To put it up front: I think that Microsoft is an excellent company, with many strong core capabilities and deep strategic thinking, which makes them a very worthy competitor.

2) Many times their core competencies are too tough for new ventures to handle, and the immediate response of "losing the fight" before even engaging becomes prevalent. This state of mind is a killer for innovation, and innovation is something personally dear to me. Microsoft, of course, is not to blame for doing a great job for themselves; actually, we should thank them for raising the bar.

3) Microsoft, according to my experience and not based on "hard" statistics, is the number-one news generator in the IT industry. They do a lot and they talk a lot about what they do; this makes it only natural to relate to them often.

That's all.

Why CEOs should blog – my personal experience

relates to a USAToday article on blogging CEOs in the post Blogs and Feeds: CEO Blogs — Where Angels(?) Fear to Tread. I am the CEO of a new venture company and have been a blogger for the last four months, and I wanted to write down what I get from it:

1) Feedback on my thoughts – as a CEO, and as a person in general, I have different opinions on various subjects, whether related to my industry, other industries, innovation, regulation, or other market conditions. Through blogging, these opinions get the most candid responses, which truly reflect what other people think. I'd rather listen to these thoughts than not.

2) As a blogger, people are more open towards me and less intimidated by the CEO curtain – a plus for me.

3) People know that I am accessible and that they can communicate with me any time they see fit. This gives customers/partners/prospects more confidence in the company and its products and services. Good for business.

4) As for the fear of making mistakes – I write the content myself, so I control what to write and what not to write, and mistakenly exposing secrets is no higher a risk than the one that exists in other public engagements. Actually, it is lower. I do agree that this platform lets you talk more than I used to before it, and that naturally creates a risk of making mistakes.

5) An audience that hears me – after building your audience, you have an unparalleled public relations tool that is accessible in real time for any announcement.

6) Time – the only disadvantage I find in blogging is the time it takes to do it right. I hope that over time blogging tools will become more convenient and easier to use, to save some of it.

Although I am talking with the hat of a new-venture CEO, and not as the CEO of an established company, I do think these points are valid for any kind of CEO and other top management.

Software As a Service – Perspectives

My perspectives on the important subject of software as a service, as presented on Venture Chronicles by Jeff Nolan: Software as a Service – Part 1.

Evolving Relationships – technology is nowadays an integral part of businesses in all sectors, and the general trend of evolving and de-coupling the dependencies enterprise customers have on technology vendors shows its signs also in the evolution of how technology is delivered. From the customer’s perspective, the financial alignment of paying for software as a service, and consuming it in a less intrusive delivery method (, , and more vs. hardcore deployment), presents a step forward in the relationship with s. Different methods of delivering software as a service present different advancements in specific industries’ vendor-customer relationships.

Independence Illusion – I think that customers (who use software as a service) who believe they can stop working with one vendor one day and start working with a different vendor the day after are not taking into consideration: migration costs of historic information into the new system, new training costs, general adoption by enterprise users (will they like it?), level of confidence in the old system’s robustness, and more. In real life, it isn’t easy to switch between vendors, even if you have the clause in the agreement.

time – I think this new model enables vendors to bring applications to market quicker but “dirtier”. Early enterprise adopters who don’t care too much for , and can enjoy more s in a given time frame, but bringing an to the level where a mainstream customer will be satisfied sometimes requires even more time for development and packaging, due to the extra thinking that needs to be done (how do we implement an integrated security scheme with a third party, for example?).