Non-linearity of technology adoption
When I was in business school, I remember a class in which a partner from a big consulting firm described how his firm had done extensive research and concluded that broadband would never gain significant traction in the US without government subsidies. His primary evidence was a consumer survey asking people whether they were willing to pay for broadband access at various price points.
Of course the flaw in this reasoning is that, at the time, there weren’t many websites or apps that made good use of broadband. This was 2002 – before YouTube, Skype, Ajax-enabled web apps and so on. In the language of economics, broadband and broadband apps are complementary goods – the existence of one makes the other more valuable. Broadband didn’t have complements yet so it wasn’t that valuable.
Complement effects are one of the main reasons that technology adoption is non-linear. There are other reasons, including network effects, viral product features, and plain old faddishness.
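The compounding between a product and its complements can be sketched with a toy model (my own illustration with made-up parameters, not anything from the post): suppose a product's value to each consumer is a small standalone value plus a term that grows with the adopter base, and a consumer adopts once that value clears their personal threshold.

```python
def simulate_adoption(periods=10, standalone_value=0.1, complement_weight=1.2):
    """Toy threshold model of non-linear adoption.

    Consumers' adoption thresholds are uniform on [0, 1]. The product's
    value is standalone_value plus complement_weight times the current
    adoption share, so next period's share is that value, clipped to [0, 1].
    """
    share = 0.0
    history = [share]
    for _ in range(periods):
        value = standalone_value + complement_weight * share
        share = min(1.0, max(0.0, value))
        history.append(share)
    return history

# Early growth looks flat, then compounds: a straight line drawn through
# the first couple of data points badly underestimates where it ends up.
print([round(s, 3) for s in simulate_adoption()])
```

With these assumed parameters the per-period increments keep accelerating until the market saturates, which is exactly the shape a survey taken at period one or two would miss.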
Twitter has network effects – it is more valuable to me when more people use it. By opening up the API they also gained complement effects – there are tons of interesting Twitter-related products that make it more useful. Facebook also has network effects and with its app program and Facebook Connect gets complement effects.
You can understand a large portion of technology business strategy by understanding strategies around complements. One major point: companies generally try to reduce the price of their products' complements (Joel Spolsky has an excellent discussion of the topic here). If you think of the consumer as having a willingness to pay a fixed N for product A plus complementary product B, then each side is fighting for a bigger piece of the pie. This is why, for example, cable companies and content companies are constantly battling. It is also why Google wants open source operating systems to win, and for broadband to be cheap and ubiquitous.
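The fixed-pie argument is just arithmetic; here is a minimal sketch with invented numbers (the $100 device-plus-data-plan split is my own example, not from the post):

```python
def price_ceiling_for_a(total_willingness_to_pay, complement_price):
    """If the consumer will pay a fixed N for product A plus its complement
    B, A's maximum price is whatever B leaves on the table."""
    return max(0.0, float(total_willingness_to_pay) - complement_price)

# Suppose consumers will pay $100 total for a device plus its data plan.
# Every dollar the complement gets cheaper is a dollar A can capture:
print(price_ceiling_for_a(100, 60))  # 40.0
print(price_ceiling_for_a(100, 20))  # 80.0
```

This is why each side of a complementary pair wants the other side commoditized: a cheaper complement shifts the fixed pie toward your own product.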
Clay Christensen has a really interesting theory about how technology “value chains” evolve over time. They typically start out with a single company creating the whole thing, or most of it (think of early mobile phones or the PC). This is because early products require tight integration to squeeze out maximum performance and usability. Over time, standard “APIs” start to develop between layers, and the whole product gains performance/usability to spare. Thus the chain begins to stratify, and adjacent sections start fighting to commoditize one another. In the early days it’s not at all obvious which segments of the chain will win. That is why, for example, IBM let Microsoft own DOS – IBM bet on the hardware. One of Christensen’s interesting observations is that, in the steady state, you usually end up with alternating commoditized and non-commoditized segments of the chain.
Microsoft Windows & Office were the big non-commoditized winners of the PC era. Dell did very well precisely because it saw early on that hardware was becoming commoditized. In a commoditized market you can still make money, but your strategy should be based on lowering costs.
Be wary of analysts and consultants who draw lines to extrapolate technology trends. You are much better off thinking about complements, network effects, and studying how technology markets have evolved in the past.
Other Posts by Chris Dixon