November 17, 2009 at 2:45 pm #85672
I would offer that this blog posting on Monday the 16th is going to heat up the blogosphere big time…
From Tim O’Reilly’s blog
Title: The War For the Web
On Friday, my latest tweet was automatically posted to my Facebook news feed, as always. But this time, Tom Scoville noticed a difference: the link in the posting was no longer active.
It turns out that a lot of other people had noticed this too. Mashable wrote about the problem on Saturday morning: Facebook Unlinks Your Twitter Links.
If you’re posting web links (Bit.ly, TinyURL) to your Twitter feed and using the Twitter Facebook app to share those updates on Facebook too, none of those links are hyperlinked. Your friends will need to copy and paste the links into a browser to make them work.
If this is a design decision on Facebook’s part, it’s an extremely odd one: we’d like to think it’s an inconvenient bug, and we have a mail in to Facebook to check. Suffice to say, the issue is site-wide: it’s not just you.
As it turns out, it wasn’t just links imported from Twitter. All outbound links were temporarily disabled, unless users explicitly added them as links via an “attach” dialogue. I went to Facebook, and tried posting a link to this blog directly in my status feed, and saw the same behavior: links were no longer automatically made clickable. You can see that in the image that is the destination of the first link in this piece.
The problem was quickly fixed, with URLs in status updates now automatically linkified again. The consensus was that it was in fact a bug, but it’s little surprise that people suspected otherwise, given the increasing amount of effort Facebook puts into warning people that they are leaving Facebook for the big bad unsafe Internet.
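The auto-linkification Facebook briefly turned off is a simple text transformation. As a rough illustration only (a minimal sketch, not Facebook's actual implementation — real linkifiers handle far more edge cases), it amounts to something like:

```python
import re

# Deliberately simplified URL matcher: real linkifiers also handle
# trailing punctuation, bare domains, internationalized names, etc.
URL_RE = re.compile(r'(https?://[^\s<>"]+)')

def linkify(text: str) -> str:
    """Wrap plain-text URLs in anchor tags, leaving other text alone."""
    return URL_RE.sub(r'<a href="\1">\1</a>', text)
```

Turning this substitution off is exactly what users saw: `linkify("see http://radar.oreilly.com today")` becomes clickable HTML, while the raw status text stays inert.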
All of this is well-intentioned, I’m sure. After all, Facebook is attempting to put in place privacy controls that allow its users to manage the visibility of their information — and the Web’s expectation of universal visibility is not necessarily the best default for much of the information posted on Facebook. But let’s not kid ourselves: Facebook is a new kind of web site (or an old kind redux), a world of its own, playing by different rules.
But this isn’t just about Facebook.
The Apple iPhone is the hottest web access device around, and like Facebook, while it connects to the web, it plays by a different set of rules. Anyone can put up a website, or launch a new Windows or Mac OS X or Linux application, without anyone’s permission. But put an app onto the iPhone? That requires Apple’s blessing.
There is one glaring loophole: anyone can create a web application, which any user can save as a clickable application on their phone. But these web applications have limits – there are key capabilities of the phone that are not accessible to web applications. HTML 5 can introduce all the new application-like features it wants, but they will work only for web applications, and can’t access key aspects of the phone without Apple’s permission. And as we saw earlier this year with Apple’s rejection of the Google Voice application, Apple isn’t shy about blocking applications that it considers threatening to its core business, or that of its partners.
And now, of course, we see the latest salvo in the war against the accepted rules of interoperability on the web: Rupert Murdoch’s threat to take the Wall Street Journal out of the Google search index. While most people have repeated the existing wisdom that to do so would be suicide for the Journal, a few contrarian observers have noted the leverage Murdoch holds. Mark Cuban argues that Twitter now trumps search engines when it comes to breaking news. Even more provocatively, Jason Calacanis suggested, a few weeks before Murdoch’s announcement, that all big media companies need to do to cut Google off at the knees would be to block Google, while cutting an exclusive deal with Bing to be found only in Microsoft’s search index.
Of course, Google wouldn’t take that lying down, and would likely make its own exclusive deals, leading to a showdown that would make the browser wars of the 90s seem tame.
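Mechanically, the move Jason describes would be trivial for a publisher to execute. A hypothetical robots.txt sketch (an illustration only — the crawler tokens are the real 2009-era ones, but this is no one's actual policy):

```
# Hypothetical publisher policy: shut Googlebot out entirely...
User-agent: Googlebot
Disallow: /

# ...while leaving Microsoft's crawler (msnbot, as of 2009) free to index.
User-agent: msnbot
Allow: /

# Everyone else is excluded too, making the content Bing-exclusive.
User-agent: *
Disallow: /
```

The hard part, of course, is not the configuration but the business consequences that follow from it.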
I’m not saying that News Corp and other mainstream media publications would adopt Jason’s suggested strategy, or that it would work if they did, but it is becoming clear to me that we are heading into a bloody period of competition that could be extremely unfriendly to the interoperable web as we know it today.
If you’ve followed my thinking about Web 2.0 from the beginning, you know that I believe we are engaged in a long term project to build an internet operating system. (Check out the program for the first O’Reilly Emerging Technology Conference in 2002 (pdf).) In my talks over the years, I’ve argued that there are two models of operating system, which I have characterized as “One Ring to Rule Them All” and “Small Pieces Loosely Joined,” with the latter represented by a routing map of the Internet.
The first is the winner-takes-all world that we saw with Microsoft Windows on the PC, a world that promises simplicity and ease of use, but ends up diminishing user and developer choice as the operating system provider tightens its control.
The second is an operating system that works like the Internet itself, like the web, and like open source operating systems like Linux: a world that is admittedly less polished, less controlled, but one that is profoundly generative of new innovations because anyone can bring new ideas to the market without having to ask permission of anyone.
I’ve outlined a few of the ways that big players like Facebook, Apple, and News Corp are potentially breaking the “small pieces loosely joined” model of the Internet. But perhaps most threatening of all are the natural monopolies created by Web 2.0 network effects.
One of the points I’ve made repeatedly about Web 2.0 is that it is the design of systems that get better the more people use them, and that over time, such systems have a natural tendency towards monopoly.
And so we’ve grown used to a world with one dominant search engine, one dominant online encyclopedia, one dominant online retailer, one dominant auction site, one dominant online classified site, and we’ve been readying ourselves for one dominant social network.
But what happens when a company with one of these natural monopolies uses it to gain dominance in other, adjacent areas? I’ve been watching with a mixture of admiration and alarm as Google has taken their dominance in search and used it to take control of other, adjacent data-driven applications. I noted this first with speech recognition, but it’s had the biggest business impact so far in location-based services.
A few weeks ago, Google offered free turn-by-turn directions for Android phones. This is awesome news for consumers, who previously could get this only in dedicated GPS devices or with high-priced iPhone apps. But it’s also a sign of just how competitive the web is getting, and of just how powerful Google is becoming, because they understand that “data is the Intel Inside” of the next generation of computer applications.
Nokia paid $8 billion for NavTeq, the leading provider of such turn-by-turn directions. GPS-maker TomTom paid $3.7 billion for TeleAtlas, the #2 provider in the market. Google quietly built an equivalent service, and is now giving it away for free — but only to their own business partners. Everyone else still has to pay high fees to NavTeq and TeleAtlas. What’s more, Google upped the ante by adding in such features as Street View.
Most interestingly, this move sets the stage for the future competition between Google and Apple. (Bill Gurley’s analysis is an essential read.) Apple controls access to the dominant device of the mobile web; Google controls access to one of the most important mobile applications, and so far, is making it available for free only on Android. Google’s prowess is not just in search, but in mapping, speech recognition, automated translation, and other applications driven by huge, intelligent databases that only a few providers can offer. Microsoft and Nokia control comparable assets, but they too are Apple competitors, and unlike Google, their business model depends on selling access to those assets, not giving them away for free.
It could be that everyone will figure out how to play nicely with each other, and we’ll see a continuation of the interoperable web model we’ve enjoyed for the past two decades. But I’m betting that things are going to get ugly. We’re heading into a war for control of the web. And in the end, it’s more than that, it’s a war against the web as an interoperable platform. Instead, we’re facing the prospect of Facebook as the platform, Apple as the platform, Google as the platform, Amazon as the platform, where big companies slug it out until one is king of the hill.
And it’s time for developers to take a stand. If you don’t want a repeat of the PC era, place your bets now on open systems. Don’t wait till it’s too late.
P.S. One prediction: Microsoft will emerge as a champion of the open web platform, supporting interoperable web services from many independent players, much as IBM emerged as the leading enterprise backer of Linux.
©2005-2009, O’Reilly Media, Inc
November 17, 2009 at 2:59 pm #85674
From David Ascher’s blog
I’ve tended to limit my link referrals to my Twitter feed over the last year, but I wanted to advertise Tim O’Reilly’s latest post on this channel as well (it also feels great to have more than 100 characters to express myself!). Tim explains well what the new battlegrounds for the future of the web are. It’s a war that’s currently being fought with shiny discounted hardware, free access to proprietary data, and competing “privileged” interfaces to the web. The stakes are huge, but oh-so-hard for people to grasp, as much of the mechanics of who wins what depend on economics which are far removed from the battleground:
* People don’t pay transparently for mobile services or devices
* People don’t pay for online news (although some surveys indicate many would)
* People often end up “subscribing” to brands (Apple, Google, Facebook) and becoming brand consumers rather than active participants in their own digital life. That delegation of trust is often pragmatic, but it’s worrisome if unchecked by alternatives.
* The heterogeneity of the original internet can lead to an appearance of chaos, and many people prefer simpler, more uniform experiences. Both technical and psychological factors encourage centralization of services with single providers. Financially as well, “small, independent startups” have huge incentives to become part of one of the big centers of mass.
Finally, the huge psychological distance between the value of free services and the costs that fund them is one of the big topics that puzzles me. It applies to “how come I can get free map directions from Google but I have to pay to get them from TomTom?” as well as to “how can I convince my neighbors that electing so-and-so to office will mean more tax revenue overall, which in turn will mean better schools?”. In both cases, the number of steps between cost and service is huge, and coupling them more tightly would destroy the huge advantages that centralization and scale offer. (If I knew more about the derivatives crash I could make some pithy reference here.)
I agree with Tim that “If you don’t want a repeat of the PC era, place your bets now on open systems. Don’t wait till it’s too late.” I think he’d also agree that we need to think beyond code and copyright. That’s like going to war with trucks but no tanks. For the open, distributed, heterogeneous web to thrive, we need to incorporate thinking from a host of other fields, such as contract law, design, psychology, consumer behavior, brand marketing, and more. Figuring out how to engage thinkers and leaders in those fields is likely one of the critical, still missing steps.