by Brian Prentice | May 7, 2010 | 1 Comment
Two days ago, at the Web 2.0 Expo, Adobe’s Kevin Lynch (CTO and SVP, Experience & Technology Organization) made the following comments in relation to Apple and his view that they want to create a “walled garden” on the web:
If you look at what’s going on now, it’s like railroads in the 1800s. People were using different gauged rails. Your cars would literally not run on those rails. That’s counter to the web. The ‘rails’ now are companies forcing people to write for a particular OS, which has a high cost to switch.
Lynch’s analogy is very relevant. But not for the reasons he seems to be alluding to.
By 1860 there were seven different rail gauges in use in the United States. This was not the result of a conscious effort by railroads to avoid national standardization. Rail companies like the ones we learned about from playing Monopoly (Pennsylvania Railroad, The Baltimore & Ohio Railroad, Reading Railroad, Short Line) emerged by serving local markets. Different gauges were not a problem until the networks expanded to the point where carriages needed to cross different railroads. But by that time the problem was entrenched. And given the cost of changing the existing track, each railroad wanted everyone else to adopt their gauge as the standard.
To overcome the lack of gauge standards, cargo had to be unloaded from the carriages on one network to those on another. This 19th century version of infrastructure integration became an industry in its own right. Interestingly, it was these rail integrators that most strongly opposed the move towards any standard. After all, it meant the end to their livelihoods. In fact, moves to standardize the gauges led to riots in Erie, Pennsylvania in 1853 – a city where three different gauges converged.
Eventually the standard gauge of 4’8.5” emerged. But it didn’t happen because railroad tycoons agreed on a gauge standard. It happened because the U.S. Congress mandated that gauge for the new rail network known as the Union Pacific. The original congressional act gave President Lincoln the power to decide on the gauge, which he established at 5’. That power was rescinded by Congress, which then set the gauge standard to what it is today as a direct result of intense lobbying by the large Northeastern railways. So it looks like corporate lobbying was as effective in 19th century America as it is today.
So, what lessons do I take away from Lynch’s railway analogy:
- When infrastructure of any type emerges in a free market environment the only standard of any importance is ubiquity
- The benefits of being the ubiquitous standard and the costs of having to convert to it are enormous. Therefore, in a free market environment, companies will do whatever it takes to achieve the former and avoid the latter.
- Demand-side mandates are significantly more powerful in establishing standards than supply-side accords.
- The most significant demand-side mandates come from government-funded national infrastructure initiatives. But when governments establish standards in this way the politics of money and influence will have a significant impact on the decision making process.
As I would apply this to the Apple v. Adobe stoush:
- Apple and Adobe’s current disagreement has evolved over time.
- Each company made pragmatic design and technology decisions to secure the market viability of their products, and over time both were successful in establishing ubiquitous standards in their respective areas.
- Now that those areas are overlapping both are engaged in a battle to keep from having to cede their ubiquitous standard to the other.
- It’s unlikely that a demand-side mandate will emerge any time soon. But in the event it does, it is in the best interest of both Apple and Adobe to keep driving their standards against the other, as market clout will have an impact on the decisions made on the demand side.
- Establishing standards creates losers. And the biggest losers are the ones that act as brokers between incompatible systems. At the moment, Adobe looks a bit more like a broker than a system provider.
The bottom line here is that both Apple and Adobe are acting in an entirely predictable fashion in remarkably similar ways to achieve essentially the same objective because both organizations are equally affected by common market forces.
For either party to paint itself as somehow holier and more open than the other is really an insult to everyone’s collective intelligence.
Category: Uncategorized
by Brian Prentice | April 29, 2010 | 3 Comments
Remember the Android mobile device platform circa 2008? That was Google’s open source masterstroke. Backed by the Open Handset Alliance, it was set to commoditize the mobile operating system market and break down barriers between the mobile internet and their search-based advertising business.
Fast forward to 2010. Now we have the Android mobile device platform – lightning rod for patent infringement actions. This is Google’s open source miscalculation. It’s putting members of the Open Handset Alliance into a patent infringement purgatory, while showing to one and all the limitations of using open source software as a tool to commoditize your competitors’ business.
Now, before all you anti-patent advocates get worked up, let me just say your concerns are duly noted. But so long as software patents exist they need to be factored into the planning process for new open source projects. Particularly when their impact will be highly disruptive to well-established players. That, by the looks of things, simply didn’t happen with Android. So what…did everyone contributing to Android just forget to check? Maybe everyone thought it was someone else’s responsibility. Or could it be that everyone involved with Android just didn’t think patent infringement was going to be an issue this time around? It’s got to be something, because there was enough collective knowledge in that group to recognize that the mobile device space was covered by a maze of patents.
As I see it, I don’t think HTC’s patent problems with Android can be overstated. I think it puts a serious crimp in Android’s game plan and, by extension, will force a massive, industry-wide rethink on how to use open source strategically.
What’s important to note here is that neither Apple nor Microsoft has gone after Google, the center of gravity for the Android project. They’re going after the organizations using it – the manufacturers. Last week GigaOM posted some interesting insights into Android’s fragmentation problem. The patent issues are only going to exacerbate this. Each manufacturer will have their own strategy to deal with the patent claims of other organizations, as they each have their own patent portfolios (or lack thereof) to base those decisions on. By necessity then, Microsoft and Apple’s approach with Motorola will probably be very different.
The net result isn’t pretty. Google may be the centralizing agent for Android code development but IP risk assessment ends up being distributed and duplicated across the entire handset alliance. That means further problems for unified adoption because some manufacturers will likely be more exposed to potential infringement actions or licensing agreements. On the other hand, there’s an increased chance of manufacturer-specific Android forks emerging from those that have patents they wish to exploit in their own Android handsets. While this type of project diversity is generally welcomed in the open source world, it will only undermine Android as a viable development platform. And that outcome ultimately hurts everyone involved in the handset alliance.
The solution, of course, is to centralize IP risk assessment and IP risk mitigation. And that is the open source rethink I’m referring to. The community model for all future open source projects of any note can no longer be focused strictly on code development. From here on out it must also include IP management. That means patent assessors will need to join the ranks of code committers. Prior art detectives will be as valued as code contributors. Patent pools will need to be bound to projects and shared indemnification systems will need to be devised.
That raises a particular question. If one organization insists on playing a dominant role in a project’s community, how much of the responsibility for overseeing these IP issues falls on them? And that throws the spotlight for this whole situation right back on Google. What, if anything, is Google doing to assess the patent rights of others through their ongoing development process? If something is being done then why doesn’t there appear to be an indemnification system in place – even if it’s limited in nature? Is Google filing new patents connected to their Android development? If so, will they be pooled into a patent commons?
Google has chosen silence on this entire matter so it’s impossible to know what they’re thinking or planning. For everyone’s sake I sure hope there’s something cooking because this issue isn’t going to stop with Android. It’s now looking like there will be similar problems for Google’s VP8 codec. A lot of the good work Google has done in pushing open source will start to unravel if they’re seen as throwing the rest of the community to the lions while passively watching them get devoured.
by Brian Prentice | April 20, 2010 | 11 Comments
“The beginning of wisdom is a definition of terms.” – Socrates
An old saying that has particular relevance in the world of information technology. Our industry loves couching ideas in clever new terminology. Sometimes the attempts are laughable. But every so often a new term will resonate. Unfortunately in IT, it’s only a matter of time before everyone co-opts the term for their own purposes. And once a term ends up meaning anything it ultimately means nothing.
Such is the perilous state of the term “community.”
This should be of particular concern to anyone with an interest in the future of open source software because “community” has always been one of its most unique, albeit amorphous, qualities. Unfortunately there are many self-described open source advocates that are fuelling this term’s irrelevance.
If “community” ends up losing its meaning through commercially-driven misappropriation, then all efforts to expand the understanding and adoption of open source will be severely hampered. It’s like a free market version of Newspeak. The broader interests of open source require a stringent definition of community – one which captures its unique value in the context of information technology. With that baseline established we should all call BS on those that attempt to use it in any other way.
As I see it there are four working definitions of collective behaviour relevant to the world of information technology. They are:
- Crowd – a collection of people whose defining characteristic is proximity (physical or digital)
- Mob – see definition for crowd, add anger (thank you anonymous blog comments)
- Affiliation – a collection of people whose defining characteristic is a shared interest. Affiliations can emerge quickly and are more self-sustaining than crowds. A key reason is that there tends to be a focal point, sometimes a member, through which interaction flows or around which it centers. As long as the focal point of an affiliation participates then the rest of the affiliation can simply observe.
- Community – a collection of people whose defining characteristic is shared participation. Communities are ultimately geared towards some form of action. What drives the collective participation of the community is the individual vested interest of each member. Finding an intersection between members’ individual vested interests is highly complex, and that means communities are uniquely difficult to catalyze and sustain. Furthermore, communities are only viable when a critical mass of their membership contributes. Unlike an affiliation, too much observation kills a community.
There is a strong affinity between open source and community. The reason for this symbiosis is the open source license agreement. Modification and redistribution rights make it impossible for a single entity to control the underlying asset – the code. That simple fact creates a certain element of trust – or to be more accurate it removes a certain element of distrust – that can hamper shared participation in code development and maintenance. But to reiterate the point, that participation occurs because a community member has a vested interest in doing so. Maybe they need to undermine a competitor’s market position. Maybe they need to distribute the cost of building a key sub-component for their product. Maybe they just want to build their personal reputation.
However, defining and assessing communities must be based on the type of participation being shared. I would argue that a community of code contributors is a very different group than a community of people providing support and advice. The membership may overlap – but the nature of the shared participation is different. On that basis then, an open source project might have several different types of communities. But the simple application of an open source license agreement doesn’t guarantee any community will emerge, as many projects can testify.
On the other hand, a project’s users or a company’s customers do not constitute a community. That’s because there isn’t really any shared participation. This group is better defined as an affiliation (sometimes even a crowd). The same can be said about groups of channel partners like resellers or system integrators.
This is exactly where things are starting to get unstuck. Increasingly it is fashionable to use the term community to represent the sum total of all relationships a company has – regardless of who it is. In this context everyone has equal status, which is clearly nonsense. One additional user probably makes no difference to every other user. But another code contributor can have a significant impact.
That existing proprietary vendors are co-opting the term community in this way is not surprising. But I find it hard not to be cynical when I see a growing number of “open source vendors” – particularly those subscribing to the open-core business model – do this.
The open-core business model is designed around converting users of a free, functionally-reduced, “community-supported,” open source version to a proprietary, paid-for full version. This focus on conversion rates drives their perspective on community. In their world, anyone that can potentially be converted into a paying customer is a member of their “community.” But when we start applying a proper definition of community, as in a community of code contributors or a community of project support providers, then things don’t look so rosy for a lot of these open-core vendors. The fact is that many are simply failing to get any traction catalyzing these groups. So it’s convenient to lump everyone – including those who casually download the open source version of their product – into a big bucket they call community. It suits their marketing objectives. And it certainly doesn’t hurt their ability to shake some extra funding from the venture capital money tree.
Using the term community in this fashion is a little like pissing in the open source pool. It’s anti-social behavior that the perpetrator hopes will go unnoticed but which, if done by too many, soils the entire pool. If a community can be anyone and everyone, then it devalues the honest-to-goodness communities that have emerged around some, but not all, open source projects. And make no mistake – catalyzing and sustaining a community is half magic and half hard work. When it happens it needs to be uniquely recognized and rewarded.
So sorry for being pedantic but if community is going to have any meaning it must be limited to collective behavior based on shared participation. And I’m prepared to call BS if a vendor does otherwise.
I hope you do too!
by Brian Prentice | April 1, 2010 | 11 Comments
In a move set to rock the information technology industry, Microsoft has announced plans to migrate the world-famous Microsoft Windows computer operating system to an open source licensing agreement and provide it free of charge to end users, businesses and original equipment manufacturers (known as OEM).
The announcement was made today by Microsoft CEO, Steve Ballmer. Flanked by celebrity life coach and spiritual advisor, Eckhart Tolle, Ballmer laid out the reasons for this momentous change.
“Over the last couple of years I’ve been working through some challenging personal growth issues with Eckhart,” said Ballmer. “I came to realize that this whole anti-open source thing of mine was a blocking mechanism. What I know now is that calling Linux a cancer or questioning open source innovation was a convenient way for me to avoid looking within and finding my inner stillness. I had lost touch with myself and, with it, I had lost myself in the world.”
Ballmer handed his talking stick to Tolle who continued, “the less you open your heart to others, the more your heart suffers. Steve realized that the less open his company’s code was, the more that code would suffer. This announcement today that Microsoft Windows will become a free, open source offering for people around the world is an amazing personal breakthrough for Steve.” The two embraced, tears clearly visible on both men’s faces.
The mood across Microsoft’s Redmond campus has been one of elation. According to Tobias O’Toole, Group Product Alignment, Alliance & Evangelism Marketing Manager for Microsoft User Experience Design Team, “We’ve all known that Steve’s over-exuberant, billionaire capitalist antics were just a way for him to avoid getting in touch with the shy, sensitive man we know he is. We’re all just so delighted he’s discovered his inner angel and extended a hand of peace and love to the open source community and the world at large. After all, I think it’s safe to say that every employee at Microsoft is a huge fan of open source software.”
And there’s confirmation now from Bill Gates that the entire Microsoft vs. open source battle of the last decade was concocted to achieve exactly this outcome. Sitting next to a clean-shaven, and noticeably thinner Richard Stallman, Gates explained, “did you ever see that Adam Sandler movie ‘Anger Management?’ It’s about a guy who’s set up by his girlfriend and this apparently crazy doctor named Buddy Rydell in an elaborate scheme to help him confront his feelings of inadequacy. Well, we’ve basically been doing the same thing with Steve. It’s Richard here, an old friend from my Harvard days, who’s been playing the part of Buddy Rydell. He really knew how to push Steve’s buttons and I’m eternally grateful he did.”
Said Stallman, “it’s been a trying couple of years and you know, I really should be glad that Steve is fully blissed out. But honestly, all I’ve been thinking about is how great it will be to drop this Jerry Garcia look. The beard has been itching like crazy for the last ten years. And don’t even start me on how embarrassing it was to eat a plate of spaghetti bolognese in public. Heck, if it wasn’t for the generous Microsoft share grant my good friend Bill provided me back in ‘93 I probably would have thrown in the towel long ago,” laughed Stallman.
“Microsoft is a company dedicated to personal growth and enlightenment, whether it’s with our customers or our employees,” said Ballmer after regaining his composure. “I guess I’ve been so focused on seeing this happen with others that I just forget to give myself a little me-time and to realize that I deserve my share of happiness. But through personal growth has come an understanding on my part that the big, broad open source community is as committed to creating a loving and harmonious world as Microsoft is. There’s no reason to fight anymore. So, today I’m also announcing a ‘Summit of Love.’ Next week I’ll be meeting with Linus Torvalds, Jim Whitehurst and the entire board of the OSI to create a single, unified operating system that will be free and open source.” Holding up copies of the press release Ballmer reiterated, “there will be peace in our time.”
But financial analysts were wary. Asked how this move would impact the gross margin central to Microsoft’s market capitalization, Ballmer shot back through another stream of tears. “What I know now is that when you give love you get it back. Maybe not right away. Maybe not in this lifetime! But it comes back. I’m not going to be deterred from doing the right thing just because a quarterly-driven accounting system can’t properly measure that simple reality.”
Microsoft’s share price has doubled since the news broke, confirming that Ballmer’s sentiments were aligned with market expectations.
Update – April Fools
by Brian Prentice | March 31, 2010 | 19 Comments
Attention corporate IT customers – this blog post is for you. If you haven’t already had an “open-core” software vendor knocking on your door you probably will be soon. It’s important that you’re able to separate the hype from the substance when you hear them talk of their innovative business model.
Open Core, if you’re not aware, is being pushed by many start up companies as a new approach to delivering products combining open source and proprietary software. There may be others nodding in agreement that this is in fact a dazzling new business model. Regardless of the way that vendor struts, you should trust your instincts. You’ll soon realize that the fabric making up the garb of their stated innovation is a fabrication. They’ll then be exposed for exactly who they are – a good old-fashioned software vendor. Just like every other one you’ve come to know.
The open-core emperor has no clothes.
Let’s keep in mind that when we start talking about business models, what matters is not how a vendor generates incremental revenue but how you generate incremental value. In order to understand whether that’s going to happen or not we should start with the foundation of the open-core model – the distinction between a full-feature proprietary version and a free, open-source functional subset of that offering.
Now, if this sounds familiar to you then you’d be correct. That’s called “freemium” in the consumer world. In the corporate market, attempting to broaden the appeal of a software solution by paring back the functional footprint into a low cost alternative has been a staple mid-market strategy of enterprise software companies for over a decade. Just think of IBM’s Express product portfolio or Siebel Professional Edition. Unfortunately, these product strategies have largely fallen well short of expectations. By and large, organizations want products that represent a nuanced understanding of their needs rather than a product manager’s arbitrary functional pruning process.
And arbitrary is the operative word. A couple of years ago I looked at a number of open-core providers (if you’re a Gartner client you can refer to the research note – “Commercial Open Source – Is All That Glitters Usually Sold?”) and found that none of them had a consistent decision framework in place nor any publicly available covenants that explain to potential users the criteria they use in determining which new capabilities will be made available only in their commercial version. Furthermore I have personally been told by one such open-core provider that the reason a new feature, which was clearly of value to all users, was only being provided in the paid-for, proprietary version was that they “had investors they needed to satisfy.”
Besides, what you already know is that this type of functional separation creates what Gartner refers to as a “super-size trigger.” The minute you require a feature only available in the full version, the entirety of your commitment needs to be scaled up and re-costed to the full-cost offering. If you’re like most corporate IT customers I speak to – at least the ones considering solutions from open core providers – then chances are you’ll be starting your assessment based on their full version product rather than the free open source offering. But on the outside chance that you’re considering starting off with a community-supported open source version, then you should realize that you also face a relationship super-size trigger. Should a functional disparity between what you need and what’s available drive you to the full version, you’ll then be linked to the provider through a proprietary license agreement. Either way, any direct value from an open source license is lost to you.
This is where the hype starts to creep in. The idea that a functionally complete, proprietary solution is somehow unique because it was built atop an open source base fails to recognize the fact that many proprietary solutions are being built using open source components. Open-core providers deserve no brownie points from you because ultimately the end result is the same. You’re licensing a proprietary solution from an organization which builds it with free open source components. The direction in which that happens – either open-to-proprietary or proprietary-to-open – is meaningless to you.
That is, of course, unless you are prepared to forgo the benefits of the proprietary solution and opt for the open source offering. This entails committing to that project’s community for support and code contributions while reciprocating yourselves. But that’s highly unlikely for most corporate IT users. The occasional piece of community supported assistance is common and a code contribution every now and then is not unreasonable. But what we know is that corporate users prefer having a vendor provide support and code maintenance services for things like operating systems, databases, business intelligence software, enterprise content management and other key IT solutions. As of 2006, 97% of Linux users were under a service contract from an external service provider. Of course, these types of support and maintenance agreements are available from open core vendors – all as part of the paid proprietary offering.
Even the very definition of “community” is being adapted to suit the open core narrative. What has largely interested the corporate IT world is the concept of a community as a collection of code contributors working outside a normal project/company structure. But now open core providers are extending the term community to include users and even resellers. That, of course, is what we’ve all been calling a software ecosystem for the last twenty years. Same old, same old – just co-opted terminology used to describe it.
You see, when you start peeling back some of the value propositions being attached to open core business models, what starts to appear is a picture of a bog standard software provider trying to use the latest phraseology to cut through the noise of a crowded marketplace. Be clear, there’s nothing nefarious going on with open core. It’s just that there’s nothing particularly new or innovative going on either.
I’m pretty darn sure that most corporate IT users will figure this out quickly, if they haven’t already done so. And when that reality starts sinking in with the open core providers I have a feeling we’ll be hearing a whole lot less about this business model.
Category: The Future of Ownership - IP & IT Industry
by Brian Prentice | March 23, 2010 | 8 Comments
Last week I had the pleasure to participate at the Open Source Business Conference in San Francisco (thank you Matt Asay). I ran into a lot of very smart and very committed people and had some deep and meaningful conversations about the future of open source. Regardless, I was left with a deep impression that the thinking around open source software in Silicon Valley is on a whole different wavelength than the rest of the world.
The audience at OSBC seemed mostly composed of software vendors – either the established vendors like Microsoft, SAP and Adobe or the smaller open source-specific startups.
As expected, each group was positioning open source in their own way. For the established vendors, open source was being positioned as basically an extension to their existing business model. That’s entirely predictable and a bit boring. After all, these vendors are always co-opting the shifting IT landscape in order to say “ya, we do that too.” But in the context of open source this is a huge breakthrough. For years most of these vendors saw it as something that would disrupt their business. Now they’re comfortable enough to be able to say about open source, “ya, we do that too.”
On the other hand the open source-specific startups, particularly the denizens of Silicon Valley, were pushing the point that open source was much more than a simple extension to existing software vendor business models. Their view was that open source required a different approach. The predominant model being advocated was “open-core.” But what really struck me is that this commitment to a “new approach” seemed largely based on obtaining financing. Their focus was on appealing to venture capitalists – not end users.
That wouldn’t be a problem in and of itself if it weren’t for the fact that there’s a yawning gap between the value open source provides a venture capitalist (VC) and what it provides an end user.
The VC community’s interest in open source, as I see it, is based on the view that a project’s associated community will lower development and sales costs. That allows them to build an attractive proposition when selling the company. And their current thinking on the best way to do this is through open-core business models. The net result is a certain open source groupthink. First there are open source startups that want to get financing. They’re the ones trying to apply open-core licensing to their business strategy in order to attract VCs. Then there is the group that already have their funding. They’re the ones who are trying to convince everyone, especially themselves and their VC partners, that open-core is all it’s cracked up to be.
But there are a couple of problems. The first is that open-core is largely a re-tread of tired, old SMB packaging strategies which have almost universally failed in the market. Businesses don’t blindly jump into a free open source offering and then upgrade to a full-cost, proprietary product like it was some stimulus-response behaviour. From my experience they assess these products, from day one, based on the full version. That eliminates any sales benefit from the open source component of the overall strategy which, in turn, makes these open-core vendors just like any other small software provider slugging it out in a crowded market space. Strike one! Furthermore, I’m not sure most open-core business models have been successful in building large external code contributions. Strike two!
But at the end of the day these flaws will be mostly borne by the open source-specific startups, not their VC partners. As we know, the venture capital model accepts a certain failure rate. They really only need a handful of their investments to pay off. Those lucky few that can get a license run rate off the back of a community (which I heard described as partners, customers, and some mysterious non-aligned code contributors – basically just an extension of a good old software ecosystem) will be sold off for a handsome profit. And who are those likely buyers? Increasingly it appears to be the very same established vendor community that are saying “ya, I do open source too!”
So much for a compelling new business model! Strike three, you’re outta here.
So strong is this apparent pull for funding that founders of these open source-specific startups are willing participants in this open source crap shoot. They’re all hoping to be the one that makes it big when odds are they’ll be one of the losers – losing along with it their time, energy, and youth.
This, of course, is the reality distortion field that Silicon Valley is so famous for. It’s being brought to you by the same people who, a decade ago, were telling us about the riches that would flow from commercializing eyeballs. That this reality distortion field has extended to open source is not surprising. But that it’s being wrapped up with so much sanctimonious debate is what’s disappointing.
But it’s not all bad. There are a number of open source providers that have not ingratiated themselves with Sand Hill Road, nor plan to. They tend to be located outside Silicon Valley and have largely grown organically. When I speak to these guys they’re far less dogmatic about the inherent value of open source, because dogma doesn’t wash with business users. If open source is going to disrupt the business models of the established software vendors, I think it’s going to be this group that figures out how.
Category: The Future of Ownership - IP & IT Industry Tags:
by Brian Prentice | March 14, 2010 | 2 Comments
Nick Wingfield wrote an entertaining piece for the Wall Street Journal describing the challenges facing Microsoft employees who have chosen to use an Apple iPhone. As Wingfield points out, iPhones are in plain sight on the Redmond campus. But that doesn’t sit well with many Microsoft executives, who see this as little different than Coca-Cola employees drinking a Pepsi with lunch, Ford employees pulling into the car park in shiny new Hondas, or US government employees hanging a portrait of Kim Jong-il in their offices.
The most famous incident occurred at an all-company meeting last year. When an employee took a picture of Steve Ballmer with his iPhone, he was singled out for some gentle, albeit public, ribbing, with Ballmer grabbing the phone and pretending to stomp on it (no iPhone was harmed in the making of his point).
All this reminds me of my personal experience as a Microsoft employee when I insisted on using a Palm Pilot. So, apparently, things haven’t changed much inside Microsoft. But externally Microsoft is a very different company. And what makes them different is their strategic interest in cross-licensing their IP to competitors.
So, I’m left trying to reconcile the examples Wingfield highlights of executive humbuggery about the iPhone against the statements of Marshall Phelps, Microsoft’s corporate Vice President for Intellectual Property and Strategy. In his book, “Burning the Ships: Intellectual Property and the Transformation of Microsoft,” Phelps says:
“The IP collaborations, many forged by top Microsoft executives using IP as the glue to cement the deals together – have enabled Microsoft to establish valuable joint product development with other firms…In short, these IP-enabled collaborations have led to greater success for Microsoft in the marketplace, materially enhanced the company’s bottom line and advanced the interests of our shareholders.”
Phelps’s observation certainly applies to the iPhone, given the ActiveSync technology that Apple has licensed from Microsoft. The iPhone would be a far less compelling device for corporate users without its seamless integration with Microsoft Exchange Server.
So why, then, didn’t Steve Ballmer, upon seeing the guy taking his photo with an iPhone, grab the phone, raise it up and tell the audience, “Folks, did you know that we’re making money whenever one of these things sells? Our presence in the mobile phone market isn’t limited to Windows Phone 7. It’s pervasive due to our domination of enterprise collaboration.”
That seems like the message Microsoft’s CEO should have been sending. Clearly there are some conflicting perspectives.
But don’t be too hasty in your judgement of Microsoft because those conflicting perspectives will be broadly shared by most of us in the near future.
The underlying challenge stems from the way we understand the nature of intellectual property in software. Traditionally IP, be it copyright or patent, has been seen as a means to an end. It’s stuff that’s embedded into a product. But increasingly IP is being seen as having its own distinct value. It’s becoming the product. Fundamentally software is still licensed, but increasingly it’s being done on a more granular scale and with organizations that might previously have been seen as competitors. All of this is bound to the growth in open innovation.
This will pose a serious disruption to the status quo. Established vendor sales strategies will be upended, and so will enterprise sourcing practices. It will force a rethink of many OEM relationships. Vendor-user relationships will become bi-directional, as it’s likely software vendors will cross-license or directly license IP from the organizations they consider today to be their customers. This, in turn, will force enterprise IT organizations to ponder whether they are, in fact, software vendors themselves.
Is it any wonder that Microsoft executives are sometimes operating a little at cross purposes?
Microsoft is on the bleeding edge of what it means to be a platform provider in the new millennium. A software “product” has always been a packaging construct – a way to draw a meaningful boundary around a code base. In a world of highly granular IP licensing, platform success can’t be achieved by focusing on selling a set of integrated products to customers. Rather it will be based on achieving technical ubiquity by intelligently licensing IP to anyone and everyone.
Category: The Future of Ownership - IP & IT Industry Tags:
by Brian Prentice | March 9, 2010 | 2 Comments
I’m gobsmacked. Flummoxed even.
Sometimes I run across things I have to read a couple of times just to make sure I’m not hallucinating. A recent set of utterances by the International Intellectual Property Alliance (IIPA) has been one such case.
It appears that the IIPA has recommended that Indonesia be given Special 301 status by the US Trade Representative, in part, because the country is “…endorsing the use and adoption of open source software within government organizations.” Now, so we’re all clear, the annual Special 301 report, according to Intellectual Property Watch:
“…unilaterally evaluates US trading partners on the effectiveness and adequacy of their intellectual property rights protections to combat counterfeiting, internet and digital piracy, or intellectual property as it relates to health policy.”
In fairness to IIPA, this was one of several issues that drove their recommendation. Although I’m assuming that if it’s important enough to highlight so prominently in a list of grievances it must be important enough for the IIPA to see as a problem in its own right. And that would mean a lot of other countries around the world would fall foul of IIPA’s concerns. Hmm, I wonder if the State of California can be given Special 301 status?
Regardless, you might be scratching your head wondering exactly how a circular endorsing the use of open source software would run a country afoul of the US government’s stated objectives. Well, the answer according to IIPA is because this Indonesian circular:
“…simply weakens the software industry and undermines its long-term competitiveness by creating an artificial preference for companies offering open source software and related services, even as it denies many legitimate companies access to the government market. Rather than fostering a system that will allow users to benefit from the best solution available in the market, irrespective of the development model, it encourages a mindset that does not give due consideration to the value to intellectual creations. As such, it fails to build respect for intellectual property rights and also limits the ability of government or public-sector customers (e.g., State-owned enterprise) to choose the best solutions to meet the needs of their organizations and the Indonesian people. It also amounts to a significant market access barrier for the software industry.”
Oh really? Which software industry is the IIPA referring to exactly? Does that include Red Hat, which by IIPA’s criteria must be an illegitimate software company? Does that include all the American software companies frantically figuring out how to embed open source software into their proprietary offerings? How about cloud computing providers that, by-and-large, rely on open source software for their underlying infrastructure? Let’s not forget about the professional services industry, which increasingly sees open source as a key business enabler. The list goes on.
And pray tell, how does a clearly defined copyright agreement – which is what open source software is predicated on – “…encourage a mindset that does not give due consideration to the value of intellectual creation.”
Here’s a key point that the IIPA seems not to understand. Increasingly a key criterion used in deciding what “the best solution available in the market” actually is, is the absence of entity-specific IP control. And that feature is only available with open source software. This is a massively important issue for governments around the world as they look at a software industry largely dominated by US firms. Furthermore, as I’ve noted before, the failure of US software firms to craft variable global pricing models is an invitation to foreign governments with weak currencies relative to the US$ to craft these policies.
But let’s parse this out from the Indonesian Government’s perspective. In order to avoid the ire of the IIPA, it will need to avoid the following actions, which would seemingly be quite acceptable in the US:
- Use Red Hat Linux in their data centers
- Utilize MySQL in e-government sites
- Run Oracle databases on Oracle Enterprise Linux
- Build government private cloud offerings on anything but proprietary software
- Use Google Chrome, Apple Safari or Firefox to browse the web (each either being open source, or relying heavily on open source components). Even recent versions of Internet Explorer are questionable, as Microsoft has been making some features available via Creative Commons licenses.
What is striking is how each of these scenarios is clearly not in the best interest of the software industry. And that begs the question – who are the IIPA and why are they presuming to speak on behalf of the software industry?
The IIPA appears to be an umbrella organization covering a number of member associations like the Association of American Publishers (AAP), the Motion Picture Association of America (MPAA) and the controversial Recording Industry Association of America (RIAA). The only member association that has any meaningful link to the IT industry is the Business Software Alliance (BSA). This is an organization whose primary focus is software piracy. Now, if the BSA is somehow linking open source software to software piracy, that is a non sequitur of monumental proportions.
Even worse is the damage that this type of policy advocacy is doing to BSA’s own members’ businesses and reputations. It runs contrary to the open source efforts of some members, like Apple with WebKit or Cisco with Etch. It undermines the credibility of some members’ efforts to embrace open source software and open source principles, like Microsoft’s OSI approval of the Ms-PL and Ms-RL licenses. It is detrimental to the long term revenue outlook of members like Intel. And it makes an absolute mockery of the enormous contributions of IBM across many different open source projects.
Like I said…absolutely gobsmacking.
Category: The Future of Ownership - IP & IT Industry Tags:
by Brian Prentice | March 3, 2010 | 1 Comment
Let’s keep a couple of things in mind with the Apple v. HTC complaint. We’re not talking about some patent aggregation company, shopping a case against an IT provider in the East Texas district court, seeking big dollars for the infringement of some obtuse, questionable software patent that provides only ancillary capabilities. We’re talking about high-profile patents that a high-profile company has gone to significant effort to highlight are being used in a high-profile and highly innovative device. Furthermore, the fact that these patents are being implemented on a particular machine means that they probably even fall within the boundaries of the Bilski test (currently being considered by the Supreme Court).
Sure, the validity of Apple’s patents will ultimately have to be decided in the courts – if this case ever gets there. But on face value, without detailed legal analysis, I’m inclined to see this whole episode as a clear-cut case of patent infringement.
So then, why sue HTC? Why not sue Google?
There’s certainly been a bit of conjecture on this point. Jonathan Zittrain, a professor at Harvard Law School, suggests that Apple is targeting the little fish first. Perhaps he’s right. What I’m wondering is whether it makes sense to avoid Google altogether.
First off, I can’t help but wonder whether the open source nature of Android makes this a more complicated matter of affixing blame. Can you sue Google on behalf of the Android community? I don’t know. But it seems to make sense to sue the device manufacturer for enabling the infringing functionality on their device. At the end of the day Apple achieves the same objective – either shutting down the use of their IP or extracting a license fee that can remove any price advantage a device manufacturer has over the iPhone.
But then, what about Google? Do they carry some responsibility here? I sure think so!
There has been some insightful reporting by Nancy Gohring and Juan Carlos Perez which shows that Google takes a dominant role amongst the Open Handset Alliance. Android, as the article points out, is developed in-house at Google. If that’s the case, then it is likely that Google is behind the offending multi-touch capabilities. It’s HTC that’s been accused of the infringement – but it’s Google that’s enabling the infringement.
“We are not a party to this lawsuit. However, we stand behind our Android operating system and the partners who have helped us to develop it,” said a Google spokesperson in an email sent to TechCrunch. I’d love to get some more detail on what the “standing behind” part of that statement means. Is it the type of “standing behind” that involves sharing legal costs and any potential damages? Or is it the type of “standing behind” your buddies do when you’re out drinking and they suggest you should go hit on the girl whose football-playing boyfriend just went to the bathroom?
“Ya, come on – she likes you. Can’t you tell? What do you have to worry about?”
Then, when you’re recovering in hospital: “Gee mate, I never believed someone could get hit that hard and survive. Once you get out of here I’ll buy you a beer.”
All this makes for an interesting scenario with Android. There are already concerns being raised about the impact of proliferating versions of Android on the development community. If device manufacturers now risk patent infringement actions because Google hasn’t properly factored those considerations in prior to implementing new features in a project they largely control, well, I think the near-term impact of Android will be greatly diminished.
Category: Uncategorized Tags:
by Brian Prentice | February 19, 2010 | 4 Comments
Lately I’ve been reading “Burning the Ships: Intellectual Property and the Transformation of Microsoft” – a book written by Marshall Phelps (Microsoft’s VP of Intellectual Property Policy & Strategy) and David Kline (co-author of “Rembrandts in the Attic”). While I’m reserving the right to write up a proper book report (my 5th grade teacher would be so proud) once I’ve finished it, there was one particular insight I thought was worth sharing on my blog right away.
Throughout the initial chapters of the book, Phelps continually states that Microsoft’s assertion of their intellectual property rights – particularly their patents – has been a key part of a shared innovation strategy. I know, I know – I had the same initial reaction. Self-serving corporate propaganda. But the more I read, and the more I thought about this, the more I started coming around to his thinking.
As some background, Phelps explains in some detail how Microsoft moved away from the “non assertion of patents” (NAP) clause in their OEM agreements. NAP clauses had been the backbone of how Microsoft protected themselves, and other OEMs, from claims of patent infringement. That was seen, and I believe rightly so, as being monopolistic by those OEMs. So, Phelps moved the organization towards the use of broad patent cross-licensing (PCL) agreements with these organizations.
The underlying objective of a NAP and a PCL is essentially the same – to protect an organization from patent infringement claims. But the approaches to achieving those aims are as different as night and day. The NAP achieves this outcome by restraining a particular behaviour – suing for patent infringement. A PCL does this by encouraging another type of behaviour – cross-pollinating innovation between organizations. The whole point of a PCL is to say, “why sue each other? Let’s just give each other access to our respective portfolios and spend our energy figuring out how we can share our ideas.”
This is a disconcerting perspective if you see IP as a tool of exclusion – an exclusive right that derives its value from the ability to stop others from doing something. And I must admit, that has often been my default view of IP. According to Phelps, that view was broadly held by most of the staff at Microsoft too!
Instead, Phelps is advocating a different view of IP, particularly patents, as a tool of inclusion – a right that can be used to bring organizations together. Looking at patents in this light makes it difficult to avoid direct comparisons with the underlying objectives of open source. Instead of the confrontational dichotomy so often drawn between open source software and software patents, the two can actually be channelled for exactly the same objective.
In this context the contrast between open and proprietary blurs. Yes, they are distinct states. But how do these different states create value?
I think the answer lies in Geoffrey Moore’s work on the Flow of Innovation and, specifically, the difference between core and context. Patents are relevant in the area of core capabilities – the activities that create distinct value for an organization because they create clear competitive differentiation. As Moore points out, core capabilities are areas where organizations invest more resources. But nowadays Fortune 500 companies are questioning the value of internalizing all that investment. Companies like Procter & Gamble have proven how successful one can be by integrating innovation between organizations. Patents and PCLs then become a critical requirement in being able to innovate core capabilities in a world of open innovation.
Context, on the flip side, is everything that is not core. And, to Moore’s point, organizations seek to extract resources in these areas. Open source has unique value in achieving that objective. There are few mechanisms more successful than open source at removing the price and supplier distortions that make resource extraction difficult, if not impossible. But where patents underpin open innovation, open source underpins shared commoditization. Or, as I define it, patents are critical to achieving core competency while open source is critical in achieving collective competency.
Therefore, open source and patents must co-exist as two components of a comprehensive approach to opening up an organization.
Having said all this, there is one inconvenient truth that Phelps avoids – at least up to the point I’ve reached in the book. What happens to all those organizations that can’t build up juicy patent portfolios because of the cost, overhead and complexity associated with doing so? How do they get to participate in this brave new world of PCL-based shared innovation?
But that’s a topic for another day.
Category: The Future of Ownership - IP & IT Industry Tags: