The CEO title also – at least in Europe – was deemed only appropriate if you were at the head of a fairly large and preferably multinational organization. Sure, startups of just two people sometimes democratically divided all available CxO titles among themselves (Hi, I am the CT/F/MO and this is Bart, who is the COO and CISO). But since founder sounds so much cooler, we nowadays see less and less of that. Founder also has the distinct advantage that you get to keep that title if you grow (have a look at the movie “Jobs” to get an idea of the process I mean). Soon the CEO title may go the way of the VP title, which was rapidly enhanced by adding terms like Senior, Corporate, Executive and of course Senior-Corporate-Executive VP. Let me know if you receive the first card that says Sr. or Corporate CEO.
Now I am not implying or even assuming that adding multiple CEOs is just about cosmetics and egos. In many cases there is a need for units that are more agile, more aggressive and more focused than the typical large corporate multinational. And if you as headquarters actually manage the unit as if you were merely a shareholder – meaning you sit on the non-executive board of this CEO and decide on firing, hiring and paying him, but do not get involved in running the unit yourself in any way, shape or form – then fine. We used to have the term “general manager” to describe that role, but within the aforementioned modern matrix organization general managers often cannot even decide when to change the aforementioned lightbulb (as facility management does so on a global basis), or how to organize sales (as corporate product units and line-of-business leaders appoint product and sales leads into their unit).
Personally I am a big believer in organizing using cell structures. The late Eckart Wintzen – founder of Origin, later part of Atos Origin – wrote a great little book about this called “Eckarts Notes” (in Dutch and strangely enough never translated, but for a summary in English see http://reinout.vanrees.org/weblog/2011/01/23/eckarts-notes.html). The cell approach – where you split cells if they get too big to be managed by one person – was pioneered earlier and is still in use today by other IT service companies. The general idea was that within a unit of between 50 and 100 people you don’t need an HR department (as the general manager knows each of his people – and their strengths and weaknesses), you don’t need facility management (as you can tell people to clean up their own mess) and you don’t need a purchasing department (as the GM makes large purchases himself and leaves individual purchases to the (empowered) individuals). It’s a very entrepreneurial approach; some practitioners even incorporate each cell as a separate company, meaning that cells can go out of business if not managed well.
Tell you what. Why don’t you have as many CEOs as you like (but do give them full P&L responsibility and operational, tactical and strategic authority for their cell, unit or division), but stick to one CCO (yes, one Chief Cloud Officer) who coordinates the internal cloud services (that your employees consume) and the external cloud services (that you provide to your customers). Something to ponder in 2014: is cloud really more about enabling cells to thrive and prosper than about centralizing everything into a large grey monotony?
Category: Uncategorized Tags:
by Gregor Petri | October 31, 2013 | Comments Off
Most people by now agree that “Build a better mousetrap and the world will beat a path to your door” is not a recipe for commercial success in technology innovation. In many cases it is not the quality of the technology that determines the winner. It is about timing, branding and addressing the right problem with the right audience using a fairly adequate solution.
The history of IT is full of examples of technologies that were not necessarily superior, but that turned out to become winners. Who would have expected the fairly random and uncontrolled TCP/IP to win over Token Ring or other more robust technologies? Not to mention the classic battle of Windows versus OS/2. The question is whether the cloud race will run along significantly different paths.
All who lived through trying to implement serious enterprise and business solutions on top of these historic “winners” remember how hard this actually was. Not to say it was impossible, but it did require some serious high-wire acrobatics and advanced juggling. Think of the tools that companies had to deploy or develop to manage the infamous DLL hell and the advanced acrobatics needed to manage memory space or the database tricks needed to live with page level locking.
Luckily winning is not the end state for technology innovations. It is merely the beginning of an ongoing race to become better, faster and more robust. But for customers, moving from 8 or 16 to 32 or 64 bit was far from a walk in the park. It required hard work and meant leaving some casualties behind (mainly in the form of applications not being able to make the transition).
The cloud race will likely be subtly – but not radically – different from these historic technology rides. Aspiring providers are frantically working on building better mousetraps, while established providers (but how established can one be in such a young and rapidly growing market) are aggressively expanding or even reinventing their offerings.
Companies that tried to run or build enterprise solutions on Windows 3.1 now agree that in retrospect it was more bleeding than leading edge (although it did establish good starting competitive positions for some). In retrospect it was always hard to predict the inflection point: at exactly what point did the technology reach a level at which it became feasible as a mass solution? Time will tell whether we will look back at today’s cloud efforts as brave (but a bit foolish) or as brilliant (and a major step forward).
by Gregor Petri | September 30, 2013 | 1 Comment
The Washington Post recently ran an article by Andrea Peterson on RIM (now BlackBerry), with a chart they called “The decline of BlackBerry in one chart”. But more than the story of BlackBerry, it brought home for me the enormous dynamics of a relatively new industry.
As their chart showed, the four vendors that together had about one hundred percent market share in 2005 barely managed to hold on to 20% by 2013. In only eight years they went from hero to zero, and were replaced by platforms that were introduced in 2007 (Apple) and 2009 (Android). I don’t cover mobile platforms, so I see this data mainly as a consumer, but it did make me wonder about the cloud market.
The mobile market in 2005 was by no stretch of the imagination a startup market. I was on my third cellphone, after having enjoyed a car-bound phone (car-bound because it took up about half the boot) for about four years. The vendors were established, and companies were handing out cellphones to most of their road warriors. Something that actually started in Europe – my US colleagues initially were juggling company-provided calling cards and dialing codes – but by 2005 this was pretty much a global movement. A movement that felt as mature, established and business-as-usual as today’s cloud market.
And see below what happened then. So a good question to ask (and an interesting debate to have) is where the cloud market is today. And the cloud market of course is not a homogeneous market. It becomes even more interesting if we ask the question for SaaS, for IaaS and for PaaS. What is the probability of today’s leading cloud vendors becoming tomorrow’s cloud market gorillas?
Yes, the end of the year is slowly nearing (with fall upon us and the shortest day approaching), so time to start reflecting on the future. Have a look at the graph, but do remember that the chart shows relative share. If the chart showed absolute market size it would have the shape of a cone, and the leaders of 2005 would be mere rounding errors by 2013 (just as total cloud spend today is only about 5% of today’s overall enterprise IT market?).
Interested in your thoughts, please let me know via the comments.
PS For a behind-the-scenes view on BlackBerry, see this long-form article from the Canadian Globe and Mail: “Inside the Fall of BlackBerry”
by Gregor Petri | August 31, 2013 | 1 Comment
Conventional wisdom is that the fastest connection between two points – for example between today and tomorrow – is a straight line, but just as in aviation, this is not necessarily true in cloud computing. First, because cloud computing is not one thing (not one dot on the map): it is a conglomerate of many different types of services (IaaS, PaaS, SaaS, BPaaS), each with its own characteristics and following its own timeline.
This makes it very difficult (if at all useful) to get organisations to agree on a cloud strategy. A colleague of mine once compared it to leading five blindfolded people each to a separate part of an elephant and then afterwards asking them to agree on what they just “saw” and what action to take. Trying to extrapolate these many views into the future and agree on a possible path or strategy in such a diverse environment is even harder. That however should not stop us from trying. The illustration on the right actually comes from a research note just published* on the topic that identifies three factors that will significantly impact cloud adoption in the enterprise space.
As Gartner made the note – in anticipation of the upcoming Symposium season and the Outsourcing Summits in London and Orlando – generally available via this press release, I won’t try to give an even shorter summary here. Suffice to say that some established technology marketing truths – like the ones Geoffrey Moore described over twenty years ago in his classic “Crossing the Chasm” – still hold true, even today.
by Gregor Petri | July 29, 2013 | 1 Comment
Cloud is at the center of a convergence trend that is impacting people across all of ICT. This convergence is breaking down the walls that separated the traditional silos of IT, networking, storage and security. But with this breaking down of the walls, we also need to better understand the subtleties of each other’s domains in more detail.
A famous urban legend is that Eskimos have many words for snow, as it makes sense – if you spend your whole day in snow – to distinguish the subtle and not so subtle differences.
It is similar in IT: where others simply refer to IT as IT, the people living in IT tend to distinguish between operations, development, support (helpdesk), testing, portfolio management, information and master data management, etc.
And the same is true for networking: where others see the network (or even the internet) as a homogeneous blob, the people running and managing the network distinguish many parts and layers, separate it into core and non-core, and many other subtleties.
The cloud (together with its enabling peer, software-defined functionality) is rapidly changing that. Anyone who has tried to set up a simple IT function like cloud compute at one of the many cloud IaaS providers will have noticed how many of the configuration questions are network related (and not trivial to answer).
The challenge for network folk is slightly different, as more of their traditional network functions are no longer implemented in or on dedicated network kit (their kit) but run on general-purpose compute infrastructure as software. SDN (software-defined networking) and NFV (network function virtualisation) are two of the main drivers in this area.
But an even bigger driver to learn and understand each other’s languages is the fact that the cloud’s inherent “as a service” model is driving a move from being organized along horizontal or functional layers (network, storage, compute, applications, support) to being organized around services (CRM, Collaboration, Supply Chain, etc.). These services (typically implemented as SaaS) include their own implementation of all the underlying layers inside the service.
In theory (and often in practice) these services hide the underlying complexities from their end users, but often not from the teams supporting or offering these services (for example in the case of “private SaaS”). These teams will need a more holistic and less siloed view of the whole stack than they had in the past.
And that will mean we will all need to speak and understand the languages and words that are used in the layers that used to be foreign to our own areas of expertise.
At some point we may even develop a simplified high-level language that goes across the domains. A bit like Esperanto, or like Fanagalo, the language that miners used in South Africa: a mix of words from Dutch, English and about 500 local languages, with a simplified grammar, no distinction between past, present and other tenses, and a vocabulary of only about 2,000 words. It could be learned quite rapidly and allowed people to work in cross-functional and cross-national teams within a very short time.
In the cloud we could have such a “digital” language (or skill set) for people working on the cloud (whose tasks will be more technical than traditional business tasks), but still a lot less technical (and less specialised) than the tasks of the people working behind the cloud (engineering the very complex cloud engines and networks that power the cloud).
by Gregor Petri | June 9, 2013 | Comments Off
Unless you have been living under a rock for the last week, it was impossible not to notice the uproar regarding the Guardian’s story on an alleged information collection program, allegedly called PRISM, that – again allegedly – involved several major cloud service providers. The most detailed and nuanced piece so far – but it is only Sunday when I am writing this – is this one from the Washington Post.
As at this stage many things are unclear and some reports may be incorrect, I – for one – have not decided whether I will move my personal information from the many US-based providers that I use in my personal life to local alternatives. But in this blog I do want to share my (strictly personal) views and thinking on the topic and explore potential alternatives. As usual I will stay far away from any politics in my blogs (something that must be doable given that the public reactions from different political sides are so varied and diverse).
Until today, individuals – like myself – often took a relaxed view towards protection of their privacy, using phrases like: “Well, nothing I do here is secret or illegal, so if they want to peek, no problem”. But illegal in an international context is a relative term. Think of copyright law, where what is legal in one country (for example downloading copyrighted materials for personal use) leads to several years of incarceration in other countries, or think of controversies around travel by people carrying a certain disease or – maybe in the future – a certain gene, or by people of a certain origin. Currently the – already controversial – access to this data is only permitted for anti-terrorism purposes and not for fraud-related or other criminal investigations. But we need to take into account that regimes may change and that as a result this applicability can change too (for example, the detailed and accurate paper-based administration systems of local government entities in my country led to significant, unforeseen and unintended harm following the regime change during WWII).
The increase of control that comes with massive centralized data processing always carries some drawbacks (as Nicholas Carr – again with remarkable timing – republished just prior to all this hitting the press), and the use of alternatives may to some extent be similar to the now famous statement about democracy: democracy is far from ideal, but it sure is better than any of the alternatives tried so far. For those individuals who want to try an alternative, here are some thoughts on cloud services to replace the ones currently under scrutiny or discussion.
- Email: Most of the providers listed as part of the program deliver (free) email services. Many European individuals started using these because they delivered convenient webmail that did not tie email addresses to a particular ISP (and thus allowed changing internet provider without being locked in to their proprietary email domain name). Maybe it is time to reconsider ISP-provided email, but at the same time investigate the use of your own domain name (which makes your email a lot more portable). Make sure, however, that the mail provider you choose is not just owned by a European company, but that it runs under European jurisdiction (for example, a European-owned mail alternative I looked at turned out to be “a corporation organized and existing under the laws of the State of Delaware”).
- VoIP Calls: Although leading consumer VoIP provider Skype started from Luxembourg, it is now part of a US-headquartered corporation. Also most alternative voice and video calling solutions come from US-based companies (with some even limiting their services to US-based consumers only). Although European telcos have been talking about offering VoIP-based alternatives to their regular mobile and fixed voice services, only very few have gone to market yet (check your local providers for possibilities) and even fewer offer it as a cost-effective alternative for international calling.
- Social Networks: Up to a few years ago most leading social networks in Europe were national providers, but today Facebook is very much the name of the game. If it were not for editorial independence, a media corporation like the Telegraaf group might consider leveraging the current media-driven FUD to drive local consumers back to the recently acquired (and formerly leading) social network Hyves. However, moving to a new social network all by yourself is not a very social thing to do (and kind of defeats the purpose of a social network), so some group orchestration may be required.
- Short Message Services: So far the reporting did not mention any short message services, such as WhatsApp, Instamessage, Viber etc. Nor did it include other new web destinations becoming popular with the under-twenties (as their parents took over on Facebook). Many of those however, such as Instagram and Tumblr, have recently been acquired by the named providers. Twitter is a chapter by itself, as most activities on Twitter are public by nature (and unlike some other providers they have put up a brave fight to keep their private services private).
- Professional Networks: Professional networks like LinkedIn have also not been explicitly mentioned so far (likely because the job market for the type of activities under investigation does not rely on these types of services), but here some local alternatives do still exist. Unfortunately the alternatives are often very local (limited to one language area) and do not help much in an increasingly pan-European or intercontinental job market.
- Dropbox: I could have used the more neutral term file replication here, but Dropbox has – in a remarkably short time – pulled a Xerox on the market and made its brand name the generic name for these types of service. Alternatives do exist – from independent European companies as well as from telcos and ISPs, and even from providers of networked hard disks. Maybe this is a good time for companies – which so far largely turned a blind eye towards the (shadow) use of such services – to offer internal, but just as convenient, alternatives to their employees.
- Cloud IaaS/PaaS Providers: These too have not yet been explicitly mentioned. Maybe because the typical consumer does not use these providers to build their own personal photo or file storage and sharing facility (mainly because higher-level alternatives like Flickr and Dropbox are so much more convenient for achieving the same result). Also, these lower-level services offer a lot more options for users to protect their own data (like using encryption). Regardless of these considerations, this area is a domain where several local alternatives do exist, both at a national and a pan-European level. Some of these providers are even global, offering services from facilities they run in “neutral” – but latency-wise quite close-by – locations like Canada or Switzerland.
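The encryption option mentioned above can be sketched in a few lines. This is a toy illustration only – it uses a one-time pad from Python’s standard library purely to show the principle that the key stays on the client while only ciphertext goes to the provider; a real setup would use a vetted cryptographic library, and all names here are mine, not any provider’s API:

```python
import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    # One-time pad: XOR each byte of the data with a byte of the pad.
    return bytes(a ^ b for a, b in zip(data, pad))

plaintext = b"personal notes to store on an IaaS object store"
key = secrets.token_bytes(len(plaintext))  # generated and kept locally
ciphertext = xor_bytes(plaintext, key)     # this is what gets uploaded

# Later, after downloading the object again, the local key recovers it:
assert xor_bytes(ciphertext, key) == plaintext
```

The point is not the cipher but the division of knowledge: the provider stores only `ciphertext`, and without `key` it cannot reconstruct the data.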
So far most of the discussion has been about individuals and their data. The interesting thing is that the European Data Protection Directive defines the roles of Data Subject, Data Controller and Data Processor. For individuals (Data Subjects), the cloud service providers mentioned in the current media hype are in many cases both the Data Controller and the Data Processor. For companies using these same cloud service providers, the companies themselves remain the Data Controller, while their customers and employees are the Data Subjects and the cloud service providers are the Data Processors (which – according to my limited legal knowledge – can significantly change the applicable law and the entity eventually held responsible).
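The two role assignments above can be laid out side by side in a small illustrative model (the class and field names are my own shorthand, not terms from the Directive’s text):

```python
from dataclasses import dataclass

@dataclass
class ProcessingScenario:
    data_subject: str
    data_controller: str
    data_processor: str

# Consumer scenario: the provider plays both controller and processor.
consumer = ProcessingScenario(
    data_subject="individual user",
    data_controller="cloud service provider",
    data_processor="cloud service provider",
)

# Enterprise scenario: the company stays controller; the provider only processes.
enterprise = ProcessingScenario(
    data_subject="customer or employee",
    data_controller="company using the cloud service",
    data_processor="cloud service provider",
)

# The key difference between the two scenarios:
assert consumer.data_controller == consumer.data_processor
assert enterprise.data_controller != enterprise.data_processor
```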
by Gregor Petri | May 20, 2013 | 2 Comments
If an article, 10 years after its initial publication date, is featured in several look-backs, reviews and Q&As, and still gathers reactions and emotional analysis, it can be concluded that it must have struck a chord – or in this case, more an open nerve.
In May 2003*, the Harvard Business Review published “IT Doesn’t Matter”, an article by the then still largely unknown editor-at-large Nicholas Carr.
The premise of the article was that infrastructure has a diminishing impact on competitiveness and that IT is infrastructure (although Carr in the recent Q&A seems to indicate he meant that IT infrastructure is infrastructure, a far less controversial idea).
Given all the recent analysis around, I only want to zoom in on one aspect.
What still amazes me after all these years is how the last decade of IT was impacted/hindered/predicted/paralleled (pick one based on your personal emotional state with regard to the article) by the three short recommendations that were included – almost as an afterthought – in a small breakout box on page 8 of the article.
The article gave the following three “New rules for IT management”:
- Spend Less : Which arguably coincided with a decade of corporate IT anorexia?
- Follow, don’t lead : Today we know that consumer-play IT – and not corporate IT – leads most of IT innovation (think Facebook, Twitter, Google, Netflix), and Web-scale IT is arguably about corporates following consumer-plays?
- Focus on vulnerabilities (as for any utility, the dependence on external providers increases) - Which ironically is today’s main argument for corporates’ preference for private (over public) clouds?
by Gregor Petri | April 30, 2013 | 1 Comment
Even though today’s crowning ceremony in Amsterdam enjoyed some modest sunshine, temperatures across Europe are at an all-time low. A more reliable indication that spring has started is the publication of the annual Cool Vendor reports.
For the first time this series includes a note dedicated to cloud activity in Europe. The “Cool Vendors in the European Cloud Computing Market, 2013” note describes four European vendors making a difference in the local and global cloud market. The report also points to several other European cool vendors featured in other notes, such as “Cool Vendors in Cloud Services Brokerage Enablers, 2013”, “Cool Vendors in Cloud Services Brokerages, 2013” and “Cool Vendors in Cloud Management, 2013”.
One of the featured cool vendors is directly involved in the Amsterdam crowning ceremony by hosting the social crowd control app that gives the many visitors real-time insight into movements and current volumes of people at the different venues. A nice example of a nexus application that gathers social information from mobile devices, performs real-time analysis of that big data type information in the cloud and makes it available again to the crowd via a mobile app.
by Gregor Petri | March 24, 2013 | 2 Comments
The ‘third’ in this cloud ménage à trois is the network, which is joining storage and compute as a software-defined resource that can be allocated on demand through a self-service API or portal. As a term, “software defined” is in a race to catch up with established but equally vague terms such as “on demand” and “as a service”, and is surfacing in all kinds of combinations.
The current frontrunner – Software Defined Networking (SDN) – might very well already be the most hyped term of 2013. All network technology providers are busy either building or acquiring SDN capabilities but the largest acquisition to date (for more than one billion dollars) was done by a virtualization platform provider. Meanwhile most network providers are looking to leverage SDN to increase the agility and reduce the cost of the services they provide.
An important reason for the interest in SDN is the size of the market it is promising to disrupt. The total revenue for network and communication services makes up a large part of total worldwide IT spending. Of the total $3.7 trillion in IT spending in 2013, 46 percent will be spent on telecom services (next to 8 percent on software, 22 percent on hardware and 25 percent on IT services). Any development that influences such a large proportion of the total cost can count on great interest, also from the telecom industry itself. At the SDN World Congress in Darmstadt, no fewer than 13 of the largest telecommunications companies announced a joint initiative to promote ‘Network Function Virtualization’. This initiative encourages network technology providers to enable their network functions to run on (clouds of) industry-standard servers instead of on proprietary hardware appliances.
The main advantage of a software-defined network – just like any other kind of software-defined infrastructure construct – is that it no longer consists of dedicated and proprietary boxes with names like firewall, load balancer, router, etc. If an organization tomorrow suddenly needs twice as many firewalls as load balancers (or vice versa), they can just provision different software on their existing hardware. In addition, everything that is controlled by software can be automated much more easily than something that is based on hardware. And this offers benefits not just to providers but also to end users, as it can reduce the time needed to reconfigure a network to changing needs from several days or weeks to a few hours, or even less. As a result, the network can become as dynamic a cloud resource as compute and storage already were.
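The reprovisioning idea above can be sketched as code. The controller API below is purely hypothetical (no real SDN product exposes exactly these calls); it only illustrates that “firewall” and “load balancer” become software roles assigned to generic capacity, so the mix changes by reprovisioning rather than by buying boxes:

```python
class SdnController:
    """Toy model: assigns software roles to a pool of generic nodes."""

    def __init__(self, generic_nodes):
        self.free = generic_nodes  # unassigned generic capacity
        self.roles = {}            # role name -> node count

    def provision(self, role, count):
        if count > self.free:
            raise RuntimeError("not enough generic capacity")
        self.free -= count
        self.roles[role] = self.roles.get(role, 0) + count

    def release(self, role, count):
        if self.roles.get(role, 0) < count:
            raise RuntimeError("cannot release more than provisioned")
        self.roles[role] -= count
        self.free += count

ctl = SdnController(generic_nodes=8)
ctl.provision("load_balancer", 4)
ctl.provision("firewall", 2)

# Demand shifts: twice as many firewalls as load balancers, same hardware.
ctl.release("load_balancer", 2)
ctl.provision("firewall", 2)

assert ctl.roles == {"load_balancer": 2, "firewall": 4}
assert ctl.free == 2
```

With dedicated appliances, the same shift would require decommissioning load balancers and purchasing firewalls; here it is two API calls against the same pool.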
But let’s not forget that for many organizations the transition to “as a service” and “on demand” began in the network area. Back in the eighties, companies started to give up their own in-house controlled and managed wide area networks in exchange for the use of shared packet-switched networks, often based on X.25 and commonly called the “public data network”. Maybe something to remember for those currently afraid of “public clouds”?
PS Press release March 25: Gartner Says Software Defined Networking Creates a New Approach to Delivering Business Agility
by Gregor Petri | February 8, 2013 | Comments Off
In that era the term BPR also became popular. BPR stood for Business Process Re-engineering (see for example Hammer, M. and Champy, J. A. (1993): Reengineering the Corporation: A Manifesto for Business Revolution), a movement that in my perspective* really gained steam from the realization that throwing new technology (ERP) at existing processes only offered limited improvement potential. As Business Process Re-engineering often resulted in massive redundancies and layoffs, BPR got kind of a bad rap, especially among employees and unions (remember those?). Today the term Business Process Redesign is a more socially acceptable reincarnation, and even though it could fly under the same acronym (e.g. BPR II), people seem to prefer to call it by its full name.

Nowadays all the buzz is around cloud. And here too people are finding that just throwing this new idea at existing processes does not bring the benefits expected, not to mention that the cost of absorbing it into our existing ways of working is very high (remember how much we spent on trying to force-fit ERP packages into our existing processes by modifying them beyond recognition, only to reimplement these packages a few years and a few releases later in a more off-the-shelf manner?).
And increasingly the topic of adapting processes to the cloud is becoming popular. Last month my note “Cloud Solution Providers Ignore Customer Processes at Their Own Peril” was published, and this month KPMG published a study called “The Cloud Takes Shape”. In that study KPMG uses the term Business Process Redesign and states: “One of the most important lessons uncovered by this year’s research is that business process redesign must occur in tandem with cloud adoption if organizations hope to achieve the full potential of their cloud investments.” In my note I describe the need to challenge the status quo on processes using the term Cloud Process Re-engineering (the cloud generation is likely to have a less emotional response to re-engineering, while some of its disruptiveness still rings through).
Some IT departments I speak to seem to think of cloud computing largely as a way to move their applications to the cloud. But if an enterprise organization with thousands of applications running in its datacenter moves these apps to the cloud, it will still have thousands of apps. And more importantly, those apps will do the same as they did before, just running somewhere else (so you might ask “why bother?”, as the savings on hardware, housing and electricity are small compared to the overall cost, and today’s management is much more interested in new (revenue) possibilities and agility than in efficiencies that in the end form a mere rounding error in the company’s overall bottom line). If organizations however look at each process and decide how the cloud can optimally support or even completely eliminate that process (for example through Business Process as a Service (BPaaS) or by participating in multienterprise applications), the potential impact can become much more significant.