by Mark Driver | October 2, 2014 | 3 Comments
We just published the 2014 edition of our Market Clock Report For Programming Languages. Gartner clients can get the full report from our website but I wanted to share a new element of the report with everyone here as well.
As is the case for much of Gartner’s research, the primary data source used to assess programming language trends is direct feedback from our clients via inquiry conversations. Gartner clients span a wide spectrum of end-user and vendor profiles, ranging from small to very large enterprises and from the industry’s leading edge to its most conservative adopters. These technology adopters typically follow a bell curve and, consequently, the bulk of Gartner inquiries within a mature market comes from the 68% of early and late majority adopters (i.e., the mainstream). As a result, early innovators and conservative adopters (but particularly early adopters) tend to be underrepresented in overall inquiry volume numbers.
Moreover, a sizable portion of Gartner’s client base is focused on corporate IT challenges. The use of languages within other contexts, such as R&D, operational technology projects and product engineering, tends to be underrepresented. As a result, to most accurately capture the language usage trends of those adopters who might fall outside the Gartner client “sweet spot,” we have examined the trends in language adoption and usage as reported by a variety of publicly available data sources as well. These sources range from public code repositories, developer forums, published books and job postings to even Twitter traffic.
In this research, we have examined the frequency with which languages are used in a variety of projects hosted on popular sites, such as SourceForge, GitHub and CodePlex, among others. Given the popularity and transparency of open-source projects, we have also examined the frequency of language use reported in popular project registry (index) sites, such as www.freecode.com (formerly Freshmeat) and Black Duck Open Hub (formerly Ohloh.net).
In addition, we have also examined the frequency and volume of languages discussed in popular developer portals and developer forums, such as www.stackoverflow.com, www.slashdot.org and www.reddit.com, among others.
Of course, none of these data sources is sufficient on its own to provide a complete assessment of programming language usage/popularity. For example, public project-hosting sites do not accurately represent older, host-based, legacy languages, such as COBOL and PL/I. However, when these sources are combined in a composite view, clear patterns emerge that provide real-world reflections on how these programming languages are used in the industry. These trends change — sometimes abruptly — over the course of many years. Consequently, we intend to update this research at least once per year, and potentially expand the list of languages and profiles that we cover from year to year as new technologies and trends emerge.
This year we have taken all of these data sources and combined them, along with our own inquiry trend data, to create the 2014 Gartner Programming Language Index. We’ve ranked the top 35 languages, which we believe reflect a balance between the top languages used across the industry in general and the niche languages (e.g., PL/I, COBOL) that remain important to mainstream (and more conservative) IT organizations.
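As a rough illustration of how a composite index like this can be assembled, here is a minimal sketch that averages a language’s rank across several sources. This is purely my own illustration of the general rank-aggregation idea, not Gartner’s actual methodology, and the source names and language lists below are hypothetical.

```python
def composite_index(rankings):
    """Combine per-source language rankings into a composite ordering.

    rankings: dict mapping a source name to an ordered list of
    languages (best first). A language missing from a source is
    penalized with a rank just past that source's worst rank.
    Returns all languages sorted by mean rank (best first).
    """
    languages = set()
    for order in rankings.values():
        languages.update(order)

    scores = {}
    for lang in languages:
        total = 0
        for order in rankings.values():
            # 0-based rank if present; len(order) as the penalty rank if not
            total += order.index(lang) if lang in order else len(order)
        scores[lang] = total / len(rankings)

    return sorted(languages, key=lambda lang: scores[lang])


# Hypothetical inputs: each source contributes its own ordering.
ranked = composite_index({
    "repos":  ["Java", "JavaScript", "Python"],
    "jobs":   ["Java", "COBOL", "JavaScript"],
    "forums": ["JavaScript", "Java", "Python"],
})
```

A real index would also weight sources differently and normalize raw counts before ranking, but the mean-rank idea is the core of most composite views.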
Have any questions or comments?
Please post them here so that we might refine and improve the index for 2015.
Find me on Twitter as well at @marksdriver.
Gartner 2014 Programming Language Index Rating (table; entries include Visual Basic .NET)
Category: Uncategorized Tags:
by Mark Driver | January 31, 2011 | Comments Off
I’ve just published a new research note on a “CIO’s Perspective On Open Source”. Gartner clients can get the full report here.
I’ve included the highlights below but as usual the real ‘meat’ of the analysis is in the full report….
A CIO’s Perspective On Open Source
An open-source governance program is the key to maximizing open-source software (OSS) IT value and minimizing risk. The CIO’s office is the key to successfully executing this effort within the enterprise.
- The presence of open source is inevitable within mainstream mission-critical IT portfolios.
- OSS assets can affect IT initiatives in positive and negative ways through gains or losses in such things as efficiency, productivity, functionality and security.
- The principal risks of open source are driven by unmanaged software assets that can introduce technical and legal challenges (e.g., security, intellectual property management and audit compliance).
- The IT benefits of open source are driven by a confluence of cost optimization, flexibility and innovation when managed properly.
- Above all other considerations, the successful execution of an open-source governance program drives the difference between positive and negative impact.
- The CIO’s office is uniquely qualified to sponsor a corporate governance open-source program, enforce it within the boundaries of IT and promote it across business units as well.
- Educate IT and business leaders regarding the positive and negative aspects of open-source assets within the corporate technology portfolio.
- Establish an open-source governance program that is sponsored by the office of the CIO and promoted by a cross-disciplined team of key IT and business leaders. Its mission will be to minimize potential risks and maximize the value afforded by open-source solutions.
- Within this program, focus first on eliminating the potential loss of IT value through risk associated with unmanaged (or undermanaged) open-source assets. As soon as the program is established, expand its scope to more aggressively leverage open-source solutions for positive IT value benefits (when and where appropriate).
by Mark Driver | January 28, 2011 | Comments Off
I’ve just published a new research report on ‘Business Unit Application Development’ (BUAD) best practices. Clients can access the full report here but below is short summary of the content as well. Of course, the ‘meat’ of the analysis is in the research note (about 15 pages).
Characteristics and Best Practices of Business Unit Application Development
Conflicting pressures have long challenged enterprises as they’ve endeavored to balance the risks and rewards of business unit application development (BUAD) efforts against corporate IT strategies. During the next several years, the effects of those risks and rewards will increase dramatically. In the past, practical technical limitations kept most BUAD efforts within a relatively self-contained environment. That environment will also change during the next several years, as new technologies wipe away nearly all barriers and broaden access to business-unit-developed solutions. New solutions will touch a broader audience and affect enterprises in significantly new ways.
- BUAD efforts will expand significantly and at increasing rates within nearly all mainstream enterprises for the near future.
- The impact of BUAD efforts — positive and negative — will increase significantly as these solutions are deployed to larger audiences and tackle increasingly mission-critical subjects.
- It is unrealistic to think that all BUAD efforts can be fully managed within corporate IT guidelines; instead, governance programs should focus on the appropriate level of management required for specific classes of applications.
- The goal of the application organization should be to facilitate business unit development where appropriate, not to resist it.
- Articulate the risks, rewards and long-term costs of BUAD efforts for the enterprise.
- Establish a BUAD management program that is run collaboratively between IT and business units.
- Determine the scope of BUAD efforts to be managed and controlled under enterprise IT guidelines versus unmanaged projects that should be isolated from core IT services.
- Focus strict management efforts on those BUAD projects that demonstrate significant business value and/or risk (e.g., mission-critical).
- Isolate smaller, simpler and less-important BUAD projects from potential risks, but avoid efforts to fully manage them.
- Conduct an inventory of mission-critical and complex BUAD projects to determine business risk and BUAD-IT collaboration opportunities.
- Audit and measure BUAD efforts using established ROI and total cost of ownership (TCO) techniques over time to clarify long-term costs and business value.
- Establish criteria for how BUAD efforts can make the transition to formal IT stewardship when they grow beyond the bounds of acceptable scope.
- Establish policies to identify citizen development activities that border on being mission-critical, so that they can be evaluated and potentially make the transition to formal BUAD or corporate IT processes.
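The ROI/TCO recommendation above reduces to very simple arithmetic. Here is a hedged sketch of that math in Python; the cost categories and all figures are purely illustrative and do not come from Gartner research.

```python
def tco(build_cost, annual_support, years):
    """Total cost of ownership: one-time build plus recurring support."""
    return build_cost + annual_support * years


def roi(annual_benefit, build_cost, annual_support, years):
    """Return on investment: net benefit over the period divided by TCO."""
    total_cost = tco(build_cost, annual_support, years)
    return (annual_benefit * years - total_cost) / total_cost


# Illustrative BUAD project: $100K to build, $20K/year to support,
# delivering $80K/year in business value over a 3-year horizon.
three_year_tco = tco(100_000, 20_000, 3)          # 160,000
three_year_roi = roi(80_000, 100_000, 20_000, 3)  # 0.5, i.e. 50%
```

A real audit would add categories such as training, integration and opportunity cost, but even this minimal version makes the long-term support burden of an "ungoverned" BUAD project visible on paper.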
by Mark Driver | January 19, 2011 | 4 Comments
On January 11th, Mike Milinkovich at Eclipse announced the Orion platform. Here are some of my early thoughts.
According to the folks at Eclipse, Orion IS intended to be “the very beginning of an open tool integration platform using the idioms and metaphors of the web”.
There is a surprising amount of meat in that short statement.
First to come to mind is that Orion IS NOT intended to be traditional Eclipse (i.e., desktop Eclipse as we know it today) ported to the cloud. It’s an entirely new codebase. It is also not necessarily targeted at the existing Eclipse community (at least not the sizable portion of that community focused on traditional enterprise Java), and at the moment there is no obvious migration path from desktop Eclipse efforts to Orion. The Orion community will emerge as a new community in its own right, focused very heavily on web- and cloud-centric efforts.
Secondly, the Orion effort is currently in an embryonic state. What we have today is barely more than a proof of concept – just enough substance to catch the imagination of community developers and potential ISVs (who could potentially embed the technology into their own PaaS solutions). Unlike the original Eclipse platform, which was developed internally by IBM and delivered more-or-less complete and intact, Eclipse intends Orion to be a community effort from the ground up. On the upside, this means potentially stronger community commitment and uptake than we saw with Eclipse (which was a whirlwind in its own right).
Today there is no official Orion project at Eclipse; the code is currently tucked away in the E4 project, but it’s expected that the effort will emerge as its own project over the next several months. Interestingly, IBM has done most (all?) of the initial work on Orion, and I assume it will continue to lead the effort when it becomes an official project as well. But expectations are also that Orion will likely expand into a family of complementary projects, where different companies and individuals lead efforts related to the broader ‘Orion platform’.
Orion could be a very big deal.
There is a wide range of potential outcomes for Orion over time. In the grandest scope, we could see Orion become ‘Eclipse: the next generation’, although I think the Eclipse Foundation is understandably downplaying that scenario today. This eventuality would require a large commitment from developers and IT providers, and a community effort similar to the desktop Eclipse phenomenon. In addition, some level of interop between plug-ins would help this along as well. In any event, this is one to watch in 2011.
by Mark Driver | August 13, 2010 | 5 Comments
On Thursday Oracle sued Google over seven counts of software patent infringement and one count of copyright infringement.
But wait! Isn’t Java open source now?
And software patents are kryptonite to open source.
So what’s going on here?
Here’s my take so far….
First Sun, and now Oracle, has made a lot of money licensing Java technology for mobile and embedded solutions. In fact, this accounts for the large majority of the revenue driven directly from Java.
Android is rapidly expanding its market share and is now BIG money and getting bigger every day.
Google is a major Java shop and it knows that Java is popular among smart phone developers as well.
But unlike virtually every other embedded platform vendor that leverages Java (and many of them do), Google does not license Java from Oracle. There are probably a number of reasons for this but IMO it comes down to Google’s vision of Android as a completely open platform rather than any real economic burden.
So Google wants Java on Android but doesn’t want to pay for it and doesn’t want to be restricted by compatibility test kits, etc.
In theory it *could* leverage the OpenJDK, but in reality the license for the Java Micro Edition (JME) contains a ‘poison pill’ which renders the realistic use of open source on mobile devices laughable. Check out this blog entry from a while back for details — basically it boils down to the classpath exception in the GPL, or lack thereof.
So, lacking an open-source option from Oracle, Google decided to leverage an open-source, clean-room implementation built from Apache Harmony and the Dalvik virtual machine runtime. These technologies are complete clean-room implementations, free (supposedly) of Oracle’s copyrighted intellectual property. Ironically, Oracle was a proponent of Harmony before its acquisition of Sun.
At this point the developer world says, ‘How clever, Google! You go on with your bad self!’ But Sun, and eventually Oracle, might say, ‘How clever, Google, but you still need to pay us.’
Today every device that ships with Android is potential money out of Oracle’s pocket.
Because these technologies were developed independently of the OpenJDK project, they are not protected by the patent agreement within the GPL. So even though they are open source, they are susceptible to patent infringement claims.
Harmony and Dalvik would seemingly get Google around copyright-based IP issues, since they do not leverage Oracle’s code base directly. But Oracle has called its bluff and asserts IP violations based on a number of software patents that apply to Java.
You cannot simply code around patents. You have to remove the feature that is patented; in this case, that would kill the platform. So patents are the ultimate IP weapon.
Say what you will about software patents, good or bad. But Oracle has every right under the law to enforce its IP and compel Google to license the technology. The validity of the lawsuit is, IMO, not really at issue.
Google has a few options now…
- Fight the lawsuit by getting the patents invalidated — hard to do and a long and drawn out effort.
- Countersue with its own patent claims and get Oracle to back down or come to a cross-licensing agreement.
- License Java from Oracle — write them a check with lots of zeros.
- Enter into some special agreement, such as a one-time payment.
It strikes me as odd that we have one supposedly open-source technology (official Java) suing another open-source Java implementation (Android) over patent issues.
Oracle and Google may settle this in a matter of days and quietly sweep it under the rug in time for the next news cycle. These types of skirmishes happen every day; most don’t make it to court, and the ones that do are typically handled quickly. But this one involves two MEGA vendors fighting over the disposition of the IT industry’s predominant application development platform, one that spans from smart phones to mainframes.
IMO the lawsuit is a lose/lose proposition for Oracle in particular…
If Oracle wins, it will send a strong message to the industry that Java isn’t as open as was assumed. There is already an undercurrent of bleeding-edge developers who consider Java to be ‘legacy’ 20th-century technology. If it looks like Oracle is aggressively asserting its control over Java, then these discussions get really interesting. You think the JCP is dead today? You ain’t seen nothing yet.
If Google wins, then other JME licensees will ask why they are paying as well. It will establish a precedent for independent versions of Java and effectively minimize, if not downright nullify, Oracle’s stewardship of Java over the long term. It won’t take long for this to spread beyond the mobile/embedded markets to the enterprise as well.
Yes, maybe I’m making a mountain out of a molehill on this issue, but over the last couple of years I’ve seen a growing groundswell of interest among developers of many ranks who are looking for excuses to move on to “something” new.
This seemingly minor license battle between two mega vendors may very well be the spark that ignites a larger debate over the nature of open-source Java, the future of Harmony, etc. Then again… maybe it’s big news for a couple of days at best.
I’m betting on the former.
by Mark Driver | December 8, 2009 | 12 Comments
We’ve just published our 2010 predictions research note for open source. Clients can find it here and can view last year’s here as well.
This research note is a list of some heavy hitting trends that will have a significant impact over the next 12 to 36 months. It is certainly not the total extent of our OSS related research for the year but is meant as a jumping off point for the 2010 agenda.
For 2010 we’ve focused on three predictions related to the “business of open source”. It’s impossible to do justice to the complete rationale behind these predictions without reading the full note, but I’ll share some very brief commentary here anyway.
I’d strongly encourage clients to read the note and even schedule a direct phone conversation to dig into the meat of the predictions and the rationale behind each one.
As an aside, we once assigned probabilities to our strategic planning assumptions (predictions), but in recent years we’ve dropped that practice. Personally, I miss the probabilities; they allowed us to make forward-thinking statements with measured degrees of confidence. For example…
- A 60% probability meant that the prediction was based mostly on a hunch, with very limited analytical evidence beyond a small niche of technology elites.
- A 70% probability meant that we saw early evidence to suggest a strong degree of confidence. For example, we might be aware of early vendor R&D pipelines or innovator/early-adopter activities.
- An 80% probability meant that we saw very strong momentum among early-majority adopters, vendor product portfolios, etc.
- A 90% probability meant the prediction was virtually a done deal.
Typically, predictions would start with a five-year timeline and low probabilities; the probabilities would grow as the timeline shrank, or we’d abandon the SPA and report why if it didn’t track as predicted. IMO the model worked very well, but alas, we don’t use probabilities anymore. The reasons are varied, but apparently we received a lot of feedback that clients simply didn’t understand or consider the probabilities in most SPAs.
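For illustration, the retired probability scale described above can be modeled as a simple threshold lookup. The thresholds and descriptions below paraphrase this post only; this is not an official Gartner scale or tool.

```python
# Probability bands, highest threshold first, paraphrasing the scale
# described in this post (illustrative only).
BANDS = [
    (0.9, "virtually a done deal"),
    (0.8, "strong momentum among early-majority adopters"),
    (0.7, "early evidence (vendor R&D, innovator activity)"),
    (0.6, "mostly a hunch with limited analytical evidence"),
]


def evidence_level(probability):
    """Map an SPA probability to the kind of evidence it implies."""
    for threshold, description in BANDS:
        if probability >= threshold:
            return description
    return "below the scale's floor; not published as an SPA"
```

Under this model an SPA at, say, 0.75 falls into the 70% band, which matches the way the original scale rounded confidence down to the strongest threshold the evidence cleared.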
The reason I mention this is that without probabilities our predictions often come across as “crystal ball” efforts, when in reality there is a lot of research and analysis behind each one. I’ve added probabilities back in this blog entry in an attempt to increase the context of each SPA.
Strategic Planning Assumptions
- By 2011, growing diversity among open-source adopters will result in three distinct categories of OSS: (1) community projects, with broad developer networks; (2) vendor-centric projects controlled by commercial technology providers; (3) and commercial community projects, which have vendor-independent support channels.
90% probability. To those with an ‘inside baseball’ perspective on OSS, this is already reality. However, most of the mainstream IT community is unaware of the diversity among OSS projects and the recent trend toward vendor-controlled communities. In the next couple of years these issues will become much better understood, and we will begin to see a true taxonomy emerge.
- By 2012, at least 70% of the revenue from commercial OSS will come from vendor-centric projects with dual-license business models.
80% probability. This may be true today, but revenue from the broader market of OSS products (beyond Linux) isn’t large enough yet to make this one a done deal. What is clear is that the overwhelming majority of ‘commercial OSS’ efforts are based on a dual-license model – vendors prefer the ‘open core’ moniker because it sounds more OSS-friendly, but it’s essentially the same thing.
- By 2013, more than 50% of new open-source projects will leverage licenses that require code reciprocity (aka “affero”-style licenses) when hosted on external-facing servers; this is an increase from fewer than 5% in 2009.
60% probability. This prediction is perhaps the most controversial. It’s based on a growing trend among new projects I’m tracking, commercial vendor trends, and investor strategies. This is one to watch.
- The open source software (OSS) model is not anti-commercial, but it doesn’t depend on commercial success.
- More-conservative open-source adopters will require a more robust commercial support channel for open-source solutions than technologically aggressive adopters. In these cases, users must often accept compromises between the “open” nature of the OSS model and the competitive realities of commercial software providers.
- The most successful commercial, open-source vendor strategies rely on dual-license strategies that blend elements of traditional closed-source and open-source dynamics.
- Differentiate the specific requirements for meeting minimal levels of quality of service (QoS) for individual open-source projects based on maturity and adopter profile.
- Integrate commercial open-source support strategies into broader enterprise software asset management initiatives.
- Understand that vendor-centric OSS will sacrifice the breadth and depth of the large developer community for stronger commercial support from a smaller number (often only one) of vendors.
- Plan for changes in historical open-source licensing and business models driven by emerging software as a service (SaaS) and cloud-computing infrastructures.
Category: Uncategorized Tags: open source
by Mark Driver | December 3, 2009 | 24 Comments
I just published a research note on PHP. Clients can find it here
The research note goes into *much* more detail but the overview is below.
Keep in mind that this content is targeted at mainstream IT organizations.
PHP has been a cornerstone technology on the Web for more than a decade. While its adoption among mainstream IT organizations has been limited in the past, many corporate application development (AD) projects are discovering the unique benefits of PHP.
- The PHP worldwide developer count will grow to as high as 5 million developers by 2013, up from 3 million in 2007 and 4 million in 2009.
- In the short term, PHP will remain a widely adopted Web development technology.
- Over the long term, PHP will encounter increased competition from technologies such as Microsoft ASP.NET, Java, Python, Ruby, etc.
- Consider PHP for projects that require a combination of open and nonproprietary technology on which to build architecturally basic (but not necessarily small) dynamic Web applications.
- Consider PHP as a supporting technology in a broader portfolio of AD technologies, where it can provide a specialized toolset for building Web graphical user interface (GUI) front ends to service-oriented architecture (SOA) back-end services.
- Consider adopting and customizing industry-proven Web solutions (e.g., Drupal, MediaWiki, etc.) built on PHP before building solutions from scratch.
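As a back-of-envelope check on the developer-count prediction above (3 million in 2007 growing to as high as 5 million by 2013), the implied compound annual growth rate works out to roughly 9% per year. The only inputs here are the figures quoted in the post itself.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over a period."""
    return (end / start) ** (1 / years) - 1


# 3M developers in 2007 -> 5M by 2013 (figures from the post above)
implied_growth = cagr(3_000_000, 5_000_000, 2013 - 2007)  # about 0.089, ~8.9%/yr
```

That pace is notably slower than the 2007-to-2009 stretch (3M to 4M is about 15%/year), which is consistent with the post’s framing of PHP as widely adopted in the short term but facing increasing competition over the long term.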
by Mark Driver | October 6, 2009 | 16 Comments
ColdFusion has been around practically as long as the web itself. Today it retains a loyal but relatively small developer base (Adobe counts it at around 750K developers, compared to my own estimate of about 48 trillion for .NET).
Overall, CF has lost market share to competing technologies for years (ASP, Java, PHP, etc.), and in virtually all cases when a developer tool loses momentum like this, the result is an inevitable march to oblivion — albeit sometimes a very slow one. AD tools generally don’t make a market turnaround; instead, developers migrate to the next big thing and rarely look back. But if it CAN happen, then it SHOULD happen with ColdFusion. It is far too easy to pigeonhole CF as a ‘legacy’ toolset, but if you did, you’d be wrong.
Adobe has just released version 9 with an impressive list of new features that stand toe-to-toe with anything you’ll find from Microsoft, IBM, Oracle, or any of the elite open source options as well.
Yes, CF is still very much a proprietary toolset — despite growing OSS options. But many web developers are also finding that sometimes a little proprietary (emphasis on “little”) is worth it if you can cut your development time by an order of magnitude.
Here’s the bottom line: no other web development toolset available today matches ColdFusion’s balance of flexibility, scalability and out-of-the-box RAD experience for dynamic web applications. There are plenty that do a better job in one of these areas; there are a few that do a slightly better job in two out of three; but there are none that match CF in all three.
Have you looked at ColdFusion recently? If not, then start with my recent research note (assuming you’re a Gartner client, of course) and then check out the newest version at http://www.adobe.com/products/coldfusion/
p.s. It looks like Adobe has a copy of the report here as well.
by Mark Driver | October 6, 2009 | 3 Comments
My colleague Ray Valdes has posted a nice blog entry regarding the announcement on Monday at Adobe’s MAX conference that Flash applications will be deployable to Apple’s iPhone. Here’s my take…
A major theme of this year’s conference was the upcoming 10.1 release of the Flash player targeted specifically at a wide range of smart phone devices. Adobe has covered nearly all the bases but Apple remains the missing piece — a massive missing piece — of the Flash ubiquity story.
It’s pretty clear why Adobe wants Flash on the iPhone, but Apple says no, pointing to potential performance issues as the major reason. Of course, this is a smokescreen in this analyst’s humble opinion.
The main reason Apple says no to Flash is to maintain tight control over rich application experiences through its App Store. In other words, if I can access cool Flash games via Safari with built-in Flash, then why would I pay for them, or more specifically, why would I pay Apple for them?
Effectively, a true Flash experience on the iPhone (or any device, for that matter) makes it impossible to police the content on the device (from porn to games and everything in between). This is unacceptable to Apple.
On Monday Adobe announced Flash applications on the iPhone. But… not really… at least not entirely.
Basically, the next generation of the Flash developer IDE will allow you to compile Flash applications down to native iPhone code, bypassing the need for the Flash runtime player altogether. However, these applications won’t run in the browser and must still be accessed and installed via Apple’s App Store. So Apple loses nothing, and Adobe gets less than it wanted — far less.
However, there’s still real value here. Developers get another toolset with which to develop iPhone applications. Apple gets a massive influx of new applications for its device. So Adobe gets its foothold on the iPhone — maybe a toehold. No true “Flash on the web” experience, but standalone applications are the next best thing. Overall it’s a good move for everyone involved, except…
1. I’ve always been wary of cross-compilers. If the process turns out to be truly turnkey, with complete compatibility and performance, then GREAT. If not, then it could lead to a nightmare of forked code.
2. As a more subtle but longer-reaching issue, I fear that Adobe’s announcement may damage the long-standing message behind Flash. The runtime has been the key, but now (when push comes to shove) it doesn’t seem the runtime is as important as it might have appeared. If Flash can create natively compiled applications for the iPhone, then why not for RIM and Palm as well?
Personally, I’m torn. Yes, it’s cool to have Flash applications (if not the ‘Flash’ runtime) on the iPhone. However, it’s also not the true ‘Flash as an integral element of the web’ message that Adobe evangelizes. It’s a step forward, no doubt, but it doesn’t close the book on the issue by far.
by Mark Driver | June 16, 2009 | Comments Off
I just published a research note that outlines some key trends and strategies to maximize OSS investments in the ‘down’ economy. You can find it here.
Let me know what you think.
How is your IT org leveraging OSS in today’s IT environment?
Category: Uncategorized Tags: open source