Richard Jones

A member of the Gartner Blog Network
Managing VP
5 years at Gartner
28 years IT industry

Richard Jones is the managing vice president for Cloud, Data Center, and Wireless/Mobility in the Gartner Technical Professionals research group. He covers disaster recovery, business continuity, x86 server operating systems…

Oracle Database 11g R2 Support on RHEL & OL 6 – What’s Up?

by Richard Jones  |  April 7, 2012  |  Comments Off

Many of the Linux analysts at Gartner, including me, have received customer inquiries regarding Oracle’s lack of certification and support for Oracle Database 11g R2 on Red Hat Enterprise Linux (RHEL) 6 and on Oracle’s own Red Hat-compatible kernel in Oracle Linux (OL) 6.  Oracle does certify and support Oracle Database 11g R2 on its Unbreakable Enterprise Kernel (UEK, including the recently released R2), which is a more recent hardening of the mainline Linux kernel with a focus on performance and scalability for Oracle database, middleware, and applications.

Fortunately, Oracle recently announced that Database 11g R2 and Oracle Fusion Middleware 11g R1 will be supported on RHEL 6 and OL 6 running the Red Hat Compatible Kernel within 90 days of its March 22, 2012 announcement.  This brings me to the point of my blog.  Conspiracy theorists love to dream up all sorts of reasons for Oracle’s non-support of RHEL 6 up to this point, from Oracle wanting to destroy Red Hat (which I can’t see helping Oracle or anyone else, for that matter) to who-knows-what, but they tend to gloss over the non-glamorous possibilities.

Many of you know that I worked for a software vendor, specifically a systems software vendor, for over 20 years.  At one point early in my career, I started an engineering “special ops” team.  Think of these engineers as “Navy SEALs” specifically assigned to track down, pinpoint, and fix software issues reported by customers.  We called this team the “Critical Problem Resolution” team, or CPR for short (yes, “CPR” was chosen for its association with saving lives in the medical field).  In that role, I learned a great deal about debugging problems, especially interactions between multiple vendors’ software integrated into a complete system.  I also learned of the vast difference between systems debugging and application debugging; they are two different worlds.  Bottom line: how quickly a problem can be debugged and fixed is proportional to the ease with which it can be reproduced.  Intermittent problems were my worst nightmare.  We would only get a whack at the problem very infrequently, meaning that it would take us a whole lot of time to pinpoint the issue in order to solve it.  We loved easy-to-reproduce problems because we could fix or work around them quickly.  Occasionally, however, an issue would turn out to be rooted in an architectural mistake; those were difficult to fix, as fixing them for one situation would often break other configurations.  Systems engineers often resolve these by adding configuration switches that alter behavior based on the configuration the software is running in.
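As a rough illustration of that last point, here is a minimal sketch (in Python, purely for brevity) of the configuration-switch pattern: gate a fix behind a setting so it only changes behavior in the configurations that need it. The flag name and code paths below are hypothetical, not taken from any actual Oracle or Red Hat code.

```python
# Hypothetical sketch of the configuration-switch pattern: gate a fix behind a
# setting so that it only changes behavior in the configurations that need it.
import os

# Illustrative flag name; real products would use their own config mechanism.
USE_ALTERNATE_LOCKING = os.environ.get("APP_ALTERNATE_LOCKING", "0") == "1"

def acquire_resource(resource_id: str) -> str:
    if USE_ALTERNATE_LOCKING:
        # Work around a timing issue seen only in certain configurations.
        return f"acquired {resource_id} via alternate (conservative) locking"
    # Default path, unchanged for configurations that were already working.
    return f"acquired {resource_id} via standard locking"

if __name__ == "__main__":
    print(acquire_resource("table:orders"))
```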

The Oracle Database quality assurance team has a battery of test suites that it runs through in order to qualify the database on each platform.  An RDBMS is a unique piece of code (actually, lots of modules) in that it behaves as both systems software and application software.  In addition, an RDBMS can stress an operating system and the underlying hardware more than most other workloads: stress tests run hundreds of threads that all contend for shared resources, allocate and free memory in many different sizes and patterns, spin-lock across multiple cores, and hit the I/O channels harder than almost anything else.

Now add configuration testing on top of that: Oracle’s support of Oracle Database 11g R2 on RHEL 6 also includes support for the Oracle Real Application Clusters (RAC) option.  RAC is yet another animal and depends heavily on tight OS timing, making it sensitive to timing issues in the underlying hardware and operating system.  Remember that RAC must synchronize RDBMS state across all of the nodes in the cluster; with every RAC node changing state in response to SQL queries, those changes must be coordinated and synchronized across the entire cluster.  In addition, nodes that fail, are removed, or are added must be properly brought into and out of this synchronized system, which is managed by a distributed lock manager (DLM).  Cluster DLMs are among the fiercest loads an operating system will ever see.  The amount of testing is truly intense, and a DLM will find all sorts of hidden timing issues in today’s complex SMP operating systems and hardware.  RDBMS testing is a unique beast and, as expected, can stress a system and surface subtle errors and timing windows that no other tests are able to find.  As a result, all issues that Oracle finds, regardless of their source, have to be resolved prior to supporting the RDBMS on any given platform, and that can take time.  Furthermore, any issue found can be fixed in a number of ways: a repair in the operating system code (Linux), a repair in the RDBMS code (such as a change to use Upstart instead of init, since Red Hat moved from the System V init method to the newer Upstart in RHEL 6), or a workaround in either place.  Unfortunately, the scope of the fix (especially if it is a workaround in the RDBMS code) typically requires a full run-through of all the test cases.  With something as complex as an RDBMS, that can take a long time.
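To make the stress-testing point concrete, here is a minimal sketch (again in Python, purely illustrative and nothing like Oracle’s actual QA suites) of the style of load described above: many threads contending for a shared lock, allocating and freeing memory of varying sizes, and exercising the I/O path.

```python
# Illustrative sketch only (nothing like Oracle's real QA suites) of the style
# of load an RDBMS places on an OS: many threads contending for a shared lock,
# allocating/freeing memory of varying sizes, and hitting the I/O path.
import os
import random
import tempfile
import threading

NUM_THREADS = 100   # hundreds of contending threads
ITERATIONS = 200    # work items per thread (kept small for a quick run)

shared_counter = 0
counter_lock = threading.Lock()  # the shared resource every thread fights over

def worker(scratch_dir: str) -> None:
    global shared_counter
    for _ in range(ITERATIONS):
        # Contend on a shared lock, like sessions contending on a latch.
        with counter_lock:
            shared_counter += 1
        # Allocate and free memory in varying sizes.
        buf = bytearray(random.randint(64, 64 * 1024))
        del buf
        # Exercise the I/O path with a small write.
        path = os.path.join(scratch_dir, f"t{threading.get_ident()}.tmp")
        with open(path, "wb") as f:
            f.write(os.urandom(4096))

def main() -> None:
    with tempfile.TemporaryDirectory() as scratch_dir:
        threads = [threading.Thread(target=worker, args=(scratch_dir,))
                   for _ in range(NUM_THREADS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    print("completed", shared_counter, "locked increments")

if __name__ == "__main__":
    main()
```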

In summary, all indications are that Oracle has been working through many issues with Oracle Database 11g R2 on the RHEL 6 and OL 6 kernels and was not prepared to certify and support it on those platforms until it was confident those issues had been resolved.   I learned back in my “CPR” days that promising a fix in a given amount of time would get me in trouble if I did so before the engineering team had pinpointed the actual issue.  But management would always push me hard for a date and time for a fix.  Can you say “being between a rock and a hard place”?

Comments Off

Category: Linux     Tags:

More Details on Novell-CPTN Deal

by Richard Jones  |  January 24, 2011  |  Comments Off

An Infoweek news article broke late last week, shedding some additional light on the patents that are part of the CPTN deal. When Novell and Microsoft struck their patent cross-license deal back in 2006, the terms and conditions of the cross-license undoubtedly included clauses spelling out what was to happen should either party be acquired. I had suspected that this is what spawned the CPTN deal. Now, Novell's additional hint that the patents focus on "enterprise-level computer systems management software," together with its continued insistence that the Unix copyrights are not part of the deal, leads one to believe that these patents are probably related to Novell Directory Services, identity management, and other infrastructure software Novell sells or has sold in the past, and not to its Linux business.

I received a number of comments on my previous post that merit addressing.  The first was that I was wrong: Novell actually did obtain seven patents as part of its UnixWare deals (see attachment D at Groklaw). Unfortunately, three have already expired, three more were abandoned, and only one stands until 2014.  We do not know whether any of these are part of the CPTN deal, but the Infoweek article indicates that, after Novell did due diligence on the patents included in the deal, 20 of the 882 turned out to have expired and one had been counted twice.  What do you think? Having been on the inside of Novell (granted, four years ago now), I don’t think Novell would have included any Linux/Unix patents in the deal.  Remember that Novell helped co-found the Open Invention Network in 2005 and donated Linux-related patents to that network.  If the patents it donated are also included in the CPTN deal, Linux already has those patents in its portfolio.

Secondly, some of my co-workers rightfully pointed out that an industry exists around buying and selling patents and asserting them against other vendors or entities for infringement royalties.  In my previous post, I had attempted to paint a picture of how Novell has historically used its patents and how it views the whole patent sub-culture.  The Infoweek article supports my point.  Novell and some other vendors think of the organizations that buy, sell, and assert patents as trolls, a negative label for what is actually a legitimate business under the concept of patents.  So while some vendors may dislike those organizations’ actions with respect to patent law, the practice does create a thriving business that helps the economy move forward (again, vendors may argue this point).

Your thoughts are always welcome – what do you think?

Comments Off

Category: Uncategorized     Tags:

Novell-Attachmate and the CPTN Patent Deal

by Richard Jones  |  December 17, 2010  |  4 Comments

A colleague of mine, Drue Reeves, pointed me to a recent Channel Register article indicating that it is not just Microsoft behind CPTN, the holding company that was created to acquire key Novell patents.  The article indicates that Apple, EMC, and Oracle also appear to have been involved with Microsoft in creating the holding company.  To me, this signals that these companies shared a concern about Novell patents getting into the hands of patent trolls.

As many of you know, I worked for Novell in its research and development division for 20 years, and for about six of those years I served as a technical expert on Novell’s internal Inventions Committee, a group of lawyers and engineers that analyzes and reviews intellectual property submissions from the R&D teams to determine whether an invention is worthy of a patent application.  As a result, I have an unusually detailed knowledge of Novell’s patent portfolio.  More importantly, in that position I learned that large technology companies will purposefully seek out and attend the intellectual-property auctions of defunct technology companies with the sole goal of purchasing patents to keep them out of the hands of patent trolls.  All of the companies listed know each other from attending these types of events together.  So it makes sense that they would talk with Novell about ensuring its patent portfolio doesn’t fall into the wrong hands.

So what do large technology companies do with their patents?  These days they serve two goals for the most part: protection from other companies and a “big stick” when negotiating with a potential business partner.  However, on occasion a large technology vendor will use its patents to extract royalties from another company that has used its patented technology and is hurting its revenue stream. Another analogy for patents is the arms race between the USA and the former Soviet Union during the Cold War: the arms (patents) kept each side in check and brought both to the ‘negotiating’ table on many occasions.

However, if a patent troll gets a hold of a patent, they are only after one thing: royalties while offering nothing tangible in return.  Some have termed this “patent extortion.” Technology vendors do not like this as it can seriously disrupt a product line (and its revenue stream).

Knowing how Novell thinks and the partnerships it has forged with many other technology vendors, I believe this move to place its intellectual property with a holding company is about protecting the industry, and customers, from patent trolls.

Oh, and another side note:  one bit of confusion that I continually see crop up in the open source Linux community is the idea that the sale of Novell patents to CPTN spells bad news for them.  NOT TRUE.  Novell never owned UNIX patents (and any it might have held would have long since expired anyway).  Novell owns the UNIX copyrights.  That’s a different animal altogether, and Novell still retains those copyrights; they were not a part of the CPTN deal.

4 Comments »

Category: Uncategorized     Tags:

Flying Through the Clouds

by Richard Jones  |  December 1, 2010  |  1 Comment

In my last post, after getting back onto the blogging network, I mentioned that I’ve had a heavy travel schedule over the past few months.  Most of my travel has been spent visiting IT organizations, with the lion’s share of the discussions focused on cloud computing.  I’ve been intrigued by the successes I’ve heard about during my travels: most everyone I’ve talked with is planning, building, or has already built and is using an internal cloud.

I have to mention the paper that Chris Wolf recently published (a Gartner subscription to the IT1 [Burton] service is required), titled Stuck Between Stations: From Traditional Data Center to Internal Cloud.  In this document, Chris sets forth a guidance framework that outlines five steps to get to a cloud-enabled state:

  • Technologically proficient
  • Operationally ready
  • Application centric
  • Service oriented
  • Cloud enabled

He covers five areas of maturity that organizations must address to move through the steps listed above:

  • Governance
  • Service automation
  • Service management
  • Cloud infrastructure management
  • HIaaS infrastructure

As I’ve talked to IT organizations moving along the path to “cloud enabled,” I’d have to say that most are currently between the “technologically proficient” and “operationally ready” phases.  Another interesting aspect is that “service oriented” defines what most people think of as an internal cloud today.  “Cloud enabled” means that an internal cloud has been interfaced with external clouds to allow for workload balancing, spillover for on-demand capacity, and lower-cost operations that leverage those external clouds for qualifying workloads.  Given that, some of you I spoke with feel that “service oriented” will be your end goal (at least for now).

This brings up another observation:  thus far, everyone I’ve talked to who is operating an internal cloud has told me they had to roll their own.  Off-the-shelf software to build a cloud interface was not to be found when they started. As a result, they were forced to build their own service catalogue and workflow interfaces, with connecting middleware to the back-end virtual infrastructure (VMware vSphere in most cases).  Granted, VMware released vCloud Director at VMworld in August-September of this year. Other companies have also released or are releasing software to solve this problem, such as CA, Canonical (Ubuntu), Novell, and Red Hat.  Hardware vendors such as HP, IBM, etc. are bundling solutions with hardware. And, as is typical of a new market, I think there could be nearly 100 startups offering “cloud-in-a-box” solutions. But all of these have one thing in common:  they are v1.0, and the ability to customize is either immature or lacking. As a result, these offerings may help those in the design phase, but probably not those further down the road.

Moving to “cloud enabled” requires that an organization implement policy and controls whose functions are enabled by workload meta-data.  Think of defining a workload with specific requirements, such as availability, performance, security, data protection, lifespan, and termination (archive, delete, secure delete, etc.).  Many have talked about service levels (Diamond, Platinum, Gold, Silver, etc.) that define groupings of requirements. This level of workload or VM meta-data is required to ensure that the workload is dynamically placed on an internal or external cloud service and can be properly migrated between services so that its requirements are met while keeping costs as low as possible.
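As a rough illustration only, workload meta-data of this kind might look something like the Python sketch below, with a trivial policy function deciding placement. The field names, service tiers, and placement rule are hypothetical, not any vendor’s schema.

```python
# Hypothetical sketch of workload meta-data driving placement policy.
# Field names, tiers, and the placement rule are illustrative, not a standard.
workload = {
    "name": "order-processing",
    "service_level": "Gold",         # grouping of requirements (Diamond/Platinum/Gold/Silver)
    "availability": "99.9%",
    "performance": {"vcpus": 4, "memory_gb": 16},
    "security": "internal-only",     # data may not leave the internal cloud
    "data_protection": "daily-backup",
    "lifespan_days": 365,
    "termination": "secure-delete",  # archive, delete, or secure delete at end of life
}

def choose_placement(workload: dict) -> str:
    """Place the workload on the cheapest cloud that still meets its requirements."""
    if workload["security"] == "internal-only":
        return "internal-cloud"
    # Qualifying workloads can spill over to a lower-cost external cloud.
    return "external-cloud"

print(choose_placement(workload))  # -> internal-cloud
```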

But while dynamic workload mobility driven by policy based on workload service-level requirements is a fine end goal, IT organizations must take one step at a time to get there.  This brings me to one last observation:  governance, not technology, seems to be the biggest hurdle for the organizations I’ve spoken with.

Let me know your thoughts…

1 Comment »

Category: Uncategorized     Tags:

Back on the Blogging Network

by Richard Jones  |  November 30, 2010  |  Comments Off

I guess one might call it a comedy of errors.  Last August, I upgraded my Gartner laptop from Windows XP to Windows 7.  I like Windows 7, but during the transition the VPN client that we Gartner analysts use didn’t get configured correctly.  Fixing it requires being connected to the internal Gartner network so that IT can manage my machine without the VPN client loaded.  Here’s the catch-22: I work from a home office, so in order to fix the issue, I had to schedule time to travel to a Gartner office where I could meet with someone from IT to re-install and configure the client.

Travel, travel, travel, and just plain being busy made weeks turn into months, and finally, just before the US Thanksgiving break, I got a chance to meet up with IT in a Gartner office to get the reconfiguration done.  Yahoo!  Now I am able to get back onto the blogging network (sorry for the long silence).

This is one of those examples when a heavy travel schedule prevented me from getting to needed maintenance, and things just had to wait.

Comments Off

Category: Uncategorized     Tags:

VMware and Novell

by Richard Jones  |  June 10, 2010  |  1 Comment

The news on June 9th of VMware and Novell’s partnership probably surprised some people.  However, in many ways this makes sense.  I had blogged two and a half years ago about VMware’s vulnerability in that 80+% of the guest operating systems hosted on its hypervisor are Windows servers.  With Microsoft’s push of Hyper-V, the inevitable will happen.  With 80+% of its guests running Windows, VMware’s hypervisor has effectively been filling a hole in the Windows ecosystem, and Microsoft is now aggressively working to fill that hole itself with Hyper-V.

Fortunately, VMware is doing the right things by going after the cloud, as trying to stick to the old path would result in a slow and painful death.  Freeing itself from any dependencies on the Windows franchise and moving to be the premier private and internal cloud services vendor is the right thing for VMware.  Adopting SLES as the operating system layer and the core libraries for its appliances allows it to do just that.  Some may wonder "why not Red Hat?"  Red Hat is out to compete with VMware for the same business, as evidenced by its launch of Red Hat Enterprise Virtualization last November.  Novell, by contrast, decided not to compete at the hypervisor layer, a decision it took back in August 2008 when Citrix, Microsoft, and VMware all decided to offer their hypervisors for free and instead charge for managing them.

The more interesting speculation surrounds a possible acquisition of Novell by VMware (Novell has recently put itself up for sale, spurred by the Elliott offer).  Novell has a great deal of intellectual property in identity management, and coupled with its recent drive toward identity-enabled, policy-driven intelligent workload management, that is of great value in building out enterprise-class clouds.  Novell also has desktop management technology that can enhance the VMware View product portfolio.  And Novell’s work to move to open source, web-based collaboration solutions, such as Novell Pulse, can also be leveraged as a packaged cloud service.

Another twist in this story is the Microsoft-Novell Linux deal.  Interestingly enough, a VMware/Novell pact or possible acquisition may actually help Microsoft from the perspective of SLES-based appliances, which become easier to migrate between VMware and Hyper-V.  While interoperability at this level is a boon for customers, history has shown that such alignments, in the long run, end up with the majority aligned to single-vendor solution stacks, i.e. Windows on Hyper-V and SLES on VMware.

Let me know what you think!

1 Comment »

Category: Uncategorized     Tags: