Darin Stewart

A member of the Gartner Blog Network

Darin Stewart
Research Director
1 year with Gartner
15 years IT industry

Darin Stewart is a research director for Gartner in the Collaboration and Content Strategies service. He covers a broad range of technologies that together comprise enterprise content management.

Coming Up for Air and Diving Into Catalyst 2012

by Darin Stewart  |  August 8, 2012  |  2 Comments

For the past few weeks I’ve been immersed in work on open innovation, linked data and expertise sharing. I’ve managed to sneak out a couple of related blog posts during that time, but most of my effort has been focused on writing a new research report, “Radical Openness: Profiting from data you didn’t create, people you don’t employ and ideas you didn’t have”. This is all pretty far outside my normal areas of coverage, but it has been fun participating in the Gartner Maverick Research Incubator. The paper is finally done (well, a draft anyway), and the plan was to return to my regular research while it works its way through the editorial food chain. So this morning I took a deep breath, ready to get back to my usual coverage. Then I looked at the calendar. Catalyst is only two weeks away! So I’ll be heads down again to prepare. I’m speaking on four topics this year as well as participating in an experts panel on enterprise metadata. The abstracts and key points for each talk are below. If you are attending the conference, please say hello. If you can’t make it, I’m always happy to talk as part of the Gartner for Technical Professionals service. I hope to see you in San Diego.

Content Management in the Cloud: What’s Possible and When to Use It

Moving content management into the cloud can be a very appealing prospect. In many situations a Cloud-CMS can be the cure for all your content ills. In other situations it can be the worst mistake you will ever make. This session will give you the tools you need to decide when you should put your content in the cloud and, more importantly, when you should not.

  • When is it appropriate (or not) to move content into the cloud?
  • What are the critical features of a Cloud-based CMS?
  • How do I manage a hybrid on-premise / in-cloud content environment?

Content Management for Mobile Devices

Content Mobility is about precision content delivery. Because users are interacting with your content for shorter periods of time, with a smaller display and more distractions, it is critical to make the most of that limited window of opportunity. This session explains how to leverage three progressive stages of web content management to create a cohesive, focused, conversational experience for your users that crosses channels, platforms and sessions.

  • Optimize content for mobile delivery
  • Deliver content relevant to the situation at hand
  • Create a web experience tailored to a specific individual

The Semantic Web and Linked Data: Leveraging Information Where It Lives

The modern enterprise exists within a world of information abundance. Semantic web technologies provide a practical solution for integrating diverse resources and analyzing them in a way that preserves their original meaning. Linked open data is a rich source of data that can augment internal, proprietary information sources to yield insights without the expense of creating the data internally. Together the semantic web and linked data can provide a distinct competitive advantage to the information intensive enterprise.

  • What is Linked Data?
  • Is the Semantic Web real?
  • What does this mean for my company and what should we be doing about it?

Metadata in the Real World

Metadata is essential to effective information management. It can make data more findable, usable and valuable. Even so, effective metadata can be difficult to design and implement. Getting users to adopt and apply metadata with any consistency can seem nearly impossible. Rest assured, the rewards are well worth the effort. This session will explain how to establish an effective metadata practice in your enterprise.

  • How do I create an appropriate metadata framework?
  • How do I facilitate adoption and application of metadata?
  • How do I justify a metadata initiative?


Gartner Catalyst Conference


Category: Enterprise Content Management

The Red Queen’s Deadly Effect on Innovation

by Darin Stewart  |  August 6, 2012  |  2 Comments

The one element of innovation that seems incontrovertible is that it must happen faster than it has in the past. Product lifecycles are shrinking dramatically across all industries. In the automotive industry, for example, a 48-month development cycle and a six-year model life were once standard practice. Today concept-to-production times are below 24 months, and several industry leaders are aiming to bring that period down to 12 months. Add to this the impact of globalization, which introduces not only new markets but also new competitors.

Most companies respond to this challenge with an attitude of “work harder, faster,” investing more in core competencies and familiar resources. This is a losing proposition. It amounts to doing more of the same, just at a quicker pace. Rather than accelerating innovation, this approach traps companies in what can be termed the “Red Queen Effect.” This pernicious dynamic is named for the advice Lewis Carroll’s Red Queen offers to Alice in “Through the Looking Glass.”


“Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else — if you run very fast for a long time, as we’ve been doing.” “A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to go somewhere else, you must run at least twice as fast as that!”


In sum, the Red Queen is saying that you must exert ever more effort just to maintain your current position. This approach may yield short-term benefits and avoid a bit of risk, but it ultimately cannot keep pace in a globalized economy. In their book “The Global Brain,” Satish Nambisan and Mohanbir Sawhney describe the Red Queen Effect in terms of its impact on business:

“Despite having hundreds of in-house scientists and engineers working tirelessly on innovation projects…innovation pipelines are not delivering the results they need to sustain growth. Innovation productivity is declining while the cost of new product development is increasing day by day. Investing more dollars into internal R&D efforts does not seem to produce the desired payoffs.”

Red Queen thinking is a learned behavior. Companies are continually looking for ways to improve their competitiveness. The impetus for change usually comes from a rival in the marketplace. When a threat is perceived, the organization makes incremental improvements in efficiency, messaging and products to strengthen its position. The rival, also looking for ways to improve, sees these actions, learns from them and makes its own incremental improvements in response. This arms-race approach to organizational improvement strengthens both organizations over time and intensifies competition. It does, however, have a downside.

Red Queen organizations tend to use history as their guide for action, responding to new situations with whatever worked in the past. This worked well in a slower-paced, more localized market. As the world has expanded and accelerated, lessons learned in an earlier setting quickly become irrelevant or even counterproductive. In other words, what worked then probably won’t work now. In addition, this learned-behavior approach cannot account for new players with whom the company does not share a co-evolutionary history.

Industry is littered with examples of disruptive dark horses to which established players have no response. Consider the music industry’s litigious and ultimately futile response to digital downloads. Sears & Roebuck clung fiercely to catalogs and brick-and-mortar stores as its competitors moved online. Major airlines were driven out of business by upstart budget carriers. Organizations need to learn the lessons of the past but should not be constrained by them. Fortunately, what can be learned can be unlearned, and the Red Queen can be overthrown. To do so, however, you must first meet and defeat her three henchmen: the Man of Genius, the Alchemist and the Hidden Expert.

Who these henchmen are and how they can be overcome is the subject of my forthcoming Maverick research report, “Radical Openness: Profiting from data you didn’t create, people you don’t employ and ideas you didn’t have.” I will also be speaking on the subject at Gartner’s US Symposium in Orlando this November. Hope to see you there.



Category: Innovation

The World Has Changed. Innovation Strategy Should Too.

by Darin Stewart  |  August 1, 2012  |  3 Comments

Innovation is the lifeblood of the modern enterprise. Companies must continually revise, refine and expand their portfolio of products and competencies if they are to thrive in the modern economy. The ability to identify or create new opportunities and to respond to those opportunities is often the determining factor in the success or failure of an organization. Continual innovation is no longer just an advantage. It is a survival skill.

But true innovation is hard. It is expensive. It is risky. And by many measures it is in decline. Invention is up, as reflected in the number of patent applications submitted each year, but invention is not innovation. As Jan Fagerberg of Oslo University puts it, “Invention is the first occurrence of an idea for a new product or process, while innovation is the first attempt to carry it out into practice.” Innovation only occurs when invention is acted upon and a true benefit, whether social, economic or strategic, is actually realized. This usually involves a network of engineers, product managers, marketers, financiers and a host of other players. Each of these actors contributes to the process of turning the creative spark captured in a new invention into the tangible benefits indicative of true innovation.

US Patent Applications, 1963-2011 (source: US Patent Office)

Most intellectual property (IP), the umbrella label for invention of any sort, never makes it to this stage. Too often it is filed away as a hedge against future litigation or used as a tool to instigate litigation. More commonly, a patent gets shelved simply because the owning company doesn’t know what to do with it. The classic “not invented here syndrome” has evolved into a pervasive attitude of “That’s not what we do here” or even “That’s not what we sell here.” As a result, companies are sacrificing opportunity on the altar of core competency. If a potential innovation does not cleanly align with the company’s current business model, it is unlikely to be acted upon except to ensure that competitors cannot benefit from it in any way.

The current climate of knowledge hoarding and defensive licensing is not new. Closed innovation has been the accepted standard model for over a century. Internal research and development activities lead to internally developed products that are then distributed by the company. This approach has held sway from Thomas Edison’s Menlo Park through AT&T’s Bell Labs and beyond. In a pre-internet world, such an approach made perfect sense and worked well. That world no longer exists. Globalization and modern telecommunications have moved us from an environment of knowledge scarcity and skills secrecy to one of knowledge abundance and ubiquitous access to expertise. The world has changed. It is time for our approach to innovation to change as well. The closed model of innovation must become open, radically open.

Thriving in a global economy requires embracing opportunity regardless of where it originates, who participates and how it is ultimately delivered. To accomplish this, the boundaries of the enterprise need to become porous. The inflow of external ideas, assets and information should be placed on an equal footing with the outflow of products and licenses. Proprietary innovation, expertise and data should no longer be jealously guarded and labeled “for internal use only.” Rather than forcing innovation to conform to a particular business model, the open enterprise must look for the new business models a given innovation may suggest and partner with whoever is best positioned to realize its potential.


Category: Innovation

Cloud Content Management Is Not About Cost Savings

by Darin Stewart  |  May 25, 2012  |  3 Comments

I recently copied my music library from a server sitting in my garage to Google Play in the cloud.  Well, most of it anyway.  After a week of round the clock uploading, I still had a few thousand tracks left on my hard drive when I hit Google’s quota limit. I didn’t go through this exercise because I was too cheap to buy more storage for my home server.  I migrated my media to the cloud because I wanted easy access to my music from anywhere, at any time, from any device.  As I talk with organizations about what to do with their content management infrastructure, I find the same dynamic at work.  Sure they would love to save some money in the data center, but the real driver is the need for people to access the content they depend on to do their jobs from outside the firewall. 

The dynamic is similar to the early days of business social computing. IT departments were terrified of it and execs didn’t get it, so they just said “no”. Of course that didn’t stop people from using the tools to get their jobs done. They just turned to consumer-oriented solutions like Facebook and Twitter. As a result, the tools the enterprise was trying to keep out found their way in anyway, just in an ungoverned, unmanaged and insecure way. The same thing is happening now as staff turn to Dropbox, Senduit or some other file-sharing service so they can get their content from home or on the road without jumping through IT department hoops. This is not a good dynamic for enterprise content management.

I am still not convinced that full-blown content management belongs in the cloud (with the possible exception of some SMBs). Security isn’t the issue; it’s integration with other on-premise enterprise applications. Any deep integration is going to be trickier with a cloud solution and is going to eat up most of the savings realized by moving to a SaaS or PaaS solution. Consider also that all of your content will still need to be migrated to the cloud if you go that route. This is an expensive and error-prone process. Metadata often gets lost or garbled along the way. I received a text from my daughter this morning informing me that an album tagged as David Bowie now consists of various tracks by Duran Duran and Tchaikovsky. The effort of getting your content into the cloud and cleaning it up once it’s there should not be underestimated or under-budgeted. And remember, I still have a few gigabytes of music on my home server, so I’m effectively paying for and supporting a hybrid solution. At times I long for the simple old days of cassette mix tapes.

I’m working on a new research document that will detail these and other issues, but for now it’s enough to note that money should not be the primary driver to the cloud. Ubiquitous yet managed access to content should be the real goal. In most cases a hybrid solution that has one foot in the cloud and one foot in the data center is the best approach. This is not the clean slam dunk on ECM savings IT execs want to hear. Getting content to your staff when and where they need it might even cost a little extra, but it’s worth it.


Category: cloud Enterprise Content Management

Google’s Knowledge Graph: Yeah, that’s the Semantic Web (sort of)

by Darin Stewart  |  May 17, 2012  |  2 Comments

Google is about to get a whole lot more useful. Yesterday, the search titan announced the “Knowledge Graph,” a functional enhancement that attempts to provide actual information about the subject of your query rather than just a list of links. This might be helpful, but the really interesting bit is the part about the graph. As Google SVP Amit Singhal put it in his blog post:


“The Knowledge Graph also helps us understand the relationships between things. Marie Curie is a person in the Knowledge Graph, and she had two children, one of whom also won a Nobel Prize, as well as a husband, Pierre Curie, who claimed a third Nobel Prize for the family. All of these are linked in our graph. It’s not just a catalog of objects; it also models all these inter-relationships. It’s the intelligence between these different entities that’s the key.”


That’s what a graph is, a structured set of meaningful relationships. The great challenge of the web is to bring some sort of useful order to the chaos of available online resources. Search is pretty good at finding stuff, but does little to show how things relate to each other. I am likely to miss huge swaths of useful information just because I don’t know enough to ask the right questions. I need a guide, something like a knowledgeable clerk in a bookstore or a good librarian who can point me to important titles and authors I would have otherwise missed. This is what Google is attempting to provide with the Knowledge Graph. Not just the answer to what you asked, but also the answers to the questions you probably should have asked. They are linking information together in a meaningful way and presenting the integrated results to the user. Pretty neat trick. Of course, the dirty little secret of the Knowledge Graph is that you don’t need to be Google to create one. You just need to know a little about how the Semantic Web works.

A couple of years ago, Google purchased a company called Metaweb. As part of the deal, Google took ownership of Freebase, a massive public database of Linked Open Data: data that is structured in a semantically meaningful way and linked to other useful information. In other words, Freebase was a huge graph of knowledge available to the public, one of many. With a few tools, some semantic know-how and a bit of elbow grease, you could create your own knowledge graph that integrates these public sources with your own internal, proprietary data. The biotech and intelligence industries have been doing it for years.
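To make that concrete, here is a minimal sketch in Python of the basic move: pull a public Linked Open Data description into a local graph, add a few internal facts alongside it, and query across both. It assumes the open-source rdflib library and network access; the DBpedia URL mirrors Google’s Marie Curie example, and the example.com namespace and “Project X” fact are invented purely for illustration.

    from rdflib import Graph, Literal, Namespace, URIRef

    g = Graph()

    # Pull a public, semantically structured description of an entity.
    # DBpedia publishes RDF for each resource it describes.
    curie = URIRef("http://dbpedia.org/resource/Marie_Curie")
    g.parse("http://dbpedia.org/data/Marie_Curie.rdf")

    # Layer internal, proprietary facts into the same graph (hypothetical namespace).
    INTERNAL = Namespace("http://example.com/internal#")
    g.add((curie, INTERNAL.citedInProject, Literal("Project X")))

    # A single SPARQL query now spans the public and the private data.
    results = g.query("""
        SELECT ?p ?o
        WHERE { <http://dbpedia.org/resource/Marie_Curie> ?p ?o }
        LIMIT 10
    """)
    for prop, value in results:
        print(prop, value)

The same pattern scales up to a proper triple store and a curated ontology, but the shape of the work does not change: a public graph plus a private graph, with one query language over both.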

Google mentions Freebase in passing, but otherwise doesn’t say much about the semantic sources they are leveraging. I think this is the result of a couple of trends in the semantic realm. Last year I wrote a document for Gartner entitled “Finding Meaning in the Enterprise: A Semantic Web and Linked Data Primer.” In a section on the future of the Semantic Web, I said:


“Semantic technology vendors … are beginning to learn that their customers don’t want to hear about ontologies, inference rules, and other nuances of the semantic technologies underlying their products. … As a result of this dynamic, semantic technologies are being absorbed into the platform and hidden from users. This trend will continue as more and more platforms add semantic capabilities and adopt semantic standards.”


When published, this document was received with the deafening sound of … crickets. I shouldn’t have been surprised. Unless you are an information science geek, it can be hard to relate to this stuff. One vendor recently reported that, during a meeting with a potential customer, “the client put a hat in the middle of the table and said that anyone who used the word ‘ontology’ would have to put a dollar into it.” Google understands this and is using it to its advantage, and potentially to our disadvantage.

The Knowledge Graph is not on a par with PageRank and the rest of the Google secret sauce. While they have certainly invested a lot of resources and brain sweat in the Knowledge Graph, Google didn’t invent Linked Data and certainly didn’t create the vast majority of the information they are exposing. Linked Open Data is a public resource created by countless hours of effort from anonymous stewards. Acknowledging that contribution would not only be respectful, it would incentivize the creation of even more Linked Data, which would in turn make the Knowledge Graph even more powerful and valuable. The potential for a virtuous cycle is being missed here. Google has done a tremendous service in exposing some Linked Data to the end user. They could do a much greater service if they exposed it as a SPARQL endpoint. Somehow I don’t expect it to show up in the Google API anytime soon.

I’ve expressed concern over the privatization of the semantic web before. I don’t think this is quite the same thing. Maybe this is more of a “don’t show us how the sausage is made” dynamic. It’s hard to blame Google for letting people assume the Knowledge Graph is more of their magic. But if IT leaders and practitioners continue to think they can’t do this stuff because they aren’t Google, opportunities are going to be missed. In fact, they already are. I find it ironic that one of the objections raised to the Semantic Web is that it all sounds too much like science fiction. In his blog post, Singhal hails the Knowledge Graph as Google’s first baby step toward the Star Trek computer. If we don’t start to step up, when that computer eventually materializes it will be ad-driven. We need to get more comfortable with semantic technologies and with bringing them into the enterprise. The more Linked Open Data available, the more powerful the graph becomes for all of us. It’s time to get more involved or, as Jean-Luc Picard might say, “Engage!”


Category: Knowledge Management search Semantic Web

Mobile Devices are the Convenience Stores of the Web

by Darin Stewart  |  May 14, 2012  |  1 Comment

Adobe conducted a study last year that found customers visiting a website from a tablet are more likely to make a purchase than those visiting from a desktop. They also spend more per purchase, as much as 21% more. This trend has not gone unnoticed by retailers and other companies looking for ways to expand their market reach. Improving mobile presence, including location-based services, personalization and tracking capabilities will be the core focus of most online retailers over the next 18 months.

Despite this commitment, most companies struggle to engage a mobile audience effectively. They find that their homepage, carefully crafted for a desktop browser, drives away mobile visitors. Layering on the out-of-the-box mobile profile provided by the WCM system only makes matters worse. The reason is that a “shrink to fit” approach to the mobile web is not a viable solution. Mobile users have different goals and behaviors than their desktop-bound equivalents. This is true even if the mobile visitor and the desktop visitor are the same person accessing your website in a different context at a different time. For example, a desktop visitor spends more time on the homepage, and bounces from it less often, than on other content and pages across the site. For mobile visitors the trend is reversed, with longer visits to content pages and little attention given to the homepage.

This is largely a matter of when mobile users come to your website, such as while standing in line at the grocery store. 80% of mobile web access happens during a user’s miscellaneous downtime. People pull out their mobile devices when they have just a little time to kill. Whether they are waiting for a meeting to start (or under the table after the meeting has started), waiting for their kids after school or standing in line for a bank teller, that is when they are most likely to go to the mobile web. This leads to a convenience-store approach to web surfing.

When a visitor is comfortable in their office or den, with a luxurious screen and full-sized keyboard in front of them, they are likely to spend some quality time on your website, as they would perusing the aisles of a full-service, brick-and-mortar retail store. When they only have a minute with a tiny display and cramped keypad, they want to dash in, get what they need and move on. A mobile web presence needs to fulfill this need. At the same time, it should encourage the user to return to your flagship store, the desktop-oriented website, when they have more time. 59% of visitors to a mobile website later follow up with a visit to the main website on a PC. When they do, you should be able to greet them at the door by name.

Content Mobility is about more than simply ensuring that your content is readable on a two-inch screen. It is about precision content delivery. When users are interacting with your content for shorter periods of time, with a smaller display and more distractions, it is critical to make the most of that limited window of opportunity. This requires more than optimizing content for display on a small screen. It requires leveraging the unique affordances of a mobile device and creating a cohesive, focused, conversational experience for the user that crosses channels, platforms and sessions.
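As a toy illustration of the difference between “shrink to fit” and precision delivery, the sketch below chooses a content variant based on device class and visit context rather than reflowing the same desktop page. Everything in it (the content store, the device classification, the topic names) is hypothetical.

    # Hypothetical content store: each topic has a focused answer and a full treatment.
    CONTENT = {
        "store_hours": {
            "focused": "Open until 9 p.m. today. Nearest store: 2.1 miles away.",
            "full": "Complete hours, holiday schedule, directions and in-store services.",
        },
    }

    def select_variant(topic: str, device_class: str, short_session: bool) -> str:
        """Return the content variant suited to the visitor's situation."""
        item = CONTENT[topic]
        # A mobile visitor with a minute to kill gets the direct answer first;
        # a desktop visitor with time to browse gets the full experience.
        if device_class == "mobile" or short_session:
            return item["focused"]
        return item["full"]

    print(select_variant("store_hours", "mobile", short_session=True))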


Category: Mobile web content management

Leveraging Expertise Beyond The Enterprise

by Darin Stewart  |  May 8, 2012  |  Comments Off

I recently received an invitation to attend the VIVO Implementation Fest being held this month in Boulder, Colorado. VIVO is an open-source expertise discovery platform for the semantic web. It enables the discovery of research and scholarship across disciplinary and administrative boundaries, including across institutions. It does this through interlinked public profiles of people and other research-related information. The goal is to create a national network of scientists. In the three years since its creation, VIVO has gone a long way toward doing so, and the innovations have followed. I think industry could learn a lot from this effort.
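You don’t need VIVO’s full ontology to experiment with the underlying idea. Below is a minimal sketch of an interlinked expertise profile in Python, using the rdflib library and generic FOAF terms; the example.com registry, the person and the interest are all invented for illustration, and VIVO itself uses its own, much richer vocabulary.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")
    EXPERTS = Namespace("http://example.com/experts/")   # hypothetical internal registry

    g = Graph()
    person = EXPERTS.jdoe
    g.add((person, RDF.type, FOAF.Person))
    g.add((person, FOAF.name, Literal("Jane Doe")))
    # Point the interest at a public URI so other institutions' graphs can link to the same concept.
    g.add((person, FOAF.topic_interest,
           URIRef("http://dbpedia.org/resource/Mass_spectrometry")))

    print(g.serialize(format="turtle"))

Publish enough of these profiles, inside or outside the firewall, and expertise discovery becomes a query rather than a hallway conversation.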

Science is complicated; multi-disciplinary science even more so. No single company, no matter how large, is going to have all the expertise and resources necessary to exploit every innovation or discover the next blockbuster product. Pharmaceutical giant Merck realized this over a decade ago in their 2000 annual report:

“Merck accounts for about 1 percent of the biomedical research in the world. To tap into the remaining 99 percent, we must actively reach out to universities, research institutions and companies worldwide to bring the best of technology and potential products into Merck. The cascade of knowledge…is far too complex for any one company to handle alone.”

This is as true for manufacturing and merchandising as it is for medicine. Innovation in all industries still tends to follow the traditional “man of genius” model, which has held sway from Edison’s Menlo Park to AT&T’s Bell Labs. We hire smart people (hopefully a few of them are really smart), give them resources and hope for the best. We stick to a vertical integration model in which internal research and development activities lead to internally developed products that are then distributed by our own company through our own channels. This model is no longer competitive. The world has moved from an environment of knowledge scarcity to one of knowledge abundance, but most of that knowledge doesn’t reside in our own firm. We go to great lengths to hire the best and the brightest, but inevitably, as Sun Microsystems cofounder Bill Joy is fond of pointing out, the smartest and most talented people still work for someone else. That doesn’t mean they can’t also work for you.

Universities and academic research centers exchange faculty and share facilities all the time. This quickly and inexpensively expands the capabilities of a team and institution for the situation at hand, enabling them to undertake projects that would otherwise be out of reach. The exchange of knowledge enriches all participating institutions long after the project ends and the team disbands. The social networks established and strengthened by the collaboration tend to outlive the project that facilitated their creation. Why should this dynamic be restricted to the ivory tower of academia? Companies make a lot of noise and spend a lot of effort trying to foster collaboration among their own teams and departments, but that is usually where it ends. We rarely look beyond our own staff, resources and business models to find non-obvious opportunity.

It shouldn’t be this way. The building blocks for expertise discovery and exchange already exist in most organizations. They just aren’t being leveraged. I discussed this in a previous post, “Knowing What You Know: Expertise Discovery and Management,” nearly a year ago. In the interim, the tools have improved, the opportunities have grown and the available relevant data has exploded. We should now take the next step. In addition to exchanging information, we should start exchanging bodies. We’ve embraced this approach for decades in the form of consultants and professional services. We get their expertise when we need it and they get our billable hours when they want it. The side effect of this arrangement is that we pay consultants to increase their own knowledge and competencies. We can get a bit of that too if we are willing to pay extra for “knowledge transfer.”

With a bit more openness and coordination, it is possible to move toward a less mercenary footing. Consultancies will always be useful, but lending staff to and borrowing staff from partner firms results in a much richer collaboration and deeper knowledge transfer. Billable hours can be replaced with simple reciprocity or, in some cases, by jointly owned intellectual property. The terms of the exchange will of course be negotiated and tailored to circumstances, but the end result is that you may gain access to a person you could really use but can’t hire. In return, that person gains new experience, new context and the all-important social ties that form the bedrock of professional networks. The future of business is collaboration, not just between departments but between companies. Documenting the expertise that exists in your organization and the expertise that you need is a good first step. Publishing that information, or a circumscribed portion of it, in the manner of VIVO and its compatriots is a good next step. The smartest people may work for someone else, but that doesn’t mean they can’t help out once in a while.


Category: Collaboration Knowledge Management Semantic Web

The Real Problem with ECM (hint: it isn’t the platform)

by Darin Stewart  |  May 4, 2012  |  3 Comments

I talk daily to companies from a broad range of industries. These organizations run the whole gamut of company sizes, from small boutique operations to huge distributed enterprises. Even with that diversity, everyone wants to talk about content management. That’s probably for two reasons. First, regardless of your industry or company size, you depend on information and content to do business. To a greater or lesser extent, every organization is a content-intensive enterprise. The second reason is that everyone thinks their content management practice is broken. They’re usually right. Content has become so voluminous, and so diverse in its forms and in how it comes into the enterprise, that pretty much every organization experiences some level of content-related dysfunction. This is a big problem, which is why they call Gartner.

Knowledge and information are among the most valuable assets any organization possesses. Most of those assets (Gartner pegs it at 80-90%) exist in the form of unstructured content, such as documents, rich media and web assets. Companies sense that there is untapped value to be had from those resources. Intelligence and insight are trapped in forgotten and inaccessible documents. Money is lost due to inefficiencies in content creation and use. Without consistent and reliable access to these assets it is difficult for an organization to function efficiently and impossible to perform optimally. Companies want to get control of their content, but don’t know how to go about doing so.

So we blame the platform. Actually, we blame the IT guys and then they blame the platform. We start looking at all the moving parts: the search engine, the repository (or, more likely, repositories; they tend to proliferate like mushrooms) and the authoring tools. But the platform is only part of the problem, and often isn’t to blame at all. The real problem is primarily the content itself and the processes and practices surrounding its lifecycle. That is what ECM is really about, not the technology.

At the root of this issue is the fact that most enterprises simply don’t know what unstructured content they have. Interestingly, they often do have a handle on structured content. For example, they usually know how many customer databases they have and which systems maintain them. However, most information managers would be hard pressed to provide definitive answers to basic questions about their unstructured resources. Where is a particular piece of content? Who owns it? What version is current? How long should we keep it? Answers to such rudimentary questions remain out of reach for most organizations. It is easy to say that there is just too much content to be managed, but this misses the point. If you don’t know what you have, you cannot say you have too much. It is entirely possible you have too little content, or too much of the wrong sort. The real issue is that most organizations have too much unmanaged content.

Unstructured content tends to grow in an uncontrolled, ungoverned manner. Users create, distribute and store information according to their own needs. When they cannot find information, they will recreate it. This leads to the ongoing proliferation of redundant and often conflicting content. Organizations in general, and IT departments in particular, do not know how to arrest and reverse the situation. The most common response when leadership complains is simply to provide more storage, thus kicking the can down the road. The content problem itself is never directly addressed.
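Answering “what do we have?” does not require a platform project to begin. A first pass can be as modest as the Python sketch below, which crawls a file share and builds a simple inventory of what is actually out there. The share path is a placeholder, and ownership, versions and retention rules still have to come from repository metadata and the people who use the content; this only captures what a file-system walk can see.

    import csv
    import os
    import time

    ROOT = r"\\fileserver\shared"   # placeholder for each share or repository export to inventory

    with open("content_inventory.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "size_bytes", "last_modified", "extension"])
        for dirpath, _dirs, files in os.walk(ROOT):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    info = os.stat(path)
                except OSError:
                    continue  # skip unreadable files rather than abort the crawl
                writer.writerow([
                    path,
                    info.st_size,
                    time.strftime("%Y-%m-%d", time.localtime(info.st_mtime)),
                    os.path.splitext(name)[1].lower(),
                ])

Even a crude inventory like this turns “we have too much content” into specific questions about duplicates, stale documents and orphaned shares, which is where governance can actually start.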

I recently took a look at this problem of creating an effective ECM environment and boiled the process down into six steps. (These are elaborated in the Gartner Solution Path “Creating an Effective ECM Environment”).

  1. Review Content Lifecycle and Define Requirements.
  2. Determine Appropriate Form of Content Management.
  3. Evaluate Current State of Your Content.
  4. Establish ECM Governance.
  5. Establish Content Management Environment.
  6. Perform Ongoing Content Hygiene and Enhancement.


Each of these steps is applicable to all ECM environments. The extent to which they are implemented will depend on resources and circumstances. The most important thing to remember is that you don’t create an effective ECM environment overnight and you don’t do it all at once. Too many companies start with a vague sense that things aren’t working and try to boil the ocean. I’ve seen a lot of rip-and-replace exercises triggered by a single, highly visible (and often unrelated) incident, ranging from a failed discovery request to a CEO with an iPad. Fire drills and knee-jerk reactions are never the foundation of a solid content strategy. You have to know in advance what you are trying to accomplish and what the desired end state should look like. Once you have that vision articulated, stick to your roadmap and you’ll get there one step at a time.


Category: Enterprise Content Management

Cloud Content Management Is Not A Cure-All

by Darin Stewart  |  May 2, 2012  |  1 Comment

I am not yet fully converted to the gospel of Cloud Content Management. The data center is undeniably in decline, but it’s not quite dead yet. Despite the evangelism of cloud-oriented vendors, moving things off-premises is not always a good idea. This is especially true for content management.  Don’t get me wrong. In many cases moving to the cloud is a slam dunk.  Web Content Management is a good example.

Serving up basic web content from an on-premise data center is more or less indefensible these days. Devoting scarce resources to the care and feeding of a WCM platform and the supporting infrastructure makes no sense when the content is intended for a broad public audience beyond the firewall. A SaaS WCM solution (as opposed to simply moving your platform to some outsourced hosting environment) can handle the traffic spikes, emergency content changes and platform upgrades that would otherwise consume your staff and frustrate your users. Remember, though, “basic web content” is the operative phrase here. This rosy picture starts to break down a bit once your web offerings move beyond brochure-ware. Tight integration with on-premise backend systems can still present a challenge for cloud solutions. For example, driving web personalization off a CRM system can create a powerful web experience. Integrating a legacy CRM platform entrenched in the enterprise infrastructure with a WCM personalization engine floating around the cloud can be a nightmare. Sometimes our old plumbing gets dragged along with us when we move outside our own walls.

Then there is the matter of security. Yes, technically this is a non-issue.  If you layer on the right security protocols, ensure the content is encrypted both in motion and at rest and manage access appropriately, keeping your content in the cloud isn’t a whole lot different from keeping it in your own data center down the street.  The big difference is that in your own data center, you’re the only one with the keys.  The Patriot Act empowers the U.S. government to compel any organization to turn over any and all data they may possess, including yours, without informing the data owner that they have done so.  For many non-U.S. companies, this makes the cloud a non-starter. If your content management provider also provides your encryption, your data is effectively wide open to Uncle Sam. Companies like CipherCloud have seized on this as a selling point for their cloud-based encryption services.  Your data may not be in your own data center, but at least you are the only one who can decrypt it.  The government can still demand your data, but at least you’ll know about it. This is more of a concern with documents than public web content, but it is a legitimate concern.
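A minimal sketch of the keep-your-own-keys idea in Python, using the cryptography package (assumed to be installed; the filename is hypothetical): content is encrypted before it ever leaves the building, so the provider stores only ciphertext it cannot read. A real deployment would add key management, rotation and envelope encryption on top of this.

    from cryptography.fernet import Fernet

    # Generate the key on-premises and keep it there; the cloud provider never sees it.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    with open("contract.docx", "rb") as f:      # hypothetical document to protect
        ciphertext = cipher.encrypt(f.read())

    # `ciphertext` is what gets uploaded through the content service's API.
    # Only the holder of `key` can turn it back into the original document.
    plaintext = cipher.decrypt(ciphertext)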

The list of concerns, exceptions and corner cases goes on and on. The point is that while the issues of security and privacy can be addressed in the cloud as well as they can on-premises, the cost and complexity of the solution can potentially outweigh the benefits and savings. What is emerging is a multi-tiered hybrid approach to content management that leverages both the cloud and the data center.  Content is being segregated into two tiers: critical and collaborative.  These broad categories provide a reasonable principle of division for the current state of cloud content management.

Critical content presents either high value or high risk to the enterprise, and in many cases both. It tends to be stable and finalized in its form. Depending on your industry, you may be legally required to declare certain content as critical and therefore subject to compliance requirements and records management. In other cases, content may be critical simply because of its role in the enterprise. Collaborative content tends to be of “lesser” value and represents a lower risk to the enterprise than critical content. It also tends to be much more volatile. Collaborative content is meant to be shared. This can be while the content is being developed and reviewed or as the end result of that process. At some point in its lifecycle, collaborative content may become critical content. When this happens there is a well-defined point, process and procedure for the transformation. Or at least there should be. That hand-off can also provide a very nice interface between the data center and the cloud.

The paper-free office never materialized. The data-center-free infrastructure has yet to materialize. It may happen, but we are not there yet, especially in the context of content management. At this point, it is still necessary to look very closely at what you need to do with your content and assess the costs and risks associated with both cloud and on-premise solutions. Sometimes it makes sense to move critical content into the cloud. Sometimes collaborative content needs to stay behind the firewall. Neither approach meets all needs under all circumstances in an effective manner. For now, it’s best to have a foot firmly planted in both worlds and to keep your options open.


Category: Collaboration Enterprise Content Management web content management

Musings on eBook Publishing

by Darin Stewart  |  February 16, 2012  |  3 Comments

I own a lot of books. Our family library (an enclosed bay of our garage) is lined floor to ceiling with shelves sagging under the weight of a few thousand cloth-bound volumes. I’m also an avid fan of electronic readers. At one time or another, I have owned just about every eReader ever produced. Last night, I purchased a new Barnes & Noble Nook Simple Touch. We’ve come a long way from Sony’s original DD-1 Electronic Book Player with its ASCII texts on mini-CD, but not as far as I’d like. As I was loading eBooks onto my new device I had to explain (read: justify… again) to my wife why I often buy both the electronic and physical versions of any given title. The short answer is, “I need both.”

I prefer the reading experience on an eReader. When lying in bed, a 1,000-page novel can be a bit cumbersome, and carrying a dozen books through airport security is not fun. My Nook weighs about as much as a hearty sandwich, holds hundreds of titles and fits in my pocket. This is great for casual reading and traveling, but eReaders still fall flat when it comes to research and reference. When I need to find a specific passage or some annotation I made while reading (I am a compulsive margin scribbler), I’ll take hardcopy every time. It just isn’t possible to “flip through” an eBook. So I buy the electronic copy to read and the physical copy to reference. (Yeah, my wife doesn’t buy the argument either.)

So why doesn’t the publishing industry offer me a package deal? Sell me the physical book and throw in an access code that lets me download the electronic version. I would be much happier paying the full retail price of the book, maybe even a bit of a premium, if I didn’t have to make a separate purchase to take it with me on a plane. This is already the model with many Blu-ray discs: buy the physical disc and get a code to download the movie to your iPad. Not only would this make me (and my wife) much happier and less concerned about discount prices, it could also bolster brick-and-mortar bookstores. I’d be much more likely to make the trip to a physical store to peruse titles if I knew that any book I buy would also be available on my Nook or iPad (I’ve given up on the Kindle ever being open). Don’t make me choose between books and bits.

While there, I’d probably buy a biscotti and a cappuccino as well. Hmm… lots of cross-selling potential…


Category: Uncategorized