by Doug Laney | December 16, 2014
Going into the 2014 holiday season, North Pole Inc. (ticker: XMAS), the leading global distributor of presents to good girls and boys, called upon Gartner to assess and advise on its information-related needs and opportunities.
STAMFORD, Conn., December 16, 2014—
Over the past quarter, Gartner was again given exclusive access to the operations and information systems of North Pole Inc. (NPI), to help it set a strategic path for improved information management and analytic capabilities. For nearly two centuries NPI has struggled to support its growing operation and respond proactively to competitive pressures through the use of emerging technologies and best practices.
“We do a jolly good job year after year,” claims NPI’s Founder and CEO, Santa Claus, “but I have really put the pressure on my IT management team to achieve better efficiencies and creatively use information to innovate.”
As a long-time Gartner client, NPI has read about how other enterprises have selectively adopted information technologies, embraced new architectures and approaches, and acquired the necessary skills. “Now it’s our turn,” exclaimed NPI’s CIO Frederick Ellefsen. “We’ve now fully embraced ‘big data’ and the significant opportunities indicated by the confluence of mobile, cloud, social and information—Gartner’s Nexus of Forces— along with digital business best practices, so we didn’t want to be left out in the cold, so to speak.”
“This is a unique opportunity for Gartner to be exposed to the inner workings of one of the world’s most secretive yet successful enterprises,” said Peter Sondergaard, Gartner SVP Research. “We were pleased to be able to offer our services and insights to NPI.”
Gartner’s review of NPI’s systems revealed an operation not dissimilar to those of other distributors and some major retailers, but on a much larger scale. However, due to NPI’s unique legal status, it has no finance department, nor does it have sales or marketing functions.
Figure 1 – North Pole Inc. Operations
Santa’s Systems Portfolio
Key systems in NPI’s portfolio manage orders, inventory, quality testing, elfin performance and activities, along with tracking human behavior, correspondence, wish lists, contact information, and environmental impact data. To achieve NPI’s objective of embracing the concept of “infonomics” (i.e. managing and leveraging information as an actual enterprise asset), Gartner first completed an inventory of NPI’s extensive wealth of information assets:
- Toy Order Management System (“Tommy”) – Toy orders and order tracking of 5.5 billion orders; supplier, second-level supply chain and parts-level visibility of 4.6 million suppliers
- Toy Inventory Management System (“Timmy”) – Receiving and inventory data on 6.9 billion toys
- Toy Assurance Management System (“Tammy”) – Test results and repairs/returns data on all toys received (average of three safety and quality tests per toy) totaling 21 billion tests annually
- Content system for Relations, Inbound Gift Request and Letters (CRINGLE) – Processing, scanning, content extraction and analysis of 6.5 million letters, emails and calls, and recording 19.5 million gifts requested
- Naughty or Nice Information Tracking System (NITS) – Processing and tagging of 16.8 trillion person-to-person interactions throughout the year
- Scheduling, Logistics and Expedited Distribution System (SLEDS) — Handling of 500,000 appearance requests and 280,000 actual mall and other appearances; the operation of 7700 gift express hubs and the logistics and maintenance of the half-million sleighs servicing them; and night-of-delivery (NOD) routing
- Kontact Information & Directory System (KIDS) – Basic contact, rooftop and chimney configuration information on 2.3 billion gift recipients and their 880 million households
- Helper Organization, Operations and Orchestration (HO-HO-HO) – Scheduling and coordination of elf workforce job responsibilities and activities; also coordinates elf housing and food service
- Job Information, Guidance, Learning & Elf Management System (JINGLES) – General elf resource (ER) system for tracking the performance, benefits and training activities of 230 million elves, along with ongoing recruiting activities
- Study for Negating the Outcome of Warming (SNOW) – A longitudinal study as part of NPI’s sustainability efforts. Millions of climate, atmospheric, emissions, deforestation, and animal and human population data points are collected annually to help NPI achieve its target of carbon neutrality by 2020
[See bottom of article for North Pole Inc. Core Data Requirements and Database Sizing]
Data Quality as Pure as the Driven Snow
Due to impeccable information governance and quality processes, a world-class master data management (MDM) program, an impressive team of data elves, robust data quality technology, and unwavering executive-level commitment and involvement, NPI’s information assets show no signs of significant completeness, accuracy, integrity or other quality issues according to sample data profiling using Gartner’s data quality assessment toolkit.
Analytic Opportunities Beyond Just “Naughty or Nice”
From a business intelligence perspective, Gartner found that NPI is lagging others in the shipping and distribution industry. Its enterprise data warehouse, called “Chimneys”, is really a collection of stovepipe query and reporting systems, some still relying on first-generation BI tools like Red Brick. Gartner recommended evolving to a logical data warehouse architecture for most low-frequency queries to enable more insightful cross-functional, federated analytics.
Some predictive analytics is done to select appropriate toys based on NITS behavior modeling, demographics and prior-year presents. Gartner recommended that this system be enhanced to account for factors such as sibling response, damage/loss propensity, and social content analysis. NPI, meanwhile, is working on mobile-enabling Santa in the field during mall appearances so he can advise on toy availability and alternatives (as necessary) in real time while a child is on his lap. This system is expected to be in place for the 2015 holiday season. Gartner analysts pointed out that this new capability would also require enhancing the “Tommy” toy order management system to capture full catalog and supply chain information from its suppliers; today NPI maintains this tracking data only on actual orders.
Although NPI does a great job of social media participation, including a multi-channel Twitter strategy (i.e. @santa, @officialsanta, @santaclaus, @santa_claus, etc.), Gartner recommended that NPI begin tapping and analyzing social media streams. Social sentiment analysis will help NPI identify emerging “hot toys” for pre-ordering and spot early warning signals of quality-related issues. NPI is also considering integrating global economic data to better focus its gift giving on those in the greatest need. However, NPI, like many organizations, is struggling to hire or train a team of data scientists. “Advanced analytics just isn’t a core elfin competency,” lamented Mr. Ellefsen. “We’re definitely going to have to fly up outside talent for a period of time.”
Operational Efficiency at Times Glacial
Gartner also advised NPI on how to consolidate its ordering process and information. Since the late 1970s, NPI has been consolidating inbound shipments using its gift express hubs scattered secretly in forests around the world. However, it still orders and inventories gifts from suppliers one by one. “Our ‘Tommy’ system is definitely outmoded,” admitted Mr. Ellefsen. With sophisticated demand analysis, order pattern matching and smart RFID-enabled inventory management, Gartner believes NPI could save 70-80% of its current TOM processing expense.
No More Cookie Cutter Approaches to Data Management
Regarding the human behavior tracking system (NITS), Gartner suggested that in today’s world both online interactions (text, email, social media) and human-to-animal interactions should also be captured and tagged as “naughty” or “nice”, and that a broader 5-point Likert scale or automated video/audio analysis might improve measurement precision. NPI is understandably concerned about the size and performance of this already 168-terabyte system, but will be looking into HDFS or other NoSQL alternatives to support expanded tracking ideas. “For obvious reasons, we got away from inverted tree data management structures years ago,” Mr. Ellefsen chuckled.
Gartner and NPI also discussed a long-term cloud strategy. But with over 200 terabytes of online operational data, stringent personally identifiable information (PII) privacy and security requirements, and spotty connectivity at its arctic headquarters, Gartner recommended that, at this time, NPI consider hosted data solutions only for its 7700 gift express hubs.
A Big Sack of New Ideas for Big Data
During the “Workshop at the Workshop” session as it was called, Gartner helped NPI conceive many innovative ways to use information, including:
- selecting toys that would encourage naughtier kids to be nicer
- putting de-identified data online for suppliers to analyze
- real-time NOD (night-of-delivery) routing and navigation via integrated weather, GPS and air traffic data to optimize Santa’s 10,200 takeoffs, landings and deliveries per second
- 3D printers for custom toy fabrication to reduce sourcing and inventory expenses
- autonomous drone-technology sleighs and robotic Santas to further optimize toy delivery and keep up with growing demand
- developing an “Internet of Toys” capability to enable better collaboration among kids and self-reporting toy diagnostics
- “quantified elf” capabilities for enhanced worker performance
- launching northpoleinc.com to place holiday wishes, maintain wish lists, check real-time naughty-or-nice indices, receive gift notifications, etc.
- launching a mobile app packed with additional functions such as sleigh tracking, product scanning, and even real-time milk & cookie delivery so Santa receives the freshest snacks (with gluten-free, nut-free, sugar-free, and even kosher and halal options for mixed-religion families)
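For the skeptics, the 10,200-per-second figure behind the NOD routing idea is roughly what simple arithmetic yields, assuming the 880 million households tracked in KIDS are all served within a single 24-hour delivery window (a simplification; chasing time zones would stretch the window and lower the rate):

```python
# Back-of-envelope check on Santa's night-of-delivery (NOD) stop rate.
households = 880_000_000        # households tracked in KIDS
window_seconds = 24 * 60 * 60   # assume one 24-hour delivery window

stops_per_second = households / window_seconds
print(f"{stops_per_second:,.0f} household stops per second")  # ≈ 10,185
```

At roughly 10,185 stops per second, the article’s 10,200 figure (which also counts takeoffs and landings) is well within rounding distance.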
However, the entire NPI management team was quick to quash the subject of transitioning to an outsourced, mobile-enabled parental workforce. “Elves have magical capabilities beyond those of most humans,” Mr. Claus interrupted, “not to mention a tremendously strong union.”
For those interested in learning more about how NPI and other organizations are innovating with information, be sure to attend one of Gartner’s 2015 Analytics and Information Management Summits. It is rumored that NPI representatives will be there for networking with attendees.
Doug Laney, VP Research, Information Innovation and Strategy
Gartner, Inc. (NYSE: IT) is the world’s leading information technology research and advisory company. We deliver the technology-related insight necessary for our clients to make the right decisions, every day. From CIOs and senior IT leaders in corporations and government agencies, to business leaders in high-tech and telecom enterprises and professional services firms, to technology investors, we are the valuable partner to clients in over 9,100 distinct enterprises worldwide. Through the resources of Gartner Research, Gartner Executive Programs, Gartner Consulting and Gartner Events, we work with every client to research, analyze and interpret the business of IT within the context of their individual role. Founded in 1979, Gartner is headquartered in Stamford, Connecticut, USA, and has 6,600 associates, including more than 1,500 research analysts and consultants, and clients in 85 countries. For more information, email firstname.lastname@example.org or visit gartner.com.
North Pole Inc. Core Data Requirements and Database Sizing*
* For non-believers, these data sizings were derived from various sources: Population data used to determine the number of worldwide Christians (2.3B) and Christian households (884M) is from the US Census, the Catholic Education Resource Center, the Christian Post, and the Global Population Clock. The average number of presents from Santa (3, excluding stocking stuffers) is from Babycenter.com and CircleofMoms.com. The number of person-to-person interactions (20/day) for calculating the volume of “naughty/nice” data comes from the Tilted Forum Project on Humanity, Sexuality and Philosophy. The amount of correspondence Santa receives is from a Wired Magazine article (500K letters annually) and extrapolated to include emails and worldwide correspondence. The number of toy makers (1547 in US) is from toydirectory.com and is extrapolated to include worldwide toy makers, suppliers and parts. The number of shopping malls (105,000 in US) is from the International Council of Shopping Centers. And package delivery, transportation and personnel numbers are extrapolated from public FedEx data.
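The sizing arithmetic in the footnote can be reproduced in a few lines (a sketch; the inputs for recipients, presents per child, tests per toy and daily interactions are taken directly from the article, and the ~20.7 billion tests round to the ~21 billion Tammy handles annually):

```python
# Reproduce the database-sizing arithmetic from the footnote.
recipients = 2_300_000_000   # worldwide gift recipients (KIDS)
presents_per_child = 3       # average presents from Santa
tests_per_toy = 3            # safety/quality tests per toy (Tammy)
interactions_per_day = 20    # person-to-person interactions (NITS)

toys = recipients * presents_per_child                  # Timmy's inventory
tests = toys * tests_per_toy                            # annual Tammy tests
interactions = recipients * interactions_per_day * 365  # annual NITS events

print(f"{toys/1e9:.1f}B toys, {tests/1e9:.1f}B tests, "
      f"{interactions/1e12:.1f}T interactions")
# → 6.9B toys, 20.7B tests, 16.8T interactions
```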
by Doug Laney | December 2, 2014
As we at Gartner plan our research agendas for 2015 (and as you set your 2015 and beyond information and analytics strategies) it’s good to reflect on what we thought and wrote over the past year. So without further ado, here it is:
In Agenda Overview for Information Innovation and Governance, 2014 I shared that our research would feature what happens as Big Data becomes a mainstream concept and how strategies and organizations will need to evolve. First and foremost, information needs to be treated as an actual business asset. The key issues for IT and business leaders that this research agenda focused on during the year were:
- What are the keys to improved information leadership and vision?
- How should organizations strategize, plan and govern traditional and new forms of information such as big data?
- What is the range of sources and uses of information that businesses should consider?
- How can organizations use established economic principles to measure and improve the value of their information assets and the investments in them?
In How Organizations Can Monetize Customer Data Olive Huang and I posited that customer data has discernible monetary value and suggested several business models and approaches to monetizing it, both directly and indirectly. And in the companion piece, Improving the Value of Customer Data Through Applied Infonomics, we showed how to apply our seven principles of infonomics to manage and measure the value of information as if it were a balance sheet asset, leading to its improved realized value.
With Frank Buytendijk in Information 2020: Beyond Big Data we highlighted research and ideas on how to address coming organizational conflicts, grab hold of the exciting promise of information, and fortify against the real fears of information misuse. In this piece we include a table that shows nine distinct ways information management is already, or will be, changing in your organization, particularly due to Big Data becoming commonplace.
In Customer Analytics and the Art of the Possible With Big Data Jenny Sussin and I continued to pound home the notion that information of almost any variety can be a boon for sales, marketing and customer service functions, and therefore should be treated as an essential corporate asset. And we shared that in 2014 business and IT professionals now deem new product and service innovation a better use of Big Data than even marketing and sales growth. Still, we included several real-world stories about how organizations have radically transformed sales and marketing related business functions through applied information and advanced analytics.
Our crowd favorite, Cool Vendors in Information Innovation, 2014, discussed how there are now millions of online open data sets available as a fuel additive for business performance and innovation, and featured a few vendors taking a leadership role in innovating with open data, including:
- HG Data that pulls data from the web to discover what kinds of IT products a particular organization is using
- Prevedere that mashes an organization’s own data with up to one million exogenous data sets to discover predictive indicators
- ProgrammableWeb that aggregates available APIs and online data sources into a searchable directory
- SkyFoundry that helps organizations manage and generate value from internet-of-things (IoT) data
Of course data science is seen as the primary means to go beyond basic BI to achieve diagnostic, predictive and prescriptive analytics. But skills are in terribly short supply and will continue to be so into the foreseeable future. So Alexander Linden and I promoted a none-too-radical idea: crowdsourcing. In Four Steps to Effective Crowdsourcing of Data Science Projects we describe how to leverage the power of community and the Internet effectively for targeted advanced analytic needs.
Andrew White and I decided to explore economic theory and practice in the context of information assets. In our piece Increase the Return on Your Information Investments With the Information Yield Curve we adapted the traditional yield curve concept to the discipline of enterprise information management (EIM). We showed how and why the information rate of return (IRR) accelerates then flattens as an organization evolves from immature to mature to optimized EIM, at which point the yield curve becomes asymptotic to both the current state-of-the-art technology and the universe of available data. We also identified dozens of forces pushing downward and upward on the curve, and how to discourage and encourage them respectively.
In a work of extreme collaboration, many of our top analytics and information management & strategy analysts came together to produce Answering Big Data’s 10 Biggest Vision and Strategy Questions, in which we addressed the top questions we receive from clients on the topic of Big Data, including:
- How to communicate the value and economics of Big Data projects
- Understanding the many uses and sources of Big Data, internal and external
- Reconsidering information leadership, organization and skills
- Identifying key Big Data strategy, planning and governance
- Leveraging social, mobile and cloud as and when appropriate
And finally, in our 2015 Predicts pieces related to information and analytics (i.e. Predicts 2015: Big Data Challenges Move From Technology to the Organization; Predicts 2015: Information Governance and MDM Will Be Foundational to Improving Digital Culture; Predicts 2015: The Intersection of Information Innovation and Business Digitalization; Predicts 2015: Power Shift in Business Intelligence and Analytics Will Fuel Disruption) I personally offered an analysis of and recommendations for the following strategic planning assumptions:
- Through 2017, fewer than half of lagging organizations will have made cultural or business model adjustments sufficient to benefit from big data.
- Through 2016, less than 10% of self-service BI initiatives will be governed sufficiently to prevent inconsistencies that adversely affect the business.
- By 2017, 50% of information governance initiatives will have incorporated the concept of information advocacy, to ensure they are value-driven.
- By 2020, information will be used to reinvent, digitalize or eliminate 80% of business processes and products from a decade earlier.
During the year I also researched and advised clients on using and publishing open data, and on the emerging role of the chief data officer (CDO), which I presented at Gartner Symposium, our BI/Analytics and Information Management Summits, and CIO Summits. Additionally, I continued compiling our growing library of hundreds of real-world examples of how organizations around the world and in every industry are using information and analytics in high-value, transformative ways.
Finally, I published infonomics-related pieces in Forbes (The Hidden Tax Advantage of Monetizing Your Data and The Hidden Shareholder Boost From Information Assets) and in this blog (Twitter’s Secret Nest Egg is in Plain Sight and To Twitter, You’re Worth $101.70).
As always, thanks for reading my research, articles, blog and tweets! I hope you find them interesting, informative, and especially, inspirational.
Follow Doug on Twitter: @Doug_Laney
by Doug Laney | September 5, 2014
The countdown to Gartner Symposium is on! So for each day over the next month in this space you will see a countdown of my favorite ways actual organizations are transforming business processes and inventing new business models and offerings using available information assets, big data and analytics. Don’t just be impressed, be inspired! How can you adapt and adopt these ideas for your own business?
#1 Here’s a great example of sophisticated text analytics outperforming human curators:
Thanks for following this compendium of examples of the art of the possible with information. We have compiled several hundred such examples to help inspire and motivate our clients to do bigger and better things with data than just report on it.
Follow me on Twitter @doug_laney and learn more about Gartner’s Information Innovation research initiative. And if you would like your company’s or customer’s innovative use of information featured in our library of hundreds of examples, contact me via twitter.
by Doug Laney | August 10, 2014
As you swipe your loyalty card at the grocery store, the register automatically discounts certain items in your basket as they pass by the scanner. But is it really a discount? Of course the store advertises that you will receive discounts by signing up for and using a loyalty card. But is it really because you’re loyal? Do you get bigger discounts the more often you shop? Not likely. Perhaps the loyalty card is encouraging loyalty, but heck, you can obtain one from any major grocer. Therefore, it seems there’s something more going on here than just a loyalty-based discount.
In reality, “loyalty-based discount” is code for “free food in exchange for information about you and your purchase.” More than your loyalty, grocers and other merchants with similar programs are after your data.
The Information Economy Matures
Over three decades ago FedEx’s CEO Fred Smith proclaimed that “The information about the package is just as important as the package itself.” Since then, this realization and mindset have swept across every corner of commerce. Recently, we have seen companies purely in the business of accumulating and selling data command stratospheric valuation multiples. But why should they be the sole purveyors of data? The grocer “loyalty” example has been around for decades, but it’s a B2C (business-to-consumer) model. Many business leaders today are realizing that this model can be extended to B2B (business-to-business) scenarios as well.
Most notoriously, retailers and, yes, grocers such as Dollar General, RiteAid and Kroger have made certain data sets commercially available to partners, suppliers and others for a fee. Kroger generates an impressive $100M annually in incremental revenue this way. Indeed, businesses in nearly every sector, from telecommunications to energy to manufacturing to financial services, have sought Gartner’s counsel on forming internal efforts to package, productize, price and promote their own information products. And as you would expect, over the past few years information marketplaces (e.g. Microsoft Azure, ProgrammableWeb, The Data Exchange, datamarket.com, Quandl) have emerged as eBay-like matchmakers for sellers and buyers of data. Oh yeah, even eBay itself has gotten in on the action, with a new category for “information products.”
Not Accounting for What Counts in Information Barter Transactions
But before we get overly excited about and fixated on selling data for cash, let’s get to the truly fascinating part: taxation. Yes, taxation. Back to our grocer and you. Remember, you’re trading information about you and your purchase in exchange for free food, not cash. According to generally accepted accounting principles (GAAP), discounted transactions are recorded at the value of the money exchanged, not the cost of goods. Cash is king in these transactions. However, if we presume or demonstrate that the grocer is monetizing the incremental data received by virtue of a personally identifiable loyalty card being used, then the transaction (or part of it) ostensibly becomes a barter transaction. Stay with me now…
Barter transactions are recorded by both parties based on the value of the good or service received, yet there’s no requirement that both parties perceive the same value for what they have received. Here’s the rub: data has no value according to the accounting profession. That’s right, despite what 80% of business executives surveyed by Gartner believe, your company’s information assets are not assets at all–at least, and quite conveniently, not according to the accounting profession (e.g. FASB, IFRS, AICPA, IAS) or government revenue services (e.g. the IRS). Even for companies like ACNielsen and S&P that have long been purveyors of data, and more recent ones like Google, Facebook and Twitter, their vast storehouses of information assets are nowhere to be found on their balance sheets.
Therefore, if you receive information in return for providing any good or service, arguably the value of the transaction for accounting and tax purposes could be recorded as zero or negligible. Now you may not want to find yourself arguing this with an IRS auditor or tax court, but it’s certainly interesting to note how the everyday grocery transaction is an indicator of what could or should be in store for your data. Regardless of the possible tax advantages, bartering with or for data unquestionably opens up entirely new avenues of commerce, even for traditional businesses. Welcome to the wondrous world of infonomics.
This piece originally appeared, in part, in Forbes: The Hidden Tax Advantage of Monetizing Your Data
Follow me on Twitter @doug_laney
For more on infonomics:
The Hidden Shareholder Boost From Information Assets, Forbes
Six Ways to Measure the Value of Your Information Assets, TechTarget/SearchCIO
Infonomics Treats Data as a Business Asset, TechTarget/SearchCIO
Improving the Value of Customer Data Through Applied Infonomics, Gartner Research
The Hidden Tax Advantage of Monetizing Your Data, Forbes
How Organizations Can Monetize Customer Data, Gartner Research
Predicts 2014: Innovating With Information Will Demand New Data, Organizations and Ideas, Gartner Research
To Twitter You’re Worth $101.70, Gartner Blog Network
Twitter’s Secret Nest Egg is in Plain Sight, Gartner Blog Network
Infonomics: The New Economics of Information, Financial Times
Putting a price on information: The nascent field of infonomics, Interview in SearchCIO
Predicts 2013: Information Innovation, Gartner Research
The Birth of Infonomics and the New Economics of Information, Gartner Maverick Research
Toolkit: Assessing Key Data Quality Dimensions, Gartner Research
Introducing Infonomics, Gartner Research
An Introduction to Infonomics, interview in Information Age
Infonomics-The Practice of Information Economics, Forbes
Extracting Value from Information, interview in Financial Times (requires free registration)
To Facebook You’re Worth $80.95, Wall Street Journal
Tobin’s Q & A: Evidence of Information’s Real Market Value, Gartner Blog Network
Infonomics Discussion Group on LinkedIn
by Doug Laney | November 13, 2013
Many vendors and pundits have attempted to augment Gartner’s original “3Vs” from the late 1990s with clever(?) “V”s of their own. However, the 3Vs were intended to define the proportional dimensions and challenges specific to big data. Other “V”s like veracity, validity, value, viability, etc. are aspirational qualities of all data, not definitional qualities of big data. Conflating inherent aspects with important objectives leads to poor prioritization and planning. For example, if you’re like many organizations, your terabytes of streamed sensor, log file or multimedia data may not have veracity (data quality) issues at all, but your megabytes of master data may be in total disarray.
As author and analytics strategy consultant Seth Grimes observes in his InformationWeek piece Big Data: Avoid ‘Wanna V’ Confusion, “When a concept resonates, as big data has, vendors, pundits and gurus — the revisionists — spin it for their own ends….In my opinion, the wanna-V backers and the contrarians mistake interpretive, derived qualities for essential attributes.”
Also follow Doug on Twitter @Doug_Laney
by Doug Laney | November 12, 2013
Prior to Facebook’s IPO, I published a piece in the Wall Street Journal suggesting what the economic value of one of its active users was at the time: To Facebook You’re Worth $80.95. So why not reprise the concept by exploring the infonomics of Twitter?
Twitter’s S1 IPO filing reports that there are over 500 million tweets per day from 215 million active users. That’s roughly 850 tweets per user per year. (Over 1000 tweets per year? Consider yourself above average!) Twitter’s S1 balance sheet identifies $964 million in assets, and as of this writing, TWTR’s market cap is $22.83 billion.
As I argued in the WSJ piece, since companies like Facebook and Twitter are nearly pure information-based businesses, the difference between their market cap and reported assets represents the value of their information assets. Or more precisely: current investor expectations of Twitter’s ability to monetize its data, expressed in net present dollars. This means the value of Twitter’s data is $21.86 billion, assuming a year-long valuable life expectancy of a tweet. True, tweets are not easily searchable after a few days on the wire, but this doesn’t mean they’re without value to Twitter.
Note: Due to arcane and archaic accounting practices dating back to the Great Depression, then reinforced in the aftermath of the 9/11 terrorist attacks, information assets are not considered corporate assets and therefore are nowhere to be found on balance sheets of any company. For more on this see my piece, Infonomics: The New Economics of Information in the Financial Times, or my Gartner research note, The Birth of Infonomics and the New Economics of Information.
So, with 215 million active users, this means that each one of us, as of this writing, is worth $101.70 to Twitter. In terms of revenue however, we generate a scant $1.47 per year for Twitter. Each measly tweet itself is worth 12 cents and generates 17 one-thousandths of a cent ($0.0017) in revenue.
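The per-user and per-tweet figures above follow directly from the S-1 numbers; here is a sketch of the arithmetic (market cap and user counts as of this writing, per the article):

```python
# Infonomics back-of-envelope: Twitter's implied information value.
market_cap = 22.83e9           # TWTR market cap at time of writing
balance_sheet_assets = 964e6   # assets reported on the S-1 balance sheet
active_users = 215e6           # active users per the S-1
tweets_per_day = 500e6         # daily tweet volume per the S-1

info_value = market_cap - balance_sheet_assets          # ~$21.9B data value
value_per_user = info_value / active_users              # ~$101.70
value_per_tweet = info_value / (tweets_per_day * 365)   # ~$0.12

print(f"${value_per_user:.2f} per user, ${value_per_tweet:.2f} per tweet")
# → $101.70 per user, $0.12 per tweet
```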
How does Twitter monetize its data? Today mostly via advertising revenue (85-87% according to its S1). It delivers 2 billion tweets per day to desktops and mobile devices, so there’s plenty of room to slip in some ads. Twitter also has special deals with others to provide access to the Twitter Firehose (full data stream) and resell its content. As I suggested in my previous Gartner Blog Network piece, Twitter’s Secret Nest Egg is in Plain Sight, ultimately Twitter will shift to syndicating its data, over advertising, as a primary source of revenue.
Sure, Twitter and Facebook are extreme cases with extreme numbers to go along with them. Still, consider the vast amount of data your organization collects, which, if sanitized, packaged and marketed effectively, could introduce an entirely new revenue stream for you, perhaps even self-funding your ongoing enterprise data warehouse or nascent big data initiative as some of our clients have done.
Yes, of course you can follow Doug on Twitter @Doug_Laney
Category: Uncategorized Tags: analytics, big data, bigdata, economics, facebook, finance, infonomics, monetization, social media, tweet, twitter, valuation, value
by Doug Laney | November 8, 2013 | Comments Off
With all the chirping about Twitter’s ability or inability to generate sufficient revenue via advertising income, it is important to consider an alternative revenue potential even more significant: syndicating its content.
Twitter’s own Terms of Service make it perfectly clear who has unlimited distribution rights to the content you post. Them.
By submitting, posting or displaying Content on or through the Services, you grant us a worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Content in any and all media or distribution methods (now known or later developed).
Yes, Twitter also claims your content is yours (for obvious liability reasons), and that you can “reproduce, modify, create derivative works, distribute, sell, transfer, publicly display, publicly perform, transmit, or otherwise use the Content.” But “you have to use the Twitter API” to do so, and Twitter’s increasingly restrictive revisions of its API have ruffled developers’ feathers by severely crippling applications that repurpose Twitter content, even putting some out of business. As for content, the API only provides access to a “collection of relevant Tweets” (i.e., a subset of only those indexed) from the past several days, and only for specified search parameters.
Therefore, you can only get broad-spectrum, longitudinal access to tweets, let alone historical ones, on a wing and a prayer. Or by special arrangement: Twitter’s big data is off-limits to anyone without a firehose-access licensing and reseller agreement, which only a very few partners have (e.g., Gnip and DataSift). And Twitter has shown a propensity to flip the bird to enterprising developers by clamping down on API functionality. The point is that at any moment Twitter could shift its strategy to become the sole syndicator of historical Twitter content and Twitter firehose access. While this may not seem consistent with Twitter’s culture, remember it’s now a public company beholden to its NYSE:TWTR flock, not the good people of the interwebs.
The value of Twitter’s content to understand and leverage trends and sentiment about markets, products and companies is greater than the value of any twadvertisement. Use cases for customer support, product development, marketing and sales, corporate strategy and development, etc. render Twitter content invaluable to nearly any organization in any industry and geography. Therefore, syndicating its content is likely to be the primary way Twitter ultimately soars to greater heights. Just as likely, Twitter will create a fee-based API or application for self-service analytics.
As we watch how Twitter and other social media companies hatch new ideas for monetizing their content, let this be a lesson about the potential of collecting, packaging and marketing your company’s increasing storehouse of information assets. We are just at the dawn of infonomics and monetizing enterprise data. The early birds will catch the worm, so get cracking.
Yes, of course you can follow Doug on Twitter @Doug_Laney
Category: Uncategorized Tags: big data, bigdata, content, enterprise content, infonomics, information assets, monetization, social media, tweet, twitter, twtr
by Doug Laney | August 19, 2013 | Comments Off
As summer wanes and the kids are heading back to school, I got to thinking about what a Big Data university program might look like if taught by some of the top minds at Gartner. So if you are matriculating this year or just considering enrolling with Gartner, here are your syllabus and instructors for Big Data University (BDU), home of the Fighting Petabytes:
Big Data Hype 101, Professor Nick Heudecker
The Nexus of Forces: Information, Social, Mobile and Cloud 201, Professors Daryl Plummer, Chris Howard
Big Data Strategy Essentials 201, Professors Doug Laney, Frank Buytendijk
Enterprise Information Management 101, Professors Mark Beyer, Roxane Edjlali, Nick Heudecker
Big Data Architecture 201, Professors Mark Beyer, Marcus Collins
Data Governance and Quality 301, Professors Ted Friedman, Debra Logan
Data Science and Advanced Analytics 201, Professors Lisa Kart, Alexander Linden, Svetlana Sicular, Doug Laney
Big Data File Systems 301, Professors Merv Adrian, Donald Feinberg, Marcus Collins, Roxane Edjlali
Self-Service Business Intelligence 201, Professors Kurt Schlegel, Rita Sallam, Neil Chandler, Daniel Yuen
Big Data Privacy and Ethics 101, Professors Frank Buytendijk, Jay Heiser
Mobile Business Intelligence 301, Professors Joao Tapadinhas, Lyn Robison
Big Data Analytics Technologies Lab, Professors Carlie Idoine, Svetlana Sicular, Rita Sallam, Neil Chandler, Jamie Popkin
International Studies in Big Data, Professors Hideaki Horiuchi, Donald Feinberg, Bhavish Sood, Dan Sommer, Daniel Yuen, Alexander Linden, Frank Buytendijk, Eric Thoo
Social and Collaborative Analytics 201, Professors Carol Rozwell, Rita Sallam
Executive Education in Big Data 101, Professors Hung LeHong, Mark Raskino, Doug Laney
Innovating with Information 301, Professors Doug Laney, Frank Buytendijk, Lisa Kart
Business Intelligence Competency Centers 201, Professors Bill Hostmann, Kurt Schlegel
Data Integration Approaches and Technologies 201, Professors Colleen Graham, Mark Beyer, Roxane Edjlali
Of course there are several electives to choose from as well:
The Role of the Chief Data Officer, Professors Debra Logan, Mark Raskino, Joe Bugajski, Doug Laney
Infonomics and the Economics of Information, Professors Doug Laney, Andrew White
Sentiment Analysis, Professors Jamie Popkin, Gareth Herschel
Master Data Management, Professors Andrew White, Bill O’Kane
Big Data in Financial Services, Professor Mary Knox
Big Data in Telecommunications, Professor Mei Selvage
Big Data and Analytics Service Providers and Outsourcing, Professor Alex Soejarto
Big Data and Operational Technology, Professors Kristian Steenstrup, Doug Laney
Complex Event Processing, Professor Roy Schulte
Digital Marketing, Professors Yvonne Genovese, Gareth Herschel
Performance Management, Dr. Christopher Iervolino, Professor Nigel Raynor
Dean of the College of Business Intelligence, Analytics and Performance Management: Ian Bertram
Dean of the College of Enterprise Information Management: Regina Casonato
To see Gartner Big Data University “professor” biographies, visit: http://www.gartner.com/analysts/coverage.do
To schedule remote office hours with any professor, contact email@example.com
For your required Gartner BDU reading list, visit: http://www.gartner.com/technology/topics/big-data.jsp
To find out how to see and meet with your favorite Gartner “professors” at one of our upcoming global Symposia or Summits, visit: http://www.gartner.com/technology/symposium/orlando/ and http://www.gartner.com/technology/summits/na/business-intelligence/
We look forward to seeing you in class!
Also follow Doug on Twitter @Doug_Laney
Category: Uncategorized Tags: analytics, BI, big data, bigdata, business intelligence, data science, data scientist, eim, enterprise information management, infonomics, information management
by Doug Laney | May 24, 2013 | 1 Comment
As we watch America’s greatest auto racing spectacle this Memorial Day weekend, what we won’t see is even bigger than the event itself, faster than the cars themselves, and more varied than the driver personalities. Of course I’m talking about the data. Racing teams now eat Big Data for breakfast, lunch and dinner. And for snacks in-between.
Outside, Indy cars and their cousin Formula 1 cars may be covered with dozens of sponsor logos, but inside they’re smattered with nearly 200 sensors constantly measuring the performance of the engine, clutch, gearbox, differential, fuel system, oil, steering, tires, drag reduction system (DRS), and dozens of other components, as well as the drivers’ health. These sensors spew about 1GB of telemetry per race to engineers poring over it during the race and data scientists crunching it between races. According to McLaren, its computers run a thousand simulations during the race. After just a couple of laps they can predict the performance of each subsystem with up to 90% accuracy. And since most of these subsystems can be tuned during the race, engineers, pit crews and drivers can proactively make minute adjustments throughout the race as the car and conditions change.
Throughout the season, based on this accumulated data warehouse of information on car performance, driver performance, tracks and conditions, racing teams will make 50 or more mods per day. And for each season, new cars are built from the ground up using 95% new parts designed using this data.
Of course all these modifications need to adhere to fluctuating, fastidious and unforgiving racing league specifications. So analytics to ensure compliance is just as important.
Telemetry Tech on the Track
So what’s behind all this Big Data wizardry? Here’s a summary of some of what McLaren Electronics has built and baked into and around its team’s cars:
- Its latest data collection device, the TAG-320, features 4000MIPS of processing power, 512MB internal RAM, 8GB of logged data capacity, 13 buses, up to 100kHz analog sampling rate, internal accelerometer, 4000 logging channels, and a 1Gbps Ethernet link speed. Most of these characteristics are a 5-10x improvement over the previous 2008 TAG-310b model.
- The ATLAS (Advanced Telemetry & Linked Acquisition System) is a suite of analytics tools for real time storage, analysis, visualization and manipulation of data. It provides a customizable workbook, graphical timelines and other comparative visualization, heuristic car system checks, automated data alignment and sequencing, and a Microsoft SQL Server API. ATLAS offers analysis features called functions to combine parameters and develop sophisticated analytics, checks to automatically assess any car component, and markers to automatically or manually pinpoint the time when some anomaly happens.
- Accelerated data analytics is achieved using SAP’s HANA in-memory database
- Its Remote Data Server (RDS) enables live telemetry to be viewed simultaneously anywhere in the world by factory engineers, parts suppliers and data analysts
- Simulation capabilities using MATLAB (Simulink) can determine what might happen under different track or race situations, or if a driver behavior or car system were changed
- Special servers are used for collecting and integrating weather and other external data
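None of McLaren’s software is public, but the “checks” idea from ATLAS (automatically flagging when a car subsystem drifts outside its expected operating range) can be sketched generically. The channel name, limits and sample values below are invented for illustration, not McLaren’s:

```python
# Illustrative sketch of an ATLAS-style "check": scan a telemetry channel
# for samples that fall outside an expected operating range. The channel
# name and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Check:
    channel: str
    low: float
    high: float

    def run(self, samples):
        """Return the (time, value) samples that violate the expected range."""
        return [(t, v) for t, v in samples if not (self.low <= v <= self.high)]

# Hypothetical oil-temperature channel, sampled as (seconds, degrees C).
oil_temp_check = Check(channel="oil_temp_c", low=80.0, high=140.0)
telemetry = [(0.0, 95.2), (0.1, 121.7), (0.2, 146.3), (0.3, 118.9)]

violations = oil_temp_check.run(telemetry)  # flags the out-of-range sample
```

In a real system a violation would raise an alert or drop a marker on the graphical timeline so engineers can jump straight to the anomaly; the principle is the same whether the “channel” is a gearbox sensor or one of your business KPIs.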
Is Your Business on Track with Big Data?
All the excitement of auto racing aside, consider the key underlying components of what racing teams are doing to accelerate the performance of their cars and drivers, and how these techniques can and should apply to your own, albeit relatively mundane, business.
Use this checklist to see if your business will have a checkered future or get the checkered flag:
- Are you sufficiently monitoring key business processes, systems and personnel using available sensors and instrumentation?
- Are your data streams collected frequently enough for real-time process adjustments (i.e. complex event processing)?
- Do your business processes support real-time or near real-time inputs to adjust their operation or performance?
- Can you anticipate business process or system failures before they occur, or are you doing too much reactive maintenance?
- Do you centrally collect data about business function performance?
- Do you make use of advances in high-performance analytics such as in-memory databases, NoSQL databases, data warehouse appliances, etc.?
- Do you gather important external data (e.g. weather, economic) to supplement and integrate with your own data?
- Do you synchronize, align and integrate data that comes from different streams?
- Do you make your data available to key business partners, suppliers and customers to help them provide better products and services to you?
- Do you have a common, sophisticated analytics platform that includes the ability to establish new analytic functions, alerts, triggers, visualizations?
- Can you run simulations on business systems while they’re operating and also between events to adjust strategies?
- Does your architecture support multiple users around the world seeing real-time business performance simultaneously?
- Do you have teams of business experts, product/service experts and data scientists collaborating on making sense of the data?
- Do you modify your products or services as frequently as you could or should based on available data?
- Do you also use data you collect to develop new products or services as frequently as you could or should?
Racing teams are able to invest in advanced analytics because millions of dollars and euros are on the line from hundreds of sponsors. Hopefully your own big data project sponsors appreciate that big money is on the line for your business as well. Winning the race in your industry now probably depends on it.
Also follow Doug on Twitter @Doug_Laney
Category: Uncategorized Tags: analytics, auto racing, big data, business intelligence, indianapolis 500, indy 500, operational technology, performance management, racing, telemetry
by Doug Laney | April 3, 2013 | 3 Comments
Given all the hype over Big Data and concerns of data ownership, I thought it would be interesting to explore who actually owns Big Data. No, I mean who really owns “big data.” Yes, the trademark. Next stop: the United States Patent and Trademark Office online database.
Talk about Big Data. The database contains a treasure trove of over 8 million patents and 16 million filings dating back to Samuel Hopkins’s 1790 registered process of making potash, an ingredient used in fertilizer (signed by President George Washington, no less), and the oldest active trademark, SAMSON, registered for a brand of rope in 1884, among the nearly 3 million trademarks. With almost 200,000 patent applications and 100,000 trademark applications a year and growing, the ranks of the examiners are growing too, having almost doubled since 2005.
But back to “Big Data.” The term has been in use since at least the mid-1990s, seemingly coined by Silicon Graphics chief engineer John Mashey, who gave a seminar entitled “Big Data & the Next Wave of InfraStress.” However, since he never trademarked it, who did?
Those of you pioneers in data warehousing will remember a boutique consulting firm, often joined at the hip with Teradata, based in Chicago called Knightsbridge Solutions. Knightsbridge specialized in building large databases and data warehouses before it was absorbed into HP. On January 9, 2001, a Knightsbridge attorney filed the trademark and “big data” became a US citizen or whatever. However, they must have liked the term about as much as most of the industry does today (despite its popularity), as they abandoned the trademark less than a year later.
It wasn’t until ten years passed that an enterprising man in Texas reclaimed it only to abandon it again months later. Poor Big Data! It’s been declared dead twice even before it slides into the Gartner® Hype Cycle™ Trough of Disillusionment™. Not to worry, a fledgling VC called Big Data Boston Ventures nabbed the mark last summer. Until they launch, it seems to be the only asset in their portfolio.
Good news for those of you feeling like you missed the boat: there are plenty of variants still available. The USPTO site lists only 44 related marks, including clever ones such as “Bigdata”, “Making Big Data Small”, “Big Data for the Little Guy”, “Rocket Fuel for Big Data Apps”, “Dominating Big Data”, “Wala! Big Data Simplified”, and my personal favorite that integrates large information and lager libation: “Big Data on Tap.”
Here’s to you Big Data! You’ve made your mark.
Follow Doug on Twitter: @Doug_Laney
Category: Uncategorized Tags: big data, intellectual property, patent, trademark