Great storytelling is at the heart of any meaningful content marketing effort – so this won’t be a post where the machines take over. But what I can say with confidence is that in an era of content shock and too much storytelling (both good and bad), scaling relevant and personal content is only possible if we learn some new tricks and healthy habits.
Last week, I attended the Intelligent Content Conference (#intelcontent) in Las Vegas. The nerdier sister of Content Marketing World, ICC spends its time digging into much of the ‘how’ of modern content. How do I think about taxonomy? How do I store my content and integrate tools like digital asset management (DAM), web content management (WCM), CRM and marketing automation (MA) to get this content ecosystem to work? Do I need to think about AI in the world of content? (That answer is a resounding yes.)
I have a sneaking suspicion that this will be the first of several posts recounting different session takeaways, conversation highlights and vendor discoveries. The event was small, but rich with people who recognize the need to simultaneously hone the craft of customer-centric storytelling and build the skills, technology, and governance for real-time delivery of relevant content audiences will actually consume and find useful.
Storytelling Must Become Structured
We’ve talked about the idea of atomic content and the importance of assigning attributes and metadata to content in other posts. The group of attendees I met last week couldn’t have been more on board with these ideas – and light years ahead of most marketers I talk to who are working on content marketing. As we seek to publish tailored content across more channels, on the right device, to the right persona, at the right time, we need to break content into components that can be matched to exactly the right ‘moment’. The only way to do so is to ensure that we can describe those content ‘atoms’ and reconstruct them on the fly, using a variety of systems, for reuse and targeting.
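To make the idea concrete, here’s a minimal sketch of what a content ‘atom’ with descriptive metadata might look like, and how a system could reassemble atoms for a given persona and channel on the fly. Every name, tag and body string below is hypothetical – this is an illustration of the pattern, not any vendor’s model:

```python
from dataclasses import dataclass, field

@dataclass
class ContentAtom:
    """One reusable piece of content plus the metadata that describes it."""
    body: str
    content_type: str                 # e.g. "quote", "chart", "boilerplate"
    personas: set = field(default_factory=set)
    channels: set = field(default_factory=set)

def assemble(atoms, persona, channel):
    """Pull together every atom tagged for this persona/channel 'moment'."""
    return [a.body for a in atoms
            if persona in a.personas and channel in a.channels]

# A tiny (invented) atom library
library = [
    ContentAtom("Data quality drives AI outcomes.", "quote",
                personas={"cmo", "analyst"}, channels={"email", "blog"}),
    ContentAtom("See our benchmark chart.", "chart",
                personas={"analyst"}, channels={"blog"}),
]

print(assemble(library, persona="analyst", channel="blog"))
```

The point isn’t the code – it’s that once atoms carry attributes, recombination becomes a query instead of a copy-paste job.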
Today, most content is built as a completed asset. That could be an ebook, video or interactive tool. Even though that ebook might have 50 to 60 different components that could stand alone or become part of a different asset, those components are stuck in the ebook. Val Swisher suggested that “PDFs are where content goes to die.” A pretty apt statement when you consider how little insight we get into what happens during the reading of said PDF … and how many of the brilliant individual assets within it are trapped there.
This doesn’t mean ebooks are bad. What it does mean is that we – content marketers, designers and anyone else in the content supply chain – must stop making content blobs. Think critically about the assets you are building and how they could be atomized. (Chris Ross and I will both be talking about this at the Gartner Digital Marketing Conference next month.) Which boilerplate do you use 100 times per week? Could it live in one place and become a dynamic part of other documents? What about the informative charts and quotes that live in all your case studies? Can they become structured? Tagged? Reused? Those are the ‘easy’ candidates for templatizing and tagging with helpful attributes. The hard work is the macro-level taxonomy and the long-term, habit-forming practice of making all – or much – of our content structured in nature. [Hint: Marketing has a lot to learn from technical writers and their use of XML.]
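As one small illustration of the “store boilerplate once, reuse everywhere” habit: even a template with named slots beats pasting the same paragraph into 100 documents, because editing the single source updates every document built from it. The company name, attribute names and copy below are all invented for the sketch:

```python
from string import Template

# Single source of truth for boilerplate; edit once, every document updates.
BOILERPLATE = {
    "about_us": "Acme helps $audience ship content faster.",
    "legal": "All figures are estimates as of $year.",
}

def render(atom_id: str, **attrs) -> str:
    """Resolve one tagged boilerplate atom with document-specific attributes."""
    return Template(BOILERPLATE[atom_id]).substitute(**attrs)

print(render("about_us", audience="marketers"))
```

Real structured-authoring systems (DITA and friends) take this much further, but the habit starts here.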
New Skills and Talents – ahem Content Engineering – May Be Required
Marketing is both daunting and exciting right now – today’s skills may not be enough to excel tomorrow, but there are also so many wondrous things to learn! I met my first ‘Content Engineer’, Matt Balogh, at ICC – and I was pretty pumped, because a Creative Director friend of mine and I had been brainstorming about where she could find someone for her team who would combine the mentality of a creative with the skills of a developer and devops lead.
It turns out there are not a lot of places to find them – yet. Rather, leaders will need devs who are curious and want to learn about marketing – ready to combine disciplines on a day-to-day basis. A content engineer will think about how to break apart and, in a sense, ‘recompile’ content across flows, channels and formats. He or she will partner with content creators to take advantage of the taxonomy and tagging (our new healthy habit) to give our tools raw material to work with as we use everything from rules to AI to dynamically compose relevant messages. [Look out for future posts here – so much to discuss.]
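That ‘recompile’ step can start as plain rules long before any AI is involved. A hedged sketch of what a content engineer might encode – the channel limits, greeting styles and function names here are assumptions for illustration, not any product’s spec:

```python
# Per-channel rules a content engineer might encode: length limits, framing.
CHANNEL_RULES = {
    "twitter": {"max_chars": 280, "prefix": ""},
    "email":   {"max_chars": 2000, "prefix": "Hi there,\n\n"},
}

def recompile(message: str, channel: str) -> str:
    """Reshape one message body to fit a channel's rules."""
    rules = CHANNEL_RULES[channel]
    text = rules["prefix"] + message
    if len(text) > rules["max_chars"]:
        # Truncate and mark the cut; a real system would pick a shorter atom.
        text = text[: rules["max_chars"] - 1] + "…"
    return text

print(recompile("Structured content enables reuse.", "twitter"))
```

Swap the hard-coded rules for targeting logic or a trained model and you have the dynamic composition the conference kept circling back to.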
AI: Humans and Bots will Partner, But Data Integrity is Paramount
Of the 14 sessions I was able to attend, 8 were either about AI or spent at least half their time discussing the outcomes of an AI-driven program. The most advanced story came from Sam Han, who shared the Washington Post’s creation and use of a bot called Heliograf over the last year to increase coverage and complement journalists.
The team at WaPo went from idea to internal pilot in two months – delivering a bot that could create basic, structured coverage of the presidential primaries in template form. Seeing an opportunity to let bots do the rote work, freeing journalists to do harder-hitting pieces, the team next set their sights on the Olympics and split coverage of the contests in Brazil between bots and humans, delivering more coverage and significant engagement across multiple channels. All of that was building up to their biggest effort – coverage of the 2016 elections: Senate, House and presidential.
For that greatest contest, the team made improvements in several areas, applying the lessons learned and trust built along the way. These included more testing, live monitoring and the ability for humans and Heliograf to co-author. Heliograf made live updates to the important stats-based parts of articles while journalists focused on detail and backstory, with tools to override the bot if and where the storyline needed it. The effort was deemed a success, and Sam’s team – an engineering team – continues to evolve Heliograf, seeking opportunities to contribute in ever more meaningful ways.
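I don’t have access to Heliograf’s internals, but the core pattern Sam described – structured data flowing into story templates, with journalists layering color on top – can be sketched in a few lines. The template wording and every data value below are entirely made up:

```python
# A stats-driven story skeleton; a journalist adds backstory around it.
RESULT_TEMPLATE = (
    "{winner} defeated {loser} in {state}, "
    "{winner_pct}% to {loser_pct}%, with {reporting}% of precincts reporting."
)

def write_result(race: dict) -> str:
    """Fill the template from one structured race record."""
    return RESULT_TEMPLATE.format(**race)

race = {
    "winner": "Candidate A", "loser": "Candidate B", "state": "Ohio",
    "winner_pct": 52.1, "loser_pct": 47.9, "reporting": 98,
}
print(write_result(race))
```

Live co-authoring then amounts to re-running the fill as the structured feed updates, while the human-written paragraphs stay untouched.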
None of that could have happened without an abundance of data, real-time access and the tools to parse that information intelligently. None of our marketing dreams of personalized, relevant content at scale can happen without enough accurate data. Despite the feeling that we all have too much data already, we aren’t yet well positioned to use it.
Katrina Neal, a former Cisco content marketer and LinkedIn evangelist, suggested during her talk about the rise of the data scientist in marketing that the “biggest threat to AI is unreliable data, not necessarily human misuse”. In his talk, Pavan Arora, who leads content marketing for IBM Watson, pointed out that 80% of data scientists’ time is spent preparing data – exhorting marketers to enrich their data (read: metadata) so it can be used in machine learning. Ultimately, that circles us back to this idea of structured, tagged content … and the need for marketers to start changing our behavior when it comes to how we create and store assets and their attributes. Has your content team started to think about or act on atomic content, and to build the healthy habit of tagging to an agreed taxonomy?