In many people’s minds, marketing is a right-brain discipline: all about creativity, big ideas, and pretty pictures.
Those things are important, of course, but modern marketing relies just as heavily on the left brain. Without analytics, we have no way to know whether all those big ideas are actually affecting our business strategy.
Computational marketing takes that emphasis on data and math to another level, aiming to harness the full power of modern computing to drive marketing strategy. It offers a path to actionable insights from the huge amounts of information in our digital footprints, a path that open source technology only accelerates.
To learn more about this next stage in marketing, I spoke with Swede White, media and communications specialist at Wolfram Research, one of the world’s pioneers in computation and computational knowledge.
What does “Computational Marketing” mean to you?
The idea is fairly simple across domains and disciplines: Harness the power of computation to provide otherwise hidden insights from data.
Ten years ago, or, hell, even five, the buzzword was Big Data. Now that has largely been replaced by Data Science. What that means is we now have the tools to actually compute on that “big data” and glean insights from it, something that was previously out of reach because computation was expensive, both literally and in terms of processing power. We’re seeing a democratization of big data, hence the proliferation of data science and programming in areas where we haven’t seen them before.
Our CEO, Stephen Wolfram, talks about Computational X a lot. And we’re going to see that more and more. For example, Computational Law, Computational Medicine, Computational Criminal Justice, just to name a few that are emerging right now.
Everyone has a little bit of a data scientist in them, whether you’re in PR, marketing, HR, sales, legal, what have you, and that should be nurtured to move organizations in the direction of complete computational decision making.
Can you share an example of what this looks like in action?
Real, scientific social network analysis seems to be seeping into marketing and public relations. There are a number of platforms that can provide you with some basic information about who is saying what at any given time. But let’s say I want to effect change somehow. Who do I target, and how do I know scientifically who I should target?
With some simple programming tools, you can gather data from Twitter, combine it with your CRM data, run queries on various topics, hashtags, what have you, and then organize it all into a network.
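The interview doesn’t tie any of this to particular tools (Wolfram Language would be the interviewee’s native stack), so here is a minimal sketch in Python with networkx of what “organize it all into a network” could look like, assuming the tweets have already been exported; the field names and accounts are hypothetical.

```python
# Illustrative sketch: build a mention network from tweets you've already
# exported (via the Twitter API, a listening platform, or a CRM dump).
# The "author", "mentions", and "hashtags" fields are hypothetical.
import networkx as nx

tweets = [
    {"author": "alice", "mentions": ["bob", "carol"], "hashtags": ["#opensource"]},
    {"author": "bob",   "mentions": ["carol"],        "hashtags": ["#devops"]},
    {"author": "dave",  "mentions": ["alice"],        "hashtags": ["#opensource"]},
]

G = nx.DiGraph()
for t in tweets:
    for target in t["mentions"]:
        # An edge author -> mentioned user; weight counts repeat interactions.
        if G.has_edge(t["author"], target):
            G[t["author"]][target]["weight"] += 1
        else:
            G.add_edge(t["author"], target, weight=1)

print(G.number_of_nodes(), "accounts,", G.number_of_edges(), "interactions")
```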
Ok, great, so now you have a pretty graph to present to the C-levels. What then?
I’m trained as a sociologist, so this is where I get excited. We can take our network and start to think about a concept like centrality, and in particular betweenness centrality. Basically, we’re interested in which actor in the network has the most social or bridging capital, the kind that can accelerate the flow of information through the network.
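As a rough illustration of that idea, networkx can compute betweenness centrality directly; the toy graph below is invented.

```python
# Sketch: rank actors by betweenness centrality, i.e. how often they sit on
# the shortest paths between other actors ("bridging capital").
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("bob", "carol"), ("carol", "dave"),
    ("carol", "erin"), ("erin", "frank"),
])

centrality = nx.betweenness_centrality(G)
for actor, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{actor:8s} {score:.3f}")
```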
So, outside of Twitter, for example, metadata can be gathered about which trade associations a group of people belong to. We can assemble an association matrix and build a network from it. And, again, we’re interested in who is central in that network of people who may seemingly be unrelated. A good way to think about this: which person would make the network fall apart if they were removed? There’s your broker of information across otherwise disparate associations, and there’s your target.
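A minimal sketch of that association-matrix approach, with made-up memberships: build the bipartite person-to-association graph, project it onto the people, and ask whose removal would disconnect the result.

```python
# Sketch: from a person-by-trade-association membership table, build a
# one-mode network of people (linked if they share an association), then
# ask whose removal would fragment it. The memberships here are invented.
import networkx as nx
from networkx.algorithms import bipartite

memberships = {
    "alice": {"Linux Foundation", "CNCF"},
    "bob":   {"CNCF"},
    "carol": {"Linux Foundation", "Apache"},
    "dave":  {"Apache"},
}

B = nx.Graph()
for person, assocs in memberships.items():
    for a in assocs:
        B.add_edge(person, a)

# Project the bipartite graph onto the people.
people = bipartite.projected_graph(B, memberships.keys())

# Articulation points: people whose removal disconnects the network,
# i.e. the brokers across otherwise disparate associations.
print(list(nx.articulation_points(people)))
```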
This type of analysis used to be computationally expensive and required fairly sophisticated programming skills and/or very expensive software. But today, there really is no excuse not to just learn a programming language and get right to it. With all the resources available online for both programming and the basics of network analysis, middle schoolers can quite literally do it. Your marketing team, and really all your teams, should as well.
Data has always played a key role in marketing strategy. What makes this evolution different?
Marketing does have a good history of using the scientific method, for example in A/B testing email content to increase open rates. That’s how I managed to get an open rate up from 9 percent to 15 percent at a small business. I “scienced the hell out of it” to figure out what worked and what did not by running experiments.
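For anyone who wants to science the hell out of their own email program, one common check is whether a lift like that is more than noise. The list sizes below are assumptions; only the open rates come from the anecdote above.

```python
# Sketch: checking whether a lift like 9% -> 15% open rate is statistically
# meaningful, using a chi-square test on the send/open counts.
from scipy.stats import chi2_contingency

opens_a, sends_a = 90, 1000    # control: 9% open rate (hypothetical list size)
opens_b, sends_b = 150, 1000   # variant: 15% open rate (hypothetical list size)

table = [
    [opens_a, sends_a - opens_a],
    [opens_b, sends_b - opens_b],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value: {p_value:.4f}")  # a small p-value suggests the lift is real
```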
But the first difference in current trends is automation. With machine learning and artificial intelligence, we can free up our time by delegating tasks to machines. For example, we now have firehoses of data flowing into organizations at breakneck speed. There are thousands of data points per person, largely gathered from consumer behavior but increasingly from social data, which can be used to construct psychometric profiles of individuals by categorizing them into something like the Big Five personality scheme. We can then determine if someone who is narcissistic is more likely to tweet about something they accomplished using a product or service. Spoiler: They are.
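A heavily simplified sketch of that kind of trait scoring, assuming you already had survey-labeled training text; the tiny dataset and scores below are invented purely for illustration.

```python
# Sketch: scoring text for one Big Five trait (extraversion) with a simple
# bag-of-words regression. Real work would need survey-labeled training data;
# everything below is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

train_texts = [
    "just crushed my quarterly goals, unstoppable",
    "quiet weekend reading and recharging",
    "won the hackathon again, of course",
    "grateful for my small circle of friends",
]
extraversion_scores = [0.9, 0.2, 0.8, 0.4]  # hypothetical survey scores

model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(train_texts, extraversion_scores)

print(model.predict(["shipped a huge release today, team of one"]))
```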
What I think some people don’t realize is that we can use an API to extract location data from those huge swaths of photos on Instagram and Twitter, build profiles of individuals and user types, and then reconstruct their routines from the geolocation data. We can then link that with more traditional types of consumer purchasing data, CRM data, or whatever other sources you may have.
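One way to turn geotagged posts into “routines” is to cluster the coordinates and treat recurring clusters as habitual locations. This sketch assumes the coordinates have already been pulled from post metadata; the points and thresholds are made up.

```python
# Sketch: cluster geotagged posts into candidate routine locations with DBSCAN.
# Coordinates are in radians so the haversine metric gives great-circle distance.
import numpy as np
from sklearn.cluster import DBSCAN

coords = np.radians([
    [30.2672, -97.7431],  # repeated morning check-ins
    [30.2675, -97.7428],
    [30.2669, -97.7435],
    [30.2850, -97.7335],  # a different, recurring evening spot
    [30.2852, -97.7333],
    [30.5100, -97.6800],  # one-off
])

earth_radius_km = 6371.0
eps_km = 0.2  # points within ~200 m count as the same place
db = DBSCAN(eps=eps_km / earth_radius_km, min_samples=2,
            metric="haversine", algorithm="ball_tree").fit(coords)
print(db.labels_)  # -1 marks noise; other labels are candidate routine places
```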
We can then plug in some very granular demographic data on various locales, along with geolocation and psychometrics, like the Big Five I mentioned, and construct probabilistic models of the likelihood that someone will behave in a certain way given some kind of intervention: a targeted ad, a piece of content, or some other sort of message or incentive, sometimes called a nudge in behavioral economics.
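A bare-bones sketch of such a model: a logistic regression in which one feature records whether the person received the intervention, so you can compare predicted conversion probabilities with and without the nudge. All of the data here is fabricated.

```python
# Sketch: probability of conversion given an intervention plus demographic
# and psychometric features, via logistic regression on fabricated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: [saw_targeted_ad, age, extraversion_score]
X = np.array([
    [1, 34, 0.8], [0, 34, 0.8],
    [1, 52, 0.3], [0, 52, 0.3],
    [1, 27, 0.6], [0, 27, 0.6],
])
y = np.array([1, 0, 0, 0, 1, 0])  # did they convert?

model = LogisticRegression().fit(X, y)

# Predicted conversion probability for a new person, with and without the nudge.
person = [29, 0.7]
print(model.predict_proba([[1, *person], [0, *person]])[:, 1])
```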
It starts to seem a bit creepy, but this is the reality we live in.