10.0 (skippable) intro
More than a year, I know. At least you can't say I'm spamming your inbox like a bot (I'm not cool enough for that).
Not much has changed: time went by and I spent it as always: wasting it on the internet looking at funny memes and making unfulfilling attempts at prototypes. Joined R/GA. Studied a bit of data - took a pretty intense (for a math-challenged person like me) MIT course on data science and a University of Illinois one on data visualization - which were interesting for several reasons (like: the vastness of the topic, the dissonance between online chatter and true application, the intellectually dishonest ways in which data gets used to build personal "klout" - insert hipster irony here).
The upside: it kept me busy and I didn't write anything. The downside: it made me think and now I'm writing. Oh well.
10.1 actively resisting truth
I've been trying hard to avoid references to the highly flawed, self-defined democratic process that brought us the presidency of the Tweeting Maniac, and the behind-the-scenes mayhem of false news propagated by media platforms that claim they are not news companies (but they really are, as they've begrudgingly started to acknowledge) - although tangential pieces like kraft rejecting 85% of impressions for quality concerns, do you trust big data? try googling the holocaust or advertisers wasted over £600m on non-viewable ads last year were kind of building traction towards that. Then Wired, the technobible, came to my rescue.
Ever since I renewed my subscription, Wired has been - literally - bombarding me with offer emails asking me to subscribe. Which is not exactly what you expect from a company that writes about technology and data quite often. That made me think (also, as always, thanks to Dan Hon) about how flawed the "unsubscribe" process is (I unsubscribed from the first one, but other emails kept arriving, seemingly from different Wired mailing lists), and generally how impossible it is becoming to say no to a process.
Think of applications that ask you to review a certain feature, with the only options available being yes or maybe later. They do so because, let's face it, being on the receiving end of a no is a massive moodkiller - and possibly a liability to shareholders, which is where you don't want to be in our short-term economy. But deciding upstream what to measure and what not to is also a massive limitation on what can be learned and on how those learnings impact design.
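To make that concrete, here's a tiny sketch (names and numbers are entirely made up, hypothetical app telemetry) of what a yes-or-maybe-later dialogue does to the data before anyone even analyzes it:

```python
from collections import Counter

def record_response(user_sentiment: str) -> str:
    """The dialog has no 'no' button, so a user who dislikes the feature
    can only dismiss it - which gets logged as 'maybe later'."""
    return "yes" if user_sentiment == "positive" else "maybe later"

# Hypothetical sample: most users actually dislike the feature.
true_sentiments = ["positive"] * 3 + ["negative"] * 7
logged = Counter(record_response(s) for s in true_sentiments)

print(logged)  # a 'no' never appears in the logs

# Downstream, "non-negative responses" reads as a flawless 100%:
non_negative = (logged["yes"] + logged["maybe later"]) / sum(logged.values())
print(f"non-negative rate: {non_negative:.0%}")
```

The no never exists as a data point, so any metric built on these logs is structurally unable to see it - the resistance to truth happened at design time, not at analysis time.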
The point is: yes, we might have a gazillion data points, but which ones are actually relevant? And that's just data creation; we haven't even touched the downstream part - the actual data point selection - which again might bring us back to the Tweeting Maniac and how he bases his tourettic outbursts on secondary sources (Breitbart) despite being the man with access to the best sources in the US.
Which leads to a kind of big question (to which I have no answer, as always): how actively do we resist truth in data point selection? And how systemic is this approach?
10.2 human flaw as persistent variable
This truth-resistant design (in actions that generate data) is not necessarily limited to modal dialogues like the app example above. It can happen anywhere, even at the algorithm level. Because, again, humans.
An algorithm is a sequence of unambiguous instructions - unambiguous meaning that there should be no room for subjective interpretation. But who decides which variables get analyzed? As Cathy O'Neil puts it, algorithms are opinions formalized through math. They can predict outcomes based on variables that we decide to be correct - emphasis on subjectivity. Which kind of contradicts the "no room for subjective interpretation" bit above. And aren't we suckers for consistency?
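A toy sketch of the O'Neil point (the candidates and weights are entirely invented): the code itself is deterministic and unambiguous, but the opinion gets smuggled in through the choice of weights.

```python
# Two hypothetical candidates described by two arbitrary variables.
candidates = [
    {"name": "A", "experience": 9, "test_score": 4},
    {"name": "B", "experience": 3, "test_score": 9},
]

def rank(candidates, weights):
    """Deterministic scoring: no interpretation happens at run time."""
    score = lambda c: sum(weights[k] * c[k] for k in weights)
    return [c["name"] for c in sorted(candidates, key=score, reverse=True)]

# Same data, same code - the subjective choice of weights flips the outcome.
print(rank(candidates, {"experience": 1.0, "test_score": 0.2}))  # ['A', 'B']
print(rank(candidates, {"experience": 0.2, "test_score": 1.0}))  # ['B', 'A']
```

Whoever picks the weights has already decided who wins; the math just makes that decision look neutral.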
A piece on Forbes estimates the annual growth of data produced at 4300% (which, whatever, but the irony of a piece like that not explaining in any way how that figure is calculated is inescapable). Pretty much everything is potentially a data point today. Transactions. Mobility. Haptic feedback. APIs. Voice search. Hence: big data.
Don't get me wrong, I do understand this angst for understanding and easy explanations: digital and interactive allow us to measure many things, and everyone - marketing especially - craves reliable measurements to validate (rather than inform; think of years of focus groups). As Dan puts it, we choose what we want to measure, and then we change things so that what we measure moves in the way we want it to move. Which is another active resistance to truth.
10.3 functional integration to data integration
Another interesting thing is how everyone talks about data (I even read bits where people claim that chatbots are strategic assets because, among other things, they produce data - which again, whatever, but the irony of something as small as a chatbot being defined strategic speaks volumes about how a lot of marketing evaluates its priorities) without talking about complex issues like data architecture and cleansing.
The point is that most organizations have traditionally been quite slow to adapt to change and quite bad at handling complexity. So this picture gets painted where big data is like Tinder for marketing: lots and lots of options that you can take a quick look at and swipe right (or left).
A bit of a tangent here, but AI, as a larger topic, seems to be suffering from the same buzz bias: while the conversation on LinkedIn around how robots will steal our jobs was peaking, the MIT Tech Review published a really interesting piece, AI's PR Problem, in which they claim that had artificial intelligence been named something less spooky, we'd probably worry about it less. They suggest calling it predictive analysis instead, because the singularity is still a long way away.
Predictive analysis sounds more mundane indeed, but also more useful. I might not need an intelligent machine, but most likely I can use a machine that helps me predict patterns in a reliable way, so I can be more useful to my users (for example by not sending an email asking someone to subscribe two weeks after they subscribed, Wired). But the word predictive is kind of worrying, because it requires systemic thinking, which is what we all desperately want to avoid, isn't it? Just give us Marketing Tinder and we're happy.
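For what it's worth, the Wired case doesn't even need anything intelligent. Here's a sketch, with hypothetical subscriber records and field names, of the most boring possible rule:

```python
from datetime import date, timedelta

# Hypothetical CRM records: one active subscriber, one prospect.
subscribers = [
    {"email": "a@example.com", "subscribed_on": date.today() - timedelta(days=14)},
    {"email": "b@example.com", "subscribed_on": None},  # never subscribed
]

def campaign_targets(records):
    """Only ask people to subscribe if they haven't already."""
    return [r["email"] for r in records if r["subscribed_on"] is None]

print(campaign_targets(subscribers))  # ['b@example.com']
```

One `if` statement, and nobody gets a subscription offer two weeks after subscribing. The barrier isn't the technology; it's that the mailing lists apparently never look at each other's data.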
To go back to conversational interfaces - my chatbot rant - I don't believe they are strategic, but at least they are focused: they directly address a barrier and produce smart data (rather than big data) on the back of it. But such an interface needs to be extremely focused, and there's not much leeway there: either it answers or it doesn't. No maybe later. Which goes back to design.
Design, in the end, is the central point. And it basically means that before going big on data, companies - those that, again according to Forbes, utilize just a fraction of that huge amount of data (same rigor in the facts) - need to think systemically about their products and services and make them relevant, useful and integrated (think sports wearables: they're useful and they produce lots of data that can generate further usefulness).
On the back of a functionally integrated environment, you can create a solid data strategy that manages complexity and provides real value. Otherwise it's just like our dear old focus groups: having potential access to lots of knowledge only to end up listening to the most opinionated person in the room is a bit of a waste of time.