The hot topic at a client organisation at present is data-driven development (DDD). It is not only a topic but an approach well integrated within the engineering team. Sitting alongside various measures – analytics and heatmapping tools (on both legacy and new infrastructures), NPS, and A/B testing – various departments are grappling to implement the code and get access to the data stream that will soon hit every product team’s shores.
Its intent is to measure friction in the customer experience more accurately – where customers struggle, where they drop out of the product – and to gather their feedback along the way. What it won’t do is sweep the floor of the qualitative approaches; the data team have been very clear about that.
Some product managers, however, have missed this vital point. The uninitiated – or those still in a world where Google Analytics told all – believe that a number alone does not lie. A feature or flow gets only 2% usage? ‘Ah, off with its head!’
Once, I would have agreed with this response. Many years of experience – and, to some product managers’ chagrin – have taught me that the question we need to ask is: why? Why is there only 2% usage?
It may be painfully obvious to the team why: usability problems, it doesn’t work on mobile, it sits in the wrong place in the user’s flow, there are too many competing options, and so on.
Descoping, too, is a factor. All too many times, I’ve seen due process followed – user research, market research, diligent design, testing – leading to a design solution that stands on its merits. Only to have it ripped apart by technical limitations, shrinking timescales, dwindling budgets, development team reshuffles and changing priorities.
So when a validated design solution, once implemented, fulfils only a shadow of its intended glory, is it really a surprise that only 2% of customers use it? The number identifies the problem; it may be the design, or it may point to the process. But only after we dig around and ask why.
At the end of the day, these are all assumptions that need validation. A number alone cannot tell us which one holds. It simply lights the way to finding the problem.
Research. Insight. Discovery. It’s a cyclical process.
Simply looking at a number and saying, ‘well, no one’s using it, we’re taking it out of the product’, is a naive approach. A clear intent led to its inclusion; the execution failed.
This idea left me wondering: DDD comes after the fact. It measures the impact of a feature or interaction AFTER it has been implemented. It assumes an upfront, design-led process is in place and working – that the customer’s experience influences the process, not only at development level but at a higher strategic level. Influencing the product offering, driving discovery into new markets, fully exploring a user’s world and context to ascertain how technology can help them – rather than devising a solution and pushing it to them.
While DDD may be a move in the right direction, it raises the question: is it the right place for an organisation to start?