A Closer Look at Testing: Influence on Change and Transformation
2040’s Ideas and Innovations Newsletter
Issue 181, October 10, 2024
A core element of any decision-making strategy is whether we are asking the right questions. Theodore Levitt, the Harvard Business School professor and economist, asked the classic question, “What business are you in?” Transformative examples abound: Uber is a technology company, not a ride-sharing company; ditto for Airbnb, Google, and Amazon, which are tech companies rather than the obvious labels.
So much for asking questions. But what about challenging the sacrosanct strategy of testing new concepts? Testing is the Holy Grail in our data-obsessed business culture. Technically, testing is a “stage in the product development process where a detailed description of a product (and of its attributes and benefits) is presented to prospective customers or users to assess their attitudes and intentions toward the product. In other words, concept testing is a way to optimize your innovation, drastically reduce the risk of project failure and limit excessive costs. Concept testing is crucial for product developers to determine the innovation’s chance of success,” according to Umi.
But take a step back and ask if you are sure you’re testing the right thing with the right audience. Identifying the right test is on the same level as asking the right questions.
Testing Innovation with Blinders
We have often written about the importance of innovations. However, we always coach that teams need to identify innovations that align with the brand and, most importantly, with the customer in mind.
Here’s a real-life example. An outsourced marketing agency, hired by an eager but inexperienced CEO, took over the marketing function of a digital media brand. The editor, fatigued by the CEO’s nagging to “try something new,” created a new weekly opinion post on the state of the industry he covers. He and one of his reporters assumed the opinion post would be offered to their online audience as an amplification of the brand’s edgy, opinionated voice.
So, the innovation got the green light from the executive committee, and its long-term future would be determined by the results of a test. The marketing CEO took the idea and ran with it, declaring that the test would run on the brand’s LinkedIn page (with few followers) and in the weekend review that the media brand posts. The editor wanted the new opinion piece to run on the website, where it would reach a wider and more relevant audience.
So, what happened? There was little to no traction with the new post. LinkedIn readers were not the brand’s subscribers, and the weekend review was intended solely to enhance the week’s posts and was known to have limited page views.
The situation begs the question, “What are we testing and for whom?” Was it to attract new readers? Strengthen the core audience? Refresh the brand? Who would determine whether the opinion piece was worthy of a long life? When the new post launched, the editor was still at odds with the marketing team about its purpose. Since the page views were low, the editor suggested a pivot to post it on the site where there were loyal, core readers who looked to the brand for its iconoclastic reporting.
Here’s what the marketing CEO said: “I want to clarify something. The purpose of testing is to develop a baseline of how something performs. That’s it. If the testing approach changes, the data will be thrown out, and the clock starts again. If you continually change the testing approach, the goal of establishing a baseline will never be reached. At the end of the test period, when the data is reviewed, there are multiple possible outcomes.
- Drop the idea based on data and performance
- Adopt the idea based on data and performance
- Structure another test.”
The marketing CEO continued: “This is, and has been, our approach for everything new we have tried for the brand, and I see no reason to change the approach now. It seems as if the real issue here is that you are not getting the response you desire in the test, so you are questioning the methodology used to test the idea.”
When the editor pushed back, requesting that the piece be posted on the site, the marketing CEO responded, “Adding it to the website will require workflow changes and impact our publishing approach. In addition, I thought the plan was to test this for three months, look at the data, and then decide what to do. I’m going to advocate for sticking to the plan at this point. That is why we’re testing.”
Our observation of this exercise is not to question the methodology but to question the purpose of the test. When the editor pushed back again, he was told, “It’s what we collectively agreed to. I have no more time to spend on this at this point. If you disagree with the approach, then please feel free to take it back to the group that made the original decision.”
Limited Vision
If you’re like us, you would assess this situation as a clear case of power positioning, a command-and-control mentality, hubris, and a litany of other behaviors that undermine collaborative teamwork. Testing has become the standard of a nimble, innovative culture. Agility, of course, is a mantra of many who believe that iterative progress, with testing, promotes a positive path, at least in the right direction. But false innovation is a sure step toward discouraging a workforce from trying new things.
Here’s another example, from a different organization. An editor was asked to implement an SEO strategy by using ChatGPT to identify keywords in every article, then ensure they were embedded in the content by reverse-engineering the text. Since AI seems to be the answer to everything, and is both the media’s darling and nemesis, this mandate now permeates newsrooms and most organizations. Using AI to create content might better be disclosed in any newsroom as, “I asked GPT for the answer or to write this, so please don’t hold me responsible for its output.”
After two weeks of taking the suggested path, the search results for the recently published pieces were lackluster at best compared to past articles that relied on human input for SEO. When the editor suggested that the tactic was being sniffed out by Google itself, the marketing CEO strongly disagreed and said to continue the practice while his team tried to figure out the reason.
Again, asking what the test is designed to accomplish transcends the test itself. And in this case, the methodology may be the culprit.
Critical Thinking
Critical thinking is the one essential tool every organization needs today to ensure that the right questions and the right tests are vetted. Doing things by a mechanical formula, or on faulty footing because it is the way it’s always been done, isn’t going to lead to critical success in an innovation or transformation model. As with most research and surveys, you can front-load a test to game the results you want. With testing, you can manipulate the test to fail or to tell you what you want to hear.
A Macro View
So, what does this mean and why does it matter?
The real-life scenario reveals the folly of innovation for innovation’s sake, of testing with the wrong assumptions and test parameters for a meaningful outcome, and of not identifying what customers really want, particularly in the context of new ideas generated internally. In the end, if customers don’t want something, then doing it for the sake of doing it isn’t going to produce results. And to add to the complexity of the scenario, you need to identify the right customer for a test.
It takes critical thinking, as well as a lack of ego, to ask the right questions and to prioritize plans and innovations in the context of everything else you are doing. It takes honesty and a thick skin to face up to the reality of decisions, and to know what to hold onto and what to let go. Are we hoping to get lucky, that the effort will go viral and increase our reach and influence? Are we attempting to emulate another organization’s success even though we are a very different entity?
Controlling the Narrative
Power positioning, threats, and overcontrol typically reveal a gap, or a cover-up, in expertise. In truth, our marketing CEO knows nothing about what he is doing. To his credit, he is trying something new; to his fault, he is inflexible about considering a different viewpoint or input. Following a formula, more often than not, is a sign of playing it safe by falling back on a familiar path.
Groupthink is also dangerous, as most people in the room will agree because everyone else is agreeing. They don’t want to appear to be the bad guy, or they are so desperate for success against a backdrop of failure that they readily agree to anything.
To Test or Not to Test
The point of our investigation is to challenge the status quo of testing for testing’s sake. Innovation is not easy unless you’re a one-person operation and can make your own decisions. The tradition of “throwing spaghetti against the wall to see what sticks” is irrational in a digital marketplace. At 2040, we work with our clients to help them pause before they rush into any test or innovation, and to look at the concept systemically to ensure it aligns with the market orientation and shared brand purpose.
- As you look back at our scenario, what do you think will happen?
- What are the prospects of doing something new for newness’s sake?
And perhaps the main consideration: being asked to test without the infrastructure and budget to execute is a fool’s errand.
We would love to hear from you; let us know your thoughts.
If you want to learn even more about transformation, get your copy of The Truth About Transformation.
Explore this issue and all past issues on 2040’s Website or via our Substack Newsletter.
Get “The Truth about Transformation”
The 2040 construct for change and transformation. What’s the biggest reason organizations fail? They don’t honor, respect, and acknowledge the human factor.
We have compiled a playbook for organizations of all sizes that considers all the elements that comprise change, and we have included some provocative case studies that illustrate how transformation can quickly derail.
Now available in paperback.