Ladenzeile talks TechCon: 4 learnings from our Experimentation Program

TechCon is an internal conference across the entire Axel Springer group that connects tech and product enthusiasts from all over the world to learn, get inspired and share experiences. This year, Anaïs Pitou and Danielle Chang-Matthias from Ladenzeile’s Growth Team stepped into the spotlight to share learnings from their recent journey: 4 things we wish we had known before starting our experimentation program.

At Ladenzeile, the name of our Growth Team reflects its purpose: to constantly seek and define new opportunities for our platforms to grow through experimentation. To avoid duplicate experimentation work, keep knowledge centralized and maintain an overview of the bigger picture, the team created a special unit called the Experimentation Task Force. This group consists of cross-departmental members from our Product, Marketing, Design, Growth and Data Teams, with the goals to:

  • Prioritize and clear dependencies between teams 
  • Gain speed in testing, learning and implementing 
  • Promote a culture of experimentation based on user centricity 
  • Pick the right experiment for each idea and move away from a test-only mindset

On the front line of the initiative are our two in-house experts, Anaïs and Danielle. Over time they have gathered insightful learnings from their journey, which they have now also shared with the wider Axel Springer family. Here’s what they told the audience at TechCon:

1. Involving everyone means involving no one

Starting out, we invited everyone to have a seat at the table. Even though the idea of letting everyone have a say came from a good place, we soon realized that involving everyone actually means involving no one: meetings were often very time-consuming with very little outcome, and only a few participants were proactive. We learned that we needed to differentiate between people who need to be informed or consulted on a regular basis and people who are actively part of decisions and implementation.

By now inviting only core members, our task force operates with greater efficiency in testing, learning and implementation. Meetings are shorter with more proactive participation, making it faster for us to define concrete action points focused on strategic topics.

2. Not every idea should be an AB test

Ideas are nice to have, and we certainly had many of them when kicking off this new project. On top of that, we got a new AB testing tool, which we were very excited to start using. We wanted to test: the more, the faster, the better. This, however, resulted in a lot of lost time and resources, too much micro testing and often a missing connection between qualitative research and AB tests. This led to experimentation fatigue from too many insignificant results.

What we learned was that when we instead aligned our experimentation work with our company strategy, we suddenly ran more significant and impactful tests.
We halved the number of AB tests and achieved a more diversified palette of experimentation. Today we focus on doing more qualitative research to support AB testing, we roll out implementations at low risk, and we manage to reach our users much faster.
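To illustrate why so many micro tests never reach significance, here is a minimal sketch of a pre-test check. It assumes a generic conversion-rate test with made-up numbers; it is not our actual tooling, just an example of estimating how many users a test would need before a small uplift could even be detected:

```python
# Illustrative numbers only: estimate the sample size a conversion test needs
# before a small uplift could show up as statistically significant.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.040   # assumed baseline conversion rate (4%)
minimum_lift = 0.002    # smallest absolute uplift worth acting on (+0.2 pp)

# Cohen's h effect size for the two proportions, then the per-variant sample
# size required for 80% power at a 5% significance level.
effect = proportion_effectsize(baseline_rate + minimum_lift, baseline_rate)
users_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8
)
print(f"Users needed per variant: {users_per_variant:,.0f}")
```

If the required sample is larger than the traffic a test can realistically collect, the idea is usually a better fit for qualitative research or a low-risk rollout than for an AB test.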

3. The devil’s in the detail – or in the QA

In the rush and excitement of our new experimentation program, we slightly underestimated the importance of QA checks. Sometimes we would do them quick and dirty, just one day before launch and only by the test driver. This also meant that we launched quite a few buggy tests that we then had to re-run, which in the end cost us both time and money.

We learned that implementing clearer QA guidelines significantly improved the credibility of our test results.
Today we use both automated and manual QA, as well as a checklist. We also invite external people to join the QA to avoid bias. And the result? An even closer collaboration between our teams, more efficient and credible work, and the ability to foresee technical challenges before launching a test.
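As one example of what an automated pre-launch QA check could look like, here is a hypothetical sketch; the bucketing function and tolerance are illustrative assumptions, not our actual setup. It simulates visitors and verifies that the split between control and variant matches the configured 50/50 allocation before a test goes live:

```python
# Hypothetical pre-launch check: simulate visitors and verify that the
# control/variant split matches the configured 50/50 allocation.
import random

def assign_variant(user_id: int) -> str:
    """Toy deterministic bucketing; stands in for the real testing tool."""
    return "variant" if random.Random(user_id).random() < 0.5 else "control"

def check_allocation(sample_size: int = 100_000, tolerance: float = 0.02) -> None:
    variant_share = sum(
        assign_variant(user_id) == "variant" for user_id in range(sample_size)
    ) / sample_size
    assert abs(variant_share - 0.5) <= tolerance, (
        f"Allocation skewed: {variant_share:.1%} of users in variant"
    )
    print(f"Allocation OK: {variant_share:.1%} of users in variant")

if __name__ == "__main__":
    check_allocation()
```

Catching a skewed allocation like this before launch is exactly the kind of bug that would otherwise force a test to be re-run.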

4. Failure is part of the process

Last but not least, one of our biggest failures was thinking that a failed experiment meant no learning. When we stopped rushing past unsuccessful results and started connecting the learnings from each experiment, we gained a much deeper insight into and understanding of the user problem.

We have now learned that experiments only fail if we don’t learn from them. This has led us to use much more qualitative research to inform our understanding of user problems and shape the right solutions.

“I feel very proud to be able to present our team’s work to the AS community. From the nodding heads in the audience while we presented and the discussion afterwards, we realized how many other companies share the same struggles in experimentation,” says Danielle Chang-Matthias about the experience at TechCon. And on that note, what better way to round off the topic than with the quote our two Growth Experts used to end their talk:

“I have not failed 10,000 times. I have not failed once. I have succeeded in proving that those 10,000 ways will not work. When I have eliminated the ways that will not work, I will find the way that will work.”

– Thomas Edison
