In the competitive landscape of today’s business world, achieving product-market fit is crucial for the success and growth of any organization. This elusive state signifies the perfect alignment between a product or service and the needs of its target market, resulting in strong customer demand and sustainable business growth.
Building a product that truly resonates with its target audience is challenging. It requires diligent research, a clear understanding of customers' needs, and often a degree of trial and error. Equally important is having a cross-functional team spanning marketing, sales, product, and engineering. At Databox, we went through this entire process once more with our latest product, Benchmark Groups.
It started with a brave idea: disrupt the ol' boys of benchmarks, the Gartners and Forresters of the world, and remove the need to invest large sums of money to get the benchmarks companies need to see how they stack up against businesses like them.
At Databox, we understand the importance of product-market fit and its impact on our success. We have embarked on a journey to find the right fit for our new free product, “Benchmark Groups,” recognizing that this process is an ongoing one that requires experimentation and perseverance.
Benchmark Groups aims to bring affordable business performance benchmarks to small and medium-sized businesses, based on objective data gathered directly from the tools they connect to one of our products. The core idea was to start and scale a network (see The Cold Start Problem by Andrew Chen for more on this topic): a platform where businesses can build their own subnetworks, called Benchmark Groups, made up of their clients, peers, or prospects, in which group members compare their performance on selected metrics with each other.
This blog post will explore our journey of discovering product-market fit for Benchmark Groups and share our experiences, including the (hard) lessons we’ve learned. But before we continue, we need to explain some of the terms we’ll use, which are key to understanding the journey.
A benchmark group, also known as a cohort, refers to a group of businesses that have shared characteristics or criteria for performance comparison and analysis.
A benchmark for a specific metric is a set of data points calculated from the values for that metric collected from a group of businesses matching specific criteria (e.g., up to 50 employees, B2B, and the Marketing & Advertising industry sector). It includes a median value and the values for the 25th and 75th percentiles (bottom and top performers). When a benchmark is personalized (e.g., company data for a metric is compared to the median value), we also include the company's value (your performance) and its rank (e.g., "You are outranking 60%"), which tells the company whether it outperforms or is outperformed by others in the cohort for the selected metric.
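To make the definition concrete, here is a minimal sketch of how such a benchmark could be computed. This is illustrative Python only; the function names, field names, and the nearest-rank percentile convention are our simplification, not Databox's actual implementation:

```python
from math import ceil
from statistics import median

def percentile(values, pct):
    """Nearest-rank percentile: the smallest value at or above pct% of the cohort."""
    ordered = sorted(values)
    return ordered[ceil(pct / 100 * len(ordered)) - 1]

def build_benchmark(values, company_value=None):
    """Aggregate one metric across a cohort into a benchmark.

    Always returns the median plus the 25th/75th percentiles; when a
    company_value is supplied, the benchmark is personalized with the
    share of the cohort the company outranks.
    """
    benchmark = {
        "median": median(values),
        "p25": percentile(values, 25),
        "p75": percentile(values, 75),
    }
    if company_value is not None:
        outranked = sum(1 for v in values if company_value > v)
        benchmark["your_value"] = company_value
        benchmark["outranking"] = round(100 * outranked / len(values))
    return benchmark
```

With a cohort of [10, 20, 30, 40, 50] and a company value of 35, this yields a median of 30 and an "outranking" of 60%, matching the rank example above.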
Now it’s time to share our journey.
When we first dreamed up our plan for Benchmark Groups, it felt like we had it all figured out. We hoped that businesses would flock to our product and would want to create their groups and invite others from their network to join, but of course, we knew it most likely would not be that easy.
To jump-start the use of the platform, we created more than fifty publicly available benchmark groups with different combinations of company sizes, industries, business types, and metrics from various popular data providers, e.g., Google Analytics, Facebook Ads, etc.
Next, we thought we'd only have to ask new users to fill out a form about their industry, company size, revenue, business type, and a few other, less essential questions while we were at it. They'd willingly do it, and voila, they'd be perfectly paired with a few benchmark groups matching their profile. The idea was simple: get them into the right group, and they'd have access to all these excellent benchmarks, and we'd have all the information needed to provide them with even more and better benchmarks. How hard can it be?
Turns out, way harder than we anticipated – honestly, it was a struggle.
We were not even close to gaining the traction we expected. People were signing up and disappearing after the first visit, or they were eager to see the benchmarks but unwilling to provide the required company metadata, so they dropped off at this point. Some quickly clicked through the form by selecting the first options available, only to get disappointed afterward because they didn’t match the group characteristics and couldn’t get into the group they were trying to join. The most challenging obstacle we hit was the reluctance of most visitors to connect their data sources. This was a wake-up call that made us step back and think about where we relied too much on our assumptions without validating them enough.
We realized we were too optimistic in thinking that people would instantly recognize the awesomeness of Benchmark Groups and willingly give us anything we asked for. We forgot one key thing: generally, people want to see at least some value for themselves before committing.
Through interviews with our existing and potential users, we learned that they want to glimpse the benchmarks before they feel comfortable sharing their company data with us, regardless of how securely we gather it and how anonymous the benchmarks are. Lesson learned. We needed to better showcase what we offer before asking all those company metadata questions and expecting users to connect their data.
For the record, we gather and store the data securely and fully anonymize it for the benchmarks. Sometimes, we get questions like “Can I see the benchmark for my competitor Company XYZ?” and the answer is: “No, you can’t see the performance of an individual company, and they can’t see yours, either.” 🙂
Another important realization was that we had introduced a new concept, benchmark groups, but its meaning didn't come as naturally to our users as we thought it would. This can quickly happen to teams who live and breathe their product for months and introduce new terms that make sense to them but not immediately to users. We had to find a way to explain it better and build more awareness of how benchmarks can help SMBs that previously might not have had access to them because they were either too expensive or nonexistent for their niche.
All these lessons made us rethink the approach, break down the barriers, and start building more awareness about what we’re doing through our networks and activities from the Go-To-Market teams.
After reflecting on the shortcomings of our initial approach in finding product-market fit for Benchmark Groups, we had to make significant adjustments and iterate on our strategy. Armed with the lessons we had learned so far, we defined the necessary changes: reevaluate the onboarding process, start educating our internal teams and the public, and leverage the feedback loop as much as possible.
First, we revisited the signup and onboarding flows. We removed any step that was not crucial for users to get to the benchmarks as fast as possible. No more hoarding the nice-to-have data that we might use in the future.
Next, we simplified the process of joining a group. We eliminated the requirement that a user's company metadata match the group's conditions perfectly before joining and allowed users to join any group they wanted. We simply warned them that their data would not be included in the calculation of the benchmarks (we made sure of that) and that the group and its benchmarks might not be relevant to their company because they are for a different industry, business type, or company size.
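The relaxed join rule can be sketched roughly like this (a hypothetical Python illustration of the logic described above, not our production code; exact-match criteria are a simplification):

```python
def matches(company, criteria):
    """True if the company's metadata satisfies every group criterion."""
    return all(company.get(field) == wanted for field, wanted in criteria.items())

def join_group(company, group):
    """Anyone can join, but only matching companies feed the benchmark calculation."""
    included = matches(company, group["criteria"])
    return {
        "joined": True,                      # joining is never blocked anymore
        "included_in_benchmarks": included,  # mismatched data stays excluded
        "warning": None if included else (
            "This group's benchmarks may not be relevant to your company."
        ),
    }
```

The key design choice is that a mismatch downgrades the experience (a warning, exclusion from the calculation) instead of blocking entry, which removed the biggest drop-off point in the old flow.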
We didn't experiment and iterate on product features and flows alone; we also iterated on the ways we build awareness and understanding of what we are trying to achieve with benchmarks.
We developed educational content for our internal teams to understand the value of the benchmarks for our existing customers in our primary product, Databox Analytics, and prospective customers. For a limited time, we offered free personalized benchmark reports our Customer Success and Business Development team had to generate manually. The feedback we got from them was that using benchmark reports made it much easier to initiate a conversation with a prospective customer and give them something valuable on the first call.
We utilized our LinkedIn networks, especially our CEO Peter Caputa, sometimes known as “the benchmark guy,” and experimented with different types of posts. We developed a couple of blog post types ranging from a full-length research piece comparing benchmarks for different business types, industries, and company sizes to a simple post sharing just one benchmark chart from a specific group.
We regularly mentioned different benchmarks and benchmark groups in our podcast Metrics & Chill and newsletter Move the Needle, and we started collaborating more closely with some of our current benchmark partners – agencies willing to be our early adopters, like SmartBug Media, Etna Interactive, and Good2bSocial.
All these education attempts resulted in more signups, but not as fast and not in the volume we had hoped. In times like these, it's easy to start pointing fingers at this department or that team, especially if we believe we have the best product ever and that all it takes is to pour more and more eyeballs into the top of the funnel, and the rest is history. While this might be true in some cases, it's more reasonable to consider that there might be a gap in understanding the product's value for the user.
In our case, we were acutely aware that it takes a village to “raise a product” and that we all must stick together to figure things out. Our village consisted of a combination of members from marketing, business development, product & engineering, and management (CEO), a combination that was not that common before. Despite the lack of overnight success, we collaborated closely, brainstormed daily, and evaluated our tactics weekly: be like a duck, calm on the outside, but paddle like crazy underneath.
One of the most critical aspects of our new approach was talking with more people, potential users, agency owners, and individual business owners. Sometimes, this meant staying late at the office since the world is still not in the same timezone, but it was worth it (almost) every time. Listen, discuss, think, brainstorm, iterate, improve, evaluate, integrate. Rinse and repeat. Through these feedback loops, we learned that we’re still missing something and need to rethink our approach yet again. So we did.
Our product serves two main audiences: individual businesses looking to compare themselves to industry standards for data-driven decision-making and agencies looking for alternatives to cold prospecting. Inspired by Tim Ferriss and his question, “What would this look like if it were easy?” we riffed on the idea of how to make it easier for individuals to get to the benchmarks and for the agencies to get more prospective clients and qualified leads by publishing original research and interesting content that would attract them.
We came up with two experiments: ungating the access to benchmarks for everyone and driving collaborative growth through co-marketing with surveys for the agencies.
Until this point, benchmarks were available only after registering and joining a group. It was time to change this, make it easy for people to see benchmarks, and give them the AHA moment without friction. So, did we just open the gates? Of course, it was not that easy. As all product managers know, it never is, but that should never stop us.
First, we had to get buy-in from the stakeholders. Was there reluctance? You bet. Why would we give everything away for free? To everyone? Without registration?! But… that's not how a business works. Actually, that is exactly how some businesses work until they get enough traction. TikTok used this approach and succeeded, so we named this experiment "the TikTok experiment."
We managed to persuade management that this was a valuable experiment in more than one way and that we could always put the gates back in. Since we calculate benchmarks monthly, we argued that we didn't have to fear visitors seeing the benchmarks once and never returning because they'd seen it all. We wanted to let them explore freely, share, talk about benchmarks, and invite others to the platform to see it with their own eyes. Virality rarely happens behind closed doors.
Next, we had to make an architectural change to how we calculated benchmarks, which we called metric-centric benchmarks. This was not a small change, and it took us one quarter to do it, so I’m grateful to the decision makers that they believed in our “I have a dream” and let us pull this off. We were smart about it, which enabled us to later include Benchmarks in Databox Analytics. We are currently in the process of adding them to our Metric Library.
As a part of this experiment, we built Benchmark Explorer, which allows visitors to search for benchmarks for any metric we support. We were inspired by the Moz Keyword Explorer. We moved away from visualizing the benchmarks with charts and offered a simple interface to search, filter, and explore the numbers.
We dropped our assumption that we know exactly what our users need based on their company information – this is what we did with the Benchmark Groups. With the Explorer, we encouraged curiosity and exploration.
We knew that, eventually, we’d have to ask visitors to sign up, so the main challenge was making the experience worth creating an account. Therefore, we introduced custom benchmark reports, a way for the visitors to build their reports from a mix and match of the metrics and filters they want and then share this report freely via public links.
Custom benchmark reports are an extension of the Explorer, letting the visitors visualize benchmarks for their selection of metrics. The only gated action is saving reports to associate them with the correct user and let the user edit them later.
When our Customer Success and Business Development teams were offering free benchmark reports as a part of their service, we got feedback from them and the customers that they’d like to have a report not only on their cohort but also on others, e.g., a B2B customer was curious about B2C benchmarks, too. Custom benchmark reports gave this flexibility to the user by removing the limitations of Benchmark Groups.
"Make it up. Make it real. Make it scale." ~ Unknown author
This quote is the closest to describing what we’ve been doing for a while now to change how agencies can proactively create demand by collaborating with their prospects, clients, and partners. The collaborative growth approach results from experiments, energetic brainstorming and sparring sessions, feedback from our network, and a combination of things that have worked for us for a while now… and benchmarks!
The idea behind collaborative growth is relatively simple but rarely practiced. All you have to do is stop talking “at” people and start talking “with” them. Stop teaching and start sharing what you’re learning instead. We figured out that co-creating the content with your customers, prospects, and partners helps make your content unique and impossible to duplicate and establish you as an unquestionable expert in your market. You can learn more about collaborative growth by listening to our Metrics&Chill podcast episode on this topic (or reading about it here).
From talking with our potential partners, we learned that for them, performance benchmarks based on metric data represented only one side of the story – our side of it. They were even more interested in process benchmarks gathered through surveys we’ve been running for years and publishing the results on the blog, which for them was the other side of the story – their side. We thought, why shouldn’t we have the whole story but start with the survey first? Instead of asking them for their data from the start, we hypothesized that by providing value through aggregated survey results as process benchmarks, users would be more willing to connect their data to compare to similar companies to get the whole picture.
We started pairing our relevant surveys with relevant benchmark groups as a proof of concept to see if we could get insights to fuel our content. We could, and we did!
Ideally, we could learn what top performers do through survey responses segmented by benchmark performance from their real, objective data. Our end goal is to get enough overlapping data from survey respondents AND benchmark group members connecting their data sources in the product, but we started with what we had.
Combining both types of benchmarks opened the gates of original research and provided new opportunities for sharing our learnings with the community. We were ready to do an FFF round of experiments – with our LinkedIn friends, followers, and family – team members creating their niche groups and special surveys and sharing their learnings day and night.
We started running pilot co-marketing campaigns with our first benchmark partners – agencies that, as a part of preparing for the campaign, created their benchmark groups and collaborated on developing specialized surveys. At the same time, we helped them with the benchmark-related content to show them the ropes.
This was a learning process for both sides because some things that came naturally to us since we’ve been thinking about the benchmarks day and night for the past year were not easy for them.
For some, it was the lack of an army of content marketers to produce content with the same gusto we did; for others, especially smaller agencies, it was challenging to get the first 15 members to connect their data in the group, even with the survey – mainly because our flow was not exactly perfect. Fortunately, with our own AHA moment, we found a way to kick-start partner groups with our data so their prospects can see benchmarks as soon as they join.
For us, the manual process was taking its toll. Preparing surveys takes time and requires a lot of back and forth with the partner to get the questions right. Funnily enough, not every partner wanted a custom-made survey; some were pretty happy with ours. What if it were easy for both parties? The idea of embedding the surveys was born.
In January, we launched our scalable process for running co-marketing campaigns with our benchmark partners, and we’re not stopping here. We are running ten of them, and one of the latest is research with our benchmark partner Jasper.ai about how marketing agencies use AI.
Remember what the most important principles of collaborative growth are? Sharing. Co-creating. With this in mind, we prepared a free Benchmark Groups Certification Course to share what we learned and help others do the same. We plan to launch it at the end of February, and we have a waiting list in style – a LinkedIn post.
Was it easy to get to this point? Not even close. Was it challenging? Most of the time. Was it fun? Sometimes. Was it worth it? Definitely.
Finding a product-market fit is not for the faint of heart or those unfamiliar with resilience. We are far from done. We have only started building a scalable funnel to drive collaborative growth and help millions of businesses leverage data to improve their performance.
Follow us on LinkedIn to stay tuned for the launch of the certification course. In the meantime, feel free to explore the benchmarks or start building your benchmark group.
Our journey to a product-market fit for Benchmark Groups is part of a series of technical articles that offer a look into the inner workings of our technology, architecture, and product & engineering processes. The authors of these articles are our product or engineering leaders, architects, and other senior members of our team who are sharing their thoughts, ideas, challenges, or other innovative approaches we’ve taken to constantly deliver more value to our customers through our products.
Mateja Verlič Brunčič, PhD is the Director of Product at Databox and the visionary behind Databox’s second product, Benchmark Groups. With her exceptional problem-solving skills, relentless drive, and curiosity, Mateja has passionately led the development of Benchmark Groups, and has been instrumental in shaping the product so that it meets the ever-changing needs of our customers.
Stay tuned for a stream of technical insights and cutting-edge thoughts as we continue to enhance our products through the power of data and AI.