And how to score a Lambo as a result - well, sort of.
TL;DR
Clickbait headline ✅
The Startup Rollercoaster: Launched on AppSumo, and—drumroll, please—sold to a whopping crowd of 12! Talk about a party flop.
Detective Mode: Scratched our heads, then dove headfirst into software reviews, searching for the elusive 'why' behind our spectacular faceplant.
Nugget Hunting: Struck gold by analyzing software reviews. Turns out users were telling us exactly what they needed all along. Company still went belly up, sort of.
RAG Swag: Literally read thousands of reviews, then stumbled upon LLMs and OpenAI (thank god). Now that thing is cranking out insights faster than Netflix can queue up the next episode.
Why Bother? 👉 Reviews are jam-packed with intel and can turbocharge your understanding of customers, helping dodge startup disasters.
The Bigger Picture: It's not just about dodging bullets; it's about speeding towards success - think insights paving the way to problem-solution fit, maybe even product-market fit, and potentially that Lambo waiting at the finish line.
Experience teaches harshly, but it teaches best.
Back in early 2022, I found myself in a really unique and uncomfortable position. I had just joined a startup that was in the midst of developing a new generation of its process automation platform. When I came on board to lead the Marketing department, they had already spent more than a year building it, investing significant time and resources into what was expected to be a groundbreaking product. At first, it seemed promising - the UX was far superior to that of the legacy platform and made it notably easier for noobs like me to build functioning, no-code workflows. Then, after many months of hard work, we finally launched on AppSumo. 12 people bought it - ouch! It felt like throwing a party where everyone RSVPs 'yes' and then you end up dancing alone with your cat.
At the time, I was puzzled. I simply did not understand why it didn’t attract more interest from users, given that the lifetime deal was an absolute steal (rhyme time)! The feedback from the AppSumo launch was clear: our product wasn’t ready. Users described it as ‘incomplete’ or ‘a nice little toy’. It was the equivalent of being friend-zoned by a cute girl after mustering up all your courage to ask her out on a date.
Now in hindsight, after extensively using similar platforms like Make (formerly Integromat), I do understand why nobody wanted it. But back then, this wasn’t obvious to me. As the initial sting of this rejection waned, we grew increasingly determined to understand what was missing to make the product truly desirable.
What made the whole situation even more challenging was the fact that by then, our team had grown to over 40 employees, and with just around €1M in ARR from the legacy business, we were burning through our seed funding at an alarming rate. Time was running out.
In an effort to identify the 3-4 key features needed to transform this into a box-office hit, I reached out to users of the old platform. While I did gain some insights, existing customers often had highly specific and customized solutions, making it damn near impossible to outline a clear blueprint for the ‘perfect product’. Pressured by financial constraints and limited by the availability of our customers, my ability to conduct interviews was capped at around three per day - not nearly enough to figure this out in time.
Consequently, I decided to pivot and picked up an alternative research method I had sort of stumbled upon earlier: analyzing software reviews of direct competitors and adjacent products. What I discovered was an absolute goldmine - a treasure trove of likes, dislikes, expectations, use cases, and feature requests. I spent a week, maybe even longer, meticulously combing through this feedback, collecting every tiny bit of useful information in a spreadsheet and ultimately crafting a comprehensive roadmap for our devs to ship ASAP.
The insights from our research were enlightening, but time was against us. Our last-ditch effort felt like a Hail Mary, and in the end, it didn't pan out. There was simply not enough time to apply what we had learned from the reviews, and we couldn't secure the necessary funds to bridge this implementation phase. On top of that, the initial negative feedback from the AppSumo launch had taken its toll: morale plummeted, and we had to lay off 90% of our staff. It was not a pleasant experience, to say the least.
But what has stuck with me ever since is this: Analyzing software reviews can provide rapid insights into what users really think and want—insights that traditional methods couldn't offer, at least not for me, and certainly not as quickly.
The startup graveyard
The sad truth is that 90% of startups fail. While the failure rate for year one is around 10%, this number jumps to a staggering 70% in years two to five. By year ten, 90% of them have vanished, according to the United States Bureau of Labor Statistics. So, it's safe to say that the path of a startup is not for the faint of heart.
Now this begs the question: why do so many fail? What are the specific reasons they go out of business? Rarely can the failure be attributed to a single issue; instead, it's usually a mix of several factors. However, what consistently stands out in these statistics is the lack of product-market fit, which appears to be a primary driver of failure.
But the failure to secure product-market fit isn't just about misidentifying the need; it often stems from not understanding the customers deeply enough. Startups get so wrapped up in their brilliant solutions that they forget the basic rule: it doesn't matter how good your product is if no one wants it. And customer discovery isn’t just a step in the beginning, it’s a continuous loop of feedback and iteration, which, if overlooked, leads to the creation of products that no one asked for.
The Promised Land: Product-market fit
Product-market fit can be described as the Holy Grail every startup aspires to discover. It is to startups what Wimbledon is to tennis players—a prestigious milestone that not only validates skill and perseverance but also marks a pivotal moment towards success. For many, achieving this is like reaching Mount Olympus, where the most successful ventures are rewarded for aligning their products perfectly with market demands.
But what exactly is product-market fit? Over the years, I’ve come to realize that there's some ambiguity or elusiveness surrounding the term. It’s frequently thrown around in the SaaS world, and my impression is that many, if asked, wouldn’t be able to define it accurately. Some simply equate it with building something the market wants, and they consider it achieved when customers are buying, using, and ideally promoting the company’s product.
According to entrepreneur and investor Marc Andreessen, who helped popularize this concept, product-market fit is best described as a scenario where:
"The customers are buying the product just as fast as you can make it—or usage is growing just as fast as you can add more servers. Money from customers is piling up in your company checking account. You’re hiring sales and customer support staff as fast as you can […]."
Sounds awesome, right? This level of success, as depicted by Andreessen, is a rare feat for most startups. But before even dreaming of such explosive growth and hitting the metaphorical nail on the head, every startup must first navigate the initial challenge of what is often referred to as problem-solution fit.
As the name implies, this means that you’ve first identified the problem (ideally an urgent and important one) and then offer a solution that effectively addresses the issue.
As Michael Seibel put it eloquently:
“If your friend was standing next to you and their hair was on fire, that fire would be the only thing they really cared about in this world. It wouldn’t matter if they were hungry, just suffered a bad breakup, or were running late to a meeting—they’d prioritize putting the fire out. If you handed them a hose—the perfect product/solution—they would put the fire out immediately and go on their way. If you handed them a brick they would still grab it and try to hit themselves on the head to put out the fire. You need to find problems so dire that users are willing to try half-baked, v1, imperfect solutions.”
Now this involves knowing your customers inside out and understanding their needs so thoroughly that you can articulate their problems possibly better than they can themselves. This level of understanding will almost certainly lead to problem-solution fit and it essentially sets the stage for product-market fit, where your solution not only addresses the problem but also resonates strongly with your audience, paving the way for future growth and success.
Know Thy Customer
It shouldn’t come as a surprise that in order to understand our customers' problems and develop solutions that elegantly solve them, we must engage in some form of market research. However, in terms of popularity, research usually ranks somewhere between watching paint dry and sitting through a long, monotonous PowerPoint presentation (with lots of text on each slide).
While traditional research methodologies are essential, they can also be laborious and time-consuming. Scaling these methods for larger studies often proves challenging, and accurately assessing nuances in sentiment requires considerable skill.
But let’s quickly explore the most common options we have at our disposal and see how well they are suited to home in on customer expectations, preferences, and pain points.
Choose your weapon, mate.
Desk research, or as most of us might call it, Googling, is an obvious choice for many - it’s fast, efficient, and requires virtually no prep work. All you need to do is read through a bunch of blog posts, watch YouTube videos, or scan various industry reports. But, as with many things, if the source data is poor, you’ll face the classic 'garbage in, garbage out' dilemma.
Focus groups are like hosting a dinner party - intimate, insightful, and often full of surprises. It involves gathering a small group from your target market and discussing your product or service at length, all while having the old-school charm of face-to-face interaction. However, managing these sessions can be like herding cats; they’re hard to organize, scale, and can sometimes lead to echo chambers if not moderated skillfully. Plus, the feedback can be biased by dominant personalities who sway the group’s opinions.
One-to-one interviews are like deep dives into the minds of your customers. You ask questions, they spill the beans, and you gather rich insights that are hard to capture in any form of survey. But this requires mastering the art of asking the right questions - much like the principles outlined in the Mom Test. Success hinges on a skilled interviewer and whether you have sufficient time to conduct these interviews one at a time.
Surveys are the Swiss Army knife of market research - versatile, efficient, and capable of reaching a vast audience quickly. You set the questions, and the answers start rolling in from corners far and wide, helping you quantify preferences, opinions, and behaviors. However, surveys often lack the depth that personal interviews offer, as they don't allow for follow-up questions that probe deeper into respondents' thoughts and feelings.
Social listening is like having your ear to the ground in a bustling market square - tuning into the buzz and chatter about your product, service, or related hashtags. It’s fantastic for seeing how people naturally talk about your field, whether that be direct brand mentions or broader industry topics. The catch is that social media conversations can be noisy, scattered, and not always directly relevant to your specific queries.
Now we've got a solid lineup of traditional research methods—good ol’ Googling, focus groups, one-on-one interviews, surveys, and social listening. They’re great, don’t get me wrong. They're like the reliable tools in a craftsman's belt. But in the fast-paced startup world, sometimes you need to dial it up a notch…
The paradigm shift
That’s where Reviewradar comes in. This tool isn’t just faster; it simplifies everything. It allows us to sift through millions of reviews quickly and with minimal effort. It’s like having a fast-forward button for market research. No more need to schedule interviews or - god forbid - organize focus groups. No more chasing down people to fill out surveys or endlessly scouring blog posts to gather intel (and then having to analyze it all). Simply ask the chatbot, and it will tell you what users have said about similar products, what preferences they have, what drives them mad, and what features they want.
That’s why we built this - to make the whole procedure fast and effortless. Because let’s be honest, while essential, doing research isn’t the part most of us are passionate about. We’d rather be shipping products.
With the advent of LLMs (Large Language Models for those who might've missed the memo), analyzing large bodies of text is now as easy as stealing candy from a baby. It’s an absolute superpower for qualitative research that operates at an unprecedented scale. So why not just feed in product reviews to get the job done?
Previously, the sheer volume of data was overwhelming - like trying to drink from a fire hose. But now, we can sip and savor every detail, identifying patterns, assessing sentiment, and catching emerging trends as they unfold. These powerhouse models do more than just read; they understand context, cut through the noise, and pinpoint what really matters. This isn't just a step forward; it's a giant leap in how we approach market research and gather insights.
Armed with a database containing 3 million reviews from over 100,000 products, Reviewradar knows the software market inside and out and comes equipped with built-in sentiment analysis to effectively gauge emotions. Imagine having an ultra-intelligent bot at your disposal, combing through mountains of user feedback with valuable insights baked in. Each opinion is a puzzle piece, and Reviewradar can skillfully assemble the big picture, offering a 360-degree view of what customers truly want. This way, you can pivot faster, tailor strategies more effectively, and stay attuned to customer desires without missing a beat.
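To make the sentiment part a bit more tangible, here is a minimal sketch of what LLM-based sentiment tagging of a single review could look like. The model name, prompt, and label set are illustrative assumptions for demonstration, not Reviewradar's actual pipeline.

```python
# Illustrative sketch of LLM-based sentiment tagging (not Reviewradar's actual code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def tag_sentiment(review: str) -> str:
    """Classify a single software review as positive, negative, or mixed."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the software review as exactly one of: "
                        "positive, negative, mixed."},
            {"role": "user", "content": review},
        ],
    )
    return completion.choices[0].message.content.strip().lower()

print(tag_sentiment("The UI is gorgeous, but it crashes whenever I import a large CSV."))
```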
The nitty-gritty
Now, let's peel back the layers and see how this works. At its core, Reviewradar is a chatbot, so the user interface is pretty self-explanatory. You ask a question, and you get a response; it’s that simple. But there’s more under the hood. For best results, ask questions that mention specific products (perhaps competitors), define the problem you’re trying to solve, and highlight specific features or use cases that are particularly interesting to you. The more context you provide, the more tailored the responses will be.
The chatbot reviews the conversation history to ensure no crucial detail is overlooked. It then crafts a detailed search query, transforms this query into a vector using OpenAI’s embedding models, and conducts a search within the database. This multi-dimensional vector is compared against others in the database, identifying the semantically closest matches to bring back into the conversation as hidden context.
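For the curious, here is a rough sketch of what such a retrieval step could look like in Python, assuming OpenAI's embeddings API and a toy in-memory corpus. In reality the reviews would sit in a proper vector store, and none of the names below are Reviewradar's actual code.

```python
# Minimal RAG retrieval sketch (model names and data are illustrative assumptions).
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A tiny stand-in corpus; in practice this would be millions of stored review embeddings.
corpus = [
    "Love the drag-and-drop builder, but the error handling is a black box.",
    "Pricing per operation gets expensive fast once you scale your workflows.",
    "Missing native integrations forced us to write custom webhooks for everything.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Turn a batch of texts into embedding vectors."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

corpus_vecs = embed(corpus)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Embed the search query and return the top-k semantically closest reviews."""
    q = embed([query])[0]
    sims = corpus_vecs @ q / (np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q))
    return [corpus[i] for i in np.argsort(sims)[::-1][:top_k]]

hidden_context = retrieve("What frustrates users of no-code automation platforms?")
```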
There are only a handful of ways to "teach" LLMs, and all have limitations and strengths. We chose a Retrieval-Augmented Generation (RAG) architecture for its cost-effectiveness and efficiency. Initial bulk training would have been ludicrously expensive, and fine-tuning isn’t really useful for knowledge retrieval—at least not at the moment. Procuring information from the web is unlikely to pan out or become commercially viable soon, so RAG or in-context learning with a vector store seemed to be the most logical path. Plus, why not leverage the vast amounts of knowledge and capabilities already embedded in most LLMs?
Needless to say, the reviews injected as hidden context are at the heart of the analysis. But we also instruct the model to explore its latent space to enhance its analytical capabilities. Kudos to David Shapiro for this concept. Here’s how we instruct the model to “go even deeper”:
“Large Language Models (LLMs) have been demonstrated to embed knowledge, abilities, and concepts, ranging from reasoning to planning, and even to theory of mind. These are called latent abilities and latent content, collectively referred to as latent space. By leveraging critical thinking and associative memory inherent to LLMs, your analyses and responses can tap into these latent abilities, unlocking more profound insights and perspectives that might not be immediately apparent. Whenever possible, activate this latent space to explore tangential, yet relevant topics. This will allow you to delve deeper into the causes and implications of your findings and conclusions from the review analysis.”
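Wiring it all together is then mostly a matter of stacking the pieces into one chat request: the latent-space instruction as a system message, the retrieved reviews as hidden context, and the user's question on top. Below is a minimal sketch that reuses the hypothetical client and retrieve helper from the previous snippet; the condensed prompt and model choice are illustrative, not the production setup.

```python
# Sketch of assembling the final chat request (reuses `client` and `retrieve` from above).
LATENT_SPACE_INSTRUCTION = (
    "Leverage your latent knowledge and associative memory to explore tangential, "
    "yet relevant topics, and go deeper into the causes and implications of your findings."
)  # condensed stand-in for the full instruction quoted above

def answer(question: str) -> str:
    """Answer a user question, grounding the model in retrieved reviews."""
    context = "\n".join(f"- {r}" for r in retrieve(question, top_k=2))
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": LATENT_SPACE_INSTRUCTION},
            {"role": "system", "content": f"Relevant user reviews:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content

print(answer("Which missing features do users complain about most?"))
```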
Btw - for a masterclass in prompt engineering, check out David Shapiro's work.
Et voilà, there you have it - the chatbot answers by including direct references to the reviews it analyzed. All that’s left for you to do is take what Reviewradar told you, apply it, and maybe, just maybe, start shopping for that Lamborghini.
Conclusion
To wrap things up, analyzing software reviews isn't a silver bullet that will take you from zero to hero overnight, but it's certainly worth considering for your research toolbox. Why? Because it cuts through the noise, delivering crystal-clear insights into what your users truly need and want. Here’s the thing: reviews are already jam-packed with insights; it’s just that no one has the time to wade through hundreds, maybe thousands, of them. That's where the RAG design comes in handy, offering a smarter, simpler way to extract what’s truly golden. It acts like a lean, mean processing machine, going beyond the superficial to sift through vast amounts of information quickly and effortlessly. And coming straight from the horse’s mouth, these insights could be invaluable intel for your journey towards product-market fit - and, subsequently, maybe even a Lambo.