10× Growth Mindset: The real reason teams stop at $10K–$100K in spend

Welcome back to 10× Growth Mindset, a series where we sit down with growth leaders behind subscription apps to unpack how 10× growth really happens, through patterns, decision logic, and the way teams think when they scale.
Today’s guest is Samet Durgun, a mobile growth operator who’s been in the game since the early performance days. He’s worked with large consumer brands and subscription apps in highly competitive markets, and after seeing dozens of scaling attempts up close, he realized that growth rarely breaks because of the channel itself. More often, teams misdiagnose what’s actually going wrong.
Samet often calls himself a “growth therapist.” As he puts it:
“In therapy, someone explains everything that feels wrong, and the therapist helps identify the fundamentals underneath. When founders come to me with ten different problems, I’m less interested in fixing each one separately. I want to understand what all of them boil down to.”
In this conversation, we unpack why early CAC numbers on Meta can be misleading if you don’t understand learning dynamics, why 10× budget doesn’t automatically mean 10× worse unit economics, how teams get “blind” to their own data when scaling from 10 to 100, and what it takes to turn one winning creative into a repeatable growth engine instead of a lucky spike.
Let’s dive in.

Campaignswell: When teams say they want 10× growth, what do they usually misunderstand about what that actually requires?
Samet: First, we need to clarify what 10× even means. Are we talking about going from $1K to $10K in monthly spend, or from $10K to $100K? The stage matters a lot.
<highlight-pink>The hardest jump is usually from zero to around $10K per month</highlight-pink>, because so many fundamentals need to be in place at the same time. At that stage, teams often underestimate how important the foundation is. Onboarding, pricing, paywall structure, value proposition, and retention all need to work together. If those pieces aren’t aligned, scaling spend won’t fix anything.
Another misunderstanding is around paid acquisition, especially Meta. I often see teams look at early high CAC numbers and immediately conclude that “Meta doesn’t work for us.” They stop too early. But in many cases, if you spend enough to let the algorithm learn — for example, reaching around 50 events per week — performance improves significantly over time. <highlight-pink>Fear of spending blocks growth before the system has a chance to stabilize.</highlight-pink>
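The ~50-events-per-week rule of thumb above implies a spend floor you can sketch in one line. This is a rough illustration only; the $30 CPA figure is hypothetical, not a benchmark:

```python
# Back-of-envelope check: minimum weekly budget needed before judging
# Meta performance, assuming the ~50-conversions-per-week rule of thumb
# mentioned above. The CPA figure below is purely illustrative.

def min_weekly_budget(expected_cpa: float, events_needed: int = 50) -> float:
    """Rough spend floor that gives the algorithm enough conversion signal."""
    return expected_cpa * events_needed

# If your expected cost per purchase is $30:
budget = min_weekly_budget(30.0)  # 50 purchases x $30
print(f"Spend at least ${budget:,.0f}/week before concluding CAC is too high")
```

If the resulting floor is far above what you are willing to risk, that is the "fear of spending" constraint in concrete numbers.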
At higher levels, like $10K to $100K, the misunderstanding shifts. Teams might scale without fixing their margin first. If your margin is already small, increasing budget just amplifies that pressure. Or they assume one winning creative equals a scalable strategy. When that creative fatigues, <highlight-pink>growth stalls because there was no system for consistently finding new hits.</highlight-pink>
And when you move from around $100K to millions in spend, the pattern shifts toward precision. Teams need a clear understanding of what actually drives their growth. If creator marketing shows real traction, you hire the strongest creative marketer you can find. If Meta is the main driver, you deepen that expertise. If your product is complex and data-heavy, you strengthen data management.
At that level, growth depends on repeating what works instead of hoping for another spike. The key question becomes: <highlight-pink>how do we repeat this success</highlight-pink> and grow with the least extra effort? Effort has to be directed exactly where the leverage is. You can still experiment, and one person can explore new ideas, but the core engine is repeatable success.

Campaignswell: Looking across many apps and teams, what patterns consistently separate those that compound growth from those that hit a ceiling?
Samet: You can look at this from a few angles: mindset, creatives, operations, product, and economics.
If we start with economics, <highlight-pink>compounding teams scale when their blended ROAS is positive</highlight-pink>. They don’t let attribution noise scare them. Stalling teams often make excuses like “we’re not measuring enough,” or they rely too heavily on one creator getting most of the spend and use that as a reason not to move.
On the creative side, <highlight-pink>compounding teams constantly test new concepts and variations of winners.</highlight-pink> Teams that stall sit on old creatives until they die. One winner dominates for months, then fatigues, and there is nothing behind it.
Operationally, <highlight-pink>compounding teams make clear decisions about what to kill and what to push based on data.</highlight-pink> Stalling teams rely on gut feelings or superstitions about how creatives or the Meta algorithm work, and they can’t really explain those decisions with data. I do appreciate gut feeling, but it should be connected to evidence. Ignoring available data and calling it intuition is where it becomes a problem.
From a product perspective, <highlight-pink>compounding teams fix fundamentals first</highlight-pink> — onboarding, paywall, pricing, retention — and optimize those before scaling. Stalling teams try to scale before the product is ready, and when it doesn’t work, they blame the algorithm or say Meta doesn’t work for them.
And finally, unit economics. <highlight-pink>Compounding teams know how much they can afford to pay</highlight-pink> because they understand their LTV, cash flow, and the earning potential of their channels. Stalling teams often cannot clearly say what cost per action they can afford. I’ve even seen teams spending large budgets while optimizing for installs and expecting it to somehow work.
In the end, what separates these teams is <highlight-pink>how well they understand their numbers</highlight-pink>, how honest they are about product readiness, how closely they follow the data, and whether they build repeatable processes around what works instead of reacting emotionally to short-term results.

Campaignswell: When growth stalls, most teams jump straight into tactics. How do you personally diagnose where the real problem sits before any action?
Samet: First, this requires experience. There are hundreds of metrics, and it’s easy to focus on the wrong one. I’ve seen people talk about CTR when they should be discussing ROAS. So before jumping into tactics, you need to step back and look at the most important signals.
I usually try to understand which layer the problem belongs to: creative, media buying, or product.
If it’s a creative issue, one simple signal is whether an old creative is dominating spend and no new winners are emerging. Then you look deeper: are you testing enough? Are you testing variations? Is the team consistently producing new concepts?
On the media buying side, I check whether the algorithm is allocating spend in strange ways, for example, pushing budget toward the wrong creatives or wrong countries. That can point to attribution or targeting issues.
If CPA rises immediately when you scale budget, something is off. In many cases, performance should remain relatively stable unless you are increasing budget extremely aggressively. <highlight-pink>People often think that if €500 per day already feels expensive, €5,000 must be impossible</highlight-pink>, but technically scaling isn't that difficult for Meta. CPA shouldn't spike when you go from €100 to €1,000/day; if it does, something's broken. Real pressure usually hits between €1,000 and €10,000/day, and even then it should be manageable if creative, targeting, and product-market fit are solid. The real issue isn't the mechanics of scaling; it's whether you're already operating at the edge of your tolerable risk.
Attribution issues show up in other ways too. Teams often panic when they see gaps in SKAN or AEM reporting. But those gaps are structural; they'll always be there with those systems. The real solution is web-to-app flows, which close the attribution gap significantly. The tradeoff is that they require engineering resources, so smaller teams often avoid them and just accept the blindness.
So the diagnosis always starts with the question: where exactly is the constraint? Product, creatives, or media buying? Once you isolate that, you can go deeper instead of randomly changing tactics.

Campaignswell: Teams track hundreds of metrics. How do you decide which ones actually deserve to influence decisions?
Samet: It depends on the stage.
For small companies, I look first at creative testing. Are they testing enough creatives? Are they testing them properly in separate ad sets? Are they giving each test enough budget without artificially forcing spend?
One of the simplest but strongest signals is whether a creative actually spends. If the algorithm is set correctly and a creative starts spending naturally, that’s already a good sign you might have something strong.
From there, for creatives specifically, CPI and IPM matter a lot. IPM is especially important because it tells you whether the market is reacting to your message.
Beyond creatives, blended ROAS is critical. By blended, I mean paid and organic (unattributed) revenue combined, divided by total spend. Looking only at attributed data can distort decisions.
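A minimal sketch of the blended-ROAS idea, with invented numbers: the point is that attributed-only ROAS can look unprofitable while the blended figure is positive.

```python
# Blended ROAS as described above: all revenue (attributed plus
# organic/unattributed) divided by total ad spend. Figures are
# hypothetical, for illustration only.

def blended_roas(attributed_revenue: float,
                 unattributed_revenue: float,
                 total_spend: float) -> float:
    return (attributed_revenue + unattributed_revenue) / total_spend

# Attribution gaps can make a healthy account look broken:
attributed_only = 8_000 / 10_000               # 0.80 -> looks unprofitable
blended = blended_roas(8_000, 4_500, 10_000)   # 1.25 -> actually positive
```

This is why "attribution noise" alone is a weak reason to stop scaling: the decision metric should include the revenue the MMP cannot see.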
Then CAC and payback period are fundamental. If you’re running subscriptions, especially weekly ones, or you upsell coins inside the product, you need to understand how much additional revenue will come later. That’s where pLTV becomes very important.
I’ve worked with teams that know they can afford 40% ROAS on day one because historically that translates into 100%+ long-term ROAS. But that confidence only exists if you actually have predictive LTV data. Without it, you’re guessing.
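The 40%-day-one example above reduces to a simple division, assuming you have a validated pLTV multiplier. The 2.5× figure below is hypothetical:

```python
# Illustrative only: derive an affordable day-zero ROAS target from a
# predicted LTV multiplier, as in the 40% -> 100%+ example above.
# The multiplier must come from a validated pLTV model, not a guess.

def day_zero_roas_target(long_term_roas_goal: float,
                         ltv_multiplier: float) -> float:
    """If cohort revenue historically grows by `ltv_multiplier` over the
    payback window, this is the day-zero ROAS needed to hit the goal."""
    return long_term_roas_goal / ltv_multiplier

# Suppose 12-month cohort revenue is roughly 2.5x day-zero revenue:
target = day_zero_roas_target(1.0, 2.5)  # 0.40, i.e. the "40% ROAS" above
```

Without the predictive data behind the multiplier, the same calculation is just a guess dressed up as a target.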
Another important point for small teams: every channel needs a minimum spend or minimum number of events to work properly. If you don’t reach that threshold, the algorithm simply doesn’t stabilize.

Campaignswell: Do you recommend using predictive metrics in app growth? If yes, which ones do you actually trust, for whom do they make sense, and what kinds of decisions are they useful for?
Samet: Predictive metrics are very useful, especially when you don’t have a long enough lookback window.
For example, your one-year ROAS might be much higher than your three-month ROAS, but if you don’t have one year of data, you can’t optimize for it. So you need to set realistic goals. If you only have three months of reliable data, optimize toward predicted three-month ROAS.
If you know that more revenue will come later, you can allow yourself to spend more aggressively, maybe even scale budget 5×, but only if your predictive model supports that. Otherwise, you risk burning cash.
Another important aspect is transparency. If you’re using a predictive LTV service, you need to understand why the model predicts what it predicts. It shouldn’t be a black box.
<highlight-pink>For example, I like to compare day-zero ROAS, cumulative ROAS by cohort, and predicted LTV on weekly charts</highlight-pink>. I want to see how those curves behave relative to each other. If predicted LTV makes sense compared to real cumulative ROAS, that gives confidence.
But predictive LTV must reflect product complexity. If you sell weekly subscriptions with a three-day trial and yearly subscriptions without trial, that matters. If you have yearly subscriptions that include limited credits and users buy extra coins on top, that matters too. Complex monetization structures must be reflected in the prediction.
Predictive models rely heavily on day-zero signals. So you need to understand how day-zero behavior connects to future revenue in your specific business.
Before trusting predictive metrics, ask: does this provider really understand my product and its nuances? If they don’t, the numbers might look clean, but they won’t be reliable.

Campaignswell: Have you seen cases where teams made confident decisions based on the wrong data setup? What happened?
Samet: Yes. A very common example is <highlight-pink>manual pLTV models</highlight-pink>.
Some teams build their own predictive LTV in Google Sheets. They apply multipliers to day-zero revenue and assume that’s “normal.” But pLTV is highly dynamic. It changes by country, by month, by operating system, even by creative. Treating it as a static multiplier creates false confidence.
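A toy illustration of why the static multiplier drifts: each segment has its own true multiplier, so a single blended multiplier misestimates LTV as soon as the traffic mix shifts. All figures are invented:

```python
# Why a single static pLTV multiplier (the Google Sheets approach
# above) fails: multipliers differ by segment, so the blended estimate
# drifts with the traffic mix. All numbers are made up.

segments = {
    # (country, OS): (day-zero cohort revenue, true revenue multiplier)
    ("US", "iOS"):     (5_000, 3.0),
    ("DE", "Android"): (3_000, 1.8),
}

true_ltv = sum(rev * mult for rev, mult in segments.values())
day_zero_total = sum(rev for rev, _ in segments.values())

static_multiplier = 2.4  # e.g. last quarter's blended average
static_estimate = day_zero_total * static_multiplier

# The two diverge, and the gap widens whenever the US/DE mix changes:
print(true_ltv, static_estimate)
```

Shift more budget into the lower-multiplier segment and the static model starts overstating LTV, which is exactly the "false confidence" problem.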
Another problem is not maintaining the model. Even if it once worked, if you don’t constantly update and validate it, it quickly becomes detached from reality.
In one case, we weren’t sure what our correct day-zero target should be. The assumptions were based too much on historical averages, and pLTV curves were zigzagging heavily. We couldn’t clearly define how much we could afford to spend.
When we looked deeper, we realized that on certain days we had cohorts with unusual behavior, for example, more users buying extra credits. That distorted the multiplier logic. The predictive LTV provider was actually looking at the right signals, but our manual interpretation missed the nuance.
Once we gained confidence in the pLTV logic and validated it properly, we were able to set realistic daily spend goals.
And this matters. For a small company, spending $10K per day versus $50K per day can determine whether you survive the next few months. The stakes are very high.
That’s why we worked more closely with the predictive provider, had bi-weekly calls, and made sure we fully understood what the model was doing. Feeling secure in the data changed how aggressively we were willing to scale.

Campaignswell: If manual data handling is such a problem, what’s the most effective way to structure data as teams scale?
Samet: I’ve seen so many messy setups. And honestly, when I look at data that doesn’t make sense structurally, I get completely confused. If it’s too complicated, if you can’t break it down properly, if it’s not structured like a tree you can drill into level by level, it becomes almost unusable.
<highlight-pink>For example</highlight-pink>, one client was sending dozens of events to their MMP, but not the one that actually mattered: a combined purchase event covering all their monetization types (subscriptions, coins, upsells). The dashboard looked complete, but value optimization was impossible.
As a decision maker, I need to be able to quickly slice the data from different angles. If I want to see performance by country, then by OS, then drill down into a specific creative or cohort, I should be able to do that in seconds. When switching between views or combining filters feels complicated, decision-making slows down.
If the data is structured correctly, I can do my best work. But <highlight-pink>often the person building the dashboard doesn’t really understand marketing</highlight-pink>. They just present what they think looks right, and it doesn’t help with real decisions.
That’s why I genuinely dislike complex spreadsheets. With my clients, I try to move to proper structured data as early as possible. At least use Tableau, Looker, or a solid pLTV solution. Understand which companies are solving which part of the stack.
For example, what I liked about Campaignswell is that you get all the dimensions, filters, and metrics in one place, and you can run pLTV on top of that. That already replaces multiple separate tools. For me, that combination makes it a very strong service.

Campaignswell: In your experience, how does the order of focus between product, monetization, and marketing shape long-term outcomes?
Samet: This might sound counterintuitive. People often expect me to say: “Don’t do marketing before your product is ready.” But I see it differently. Marketing directly impacts how your product monetizes.
<highlight-pink>For example</highlight-pink>, some founders tell me: “Our organic install-to-payer rate is 2%. That’s too low. We can’t make paid acquisition profitable.” But when you optimize paid campaigns for purchases instead of installs, the install-to-purchase rate can jump to 20%. And people are surprised: “How can organic be 2% and paid be 20%?” The answer is simple: paid optimization targets people who are more likely to pay. Yes, CPI becomes higher, but ROAS improves because the algorithm is actively looking for buyers.
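The 2% vs. 20% point in numbers, with all figures hypothetical: purchase optimization raises CPI but can still lower the cost per paying user.

```python
# The organic-2%-vs-paid-20% comparison above, made concrete.
# CPI values below are invented for illustration.

def cost_per_payer(cpi: float, install_to_payer_rate: float) -> float:
    """What one paying user costs, given CPI and conversion rate."""
    return cpi / install_to_payer_rate

organic_like = cost_per_payer(1.50, 0.02)  # cheap installs, 2% pay
purchase_opt = cost_per_payer(6.00, 0.20)  # 4x pricier installs, 20% pay

# Despite the higher CPI, the purchase-optimized payer is cheaper:
print(organic_like, purchase_opt)
```

This is why judging paid acquisition by organic conversion rates, or by CPI alone, leads to the wrong conclusion.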
So marketing isn’t as separate from product readiness as many people think. You don’t always need to feel fully ready before testing paid acquisition. Sometimes marketing reveals monetization potential you didn’t see organically.
Of course, it can also fail. And if it fails, that’s useful too. It’s feedback. If paid traffic doesn’t convert, maybe something fundamental is missing — pricing, paywall, onboarding, value proposition.
I sometimes use marketing as a way to test real product-market fit. If it works, you scale. If it doesn’t, you go back and fix the fundamentals.

Campaignswell: When you first meet a team, what signals tell you they can scale 10×? And on the flip side, what red flags suggest a team will keep running on a treadmill unless their mindset changes?
Samet: A strong positive signal is <highlight-pink>ownership</highlight-pink>. The person responsible for growth understands not only their direct task, but also what surrounds it. They ask the right questions and think beyond their narrow scope.
<highlight-pink>For example</highlight-pink>, in creative work, it’s easy to say “we’re making variations.” But a red flag is when those variations are barely different. A green flag is when someone understands how small differences can materially change performance and applies logic, business understanding, and data to create meaningful variation.
Another strong signal is <highlight-pink>cross-functional collaboration</highlight-pink>. Teams that scale talk openly. Nothing is hidden in private DMs. Conversations happen in threads where others can jump in quickly. That reduces delays and prevents lost opportunities.
I also value teams where <highlight-pink>people are not afraid to admit mistakes</highlight-pink>. Growth involves trial and error. If the culture punishes mistakes, people stop experimenting. If the culture supports learning, the team improves faster.
On the red flag side, fear is the biggest limiter. Teams are often too scared to try, too scared to challenge assumptions, sometimes too scared of their bosses.
A concrete example is geographic expansion. I live in Germany, and I’ve seen many strong German products that never expand beyond the German-speaking market. They assume the US is too crowded or that their product won’t work abroad. But often they tried two influencers in the US and concluded it doesn’t work — while in Germany they worked with hundreds. Scaling to bigger markets requires the same level of effort and persistence.
If you want to 10× revenue, it probably won’t come from one small market. It will come from expanding into larger ones and competing seriously there. <highlight-pink>Assuming something won’t work before testing it properly is a red flag.</highlight-pink>
In short, teams that scale are curious, collaborative, and willing to challenge their own limits. Teams that stall are usually constrained by fear and narrow thinking.

Campaignswell: What’s the minimal team setup required to grow a subscription app? Which roles are essential at different stages?
Samet: At the early stages, founders need to understand all core areas of the business — product, design, paid marketing, creatives. Not necessarily to execute everything themselves, but to ask the right questions and not feel intimidated by any part of the system.
If you look at large companies, even billion-dollar ones, CEOs are often involved in what looks like minor decisions. But something like creative direction can become a major strategic lever when you’re expanding.
Another thing I see often is founders asking for case studies as proof of competence. I don’t think case studies determine whether someone is good. What matters is how they approach problems.
When someone asks me whether I’ve worked in the same app genre before, my answer is that adaptation matters more than genre. You don’t need to be a cat to sell cat food. The real value is understanding systems, being able to adapt, and bringing patterns learned from other companies.
Also, hiring someone for one narrow skill can backfire. Maybe you hire someone for Meta, but what you really need is creator marketing or stronger creative relationships. The best people understand marketing broadly and can shift based on what the business actually needs.

Campaignswell: What tools or marketing setup should every serious growth team have in place?
Samet: Some things are individual, but data infrastructure is not optional. <highlight-pink>You need a proper data tool</highlight-pink>. Ideally, with pLTV capabilities. And you need a creative intelligence layer. Those two are foundational.
Creative intelligence tools are extremely important. Being able to track multiple accounts, monitor competitors, see which creatives are running and how markets shift daily gives a huge advantage.
Everything else depends on stage and complexity.
<highlight-pink>For example</highlight-pink>, a small gaming company might start with Facebook SDK. As they grow, they may move to an MMP. At a larger scale, they might use CAPI, web-to-app, or web-to-web setups. Complexity increases over time, and tools should evolve with it.
If influencer marketing is a major channel, invest in the right tools for that. If you have a creative team, make sure they never hesitate to subscribe to tools they need, including AI tools. Small personal tools can be just as important as enterprise ones in today’s environment.
But at the core, strong data infrastructure and creative intelligence are non-negotiable.

Campaignswell: Finally, what’s something about growth you’re currently rethinking or questioning yourself?
Samet: One big theme for me right now is speed versus impact. It’s tempting to move fast: launch a new channel, jump on TikTok, produce 200 creatives with AI. But moving fast can also break things fast.
<highlight-pink>For example</highlight-pink>, before expanding to a new channel like TikTok, I now ask: did we really extract everything possible from Meta? Did we analyze it deeply enough? Or are we just chasing a new trend?
The same applies to creatives. I used to produce a huge number of creatives using AI tools. On paper, it looked efficient. But when I reviewed the results, I realized that my cost per winner didn’t improve. In some cases, it actually slowed me down because I was managing too much noise.
Now I focus much more on cost per winner instead of number of creatives. Volume alone doesn’t mean progress.
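"Cost per winner" as described above is just total creative cost over the number of creatives that actually scaled. A quick hypothetical comparison shows why volume alone doesn't help:

```python
# Cost per winner: total production + testing spend divided by the
# number of creatives that actually became scalable winners.
# All figures below are hypothetical.

def cost_per_winner(production_cost: float,
                    testing_spend: float,
                    winners: int) -> float:
    return (production_cost + testing_spend) / winners

# 200 AI variants finding 2 winners vs. 30 crafted concepts finding 2:
ai_heavy = cost_per_winner(2_000, 20_000, 2)  # $11,000 per winner
focused  = cost_per_winner(3_000, 9_000, 2)   # $6,000 per winner
```

The AI-heavy pipeline produced far more assets, but if testing spend is spread across noise, each winner ends up costing more, not less.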
I’m also more selective with tools. Instead of using ten different tools, I might use three. The question isn’t how many things you’re doing. It’s whether those things are actually increasing your chances of finding winners.
So for me, the shift has been from speed for its own sake to controlled, high-impact execution and being very clear about the risk before introducing a new system or process.

A quick jump to Campaignswell
If you’re thinking about LTV, payback windows, and how hard you can actually push spend without blowing up cash flow, that’s exactly the layer we obsess over at Campaignswell.
We built it for subscription apps that scale on paid. It connects your ad data, revenue data, and cohorts in one place and turns that into predictive LTV you can actually use for decisions. So when CAC jumps, you don’t panic, you know how it plays out over time, by country, OS, creative, and cohort.
If you’re curious how this would look on your own numbers, book a demo.
We’ll walk through your setup and show you what’s really driving your growth.

Co-founder & CEO at Campaignswell










