The Lean Startup: Difference between revisions
=== II – Steer ===
🦘 '''5 – Leap.''' In 2004, three college sophomores—Mark Zuckerberg, Dustin Moskovitz, and Chris Hughes—arrived in Silicon Valley with a fledgling campus social network and, despite little revenue and only 150,000 registered users, raised $500,000 in venture capital, followed less than a year later by $12.7 million. Investors cared that usage was intense and spreading: more than half of active users returned every day, and within weeks of launch on 4 February 2004, nearly three-quarters of Harvard undergraduates were using the site without a dollar of paid marketing. That pattern validated two leap-of-faith assumptions: a value hypothesis (students found the product genuinely useful) and a growth hypothesis (adoption accelerated through tight campus networks). The chapter names those two assumptions as the riskiest parts of any plan and urges teams to make them explicit. Experiments then revolve around turning those assumptions into testable hypotheses rather than debating abstractions or copying precedents. Planning runs “backwards” through the loop: decide what must be learned, specify the measurement, then build the smallest product that can produce that learning. With each iteration, evidence replaces rhetoric, and the engine of growth either catches or stalls. When learning shows the model won’t work, the remedy is a deliberate strategic change, not incremental polishing. Analogy-driven storytelling gives way to data about real behavior. ''The problem with analogies like this is that they obscure the true leap of faith.''
🧫 '''6 – Test.''' Groupon’s origin story begins with The Point, an activism platform in Chicago that struggled until a simple, handmade experiment—twenty people buying a two-for-one pizza coupon in the restaurant below the office—proved a different path. Early “MVP” execution was unapologetically scrappy: a basic blog, coupons as PDFs assembled by hand, and manual fulfillment, which nonetheless put the company on pace for $1 billion in sales and deals across more than 375 cities worldwide. A video MVP did similar work for Dropbox: a short screencast seeded with in-jokes for the Digg community triggered rapid sign-ups—more than 10,000 Diggs within twenty-four hours—before expensive sync technology was built. The concierge MVP shows the same logic at human scale: in Austin, CEO Manuel Rosso and his VP of product built Food on the Table around a single paying family, visiting weekly, curating recipes tied to the local grocer’s specials, and collecting a $9.95 check by hand. As confidence grew, the team replaced visits with e-mail, automated price parsing, and later online payments, scaling only what proved useful. A “Wizard of Oz” variant let Max Ventilla and Damon Horowitz fake hard technology behind Aardvark’s Q&A interface, learning what to build only after real use revealed it; Google later acquired Aardvark for a reported $50 million. Across these cases, tests elicit behavior, not opinions, and they define success in advance so results are unambiguous. The point is not to be frugal for its own sake but to learn faster than rivals can copy features or spend on polish. When an experiment teaches nothing, the effort was waste; when it teaches quickly, even a crude artifact is a win. ''A minimum viable product (MVP) helps entrepreneurs start the process of learning as quickly as possible.''
📏 '''7 – Measure.''' Startups begin as models on paper; progress becomes real only when learning is made visible through innovation accounting. At IMVU, an early MVP was buggy and sales were low, but the team shipped new features daily for roughly seven months and still saw funnel metrics flat, forcing a clearer view of what to measure. Cohort analysis replaced aggregates: each month's new users were tracked from registration through first login and beyond, revealing, for example, that about 60% of those who joined in February 2005 logged in at least once. With that lens, the work proceeds in three steps: establish a baseline with an MVP, tune the engine of growth with targeted changes, and then decide to pivot or persevere. Vanity metrics—page hits, raw totals—obscure cause and effect; actionable metrics tie specific changes to outcomes and make next steps obvious. Accessibility matters too: reports must be simple, people-based, and widely shared so every contributor can see the same story. Auditable data lets anyone spot-check results, which curbs “success theater” and builds trust when hard calls are needed. Grockit’s founder Farbood Nivi drew on years at Princeton Review and Kaplan and institutionalized this discipline with daily split-test summaries mailed to every employee, making learning milestones concrete. When the baseline stops moving despite honest tuning, the numbers make the case for a pivot without drama. ''Innovation accounting enables startups to prove objectively that they are learning how to grow a sustainable business.''
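The cohort view described in that chapter can be sketched as a small computation. The data, field names, and rates below are invented for illustration; they are not IMVU's figures:

```python
from collections import defaultdict

# Toy signup events: (user_id, cohort_month, logged_in_after_signup).
# All values are made up to illustrate the cohort idea.
events = [
    ("u1", "2005-02", True),
    ("u2", "2005-02", True),
    ("u3", "2005-02", False),
    ("u4", "2005-03", True),
    ("u5", "2005-03", False),
]

def cohort_login_rates(events):
    """Percent of each monthly signup cohort that logged in at least once."""
    totals = defaultdict(int)   # users who joined in each month
    logged = defaultdict(int)   # of those, how many ever logged in
    for _, month, did_login in events:
        totals[month] += 1
        if did_login:
            logged[month] += 1
    return {m: round(100 * logged[m] / totals[m], 1) for m in sorted(totals)}

print(cohort_login_rates(events))
# {'2005-02': 66.7, '2005-03': 50.0}
```

Grouping users by when they joined, rather than summing a raw total, is what lets a team tie a product change in one month to a behavior shift in that month's cohort — the difference between a vanity metric and an actionable one.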
🔄 '''8 – Pivot (or Persevere).''' Votizen demonstrates how evidence turns into course correction: CEO David Binetti gathered early signals from prospective civic participants—interest in action, trust in verified voter status, and frustration with an empty social network—and refocused the product on a single feature that worked. The result, @2gov, helped citizens contact their elected officials quickly via existing networks such as Twitter while converting that digital intent into paper letters delivered to congressional offices. With innovation accounting as a guide, each milestone shortened the time between iterations, revealing what to keep and what to drop. To keep decisions from drifting, teams schedule regular “pivot or persevere” meetings, neither so frequent that noise overwhelms signal nor so rare that sunk costs lock in a bad path. A pivot is not a tweak; it changes a fundamental element—scope (zoom-in/zoom-out), customer segment or need, channel, value capture or business architecture, engine of growth, platform, or even the underlying technology. The discipline is to treat each pivot as a new strategic hypothesis and test it with a fresh MVP. When the stakes are explicit and the evidence public, fear of being wrong gives way to momentum from learning. Misplaced pride or attachment to past effort is the enemy; the method respects vision by insisting it meet the market. ''That change is called a pivot: a structured course correction designed to test a new fundamental hypothesis about the product, strategy, and engine of growth.''
=== III – Accelerate ===