Principles: Difference between revisions

👥 '''24 – To get the people right ....''' The work begins with choosing Responsible Parties: the people accountable for goals, outcomes, and the machines that produce them, with the ultimate RP being the one who bears the consequences. Every person reports to someone, and energy is directed by “the force behind the thing,” not by titles alone. Hiring starts with design: define the role, then match values, abilities, and skills—in that order—so nature fits the seat. Don’t assume success elsewhere will translate; test for character and capability with evidence rather than impressions, and think like a coach building a roster that plays well together. Candidates who ask great questions and challenge thinking are prized; “warts” are shown up front so fit is real, and teams “play jazz” with compatible partners who also push one another. Once on board, people are trained, tested, evaluated, and sorted continuously; patterns in performance are built from specifics up. Managers “squeeze the dots” (without over-squeezing) to connect many small data points into fair assessments that can be acted on. Metrics, surveys, and formal reviews support judgment so strengths are leveraged and weaknesses addressed. Accountability is tied to clear responsibilities, and job slip is guarded against so roles don’t blur and standards don’t drift. Over time, the right people in the right seats compound faster than any process tweak. Getting the people right is getting the machine right. ''Match the person to the design.''
 
🧑‍🤝‍🧑 '''25 – Remember that the WHO is more important than the WHAT.''' In a growing organization, the fastest way to improve any initiative is to name the Responsible Party (RP) for the goal, the outcome, and the “machine” that produces it. The person chosen must be the one who truly bears the consequences, not a committee or a figurehead with diffused authority. Clear lines of reporting matter: everyone should report to someone, and work should be traced back to the force behind the thing, not to titles or polite fictions. When accountability is explicit, priorities line up, trade-offs are made faster, and problems stop bouncing around the org chart. The same clarity prevents “group-think” ownership—if many own it, no one owns it—and makes escalation straightforward when an RP needs help. RPs are picked for track record and logic under pressure, because outcomes—not optics—prove judgment. Defining who is in charge does not silence others; it sets the stage for candid input before the RP decides and owns the result. As stakes rise, so does the need to separate decision rights from popularity and to keep responsibilities intact through transitions. The discipline of naming RPs scales from single tasks to whole departments, creating a lattice of ownership that keeps the culture honest. Put simply, get the people right at the point of control and the work becomes simpler and truer. The book’s work principles treat this choice as the keystone that aligns incentives, information, and speed. ''Recognize that the most important decision for you to make is who you choose as your Responsible Parties.''
 
📝 '''26 – Hire right, because the penalties for hiring wrong are huge.''' Hiring starts with design: define the job as a system, then match the person to the design by looking first at values, then abilities, then skills. Evidence beats impressions, so prior success is weighed alongside cause-and-effect thinking relevant to the seat, and no one assumes that success elsewhere will automatically transfer. The interview is two-way and unvarnished: candidates see the warts, ask lots of real questions, and understand the deal before anyone says yes. Teams work best when people “play jazz” together—compatible enough to sync quickly, different enough to challenge one another into better ideas. Compensation follows the person rather than the job title, with stability and opportunity balanced, performance metrics tied (at least loosely) to pay, and a bias to “north of fair” to retain great partners. Generosity is a policy, not a mood, because long-term partnerships outlast transactional bargains. References, tests, and cases build a picture from specifics up, and hiring bars don’t drop for convenience when timelines pinch. The process never really ends: after the offer, training and early tests confirm fit or prompt a quick, fair exit. Getting this right compounds culture and results for years; getting it wrong drains time, money, and trust. Hiring therefore sits at the junction of realism and aspiration: see the person clearly, see the job clearly, and only then connect the two. ''Match the person to the design.''
 
📊 '''27 – Constantly train, test, evaluate, and sort people.''' Improvement is treated as a loop, not a speech: people evolve through frank assessments of strengths and, especially, weaknesses that get in the way. Accuracy beats kindness in the short run because, in the long run, accuracy and kindness become the same thing—clear feedback lets people grow or move on. Managers document observations, build syntheses from specifics, and keep an open ledger of dots (discrete data points) so patterns are visible and bias is reduced. Dots are squeezed—looked at together—without being oversqueezed into snap judgments, and exceptions are surfaced before they become norms. Performance surveys, metrics, and formal reviews buttress judgment, and results over time outweigh a single good or bad week. Sorting is continuous, not cruel: some will rise quickly, some will improve steadily, and some will fit better elsewhere, and the system acknowledges all three with respect. People who learn fast get bigger tests; those who resist feedback get coaching and clear timelines; those who can’t or won’t adapt are moved out for everyone’s sake. The goal is a fair machine that rewards learning speed and reliability, not politics or charm. Consistency across managers keeps the bar the same from team to team so movement feels principled, not arbitrary. The practice converts pain into progress and prevents one person’s blind spots from becoming the group’s stagnation. Over time, the organization becomes a school that ships products, not a school of hard knocks. ''Evaluate accurately, not kindly.''
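The dot-collecting loop described above — building patterns from specifics up without over-squeezing — can be sketched as a small data model. All names here (`Dot`, the believability `weight`, the `min_evidence` threshold) are illustrative assumptions, not Bridgewater's actual tooling:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Dot:
    """One discrete observation of a person on an attribute, rated 1-10."""
    person: str
    attribute: str
    rating: int          # 1 (weak) .. 10 (strong)
    weight: float = 1.0  # believability of the rater (illustrative)

def squeeze_dots(dots, min_evidence=5):
    """Aggregate many small data points into per-attribute averages.

    Attributes with fewer than `min_evidence` dots are withheld, so a
    single good or bad week is not over-squeezed into a snap judgment.
    """
    buckets = defaultdict(list)
    for d in dots:
        buckets[(d.person, d.attribute)].append(d)
    summary = {}
    for key, ds in buckets.items():
        if len(ds) < min_evidence:
            continue  # not enough specifics yet to call it a pattern
        total_weight = sum(d.weight for d in ds)
        summary[key] = sum(d.rating * d.weight for d in ds) / total_weight
    return summary
```

The point of the sketch is the threshold: patterns are only asserted once enough dots accumulate, which is how "accuracy over kindness" stays fair rather than arbitrary.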
 
⚙️ '''28 – To build and evolve your machine ....''' Management shifts from firefighting to engineering when leaders look down on their machines from a higher level and continually compare outcomes with goals. The job is to design, run, and improve a system: build great metrics, beware of getting sucked down into noise, and keep the big picture tied to day-to-day realities. Probing is routine and welcomed—problems are forecast before they land, levels below direct reports are heard, and answers are tested rather than assumed. Hearing well becomes a learned skill: train your ear, make probing transparent, and reward people who surface issues early. Accountability is explicit: contracts about who does what by when are honored or reset in sync, and failures are sorted into broken contracts versus missing contracts. Guardrails exist where needed, from role clarity to escalation paths, without smothering initiative, and “department slip” is watched so responsibilities don’t blur. Strategy holds steady while tactics adapt—expediency never outruns the mission—and both are written down so the machine remembers. As lessons accumulate, rules are simplified, tools are refined, and governance adds checks so no single person becomes the last line of defense. The manager’s day is spent orchestrating people and processes, not starring in every scene. This stance keeps learning compounding and stress tolerable because the same few moves—diagnose, design, do—repeat across problems. In short, the work of leadership is the craft of system design more than heroics. ''Understand that a great manager is essentially an organizational engineer.''
 
🕹️ '''29 – Manage as someone operating a machine to achieve a goal.''' Running an organization is treating it like a machine built to reach explicit goals: compare actual outcomes with what the machine should be producing and keep improving the design. From a high vantage point—zoomed out like viewing continents from space—toggle down into cities, neighborhoods, then the room to see how people and processes create results through time. Constantly test whether misses come from a flawed design or from people not handling their responsibilities, and remember that one-off blips differ from patterns you’ll see by sampling enough cases. Scan and probe everything you’re responsible for, varying your involvement based on confidence; get closer when warning signs appear and stay close enough to avoid surprises. Probe to the level below your direct reports to understand how they manage, listen for threads worth pulling, and keep contact tight when a crisis brews. Keep roles crisp: everyone reports to someone, decision rights sit with the Responsible Party, and performance is traced to “the force behind the thing,” not titles. Guard against “job slip” when circumstances quietly change a role without explicit agreement, and reset responsibilities before confusion hardens. Use simple, objective tools—Issue Logs, metrics, daily updates, and checklists—to make the machine’s health visible and to spot-check reality. Remember who has what responsibilities so well-intended “help” doesn’t turn into everyone chasing the same ball and leaving positions uncovered. Managing this way shifts the work from heroics to engineering: design, run, learn, and redesign. Treated as a machine, the organization compounds learning because outcomes feed back into better rules and clearer ownership.
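The "simple, objective tools" named above — Issue Logs in particular — can be sketched minimally: every problem is written down, owned by a named Responsible Party, and closed only once a root cause is recorded. The class and field names below are illustrative assumptions, not the book's actual software:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Issue:
    description: str
    responsible_party: str       # the one person who owns the outcome
    root_cause: Optional[str] = None
    closed: bool = False

class IssueLog:
    """Minimal log: problems are written down, owned, and tracked to closure."""

    def __init__(self):
        self.issues = []

    def raise_issue(self, description, responsible_party):
        issue = Issue(description, responsible_party)
        self.issues.append(issue)
        return issue

    def close(self, issue, root_cause):
        # An issue closes only once its root cause is recorded,
        # so fixes address patterns rather than symptoms.
        issue.root_cause = root_cause
        issue.closed = True

    def open_issues(self):
        return [i for i in self.issues if not i.closed]
```

The design choice worth noting is that `close` demands a root cause: the log makes the machine's health visible precisely because nothing disappears from it silently.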
 
🚫 '''30 – Perceive and don't tolerate problems.''' In 2011, a small error in a client memo—caught by two senior investment advisors after it had already gone out—exposed a wider breakdown in the client service quality-control process. A review showed that memos weren’t being properly graded by colleagues and that the tracking metrics meant to monitor standards had drifted, which meant more mistakes were slipping out unnoticed. The team did the work most people avoid: surfaced problems publicly, treated the pain as a signal rather than a shame, and looked for patterns instead of hunting for a single culprit. Using a simple “five whys” drill-down, they moved from the symptom (an error) to the proximate cause (no effective QC), then to capacity constraints, then to a misjudged workload, and finally to the root cause: a manager who failed to anticipate problems and plan. People were reminded that managers exist to get at truth and excellence, not to keep everyone comfortable; defensiveness is common, so the process has to insist on facts. The lesson extended beyond memos: celebrate finding what is not going well so it can be fixed, and insist that every problem be written down, owned, and tracked to closure. Habitually tolerating small misses grows systemic failure; treating each miss as a data point builds a clear picture of where the machine needs redesign. When issues are raised early and often, the organization stays calm because surprises are rare and solutions are routine. The point isn’t blame—it’s clarity about what to change so outcomes improve. In a culture that welcomes bad news, speed increases because less energy goes into hiding it.
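The "five whys" drill-down in the memo story can be expressed as a simple chain from symptom to root cause. The causal chain below paraphrases the steps in the story; the function itself is an illustrative sketch, not a real diagnostic tool:

```python
def five_whys(symptom, ask_why):
    """Repeatedly ask 'why?' until no deeper cause is known.

    `ask_why` maps an observation to its proximate cause, or returns
    None at the bottom; the last entry in the chain is the root cause.
    """
    chain = [symptom]
    while (cause := ask_why(chain[-1])) is not None:
        chain.append(cause)
    return chain

# The causal chain from the 2011 memo example, paraphrased:
CAUSES = {
    "error in a client memo": "no effective quality control on memos",
    "no effective quality control on memos": "team lacked capacity to grade memos",
    "team lacked capacity to grade memos": "workload was misjudged",
    "workload was misjudged": "manager failed to anticipate problems and plan",
}

chain = five_whys("error in a client memo", CAUSES.get)
# chain[-1] is the root cause: the manager's failure to plan
```

Walking the chain rather than stopping at the first answer is the whole technique: the symptom points to a process, the process to a capacity, and the capacity to a person's planning.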
 
🔬 '''31 – Diagnose problems to get at their roots.''' The objective of diagnosis is specific: identify the exact people or designs that caused a problem and see whether they have a pattern of causing problems. The most common failure is treating issues as one-offs and jumping to fixes, which guarantees recurrence; the second is depersonalizing the diagnosis so no one’s strengths and weaknesses are actually examined. A structured drill-down helps: in about four hours, a small group can gain an 80/20 understanding of a troubled area by listing the biggest outcomes, asking who is responsible for each, and separating design flaws from performance gaps. The steps are kept separate to avoid meandering, the questions are tight and non-random, and all relevant people are included so buy-in and information quality rise together. “Why?” is asked repeatedly until the root cause emerges—e.g., insufficient staffing traces to poor anticipation and planning by a specific manager—which points to either a people change, a design change, or both. Managers are reminded that their duty is to get to truth and excellence, even when that means firing, reassigning, or adding guardrails to compensate for a person’s blind spots. Diagnoses look at second- and third-order effects so fixes don’t create new failures, and they distinguish between capacity issues, cultural norms, and individual reliability. When done well, the output is a clear problem statement tied to responsible names, a small set of root causes, and a basis for an actionable plan. This way of working converts pain into learning and prevents the insanity of repeating the same action while expecting a different result. Over time, repeated, accurate diagnoses become a map of the organization’s real strengths and recurring weak links.
 
🛠️ '''32 – Design improvements to your machine to get around your problems.''' After the diagnosis, design begins like a movie script: who will do what, in what sequence, with what metrics and timelines, until the goal is reached. At Bridgewater, the client service analytics team used that approach after standards slipped; David McCormick—then head of the Client Service Department—moved quickly to implement changes. He replaced people who had let quality erode, selected a high-standards investment thinker as the new Responsible Party, and rebuilt the process so work was graded, tracked, and reviewed on a standing schedule. The redesign included clear interfaces between departments and protections against “department slip,” in which support groups try to set direction rather than support those who own outcomes. Monthly reviews compared planned and actual progress, adjusted workloads to real capacity, and kept the system from drifting back to old habits. Designs were visualized across scenarios—Which people in which roles? What incentives or penalties? Which tools?—so second- and third-order effects could be anticipated before launch. Disputes about who does what were resolved “at the point of the pyramid” by aligning responsibilities with the person who owns the goal. The work product of design is specific, not abstract: named tasks, dated milestones, and metrics tied to accountable people. Once in motion, the plan is executed transparently and revised as reality teaches, which keeps learning tight and momentum steady. Seen this way, design is creativity constrained by facts: it turns root-cause insight into a machine that will not repeat the same failure. Done repeatedly, it builds an organization that gets better because its fixes are engineered, not improvised.
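The work product of design described above — named tasks, dated milestones, accountable people, and monthly comparison of planned against actual — can be sketched as a small structure. The names (`Milestone`, `slipped`) and the sample dates are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Milestone:
    """A named task with a date and an accountable person — specific, not abstract."""
    task: str
    owner: str
    due: date
    done: bool = False

def slipped(milestones, today):
    """Monthly review: which planned tasks are past due and not done?"""
    return [m for m in milestones if not m.done and m.due < today]
```

A review then has something objective to act on: each slipped milestone names the task, the date missed, and the person who owns it, which is what keeps the system from drifting back to old habits.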
 
✅ '''33 – Do what you set out to do.'''