Building A Successful Organization By Mastering Failure John Goulah (@johngoulah) Etsy
Marketplace • $1.93B Annual GMS 2014 • 1.4M active sellers • 20M+ active buyers • 30% international GMS • 57%+ mobile visits
Infrastructure • over 5500 MySQL databases • 750K graphite metrics/min • 1.3GB logs written/min • 50M - 75M gearman jobs / day • 30-50 deploys / day
Company • Headquartered in Brooklyn • Over 700 employees • 7 ofﬁces around the world • 80+ dogs / 80+ cats
Learning Org a company that facilitates the learning of its members and continuously transforms itself
Systems Thinking process of understanding how people, structure, and processes inﬂuence one another within a larger system
Personal Mastery the commitment of each individual to learning holds great importance in a learning organization
Mental Models the assumptions held by individuals and organizations
Shared Vision creates a common identity that provides focus and energy for learning
Team Learning the problem solving capacity of the organization is improved through better access to knowledge and expertise
Learning About Failure • architecture reviews • operability reviews • blameless post mortems
failure and success come from the same source
can study the system at any time
inflection points • architecture reviews • early feedback and discussion • operability reviews • held before launching • blameless post mortems • held after a failure
Architecture Reviews understand the costs and beneﬁts of a proposed solution, and discuss alternatives
Etsy Tech Axioms • we use a small number of well known tools • all technology decisions come with trade-offs • with new technology, many of those trade-offs are unknown • we’re growing. things change
with new technology many of those tradeoffs are unknown
Departures a departure is when new technologies or patterns are introduced that deviate from the current known methods of operating the system and maintaining the software
How do I know I need an architecture review? when there is a perceived departure from current technology choices or patterns
How early do you hold them? early enough to be able to bail out or make major course corrections
Who should come? • the people presenting the change • key stakeholders (sr. engineers, or arch review working group) • everyone else that wants to learn about the proposed changes to the system
Architecture Review Meeting Format
Preparation • a proposal is written in a shared document and circulated • comments are added, discussed, and potentially resolved in advance • initial questions for the meeting are collected in a tool such as Google Moderator
Some General Questions • Do we understand the costs of this departure? • Have we asked hard questions about trade-offs? • What will this prohibit us from doing in the future?
Some General Questions (cont) • Are we impacting visibility, measurability, debuggability and other operability concerns? • Are we impacting testability, security, translatability, performance and other product quality concerns? • Does it make sense?
The Arch Review • proposal is presented to the group • discuss questions and concerns • decide if we are moving forward or need further discussion
you're saying my project might not move forward?
Why might this end a project? • we learned through this discussion that an alternative is better • we ﬁnd goals overlap with other projects that are in progress • we discover that it isn't worth the costs now that we have a better idea what they are
At the end we should have • detailed notes from the conversation • agreement on the tricky components, with those documented • a compilation of learnings and questions • a decision of whether to keep going with the project, stop and rethink, or gather more information
Operability Reviews understand how the system could break, how we will know, and how we will react
When do we do operability reviews? • after architecture reviews in the product lifecycle, generally right before launch • when we need to gain increased conﬁdence for launch due to the technology, product, or communication choices being risky • if there's a chance you'd surprise teams that operate the software
Who comes to the operability review? representatives from: • Product • Development • Operations • Community/Support • QA
Some Questions • Has the feature been tested enough to deploy to production? • Does everyone know when it will go live, and who will push the feature? • Is there communication about the feature ready to go out with the feature? • Is it possible to turn up this feature on a percentage basis, dark launch, or gameday it?
Some Questions (cont) • Does the launch involve any new production infrastructure? • If so, are those pieces in monitoring or metrics collection? • If so, is there a deployment pipeline in place? • If so, is there a development environment set up to make it work in dev? • If so, are there tests that can be and are run on CI?
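The percentage-based rollout and dark launch the questions above ask about can be sketched as a minimal feature flag. This is an illustrative sketch only, not Etsy's actual Feature API; `FLAGS`, `is_enabled`, and the flag name are all made up for the example:

```python
import hashlib

# Illustrative flag store; a real system would load this from config
# so it can be changed without a deploy.
FLAGS = {
    "new_checkout": {"enabled_percent": 5},  # ramp 0 -> 100 during launch
}

def is_enabled(flag_name, user_id):
    """Bucket each user deterministically (0..99) so the same user
    gets the same answer for a flag while it ramps up."""
    flag = FLAGS.get(flag_name)
    if flag is None:
        return False
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < flag["enabled_percent"]
```

Ramping `enabled_percent` from 0 to 100 gives the gradual "turn up" the review asks about; a dark launch would run the new code path for bucketed users while discarding its output.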
Contingency Checklist a list of things that could possibly go "wrong" with a new feature, and what we could do about it
Issue What could possibly go wrong with the feature launched in production?
Likelihood What is the likelihood of each item going wrong?
Comments Any comments about the item?
Impact This is just a measure of how impactful this will be if it does actually turn out to be a concern.
Engineering What do we do to mitigate the issue with the item (e.g. can we gracefully degrade?)
Onsite Messaging What is the messaging to the user in the forums, blog, and social media if this needs graceful degradation?
PR Is PR needed for the contingency (e.g. a larger-scale failure)?
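The checklist columns above (Issue, Likelihood, Comments, Impact, Engineering, Onsite Messaging, PR) can be captured as a simple structure. A minimal sketch, with illustrative field names and an invented example row:

```python
from dataclasses import dataclass

@dataclass
class ContingencyItem:
    """One row of the contingency checklist (names are illustrative)."""
    issue: str             # what could go wrong in production
    likelihood: str        # e.g. "low", "medium", "high"
    impact: str            # how bad it is if it actually happens
    engineering: str       # mitigation, e.g. graceful degradation
    onsite_messaging: str  # forums / blog / social copy if we degrade
    pr_needed: bool        # does PR need to be involved?
    comments: str = ""

# Hypothetical entry for an imagined feature launch.
checklist = [
    ContingencyItem(
        issue="search backend times out under launch traffic",
        likelihood="medium",
        impact="high",
        engineering="flag off the feature; fall back to cached results",
        onsite_messaging="status post explaining degraded search",
        pr_needed=False,
    ),
]
```

Keeping the checklist as structured data makes it easy to review in the operability meeting and to confirm every item has a mitigation and messaging plan before launch.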
Blameless Post Mortems
What does it cover? • walking through a timeline of events • learning how things are expected to work "normally", adding the context of everyone’s perspective • exploring what we might do to improve things for the future
Local Rationality we want to know how it made sense for someone to do what they did at the time
searching for second stories instead of human error • asking why leads to who is responsible • asking how leads to what happened
Avoiding Human Error Human error points directly to individuals in a complex system. But, in complex systems, system behaviour is driven fundamentally by the goals of the system and the system structure. People just provide the ﬂexibility to make it work.
Avoiding Human Error (cont) Human error implies deviation from a "normal" or "ideal" way of working, but in complex situations and tasks there is often no ideal that can be precisely and exactly described; many interconnected touchpoints influence the decisions that are made
Recognizing Human Error • be aware of other terms for it: slip, lapse, distraction, mistake, deviation, carelessness, malpractice, recklessness, violation, misjudgement, etc • don’t point to individuals when you really want to understand the system itself and the work • how do you feel when something goes wrong? • is it to ﬁnd who did it / who screwed up, or to ﬁnd how it happened?
Other Things to Avoid
Root Cause • it leads to a simplistic and linear explanation of how events transpired • linear mental models of causality don’t capture what is needed to improve the safety of a system • ignores the complexity of an event, which is what should be explored if we are going to learn • leads directly to blaming things on human error
Nietzschean anxiety when situations appear both threatening and ambiguous we seem to demand a clear causal agency; because if we cannot establish this agency then the "problem" is potentially irresolvable
Hindsight Bias inclination, after an event has occurred, to see the event as having been predictable, despite there having been little or no objective basis for predicting it
Counterfactuals the human tendency to create possible alternatives to life events that have already occurred; something that is contrary to what actually happened
Post Mortem Meeting Format
Meeting Format • Timeline • Discussion • Remediation Items
Timeline • a rough timeline scaffolding is required • talk about facts that were known at the time, even if hindsight reveals misunderstandings in what we knew • look out for knowledge that some people were aware of, that others were not, and dig into that • no judgement about actions or knowledge (counterfactuals) • tell people to hold that thought if they jump to remediation items at this point
Timeline (cont) • continually ask "What are we missing?" until those involved feel it's complete • continually ask "Does everyone agree this is the order in which events took place?" • make sure to include important times for events that happened (alerts, discoveries) • reach a consensus on the timeline and move on to the discussion
Discussion • When an action or decision was taken in the timeline, ask the person: "Think back to what you knew at the time, why did that action make sense to you at the time?" • Did we clean up anything after we were stable, how long did it take? • Was there any troubleshooting fatigue?
Discussion (cont) • Did we do a good job with communication (site status, support, forums, etc)? • Were all tools on hand and working, ready to use when we needed them during the issue? Were there tools we would have liked to have? • Did we have enough metrics visibility to diagnose the issue? • Was there collaborative and thoughtful communication during the issue?
Remediation • Remediation items should have tickets associated with them to follow up on • There can be further post meeting discussion on these but tasks should not linger
Remediation questions • What things could we do to prevent this exact thing from happening in the future? • What things could we do to make troubleshooting similar incidents in the future easier?
We Can Learn Before and After Failure
Before • Architecture reviews for new technology • Operability reviews to gain launch conﬁdence
After • Postmortems are done soon after a failure • avoid human error, counterfactuals, hindsight bias, and root cause
Questions? John Goulah (@johngoulah) Etsy