
Building a Successful Organization By Mastering Failure






Slide 0

Building A Successful Organization By Mastering Failure John Goulah (@johngoulah) Etsy


Slide 1


Slide 2

Marketplace • $1.93B Annual GMS 2014 • 1.4M active sellers • 20M+ active buyers • 30% international GMS • 57%+ mobile visits


Slide 3

Infrastructure • over 5500 MySQL databases • 750K graphite metrics/min • 1.3GB logs written/min • 50M - 75M gearman jobs / day • 30-50 deploys / day


Slide 4

Company • Headquartered in Brooklyn • Over 700 employees • 7 offices around the world • 80+ dogs / 80+ cats


Slide 5

Values


Slide 6


Slide 7

Learning Org a company that facilitates the learning of its members and continuously transforms itself


Slide 8

Five Disciplines


Slide 9

Systems Thinking process of understanding how people, structure, and processes influence one another within a larger system


Slide 10

Personal Mastery the learning and growth of the individual holds great importance in a learning organization


Slide 11

Mental Models the assumptions held by individuals and organizations


Slide 12

Shared Vision creates a common identity that provides focus and energy for learning


Slide 13

Team Learning the problem solving capacity of the organization is improved through better access to knowledge and expertise


Slide 14

Learning About Failure • architecture reviews • operability reviews • blameless post mortems


Slide 15

failure and success come from the same source


Slide 16

context


Slide 17

can study the system at any time


Slide 18

inflection points • architecture reviews: early feedback and discussion • operability reviews: held before launching • blameless post mortems: held after a failure


Slide 19

Architecture Reviews


Slide 20

Architecture Reviews understand the costs and benefits of a proposed solution, and discuss alternatives


Slide 21

Etsy Tech Axioms • we use a small number of well known tools • all technology decisions come with trade offs • with new technology, many of those trade offs are unknown • we’re growing. things change


Slide 22

with new technology many of those tradeoffs are unknown


Slide 23

Departures a departure is when new technologies or patterns are introduced that deviate from the current known methods of operating the system and maintaining the software


Slide 24

How do I know I need an architecture review? when there is a perceived departure from current technology choices or patterns


Slide 25

How early do you hold them? early enough to be able to bail out or make major course corrections


Slide 26

Who should come? • the people presenting the change • key stakeholders (sr. engineers, or arch review working group) • everyone else that wants to learn about the proposed changes to the system


Slide 27

Architecture Review Meeting Format


Slide 28

Preparation • a proposal is written in a shared document and circulated • comments are added, discussed, and potentially resolved in advance • initial questions for the meeting are collected in a tool such as Google Moderator


Slide 29

Some General Questions • Do we understand the costs of this departure? • Have we asked hard questions about trade-offs? • What will this prohibit us from doing in the future?


Slide 30

Some General Questions (cont) • Are we impacting visibility, measurability, debuggability and other operability concerns? • Are we impacting testability, security, translatability, performance and other product quality concerns? • Does it make sense?


Slide 31

The Arch Review • proposal is presented to the group • discuss questions and concerns • decide if we are moving forward or need further discussion


Slide 32

you're saying my project might not move forward?


Slide 33

Why might this end a project? • we learned through this discussion that an alternative is better • we find goals overlap with other projects that are in progress • we discover that it isn't worth the costs now that we have a better idea what they are


Slide 34

At the end we should have • detailed notes from the conversation • agreement on the tricky components, with those components documented • a compilation of learnings and questions • a decision of whether to keep going with the project, stop and rethink, or gather more information


Slide 35

Operability Reviews


Slide 36

Operability Reviews understand how the system could break, how we will know, and how we will react


Slide 37

When do we do operability reviews? • after architecture reviews in the product lifecycle, generally right before launch • when we need to gain increased confidence for launch due to the technology, product, or communication choices being risky • if there's a chance you'd surprise teams that operate the software


Slide 38

Who comes to the operability review? representatives from: • Product • Development • Operations • Community/Support • QA


Slide 39

Some Questions • Has the feature been tested enough to deploy to production? • Does everyone know when it will go live, and who will push the feature? • Is communication about the feature ready to go out alongside it? • Is it possible to turn up this feature on a percentage basis, dark launch, or gameday it? (a percentage ramp-up check is sketched below)
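As a rough, hypothetical sketch of the "percentage basis" option above (not Etsy's actual feature-flagging tooling; the feature name, user id, and percentage are invented), a deterministic bucketing check could look like this:

```python
import hashlib

# Hypothetical percentage-based ramp-up check: hash the feature name and
# user id into a stable bucket 0-99, then enable the feature when the
# bucket falls under the rollout percentage. A given user keeps the same
# answer at a given percentage, and raising the percentage only adds users.
def feature_enabled(feature_name, user_id, percentage):
    key = ("%s:%s" % (feature_name, user_id)).encode("utf-8")
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 100
    return bucket < percentage

# Invented usage: ramp a "new_checkout" feature to 5% of users.
print(feature_enabled("new_checkout", user_id=42, percentage=5))
```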


Slide 40

Some Questions (cont) • Does the launch involve any new production infrastructure? • If so, are those pieces in monitoring or metrics collection? (see the sketch below) • If so, is there a deployment pipeline in place? • If so, is there a development environment set up to make it work in dev? • If so, are there tests that can be and are run on CI?
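For the monitoring/metrics question above, a minimal sketch (not Etsy's production client; the host, port, and metric name are placeholders) of emitting a statsd-style counter for a new piece of infrastructure:

```python
import socket

# statsd-style counter: send a "name:value|c" datagram over UDP
# (8125 is the conventional statsd port). Values here are placeholders.
def bump(metric, value=1, host="127.0.0.1", port=8125):
    payload = ("%s:%d|c" % (metric, value)).encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()

bump("search.new_cluster.requests")
```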


Slide 41

Contingency Checklist


Slide 42

Contingency Checklist a list of things that could possibly go "wrong" with a new feature, and what we could do about it (an example entry follows the column descriptions below)


Slide 43

Issue What could possibly go wrong with the feature launched in production?


Slide 44

Likelihood What is the likelihood of each item going wrong?


Slide 45

Comments Any comments about the item?


Slide 46

Impact This is just a measure of how impactful this will be if it does actually turn out to be a concern.


Slide 47

Engineering What do we do to mitigate the issue with the item (e.g. can we gracefully degrade?)


Slide 48

Onsite Messaging What is the messaging to the user in the forums, blog, and social media if this needs graceful degradation?


Slide 49

PR Is PR needed for the contingency (e.g. a larger-scale failure)?
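Putting the columns above together, one invented example entry (the scenario, likelihoods, and wording are all hypothetical; the field names mirror the columns described on the preceding slides):

```python
# Hypothetical contingency-checklist entry; fields follow the columns
# above: issue, likelihood, comments, impact, engineering, onsite
# messaging, and PR.
contingency_checklist = [
    {
        "issue": "search index lags behind newly created listings",
        "likelihood": "medium",
        "comments": "only affects listings created after the launch window",
        "impact": "high",
        "engineering": "flag off the new indexer and fall back to the old pipeline",
        "onsite_messaging": "forum/blog note that new listings may take longer to appear in search",
        "pr": "not needed unless the degradation is large scale or prolonged",
    },
]
```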


Slide 50

Blameless Post Mortems


Slide 51

What is a post mortem? a postmortem is a facilitated meeting during which people involved/interested/close to an accident or incident debrief together on how we think the event came about


Slide 52

What does it cover? • walking through a timeline of events • learning how things are expected to work "normally", adding the context of everyone’s perspective • exploring what we might do to improve things for the future


Slide 53

Local Rationality we want to know how it made sense for someone to do what they did at the time


Slide 54

searching for second stories instead of human error • asking "why" leads to "who is responsible" • asking "how" leads to "what happened"


Slide 55

Avoiding Human Error Human error points directly to individuals in a complex system. But, in complex systems, system behaviour is driven fundamentally by the goals of the system and the system structure. People just provide the flexibility to make it work.


Slide 56

Avoiding Human Error (cont) Human error implies deviation from "normal" or "ideal", but in complex situations and tasks there is often no normal or ideal that can be precisely and exactly described; many variable, interconnected touchpoints influence the decisions that are made


Slide 57

Recognizing Human Error • be aware of other terms for it: slip, lapse, distraction, mistake, deviation, carelessness, malpractice, recklessness, violation, misjudgement, etc • don't point to individuals when you really want to understand the system itself and the work • how do you feel when something goes wrong? • is it to find who did it / who screwed up, or to find out how it happened?


Slide 58

Other Things to Avoid


Slide 59

Root Cause • it leads to a simplistic and linear explanation of how events transpired • linear mental models of causality don't capture what is needed to improve the safety of a system • it ignores the complexity of an event, which is what should be explored if we are going to learn • it leads directly to blaming things on human error


Slide 60

Nietzschean anxiety when situations appear both threatening and ambiguous, we seem to demand a clear causal agency, because if we cannot establish this agency the "problem" is potentially irresolvable


Slide 61

Hindsight Bias the inclination, after an event has occurred, to see the event as having been predictable, despite there having been little or no objective basis for predicting it


Slide 62

Counterfactuals the human tendency to create possible alternatives to life events that have already occurred; something that is contrary to what actually happened


Slide 63

Morgue https://github.com/etsy/morgue


Slide 64

Post Mortem Meeting Format


Slide 65

Meeting Format • Timeline • Discussion • Remediation Items


Slide 66

Timeline • a rough timeline scaffolding is required • talk about facts that were known at the time, even if hindsight reveals misunderstandings in what we knew • look out for knowledge that some people were aware of but others were not, and dig into that • no judgement about actions or knowledge (counterfactuals) • tell people to hold that thought if they jump to remediation items at this point


Slide 67

Timeline (cont) • continually ask "What are we missing?" until those involved feel it's complete • continually ask "Does everyone agree this is the order in which events took place?" • make sure to include important times for events that happened (alerts, discoveries) • reach a consensus on the timeline and move on to the discussion (an example scaffolding is sketched below)
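As a loose illustration of the scaffolding (all times, sources, and events are invented), the agreed timeline can be captured as a simple ordered list of entries recording when alerts fired, what was discovered, and what actions were taken:

```python
# Invented postmortem timeline scaffolding: an ordered list of entries
# agreed on by those involved before the discussion starts.
timeline = [
    {"time": "14:02", "source": "alert",   "event": "checkout error rate crosses the alert threshold"},
    {"time": "14:05", "source": "on-call", "event": "page acknowledged; engineer starts reviewing graphs"},
    {"time": "14:18", "source": "deploy",  "event": "suspect config change identified and rolled back"},
    {"time": "14:27", "source": "alert",   "event": "error rate recovers; incident declared stable"},
]
```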


Slide 68

Discussion • When an action or decision was taken in the timeline, ask the person: "Thinking back to what you knew at the time, why did that action make sense to you?" • Did we clean up anything after we were stable, and how long did it take? • Was there any troubleshooting fatigue?


Slide 69

Discussion (cont) • Did we do a good job with communication (site status, support, forums, etc)? • Were all tools on hand and working, ready to use when we needed them during the issue? Were there tools we would have liked to have? • Did we have enough metrics visibility to diagnose the issue? • Was there collaborative and thoughtful communication during the issue?


Slide 70

Remediation • Remediation items should have tickets associated with them to follow up on • There can be further post meeting discussion on these but tasks should not linger


Slide 71

Remediation questions • What things could we do to prevent this exact thing from happening in the future? • What things could we do to make troubleshooting similar incidents in the future easier?


Slide 72

In Summary


Slide 73

We Can Learn Before and After Failure


Slide 74

Before • Architecture reviews for new technology • Operability reviews to gain launch confidence


Slide 75

After • Postmortems are done soon after a failure • avoid "human error", counterfactuals, hindsight bias, and "root cause" as explanations


Slide 76

Questions? John Goulah (@johngoulah) Etsy


Slide 77

