Compliance and Risk Management For Fun and Profit with Elliot Murphy
Compliance and risk management gets a bad reputation in engineering circles: there’s the “it’s just unnecessary overhead!” camp and also the “risk management is just the Department of No” camp. It doesn’t have to be that way, though. In this episode of Real World DevOps, I’m joined by Elliot Murphy, CEO of KindlyOps, to talk about how compliance and risk management can be forces of good and how to do that work without the stress-inducing toil and headache.
About the Guest
Elliot Murphy is a senior executive and technologist with more than 20 years of success in critical software infrastructure, online services, and healthcare. Elliot is the founder of Kindly Ops, a cybersecurity firm bringing DevOps approaches to Governance Risk & Compliance, serving regulated industries such as biotech and fintech. His interest in Governance Risk & Compliance began when working as CTO of a healthcare startup and realizing the burden of regulatory compliance was slowing life-saving healthcare innovations from being brought to market.
- Book recommendation: People-Centric Security by Lance Hayden
- Book recommendation: How to Measure Anything in Cybersecurity Risk by Douglas Hubbard and Richard Seiersen
- Book recommendation: How to Measure Anything by Douglas Hubbard
Mike Julian: Running infrastructure at scale is hard, it's messy, it's complicated, and it has a tendency to go sideways in the middle of the night. Rather than talk about the idealized versions of things, we're going to talk about the rough edges. We're going to talk about what it's really like running infrastructure at scale. Welcome to the Real World DevOps podcast. I'm your host, Mike Julian, editor and analyst for Monitoring Weekly and author of O’Reilly's Practical Monitoring.
Mike Julian: This episode is sponsored by the lovely folks at Influx Data. If you're listening to this podcast, you're probably also interested in better monitoring tools, and that's where Influx comes in. Personally, I'm a huge fan of their products, and I often recommend them to my own clients. You're probably familiar with their time series database, Influx DB. But, you may not be as familiar with their other tools: Telegraf for metrics collection from systems, Chronograf for visualization, and Kapacitor for real-time streaming. All of this is available as open source, and they also have a hosted commercial version too. You can check all this out at influxdata.com.
Mike Julian: Hi, everyone, my name's Mike Julian. I'm here with Elliot Murphy, the CEO of KindlyOps. Welcome to the show, Elliot.
Elliot Murphy: Hey, nice to be here.
Mike Julian: Why don't you tell us a bit about what you and KindlyOps do?
Elliot Murphy: Yeah, so we're a cybersecurity consulting firm, and we mostly help regulated companies, biotechs, and fintechs with governance, risk, and compliance. We take a DevOps approach, so we take all of the good practices from that world and apply them to what's traditionally been a pretty boring and bureaucratic set of activities.
Mike Julian: Okay, so what are these regulations we're talking about?
Elliot Murphy: Some people have heard a thousand times before, HIPAA, PCI, SOC 2. Others are a little newer, GDPR, or more specialized like FISMA. There's regulations from the Food and Drug Administration around medications and therapeutics. Pretty much every country in the world and every industry has their own set of regulations or laws. As software technology expands ever further into different parts of business, even very old businesses that didn't have much technology in the past, there's regulations that people have to comply with.
Mike Julian: Okay, so that makes a lot of sense, but every time I've been in a company with these regulatory requirements, it's really just been a huge pain in the ass. I don't like dealing with it. No one I worked with likes dealing with it. We meet the bare minimum and hope for the best. How do you view these regulations? Do you have that same take on them?
Elliot Murphy: It's funny, a few years ago, I was working as the CTO of a healthcare startup founded by a doctor. She had just some amazing innovations, and I got to see firsthand how expensive and burdensome regulatory compliance was. Then, even more upsetting, as I was meeting people, scientists and researchers at universities, I was hearing all these stories of how they were discouraged or prevented or didn't want to bother bringing actual life-saving innovations to market because of the burden of regulatory compliance. So I'm right there with you that by default they can have a pretty high cost. Our entire mission has been to try and get the value out of those regulations but without the human toil. It's a pretty existential question, why even have rules and laws?
Mike Julian: Right.
Elliot Murphy: I think some rules and laws are good, but they don't have to be as miserable as they have been. We really take a human-centric approach.
Mike Julian: Okay. What does that even mean, a human-centric approach to regulation? Regulations are, to me, they're handed down from on high. It's like, "You comply or face the consequences."
Elliot Murphy: Yeah, so it's funny, we've actually developed this four-step process, and it's very, very common when you're talking with people who are excited about building products and are excited about serving patients and customers and are just excited about doing their core job, they're usually pretty annoyed at externally imposed rules that aren't quite obvious how they're beneficial. Depending on people's life experiences, they have these different perspectives. Some people, particularly, if they've been doing risk and compliance work for their whole career, they see the value, and they get frustrated that other people push back on regulations. What does it actually mean to put people first? We start with building empathy for different worldviews. One definition of culture is that it's beliefs and assumptions that drive decisions and behavior. That makes sense. But, then, if you think about that again, what is another word for beliefs and assumptions that you use to make decisions, drive behavior? Those are mental models. That's a very freeing realization that, okay, what we're actually talking about when we say culture, security culture, for example, are mental models. You and I are going to have our favorite mental models, our default mental models. They're mental models we go back to immediately when we're in a high-stress situation. But, we're perfectly capable of learning other worldviews, other mental models and deciding when it's beneficial to apply them. We can try them on for size, and we can say, "Oh, yeah, I can see. Even though this is not my favorite way to think about the problem, I can see there's some utility in this case or that case."
Mike Julian: Yeah.
Elliot Murphy: Dr. Lance Hayden actually developed this fantastic Creative Commons licensed security culture diagnostic. It's in his book, People-Centric Security, and it outlines four different cultures or mental models: a trust culture, an autonomy culture, a compliance culture, and a process culture. In every single company that I go into, you see so strongly that they have one or two that they really prefer, and that tends to be their default identity. Then, they're usually struggling to get proficient at using the other ones when appropriate. A large organization, a government organization, a healthcare organization is going to tend to have a really, really strong process culture or compliance culture, and they typically struggle to take calculated risks, to do innovation, to build trust and collaboration.
Elliot Murphy: Startups, non-profits tend to be very, very strong with trust culture and autonomy culture. You live or die on your own. You get to innovate. You can do anything you want. We care mostly about the people and our relationships. But, they really tend to struggle when it comes time to deal with compliance or deal with a stronger process. That very first thing we do is just give people the tools and the language to understand that there are multiple valid mental models, and that it's worth understanding and empathizing with people who might be coming from a different preferred one.
Mike Julian: That's such a fantastic way to look at it. I used to work for the Department of Energy, and we were really strong on process. My facility that I worked at was, it was part of the Manhattan Project. We'd been around since the 1940s. We were really big on process and really big on compliance. We actually had a risk management team. That's all they did. We called them the non-technical side of security.
Elliot Murphy: Yeah.
Mike Julian: But, any time someone wanted to do a thing, if security wanted to implement some new process, it had to go through risk management. Risk management's default answer was no, so much so that I would often go to the director of risk management, "Hey, I have a question," or "I have an idea," and he'd look up and say, "No," in just deadpan. That was the end of the discussion. He hadn't heard my idea, it was just to him, no. Why add anything? We don't need it. We're fine. Compliance is already working. Why change it?
Elliot Murphy: Yeah, it can be extremely frustrating, and it can also be frustrating for people in those same organizations who are looking at the massive amounts of money they're investing. At the same time they're trying to uphold the process and compliance stuff, they're trying to innovate, they're trying to do new things, and they're frustrated at the speed of delivery, or the lack of delivery.
Mike Julian: Right. Yeah, definitely been there too. But, on the other side, you talked about these startups. It's a common thing where a small, scrappy startup would get acquired by a giant behemoth, for example, say, a bank acquires a small FinTech startup. Well, the fintech startup is used to moving quickly and playing fast and loose with compliance rules, and they get acquired by a bank. The bank is, they're experts at compliance and process. Whereas the startup is, they're experts at the other two you've been talking about. Now you get a clash of cultures going on.
Elliot Murphy: Yeah, there absolutely is a clash of cultures and one of the big ideas in DevOps is you need to understand the rest of the business. Don't just do your technology stuff in a vacuum but understand the business. Understanding that regulatory controls are crucial to those businesses is absolutely necessary. The reason the banks care so much about that is, because they will lose their banking license if they don't comply. When you get those clash of cultures, particularly, in startups, one of my favorite ways to try and explain it is to explain that there's two buckets of work. One is doing a good job. Usually, technical people, people who are building product in a startup, they are super excited about doing a good job. That's why they come to work in the morning.
Elliot Murphy: The second half of the work is getting credit for doing a good job. That's where they're sometimes digging in their heels and doing, frankly, doing a terrible job at first of communicating and showing evidence and getting credit for the good work that they've already done. A lot of what we do to help people is help them understand, "Listen, there are things that you can do that will generate automated evidence that will make your existing good work more visible and transparent to the rest of the business, and you can start getting credit for it."
Mike Julian: Okay, how does that actually work? How would you approach such a thing of trying to, both doing compliance and getting the credit for doing that compliance work, especially, considering a lot of your peers aren't going to really care?
Elliot Murphy: Yeah, so, in our four-step process, we talked about the first step being build empathy. The second thing we typically do, and this is on the startup side, someone who's trying to come into this world, is put in place a controls baseline.
Mike Julian: Okay, what does that mean?
Elliot Murphy: You're typically going to have imposed on you some set of regulations that you need to comply with. It might have been decided you're going to do SOC 2, because of the nature of your business, it might be PCI or HIPAA. Then, associated with those set of regulations are going to be a set of controls, required controls. Or, you might just be trying to make part of the company more mature. A thing I've been seeing take off a lot in the last year is the NIST cybersecurity framework, the CSF.
Mike Julian: That's a great framework.
Elliot Murphy: Yeah, banks are using it. Other companies are using it. Those, even companies that might not necessarily have a more specific thing like SOC 2 or HIPAA or GDPR, they're still picking up this cybersecurity framework, just because of concern of the sometimes scary security environment that we're operating in. This is where all of the good practices of DevOps about removing toil and doing automation and using version control and code for expressing policy comes into play in a big way. Instead of all kinds of manual checks around controls that are really frustrating, you can enhance your continuous delivery pipelines to do a lot of those checks for you, quicker, faster, easier, and the important thing there is the right way to do something, the safe way to do something, has to be the most convenient way to do it.
Mike Julian: Yeah.
Elliot Murphy: If you make it more convenient to have your builds pass the security checks, that's the way everybody's going to do it. Fundamentally, everybody showed up to work that day to do a good job anyway. They get frustrated when they see things done in a really inefficient way. Taking that controls baseline, and making sure you automate it as much as possible, integrate it natively into your cloud systems, and all the other systems that you're using, people get right onboard. They're super happy with it, and they actually get excited when it finds things for them. They're like, "Oh, this server build blah, blah, blah, is non-compliant, and I can fix it." Just, it's amazing the engagement you see.
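The pipeline checks Elliot describes can be as simple as a script that fails the build when a control is violated. Here's a minimal sketch under made-up assumptions: the inventory format and the two controls are hypothetical illustrations, not taken from any specific framework.

```python
# Hypothetical controls-baseline check of the kind a CI pipeline might run.
# The controls here (disks must be encrypted, SSH must not be public)
# are illustrative stand-ins, not from SOC 2, HIPAA, or the NIST CSF.

def check_controls(servers):
    """Return a list of (server_name, violation) tuples for non-compliant servers."""
    violations = []
    for server in servers:
        if not server.get("disk_encrypted", False):
            violations.append((server["name"], "disk encryption disabled"))
        if 22 in server.get("public_ports", []):
            violations.append((server["name"], "SSH exposed publicly"))
    return violations

# A toy inventory, as a pipeline might load from cloud APIs or config files.
inventory = [
    {"name": "web-1", "disk_encrypted": True, "public_ports": [443]},
    {"name": "build-7", "disk_encrypted": False, "public_ports": [22, 443]},
]

for name, problem in check_controls(inventory):
    print(f"NON-COMPLIANT: {name}: {problem}")
```

In a real pipeline, a non-empty violations list would fail the build, which is what makes the safe way also the convenient way.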
Mike Julian: Yeah, absolutely. I was looking on your website and saw that you gave a talk a couple years ago where in the talk you made a comment about many years ago it used to be that people would complain about version control adding overhead. Therefore, we're not going to use it, because it slows us down. But, that has continually happened with every new thing we have. Yet, we wouldn't look at version control and say, "No, that adds overhead that would slow us down." In fact, it's quite the opposite.
Elliot Murphy: Yeah, if it's slowing you down-
Mike Julian: You wouldn't say-
Elliot Murphy: ... you're doing it wrong.
Mike Julian: ... not use it. Yeah. Now, it's like, "If we don't use it, it's going to slow us down."
Elliot Murphy: Yeah, exactly.
Mike Julian: Do you think we're going to get there with security too?
Elliot Murphy: I'm going to say we're going to get there with risk.
Mike Julian: Okay.
Elliot Murphy: Security is one component of risk.
Mike Julian: Okay.
Elliot Murphy: There's a lot of technology change in the security environment happening very, very rapidly. There's a lot of technology change happening in general. I don't see that stabilizing the way version control as a particular technique has stabilized. There's a ton of change that's going to keep happening there for the next ten years.
Mike Julian: Yeah, that makes sense.
Elliot Murphy: But, where I am seeing a total breakthrough and shift is in risk, and that's actually step three of our four-step process. Once you have a controls baseline in place, keep in mind that controls, and compliance in general, are a minimum bar, a lowest common denominator. Getting up to that minimum bar might cause a bunch of improvement in your organization, but it doesn't in any way guarantee continuous improvement, that in year two, three, and four you're going to continue to get better. That's where doing risk analysis comes in. Most people that I've talked to, their experience with risk, like what you described with the risk team at the Department of Energy, has been qualitative: we've got low, medium, and high risks, one through five.
Mike Julian: Yup.
Elliot Murphy: That is frustrating for people, because it's an intellectually terrible way to think about the problem. It's primitive. It's not okay. There are so much better ways to deal with it. The better way to deal with it is called quantitative risk assessment instead of qualitative. There's actually an open standard, I think it was published in 2009, and it's only now starting to take off. But you can do quantitative risk assessment, and you can actually make significantly better decisions on the risk that you're taking on.
Elliot Murphy: Peter Drucker famously said that all profit in business comes from risk. When you're trying to have detailed debates about, do we need this control or that control, or does this regulation make any sense? Can we just accept this risk versus needing to do something about it? How do you make that conversation meaningful? The entire point of all of that debate is to give you a better quality decision on the risk that you're going to take in the business. You might decide not to do that part of the business. You might decide to buy some insurance and transfer risk. You might decide to have some new controls. You might decide you're spending way too much on controls that aren't actually helping your risk and get rid of them. That quantitative risk analysis with the open FAIR standard is the thing that everybody working around security, around technology is going to have to be conversant in. You just have a much better quality conversation.
Mike Julian: That's incredible insight, because in my own experience working with risk management and people focused on compliance, how I view their view of this is that they're trying to get rid of all risk. But, what you're saying is that's not really possible.
Elliot Murphy: Mm-mm (negative).
Mike Julian: It's not actually the job anyways. What we're actually talking about is understanding what the risk is and then coming up with ways to determine what we should do about it. How much risk are we willing to accept?
Elliot Murphy: Exactly. That's where things get really exciting, because engineers love to be contrarian when I'm first talking about quantitative risk. They like to throw up cases like, well, some things you can't put a financial value on, and what about the value of human life, and all of this stuff. If you really just step back, you realize there are very different risk profiles for people, very different risk tolerances, even for something that seems on its face to be obvious. I'll just use an example of some medication. Say there was a medication that, if you took it, there was a 25% chance it would kill you. That sounds terrible, right? Shouldn't-
Mike Julian: Right.
Elliot Murphy: ... do that.
Mike Julian: Yeah.
Elliot Murphy: What if you were a person who had a 1% chance of living another six months because of your cancer or something else? Now-
Mike Julian: I might consider taking that risk now.
Elliot Murphy: I think most people would take the medicine.
Mike Julian: Right.
Elliot Murphy: That's why actually having a structured conversation about the risk is so crucial, because you actually get really good quality decisions that take reality into account versus just high, medium, low, we don't accept high risk in this organization. That doesn't make any sense. But, going, "Well, given this scenario, this is an appropriate trade-off, and we've done the analysis." You actually have the analysis that you can show, and that's getting credit for your work. That's a case where it actually starts to enable you to take on more risk than you would have been able to take on without putting in the work. This is why just security isn't enough, just compliance isn't enough. This whole world of governance, risk, and compliance and getting credit for your work, it actually becomes an enabler for the business rather than a blocker.
Mike Julian: That's wild to me that having more structured conversations around risk actually makes things better.
Elliot Murphy: Yeah, that's the whole idea. There'd be no point in coming to work in the morning if we didn't really believe that it would make it better for people. That goes back to the mission in the first place. By default, a lot of this stuff was really frustrating for people, and it doesn't need to be. We've seen these improvements happen in all kinds of areas of business where work is frustrating, from automating system configuration to monitoring to various business processes. This is just another one where some really exciting ways of thinking about the problem have started to come together.
Mike Julian: Yeah, absolutely. You mentioned there's your four-step process. We've hit three. What's your number four?
Elliot Murphy: Fourth is about a learning organization. You got the first step of building empathy, just giving yourself the headspace to actually think about the problem. Two is your controls baseline, that's your minimum bar. Three is doing your scenario-based risk assessment with open FAIR. Then, four is ... The scenario-based risk assessment, that's where you're actually doing unique things to your company, not just a lowest common denominator, but unique things that reduce risk or help you manage risk for the things that are unique to your company, the unique challenges you face. Then, four is about continuous improvement. How do you actually get better year after year? How do you make sure that all of the people across your organization are able to help you get better?
Elliot Murphy: There's a second framework from Lance Hayden called FORCE, the FORCE metrics, and I just loved the work he did in this People-Centric Security book. FORCE is an acronym, and these 25 metrics are about gathering, benefiting from these five different domains. The five domains are failure, operations, resilience, complexity, and expertise. Those are dramatically different metrics than, how many security incidents did we have last year, and thinking somehow that if you bully people into not reporting security incidents that you're somehow doing better as a company.
Elliot Murphy: Then, those metrics like when you talk about learning from failure, building resilience, benefiting from complexity and expertise, that directly leads you to doing some of the other good practices from the DevOps and site reliability worlds like experiential learning with game day exercises, using different analysis frameworks like Cynefin for helping make sense of very unknown, chaotic areas that you might be exploring and then trying to move the work in those different complexity domains to be more well-understood, more structured, and just thinking about it, just like with risk, thinking about it in a more structured way. When you start doing that in an organization, it is amazing just doing the survey for the FORCE metrics the first time, the conversations that come out of it, right?
Mike Julian: Mm-hmm (affirmative).
Elliot Murphy: You'll have a management team say, "We are absolutely, 100% agree with this, this, and this." You'll have an engineering team say, "We completely disagree that we do any of those things." You'll have a business team split down the middle. Just the ability that the organization has to learn from that temperature taking and then improve their communication and change some of their practices, that gets into this virtuous cycle where now you have the company improving, and everybody's talking about how to do it better. They're engaged, and they're doing a genuinely better job. The quality improvements and the safety improvements that you're going to get from that are so far beyond what you get from a compliance checklist, the minimum bar.
Mike Julian: Yeah, absolutely. I'm seeing two different directions that you could come from on this compliance stuff. One, you've got the startup who's just learned they have to become HIPAA compliant to go after some customers they want, or they've just been acquired. Two, you've got people working at very large process and compliance-oriented companies that are trying to become more nimble. They're trying to take advantage of more stuff in the market they could be doing. For the startup and for the large enterprise, and they're trying to move in different directions, do you have advice for each one?
Elliot Murphy: The startups, I think we've talked about a little bit more with setting a controls baseline and then doing the scenario-based risk assessment.
Mike Julian: Right.
Elliot Murphy: In a large enterprise, you probably already have the controls baseline. You probably already have a lot of process, and you're trying to bring more innovation in. That is the amazing thing about this open FAIR standard for doing quantitative risk assessment, because just as much as it shows you where you can add controls, it also shows you where you're overcontrolled. In the FAIR model, risk is very clearly defined as a dollar value with a probability, and the quantity you use to measure risk is annual loss exposure. The inputs for that are loss events and loss magnitudes. You combine the probability distribution of your loss events with the distribution of your loss magnitudes, and you get a probability distribution that says, "We have an 80% chance of losing between $10,000 and $20,000 next year."
Elliot Murphy: You can think about it like, a real easy example to think about it is like stealing laptops. What's my risk of losing laptops next year? We've got a hundred people in the company. We've got a hundred laptops. We have this much travel. Maybe, you do all of the work, and you figure, our loss magnitude per laptop that we lose is probably $3,000, on the upper end, it might be $10,000. Probably, we're going to lose between four and ten laptops next year. There's all kinds of techniques you can use to do those estimates, but that gives you a number.
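The laptop example can be sketched as a small Monte Carlo simulation. The distributions below are illustrative stand-ins, not calibrated FAIR estimates; the rough figures (four to ten laptops lost per year, $3,000 typical and $10,000 upper-end loss per laptop) come from Elliot's example.

```python
import random
import statistics

random.seed(7)  # reproducible demo

def simulate_annual_loss(trials=100_000):
    """Monte Carlo sketch of annual loss exposure for lost laptops.

    Assumptions (illustrative, not calibrated): loss events are
    uniform over 4..10 laptops per year; loss magnitude per laptop
    is triangular with low $1,000, mode $3,000, high $10,000.
    """
    losses = []
    for _ in range(trials):
        events = random.randint(4, 10)  # laptops lost this simulated year
        total = sum(random.triangular(1_000, 10_000, 3_000) for _ in range(events))
        losses.append(total)
    return losses

losses = simulate_annual_loss()
losses.sort()
p10, p90 = losses[len(losses) // 10], losses[9 * len(losses) // 10]
print(f"80% chance annual loss falls between ${p10:,.0f} and ${p90:,.0f}")
print(f"Mean annual loss exposure: ${statistics.mean(losses):,.0f}")
```

The output is exactly the kind of statement Elliot describes: an X% chance of losing between two dollar amounts next year, which you can then weigh against the cost of controls.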
Mike Julian: Right.
Elliot Murphy: Then, in a highly controlled environment where you're trying to enable some innovation, doing that type of risk analysis around a new proposed activity can be totally eye opening. Okay, what's our loss exposure from this activity? It was like, “Well, you have no controls in this environment. You want to do all this stuff. You're going to have very high loss events.” Sure. But, what's your loss magnitudes? “Oh, well, there's actually no customer data in the environment, and worst-case, we might lose a few thousand because of X, Y, or Z.” That's a ... You can have a way better quality decision based on that.
Mike Julian: I think a lot of people listening, especially, like myself, I've personally experienced this. Where you realize that your team, your company is, they're spending $500,000 to protect against a $10,000 loss.
Elliot Murphy: Exactly.
Mike Julian: We're like, "I think you've got it backwards."
Elliot Murphy: Yeah, and that just doesn't make sense. That's why one-size-fits-all doesn't cut it. It's also why you have to put in the work around being able to communicate and get credit for your work. You might be able to intuitively jump to the conclusion that it's fine for me to try this experiment, but it's really worth learning how to communicate the risk analysis that you've done, and that you're capable of complying with processes in other parts of the business, so that you're permitted to go off and do those things. You can't just jump straight to the end and not show your work.
Mike Julian: Right, as much as I told my math teacher in high school that I totally could, it doesn't work like that in the real world. Especially not when you have a company that must remain compliant. If you have auditors and regulatory people talking to your company, you want to stay off the SEC's radar.
Elliot Murphy: Yeah, you want to stay off their radar, but I think even more important than that, you want to be able to prove that you're doing a good job if people are questioning. That manifests in very important ways. You might want to prove that when event X happened, we weren't negligent, and therefore people can stay out of jail. The broader you look across multiple departments, multiple companies, multiple countries, bad stuff happens every day. It's super, super important to put in the work to show that you're meeting a reasonable standard of care. You see this really prominently in healthcare. I think we need to start using those same sort of mental models across all of tech.
Mike Julian: Yeah, I completely agree. It's interesting you mentioned healthcare, because healthcare in the US, particularly, is, there's a lot of healthcare startups, and there's a lot of healthcare innovation happening. But, it feels like the hospitals and the large organizations are just now catching up. I'm starting to see a lot of innovation from the insurance companies about just how they manage their systems, how they approach these problems. In fact, it seems like a lot of the insurance companies are doing a better job than some of the startups I've seen as far as innovation and how they think about risk. Then, again, that's also the entire reason why they exist.
Elliot Murphy: That's the entire reason that they exist, and they've been doing it for over a hundred years. It is amazing how much there is to learn from the insurance world, from actuarial science and things like that. Some of those techniques which were incredibly labor-intensive or incredibly expensive are now available to anybody with a spreadsheet or a Jupyter notebook. That's what's really exciting is that these tried and true techniques for working with risk calculations are now available to absolutely anyone.
Mike Julian: Yeah, which is just wild to me. That's so cool.
Elliot Murphy: For sure.
Mike Julian: We've talked quite a bit about theory of compliance and what all this means, and how people can work to improve it. What's something that people can do today or this week, something much more actionable?
Elliot Murphy: I think the biggest set of tools you can immediately take on as an individual is understanding how to think about quantitative risk and recalibrating the way you think about measuring uncertainty. The thing I would ask everybody to do is read the book, How to Measure Anything in Cybersecurity Risk.
Mike Julian: I love the ... Is that actually a separate book, How to Measure Anything in Cybersecurity?
Elliot Murphy: There is, so there's a co-
Mike Julian: That's awesome.
Elliot Murphy: Douglas Hubbard wrote the original famous, How to Measure Anything.
Mike Julian: Which I have read, and it's a wonderful book, but then again, I really hope you like Monte Carlo simulations, because oh, boy, you get some.
Elliot Murphy: Yeah, you're going to get some. Then, there's a much more recent book called How to Measure Anything in Cybersecurity Risk, and there's a coauthor for that one, I think it's Richard Seiersen. I probably got the name wrong. But, that one tends to get much more concrete with applying the techniques in the domain of cybersecurity. But, just reading through that will completely change your ability to talk about it-
Mike Julian: That's-
Elliot Murphy: ... and-
Mike Julian: ... awesome.
Elliot Murphy: ... think about and make decisions about it.
Mike Julian: Yeah, and piggybacking off of that recommendation, I also highly recommend reading the original, those who haven't. It is also a wonderful book and opens your eyes to measurements, things that ... How do you measure something that seems unmeasurable?
Elliot Murphy: Yeah.
Mike Julian: Well, everything is actually measurable. Measuring a movement from one place to another is actually a measurement. How much did it change? Well, that's a different question. You mentioned this earlier about there's a 20% chance that we're going to lose between this much money and this other much money. That's actually a valid measurement. Most people only think in terms of absolute measurements, but we don't have to.
Elliot Murphy: Absolutely, and that is the big idea, right? Is that Hubbard points out that measurement is reduction in uncertainty. That's such a useful definition, because you can say, "There's no way for me to estimate that." "Well, is it more than a million?" "Oh, definitely, not." "Is it more than one?" "Definitely." "Oh, well, now we've got a measurement." Then, there's other things we can do to narrow that measurement, and we're never going to get it down to a single point maybe, but you can then do useful calculations with that range of probabilities.
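Hubbard's "measurement is reduction in uncertainty" idea is easy to make concrete: even a very wide interval is a measurement, and intervals still support useful calculation. The records-and-cost numbers below are invented purely for illustration.

```python
# "Is it more than a million?" "Definitely not." "Is it more than one?"
# "Definitely." Even that exchange produces a measurement: an interval.
records_exposed = (1, 1_000_000)

# Follow-up calibration questions narrow the interval further:
records_exposed = (5_000, 50_000)
cost_per_record = (50, 200)  # dollars, another (hypothetical) calibrated range

# We never reach a single point, but ranges still support useful
# calculation, such as bounding the total cost of an incident.
low = records_exposed[0] * cost_per_record[0]
high = records_exposed[1] * cost_per_record[1]
print(f"Total cost is somewhere between ${low:,} and ${high:,}")
```

In a fuller analysis you'd treat these ranges as probability distributions and sample them, as in the laptop example, rather than just multiplying the endpoints.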
Mike Julian: Yeah, that's amazing. All right, well, Elliot, it has been fantastic having you on here. Where can people find out more about you and your work?
Elliot Murphy: My consulting company is kindlyops.com. We publish a bunch of articles there about this stuff, conference talks, open source tools that make some of this stuff easier, so kindlyops.com.
Mike Julian: Awesome, well, thank you so much for joining the show.
Elliot Murphy: Thank you, I really enjoyed it.
Mike Julian: Thanks for listening to the Real World DevOps Podcast. If you want to stay up-to-date on the latest episodes, you can find us at realworlddevops.com and on iTunes, Google Play, or wherever it is you get your podcasts. I'll see you in the next episode.
2019 Duckbill Group, LLC