In this episode of DevOps Unbound, hosts Alan Shimel and Mitch Ashley are joined by Helen Beal of the DevOps Institute, Viktoria Praschl of Tricentis, Frank Ohlhorst, principal analyst with Accelerated Strategies Group, and Alex Hidalgo of Nobl9 to discuss how to use DevOps metrics effectively to improve delivery performance and help teams become high-performing with a culture of continuous improvement. The video is below and a transcript of the conversation follows.
Alan Shimel: Hey, everyone. This is Alan Shimel and you’re watching DevOps Unbound. DevOps Unbound is sponsored by Tricentis and we’re always happy for their sponsorship and for helping us put this show together. DevOps Unbound is an every-other-week session with experts from around the world talking about topics relevant to DevOps. In addition to that, about once a month or every six weeks we will do a live roundtable which is open to you, our audience, to come on, come in, ask questions live, and really kind of steer the conversation. We do have another live roundtable coming up. If you check the staging-devopsy.kinsta.cloud website you can register for that. Today’s conversation, though, is prerecorded and unfortunately that means I get to ask all the questions, not you.
But let me introduce you to what I think is an amazing panel we have lined up for us today. Well, actually, before I do that let me tell you what we’re going to talk about today on DevOps Unbound. So, it’s the issue of metrics and kind of the ghost in the machine, the dark side of metrics – lies, damn lies, metrics – whatever you want to call it. How do we measure success or failure? How do we measure progress in things, in areas like DevOps or digital transformation? What does success really look like? Our panel is going to talk about that today. Let me introduce you to them.
First, I’m happy to introduce Frank Ohlhorst. Frank, if you wouldn’t mind saying a few words about yourself?
Frank Ohlhorst: Sure. And I’ll be quick as not to bore people.
Shimel: Okay.
Ohlhorst: I’m Frank Ohlhorst. I’m a journalist, author, and consultant. I currently work over at as a contributor, at Security Boulevard as a contributor, and I am also a contributing analyst. And that’s pretty much my story, other than I’ve got 30-plus years – I hate to date myself – 30-plus years in the IT field. I’ve worked for the federal government, I’ve been a developer, and I’ve put a lot of effort into cybersecurity.
Shimel: Excellent. Thanks, Frank. Next, let me introduce you to one of my friends from the other side of the pond. I’ve known her now for eight years or more. The one and only Helen Beal. Helen, welcome.
Helen Beal: It’s a pleasure to be here as always and spending time with you and the others. So, thank you very much for the opportunity. I’m Helen Beal. I’m Chief Ambassador at DevOps Institute, and therefore one of the co-creators of ADOC, which is the Assessment of DevOps Capability, which I may mention once or twice during today’s session. I’m also chair of the Value Stream Management Consortium, and prior to all of that I was coaching in DevOps in a number of different companies across Europe and the Middle East for several years.
Shimel: Thank you and welcome, Helen. Next up, let me introduce you to Viktoria Praschl. Viktoria, if I mispronounced your last name I apologize. I just realized when I looked at it that I thought I remembered it, but maybe not. Go ahead.
Viktoria Praschl: It was totally correctly pronounced, so you did a good job.
Shimel: Thank you.
Praschl: Hey, everybody. Happy to be here today. I’m Viktoria Praschl. I’m with Tricentis, a software and quality assurance company based out of Vienna, Austria. And I’m responsible for our UP and sales and transformation team, where we work a lot with customers on how they transform their organization when it comes to DevOps and quality assurance.
Shimel: Excellent. Thank you, Viktoria. Our last panel member today also is somewhat of a regular; he’s been with us a number of times. And that’s author, expert Alex Hidalgo. Alex, why don’t you give them a little background and welcome?
Alex Hidalgo: Thanks so much, Alan. Hi, my name is Alex Hidalgo. I am the Director of Site Reliability Engineering at Nobl9, which is a startup focused entirely on providing the best possible service level objectives – the best possible SLO platform. I’m also the author of Implementing Service Level Objectives, and I’ve been working in this space, in some way or another, for the last two decades or so. My current area of research is really how to bring reliability lessons from other industries into the tech world and what we can learn from others.
Shimel: Excellent. Thank you, Alex. And then, last but not least, my cohost for DevOps Unbound, my partner in business, Mitch Ashley. Mitchell?
Mitch Ashley: As always, good to be with you, Alan. And I love our panel – you all are fantastic; I’m excited to hear what you have to say. As Alan mentioned, business partner and cohost. I run Accelerated Strategies Group, an analyst firm that’s focused on DevOps, cloud-native, and cybersecurity. I also serve as CTO of MediaOps, the company that puts this event on. And I’ve been both a practitioner and a product creator in software, software as a service, and cybersecurity for a number of years. I’m not going to name or number how many they are, so you’ll have to guess.
Shimel: [Laughs] Okay. I’m not guessing. But anyway. All right. Let’s get into this week’s DevOps Unbound. So, I remember a VC mentor of mine getting up in front of – we had launched a new company, and he stood up and said something to the effect that in this brave new world we all live in, everything can be measured and we measure everything, and it is based upon those measurements that we determine who wins and who loses, and by how much.
So, that was a little more than 20 years ago – 22 years ago. I wonder, is it still true? Or have numbers, like everything else, with fake news and all of that, kind of lost their meaning and lost their importance in today’s organizations? Helen, can you –?
Beal: I’m not sure it’s ever been true. From my experience with metrics, what I’ve always observed is that people fall into one of two camps: either they’re not measuring enough – they don’t have enough metrics – or they have far too many. I think there’s a cognitive load limit on how many metrics we can handle, and we probably need about a handful.
I think it’s true that we need to have telemetry everywhere, which is of course a very popular DevOps adage. I think we need to have the facility to be able to get to the metrics that we want, but we have to be able to choose them and be able to think of them as an adaptable framework. So, for example, deployment frequency, of course a very popular metric in DevOps, is very useful if you’re moving from deploying every three months to once a week or more. But once you’re deploying several times a day it’s a much less interesting metric.
The adage I always use is if you can’t measure it you can’t improve it, which is a Peter Drucker phrase. And I do have a problem with that as well because I think you can improve things without measuring them. But the thing is that you’ll never know. And to me that’s the most important thing about measurement: It’s about being conscious about what you’re doing.
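Helen’s point about deployment frequency losing signal over time is easy to see once you actually compute the metric. Below is a minimal sketch in Python – with hypothetical deploy dates, and assuming deploy events are already recorded somewhere – that buckets deploys by ISO week and counts them:

```python
# A minimal sketch (hypothetical deploy dates) of the deployment frequency
# metric: bucket deploy events by ISO week and count them. Once every week
# shows several deploys a day, the trend line stops being interesting.
from collections import Counter
from datetime import date

deploys = [
    date(2021, 11, 1), date(2021, 11, 3), date(2021, 11, 3),
    date(2021, 11, 10), date(2021, 11, 17), date(2021, 11, 18),
]

# (ISO year, ISO week) -> number of deploys in that week
per_week = Counter(d.isocalendar()[:2] for d in deploys)
for (year, week), n in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {n} deploy(s)")
```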
Shimel: Guys, any thoughts on that?
Ohlhorst: I think it also comes down to an issue of context. People can measure whatever they want out there, and they often do, and they spend so much time measuring that they don’t take into account the context of the measurement. You can measure certain steps in the DevOps process that really don’t matter when it comes down to it, when you’re trying to achieve a particular goal.
Shimel: They don’t. You know what, Alex, we’re going to let you go, and then, Viktoria, you follow up.
Hidalgo: Yeah, I think one of the big problems that I often see is that people end up trying to measure themselves against others. They hear “Oh, at my last company” – or “I heard at Google or whatever they’re able to deploy X number of times a day and let’s aim for that.” But why? Do you need to deploy that many times a day? Is it actually applicable to you or your application or your organization or your customers? Are you actually gaining anything from trying to do things the way you know others are? Every situation is unique. Every company is unique. Every organization is unique. Every application is unique. And that means you really need to take a step back and think about “What is right for us? What do we need to improve upon?”
Praschl: Yeah, that’s basically what I wanted to say – similar: you take a step back and you really need to look at what your baseline is. Where are you and what do you want to achieve? What are the outcomes you’re looking for? And I think it’s very important: The metrics will change over time – as you mature your processes and your organization, you will have to constantly change and adjust the metrics you are measuring. So, I think the key thing is to really take a look initially: “Where am I? And what are the most important initial metrics to start with?” And then constantly, basically, changing and improving toward further goals.
Shimel: Great. I think – I’m reminded of the adage “Just because you can doesn’t mean you should.” And I think that’s where we run into problems – and not just in DevOps, frankly, in IT in general. Because it’s digital, almost by definition it’s very easy to grab metrics on all of these digital things. But just because you can doesn’t mean you should. And just because you can doesn’t mean it’s necessarily indicative. I think, as Helen mentioned, when you’re going from a release a quarter to a release a week, ooh, that’s big news. That’s good metrics. But – and to Alex’s point – not everyone needs to be Google or Amazon and release hundreds of times a day, or whatever it is – dozens of times a day. And I think that’s where we get ourselves kind of wrapped around the metrics axle.
Ashley: I think it goes back to the question of why. If you don’t understand why you’re measuring something, then why do it? Because everybody is not going to be on the same page about how you want to utilize that information. I think about the VC quote – I remember I was there for that too, Alan. And in a financial world, winners and losers and return on investment are pretty clear. But I also think there’s a lot of fear around numbers, because people are afraid they’re going to be used against them, or that what they really mean isn’t going to be understood. In a continuous improvement context, a metric is an aid to help you get better and know if you’re getting better – and you have to decide if it’s the right thing, which is where you’ve got to put in that context, as Frank mentioned.
And we have to be on the same page about what it means to put a release into production. Is that a canary release? Is it a real release across every data center and cloud we’re in? What does it mean to do a release? You have to get to that specificity – we’re all in agreement that’s important to us because we either want to measure ourselves by it or improve at it.
Beal: I think it’s been implied by a lot of the speakers already, but I just really wanted to say explicitly as well that I don’t think metrics should be inflicted on people. They should belong to the team. The teams need to generate them themselves. You can set things like KPIs at a higher level and have them connect, but things like OKRs should really be generated and owned by the teams themselves. And that really speaks to some of the cultural elements of DevOps, which are much harder – but not impossible – to measure: things like distributing authority and having autonomy at the team level to choose and define and improve the work, and that move from leaders being managers to leaders becoming coaches that move impediments out of the way.
I think, Alex, you were implying this as well – the importance of the team-level metric.
Hidalgo: Yeah. One of the things I’d like to kind of bounce off there is I explicitly try to always use the word “measure,” because “measure” means something different than “collect.” I think Helen mentioned earlier it’s good to have telemetry everywhere, and so you have numbers everywhere, you have metrics everywhere, you have time series everywhere. And you run into the multiple comparison problem, where you don’t really know what you’re looking at anymore. You run into spurious correlations. Have you ever seen that website showing that the number of people drowning in pools correlates almost perfectly with the number of Nic Cage movies released each year, and things like that? And that’s where you end up when you have too many numbers.
And that’s why I always like to say you measure, because “measure” means you’re taking the important signals out. You measure flour to bake a cake; you don’t just collect flour – otherwise, you’re not going to end up with a cake. It has to be an intentional act, and, like Helen just said, you can’t inflict metrics on people. You need to let them collect it correctly and measure it correctly.
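The spurious correlation Alex mentions comes from the well-known “Nicolas Cage films versus pool drownings” chart. A minimal sketch – with illustrative numbers, not the actual dataset – shows how readily two unrelated yearly series can produce a high Pearson correlation:

```python
# A minimal sketch (illustrative numbers, not the real dataset) of a
# spurious correlation: two unrelated yearly series, one high Pearson r.
import numpy as np

films = np.array([2, 2, 2, 3, 1, 1, 2, 3, 4, 1, 4])  # movies per year
drownings = np.array([109, 102, 102, 98, 85, 95, 96, 98, 123, 94, 102])

# np.corrcoef returns the 2x2 correlation matrix; [0, 1] is the cross term.
r = np.corrcoef(films, drownings)[0, 1]
print(f"Pearson r = {r:.2f}")  # a high r here proves nothing causal
```

Scan enough unrelated time series against each other and correlations like this appear by chance alone – the multiple comparison problem Alex names.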
Ohlhorst: If we look at DevOps from the pipeline’s perspective it starts to become clear maybe where we should be measuring things or why we should be measuring things. And one of the first things that a lot of organizations run into when they implement a DevOps strategy is they run into bottlenecks because they did not consider certain things. So, perhaps the first place to start with measurement is to try to identify bottlenecks and then go from there. This way you’re improving your process while you’re getting some value out of the metrics, and you’re also establishing what type of metrics you need.
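Frank’s suggestion translates directly into telemetry most pipelines already emit. A minimal sketch – with hypothetical stage names and timings, not a prescription – ranks pipeline stages by median duration to surface candidate bottlenecks:

```python
# A minimal sketch (hypothetical per-run durations, in minutes) of bottleneck
# hunting: rank pipeline stages by median time spent.
from statistics import median

stage_runs = {
    "build": [4, 5, 4, 6],
    "test": [22, 25, 31, 27],
    "security-scan": [48, 52, 45, 60],
    "deploy": [3, 3, 4, 3],
}

# Median is less sensitive than the mean to a single unusually slow run.
for stage, runs in sorted(stage_runs.items(), key=lambda kv: -median(kv[1])):
    print(f"{stage:15s} median {median(runs):5.1f} min")
```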
Beal: I’m glad Frank brought that up, because for me there are two classes of metrics that are the most important and underpin everything. And one thing I will say about the DevOps pipeline is that, in my opinion, it doesn’t start early enough. There’s that fuzzy front end, and I think we need to expand it right back to where the idea happens. But my duality of metrics is basically this: if we measure flow and we measure realization of value, we really have a concept of what the customer experience looks like. We know how fast we’re able to deliver the things that customers need, and we’re starting to understand whether we are actually delivering the things that they wanted.
Hidalgo: And I just wanted to add: Frank’s point about bottlenecks is great, but we can get meta about it. We want to make sure we’re very carefully selecting what we’re measuring. And identifying bottlenecks might be a great start but we need to carefully select which bottlenecks we even want to address in the first place. Just because we can see that something slows down somewhere it doesn’t mean that that’s necessarily an area of focus.
Beal: We could be waiting six weeks for a security check, and that’s really hard to get out of the way, but we could maybe – and Viktoria may have an example around, I don’t know, test automation or something. Viktoria?
Praschl: Yeah, I mean, also in test automation and _____, you have tons of metrics you can actually measure just from a curiosity perspective, and if you don’t measure the right things there as well it will seem as if it’s at a standstill. But I think it’s also very important to – I liked the bottlenecks you just mentioned, Frank, because one is the team level, but then you also want to see how it looks as an end-to-end process: What does my overall customer journey look like? And I see quite often that companies start and identify the right metrics on the team level, they get the teams there, but then they fail to bring everything together in the overall picture. A lot of people struggle with that. That’s what I see.
Ohlhorst: I think the mistake that a lot of teams make nowadays is they either try to relate DevOps to a waterfall ideology, where there’s a beginning and an end and you can measure certain points, or they look at DevOps as this rigid structure for how you do things – where the whole idea behind DevOps is that it’s flexible and can be easily modified to meet specific needs. So, I think people run not only into the issue of “Okay, where do I establish metrics? Why am I doing metrics?” but also how you interpret those metrics. I mean, like we were talking about before, release frequency and things like that are great examples of what we term “vanity metrics.” It’s “Hey, we accomplished something.” But when it really comes down to it, what did we accomplish and how did that improve the process? And I think people are losing sight of that.
Beal: It’s been said here already, I think, but that focus on outcomes is really, really key. And there are a couple of other things I wanted to bring up. One is that we talk a lot about being data-driven in DevOps, and I’m starting to think actually about being more insights-driven. And the other conversation I’ve had recently is about doing less hypothesis-driven development and more impact-driven development, which I think is another really interesting iteration on our ways of working. And it reminds me of when Alex was talking about this idea of having telemetry everywhere, which is wonderful. The idea then is that you can get any metrics you want. You don’t want all of them, but at least you have the ability to choose them. But as Alex pointed out, there is a downside to this: we are just overwhelmed with data. And I think that’s something AI can help us with as well. I mean, it’s just a fact: There’s far too much data in the world for us to cope with. We need somebody, or something we’ve probably created – a machine – to help us with it. So, Alex, I wonder what your thoughts are on AI in the context of measuring our journeys.
Hidalgo: You’ve asked a very interesting person about this because I’m actually fairly dubious that a lot of these models will actually help. You need a lot of data, a lot of data, to train a model. And most people actually don’t have that much. We may be talking about “Oh, we have too much data” and “We’re swimming in data” and this and that. Very few people have enough data to properly train models. Can this work in some contexts? I think so. And I think a lot of smart people are working on the problem. But I do think there’s a little bit of an overblown focus on it today because I think it’s a lot of trying to fit a square peg in a round hole. We need to – like everything else, as we’ve been talking about, you need to be careful and you need to think about what outcomes you’re really looking for and then decide “Is AI, is machine learning the right thing here?” Because it – just because it’s a buzzword, just because lots of people want to do it doesn’t mean it’s always going to work. You need to make sure that you’re picking the right tool for the right job.
Ohlhorst: Yeah, and the same can be said with automation when it comes down to it. If you don’t know the possibilities or the probabilities around a process how can you automate it unless you have the data to provide you with that insight?
Hidalgo: Which, I should say – those words just remind me that what we can do with data is statistics. And actually, when you see me flinch a bit about “Oh, should we use ML?” or “Should we use AI?” – I think in a lot of the places where people say they’re doing ML they’re actually just using stats. And that’s great, because statistics can actually compensate for not having enough data. That’s kind of what it does best, in some sense. So, I am a huge fan of using models. I’m a huge fan of using math. Just, again, let’s make sure we’re using the right math. Do we need to build an ML model if a simple regression tells us everything we need to know?
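Alex’s closing question is worth making concrete. A minimal sketch – with hypothetical numbers for two DevOps measures – fits an ordinary least squares line; if a one-line fit answers the question, an ML model is just overhead:

```python
# A minimal sketch (hypothetical data) of "a simple regression may be enough":
# fit lead time as a linear function of deploys per week.
import numpy as np

deploys_per_week = np.array([1, 2, 4, 5, 8, 10, 12])
lead_time_days = np.array([21.0, 15.0, 9.0, 8.5, 5.0, 4.2, 3.1])

# np.polyfit with deg=1 returns (slope, intercept) of the least squares line.
slope, intercept = np.polyfit(deploys_per_week, lead_time_days, deg=1)
print(f"lead_time ≈ {slope:.2f} * deploys_per_week + {intercept:.2f}")
```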
Ashley: Yeah, I do agree with you about the AI and machine learning of it. It takes a ton of data – if that’s a measurement in the data world. I want to go to something that you said, Helen. I really like the word “outcome.” What are the outcomes – have we achieved an outcome when we’re developing software or creating an experience or something out of the process? Because ultimately, that’s what matters. We can improve the things within it. We can automate the processes within how we create software. But it’s all for a purpose. If we’re not doing a better job of delivering on an outcome, maybe those things we’re working on internally are sort of navel-gazing, if you will – looking internally, but not really that valuable. Let’s work on the things that we can connect to an outcome. That’s what I would propose.
Beal: It’s funny. The other day at DevOps Enterprise Summit, when a speaker got up and used the phrase “time to market,” I kind of went, in the Slack chat, “Oh, time to value,” because obviously there’s no point in getting something out the door if it’s useless, whereas with “time to value” at least we’re recognizing that we’ve given something useful to someone. And then, John Smart corrected me with “time to learning.” And I thought, “Oh, no, actually, it has moved.” We’ve moved from time to market, to time to value, and now we’re in time to learning, where we’re recognizing that something only becomes useful when we’ve seen it have value and we’ve learned from it and decided to do something else.
But yeah, it’s the same as this kind of impact-driven development as well: this idea that we really focus on having an idea about what we’re trying to do and then, importantly, measuring that it did what we thought it was going to do. And we’ve traditionally been really bad at doing this in the technology world because we’ve been really project-driven. When we’re project-driven we have huge amounts of requirements whose impacts are really difficult to track, because we deliver them all at once and we weren’t doing things like canary testing, which does allow us to look at a much more granular level. And the other problem with being project-driven is we’re always under so much pressure to deliver. Our testing always gets squished because our development overruns and our launch date never moves. So, by the time things go live we’re so exhausted, and then we disband the team and put them into other places anyway, so no one ever goes and has a look and says, “Did that really deliver the four million quid – or $4 million – that we said it would?”
I wonder what other people think about how metrics change as we move from project to product.
Shimel: So, I’ll quickly jump in here on that – and it’s something, Helen, that you mentioned, and Alex mentioned. I think what we’re guilty of – and in the abstract we talked about the dark side, and it’s not “Mitchell, I am your father,” but there’s a dark side of these metrics – is that in the absence of facts or in the absence of knowledge we try to fashion metrics to compensate. So, with something like DevOps, where a lot of cultural aspects come into play, there are a lot of intangibles, and they’re almost by definition hard to measure because they’re intangible. And so, we compensate by coming up with what some may consider, frankly, nonsense metrics that make us feel better or make us feel like somehow we are measuring the immeasurable. And I think that is an important part of this whole thing. Measure what you can – but are there things that are just immeasurable? Anyone?
Beal: Yeah, I – sorry, me again for a second. I don’t believe so. As a coach and a consultant for a long time, I kept discovering clients that felt their biggest problem with implementing DevOps was the cultural side. So, that seems like something that you can’t measure. And it’s true that you can’t really get data out of Jenkins to tell you what’s happening in your culture, but there are other ways to do measurement. And I think it was Viktoria – sorry if it wasn’t Viktoria, but somebody mentioned the importance of baselining your current state, and that is absolutely essential. And I think people don’t always do that. They kind of dive in to try and get metrics without really going, “Okay, this is where we’re starting from,” and figuring that out.
So, from the human aspect – I mean, talking about ADOC, which I mentioned earlier, the Assessment of DevOps Capability, we have 5 different dimensions in that and each dimension has 12 subtopics, so we have 60 topics in total. And I’m starting to sound like we’ve got way too many metrics, but there’s a lot of subtlety in the way that it’s baselined. And the first dimension is human aspects. And within that we’re really getting a feel across an organization, team by team, where they feel their level of capability is around things like psychological safety, around transformational leadership, around how much autonomy and mastery and purpose they have in their daily role.
So, you’d be surprised, I think, about how possible it is to measure cultural capability without necessarily having a knowledge of how many – or relating it to how many artifacts you’ve got in your artifact repository. There are other ways of doing this. And sorry, I interrupted you, Alex.
Hidalgo: No. I think those are all great points, but we do have to, as with everything else, approach it with a little bit of caution, especially if these are self-reported – if these are numbers from questions like “Are you happy on your team?” “Do you think your organization is headed in the right direction?” – those normal kinds of questions, where you ask people yes or no, or put it on a sliding bar from one to five. And studies show most people are going to just pick a one or a five anyway. And that’s not saying this is bad. It’s just another caveat. It’s another “Let’s make sure we’re looking at this correctly.” “Let’s understand that some percentage of our employees are just going to click through it because they don’t want to fill out this form” – if it is this self-reporting thing, you know?
Beal: It’s such an important point you make, Alex. And it’s something that we deal with when we engage with organizations. There’s such a lot to talk about, because what you’ve actually identified there isn’t a problem with the tool; it’s a problem with the culture. And if people are not feeling that they are part of the change that the organization is trying to make and they’re not empowered to have their say and their voice and be involved in it then they will skew their answers.
So, we absolutely have to help people understand how important their answers are, what impact they’re going to have on their future and their ability to grow themselves in their organization and make them feel part of the change. But yeah, humans can be a tricky business. But back to your point about data volumes, again, in this kind of assessment environment it is quite important to have enough data to weed out variations from people that maybe don’t care or are skewing for other reasons.
Hidalgo: Which, again – you can do some basic statistics to ensure that you are dropping your outliers. To be able to get very useful data out of this stuff, again, you’ve just got to make sure you’re looking at it in the right way.
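One basic-statistics approach along the lines Alex describes is an interquartile range (IQR) filter. A minimal sketch – with hypothetical one-to-five survey answers – drops extreme responses before summarizing:

```python
# A minimal sketch (hypothetical 1-5 survey answers) of dropping outliers
# with an interquartile-range (IQR) filter before summarizing.
import numpy as np

scores = np.array([4, 4, 5, 3, 4, 1, 5, 4, 3, 4, 5, 1, 4])

q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1
mask = (scores >= q1 - 1.5 * iqr) & (scores <= q3 + 1.5 * iqr)  # Tukey fences

print("kept:", scores[mask], "dropped:", scores[~mask])
print(f"trimmed mean = {scores[mask].mean():.2f}")
```

Whether the click-through one-or-five answers Alex describes should be treated as outliers at all is itself a judgment call; the filter only automates the arithmetic.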
Shimel: Viktoria?
Praschl: Yeah, I think quite often we just force metrics on teams, so that’s why I like it a lot, what you said, Helen, that you need to also understand the culture. So, where is the team from a metrics perspective? Do they understand why we are measuring? Or are you already in a stage where they say, “Okay, I understand why we are measuring certain things,” but now we need to constantly figure out which metrics we are measuring?
So, I think it goes back to the team and explaining the “why” and then really helping them or work with them together, once you have the outcome, what do we do with that? How can we improve? What do we do with the outcome of the metric? What do we do with the numbers? So, I like it a lot, what you just mentioned on assessing the culture.
Beal: Yeah, and that’s a great point. We baseline, but we baseline for a reason. I talk about the improvement culture a lot – that four-step process where we start by establishing the long-term vision and direction, then we establish the current state, then we establish our next target state, and then we PDCA, or experiment, between those two states. So, the finishing part of that first cycle is defining the hypotheses and executing them, and then we start again. And we’re not baselining now; we’re comparing against our baseline over a time period of three or six months and what’s happened with our experiments in between.
Praschl: Yeah, because I think most companies, they just don’t know. They don’t know where they stand. They don’t know where the team stands. They don’t know where the culture stands. So, that’s the starting point to get everything rolling.
Beal: And it’s a big investment, Viktoria, for people to go down this path and not know how far down the path they’ve gotten.
Praschl: I think for that you need the executive buy-in as well. So, it’s about making sure that you also have the support from the top to do that. And then, as you go into this constant learning process, I think that becomes very important.
Hidalgo: And trying to look at how far down the path you’ve gone, that – it’s going to, again, be different for everyone. Is it more important how far or how far you have to go? And is there even an end to the path? Or is this a process that’s ongoing forever? And just another example of having to be thoughtful about what’s being measured.
Beal: I don’t believe there’s an end to the path, which is why I use words like “capability” over “maturity,” because “maturity” indicates that there is some kind of a horizon that you can reach. But of course, no matter how many more capabilities you’re building yourself we live in a market that’s forever inventing new ways of doing things and new tools to support the things that we’re doing. So, it’s an ever-moving target that we’re trying to respond to.
Ashley: I think there’s a whole other side of metrics I don’t know if we’ve touched on yet, and that is: the metric itself can be valuable and inform us about some things, but there’s also the analysis of the why – what are the potential causes? In a quality process you do a Pareto diagram to analyze what you think are the potential contributing factors and what may be influencing why it’s happening in a certain way. For example, if you’re doing a process improvement, a lot of things go into whatever created that metric; how do we know what we might work on first or do some experimentation with?
But the ultimate thing is there’s a lot of learning that comes out of that process. It really kind of helps you understand “Wow, I had no idea that’s actually how it was working.” And now that we know that with a little more fidelity we can make some adjustments.
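The Pareto analysis Mitch describes is simple to run on whatever counts you have. A minimal sketch – with hypothetical failure categories – ranks contributing factors and shows the cumulative share, making the dominant few causes obvious:

```python
# A minimal sketch (hypothetical failure categories and counts) of a Pareto
# breakdown: rank causes by frequency and print the cumulative share.
failures = {
    "flaky tests": 42,
    "env config drift": 25,
    "dependency breakage": 14,
    "merge conflicts": 8,
    "infra outage": 6,
    "other": 5,
}

total = sum(failures.values())
cumulative = 0
for cause, count in sorted(failures.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{cause:22s} {count:3d}  cumulative {100 * cumulative / total:5.1f}%")
```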
Beal: That’s powerful. I wanted to respond to Alex, actually, and then I forgot – it went out of my mind. But this idea that the team needs the right metrics – and Viktoria and I have both said those metrics should not be inflicted on the team – but the other thing is most organizations are made up of multiple teams, and different teams, like Alex said, have different capability patterns. And it’s very interesting to try and understand, for example, why one team is really good at something like diversity and inclusion, or really good at AIOps or something – not saying that the other teams are bad, but trying to learn from those success stories. And I think it was probably The Phoenix Project that put it as making local discoveries into global improvements.
And I think that kind of mindset of looking for the strengths and spreading the strengths across a business is really important when we’re doing things like measuring people on their journey. We talked about how, culturally, people are either kind of scared of this stuff because they’re expecting to get beaten up about it, or they’re cynical and they’re not giving true answers. But there’s another side of this: people are also quite afraid of being assessed, or even of assessing themselves, because somebody’s going to go, “Well, you’re shocking at deployment frequency. What are you going to do about it?”
And it’s like we have to change that whole mindset in an organization – that they are a dynamic learning organization. It’s all about continuous learning and not about beating people over the head or removing their annual bonuses or the like, which are things that I’ve seen in various places.
Ohlhorst: Which gets us back to that key point of “Why are we measuring?” Making sure people on the team know why we are measuring things, not only from a cultural perspective but also from a value perspective, so that they know there is value in the answers they give – and that is the “why” behind it. And ultimately, the goal is to improve processes or accelerate the SDLC or whatever it is, but there is a goal there; it’s not just random data being thrown out asking “What flavor of gum do you like?”
Praschl: Yeah, I think it’s also –
[Crosstalk]

Praschl: It’s important to get them to understand the “why.” It’s usually a long and not always easy process. But once they have this and understand it, you have the buy-in, and the outcome will be worth way more if you do it that way.
Hidalgo: A thing that I’ve found can help with this kind of stuff is – people underestimate how powerful interviews can be. Instead of just asking people to self-report or just collecting random numbers out of your computer system, sit down with people and talk to them. Not for five minutes. Schedule an hour. Sit down, find out what this person is thinking. Make sure you’re talking to the right people, not just in terms of an organizational structure – because you do want to make sure you’re talking to people at all levels of the company and in all roles – but also make sure that the people you’re interviewing are comfortable with it in the first place. Not everyone has the personality to want to sit down for an hour.
So, another example: Select carefully. Measure carefully. But I’ve found so much value in just talking, instead of trying to do this with a method that is based entirely around metrics. We’re talking about metrics, but sometimes you don’t need metrics. Sometimes you just need the human interaction, and you can walk away with that – and if you really want to, you can then put it on that kind of scale and attach a number to it, maybe. But you can learn a lot just by talking to people.
Beal: I think the best assessments are always a combination of quantitative and qualitative data. And I think we’re seeing this increasingly in the data world as well. Eighty percent plus of most businesses’ data is unstructured, which by its very nature is that kind of textual response data, whether it’s on social media or in your logs in Splunk. We’re seeing more and more people understanding that it’s not just about what’s in an RDBMS. So, I absolutely agree with you, Alex. Yeah.
I think there’s a big advantage, though, which I’m going to point out, around ADOC. Some organizations are very large – when there are 8,000 people it’s quite impossible to spend an hour speaking with each of them. So, actually having the ability to empower everybody to be involved in the change by giving them a platform on which to share their opinions is quite useful. And of course, you can put open text fields in there as well and get those snippets and insights that you would miss if you were just getting a one-to-five or a yes-or-no.
I did want to maybe slightly disagree with one thing you said, though, which is about the people that don’t want to talk. It always kind of worries me – and this is the big thing about diversity and inclusion and things like Heloxi that are trying to give equal voices to people – I don’t want us to fall into that trap where we think, “Oh, that developer is really quiet and introverted; we won’t ask them what they think because they don’t really like talking.” They’ve probably got just as much right to their opinion as everyone else, and we shouldn’t avoid hearing it just because we think maybe they don’t like chatting as much as other people.
Hidalgo: I totally agree. But from my point of view you’ve just got to make sure that whoever it is, is comfortable with the process – people don’t want to feel singled out. Just like everything else, depending on the person, depending on the culture, depending on the team, they might feel like, “Why am I being picked? Is this a bad thing?” So, it’s just about being thoughtful. That’s all I meant. Definitely don’t want to exclude anyone. Definitely want to make sure we’re getting every possible voice. Just want to make sure people feel safe about it.
Beal: Totally. And that psychological safety is so important. I learned this from one of my clients years ago. We didn’t used to do things like participant briefings, and now I think it’s really important to get everybody that’s involved in this type of work and talk to them about what DevOps is and the “why” that we’ve all talked about – so, organizational performance and succeeding in a disruptive market and all that kind of stuff – and then give them the comfort that anything they share is between them and the people that are collecting that information. They may be quoted, but they’ll never be associated with what they said. Their words may be used as an example, but they can be completely honest. And actually, they are being asked to be involved because their opinion is so highly valued in the organization and we want to gather their insight, so they shouldn’t feel fearful or that their job is at risk. They should feel honored and hopefully excited to be involved in a process that’s intended to make their future more fun and secure.
Hidalgo: The only other thing I’d add there is – and again, I totally agree; I think we’re actually –
Beal: Finally agreeing.
Hidalgo: – in agreement here. Yeah. Yeah, exactly. But the one other thing I would add is that even if we have a strong suspicion, or even have the numbers to prove, that our organizations or our teams generally are psychologically safe and we’re building a good culture here and this and that – you don’t know what it was like for people at previous companies. People can carry baggage for a long time. It can take people a long time to adjust, especially as your career goes longer and longer. You have these memories and not all of them are great. And so, even if you think, “Ah, our company as a whole, we are scoring very high,” for individuals it may not have anything to do with the current organization at all; they could still be carrying some scars from previous ones.
Beal: Yeah, and there’s –
[Crosstalk]

Shimel: Viktoria?
Praschl: It’s about creating the safe space. So, some will probably take longer to open up, but usually, if they see that others are joining in and they feel more comfortable, you get those people to open up down the road too.
Shimel: I think we’re all humans, to borrow a phrase, and we bring our own experiences, outlooks, prejudices to whatever metrics we’re going to measure and to whatever we’re going to be in. And I think we can’t lose sight of that either.
Beal: Right. Those scars that Alex was talking about – I just wanted to talk about one other element of this, which is that the scars are negatives. We’re never badly affected by positive experiences, if that makes sense. So, from a neuroscience perspective, we basically train our minds to be fearful. We have an avoidance response that’s trained into us. So, I really like to try to use neuroscience when I’m helping people through a DevOps change, because there are things we can learn from it – things like neuroplasticity and these different avoidance and approach responses that we have in different scenarios. Looking at an organization as a whole is great, and so is looking at different teams and how they compare, but actually understanding, as Alan has just said, how individual humans operate in the context of their previous experiences – and just how we’re built, because we’re all built differently from _____ anyway – is really important. And that’s really where the transformational leadership comes in: having people that understand how to be coaches and how to understand people at a personal level.
Ohlhorst: It’s interesting that we’re almost applying a “fight or flight” type of ideology to the process – here we are talking about software development, and DevOps in particular, and pipelines and all that, and so many people so often forget that there are people involved in the process. And that is where you come up against the most challenges.
Shimel: Yeah. It’s about the humans. Guys, we are – we’re about out of time, though we could probably sit here and talk for the rest of the day. But it wouldn’t – I don’t know if anyone would want to watch, and we all have other things to do. What a great conversation on metrics and DevOps and people and everything else today. Maybe we’ll come back and we’ll run a part two of this or something. But for now I need to pull the plug.
I want to thank our friends at Tricentis for making DevOps Unbound possible with their sponsorship. Alex, Helen, Viktoria, Frank, great conversation, man. Great conversation, men and women. Good stuff. Mitchell, you want to take us home?
Ashley: What I’m struck by is the richness of just the word “metrics” and what all it can mean and the depths of it and aspects, whether it’s the human side of it, whether it’s the “why,” whether it’s the outcome. There’s several – I think there’s several good takeaways there from this, and so I hope it’s helped everyone kind of think about maybe why you’re doing what you’re doing and how you want to lead that going forward.
Shimel: Excellent. All right. This is Alan Shimel for MediaOps, staging-devopsy.kinsta.cloud, here on DevOps Unbound on TechStrong TV and the TechStrong TV network. We’ll see you in about two weeks with another great DevOps Unbound episode, but for now this is it. Thank you, everyone. Thank you for watching. Take care.
[End of Audio]