Welcome to The Secure Developer, a podcast about security for developers, covering security tools and practices you can and should adopt into your development workflow. The Secure Developer is hosted by Guy Podjarny, CEO and co-founder of Snyk.
In this episode of The Secure Developer, Guy talks with Chef CTO Adam Jacob about the role security can play in DevOps and continuous integration/deployment. They cover the differences between baked-in and bolted-on security and how automation with Habitat can change the way developers approach secure coding. Adam Jacob is the co-founder and CTO of Chef, formerly known as Opscode, where he wrote the open-source application automation project Habitat.
This podcast is brought to you by Heavybit, a program that helps developer product companies take their product to market.
Guy Podjarny: Hello, everybody, and welcome back to The Secure Developer. Today we have an awesome guest on the show, Adam Jacob from Chef. Adam, thanks for joining us.
Adam Jacob: Hi.
Guy: We’ll talk about various cool topics we teed up that I think are interesting in the world of security and the world of DevOps and CI/CD, and some very interesting new package management and build system capabilities coming out of Chef that I think are very relevant to security.
Before we dig in, for the few of you that might not know Adam Jacob or Chef, Adam, do you want to give a quick intro of your background?
Adam: Sure, I think there’s probably more than a few who have no idea who I am. I’m Adam. I wrote Chef, originally. I’m the CTO at Chef, and I wrote a thing called Habitat not that long ago that does application automation. And that’s sort of new stuff.
Then, mostly what all of that really boils down to is, I spent the last 10 years going around talking to big web companies like Facebook and Google and Yahoo, and I’ve also spent a bunch of time with startups. And then I’ve also gotten to go see really large enterprises ranging from giant banks to insurance companies, retail companies like Nordstrom, Wal-Mart.
So I get to just travel around and see what everybody is doing and see what they’re worried about and try to help them get better in terms of the time it takes or the speed to deliver, or their organization or their culture. So a bunch of it is software, but a lot of it is just helping people sort of understand how better to build their organization.
Guy: Yeah, there’s always this conversation about whether DevOps (continuous deployment is maybe a bit more specific, but DevOps broadly) is really more about the tools or more about the people, and the consensus tends to be that it’s more about the people.
Adam: It’s both, right? Like, it’s really easy to say that you want to have a great culture. It’s really easy to be like, “Oh, I want to empower people,” or, “We want to streamline a process,” or whatever. It takes nothing. It’s just words.
Usually, it’s the technology that reinforces those cultural behaviors that hold you back.
So, a good example is, “Oh, I want to do continuous delivery, but we use a terrible source control system that makes it almost impossible to do effective continuous delivery. But because that’s the source control system we use, we’ll never change.”
Therefore, which one’s true? Do you want to do continuous delivery, or do you love your bad source control system? So there’s this reinforcing circle that hides inside there, and I think you see that all the time in security, too, that same circle and that same behavior.
Everybody says, “Oh, we want to be secure.” I was working with a bank that I won’t name, on an engagement for a couple of weeks to help them do continuous delivery, and their first target was to harden the operating system.
I asked them, “Okay, do you have the hardening spec? Do you know what you want to harden?” And they were like, “Oh yeah, yeah, of course we do. We’re a global bank.” And I’m like, “That’s amazing, great! I would love to see that.” And they’re like, “Oh, we’ll have it for you on the day you arrive.”
I left three weeks later, and they had never found it. What they realized was that it didn’t exist. Everybody thought someone else had built it, that it was someone else’s job, and no one actually had. It was just this loose conglomeration of stuff they were theoretically supposed to do, but that no one could actually track or fully knew. And that’s a global bank. It probably should matter.
Guy: I think sometimes it’s about the fact that tools can help surface information and hold information in a way that is accessible. And sometimes it’s the sheer complexity of these topics: in the deep ops world, how containers operate or how machines are orchestrated; or in the world of security, understanding what is an attack and what isn’t, in a constantly moving landscape. Fundamentally, you have to have the tools; you cannot overcome those problems by education alone. And at the same time, if you have tools and people don’t know how to use them, or have no understanding of what they’re for, you’ll eventually not achieve what you’re aiming for.
Adam: Context is everything, right? Tooling is great, but it’s not enough. One of the things that we do at Chef in the security world is a thing called InSpec, which is a language that lets you describe security posture as code.
So you can say, “This particular machine should have this particular policy. That policy means that port 25 should be open, you should be able to auth this way, you shouldn’t be able to auth this way.” You should be able to talk about packages being installed, you should be able to talk about all of those things, and then relate them back up to the actual security line items and talk about their severity.
We care about this thing because of this piece of HIPAA, or we care about it because of this piece of CIS, or whatever. And those tools are great because they combine the documentation of what the standard is with an executable check that says, “Are we meeting the standard, yes or no?”
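To make that concrete, here is a minimal sketch of what such a control can look like in InSpec’s Ruby-based DSL. The control ID, the tag value, and the specific expectations are illustrative, echoing the examples from the conversation rather than any real benchmark:

```ruby
# A minimal InSpec control: the executable check lives alongside the
# documentation that ties it back to a named compliance line item.
# The control ID, tag value, and expectations here are illustrative.
control 'smtp-relay-01' do
  impact 0.7                          # severity, on a 0.0 to 1.0 scale
  title 'SMTP relay policy'
  desc 'Port 25 should be open on this machine, per the relay policy.'
  tag cis: 'example-line-item'        # hypothetical mapping to a CIS item

  describe port(25) do
    it { should be_listening }        # the policy says this port is open
  end

  describe package('telnet') do
    it { should_not be_installed }    # and this package must not be present
  end
end
```

Running the profile with `inspec exec` evaluates every control against the target and reports pass or fail per line item.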
I think with InSpec, or tools like InSpec, when you think about that operational part of security and how it applies to large enterprises especially, but to the big web too, more and more it’s becoming the case that policy is executable.
So the conversation between security and developers and operators becomes a conversation around code, as opposed to a conversation around documents and controls, which I think is the conversation most people have now.
Some security guy wrote a control and then here’s the list of people that say that they can validate the control. But is the control any good?
Guy: Is it actually happening?
Adam: No. I mean, it’s not, right?
Guy: Yeah.
Adam: And that’s true everywhere. We wrote a control. Here’s the list of 10 people that you could go interview that know how to do that control. Okay, what about the other 10 people who could also do that, who aren’t on the interview list that wasn’t updated in the last year? Do they always do the procedure the right way?
Do they always make that process right? And of course, the answer is no. But we just sort of let it ride because the auditor passed.
Guy: Yeah, it’s the notion of protecting yourself from audits, not from attacks.
Adam: Exactly!
Guy: That’s increasingly the case. I love that notion. So with InSpec, or in general with security, one of the key challenges is that it’s invisible. If you’re not doing it, you have no immediate indicator that it’s not happening.
The user experience, if you will, of not watching for a certain vulnerability, and the user experience of watching for it and having nothing happen, is the same, which is that nothing happens, right?
Adam: Yeah, which is nothing.
Guy: Which is good news. So I think anything that helps define and articulate the controls that are supposed to be happening, and gives you some mechanism to confirm the action has been taken, is valuable. To use a simple example, with InSpec, the limiting of which ports are open has been explicitly articulated and is explicitly tracked by the tools, and you can be informed that it has happened.
We see this in Snyk a lot. We watch projects for vulnerabilities, for vulnerable dependencies, and again, it comes back to the fact that you have the same user experience whether you’re not watching a project or you’re watching it and nothing has happened. We have a lot of these conversations with users today about, “How do you want to hear about it? How do you want to know that you’re watching it?”
And today, we keep it fairly simple: an ongoing report that just shows you, “Hey, you’re monitoring five projects,” and what’s in them.
Adam: Right, it shows that it’s running, or whatever, and you can measure change.
Guy: But it’s great to increasingly have those, and to have some requirement in your Chef and InSpec code that defines it as well: project X is being monitored for X, Y, and Z, so you know it’s being enforced.
Adam: Right, and then you stick that in the pipelines.
When you think about continuous delivery and security, the validation of that compliance is part of the process by which software goes into production.
And it’s part of the way that software gets maintained once it’s there, and it’s part of that build process. You bake it in sort of throughout the SDLC, instead of it being a thing that happens at the end.
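To make the pipeline part concrete: `inspec exec` exits with a non-zero code when any control fails, so a pipeline stage can run a profile as a gate before software moves on. Here is a minimal sketch of such a deploy-gate control; the application name and config path are hypothetical, not taken from the conversation:

```ruby
# Hypothetical deploy-gate control that a CI/CD stage might run via
# `inspec exec`; a failing control yields a non-zero exit code, which
# the pipeline can treat as a blocked promotion.
control 'deploy-gate-01' do
  impact 1.0
  title 'Deployed application posture'

  # 'myapp' and its config path are illustrative names, not a real product.
  describe file('/etc/myapp/config.yml') do
    it { should exist }
    its('mode') { should cmp '0640' }   # config must not be world-readable
  end

  describe service('myapp') do
    it { should be_running }
  end
end
```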
And that’s one of those things that’s obviously a good idea. So, as soon as you hear it, you’re like, “Oh, of course, security should be baked into the process, all through the whole thing.”
Guy: The whole built-in versus bolt-on.
Adam: Yeah, right? Sort of, duh. But actually doing it is completely–
Guy: A whole different ball game.
Adam: A whole other ball game. And the thing that we really came to realize, especially with InSpec, but just sort of in general, is that if you can’t figure out how to manage that security posture the same way you manage the rest of what you do, it’s really difficult to then tell a software developer that it’s their responsibility to ensure that that posture is good. Because, sure, they can make sure that they write “good code.” You can’t hear the air quotes, but I was making little air quotes in my brain, probably.
Guy: I can attest to that.
Adam: It’s good, but you can’t really ask them to understand the posture of what it’s going to be like when it’s deployed.
Because the distance from a software developer making a decision to a software developer talking about how that software should be in production, and what its posture ought to be, is so vast. And their ability to influence it is so low that it’s really difficult to come back to them and be like, “Oh, this was your responsibility.” Like, clearly it was on you.
And when those tools give you the ability to talk about it as code, they allow people to participate: you can code-review them, you can have security people audit the code as opposed to auditing your documentation, and you can keep those things as a living piece of that deployment model. I think everything gets better and it gets a lot more secure.
One hard part of doing things that way is that it’s very much not the way that most security regimes are set up right now. If you go talk to a security officer and ask them, “Hey, could we remain compliant with whatever the standard is, say HIPAA, if everything we did was continuously delivered?”, nine times out of 10, the answer is just a flat no.
And what’s interesting is that there’s the one out of 10 where that security officer is like, “I don’t know, maybe. Tell me more about how that would work.” If I roll my clock back three years, it was 10 out of 10 telling me no, that it was crazy. Now it’s nine out of 10. I predict in six months it’ll be six out of 10, and in two years it’ll be like asking, “Should you be doing continuous delivery? Should you be doing Agile?”
Guy: It actually comes back to the documentation sometimes, or even to that document the big bank did not have. These things eventually come back to the documents and the guidelines. At the end of the day, if you go down the compliance route, you want the compliance document to actually say that if you are using continuous delivery, then to be HIPAA-compliant you need to ensure that you’re doing these things, as opposed to just stating the goal.
Many of these regulations have the same flaw, the same notion that says, “You have to do these actions,” as opposed to, “You have to achieve these goals.” So they give the vague goals, but then everybody follows whatever recommended actions have been prescribed, because that makes passing the audit the easiest.
Adam: Yeah, and I mean,
the relationship between the auditor and your security posture is pretty tight.
Like, the real test of your security posture, in most cases, is the auditor. Not like a pen test, or people actually trying to break you. I mean, they are, but you’re not doing it proactively.
I think when you think about those CD pipelines, the idea that they’re applied continuously, that as applications change, you’re reapplying the security posture to see if an application has done something that violates it, and that when you change the security posture, you’re re-validating the applications throughout the whole cycle, it’s super powerful. It’s what people are starting to be able to do. You don’t see it a lot yet, but it’s the future.
Guy: I think that’s interesting to understand, the potential versus what’s happening. At a high level, continuous deployment and infrastructure as code, the fact that you’ve prescribed it and built it in, give you predictability. You know what’s where, and you know that a certain test or a certain enforcement has been done; barring bugs, at least, it’s not subject to human error.
On the flip side, it requires security auditors to change how they behave. It’s not just about compliance; it’s also that today, many of these security audits, like the example of a security person reviewing your code, are done as gates.
They’re done as a way that says ‘stop here,’ which is the antithesis to continuous.
The whole notion of continuous is just, you roll out. It’s okay to pause for a moment, but you can’t stop. You can’t accumulate a backlog, because otherwise it’ll deteriorate, unless you automate that.
Adam: Yes and no, right? There’s continuous deployment, and then there’s continuous delivery, and they’re not quite the same. So yes, in continuous deployment, there’s nowhere to pause. I think in continuous delivery, there is.
In continuous delivery, the idea is you’re shipping when you should be able to ship, any time that the business required shipping or it made sense to ship. And that’s different than saying, “We just ship every time you commit,” right? And so in continuous delivery, I think there’s plenty of space to say that, “Hey, this project, in order to ship, requires a security review.” And that process can still be continuous.
The catch is that this should be the only gate between you and getting to production. So if they say yes, could you ship? And then the question, of course, is whether that’s true for every commit. Could you ship today, if today was the time? The answer to that is very rarely yes, and that’s where the difficulty comes in.
Guy: Yeah, that’s an interesting point. I accept the delta between continuous delivery and continuous deployment, but it seems to me that from a security standpoint, and even from a quality perspective, one of the values of continuous, both delivery and deployment, is the fact that you spot errors when they’re small.
So definitely for continuous deployment, when you ship every small code change at some low resolution, when a problem occurs it’s much easier to pinpoint the source of the problem.
That value proposition is pretty compelling in security as well, right? If you are looking for a security flaw, being able to look for it only within the 100 lines of code that changed, versus the 100,000 or 100 million lines that already existed, is very valuable. If you accumulate those changes and you still need to do the security audit, I would argue you’re not actually ready to deliver at that point.
This podcast is brought to you by Heavybit, a program that helps developer-focused companies take their product to market. Throughout the program, founders work on developer traction, product market fit and customer development. If you like what you hear, be sure to subscribe to Heavybit’s Podcast Network on iTunes.