[00:02]
Viktor Petersson
Today's guest on the show is Chris Swan, an OG from the London tech world. We first met many years back, and I bumped into him most recently at State of OpenCon in London, where we were both speaking.
[00:17]
Viktor Petersson
Chris gave a talk about OpenSSF Scorecards and I really got intrigued by the concept, and afterwards I had a lot of questions, so I asked him to come on the show to explore this subject a bit further.
[00:29]
Viktor Petersson
So welcome to the show, Chris.
[00:32]
Chris Swan
Thanks, Viktor.
[00:33]
Viktor Petersson
And for the viewers, perhaps you can give a bit of backstory about yourself, your experience in tech and basically who you are.
[00:44]
Chris Swan
So I'm presently an engineer at Atsign. That's a company where we're building a Networking 2.0 platform that helps people connect to things, entities and other people over end-to-end encrypted channels through personal data services.
[01:03]
Chris Swan
That's obviously security critical functionality which is what got us into OpenSSF and scorecards.
[01:13]
Chris Swan
Going back a little further, I started my career in the Royal Navy as a combat systems engineer, and found my way from there, at sort of the end of the dot-com bubble, into dot-com consulting, which took me into financial services.
[01:30]
Chris Swan
Spent a bunch of time in banks building large scale systems and then found my way into the startup world which I think is when our paths first crossed when I was at Cohesive Networks, so my first time around building next generation networky things.
[01:50]
Chris Swan
So at the time networking for the cloud and bringing containers into networking.
[01:56]
Chris Swan
Since then I spent a bunch of time at CSC and then DXC in the IT services world, but I'm kind of happy to be back at the coalface cutting code in startup land.
[02:10]
Viktor Petersson
Yeah, definitely got a lot of experience there to bring to the table, so maybe we can start.
[02:15]
Viktor Petersson
For people not familiar with OpenSSF, it's a relatively new organization or entity I guess maybe.
[02:21]
Viktor Petersson
Can you give some more backstory on what OpenSSF is?
[02:23]
Viktor Petersson
I know you're not directly involved, but just to give some more backstory of what that is to start with.
[02:27]
Chris Swan
So OpenSSF is the Open Source Security Foundation.
[02:31]
Chris Swan
It sits within the Linux Foundation, so it's kind of there alongside perhaps more familiar organizations like the Cloud Native Computing Foundation.
[02:43]
Chris Swan
If I look at the primary movers, an awful lot of the energy initially seems to have come from Google, and so I think Google has been on a mission to lift the bar for everybody on security, but also to show that what they're doing and what they're building is secure.
[03:07]
Chris Swan
So this is where I first came across Scorecard.
[03:11]
Chris Swan
So a little while back, the Dart and Flutter projects, which we make a great deal of use of at Atsign, did a blog post about how they'd adopted Scorecard and put scorecards onto their repos.
[03:26]
Chris Swan
And this immediately seemed to us like a good way of showing people that we care about security.
[03:33]
Chris Swan
So I saw Google using this as a way of saying: here is a badge on our repo, which is showing you a visual indicator that we care about the security of what we're putting in front of you here.
[03:48]
Chris Swan
And so we adopted it for similar purposes as a way of showing that we care.
[03:56]
Chris Swan
And when I'm talking to our sales guys about this, they say they get questions from potential customers about security and they point at the scorecards, and that's very often the end of that conversation.
[04:11]
Chris Swan
The prospect's happy that they've been shown something that gives them confidence that we're doing the right things.
[04:20]
Viktor Petersson
That's really interesting.
[04:21]
Viktor Petersson
And you guys at Atsign, those are open source repos, correct?
[04:26]
Chris Swan
Yes, they are.
[04:28]
Viktor Petersson
So in that sense, it makes a lot of sense.
[04:30]
Viktor Petersson
What about for private repos?
[04:32]
Viktor Petersson
Can the same methodology be used for private repos, just showing the scorecard number or some kind of manifestation of that?
[04:38]
Viktor Petersson
Or is it purely open source?
[04:40]
Chris Swan
I guess if it's a private repo, who are you showing that you care?
[04:46]
Chris Swan
It can be the insiders in your own organization.
[04:49]
Chris Swan
So there's really very little to stop you from using the scorecard on a private repo.
[04:57]
Chris Swan
And I think the discipline that comes from kind of going through the exercise of getting a decent score on a scorecard is something that you would want to do as an organization anyway.
[05:10]
Chris Swan
And so, yeah, doing it for private repos makes a ton of sense.
[05:15]
Chris Swan
If I look at our own practice, a lot of the things that we do in our public repositories have now become sort of so baked into the culture of the organization that they're just part of how we do private repos anyway.
[05:30]
Chris Swan
We've not gone as far as actually implementing Scorecard on private repos, but we're sort of beneficiaries of having done it on a chunk of the public repos.
[05:41]
Viktor Petersson
So it's more like it has driven a cultural shift, or a behavioral shift, across the teams to apply the same methodology across private repositories.
[05:49]
Chris Swan
Absolutely.
[05:51]
Viktor Petersson
Okay, so that's really interesting.
[05:52]
Viktor Petersson
Let's dive in and unpack a little bit what an OpenSSF Scorecard is.
[05:57]
Viktor Petersson
I know we're probably going to dive into a demo in a little bit, but before we get to the actual demo,
[06:01]
Viktor Petersson
maybe we can start with what constitutes the scorecard: what goes into it, what the criteria are, what gets picked up and so on.
[06:10]
Chris Swan
So it's looking at a bunch of different measures to produce a composite score.
[06:18]
Chris Swan
So things like dependencies.
[06:22]
Chris Swan
Are you diligently managing your dependencies?
[06:25]
Chris Swan
And one of the things that it can go and measure there is: are you pinning your dependencies to SHAs?
[06:31]
Chris Swan
Right, right.
[06:33]
Viktor Petersson
So not only versions, but even SHAs.
[06:36]
Viktor Petersson
Okay.
[06:39]
Chris Swan
And so it's looking into your GitHub Actions workflows and your lock files for the package managers that are supported, and checking that you've got things pinned to SHAs there.
[06:56]
Chris Swan
And so you need to do that to get a score of 10.
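To make the pinning idea concrete, here is a rough sketch of the pattern the Pinned-Dependencies check wants to see in a GitHub Actions workflow; the action name is just a common example and the SHA below is a placeholder rather than anything from Atsign's repos.

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pinned only to a mutable tag: Scorecard flags this, because the
      # tag can later be moved to point at different code.
      #   - uses: actions/checkout@v4
      #
      # Pinned to a full commit SHA, with the human-readable tag kept as
      # a trailing comment (tools like Dependabot or Renovate can keep
      # both up to date). The SHA here is a placeholder, not a real
      # actions/checkout commit.
      - uses: actions/checkout@0000000000000000000000000000000000000000 # v4
      - run: echo "build and test steps go here"
```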
[07:01]
Chris Swan
So that's just one piece.
[07:03]
Chris Swan
Another piece would be are you running CI and doing automated testing, are you doing static source code analysis or are you doing dynamic source analysis?
[07:18]
Viktor Petersson
And how does it do this?
[07:19]
Viktor Petersson
Does it look at, like, oh, I'm using SonarCloud for automated audits, or I'm using Dependabot for vulnerability scanning?
[07:27]
Viktor Petersson
Like how does it actually pick up on those variables?
[07:30]
Chris Swan
So it's got a bunch of, I guess, the commonly used tools applied to the commonly used languages and frameworks, so that it's able to go and measure things.
[07:43]
Chris Swan
There are definitely white spaces around that.
[07:48]
Chris Swan
And being an open source project itself, there are routes for people to contribute and get their preferred tools into the fold.
[08:01]
Chris Swan
So it's not entirely covering every single possibility, but with the stuff we've done, and we're a little bit off the beaten path with a lot of stuff in Dart and Flutter, we quite often find that there's not, for example, tool support for Dart in LaunchDarkly or Honeycomb or Snyk.
[08:30]
Chris Swan
And yet we've found that, and maybe because of the effort within Google to get Dart scorecarded, it is actually reasonably well covered by the Scorecard framework.
[08:42]
Chris Swan
And so the only place that we really run into trouble there is on dynamic analysis, because that's normally third party tools.
[08:52]
Chris Swan
And so if you look at Dart itself, it's using an open fuzzing project.
[08:59]
Chris Swan
But that's kind of unique to Dart itself.
[09:03]
Chris Swan
It's not like I can pick up that open fuzzing project and apply it to my use of Dart.
[09:09]
Chris Swan
So you end up with a few things that are difficult to accomplish.
[09:14]
Chris Swan
And that kind of leads me into one of the measures, which is compliance with the OpenSSF Best Practices, previously called the CII Best Practices.
[09:26]
Chris Swan
And that's quite an extensive questionnaire that you can sort of fill in on an online form.
[09:33]
Chris Swan
And there's three levels there.
[09:37]
Chris Swan
So there's a sort of passing level which gets you, I think a score of five and then there's a silver level and your score goes up a bit more and then there's a gold level.
[09:45]
Chris Swan
And even just getting a passing score is a lot of spade work.
[09:53]
Chris Swan
So that was one of the things as we were implementing it initially.
[09:58]
Chris Swan
We kind of filled out those forms as a sort of first pass at here's where we are.
[10:04]
Chris Swan
And it gave us a measure of okay, there's a bunch of things we're going to have to go and do to actually finish here.
[10:11]
Viktor Petersson
And are these like, yes, we're using CI, more of a subjective assessment, or like, yes, we have unit tests, yes, we have integration tests?
[10:19]
Viktor Petersson
Are they actually going like, what's your code coverage or to what degree?
[10:24]
Chris Swan
So a lot of it is simply questionnaire.
[10:28]
Chris Swan
It does ask for evidence and I think where people provide evidence, then that provides a bit of an audit trail for anybody coming and checking their homework, especially where it is open source and they can go and look at the repo and see that thing has actually been implemented.
[10:46]
Chris Swan
Right.
[10:47]
Chris Swan
So if you take something like testing, one of the questions would be, are you doing tests?
[10:53]
Chris Swan
And you can say yes.
[10:54]
Chris Swan
But another question might be something like: do you have a policy that new tests should be added every time you add new functionality? And you may have a document that contains a written-down version of that policy.
[11:10]
Chris Swan
Or you might more informally have an expectation in a dev team that's just how we do things around here.
[11:17]
Chris Swan
It might even be evidenced in not one place, but in a series of PRs where it's sort of, you can see the tests are there or you can see that when the tests weren't there, the PR reviewer was saying, what about tests?
[11:33]
Viktor Petersson
Right, right.
[11:34]
Viktor Petersson
Okay.
[11:35]
Viktor Petersson
Yeah.
[11:36]
Viktor Petersson
Because I think, I mean, there's starting to emerge more of a blueprint for how software is built on GitHub today, with GitHub Actions now becoming kind of the blueprint, at least for a lot of companies out there.
[11:48]
Viktor Petersson
So I guess you can infer a lot
[11:51]
Viktor Petersson
based on just parsing the YAML files from GitHub Actions to see what you're doing.
[11:56]
Viktor Petersson
But it sounds like you're almost getting into compliance territory, like SOC 2 or whatever, when they ask for evidence and so on.
[12:04]
Viktor Petersson
And with regard to these workflows, does it infer things like: you require pull requests to merge to master, you require more than one review, signed commits?
[12:17]
Viktor Petersson
Does it go into that territory as well, or is that.
[12:19]
Chris Swan
Absolutely.
[12:20]
Chris Swan
So one of the areas that it looks at is branch protection.
[12:23]
Viktor Petersson
Okay.
[12:24]
Chris Swan
And so it'll actually look at the detail of the branch protection config on your repo and it'll specifically award marks based on a bunch of the things that you just rattled off there.
[12:36]
Chris Swan
So if you are not mandating reviews, then that's a big thing.
[12:44]
Chris Swan
If you're not having mandatory tests, that's another one, and so on.
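For readers who want to see what the Branch-Protection check is inspecting, those settings live in the repo's GitHub configuration rather than in the codebase. One way to express them as code is the community "Settings" (Probot) GitHub app's settings file, sketched below with assumed values; the field names mirror GitHub's branch protection REST API, and Atsign may well just configure this through the UI instead.

```yaml
# .github/settings.yml, read by the third-party "Settings" (Probot) app.
branches:
  - name: trunk
    protection:
      # Scorecard's Branch-Protection check rewards mandatory reviews...
      required_pull_request_reviews:
        required_approving_review_count: 1
        dismiss_stale_reviews: true
      # ...and mandatory status checks (tests) before merging.
      required_status_checks:
        strict: true
        contexts:
          - unit-tests   # hypothetical check name
      enforce_admins: true
      restrictions: null
```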
[12:51]
Chris Swan
So there's certain things that it's not very pushy about.
[12:54]
Chris Swan
So signed commits.
[12:57]
Chris Swan
I could choose to have mandated signed commits.
[13:02]
Chris Swan
I think that's not something that's particularly friendly in an open source community.
[13:09]
Viktor Petersson
With the new changes at GitHub though, where you can actually use your SSH key to sign, which I think is kind of moot, but you can do it.
[13:15]
Viktor Petersson
The bar is slightly lower than having to set up PGP for that, I guess.
[13:18]
Chris Swan
But yes, it absolutely is.
[13:20]
Chris Swan
And certainly that's my preferred way of signing commits these days.
[13:24]
Chris Swan
Yeah, but that's some.
[13:28]
Chris Swan
That's one of the areas where I'm glad it's not a measure in the scorecard as yet.
[13:34]
Chris Swan
It's something that I'm not super keen on inflicting on everybody that wants to come along and make a PR,
[13:40]
Chris Swan
especially when we get into, you know, the Hacktoberfest type of contributors, where you're very often dealing with people who are new to open source and just trying to get started.
[13:58]
Viktor Petersson
Yeah, I mean, and that's spot on, I guess, for open source projects.
[14:02]
Viktor Petersson
It's probably less important there; it's more like a corporate policy, for instance, where it's clearly:
[14:06]
Viktor Petersson
we require signed commits for everything that goes into our repos, and we can control that because they're all employees.
[14:13]
Viktor Petersson
Whereas if it's an open source project.
[14:15]
Viktor Petersson
Very different.
[14:15]
Viktor Petersson
Yeah, I completely agree with that.
[14:19]
Viktor Petersson
And then talk to me for a bit about language support, because a lot of these things are language specific.
[14:24]
Viktor Petersson
Right.
[14:24]
Viktor Petersson
You mentioned Dart, well, Dart and Flutter.
[14:28]
Viktor Petersson
How about other languages like Rust, Python, Go?
[14:31]
Viktor Petersson
What's the ecosystem like for those in terms of that level of features?
[14:35]
Chris Swan
So it's reasonably well covered.
[14:39]
Chris Swan
If I look at the languages we've been working in, we're quite polyglot because we're building a protocol and a platform and providing a whole bunch of SDKs to support that.
[14:52]
Chris Swan
So I've been working with stuff in C, C++, Java, Rust, Go, Python and MicroPython, and all of those seem to be reasonably well understood by the scorecard.
[15:13]
Chris Swan
Some of those are challenging in terms of again things like dynamic analysis tools.
[15:22]
Chris Swan
So where you're kind of on the well trodden path with languages like say Java, then the route is very clear.
[15:32]
Chris Swan
You could go and pick from a whole range of dynamic analysis tools and you'd be good with that.
[15:39]
Chris Swan
Whereas with some of the less traveled areas, Micropython springs to mind, you're a bit more sort of on your own with that.
[15:53]
Chris Swan
Or it may just be that you can't actually do dynamic analysis and you just have to accept that.
[16:02]
Chris Swan
And you're getting a score of 8, which gets you a green badge.
[16:10]
Chris Swan
I think there's a bit of a Pareto thing to this.
[16:13]
Chris Swan
So I think it's kind of 20% of the effort to get 80% of the score.
[16:17]
Viktor Petersson
Right.
[16:18]
Chris Swan
And then it's all uphill from there because it's 80% of the effort to get the other residual 20%.
[16:26]
Chris Swan
And so yeah, I'm quite happy that our range of scorecards is all sort of in the eights.
[16:35]
Chris Swan
Eight upwards.
[16:37]
Chris Swan
Getting nines and above is really super hard.
[16:43]
Chris Swan
And you don't even see that on the Google Repos or even the OpenSSF's own repos.
[16:52]
Chris Swan
And so good luck to people trying to accomplish that.
[16:58]
Chris Swan
And 10 kind of is almost a horizon objective.
[17:02]
Viktor Petersson
You could do it in Hello World, but a real project might be difficult.
[17:05]
Chris Swan
Right.
[17:06]
Chris Swan
Well, so actually even for Hello World, you'd have to put so much stuff around Hello World to get that.
[17:18]
Chris Swan
How are you going to test Hello World?
[17:21]
Chris Swan
Where's your test that it prints it?
[17:22]
Viktor Petersson
Great.
[17:24]
Chris Swan
So yeah, actually you can test Hello World pretty easily, but how are you going to fuzz it?
[17:32]
Chris Swan
And so if you've got the right language, then of course you can just wheel out a fuzzer and say I'm going to fuzz my Hello World.
[17:40]
Chris Swan
But if not, a bunch of these things are fairly challenging. And a lot of the stuff, especially when it gets to how you're going to get gold on the best practices, is getting into questions of organization and culture and policy about how you do development, as much as it is the artifacts in front of you.
[18:06]
Viktor Petersson
So it goes into how you run your team, how you do agile and so on as well.
[18:12]
Chris Swan
Not quite that invasive because I think it has to be very accommodating to, you know, different strokes for different folks.
[18:21]
Chris Swan
But then again, as I kind of go through those questionnaires, it feels to me like there's a lot of bias towards C, for instance; there's a whole bunch of sins of the past that have been committed in C development.
[18:39]
Chris Swan
And so there's a whole bunch of best practices to make sure that we're not repeating those sins of the past.
[18:45]
Chris Swan
And of course, if you're using something other than C, like Rust, where memory safety isn't the same concern as it was before, then you're maybe answering 'not applicable' to those questions rather than 'yes, we're doing this stuff to make sure that we're not repeating the sins of the past'.
[19:06]
Viktor Petersson
Right.
[19:07]
Viktor Petersson
So it sounds like it's opinionated about a lot of things. With regard to things like linting and best practices,
[19:14]
Viktor Petersson
Is that also covered or is that separate entirely?
[19:18]
Chris Swan
So linting is definitely a piece of the puzzle.
[19:23]
Chris Swan
And it's one of the things that's got me doing markdown linting for our docs repos.
[19:33]
Viktor Petersson
That is a pain in the ass.
[19:34]
Viktor Petersson
I've done that.
[19:35]
Viktor Petersson
And it's definitely counterintuitive.
[19:37]
Viktor Petersson
When you start doing it, you get.
[19:38]
Chris Swan
Used to it, but yes, exactly.
[19:41]
Chris Swan
And I think you do get used to it.
[19:44]
Chris Swan
And then these become sort of norms in the organization.
[19:48]
Chris Swan
And so I've recently seen colleagues now submitting nicely linted markdown.
[19:56]
Chris Swan
Even in the repos where we don't have automated markdown linting because it's becoming part of their standard tool workflow, it's sort of, oh, I've written some markdown, so let's just lint it before I do a commit with it.
[20:10]
Chris Swan
And it's kind of great to see that discipline taking hold.
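As an illustration of the kind of markdown linting being described, a minimal GitHub Actions job might look like the sketch below; the specific linter (markdownlint-cli) is an assumption on my part, not necessarily what Atsign runs.

```yaml
name: Lint docs
on:
  pull_request:
    paths:
      - '**/*.md'
jobs:
  markdownlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # pin to a SHA in a real repo
      # markdownlint checks the Markdown files against its rule set and
      # fails the job on any violation.
      - run: npx markdownlint-cli '**/*.md' --ignore node_modules
```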
[20:15]
Chris Swan
If I go back to sort of earlier in our journey, then we introduced conventional commits and semantic PR to check that people were doing conventional commits.
[20:28]
Chris Swan
And at first that was a bit of a lift for the organization.
[20:36]
Chris Swan
And especially as we had new people coming on board, less experienced interns and the like, it was sort of, oh, I don't understand what this thing is, and so I'm not going to do it.
[20:51]
Chris Swan
Whereas now it's just so embedded in how we do things.
[20:56]
Chris Swan
My favorite definition of culture is the way things work around here.
[21:01]
Chris Swan
And so conventional commits is part of the way things work around here at Atsign.
[21:06]
Chris Swan
And so even though we've never mandated it, we have the Semantic PR check installed as an app, and it'll show the horrible little red cross for anybody that hasn't done a conventional commit.
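Atsign uses the Semantic PR app itself; for teams on plain GitHub Actions, a commonly used equivalent is the amannn/action-semantic-pull-request action. This is my suggestion rather than something from the episode, sketched roughly as follows.

```yaml
name: Semantic pull request
on:
  pull_request_target:
    types: [opened, edited, synchronize]
jobs:
  check-title:
    runs-on: ubuntu-latest
    steps:
      # Fails the check (the "horrible little red cross") unless the PR
      # title follows Conventional Commits, e.g. "feat: add x" or
      # "fix(docs): correct typo".
      - uses: amannn/action-semantic-pull-request@v5
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```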
[21:22]
Chris Swan
And we've then seen that as a whole generation of interns has flown through the organization, the people before them, kind of getting them started, just said, this is part of how we work around here.
[21:39]
Chris Swan
And it kind of happened much more organically. The other thing, thinking back a bit, was the implementation of Allstar, and this is another OpenSSF project, and something that I think is a good foundation for doing scorecards.
[22:02]
Chris Swan
So this is again, an app that can be installed into a GitHub organization.
[22:09]
Chris Swan
There's a certain amount of config that can be applied to it from a dot repo, but it's looking for some of the same things as Scorecard is.
[22:19]
Chris Swan
So: do you have branch protection?
[22:22]
Chris Swan
Do you make sure that you don't have binaries in your repos? Have you got a SECURITY.md? These kinds of questions.
[22:32]
Chris Swan
So a bunch of it is about repo content and a bunch of it is about repo config.
[22:38]
Chris Swan
And so it's checking across those areas and it'll automatically raise issues if something is found or if something reoccurs.
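Allstar's configuration lives in a repo in the organization, conventionally one called `.allstar`. The sketch below is based on my reading of the Allstar docs rather than Atsign's actual config, so treat the file names and fields as indicative.

```yaml
# .allstar/allstar.yaml : org-wide opt-in/opt-out
optConfig:
  optOutStrategy: true        # enabled for every repo unless opted out
  optOutRepos:
    - experimental-sandbox    # hypothetical repo name
---
# .allstar/binary_artifacts.yaml : one policy's settings
optConfig:
  optOutStrategy: true
action: issue                 # open (or reopen) an issue when a binary is found
```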
[22:52]
Chris Swan
So, for example, at the start of the year we had the CES show in Vegas.
[23:03]
Chris Swan
One of my colleagues went off to there.
[23:05]
Chris Swan
We were demoing some stuff with our partners at Qt, and he came back and very enthusiastically kind of checked in all of the demo code.
[23:14]
Chris Swan
And in amongst that, I think it was Python, there was a whole bunch of compiled Python.
[23:23]
Viktor Petersson
Right.
[23:24]
Chris Swan
And so immediately the Allstar checker kind of then reopens the issue on the demo repo as a 'we shouldn't have binaries in here'.
[23:37]
Chris Swan
And it's kind of, do we need that binary there?
[23:39]
Chris Swan
No, we don't actually need that binary there.
[23:41]
Chris Swan
So we can go and purge it.
[23:44]
Viktor Petersson
So one of the things that, in my experience at least, when applying security, or I should say automated security, with pull request checks and whatnot, is that it always, at least in the beginning, introduces a fair bit of friction before people are used to it.
[23:59]
Viktor Petersson
Right.
[24:00]
Viktor Petersson
And in particular, when you need to get things out fast, it's always a trade off between speed of delivery and being secure.
[24:07]
Viktor Petersson
What has been your experience around that?
[24:09]
Viktor Petersson
Because, I mean, at least in my experience there is a pain point around that as you start, before it becomes 'how we do things around here'. Before you get to that point, how much did you feel it slowed things down?
[24:24]
Chris Swan
So we've not done much to the sort of mainline path through, you know, continuous delivery pipelines.
[24:37]
Chris Swan
And I've always had a target of 10 minutes, the sort of Dave Farley approach.
[24:44]
Chris Swan
People should be getting feedback quickly enough that they're not context switching out of the work that they were doing.
[24:50]
Chris Swan
Right.
[24:52]
Chris Swan
And actually, you know, I generally try and orchestrate the pipelines so that they're completing in less than five minutes.
[25:02]
Viktor Petersson
That's doable for smaller projects.
[25:03]
Viktor Petersson
But for a very big project, it's a very difficult target to meet if you have a lot of code that needs to be compiled, for instance across platforms and whatnot.
[25:12]
Chris Swan
And so we try and carve things up into small enough chunks that we are looking at small pieces and getting that quick feedback.
[25:22]
Chris Swan
The scorecard itself is just another parallel GitHub action workflow.
[25:28]
Chris Swan
Right.
[25:29]
Chris Swan
So it's non disruptive.
[25:32]
Chris Swan
What's occasionally happening there is, you know, colleagues are doing work that ends up dinging the scorecard, and then there's some follow-up remedial work to get the score back to where we want it to be.
[25:48]
Viktor Petersson
Right.
[25:48]
Viktor Petersson
So you don't have it as an acceptance criterion where you block it.
[25:51]
Viktor Petersson
If somebody pushes something that drops the score, you'd do it as a follow-up task rather than a 'hold on, you're not allowed to merge this PR'.
[25:59]
Chris Swan
So it's not implemented as a kind of pre-check on a PR.
[26:06]
Chris Swan
It's more as things are being merged to trunk.
[26:09]
Chris Swan
It's constantly updating to reflect the reality of what it finds there.
[26:13]
Viktor Petersson
Right.
[26:14]
Viktor Petersson
Okay, that's fair enough.
[26:16]
Viktor Petersson
So we covered a couple of the basics of what it is.
[26:19]
Viktor Petersson
Do you want to dive into maybe a demo how this actually works?
[26:22]
Viktor Petersson
Because I think that would be a super interesting way to show people how this actually works in the real world.
[26:28]
Viktor Petersson
Because we talk about theory behind it, but seeing the code always makes it easy to understand it.
[26:33]
Viktor Petersson
At least I find it.
[26:34]
Chris Swan
Yeah, I can do sort of a quick walkthrough.
[26:39]
Chris Swan
So let me figure out how to do a screen share with this thing.
[26:44]
Viktor Petersson
There we go.
[26:45]
Chris Swan
All right, so this is a summary page that we have off the profile for Atsign.
[26:54]
Chris Swan
And these are the repos that I kind of called out from that profile.
[27:00]
Chris Swan
And so each of those has got an accompanying scorecard and we just do a little table here to show them all off.
[27:09]
Chris Swan
And so I'll jump into.
[27:13]
Chris Swan
So this is the implementation of our personal data service.
[27:19]
Chris Swan
And this was the first thing that I implemented scorecard on.
[27:24]
Chris Swan
So I suggest to people that they kind of find a representative repo when they're kind of first doing this and cut their teeth on that representative repo and then kind of rinse and repeat the same processes across the rest of the repos that they want to do in their organization.
[27:45]
Chris Swan
Actually this one turned out to be maybe a bit harder than being representative.
[27:54]
Chris Swan
So the work involved in doing this one was maybe more than some of the others that I could have chosen, but perhaps that actually did make it a good place to start, because once I'd done this one, pretty much all of the rest after that were actually easier.
[28:11]
Chris Swan
So I can see the scorecard itself here and that's showing a nice green badge and a score of 8.9.
[28:21]
Chris Swan
If I was to click on that scorecard, it'll take you into this results file.
[28:28]
Chris Swan
So this is a SARIF file that's been generated by the last actions run.
[28:35]
Chris Swan
Obviously this is just an eye chart of JSON, which is pretty difficult to make sense of unless you have got tools to pick it apart.
[28:51]
Chris Swan
So we can also look at how that got generated in the first place.
[28:58]
Chris Swan
So going into my GitHub Actions here, we can see that we've got the Scorecard supply-chain security action that's running every time we have a commit being merged into trunk.
[29:14]
Chris Swan
And so all of our PRs are doing this scorecard analysis.
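The workflow being described follows the pattern from the ossf/scorecard-action project; a trimmed-down sketch looks roughly like this, with mutable tags where a real setup would pin SHAs and a branch name matching the trunk-based flow discussed above.

```yaml
name: Scorecard supply-chain security
on:
  push:
    branches: [ trunk ]
  schedule:
    - cron: '30 2 * * 1'      # also re-score weekly, independent of pushes
permissions: read-all
jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write  # upload the SARIF results
      id-token: write         # needed when publish_results is true
    steps:
      - uses: actions/checkout@v4
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
      # Surfacing the results in the code-scanning UI is friendlier than
      # reading the raw SARIF JSON mentioned earlier.
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```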
[29:20]
Chris Swan
And I can have a look at what actually went on in that.
[29:25]
Chris Swan
So you know, it's pulling the action from Google's cloud registry.
[29:33]
Chris Swan
The main sort of work is happening here as it does run analysis.
[29:38]
Viktor Petersson
So that entire analysis only took 31 seconds.
[29:41]
Viktor Petersson
That's pretty quick, actually.
[29:43]
Viktor Petersson
Is that a separate workflow?
[29:44]
Chris Swan
Yes, and it's because it's not doing the tests, it's looking for the evidence that the tests are being done.
[29:56]
Viktor Petersson
Right.
[29:56]
Chris Swan
And so if I was to take something like static source analysis, it's not doing the static source analysis, it's looking for the fact that the static source analysis is happening.
[30:10]
Viktor Petersson
So it parses your GitHub Actions workflows, I presume to look for that.
[30:14]
Chris Swan
Absolutely.
[30:15]
Chris Swan
And yeah, so you could be.
[30:18]
Chris Swan
If I go into one of the other repositories I think might be a good example here.
[30:26]
Chris Swan
If I look at closed PRs on this and actually I've picked the wrong one.
[30:37]
Chris Swan
I knew it was a C language one, but I think it's this one here.
[30:46]
Chris Swan
And this is just a docs one.
[30:48]
Chris Swan
But we're using SonarCloud.
[30:49]
Viktor Petersson
Right.
[30:50]
Chris Swan
For some of the tests that we wanted to do for this ESP32 flavor of C, I wasn't finding an open source tool that I was comfortable with.
[31:08]
Chris Swan
And so we're using the sort of open-source-enabled version of the SonarCloud service in this case.
[31:16]
Chris Swan
And so the scorecard is consuming the output of SonarCloud as part of building the score.
[31:25]
Viktor Petersson
Okay, so using.
[31:26]
Viktor Petersson
So if that quality gate failed, it would pick up on that.
[31:30]
Viktor Petersson
Okay.
[31:30]
Chris Swan
Yes, it would.
[31:32]
Viktor Petersson
Okay, so that makes it slightly more intelligent.
[31:34]
Viktor Petersson
Than just checking that you actually do have some tooling in place.
[31:38]
Chris Swan
Oh yeah.
[31:39]
Chris Swan
It's not just looking have I got the tooling?
[31:41]
Chris Swan
But what is the tooling saying about the environment that it's finding?
[31:46]
Chris Swan
And so it's going through that whole range of checks and actually it's good to visualize them.
[31:55]
Chris Swan
So one of the Linux Foundation folks put together this Scorecard report app, which visualizes them on a radar chart here.
[32:06]
Chris Swan
And this gives us quite a nice way of walking through the various things it's looking for and the score that's kind of coming out of this particular run that happened.
[32:20]
Chris Swan
So binary artifacts.
[32:23]
Chris Swan
We touched upon branch protection.
[32:24]
Chris Swan
We talked about how the CII best practices are those questionnaires.
[32:30]
Chris Swan
And so you can see there that there's a passing score but not gold, whereas pretty much the rest of it is looking 10 out of 10 there.
[32:42]
Chris Swan
So code review, contributors, dangerous workflows, dependency update tool.
[32:48]
Chris Swan
So all of those checks have passed. Fuzzing, we're absolutely nowhere.
[32:52]
Chris Swan
And this kind of comes to that.
[32:57]
Chris Swan
There isn't really a fuzzing tool I can deploy against Dart code at the moment.
[33:01]
Chris Swan
So it's found a license, it's found evidence that the whole thing's maintained.
[33:06]
Chris Swan
It's happy with how we're doing.
[33:08]
Chris Swan
Packaging; pinned dependencies is almost a 10, so there's a slight ding there.
[33:16]
Chris Swan
Static source code analysis is happening.
[33:21]
Chris Swan
In that case, Dart has a built-in analyzer, dart analyze, so we can simply run that and feed it in. We have a security policy.
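Chris mentions the built-in analyzer; in CI that usually amounts to a job along the lines of this sketch (the action and flag choices are mine, not necessarily Atsign's).

```yaml
name: Dart static analysis
on:
  pull_request:
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dart-lang/setup-dart@v1
      # dart analyze is the analyzer that ships with the Dart SDK;
      # --fatal-infos makes even informational findings fail the job.
      - run: dart analyze --fatal-infos
```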
[33:36]
Chris Swan
We're not doing signed releases.
[33:39]
Chris Swan
And so that might get us into some of the other OpenSSF projects like SLSA, and various tools around building software bills of materials and signing artifacts and...
[33:55]
Viktor Petersson
Those sorts of things and signing releases.
[33:58]
Viktor Petersson
So that's not merely like signing a git tag; it's more comprehensive than that.
[34:06]
Viktor Petersson
If you git tag it and release, and sign that, that's not sufficient for the signing process.
[34:11]
Viktor Petersson
It requires to go further than that.
[34:12]
Chris Swan
Yeah, that's about taking the package that's the result of what you're building here and signing that package in a way that's going to be meaningful to a consumer of that package.
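One hedged example of the kind of artifact signing being described is keyless signing with Sigstore's cosign in a release job. This is my illustration of the general idea rather than anything Atsign or OpenSSF prescribes; the build script is hypothetical and flags may need adjusting for your cosign version.

```yaml
  sign-release:
    runs-on: ubuntu-latest
    permissions:
      id-token: write    # lets cosign do keyless (OIDC-based) signing
      contents: write
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh  # hypothetical script producing dist/app.tar.gz
      - uses: sigstore/cosign-installer@v3
      # Sign the built package itself, not just a git tag, so consumers
      # can verify the artifact they actually download.
      - run: cosign sign-blob --yes --output-signature dist/app.tar.gz.sig dist/app.tar.gz
```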
[34:27]
Viktor Petersson
Okay, so it can still be used standalone to inspect, essentially.
[34:30]
Viktor Petersson
Yeah, got it.
[34:32]
Viktor Petersson
Okay.
[34:33]
Viktor Petersson
Okay.
[34:35]
Viktor Petersson
So this obviously gives you a good overview of where you stand.
[34:38]
Viktor Petersson
And you said this is an open source project from the OpenSSF project.
[34:43]
Viktor Petersson
Okay, that's really cool.
[34:45]
Viktor Petersson
So the other thing I wanted to dive in on, which you gave me a really good segue into, is SBOMs.
[34:51]
Viktor Petersson
Right.
[34:52]
Viktor Petersson
Because that's.
[34:53]
Viktor Petersson
Well, there are a lot of talks around that.
[34:56]
Viktor Petersson
And not least at the conference, at State of Open, there were a lot of talks around SBOMs.
[35:03]
Viktor Petersson
Well, I guess maybe we should start with: what's an SBOM and why should people care?
[35:09]
Chris Swan
So an SBOM is a software bill of materials.
[35:13]
Chris Swan
And what it's doing is saying, these are the components that I've used within the software that I am providing to you.
[35:24]
Chris Swan
Whether that's open source or whether that's a commercial product.
[35:29]
Chris Swan
People should care because you don't want to be using things that have got known vulnerabilities in there.
[35:36]
Chris Swan
And the SBOM really provides the mechanics for knowing about those known vulnerabilities and whether a given thing is carrying known vulnerabilities with it.
[35:53]
Chris Swan
This really came about from a number of people in the security side of the industry focusing their attention on the US Federal government.
[36:06]
Chris Swan
Initially, what they were trying to do was get a law passed through Congress which would essentially say if the federal government is buying something, then the federal government is going to insist on a software bill of materials.
[36:22]
Chris Swan
And if inside of that software bill of materials, it's clear that you're selling something that has known vulnerabilities in it, then that's probably going to be a.
[36:30]
Viktor Petersson
Problem that would most certainly annoy a lot of legacy vendors.
[36:36]
Chris Swan
Yeah, so that's not what happened.
[36:41]
Chris Swan
What ended up happening was rather than a law being passed through Congress, there was an executive order issued by the Biden administration which is having essentially the same effect.
[36:57]
Chris Swan
So it's still providing a mandate for federal purchasers to.
[37:04]
Chris Swan
To require software bills of materials from their suppliers.
[37:08]
Chris Swan
And so it's swinging the searchlight onto: are suppliers selling things to the government that contain known vulnerabilities, and what are they doing about that?
[37:23]
Chris Swan
Now, this was done from the very start with the sort of notion that if this was being done for the federal government, then it would drag kind of everybody else along with it.
[37:36]
Chris Swan
If everybody supplying the federal government had to do this, then they might as well make their software bill of materials available to all of their other customers.
[37:47]
Chris Swan
And so everybody would benefit from supply chain transparency and knowledge of what's inside my stuff and what's got known vulnerabilities in it.
[38:00]
Chris Swan
This then gets you into building a set of tooling and processes around.
[38:07]
Chris Swan
Okay, when I looked, when I bought the thing, it had no known vulnerabilities in.
[38:11]
Chris Swan
But we're discovering new vulnerabilities every day.
[38:14]
Chris Swan
And so are there now vulnerabilities in it, because there's new stuff being found, and if so, what's happening to resolve that?
[38:25]
Chris Swan
Right.
[38:26]
Chris Swan
You know, is the thing being patched, and am I having to redeploy the patched version, or am I having to put other mitigations in place in order to compensate for the known vulnerabilities that I've got in the overall environment?
[38:42]
Viktor Petersson
And I mean, from my understanding there are, like, debates happening right now about what an SBOM should look like.
[38:49]
Viktor Petersson
There is no gold standard to this date about what an SBOM looks like.
[38:53]
Viktor Petersson
It's some form of JSON usually, but I don't think there is a complete standard for what that looks like to this day.
[39:00]
Viktor Petersson
Right.
[39:01]
Chris Swan
So there's a couple of popular representations of SBOMs.
[39:07]
Chris Swan
And so the tooling as it stands today tends to use one or other or both of those as an input, and will normally be consuming a vulnerability feed, which will be based upon CVEs but may also include other sources of vulnerabilities.
[39:37]
Chris Swan
And so that's helping people take a look at what they know they have and what they think is potentially going to be wrong with it, and then make risk-based decisions according to what that reveals, and then their overall threat model and risk posture and other considerations that go along with that.
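As a concrete, hedged illustration of that kind of tooling: an SBOM you have been handed by a supplier can be checked against a vulnerability feed with an off-the-shelf scanner. The sketch below assumes Anchore's grype, one of several options and not a tool mentioned in the episode, and a hypothetical file name.

```yaml
  scan-sbom:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install the grype scanner (an assumed choice; osv-scanner and
      # others work similarly).
      - run: curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
      # Match every component in the supplier's SBOM against grype's
      # vulnerability feed and fail on anything High severity or worse.
      - run: grype sbom:./vendor-sbom.spdx.json --fail-on high
```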
[40:05]
Viktor Petersson
Right.
[40:05]
Viktor Petersson
And if you generate, I mean I would imagine you would always generate these files in your CI CD pipeline as part of your release build, right?
[40:14]
Viktor Petersson
Where you build a release with semantic versioning, and as part of that release you have an SBOM, right?
[40:20]
Viktor Petersson
And then you provide that to your customer essentially and say here is the SBOM for this particular version and I would imagine then fast forward into the future a little bit.
[40:32]
Viktor Petersson
That file would then be uploaded to some kind of monitoring tool.
[40:36]
Viktor Petersson
So that would use data from, like, Snyk or something similar to scan that dependency list against, and then flag that there are vulnerabilities.
[40:45]
Viktor Petersson
But with every new version of the software, imagine in the case of Screenly, for instance, our signage players upgrade automatically as soon as we push.
[40:55]
Viktor Petersson
Essentially there's a canary phase to it.
[40:57]
Viktor Petersson
But that snapshot was only relevant for that release, so that you have to constantly provide new SBOMs with every new release.
[41:05]
Viktor Petersson
Right.
[41:06]
Viktor Petersson
So there's a lot of moving parts.
[41:07]
Viktor Petersson
Have you seen any tooling around that accommodates that moving part of it?
[41:13]
Chris Swan
Yes, I have.
[41:15]
Chris Swan
So this is a very sort of hot area at the moment.
[41:21]
Chris Swan
So there's a lot of development happening with tooling to help people understand what they've got, understand where the vulnerabilities lie in what they've got,
[41:32]
Chris Swan
and keep up to date with both.
[41:34]
Chris Swan
So as you say, the SBOMs are going to be constantly iterating, especially where people are consuming stuff from the end of somebody else's continuous delivery pipeline.
[41:46]
Chris Swan
So in something like your case, where you're making very frequent updates to production systems, the SBOM associated with that is going to be, you know, tumbling over pretty rapidly.
[42:00]
Chris Swan
And so one of your customers concerned about vulnerability would hopefully see a process where a CVE gets raised.
[42:12]
Chris Swan
There's a known vulnerability in probably not even the code that you're writing, but a whole bunch of the underlying dependencies.
[42:21]
Chris Swan
And so that will initially raise a red flag and say, I'm now dealing with vulnerable software.
[42:31]
Chris Swan
But at the same time you'll be getting notifications into, say, Dependabot saying there's a known vulnerability, you need to bump to this version.
[42:40]
Chris Swan
You crank the continuous delivery pipeline handle, it spits out a new set of artifacts, and it's got a new SBOM saying, I'm now using the known non-vulnerable version.
[42:51]
Chris Swan
We're all good here.
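To sketch the "crank the handle, get a fresh SBOM" step being described: a release job can regenerate the SBOM from whatever actually shipped and attach it to the release. The syft tool and file names here are my assumptions, not Screenly's or Atsign's actual setup, and the job assumes a workflow triggered on tag pushes.

```yaml
  release-sbom:
    runs-on: ubuntu-latest
    permissions:
      contents: write            # needed to upload release assets
    steps:
      - uses: actions/checkout@v4
      # Install syft (an assumed SBOM generator) and describe exactly
      # what is in this revision as SPDX JSON.
      - run: curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
      - run: syft dir:. -o spdx-json > "sbom-${{ github.ref_name }}.spdx.json"
      # Attach the SBOM to the GitHub release for this tag, so each
      # shipped version carries its own bill of materials.
      - run: gh release upload "${{ github.ref_name }}" "sbom-${{ github.ref_name }}.spdx.json"
        env:
          GH_TOKEN: ${{ github.token }}
```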
[42:53]
Chris Swan
Now that's a very simplistic telling of the story though, because where all of this gets problematic is if I'm using an underlying dependency as part of software that I'm shipping and it becomes a known vulnerability, but it's not fixed immediately.
[43:15]
Chris Swan
But I'm not even using the bit of it that's broken.
[43:19]
Viktor Petersson
Right, right.
[43:20]
Viktor Petersson
This comes back to like the vulnerability checks in general.
[43:23]
Viktor Petersson
Right.
[43:24]
Viktor Petersson
And that's where it gets really murky.
[43:27]
Chris Swan
And so that more sort of nuanced analysis of I've got a known vulnerability, it's not getting fixed immediately, but it doesn't matter because my code isn't using the methods that are understood to be problematic, then actually all is still well in terms of that particular product.
[43:53]
Chris Swan
But the ability to measure that and articulate that is, because of that additional nuance, much harder to reach at the moment.
[44:04]
Chris Swan
And so I think because we're in the early days after the executive order and the early days of building tools to deal with that, we're still in sort of the black and white case of are there vulnerabilities or are they not?
[44:22]
Chris Swan
And I think there's a whole bunch of especially open source, where things are moving pretty quick and that stuff gets fixed.
[44:32]
Chris Swan
That's great.
[44:33]
Chris Swan
So say Python cryptography; you know, we use Python, and cryptography is at the heart of what we do.
[44:42]
Chris Swan
So of course we use the cryptography library.
[44:45]
Viktor Petersson
Right.
[44:45]
Chris Swan
That thing is being patched on an almost daily basis.
[44:50]
Viktor Petersson
Yeah, we use it too.
[44:51]
Viktor Petersson
I know.
[44:53]
Chris Swan
And so, yeah, CVEs happen.
[44:59]
Chris Swan
The team are super responsive to that, and new patch releases come along to deal with those CVEs.
[45:05]
Chris Swan
And as I look at those, you know, it's not quite daily, but there are probably a few bumps a week to Python cryptography.
[45:15]
Chris Swan
Maybe half of them are security related and half of them are functionality related.
[45:20]
Chris Swan
And sometimes it's like, oh, they fixed the security bug yesterday and that led to some other corner case functionality stuff that, you know, maybe wasn't being highlighted in the previous test coverage.
[45:34]
Chris Swan
And you know, everybody will learn and get better going forward.
[45:37]
Viktor Petersson
Right.
[45:38]
Chris Swan
And so there's another bump the following day which is dealing with the fallout rather than actually fixing a security vulnerability.
[45:47]
Chris Swan
But those things are moving forward really super fast.
[45:51]
Chris Swan
But then you get other dependencies where they're not as actively maintained, and somebody flags up a security-critical piece of problematic code, but the maintainer doesn't do anything about that at all, or not for months, or whatever.
[46:11]
Chris Swan
And that then gets you into, well, is it actually a problem for how I'm using that?
[46:19]
Viktor Petersson
Yeah.
[46:20]
Viktor Petersson
I mean, that brings us to the whole big debate of every open source project building on other open source projects.
[46:29]
Viktor Petersson
And at the end of the day there's always, like, one or two libraries that everybody uses where there are, like, two maintainers.
[46:34]
Viktor Petersson
Like OpenSSL stands as a good example of that, when there were those vulnerabilities last year or the year before.
[46:41]
Viktor Petersson
And there are like handful of maintainers that basically support all of the Internet.
[46:45]
Viktor Petersson
Right?
[46:46]
Viktor Petersson
Yeah.
[46:47]
Viktor Petersson
And there are similar libraries that are used everywhere, but there's, like, one guy doing this in an evening, and hundreds of companies depending on this
[46:58]
Viktor Petersson
for their production environments.
[47:00]
Viktor Petersson
Right.
[47:01]
Viktor Petersson
Which raises the debate on how we solve that for the future, because somebody has got to pay for this, and a lot of people that make a lot of money are using these libraries but not contributing whatsoever.
[47:12]
Viktor Petersson
Right, yeah.
[47:14]
Chris Swan
And I think that's probably a whole nother podcast's worth.
[47:16]
Viktor Petersson
Oh yeah, absolutely.
[47:20]
Chris Swan
The funding model for Open Source is fundamentally broken today.
[47:25]
Chris Swan
Yeah, it's problematic.
[47:27]
Chris Swan
I think one of the parts of the opening keynote at State of OpenCon kind of addressed that.
[47:35]
Chris Swan
The whole question of what does a post open source landscape look like?
[47:41]
Chris Swan
And I think enterprise users of open source and you know, services companies that build their value on top of open source could be doing more.
[47:58]
Viktor Petersson
Yeah, no, I mean, I think you're right.
[48:00]
Viktor Petersson
This should be another episode, about licensing. Redis springs to mind as a good kind of David and Goliath story.
[48:07]
Viktor Petersson
Right.
[48:08]
Viktor Petersson
Fighting against the cloud vendors.
[48:11]
Viktor Petersson
Right.
[48:12]
Viktor Petersson
But I mean, the whole supply chain security side, I think it's very interesting, and at least I'm envisioning a future workflow where, as part of your CI/CD pipeline, you submit your SBOM to, like, a third party, you tag it with your semver or whatever, and then that system will integrate with the enterprise vendors that you sell to.
[48:38]
Viktor Petersson
And then if there is a vulnerability in something, your security staff at your company can sign off that say, hey, actually this doesn't matter because we don't use this code path and that's perhaps you can get around it.
[48:49]
Viktor Petersson
That's at least how I would envision a workflow look like.
[48:53]
Chris Swan
There's certainly going to be, I think, commercial opportunities on top of what people are building as open source.
[49:01]
Chris Swan
And I also like the vision at GitHub on this, which was articulated when Nat was their CEO, that since most open source is in GitHub, they can actually see the dependency graph from end to end.
[49:17]
Chris Swan
And so it's within their gift to build tooling which can track a taint all the way through the sort of various layers of dependency.
[49:28]
Viktor Petersson
Oh, I would most certainly envision a future scenario where either GitHub built this or they acquire someone who has built it.
[49:34]
Viktor Petersson
Like they just did with the Dependabot guys, right?
[49:36]
Viktor Petersson
Yeah, because it just slots straight into their portfolio and makes a lot of sense for them to actually have an offering like this.
[49:46]
Viktor Petersson
The last thing I wanted to cover is something that I'm not sure you've seen yet.
[49:52]
Viktor Petersson
But NIST CSF 2.0 was released last week or this week.
[49:56]
Viktor Petersson
Was it the new NIST framework?
[49:58]
Viktor Petersson
I'm not sure if you have any thoughts around that, given that it's the first update to the NIST framework in quite a few years, I believe.
[50:04]
Viktor Petersson
I'm not sure you had a chance to look at that.
[50:06]
Chris Swan
So I've very briefly taken a look at it.
[50:10]
Chris Swan
It came up in one of the Zero Trust working groups that I participate in the other day.
[50:17]
Chris Swan
And yeah, I've also been looking at various commentaries on what's different about 2.0.
[50:27]
Chris Swan
It's worth just articulating what is the NIST Cybersecurity Framework.
[50:32]
Chris Swan
And so this is really an almanac of security practices pieced together from NIST guidance.
[50:46]
Chris Swan
So NIST in the past has released a whole bunch of guidance around things like authentication and authorization, but there's also a whole bunch of other industry standards in various areas.
[50:59]
Chris Swan
So ISO 27000 series, COBIT, and so all of these things had things to say about organizations and approaches to security.
[51:13]
Chris Swan
And the original NIST cybersecurity framework pulled them all together.
[51:19]
Chris Swan
And so I always viewed it really as an almanac, having consolidated NIST and COBIT and ISO 27000 series and a few other things into one place.
[51:35]
Chris Swan
It was in many ways the sort of building code for securing an organization.
[51:45]
Chris Swan
And I saw the primary consumer of that as being the CISO, being able to pick it up and say, if I'm doing everything that's in the cybersecurity framework, then I'm doing everything that could possibly be expected of me.
[52:04]
Chris Swan
I first came across it sort of in earnest back in Cohesive days, because at that time there was a certain amount of dialogue going on that the cybersecurity framework was really put out by NIST to be for government, and especially the federal government.
[52:25]
Chris Swan
But I think it's been widely adopted by, in the US at least, a lot of state governments and local governments.
[52:33]
Viktor Petersson
Again, I've seen it being picked up a fair bit by the medical industry as well, so hospitals and so on, like people that actually care about security.
[52:41]
Viktor Petersson
They.
[52:42]
Viktor Petersson
And I think, at least from my vantage point, and I think I covered this in one of the other episodes:
[52:47]
Viktor Petersson
Like for me, ISO and SOC2, they are security for salespeople, but NIST is security for actual security engineers.
[52:56]
Viktor Petersson
And I think that's how I see that.
[52:59]
Chris Swan
And so it was being waved at the financial services industry in a way that there was some notion that the financial services regulators would start demanding cybersecurity framework compliance from the organizations that they were regulating.
[53:21]
Chris Swan
That didn't actually happen.
[53:23]
Chris Swan
And I think it largely didn't actually happen because the banks, et cetera, were mostly doing pretty much all of what was in there anyway.
[53:32]
Chris Swan
And I don't think it would have substantially changed the dialogue.
[53:36]
Chris Swan
So that was a decade ago.
[53:39]
Chris Swan
And so Cybersecurity Framework 2.0, released in the last few days, is a refresh of the Cybersecurity Framework.
[53:49]
Chris Swan
Some are arguing not enough of a refresh.
[53:54]
Chris Swan
And I think this is perhaps because, as Bruce Schneier observed many years ago, the old attacks never die, and the old regulation that was born from the old attacks is also a permanent fixture.
[54:14]
Chris Swan
And so if you're making an almanac, there's none of the old stuff that you really get to cut out.
[54:20]
Chris Swan
And so when you get into conversations about shiny new stuff, like zero trust networks or software bills of materials or whatever else, you may or may not add it in.
[54:32]
Chris Swan
And so if I take something like zero trust, can I use a zero trust approach to put in place controls and monitoring regimes for those controls that will allow me to show evidence for achieving certain areas of the NIST framework?
[54:52]
Chris Swan
Absolutely.
[54:53]
Chris Swan
Does the NIST framework still mostly read like it's expecting you to do traditional networking with firewalls and hard on the outside, soft in the middle?
[55:04]
Chris Swan
Maybe it does, because not a great deal has changed there.
[55:08]
Chris Swan
The big change is the introduction of a governance section to the framework.
[55:16]
Chris Swan
And I think if you go back to the initial release, there was a sense that was a missed opportunity and it's one that's been resolved.
[55:25]
Chris Swan
And of course...
[55:28]
Viktor Petersson
Sorry, what does that actually mean in this context?
[55:30]
Viktor Petersson
What do you mean by governance in that context?
[55:33]
Chris Swan
So if you look at the framework, the areas it addressed could be portrayed as a wheel.
[55:41]
Chris Swan
And as you moved around the wheel, there were different things.
[55:43]
Chris Swan
So, a bit like the radar chart I was showing for the scorecard a few minutes ago: with CSF 2.0, there's now an inner wheel of governance, which is to say all of the aspects of the Cybersecurity Framework relate to a set of governance activities and supporting processes to bring everything together.
[56:10]
Viktor Petersson
Okay, so in terms of SBOMs, you said that's kind of.
[56:15]
Viktor Petersson
It's still.
[56:16]
Viktor Petersson
Is it flagged as a nice-to-have, or is it something that you reckon is going to be pushed?
[56:19]
Chris Swan
Haven't got into that detail.
[56:20]
Chris Swan
So the SBOM is being pushed by the executive order.
[56:29]
Chris Swan
The NIST Cybersecurity Framework 2.0 was something else that NIST were told that they had to get 2.0 out by, I think, April next year.
[56:41]
Chris Swan
So they are way ahead of schedule.
[56:44]
Viktor Petersson
But this is a final version, it's not a draft.
[56:46]
Viktor Petersson
Right.
[56:47]
Chris Swan
So yeah, this is the final version.
[56:50]
Chris Swan
This is 2.0.
[56:52]
Chris Swan
So I think it's gone from 1.0 to 1.1 and it stayed the same for a very long time.
[56:58]
Chris Swan
And then, not directly from, I think, the same executive order that brought us SBOMs, but from a related executive order,
[57:10]
Chris Swan
NIST were tasked to update the cybersecurity framework.
[57:13]
Chris Swan
Job done.
[57:15]
Chris Swan
They have.
[57:16]
Chris Swan
Have I had the opportunity to delve in to see whether things like SBOM are now being explicitly referenced in there?
[57:23]
Chris Swan
No, that's not an opportunity I've had yet.
[57:27]
Chris Swan
And again, I don't think it matters much either way because the SBOM has got, I think, enough momentum behind it with the executive order.
[57:40]
Chris Swan
But also everything that's happening at CISA in order to help ease the implementation of the executive order.
[57:49]
Viktor Petersson
Yeah, we actually had our first customer request for an SBOM at Screenly last week.
[57:55]
Viktor Petersson
So it is actually starting to happen with a big medical group.
[57:58]
Viktor Petersson
So it is starting to happen.
[57:59]
Viktor Petersson
So it's actually not only theoretical anymore, it's actually part of compliance for very savvy customers, I guess.
[58:07]
Viktor Petersson
But very early days still, I would imagine.
[58:10]
Chris Swan
So you probably saw the panel at State of OpenCon that preceded my talk about scorecards, and there were a handful of practitioners who were each working inside of organizations on their security.
[58:29]
Chris Swan
And what I was hearing from all of them is that they had mechanisms already in place to consume SBOMs, to analyze SBOMs against their vulnerability feeds, and to take various actions against their threat models and risk profiles to deal with what they were receiving from that.
[58:53]
Chris Swan
So it is early days, and they were probably there on that panel because they're advanced practitioners, not...
[59:00]
Viktor Petersson
Oh yeah, those are very savvy companies, all of them.
[59:02]
Viktor Petersson
Right.
[59:02]
Viktor Petersson
So they're not like your average company.
[59:05]
Viktor Petersson
Right.
[59:08]
Viktor Petersson
Chris, this has been super interesting.
[59:11]
Viktor Petersson
Thank you so much for coming on the show.
[59:13]
Viktor Petersson
Do you want to do a quick shout-out to Atsign, where people can learn more, and perhaps talk about anything you want, really, before we wrap up the show?
[59:23]
Chris Swan
I think something that the audience for this will really appreciate is a thing that began as a demo that we built at Atsign for the platform, but has turned into a product.
[59:38]
Chris Swan
So SSH No Ports is a tool we've built for people to do remote admin.
[59:45]
Chris Swan
We've got people using it for things like home labs, but we've also got organizations using it to deal with field deployed Internet of Things type devices.
[59:57]
Chris Swan
And as the name suggests, you can get an SSH console onto a device with no ports open, but also things that move around networks and stuff.
[01:00:10]
Chris Swan
So I'd encourage people to just check that out and have a play.
[01:00:14]
Viktor Petersson
And it is also open source, which I think is important to highlight in this context.
[01:00:17]
Chris Swan
For sure it is open source.
[01:00:20]
Chris Swan
Yeah.
[01:00:20]
Viktor Petersson
Perfect.
[01:00:21]
Viktor Petersson
Again, thank you so much for coming on the show, Chris.
[01:00:23]
Viktor Petersson
Very much appreciate it.
[01:00:24]
Viktor Petersson
Talk to you soon.
[01:00:25]
Chris Swan
Thanks, Viktor.