We were having a debate in the house as to which book would be the first one taken. We put a collection of books in there. And it turns out that the 1974 edition of "Lord of the Rings" was the first book taken from the library.
CRAIG BOX: Was that all three volumes in one?
ADAM GLICK: Yes, the entire "Lord of the Rings," including all the maps.
CRAIG BOX: How did you have space in the little free library for anything else?
ADAM GLICK: [CHUCKLING] Well, it's a large box and it's a very small copy. It's a mixture of both.
CRAIG BOX: Well, 1974 is almost 50 years ago. And there were two anniversaries in the last week.
First of all, the internet is 50 years old. Vint Cerf, the chief internet evangelist at Google, and one of the co-inventors of the TCP/IP protocol--
ADAM GLICK: Not to mention, a snappy dresser.
CRAIG BOX: Indeed. He's published a set of recollections of his time involved with the internet. Obviously, right from the beginning. On October 29, 1969, he said that the first packet was sent, which "pioneered our understanding of operational packet switching technology and made everything else available".
Also 50 years old this week is television news in New Zealand. There were individual news broadcasts in cities in New Zealand, but the first time that they were all networked, that there was one national news bulletin sent all around the country, was 50 years ago this week. You'll see some links in the show notes.
Many of those presenters are still around. Not actively presenting, but their comments are available. And it's interesting, looking back at the times and at the event: they say that the marriage of Princess Anne was the first international event that was able to be broadcast live around the country at the time it was happening.
And now we just take for granted that we can see everything all around the world as it happens. Just like this podcast.
ADAM GLICK: Shall we get to the news?
CRAIG BOX: Let's get to the news.
ADAM GLICK: Sysdig has released their third annual Container Usage Report, giving an insight into the deployments of everyone using their SaaS or on-prem products. Key insights include the densification of container hosts with the median number of containers per host doubling to 30, from 15 in 2018 and 10 in 2017.
And that Nginx is the most popular single open-source program running in containers. You can read the summary of the report online or exchange your email address for the full version.
CRAIG BOX: Rancher Labs storage project Longhorn has been accepted into the CNCF as a sandbox project. Longhorn is cloud native, distributed block storage for Kubernetes. And features include volume snapshots, built-in backup and restore, and live online upgrades. To learn more about the history of Longhorn, listen to episode 57 with Darren Shepherd from Rancher.
ADAM GLICK: Version 0.4 of Crossplane has been released by the team at Upbound, who also created the Rook project. This version uses Rook to make it possible to provision Yugabyte and Cockroach DB distributed databases, both of which use Rook's PostgreSQL storage capabilities. Improvements to documentation and UX round out the release.
CRAIG BOX: Helm continues its march to 3.0, with two release candidate versions this week. The RCs have been released to help gather feedback from the community, as well as give users a chance to test Helm in staging environments, before 3.0 is officially released. If you want to dig deep into our back catalog, you can learn about what to expect from Helm 3 in episode 11 with Vic Iglesias.
ADAM GLICK: CloudEvents has gone 1.0 and moved from the sandbox to the incubation stage in the CNCF. CloudEvents is a vendor-neutral specification for defining the format of event data. CloudEvents is implemented by Knative, as well as products from Azure, Red Hat, and Serverless.com.
CRAIG BOX: Data Center Knowledge looked at service meshes this week and called out Istio as leading the pack. Author Christine Hall interviewed contributors Zack Butcher from Tetrate and Lin Sun from IBM.
Butcher pointed out that Istio is the only product able to fully expose 100% of the Envoy proxy's power, and acknowledged that, like Kubernetes, it can be complicated if you use it without management tooling.
Sun is leading a working group around Istio user experience that has made many improvements in the 1.3 and the upcoming 1.4 releases.
ADAM GLICK: Another team working on the user experience at Istio is Banzai Cloud, who this week launched version 1.0 of their Backyards service mesh.
Backyards takes Istio and Banzai's open-source Istio operator and adds production-ready deployments of Prometheus, Grafana, and Jaeger, as well as their own custom management dashboard, CLI, and GraphQL API. It provides a simplified structure of request matches, routes, and actions on top of Istio's traffic management primitives. Banzai says that it provides the simplicity of less featureful mesh products with the velocity and compatibility of Istio and Envoy.
CRAIG BOX: Another project based on Envoy, the Contour ingress controller, has reached 1.0. Contour was first released two years ago this week, and has evolved to support dynamic configuration updates and multi-team ingress delegation, while maintaining a lightweight profile.
Following the 1.0 release, the team plans to burn down their backlog and look to support the evolving Kubernetes ingress API objects. If you're interested in getting involved, the project has also started monthly community meetings.
ADAM GLICK: The Envoy proxy itself has released version 1.12 this week. To learn more about Envoy, check out episode 33 with Matt Klein.
CRAIG BOX: Two new security features in Google Kubernetes Engine this week. Application-layer secrets encryption is now generally available, which provides envelope encryption of Kubernetes secrets. A local key is used to encrypt the secrets in etcd. And that key is encrypted with another key stored in Google's Cloud KMS, not in Kubernetes. This model allows you to regularly rotate the outer key without having to re-encrypt all the secrets. And you can also optionally use a key backed by a hardware security module.
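The envelope model described here can be sketched in a few lines of Python. This uses a toy XOR "cipher" purely to show the key property, that rotating the outer key never touches the stored ciphertexts; it is not how GKE or Cloud KMS actually implement encryption.

```python
import os

def xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only -- not real encryption.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

# A local data-encryption key (DEK) encrypts the secret stored in etcd.
dek = os.urandom(32)
secret = b"database-password"
ciphertext = xor(dek, secret)

# A key-encryption key (KEK), held outside the cluster (in KMS),
# wraps the DEK.
kek_v1 = os.urandom(32)
wrapped_dek = xor(kek_v1, dek)

# Rotating the KEK only re-wraps the small DEK; the secret
# ciphertexts in etcd are untouched.
kek_v2 = os.urandom(32)
rewrapped_dek = xor(kek_v2, xor(kek_v1, wrapped_dek))

# To decrypt: unwrap the DEK with the current KEK, then the secret.
recovered = xor(xor(kek_v2, rewrapped_dek), ciphertext)
assert recovered == secret
```

The point of the design is in the middle step: rotation re-encrypts 32 bytes of key material instead of every secret in the cluster.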
Using Google's container storage interface driver, you can also now specify customer-managed encryption keys for Google Cloud persistent disks. This feature is available in beta.
ADAM GLICK: Microsoft is announcing Azure Arc in preview this week at their Ignite conference. Arc appears to be a meta-management layer to manage Azure, on-premises, and Edge technologies under the Azure Stack brand, similar to what VMware announced with Tanzu earlier this year. The new product purports to bring a single control plane for managing Azure and Azure Stack, which were previously disconnected, and mentions being able to manage Kubernetes clusters on other clouds. This announcement also brings AKS to GA in Azure Stack Hub.
Details of the technical workings of Arc are scarce at the time of recording, but the opinion of Gartner VP Sid Nag is that "Azure Arc seems to be Microsoft's attempt to answer Amazon with Outposts and Google with Anthos".
In addition, Microsoft's Channel 9 has posted five new explainer videos by AKS's Brendan Burns covering topics, including the pod life cycle and admission controllers.
CRAIG BOX: The CNCF had a number of announcements this week. Amongst them, a case study of AlphaSense, an AI startup, and its use of Kubernetes; a description of how the TiKV project built a large-scale distributed system using the Raft algorithm; and a call to join the CNCF Meetup Program. If you do run a Kubernetes or CNCF-related meetup, you most certainly want to check out the last of these, as the program offers several benefits, including greater awareness, CNCF swag, and the ability to use the CNCF's Meetup account.
ADAM GLICK: The Kubernetes SIG Docs team conducted a documentation survey in September, and they have released their findings. They say that the respondents would like more example code, more detailed content, and more diagrams in the concepts, tasks, and references sections.
70% said that Kubernetes documentation is the first place they look for information about Kubernetes, although 74% of respondents would like the tutorial section to contain advanced content. The survey provides feedback on how SIG Docs can make the documentation better for all of us. So thank you to all of you who answered the call.
CRAIG BOX: The Knative Project provides serverless building blocks for Kubernetes users. But these building blocks can be used no matter what kind of workload you have. Ahmet Alp Balkan, developer advocate at Google Cloud and guest of episode 66, writes that you can get all of the benefits of Knative's auto-scaling and request handling. And it might only be as difficult as changing your Kubernetes Deployment type to a Knative Service, as the two are largely compatible.
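For illustration, the change described amounts to moving a Deployment's pod template into a Knative Service. A minimal sketch might look like the following; the name and image are placeholders, and the exact fields supported depend on your Knative release.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app                              # hypothetical name
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my-project/my-app:v1  # placeholder image
        env:
        - name: LOG_LEVEL
          value: info
```

Knative then manages scaling, routing, and the underlying Kubernetes Service, so the replica count and Service objects from the original Deployment setup go away.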
ADAM GLICK: Scott McCarty from Red Hat has posted an interesting blog based on his experience as an IT pro handling large changes in service demand. He relies on his experience at an e-card company during peak times, such as Valentine's Day in the United States, to talk about what the real challenges are in scaling. In short, business problems at scale are hard. Kubernetes is not.
Although he doesn't get into the technical issues managing Kubernetes, he points out that Kubernetes makes it easier than ever to scale services with demand. And that there are many companies that can help you build and run your Kubernetes clusters, including hosted services.
He also argues that every company that goes digital needs to be prepared for the challenges of running services at scale, as API calls and web traffic can ramp up and down dramatically. He thus argues that the best use of your time is to master the Kubernetes primitives and leave the rest of the work to the software and vendors.
CRAIG BOX: Finally, some sad news from the community. Brad Childs, co-chair of SIG Storage, passed away in his home this week. His co-chair, Saad Ali, says that Brad's impact will continue to be felt in the products and communities he helped build. And that he will remember Brad for speaking his mind and being quick with a joke. Our condolences to Brad's friends and family.
ADAM GLICK: And that's the news.
ADAM GLICK: Gerred Dillon is a staff software engineer at D2iQ, where he works on controllers for Kubernetes. In the past, he's worked on large-scale projects at Rally, Iron.io, Deis, and Replicated. He participates in the Kubernetes community, helping in various areas such as the Kubebuilder sub-project and SIG Architecture. Welcome to the show, Gerred.
GERRED DILLON: Thanks for having me. I'm glad to be here.
CRAIG BOX: You work for D2iQ, which used to be known as Mesosphere. And I understand you were brought on board for the wealth of Kubernetes experience that you have.
GERRED DILLON: Yeah, I've been working with Kubernetes since the very early releases of it, building software on top of it. And I knew some people who were at the company working on Kubernetes stuff there and working on the Kubernetes program. So I got brought aboard to work on various projects inside of Kubernetes.
And since then, it's grown a lot. At D2iQ, we have a lot of products on it. And so far, I mean, it's a large growing program we have there.
ADAM GLICK: Any war stories from those previous companies?
GERRED DILLON: In one of those pasts, I ran a large scale Kubernetes cluster that was doing some augmented reality and machine learning work. And in the early days at Kubernetes, things were a little bit more rough. We were deploying with Terraform or-- this was on AWS and before EKS. So we were using Kops at the time.
And having to strap yourself to a pager of a Kubernetes cluster with hundreds of nodes and using GPUs when they weren't really supported yet, is a little bit harrowing and led to more outages than I would like.
But one of the things that I recommend to anyone who's interested in working on Kubernetes and using it is: go actually run this stuff in production. Because it's a lot different to have the perspective of someone building out these APIs and working on the project, versus having to deal with it with your hair on fire at 3:00 AM. That's probably the most fun war story. But now I try to avoid production in my day-to-day job as much as possible.
CRAIG BOX: There are a lot of people running both Mesos and Kubernetes in production. How would you describe the differences between these two clustering systems?
GERRED DILLON: Mesos is really a base core of resource pooling and scheduling abstraction. So think of it as a very simple way to take a bunch of machines and pool their resources together, and not much else by itself.
So I have 10 VMs, and I have 40 cores and 186 gigs of RAM and so much hard drive. And what makes Mesos different from Kubernetes in this regard is that everything above that, you have to bring to the table yourself and write something called frameworks. So if we were just to contrast that real quick to Kubernetes before diving into that, your typical first introduction to it is that it's a system for orchestrating containers, right?
And if you look at what Kubernetes really is, it goes a lot deeper than that. It's an extensible API server that has a whole bunch of controllers surrounding it that tries to reconcile the state based on that API, right? So you have an actual state and a desired state. And the controllers advance you to that state.
And Mesos by itself does not have that concept. It just has this concept of resource pooling and APIs for performing some scheduling.
CRAIG BOX: There are many application frameworks that run on top of Mesos. Some are more general purpose. But I understand some are specific to particular workloads and could be considered more like the operators that we're now becoming used to on Kubernetes.
GERRED DILLON: That's true. And it goes a little bit further than that, in that even something like the controller manager in Kubernetes would be a framework on top of Mesos. One of the most well-known examples of that is Marathon.
So there's a lot of alignment there. But the difference between frameworks and any operators that you would build on top of Kubernetes, is that frameworks in Mesos-- bring your own paradigm to the table, right? Whereas Kubernetes has the state reconciliation process. You don't necessarily have that. You don't have to do that with Mesos frameworks.
So for example, if you want to write a Hadoop job executor or framework for Mesos, you can bake that into however you want to write the code in order to interact with the Mesos API. Whereas if you were to do that with Kubernetes, you're boiling down to the base abstractions inside of Kubernetes where you'd spin up a job at the end of the day.
CRAIG BOX: DC/OS is Mesosphere's commercial Mesos product. And there is a set of operators or application frameworks available for it called the DC/OS Commons. What are they?
GERRED DILLON: The DC/OS Commons was a recognition that when people were building out these frameworks, they would write the same code over and over again, the same boilerplate for this class of applications, these being stateful, complicated data services on top of Mesos. So the DC/OS Commons took that whole paradigm and wrote out an abstraction framework that worked for that specific use case, so that developers of these frameworks could be a lot more productive by building on top of the SDK.
What is nice about the SDK is it comes with escape hatches that let you get down to the lower levels if you don't fully fit into the paradigms set forth by the DC/OS Commons. So if at any time I need to override a custom plan, I can go write some code, drop in a JAR, and let the SDK pick that up, so that we can have some more advanced logic around that.
ADAM GLICK: Mesosphere, and now D2iQ, has brought the same approach to Kubernetes. And you've been working on a project called KUDO. Can you explain what KUDO is?
GERRED DILLON: KUDO is the Kubernetes Universal Declarative Operator. What that means is that it's a high level toolkit and runtime for writing and running operators on top of Kubernetes.
Before we talk about operators, I think we should back up a little bit and talk about controllers since everything in Kubernetes is a controller, except for the API server.
And what a controller is, is a process that attempts to reconcile actual and desired state. And all operators are controllers. They're trying to take the state of a set of Kubernetes resources and get them to a certain point.
The difference is that controllers are very general in nature. The controller manager is managing this concept of deployments and services and a bunch of other stuff. And it doesn't care about what the underlying software is doing.
The scheduler is assigning pods to nodes. And outside of any user input, it's not aware of any underlying runtime or anything that's happening. You can use node selectors to steer where things go, but it doesn't care what it's actually running.
So the difference between an operator and any given controller is that an operator is really starting to build in domain-specific knowledge around a really specific piece of software and how to run it. And in short, what I like to say on the KUDO team is: you ship your software with its own runbook.
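The reconcile loop described here can be sketched abstractly: diff desired state against actual state, and emit the actions needed to converge. This is an illustrative Python model with invented names, not the real Kubernetes controller-runtime API (which is Go).

```python
# A sketch of one pass of a controller's reconcile loop.
# "desired" and "actual" map resource names to their specs.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to move actual toward desired."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

desired = {"web": {"replicas": 3}}
actual = {"web": {"replicas": 1}, "old-job": {"replicas": 1}}
print(reconcile(desired, actual))
# → [('update', 'web', {'replicas': 3}), ('delete', 'old-job')]
```

An operator layers domain knowledge onto this same loop: for etcd, for example, "create a new replica" would first call the member-add API rather than simply launching another pod.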
ADAM GLICK: So what makes KUDO universal? What's the "universal operator" part of it?
GERRED DILLON: What makes KUDO a universal operator is that it's a slightly different approach to writing operators. If you look at all of the tooling for writing operators today, they're very generative. And so what you do is you kick off a project and you have a one-to-one mapping for the operator that you're writing to the software it's maintaining.
So when we say universal operator, we mean that it's polymorphic in the sense that you have the one KUDO controller managing multiple different operators. And the reason we did this is we noticed that writing operators also has a lot of the same boilerplate between them. Even when you're using products like Kubebuilder or operator SDK, they get you 50%, 60% of the way there.
But then you need to start rolling in your domain. You still have to do things like writing events for Kubernetes. You have to have a pretty deep understanding of the Kubernetes API still.
And so we are higher level than that, and maybe even omakase, in that we optimize for a certain set of constraints. And if you can fit into that optimization, then you can really focus on getting your stateful application deployed on Kubernetes, and not spend all this time on all the Kubernetes API development pieces that you still have to do with other tooling.
CRAIG BOX: In that KUDO is a declarative operator, how am I configuring it? Am I writing charts for Helm? Am I writing domain-specific language? Or am I writing something in code, like I would with one of those traditional tools that you mentioned?
GERRED DILLON: We're working on ways to expand upon this. But as of the current release, we have a client-side CRD for an operator and its parameters. And so you write out your entire operator as the workflow engine: plans, phases, steps, tasks. And you map those back to a library of tasks that we have.
After you've written this operator, then users can actually go and deploy that into their cluster. And it's represented as that set of CRDs.
So if you have a deploy plan when you go and deploy, it's going to run through that deploy plan. And it's going to run through applying a set of manifests.
And we have a couple different task types right now. We have apply. We'll have a Helm Chart task in the next release that'll allow you to deploy a Helm Chart as part of a task. We have a pipe operator, a couple of other things.
So at its core, what you're writing out is a workflow for performing a given lifecycle action for that piece of software. And that might be deploying, that might be upgrading, that might be updating, that might be a parameter change. That might even be scaling.
So if you were to consider software like etcd, scaling isn't just a matter of throwing more replicas at it. You have to call the add member API for etcd in order to bring that into the cluster. And same with scaling down.
So each plan is really focused around a different lifecycle event. And you can have custom versions of those. So you can have a backup or restore. If it makes sense for your application, you might have a compaction plan. But it's pretty extensible from that.
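As a sketch of that structure, a KUDO operator definition wires tasks into a hierarchy of plans, phases, and steps. The YAML below is illustrative only; the exact schema varied across early KUDO releases, so treat the field names and values as assumptions rather than exact syntax.

```yaml
# operator.yaml -- illustrative structure only, not exact KUDO schema.
name: my-service            # hypothetical operator name
version: "0.1.0"
tasks:
  - name: app
    kind: Apply             # apply a set of Kubernetes manifests
    spec:
      resources:
        - deployment.yaml
plans:
  deploy:                   # lifecycle plan: plans -> phases -> steps -> tasks
    strategy: serial
    phases:
      - name: main
        strategy: parallel
        steps:
          - name: everything
            tasks:
              - app
  backup:                   # custom lifecycle plans are also possible
    strategy: serial
    phases:
      - name: snapshot
        strategy: serial
        steps:
          - name: dump
            tasks:
              - app
```

Each plan captures one lifecycle action (deploy, upgrade, backup, and so on), which is what "shipping your software with its own runbook" means in practice.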
Now, we're looking at the future of how to bring in more custom events. We're looking at cloud events. We're looking at some other tooling there. And then we're looking at higher level languages, like Starlark or TypeScript. But we're really basing that on community feedback, based on our framework now, and where people want to go with it.
GERRED DILLON: I love Metacontroller. It's a really great project. It got proposed to being moved under the Kubebuilder subproject last month. And that's in process.
Metacontroller has a lot of the same paradigms as KUDO, in that it's a polymorphic controller. And so you're configuring this controller with various software that you want to run. For example, the Vitess operator used to be run using the Metacontroller CRDs. And those are instantiations of controllers based on custom resource definitions. So it would go through and build up an entire Vitess cluster. And they did that entirely through Jsonnet.
What's really nice about Metacontroller is that you can plug in any templating language that you want because Metacontroller is calling external APIs in order to actually get the templating for an application. So we were inspired a lot in KUDO by how Metacontroller operators are actually written.
The difference is we wanted to go in a bit of a different direction with KUDO by optimizing for a certain class of software, and that was complicated stateful software. And this type of software ends up having dependencies, something that, for example, the Vitess operator didn't handle well.
So with the Vitess operator, it required you to already have the etcd operator running, because one of the resources it would instantiate was the etcd cluster CRD. So it expected that to be already there and running.
One thing we're working on KUDO is this idea that operators don't exist in a vacuum. They exist alongside other software. And dependency management is a big feature of KUDO where, as part of this plan-- and really we're building up an entire graph of this software and their interactions with each other-- you can have dependencies that are in lockstep.
CRAIG BOX: You mentioned stateful applications and the workflow required to deploy them and upgrade them in various steps. Another area that has that same pattern is deployment of applications, and some common tooling for that is now in the Tekton project, which is part of the Continuous Delivery Foundation. Have you considered a way to work with Tekton descriptors?
GERRED DILLON: We are looking at that. One of the things we found early on was that people did not want us writing another application definition format, for example, when Helm was already around. But we needed to do that research in order to understand what the definition format for our operator should be and what it's supposed to be doing.
And so we ended up having a KUDO language for writing out these templates. And now we have a workflow descriptor as well.
So as we get further with that, we may find overlaps with Tekton or other CRDs to allow that to be opted into, much like you can opt into a Helm chart. But have a self-contained default if you want to just use that self-contained default.
One of the things we don't want with KUDO, much like a lot of software in the Kubernetes space, is for you to have to install KUDO plus 1,000 other dependencies to get going with it. We want to be friendly, and we want everyone to be able to opt into having KUDO as part of a larger ecosystem.
But I don't want someone to have to go deploy a Kafka operator. And with it, they get Tiller, Tekton, everything else. So there is a kernel there of getting started with it. And then expanding it into production and integrating, playing nicely with the whole community.
So that's more where we're going, like Tekton. And we're looking at other things as well. But we're taking it with a grain of salt to make sure that we're solving for the right problem, rather than just bringing in a grab bag of different tools that are in the Kubernetes ecosystem.
ADAM GLICK: KUDO is built on top of the Kubebuilder SDK. What does that give you, and how did you make that choice?
GERRED DILLON: We generated out KUDO with one of the earliest versions of Kubebuilder. And from there, we actually haven't done much with it. We haven't stayed up to date with the Kubebuilder versions. But it's important to know what Kubebuilder is. Kubebuilder is a generator and a set of patterns for writing operators on top of tooling like controller runtime and controller tools and Kubernetes APIs.
So controller-runtime is really the project that we're using. And that's the project that powers Kubebuilder. And what we discovered early on, especially as we get into more and more CRD management (a future version of KUDO, for example, will have dynamic CRDs where, instead of having a kind of Instance, you can white-label your CRDs, and I want to touch back more on that), is that Kubebuilder and a lot of these static generation tools are assuming this monomorphic pattern where I'm writing an operator for one controller.
Whereas with KUDO, as we start to progress, we're doing a lot more dynamic controller starts and stops, which the current Kubebuilder setup and architecture does not support.
But controller-runtime does work, though, because it's just a set of libraries for interacting with and building controllers. And Kubebuilder is more like that "Rails generate new controller" experience for operators. So Kubebuilder's a fantastic project. And we wouldn't have gone any other way.
But at this point, we're just using the lower-level tooling afforded by it, and forging our own patterns that work for our project. And the Kubebuilder team is working on developing more and more patterns around building operators that run as a single process.
CRAIG BOX: One of the use cases for KUDO is, as you mentioned earlier, moving people from the Mesos engine over to Kubernetes. How are you seeing people make that journey?
GERRED DILLON: A lot of what DC/OS users are using these frameworks for is large-scale data processing. So we set out to figure out the requirements, looked at how users were operating their existing services on top of the DC/OS SDK, considered the workloads they were using most often, and started to optimize KUDO to create those operators first.
So that those users who were wanting to expand their presence, either in DC/OS to Kubernetes or in general just do more with Kubernetes in other environments, had a very similar workflow and stability expectation that they would have with any of the DC/OS data services.
So the way people are making that journey is we have a distribution for Kubernetes called Konvoy. And with that distribution, we provide internal support for getting going with KUDO and that core set of data services that were very popular internally. So we're continuously evaluating what the needs of those customers are and optimizing the actual data services that we're creating on top of KUDO.
And then areas where we find we need to make refinements in KUDO back out into the open-source KUDO team. And we start working on those.
CRAIG BOX: What are some of the projects that have built operators using KUDO?
GERRED DILLON: We have a few in-house. We have a Kafka operator for KUDO, we have Cassandra, and we have a Spark operator for KUDO. Together, those form a holistic data processing stack.
In the community, we have MayaData working on the OpenEBS operator on top of KUDO. And then Lightbend just added in a whole bunch of templates for building operators on top of KUDO that use Akka and the Akka stack. And so that's actually informing a lot about how we're using KUDO to go beyond just running data services, to anybody managing a complicated stateful app.
So some of those templates are really amazing and worth checking out. They're in our operators repo, and they show a clustered Akka system being managed by KUDO.
ADAM GLICK: Six months after it was announced, KUDO was proposed to the CNCF. Why did you want to donate it to the CNCF, and why so early in the project's lifetime?
GERRED DILLON: We built this project from the get-go really to enable open governance. And we wanted to foster that from the very beginning.
So when we started this whole process, the CNCF's SIG Application Delivery hadn't been created yet. And there were a lot of questions about which projects were appropriate for the CNCF sandbox and which weren't.
So we wanted to err on the side of vendor neutrality early and signal that in our project and keep it on our roadmap. And we saw the value in not only getting KUDO into the CNCF, but also starting to get other operator frameworks into the CNCF and a standardization process around working with and writing and vetting operators that are production grade.
So part of it was raising the question around, where should all of this work around operators go. And how do we ensure that KUDO is a part of this? And how do we start to do that process?
Since the CNCF's SIG Application Delivery has started to look more at this problem, we've done a presentation to the TOC. We still need to do a presentation to SIG Application Delivery.
We've actually decided to back up a little as we hone our messaging more and focus on our next arc of features. That gives us a little bit more differentiation on that day 2 and application awareness. And then we're going to start up those conversations again.
But we want people doing research on operators to have an ecosystem they can rely on, to know which operators are vetted and ready for production, which have been abandoned, which aren't being worked on, and where things are going.
So KUDO going into the CNCF is a part of a much more holistic strategy for promoting this pattern in the wild. But for the project itself, personally, it's really about open governance since the beginning.
CRAIG BOX: As the CNCF builds this Application Delivery SIG out, how have you built a community around the KUDO project in particular?
GERRED DILLON: Going back to the open governance, we started out by building a really friendly community, and led with that. Anyone in KUDO is welcome to contribute. We follow a very familiar KEP process if you already work on Kubernetes.
Some differences-- we don't have special interest groups right now, since we're a small project. And so we shrunk down our governance to be just enough to keep that project going.
From there, we talk about every chance we get. I'm here, right? We have meet-ups, blogs, conferences. I talked at the CNCF webinar earlier in the month.
And what we're trying to do is really sell our goals and missions, rather than the tooling itself. And so far, people have been really receptive to that problem of, how do I run stateful software on Kubernetes, and do it without setting my hair on fire.
And the people we see coming to the community want to help shape that solution. Because what we're not doing is handing down a solution from above-- we're trying a bunch of things, we're failing fast based on our actual users. And we ship a new version out, we try some new things. We put in feature flags recently so that we can start to run these experiments without affecting people who are actually using these different operators in their real environments.
So we really have two personas for KUDO. We have the operator developer, and then we have the end user. And we're trying to serve both of them at the same time and get both of them involved in the community at the same time, so we can really figure out both how developers want to build operators and how end users want to run their applications in a familiar way.
And then from there, we participated in DigitalOcean's Hacktoberfest. And that brought a lot of people into our community. We got nine new contributors that month.
And we have a ton of people opening issues. Some are opening PRs. We had people writing operators. So that was actually a really nice thing to help get people involved in what we were doing at KUDO.
And then we have the KUDO Slack. And that's a fairly active place where people are asking questions.
And one big thing we did was follow a very familiar Kubernetes process to drive people either into our Slack and, from there, into a GitHub issue, or straight towards GitHub issues directly, so that we can really respond to their feedback and keep people tightly involved in our development process.
CRAIG BOX: Finally, your profile on the KUDO website says you're a, quote, "certified" barbecue master and that you also, quote, "take care of your chickens." Does that mean you keep them away from the grill, or that you season them really well and cook them to be very tender?
GERRED DILLON: [LAUGHS] One of our core team members, Matthias, wrote this bio to heckle me into finishing my biography for our blog, because it took me a few weeks to even look at the PR for all these bios. And it backfired on him because I love it. I just love it.
I guess some of both. I have a flock or-- as we learned-- a brood of chickens. But they're for eggs only. And I think one of them decided to be a pet. I think I might have a house chicken now because she comes up to the door for snacks and tries to get inside.
But if you're asking me how to brine a chicken, I can tell you a bit about that because I dry brine everything. And you should too. Because it makes everything that you're going to smoke or grill taste a lot better.
ADAM GLICK: Gerred, thanks for joining us today.
GERRED DILLON: Thanks, Adam. I really appreciate it.
ADAM GLICK: You can find Gerred Dillon on GitHub at github.com/gerred, and in the Kubernetes Slack at #KUDO.
CRAIG BOX: Thanks for listening. As always, if you've enjoyed the show, please help us spread the word and tell a friend. We'd love if you'd write a review for us on iTunes, if that's your thing.
If you have any feedback for us, you can find us on Twitter @KubernetesPod, or reach us by email at Kubernetes Podcast at google.com.
ADAM GLICK: You can also check out our website at KubernetesPodcast.com where you'll find transcripts and show notes. Until next time, take care.
CRAIG BOX: Catch you next week.