CRAIG BOX: That sounds like a lot.
ADAM GLICK: It is a lot. It makes it a lot of fun. It's beautiful outside, but it does play a little chaos with the traffic. So for any of our listeners in the area, I hope that they're staying safe.
CRAIG BOX: I had a trip to Seattle a few years back where the snow just came on out of nowhere. And I was at the Space Needle, trying to get back to my hotel, and got in a cab with a driver who was probably not used to that kind of road condition. And we skidded around a little bit before he finally settled in. That was a fun experience.
ADAM GLICK: Wow. It adds a little excitement to your day.
CRAIG BOX: It does. Let's get to the news.
ADAM GLICK: No abandoned project or idea is ever really gone in open source. VMware deprecated the ksonnet project, a tool for generating Kubernetes configuration using the Jsonnet templating language, in February of last year. The idea is back in the form of Tanka, a new project from Grafana. Tanka isn't a fork of ksonnet, but instead another implementation of the same ideas.
In a Hacker News comment, one of the authors said that they are aware of Kustomize, which is built into kubectl and fills a similar function. But Grafana already uses Jsonnet for other non-Kubernetes work, and they wanted to use the same tool for more than just Kubernetes manifests.
ADAM GLICK: Falco, an open source intrusion and anomaly detection project that has been in the CNCF sandbox since October 2018, has moved up to the incubation phase. Falco intercepts system calls and checks them with community sourced detection rules to help reduce risk and strengthen security. Incubation was unlocked based on the growth Falco has seen, including a 257% increase in downloads and 100% increase in commits year over year. Congratulations to the Falco team for reaching this milestone.
CRAIG BOX: For our Certified Kubernetes Application Developer listeners, good news. Your certificate is now valid for three years. This is up from the two years the program launched with, a change that was also made to the Certified Kubernetes Administrator Program last May. The CNCF says that almost 2400 people have acquired the developer certification.
ADAM GLICK: Ingress controller Contour has released version 1.1. This version fixes the CVEs disclosed in December for its Envoy engine, as well as adding support for rewriting URL prefixes and specifying the protocol of a service.
CRAIG BOX: Google Cloud's Dan Lorenc wants to secure the internet. Dan, a founder of Tekton and Minikube and guest of episode 39, has written a post explaining the dangers of including open source libraries hosted online, without thought. As third parties or even authors are sneaking cryptocurrency mining and other malicious software into code, the need for tracking software provenance is greater than ever. Dan proposes some directional solutions and asks anyone with ideas in the space to join a new security SIG in the Tekton project.
ADAM GLICK: Daniel Finneran from VMware has written a guide to designing and building HA Kubernetes on bare metal. He's developed a set of tools called Plunder, going with the nautical theme, to help automate the process. His post covers many of the decisions and considerations that a hosted vendor makes for you and that you will have to make for yourself when you run Kubernetes on your own.
CRAIG BOX: One of the hidden pitfalls of moving to cloud from bare metal is that you will no longer have the same IO characteristics you are used to. Your instance will be subject to various IO quotas, and exhausting these is a common problem for Azure Kubernetes Service customers. As with other cloud providers, IO quotas scale with disk size, and the default 100 gigabyte disks will have a low quota. Microsoft has written a guide on how to identify when you are being throttled and what is causing it, along with a suggestion to move the Docker data root to local storage, which is available on each node. This change is being tested to become the default in 2020.
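Moving the Docker data root is normally done through the daemon configuration; as a sketch (the mount point here is an assumption, and on AKS the platform may handle this for you):

```json
{
  "data-root": "/mnt/docker"
}
```

This would go in /etc/docker/daemon.json on the node, and dockerd must be restarted for it to take effect.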
ADAM GLICK: All your containers might run on port 80, but the machine running them has but one port 80 to offer. Kubernetes manages this network magic transparently, but if you are new to the space and want to learn how it all works, security vendor StackRox has published an introduction which promises to demystify it, at least in the default case.
CRAIG BOX: Developers who will run production code on Kubernetes can benefit from access to it during the dev cycle. There are a number of ways that you can achieve this. Daniel Thiry from DevSpace has written up the pros and cons of each, including the trade-off of paying for cloud infrastructure versus running locally.
ADAM GLICK: Monitoring company Datadog has posted a comprehensive three-part series on monitoring the Istio service mesh. Istio introduces a control plane and attaches a sidecar to each meshed workload. All these components can be monitored and their logs analyzed. Datadog points out which metrics to watch, as well as provides instructions on how to do it with their products.
CRAIG BOX: The standard deployment of Istio includes a single external load balancer on a single external IP address. Should you want more than one, it's possible. Peter Jausovec from Learn Cloud Native has written up his method for doing so.
ADAM GLICK: The Monitoring Monitoring newsletter has taken a look this week at Big Prometheus, systems that help you aggregate all your metrics in one place. We talked to the authors of M3, one such system, in episode 84. Author Clay Smith compares it with Thanos, Cortex, and VictoriaMetrics. Consider your scale before you go down this road, because Smith and Cortex author Tom Wilkie both suggest that an HA pair of Prometheus servers is currently enough for 80% to 90% of users.
CRAIG BOX: The Cloud Native community wanted Helm 3 to be different and now, after a long wait, are having to come to terms with the fact that Helm 3 is different. Jack Morris has gone through the migration and written up the breaking changes he found. A migration script addresses most chart-based issues, but command line flag changes and different namespace visibility may lead you to getting different results than you were used to. Check out his post to keep your CD system running smoothly.
ADAM GLICK: Many people have moved from a push model for CD, like Helm, to a pull model, like the popular GitOps paradigm. Alex Kaskasoli, a self-described "recovering pentester", highlights the security advantages of pull-based CD, which means a central deployment tool no longer needs administrative access to your cluster in order to run. Kaskasoli closes by saying he wants to see a shift to pull-based builds to offer the same advantages to CI systems.
CRAIG BOX: Even when you have valid credentials in the kubeconfig file, you often have to authenticate to applications that run on the cluster, such as the Kubernetes dashboard. The team at Banzai Cloud have developed a pattern to allow zero-touch authentication on Kubernetes, which, in their example, lets you connect to their Backyards service mesh dashboard without first starting 'kubectl proxy'. Banzai Cloud also announced that their Bank-Vaults operator has added support for multi-region deployments of HashiCorp Vault.
ADAM GLICK: ContainerJournal starts the year looking at the evolving relationship between OpenStack and Kubernetes. People still need a way to provision infrastructure to run their Kubernetes on. And while vendors are starting to offer bare metal solutions, author Mike Vizard doesn't think that people who have made an investment in OpenStack are looking to give it up. We will explore this concept further in an upcoming episode.
CRAIG BOX: Google Cloud posted a roundup of the security enhancements they made to GKE over the last 12 months, including rebasing system images on distroless-based images and adding a new viewer role for public access so they could remove permissions from the unauthenticated user.
ADAM GLICK: The CNCF has published their transparency report on the KubeCon and CloudNativeCon event in North America in November 2019. Almost 12,000 people attended, up almost 50% from Seattle the year before, and with 65% of the attendees being first-time attendees.
CRAIG BOX: Finally, and also from the CNCF, a case study on SaaS vendor Zendesk and their decision to use Kubernetes. Senior principal engineer Jon Moter is quoted as saying, "Kubernetes seemed like it was designed to solve pretty much exactly the problems we were having. Google knows a thing or two about containers. So it felt like, all right, if we're going to make a bet, let's go with that one." We couldn't agree more.
ADAM GLICK: And that's the news.
CRAIG BOX: Lin Sun is a senior technical staff member and Master Inventor at IBM, where she has spent the past 14 years doing software engineering in areas including cloud and open technologies. She currently works on the Istio service mesh. Welcome to the show, Lin.
LIN SUN: Thanks for having me.
ADAM GLICK: For those that may not be aware, what is a master inventor?
LIN SUN: Well, it's really an IBM-only title, I should say. Within IBM, there is a patent invention process where you can grow into a Master Inventor.
CRAIG BOX: Did they patent that process?
LIN SUN: I don't believe so. [CHUCKLES] So the way it works is we have a point system for patents. The moment you have a patent filed, you get three points. If you have an article published on ip.com, you get one point. You typically need about 12 patents to be filed and at least one to be issued. And also you have to prove that you are continuously mentoring new inventors within IBM.
So that's kind of the criteria to become a Master Inventor. I actually was on the Patent Review Board. I was leading the Master Inventor review board for IBM Cloud this year. So it's really interesting to see. We apply the criteria to decide who gets to be selected as Master Inventor.
CRAIG BOX: By the time you got to 150, did they need to invent new categories?
LIN SUN: They actually-- the corporation actually set categories for different years. And every year, they actually have tagged certain categories as high priority categories, which they actually give you special bonuses. So if you happen to file a patent within a strategic area, you actually get, like, additional-- I can't remember the exact dollar amount. I think it may be $1,500 for working on strategic areas. So it's really interesting. And I've noticed from the past 10 years, the areas are certainly evolving as well, so that's interesting.
ADAM GLICK: What are some of your favorite patents that you've filed?
LIN SUN: It's really hard to remember all that many. I would say the first one is definitely one of my favorites. I remember that was a little bit over 10 years ago, back when my daughter was born, and I was sending her pictures around. We were using a messenger system called Sametime - you can think of it as like Slack. I was sending her picture to one of my co-workers.
And she was like, you know what? I had a super embarrassing moment because I accidentally replied to my male co-worker, said, "your baby looks so cute". So we started brainstorming on that problem. We were like, wow, that's a really common problem where you actually send irrelevant messages to your co-workers without understanding the context of the history.
So that became our first patent - we proposed a confirmation warning system for instant messenger to be able to analyze the context of the message, and who are you sending to, to see if the message actually makes sense or generate a warning to a user. So it's a small thing, but it was quite interesting.
From there, we've actually done some really, really interesting things. One other example I would quote is from when my kids were in elementary school. Within my co-inventor group, we also have people who have kids at school or daycare, and we were talking about the yearbook one day. We started talking about, you know what? When you look at the yearbook, sometimes you think, it may not have your kid's picture, or may not have their best friend's picture. Or maybe it doesn't represent the class really well.
So we were like, it would be really, really cool if there were a system behind it to look at all the photos and help you intelligently select photos, to make sure the yearbook actually represents the class, and also the grade, proportionally. So that was another interesting thing. It's something I enjoy doing as an aside - you can tell it's not really related to my work. It's just something fun to talk about with my co-inventors at work.
CRAIG BOX: Let's talk about your work then. How did you get involved with the Istio project?
LIN SUN: Back in 2017 - I believe Istio 0.1 was launched in May at GlueCon - so right after that, my bosses came to me. Certainly, within a corporation, you have multiple bosses. Back then, Briana Frank and Jason McGee were my bosses. They pulled me over and said, hey, this is a new project. IBM is a founder of this project. It's really interesting. We are looking at you switching to this project. And they asked me for my opinion.
Certainly, I was scared. You know, as a woman, you always feel when something new is thrown at you, you're not sure. Are you giving up your existing role where you spent so much effort to establish yourself? And then you could move onto this shiny cool thing. So at the end, I decided to take up the challenge. I was super excited.
The project was really a collaboration between IBM, Google, and Lyft - and also within IBM. I was in the IBM Cloud CTO office, and it was a collaborative effort between the Cloud CTO office and the IBM Research team. So it's been a really amazing journey for me, to get on the project at a really early stage and be able to help influence it.
CRAIG BOX: Before Istio, there was a project from IBM called Amalgam8, with the number eight at the end. What can you tell us about that project?
LIN SUN: Amalgam8 was really IBM's point of view of service mesh. I believe that project was a collaborative effort between the IBM Research Team and also the IBM Cloud Team. Even before Amalgam8, IBM actually was running a service in IBM Cloud - and before that, when it was called IBM Bluemix - we had a service called Service Proxy, which is our viewpoint of service mesh. And we kind of externalized that, through collaboration with the research team, as this project called Amalgam8.
And when we first open sourced it, we actually got a lot of feedback on that project - "why are we open sourcing this really cool thing?" There was debate within the company to see, is it something we really want to open source? But thankfully, the leader on our side, Jason, and also the leader on the research side, Tamar, were very open to open sourcing it and taking it to the next level. And I believe through conversations with Google towards the end of KubeCon 2016, we decided to collaborate with Google on the Istio project.
ADAM GLICK: People hear the term "service mesh" a lot these days, but not everyone may be aware of what a service mesh is. What would you describe a service mesh as, in your own words?
LIN SUN: If I had, like, one or two words to think about service mesh--
CRAIG BOX: You can have as many as you like.
LIN SUN: I know, but I would try to see what's the best way to highlight for the user. I would say a key word is "extract". What service meshes really do for the user is to extract the complexity of the networking out of the user's application, of their services, so the user doesn't have to worry about that. They would trust the service mesh implementers to provide that extraction for them.
So a simple example I tend to use to think about service mesh: as a user, I think of my microservice as a room - like the one we are in right now - where we want to focus on the essential functionality of the room. I could spend all the effort to figure out, how do I do network retries? How do I do circuit breaking? How do I do logging? How do I do telemetry with my services? But if somebody actually gave me a storage box next to my room, where I can throw all the non-essential pieces outside of my room into that storage box, then why not take them up on that?
CRAIG BOX: The storage box, the proxy that powers Istio, is the Envoy proxy. And before, the Amalgam8 project used Nginx as its proxy. What do you think the advantages were of adopting Envoy? Why do you think the Istio project made that choice?
LIN SUN: That's actually a really good decision I'm very thankful that we made. Back in 2016 when the Amalgam8 project was out there, I don't believe Envoy was at the point that everybody saw it as a winner in the service proxy space. And Nginx was certainly very, very popular.
Thankfully, we, as a community, picked Envoy. And I think the primary reason was Envoy was extremely lightweight and was written in C++. It was proven to handle 2 million requests per second, and it was battle tested at Lyft. That was humongous, right? Being tested with many, many microservices in their production environment gave us a lot of confidence in Envoy.
And to be really frank, just seeing how Envoy grew within the CNCF proved to us that the Istio project made the right choice. Also, if you look at the service mesh landscape, there are so many homegrown control planes, as well as other control plane projects that are built on top of Envoy. So I would say Envoy is becoming the standard sidecar proxy for most service mesh projects now.
CRAIG BOX: Istio 1.0 was launched in July 2018. There was a nine-month gap between 1.0 and 1.1. And then since then, we've had quarterly releases, on schedule. Why did that release cycle take so long, and what have the releases been since then?
LIN SUN: Wow, yeah, so 1.1. We found a lot of issues with 1.0, especially a lot of issues with performance and scalability. It took a really long time to have the Sidecar API in place. There was tremendous debate within the community to standardize the Sidecar API.
I remember when I participated in the networking working group meeting, there were rounds and rounds of debate about the best way to position that API, in an intuitive way, for a user to be able to consume and leverage. Because we believe that's an important API for a user to be able to configure the scope of the Envoy configuration - they can scope it per namespace.
So it took us a long time to get that API in and also get the implementation ready. A lot of people were joking with us that "it takes a woman nine months to produce a baby", and it took us nine months of really, really careful work to produce Istio 1.1.
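As a sketch of what the Sidecar API lets you express (the namespace name here is hypothetical), a resource like this limits the Envoy configuration pushed to sidecars in a namespace to just the services they actually need to reach:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: my-namespace    # hypothetical namespace
spec:
  egress:
  - hosts:
    - "./*"                  # services in this namespace
    - "istio-system/*"       # the Istio control plane
```

Scoping configuration this way was one of the major levers for the performance and scalability problems mentioned above, since sidecars no longer need the configuration for every service in the mesh.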
CRAIG BOX: Since 1.1, the project has been making quarterly releases, with the most recent 1.4 release in November. What changed in order to make those quarterly releases possible?
LIN SUN: It's a lot of hard work from the community, for sure. From the 1.1 release, we realized it's not acceptable to have a release out in nine months. We decided we were going to look at how we can release faster, on a reasonable cadence. And we decided on a three-month release cycle, because we wanted to follow in the footsteps of Kubernetes.
And I can't say Istio really depends on [specific versions of] Kubernetes, but if we look at our Istio installation documentation, we do recommend certain versions of Kubernetes to use, because those are the ones we have the bandwidth to test with. So it makes sense for us to align with the Kubernetes releases. We did that for 1.2 and 1.3. And we made an exception for the 1.4 release, mainly because we didn't want to release in December.
I mean, it's the same thing. People don't want to release on Friday, right? We thought that the team members have been working really hard throughout the year. And most of the members will be on vacation in December. So we didn't want to do that. So we actually shifted the 1.4 release a little bit earlier, to November. And it was actually a good thing, right in time for Service Mesh Con and KubeCon.
The community decided that automation was one of the key things for us to be able to ship the releases at the right cadence. And the other thing we agreed as a community is, if something doesn't land in this release, that's OK, because there's another train in three months. It doesn't make sense to hold up the release; just line a feature up for the next release, maybe make it a dark launch, make it experimental even before alpha, for people to try. So I think that worked out really well, with that momentum of "nothing's perfect" and it being most important to get the release out on a cadence, to prove that we can do it.
ADAM GLICK: One of the features in Istio is mutual TLS, or mutual Transport Layer Security. It's basically the evolution of what people think of as SSL - public key cryptography between two servers that talk to each other. For developers, what does that mean when they're building something and Istio is involved? What do they have to do for security? Or what does it give them for security?
LIN SUN: That's actually cited as the most important feature of Istio when we talk to our customers. From a developer perspective, they are essentially trusting Istio to handle the secure communication among the microservices for them, instead of having to handle "what's the identity of my service, what's the identity of my target service, how am I going to encrypt the traffic, am I going to be using HTTPS to talk to the other target service?" They no longer need to worry about that.
From a developer perspective, they communicate with the target service over plain HTTP. Then the sidecar proxy, as it traps the incoming and outgoing traffic, encrypts it and does the mutual TLS handshake and service identity checking, so that the traffic between the services flows over mutual TLS.
ADAM GLICK: And all of that's just transparent to a developer. They don't need to think about it because whoever is building and running the Istio environment, that gets taken care of at their level?
LIN SUN: That is right, but the user does have control. As they're onboarding their microservices onto Istio, they have control over, "I want this to be permissive mutual TLS, so if mutual TLS is not working, I'm able to fall back to plain traffic and debug my flow".
That's a totally OK scenario, which is also why Istio has a permissive mutual TLS mode alongside strict mutual TLS, and that's the recommended pattern as users onboard their services to Istio and start to leverage mutual TLS. We recommend users start with permissive mutual TLS to get their services onto the mesh. And once they are happy, once they've got the telemetry flowing and the basic functions of the mesh, and they need to tighten up the security, they can enable strict mutual TLS.
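At the time of this episode (Istio 1.4), that mode was set with the alpha authentication policy API; as a sketch, a namespace-wide permissive policy could look like this (the namespace name is hypothetical, and later Istio versions replaced this API with the PeerAuthentication resource):

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: my-namespace    # hypothetical namespace
spec:
  peers:
  - mtls:
      mode: PERMISSIVE       # switch to STRICT once onboarding is complete
```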
ADAM GLICK: What are some of your favorite features that came in Istio 1.4?
LIN SUN: I would say the top favorite feature would be auto mutual TLS. That's something we started to discuss back at KubeCon Europe, where we were like, "why can't we automatically config the DestinationRule for a user?" So if you ever use Istio, if you ever follow our tutorial, you will find out one thing we actually ask you to do is, if you enable authentication policy on your target service, we actually require you to configure a DestinationRule to have that security policy with mutual TLS.
That's really not obvious to an average user, and we actually got tons of issues on GitHub. People were reporting 503 errors because they forgot the DestinationRule. So we took the initiative, opened an issue on GitHub, and got tremendous feedback from people in the community, from Google and Tetrate. A couple of engineers took on that work and worked it through. We were hopeful to land it in 1.3, but it was a little bit of a stretch, so we made it in 1.4. It was dark launched in 1.4, but it actually worked really well when I tried it. So I'm super pleased to see that launched in 1.4.
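For context, this is the kind of DestinationRule that auto mutual TLS makes unnecessary - before it, enabling an authentication policy on a target service also required telling clients to originate mutual TLS (the service name here is hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings.default.svc.cluster.local   # hypothetical target service
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL    # forgetting this was a common source of 503 errors
```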
Besides that, I would say the client-go library, which allows users to programmatically access the Istio API, has been getting a lot of traction. That's also one of my favorite features, and it's really a collaborative effort within the community, between Google, the Red Hat team, and also the Aspen Mesh and IBM teams. It's a really good thing to see it land in 1.4 as well.
The other thing, on the user experience side: "istioctl analyze" is a really cool tool. If you haven't tried it, I highly recommend you do, because it allows you to take a YAML file and run it through the analyzer, which tells you exactly what could be wrong with your Istio configuration. That's something the community has been waiting on for a long time.
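A sketch of how that looks from the command line (in Istio 1.4 the analyzer shipped under the "experimental" command group before being promoted to a top-level command in later releases, and the file name here is hypothetical):

```shell
# Analyze the Istio configuration in the live cluster
istioctl experimental analyze

# Also analyze a local YAML file before applying it
istioctl experimental analyze my-config.yaml
```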
ADAM GLICK: Can you talk to some of the end user benefits that come from those things, such as adding MTLS?
LIN SUN: Yeah, so for example, auto mutual TLS - from the user perspective, I think the biggest thing is transparency. As you move from permissive mode to strict mutual TLS, you don't have to worry about creating that extra DestinationRule. If you don't otherwise need a DestinationRule, you don't have to worry about creating one. So I love the transparency aspect of that.
The other thing I love about the transparency work the community has done: it used to be that you had to modify your deployment YAML file to declare the containerPort for Istio. That's no longer a requirement since 1.3, so you don't need to declare a containerPort for Istio. You may need to do it for Kubernetes, which is fine, and people get that. You just no longer need it for Istio.
The other thing that's on the horizon - I think it might be alpha right now; it's dark launched - is intelligent protocol detection. If you look at the Istio documentation, we always ask users to open up their service YAML file to name their service ports. That was not intuitive, and it was a hurdle I tripped on when I was first onboarding my guestbook services onto Istio. It took me a couple of days to figure out. So thankfully, with intelligent protocol detection, when we make that stable, it's going to make that more transparent, so a user doesn't need to name the service port.
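Before automatic protocol detection, the port-naming convention looked like this - Istio inferred the protocol from the prefix of the port name (the service itself is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: guestbook        # hypothetical service
spec:
  selector:
    app: guestbook
  ports:
  - name: http           # the "http" prefix told Istio which protocol to expect
    port: 80
    targetPort: 8080
```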
CRAIG BOX: Istio, like Kubernetes, has a number of objects which you could consider lower level, or an assembly language for how you might program an environment. And then there are a variety of tools and platforms that build high level abstractions on that. A lot of people think Istio is complicated because all they see are the introductions which tell them to program in that lower level assembly language. What would you say to those people?
LIN SUN: I think they have an interesting perspective. And it's also because of the way we are telling people to look at Istio. If you look at our user documentation on istio.io, interestingly, we actually tell people how to do traffic management first, before you do telemetry, before you do security, right? So that's actually really interesting because if you look at the network APIs, I mean, they're really complicated. It took me personally a long time just to understand the differences between Gateways, VirtualServices, and DestinationRules. (By the way, these names are really hard to pronounce!)
One of the pieces of feedback that we, as the steering committee, have been providing to the community is that we would like to see a refactoring of our documentation. Instead of focusing on the networking APIs first and asking people to do a bunch of things on networking, we should be asking users to do telemetry first, because they don't have to learn any of the Istio API for that. They could get visibility in Kiali or in Grafana or in Prometheus right away, without learning any of the Istio API.
And the other thing would be security, right? Istio comes with mutual TLS, and even auto mutual TLS in 1.4, so that's something with no configuration, or really minimal configuration. That's actually the approach we're taking in the book: we teach people traffic management - the networking API - in the last chapter, because we don't believe that's what people would do in the first place. And honestly, most of the users, when we talk to them, are looking at onboarding the mesh for security - mutual TLS - and telemetry first, before they even have multiple versions of microservices. That's the last thing they are really looking at with service mesh.
CRAIG BOX: There are a number of things that have been identified as non-obvious about the way that Istio is configured. And in order to address a lot of those, you established a working group in the community around user experience. What has that group been doing?
LIN SUN: Back in January, I started a proposal within the TOC saying that for the project to be successful, it's really critical that we have a dedicated user experience working group, to look at the overall experience of Istio and to collaborate with all the other working groups within Istio. Since the founding of the group, Ed Snible has been a great lead, and I also see a couple of emerging leaders in that working group.
As part of that, we've actually made a lot of improvements to istioctl. It has been a tool that a lot of people questioned whether we needed - they felt that if kubectl could replace most of the commands of istioctl, then why do we need it?
So as part of the working group, we kind of formalized the things that istioctl can bring to the table. Some interesting commands landed, such as "add to mesh" and "describe a pod" - when you have a pod running in the mesh, we have a command to help you describe the pod and tell you what may be wrong with it from a service mesh/Istio perspective.
istioctl install was also created as part of that working group in collaboration with the environment working group. If you haven't tried the latest Istio, I encourage you to try it. We actually finally came down to one single command to install Istio. It used to be like two commands, which we hated.
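A sketch of those commands as they appeared around the 1.4 timeframe (several still lived under the "experimental" command group, and the pod and service names here are hypothetical):

```shell
# Inspect a pod in the mesh and flag likely misconfigurations
istioctl experimental describe pod my-pod

# Add a running service to the mesh
istioctl experimental add-to-mesh service my-service

# The single-command install (in 1.4 this was "istioctl manifest apply";
# it became "istioctl install" in later releases)
istioctl manifest apply
```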
CRAIG BOX: That's a 100% improvement.
LIN SUN: [LAUGHS] Right, exactly. Yeah. And 'istioctl analyze' is the other command that landed as part of that working group. So really, kudos to the UX working group for the amazing work they have done to make Istio so much easier to use, and to make it so much easier for a user to onboard their microservices onto the mesh.
CRAIG BOX: You have a number of roles in the project. You're on the Istio steering committee and technical oversight committee. And you're also the build test release workgroup lead. What do all of those roles involve?
LIN SUN: I probably hold too many roles, honestly, that's what I would say. [CHUCKLING]
Steering committee is based on contribution. In the Istio project, we are looking at each company's contribution - and I don't think this has been published yet, but what we are looking at is really how many blocks of 5% you have. So let's say IBM is at 15% - that would give us three seats on the steering committee, for example.
So as part of the steering committee, we are looking at the overall marketing of the Istio project. We are looking at the overall governance model of the Istio project. We are looking at - not the technical strategy, because that's delegated to the technical oversight committee - but the overall strategy of the project.
The technical oversight committee does a whole lot of more technical things. We try to review most of the Istio APIs. We try to review anything, basically any working group members or the leads want to bring up to the technical oversight committee for either attention, or for approval, or when there's conflicts.
It's interesting. The group sometimes runs really ad hoc-- sometimes we have a well-organized agenda, and then there was one time, I remember, we opened up the call and there was no agenda. And then people started talking, and we filled the whole hour quickly. Interestingly, the technical oversight committee calls are recorded, and they're open for anyone interested to participate. So that's totally open to anybody, if people are interested in helping shape the technical direction of the Istio project.
CRAIG BOX: The technical direction of the Istio project extends beyond what people think of as Cloud Native. Istio has the ability to add virtual machines into its mesh. And I understand it's moving in a direction where you can run it without any Kubernetes involved at all. What can you tell us about that direction?
LIN SUN: That's something we are really excited about. In the last few technical oversight committee meetings, we've been discussing "istiod" as a project. Interestingly enough, when Istio was first released, we were like, "we'd better eat our own dog food". We wanted to consume a service mesh ourselves, right? So we had these distributed control plane components-- Pilot, Citadel, and Mixer-- and we were like, "well, we want each of the components to run with its own sidecar, so we can get telemetry data, so we can do mutual TLS among the control plane components".
So we did all that. But most recently, we started questioning ourselves-- just as many of our users are questioning themselves-- do you really need a service mesh? And we came to the conclusion that because the Istio project is a single project released by a single team, rather than multiple teams, because the different Istio components are written in the same programming language, and because we have actually had a lot of issues with a distributed control plane, we would do an experimental project to try to bring our control plane components together.
We were joking in a TOC meeting that we're going back to a monolith ourselves on the control plane. But that's something being cooked, because we really want to see how much of an improvement it is. And we think it's actually easier for people who operate the mesh, right? Instead of worrying about Galley, Citadel, Mixer, and Pilot, they could just worry about one single control plane component, which is really cool.
What's also interesting about this is that, beyond having istiod as a single control plane, people could run Istio in a separate environment, and istiod could potentially manage that data plane as an "Istio minion". [CHUCKLES] That's an interesting pattern, I think, being recognized in the community as a way that people might want to operate and run Istio.
CRAIG BOX: IBM is quite famous for making things like mainframes. How can we connect workloads that run outside the modern environment, even, into the service mesh?
LIN SUN: That's something I am really hopeful about: as part of this monolithic control plane project, we could potentially move Istio to run on the mainframe easily, in addition to VMs and Kubernetes.
CRAIG BOX: IBM announced two projects at the recent KubeCon, kui and iter8, with the number eight again. It's good to see it come back from the Amalgam8 days. What can you tell us about those projects?
LIN SUN: The iter8 project is the one I'm a whole lot more involved in, with the IBM Research team-- the same IBM Research team we worked closely with on the Amalgam8 project. They developed iter8, and it's a project that's been cooked within IBM for probably a little more than two years now. I think it's good timing for the project to be launched at KubeCon, first of all, because I feel like Istio is at a point where the networking API, and some of the other Istio APIs, are really getting mature. So it's so much easier for people to build tooling on top of Istio.
What iter8 is really trying to do is provide analytics for canary deployments. If you are using Istio and doing canary deployments, you are going to run into challenges: is your canary actually better than your baseline? There are a lot of data points you have to look at. What iter8 does really nicely is bring those dashboards together visually for you, so you can look at the baseline, compare it with the canary, and make an informed decision.
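In Istio terms, the baseline/canary split that a canary rollout adjusts is expressed as weighted routing in a VirtualService. A minimal sketch under Istio's networking API; the service name, subset names, and weights here are illustrative, not taken from iter8 itself:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: baseline   # current stable version
      weight: 90
    - destination:
        host: reviews
        subset: canary     # candidate version under evaluation
      weight: 10
```

A canary tool shifts the two weights over time based on the metrics it observes, until the canary either takes 100% of traffic or is rolled back.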
ADAM GLICK: Can you tell us more about your new Istio book?
LIN SUN: I'm so glad it's over. We finally sent the book to production. A couple of months ago, our marketing team was really asking us-- they really pushed us hard. "We want to produce a book to explain about Istio. Who can do it?" At the time, I was like, no, no, no, I don't have time to write a book. The project is so fast evolving. There were actually multiple publishers who reached out to me privately before this to write a book. I was always thinking, for a fast moving project, why would you write a book? Why would you spend the time?
And the thing is, the marketing team finally convinced me, because this is a really, really short book. It's only targeted to be a report-- in O'Reilly's terms, a report is only 75 pages. We were like, wow, that's minimal effort. And for me, I like to check boxes for things, and writing a book is not something I had done. It's like-- I have 150 patents. So I could save my time from writing patent applications, and maybe spend that time to write the book. [CHUCKLES] I just like to check things off.
We finally decided to take up the challenge, with my co-author, Dan Berg, to write a book. And it actually turns out to be really, really cool for us to be able to, from a different lens, to explain Istio to our users. The users we are thinking about are developers, the mesh operators, the security operators, the operators team. So it's really interesting for me, personally, to actually have a working example from a user's lens to look at Istio, to look at "how do I actually incrementally onboard my microservices into Istio, and what are the hurdles I run into as part of that migration process?" And be able to share that with the user.
And as part of that, we also look at the broader ecosystem of service mesh projects. We shared our approach on why we landed on Istio, and what we evaluated as key criteria when selecting a service mesh project. So those are interesting, too. I know Istio is a really fast moving project, but a lot of the concepts in the networking API will remain there. So that makes me feel good.
ADAM GLICK: Lin, it's been great having you on the show. Thanks for coming on.
LIN SUN: Thanks so much for having me.
ADAM GLICK: You can find Lin Sun on Twitter at @linsun_unc.
CRAIG BOX: Thanks for listening. As always, if you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on Twitter at @kubernetespod, or reach us by email at firstname.lastname@example.org.
ADAM GLICK: You can also check out our website at KubernetesPodcast.com, where you'll find transcripts and show notes. Until next time, take care.
CRAIG BOX: See you next week.