The following is a recap of Andre Boutet’s Q&A with Logz.io CTO Jonah Kowall from OpenObservability 2020. It has been edited for brevity and clarity.
At OpenObservability, we had the pleasure to sit down with Andre Boutet, the Senior Director of Cloud Operations and Services for OneSpan. Andre had a conversation with our CTO, Jonah Kowall, around what observability means to his team and his organization. Teaser: It’s not just about ensuring uptime and availability for external systems. It’s a philosophy with a foundation on supporting the entire development lifecycle.
At OneSpan, Andre oversees a team that delivers monitoring technology that allows internal stakeholders to observe what is happening to their underlying infrastructure, cloud services and security environment as they build systems for internal and external use.
In the modified recap from the session below, Andre highlights the uniqueness of his team’s process, the importance of shared ownership across the cloud operations function, and the value that open source brings to enabling a “plug-and-play” and “future-proofed” approach to observability. (And, in case you are interested, OneSpan is actively recruiting across all lines of business.)
Could you tell us a little bit about your background and yourself, please?
Yes, thank you. I’ve basically been working in technology for over 20 years. Pretty much my whole career has been built around building software and data centers and mainly focused on infrastructure technology. However, for the last 12 years, I’ve been focused on building high availability SaaS cloud offerings and solutions. I started off in the hosted IVR world, working with large banks and building high availability systems around voice IVR and highly secure IVR systems. Then about 7-8 years ago I moved over to commerce. I began working with some massive e-commerce companies generating fairly large volumes of transactions and building highly-scalable and customizable solutions.
Then about two years ago, I made the jump into OneSpan, a publicly traded cybersecurity technology company based in Chicago, Illinois that offers a cloud-based and open architected anti-fraud platform and is historically known for its multi-factor authentication and electronic signature software.
We are interested in your team structure and evolution. Could you talk to us a little bit about how you structured it and why you made some of those decisions?
When I started here about two years ago, there was a foundational idea that our previously on-prem products needed to be cloudified and brought into the cloud, even though they were mainly being run out of IaaS providers.
And essentially, as we know, when we bring on-prem products into the cloud, they’re not fit for it, as they’re not cloud native, per se. You can do as much as you can to make them as functional as possible, but they need constant care and monitoring. So, what we decided early on was to expand the scope of the cloud operations team. By doing so, we can really take ownership of this process. We are sustaining production, building on production, monitoring production, taking actions on production, and creating a DevOps structure to supplement and assist.
How did this model evolve over time?
The thought process has evolved over time, but we maintain one common philosophy, which is we build self-serve services that allow for the consumption of cloud resources via blueprints and repeatable infrastructure as code. And what this does, is it allows for our internal stakeholders and consumers to observe what’s happening in the various different services we’re providing in the underlying infrastructure, as well as related to security, while doing what they need to do to best serve our customers.
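The blueprint idea Andre describes can be sketched in a few lines. This is a hypothetical illustration, not OneSpan’s actual tooling: the blueprint names, fields, and the `render_blueprint` helper are all invented, but they show the core pattern of self-serve, repeatable infrastructure where observability is baked into every deployment.

```python
# Illustrative sketch (invented names and fields): a "blueprint" is a
# parameterized template that internal teams fill in to self-serve cloud
# resources, so every deployment is repeatable and observable by default.

BLUEPRINTS = {
    "web-service": {
        "instance_type": "small",
        "replicas": 2,
        "monitoring": {"logs": True, "metrics": True, "alerts": True},
    },
}

def render_blueprint(name: str, **overrides) -> dict:
    """Produce a concrete, repeatable resource spec from a named blueprint."""
    base = BLUEPRINTS[name]
    # Only fields the blueprint defines may be overridden.
    spec = {**base, **{k: v for k, v in overrides.items() if k in base}}
    # Monitoring is baked in: consumers cannot opt out of observability.
    spec["monitoring"] = dict(base["monitoring"])
    return spec

spec = render_blueprint("web-service", replicas=4)
```

The point of the pattern is the last comment: because the operations team owns the blueprint, instrumentation ships with every resource automatically rather than being left to each consumer.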
We’ve also started to focus, within the last year or so, on partnering with various different service providers, with the focus on building a unique developer experience that spans the whole SDLC. The end-state goal here is that our developers need to be able to focus on one thing and one thing only, which is building kick-ass code.
A Philosophy of Monitoring
And so, this is the main ethos that drives us. We handle the monitoring so that our developers can build products that serve our customers. In order to meet those objectives, as I mentioned before, we have a DevOps team that ideates, builds, delivers, and secures consumable services in conjunction with the enterprise platform architects, and in conjunction with our security architects.
Then we have a cloud operations team that supports, manages, and maintains those services, whether they be for internal, external, or for R&D. The idea of building a shared ownership model is, I believe, one of the foundational keys of our success and in our ability to drive this developer experience, drive uptime and availability, while improving feature rollout for our end-state customers, mainly banks and other financial institutions.
It’s very ambitious to build a whole self-service portal and self-service experience. This sounds like a model and process that could benefit the broader open source community, too.
And this is the great thing about the openness of the various open source platforms we work with: the plug-and-play nature of these tools, and the fact that we can expand or customize wrappers, or even build our own bolt-on services to extend some of the services we offer through these open platforms.
When we look at the self-service UI, which is one of our core delivery mechanisms, it’s being rolled out to various groups in the organization.
It’s pretty interesting to see how we can lift, shift, and plug and play independently of the end-state we’re building toward, whether it be a hyperscaler, a private cloud, or even something in-house, like internal IT.
All this customization is made possible by the open source nature of the software we use.
So, shifting to observability, when someone requests something in the portal and you do the automated deployment, the instrumentation, the visibility is already there because you’re controlling the life cycle of that asset or component with your technology—Logz.io, for example, for log management.
Could you talk a little bit about your strategy and where you see things going with your observability program?
Yes, for us, observability and monitoring allow us to better support all our end-state customers or consumers.
I won’t focus on the external part a lot here. I think everybody understands the importance of observability and monitoring with regards to uptime and availability of services. If we focus on what’s internal to R&D, though, we come back to the developer experience, and this is key, and front and center to our success at OneSpan.
The Developer Experience
Also, taking care of the developer experience by helping our engineers find, isolate, and fix issues is important. Finding anything that may be happening within code delivery, within its performance, or even potentially within its optimization with regards to the end-state where it’s deployed, is critical.
Leveraging observability tools like Logz.io really helps with this vision and execution. Now we start talking about things like seamless alerting; reporting into things like Slack or into Confluence; not needing to have eyes on something 24/7 when you’re running performance tests; being able to pull out next-day reports that facilitate the enterprise architect or the developer’s ability to see what happened during that eight-hour performance run, rather than going through hundreds of thousands of lines of error codes.
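The “next-day report” idea above can be sketched as a small aggregation step. This is a hypothetical illustration only — the log line format and field names are invented, and a real setup would query the observability platform rather than raw lines — but it shows how an overnight run collapses into a summary a developer can scan in the morning instead of reading hundreds of thousands of lines.

```python
from collections import Counter

# Hypothetical sketch: condense raw log lines from an overnight
# performance run into a short morning report. The "LEVEL component
# message" line format is invented for illustration.

def summarize_run(log_lines):
    """Aggregate log lines into totals and per-component error counts."""
    errors = Counter()
    total = 0
    for line in log_lines:
        total += 1
        level, component, _message = line.split(" ", 2)
        if level == "ERROR":
            errors[component] += 1
    return {"total_lines": total, "errors_by_component": dict(errors)}

report = summarize_run([
    "INFO api request served",
    "ERROR db connection timeout",
    "ERROR db connection timeout",
    "INFO api request served",
])
```

In practice the same summary would be pushed to a channel like Slack or a Confluence page, which is the “seamless alerting and reporting” Andre describes.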
So the ability to use an observability platform like Logz.io to enable quicker internal response, to enable us to find things quicker, is critical in being able to roll out updates to our service more quickly.
And lastly, external to R&D, the business operations part of what we do is also important. Our team can help the management side and the C-level suite by allowing them to gain better insights into how well we are forecasting, how well we are estimating, and how complex the products we’re delivering are.
We help them evaluate metrics and reports, or even to drill down and say, “Why is one Scrum team performing better than the other?” or, “Why is this team not struggling as much as this other team seems to be struggling with this project?”
Then, whatever we find can be seamlessly rolled out and used across the portfolio or the suite of services that we offer.
And this is what makes observability and the ability to tie in these tools very important to us. If we do this right, we gain the trust of our C-Suite, and our product management team continues to believe in our ability to adequately forecast events. And ultimately for the end-state customer or for the business itself, this results in better performance management, better cost forecasting, better pricing of features, and making sure that we’re instrumenting against those various different vectors that allow us to understand what the true cost of building something new really will be.
And how does open source play into this, in general, and your future plans around observability?
So I’ve been around a long time, and I remember days when you used to buy into things that required you to instrument your stuff to work with their code. And this has always been a very interesting thing, where we would spend a ton of time integrating into a tool, which would then remove our flexibility or ability to migrate away from it. And in the past, we made decisions where you were sort of damned if you do, damned if you don’t. You’re stuck with it. You are locked in.
And what we’re seeing now with open source is fantastic. We’re seeing the projects evolve, allowing us to reuse ideas that are out there in the open source community. You can take these ideas and bolt them on, or you can move them, or you can unbolt and re-bolt, or you can even create your own and then make it available to the community.
Binding to Open Source
So, I like the idea of not being bound to a specific vendor, but being bound to an open source back-end technology. I like the plug-and-play approach to observability. I like the play some vendors are making, using open source technology to build products on top of it.
And, we are a Logz.io customer. What attracted me to Logz.io was the fact that we can do things with the ELK platform that may not be available in the open source version. With Logz.io, you get the benefit of open source systems and community, but also a real service that makes deployment easier. And this is what we’re trying to do—forget about what’s behind the tool. Focus less on scaling and maintaining it. With this approach, we can pick the best of the best in open source to deliver the best user experience possible to our teams.
Extending Use Cases
And lastly on that topic, open source allows us to build based on our use cases. If somebody else has been there and done that already, then we can utilize it. It allows us to future-proof what we build, then maintain and ultimately deliver it to our stakeholders. So the future-proofing and this idea of not carrying legacy technology really interests us. It’s part of what we do day-to-day — and month-over-month — with regards to delivery.
You can view the full video interview here, too: