Thursday, February 10, 2011

On the Path to Continuous Delivery–Part 1

In this post, I’m going to begin describing the way that I’ve been addressing our pain points associated with delivery of our software. This will be the first of a number of posts that delve into the specific ways that we are altering our methodology to be able to build and deploy our software faster and better each iteration.

First, a bit of background…

When you look at the process that was in place when I joined the company, it was somewhat disjointed. Some parts were working pretty well, while others were a complete mess (the deployment process, for example). That’s probably not unlike the situation at many companies (assuming they have a process at all). But given that our company uses Scrum and considers itself Agile, the process clearly wasn’t allowing us to realize the full benefits of our methodology.

The phrase “Continuous Delivery” was new to me when I attended Agile2010 in Orlando last year; “continuous integration,” however, was old news. I’d been using CI for quite some time and found it to be an invaluable part of the software development process. I attended Jez Humble’s session on Continuous Delivery. This practice focuses on keeping your entire product ready for deployment at any time, all the time. That means all parts of your application: application code, database schema, and any data the app needs to function. A quick poll of the session’s attendees showed a wide variance in their delivery iteration lengths. The least frequent was once a year and the most frequent was once every six hours. Both were extremes, but it made me realize that if someone can deliver working software every six hours, we could certainly do it once a sprint, once a week, or even daily if needed.

Jez’s talk started me thinking more deeply about how continuous integration represents only the first step in the overall process. What we needed was to apply some of the same principles from CI into how we package and deploy our software as well. What we needed was a way to go from requirements to delivery as continuously and quickly as possible. What we needed was more automation and more repeatability.

Martin Fowler also attended Jez’s session and added some commentary. One of the tenets he spoke of was bringing pain points forward in your process and doing them often; the idea is that doing so forces you to address them and smooth them out. For us, that significant pain was in our deployments.

So I began at the back end of the process (deployments) and worked my way backward into the development build/test/stage process. This allowed me to tackle our biggest pain first, look for ways to improve it, and learn from that what the earlier parts of the process needed to produce in order to make deployments easier.

We are faced with a number of complications that make our transition difficult. What fun would there be if it were simple? In particular, we have six development teams (not counting the database development group) and two sets of development tools/environments (Visual Studio 2008/2010 and TFS 2008/2010). Until now, the database development group did not use version control and had its own set of tools, so you could really say we had three sets.

Each team has a slightly different methodology (only one uses CI), though the others share some similarities in their TFS build project structure. These six teams are building numerous applications on differing sprint schedules with varying degrees of cross-team integration. They are also responsible for maintaining their applications, and the business expects those maintenance updates to be delivered intra-sprint, particularly for the more critical bug fixes.

As each team completes its various updates, including testing, it funnels all of its deployment requests to a single team (of two people right now) that reviews, accepts, and performs each request to push the update into the production environment. Today, those steps look very different from team to team and from request to request. To make matters worse, a significant portion of the work is manual, from file copies to individual SQL scripts executed one by one. As you can imagine, this team is on the verge of burnout.
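Even a small amount of scripting removes a lot of that manual risk. As one hedged example (the numeric-prefix naming convention here is an assumption for illustration, not our actual standard), a release’s SQL scripts can be ordered automatically so the whole set is applied as one batch rather than run by hand one at a time:

```python
from pathlib import Path

def ordered_sql_scripts(script_dir):
    """Return the .sql files in script_dir sorted by numeric prefix
    (e.g. 001_create_users.sql runs before 010_add_index.sql), so a
    release's schema changes can be applied as a single ordered batch."""
    return sorted(
        Path(script_dir).glob("*.sql"),
        key=lambda p: int(p.stem.split("_", 1)[0]),
    )
```

A deployment step can then loop over that list and execute each script against the target database, with the ordering guaranteed by the convention instead of by whoever happens to be doing the deployment that day.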

That pretty well sums up the current state of affairs. In my next post, I’ll describe the automation pieces that have been built and are now beginning to be put in place by the teams to ease the deployment burden.
