
Wednesday, June 22, 2011

Troubleshooting NAntContrib FxCop Task Error

I’m working on setting up the continuous integration process for a new application. I’m nearing the end of that work, but one step remains outstanding – getting FxCop running against the code base to ensure good code is being committed.

At first, I set up my NAnt script to simply ‘exec’ the fxcopcmd.exe application. While that worked and I did get my results output to the file system, the NAnt script continued on happily even when I had messages that needed to be addressed. I found a few blogs that suggested that I could parse the return code from fxcopcmd.exe, but it always seemed to return 0 – even when there were messages.
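For reference, that first attempt looked roughly like this (paths and file names are hypothetical, not my actual build script); the `<fail>` guard never triggered because fxcopcmd.exe returned 0 even with outstanding messages:

```xml
<target name="fxcop">
  <!-- Run FxCopCmd directly; program path and project name are hypothetical. -->
  <exec program="C:\Program Files\Microsoft FxCop 1.36\FxCopCmd.exe"
        resultproperty="fxcop.result"
        failonerror="false">
    <arg value="/project:MyApp.fxcop" />
    <arg value="/out:build\fxcop-report.xml" />
  </exec>
  <!-- Intended to break the build on a non-zero return code,
       but in practice the return code was always 0. -->
  <fail message="FxCop found issues."
        unless="${fxcop.result == '0'}" />
</target>
```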

That’s when I decided to use the NAntContrib FxCop task in my script. It’s easy enough to integrate. Simply add a <loadtasks> element that points to the nant.contrib.tasks.dll and then add the <fxcop> task to your script. When I did this, I was stymied by a new error:

Error creating FileSet.
    Request for the permission of type 'System.Security.Permissions.EnvironmentPermission, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

It turns out this issue is caused by a rather simple oversight. When you download the .zip file that contains the NAntContrib DLLs, you have to be sure to unblock it; otherwise the OS blocks loading the DLLs, and you get the error above. To make matters worse, I hadn’t unblocked the zip before I first extracted the files, so of course all of the contained files were blocked. Unblocking the zip and re-extracting over top of the existing files didn’t automatically unblock them – I had to go through them one by one and unblock each file. Once that nuisance was out of the way, the process began working.
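For completeness, the task-based setup was roughly the following sketch. The DLL and project paths are hypothetical, and the attribute names are from memory, so check them against the NAntContrib documentation:

```xml
<!-- Load the NAntContrib tasks; the assembly path is hypothetical. -->
<loadtasks assembly="C:\tools\nantcontrib-0.91\bin\NAnt.Contrib.Tasks.dll" />

<target name="fxcop">
  <!-- Unlike the plain exec approach, failOnAnalysisError lets the
       task break the build when FxCop reports messages. -->
  <fxcop projectFile="MyApp.fxcop"
         analysisReportFilename="build\fxcop-report.xml"
         failOnAnalysisError="true" />
</target>
```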

Now on to other, more pressing matters. :)

[Edit]

After I posted this, I realized that while FxCopCmd was running, none of the results were being saved to the output file and consequently the build wasn’t failing. In order for FxCop to save the results, make sure your .fxcop project has these settings:

<SaveMessages>
   <Project Status="None" NewOnly="False" />
   <Report Status="Active" NewOnly="False" />
</SaveMessages>

The important one is the <Report Status="Active" …> element.

Thursday, June 2, 2011

Reflecting on a Successful Software Release

What goes into a successful software release? Any honest answer to that boils down to something really abstract – “It depends.” In this blog post, I’ll give you a quick look at what went into making our most recent release a success.

First a little background …

The project that was just delivered was several years in the making. From concept to delivery took a LONG time. The reasons behind this were numerous; some were technical, but the majority were business-related.

The team members that worked on the project in the early days were not the same members that ended up delivering the solution. A couple of hangers-on were there for the whole duration, but largely it was a new team. Even the product owner from the client side was a new person. The team was also a geographically-dispersed multi-national team with language and time zone challenges.

The project’s development methodology changed over time. The process that persisted the longest and the one that was in place at the time of its delivery was a Scrum-hybrid that was chiefly centered on daily stand-ups. It lacked some of the important best practices that successful Scrum practitioners rely on. For example, it wasn’t until the last few weeks that the team had a well-defined and prioritized backlog.

The background of the project paints a pretty challenging picture for any team that finds itself facing the pressure to deliver a product. And this was certainly true for the team that delivered ours. So what made it possible to pull this off?

Key Drivers for Success? I’ll give you the top 3 reasons – at least from my point of view, which was as a latecomer to the project.

#1 Reason – Commitment

The team that delivered the project to the satisfaction of the client was determined to make it happen. Everyone involved knew the stakes and knew that success depended on each of their individual contributions. In my experience, projects that failed to deliver – or at least failed to deliver on time – almost without exception traced back to a lack of personal commitment somewhere on the team.

#2 Reason – A Spirited Team Leader

I’m convinced that what brought the team’s commitment into focus and pulled things together at the end were the efforts of a new Scrum master. The team’s newest business analyst was a catalyst for success. Her efforts to wrestle the last set of features and defects into a manageable backlog, and to focus the team’s work on the right areas, were a critical piece of the puzzle that helped the team cross the finish line.

#3 Reason – A Little Bit of Luck

Let’s face it, the project background that I gave at the beginning set the stage for another disastrous failure. As luck would have it, though, a number of events near the end made success possible. A new product owner became more engaged and helped focus attention on the must-haves. The team gained some new members with experience delivering results; they pulled together with the original team members and raised the bar for everyone. And lastly, the work of the team was largely usable and had relatively few “big ticket” issues. Had there been time bombs the new team hadn’t discovered, it could have been a very different result.

What was missing that would have made things better?

My top two items on this list are pretty solid in my mind. First – a well-groomed product backlog with priorities set by the product owner. That view gives the team a picture of what defines success; anything short of it is like shooting at a moving target. Second – team continuity. Churn on a team can be a killer. When people leave, they take with them something whose value can’t be overestimated – knowledge. If the person leaving is pivotal to the team, it can spell doom. Sometimes that knowledge can be rebuilt; other times it can’t. Reduce your churn to improve your team’s velocity.

So what now? Well, our team held a retrospective where a lot of great feedback was gathered, and plans were made to address the gaps. We’ll be relying on Scrum best practices to guide our new process, which will hopefully reduce or eliminate some of these challenges earlier in the cycle.

Good luck on your next project!

Thursday, February 24, 2011

On the Path to Continuous Delivery–Part 2

In my last post on this topic, I gave a bit of background on our work in progress to develop a continuous delivery practice. In this post, I’m going to describe the first part of that effort, which focused on the automated deployment of the compiled and tested software.

You might be wondering why we started at the back end of the process rather than getting continuous integration working and addressing issues at the front end. There were two primary reasons for this. One, deployments of our software to our production systems were the most manual, error-prone, and problematic part of our process. Two, with the exception of the database components, we had build processes in place that did a decent job of automating the compilation, packaging, and signing of our software (even if they weren’t run on a continuous basis). It seemed that our biggest immediate benefit would come from automating deployments. We also felt that once that piece was in place, we could optimize the front end of our process and feed its output into our new deployment process, thereby closing the loop and providing a full end-to-end solution.

When we put down our requirements for automated deployments we came up with this short list of must-haves:

  1. The process must support all projects irrespective of their current build environments (TFS 2008 or TFS 2010).
  2. A deployment must define a set of features that will be deployed together as a unit. In our terms, a feature represents a piece of the application stack such as the client UI, the web services middle tier, or the SQL database scripts. As a stack, all of the features defined in the deployment must install successfully, or none of them will be deployed. We treat this as a logical installation transaction.
  3. The deployments must be re-runnable. Our ultimate goal was to make this step of the delivery process integrate with automated processes in development and testing, so we wanted the deployments to be able to be run over and over again. We also felt this would allow us to iron out issues with the deployment prior to production thereby ensuring a higher degree of reliability.
  4. The deployment process must be a click-button automated process that requires no human intervention to fully execute.
  5. The process must support a rollback capability in order to allow our platform administrators to back out a deployment and restore a previous version in case of a problem.
  6. The packaging of application components and the automation around their deployment must provide a standardized structure that supports not only current feature types, but new as-yet not thought of features. For example, we knew we wanted to support deployments of SSRS reports, Informatica workflows, and other types of components.

My approach to satisfying these requirements was to develop a standardized directory structure into which application features would be dropped as well as a set of master scripts that had knowledge of the drop zone structure and the types of features that we wanted to deploy.

The drop zone directory structure is organized around three basic concepts: the Scrum team delivering the product; the deployment feature set; and the lifecycle phase of the deployment; i.e., DEV, TEST, or PROD. This allows each team to work independently and build up deployments for each of their products separately. Further, by supporting the lifecycle phase, it allows deployments to be rerun in each phase and tested prior to the final push to production.

The master script itself is little more than a wrapper around a set of feature-specific scripts that are responsible for the push of individual features. The master script is responsible for the basic flow of logic for the overall deployment orchestrating the callouts to each feature-specific script as well as logging and emailing results. I wrote all of these scripts in MSBuild, but that is really not important. I could have also done it in NAnt or PowerShell and achieved the same results. In my case, MSBuild seemed to be the best choice because others were familiar with it and we could ultimately integrate with TFS very cleanly.

Each of the feature-specific scripts follow a basic pattern: Initialization, Backup, Deployment, Finalization, and optionally Rollback. The specifics of how each are implemented vary of course, but the basic pattern of logic allows the master script to orchestrate the features in a consistent way. Finally, in order to allow the features themselves to be defined externally from the scripts, I separated their definitions into a separate MSBuild project file that the master script imports. This allows the logic to simply enumerate the features and push them out one by one while allowing a separate process to define the list. This is important because it will allow our build processes in the future to populate the feature list.

The following shows the basic structure of the MSBuild proj files:

[image: basic structure of the MSBuild .proj files]
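As a rough sketch of that structure – all file, feature, and target names here are invented for illustration, not taken from the actual scripts:

```xml
<!-- Features.proj (illustrative): feature definitions kept in a
     separate file so a build process can generate this list later. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <Feature Include="ClientUI">
      <Script>Deploy-ClientUI.proj</Script>
    </Feature>
    <Feature Include="WebServices">
      <Script>Deploy-WebServices.proj</Script>
    </Feature>
    <Feature Include="Database">
      <Script>Deploy-Database.proj</Script>
    </Feature>
  </ItemGroup>
</Project>

<!-- Master.proj (illustrative, a separate file): imports the feature
     list and orchestrates each feature script through the common
     Initialize/Backup/Deploy/Finalize pattern. -->
<Project DefaultTargets="DeployAll"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="Features.proj" />
  <Target Name="DeployAll">
    <MSBuild Projects="@(Feature->'%(Script)')"
             Targets="Initialize;Backup;Deploy;Finalize" />
    <!-- On failure, the master would instead invoke each feature's
         optional Rollback target, plus logging and email of results. -->
  </Target>
</Project>
```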

And here is a simple drop zone structure example:

[image: example drop zone directory structure]

Within each feature group, you would expect to see 1..n feature directories; e.g., one for the ClickOnce client, one or more web services, and one or more database folders.
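To make that concrete, a populated drop zone for one team might look something like this (team, product, and feature names invented for illustration):

```text
DropZone\
  TeamAlpha\                 <- Scrum team
    ProductX-Sprint12\       <- deployment feature set
      DEV\                   <- lifecycle phase (also TEST\, PROD\)
        ClientUI\            <- ClickOnce client
        WebServices\
          OrderService\
        Database\
          001_CreateTables.sql
          002_SeedData.sql
      TEST\
      PROD\
```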

I won’t go into the scripting specifics in this post, though I might decide to do a separate post to describe some lessons learned in MSBuild while developing these. Suffice it to say that the most important aspect of this initiative was to be ruthless in looking for ways to automate each and every piece of the application deployment.

In the next post in this series, I’ll describe the steps taken to bring our database developer group into the process and how we worked out a way whereby their development scripts were integrated into our deployments. As a part of that post, I’ll take you through some of the complications that have prevented this from happening sooner and how we overcame them.

Thursday, February 10, 2011

On the Path to Continuous Delivery–Part 1

In this post, I’m going to begin describing the way that I’ve been addressing our pain points associated with delivery of our software. This will be the first of a number of posts that delve into the specific ways that we are altering our methodology to be able to build and deploy our software faster and better each iteration.

First, a bit of background…

When you look at the process that was in place when I joined the company, it was somewhat disjointed. Some parts of it were working pretty well, while other parts were a complete mess (like the deployment process). That’s probably not all that different from what the situation is at many companies (assuming they have a process at all). But considering our company uses Scrum and considers itself Agile, it seemed the process wasn’t allowing us to realize the full benefits of our methodology.

The phrase “Continuous Delivery” was new to me until I attended Agile2010 in Orlando last year; however, “continuous integration” was old news. I’d been using CI for quite some time and found it to be an invaluable part of the software development process. I attended Jez Humble’s session on Continuous Delivery. This practice focuses on making your entire product continuously ready for deployment at any time, all the time. That means all parts of your application – application code, database schema, and any data needed by the app to function. A quick poll of the session’s attendees showed a wide variance in their delivery iteration lengths. The least frequent was once a year and the most frequent was once every six hours. Both were extreme, but it made me realize that if you can deliver your working software every six hours, you could certainly do it once a sprint, once a week, or even daily if needed.

Jez’s talk started me thinking more deeply about how continuous integration represents only the first step in the overall process. What we needed was to apply some of the same principles from CI into how we package and deploy our software as well. What we needed was a way to go from requirements to delivery as continuously and quickly as possible. What we needed was more automation and more repeatability.

Martin Fowler also attended Jez’s session and added some commentary. One of the tenets he spoke about was bringing pain points forward in your process and doing them often, the idea being that this forces you to address them and smooth them out. For us, that significant pain was in our deployments. So I began at the back end of the process (deployments) and worked my way backward into the development build/test/stage process. This allowed me to tackle our biggest pain first, look for ways to improve it, and from that learn what the earlier parts of our process needed to produce in order to make deployments easier.

We are faced with a number of complications that make our transition difficult. What fun would there be if it were simple? In particular, we have six development teams (not counting the database development group) and two sets of development tools/environments (Visual Studio 2008/2010 and TFS 2008/2010). Up until this point, the database developer group did not use version control and had their own set of tools, so you could really say we had three sets.

Each team has a slightly different methodology – only one uses CI – though the others have some similarities in their TFS build project structure. These six teams are building numerous applications on differing sprint schedules with varying degrees of cross-team integration. They are also responsible for maintenance of their applications, which the business expects to be delivered intra-sprint, particularly for the more critical bug fixes.

As each team completes its various updates, including testing, it funnels all of its deployment requests to a single team (of two people right now) that reviews, accepts, and performs the request to push the update into the production environment. Today, those steps look very different from team to team and from request to request. To make it worse, a significant portion of the work is manual – from file copies to SQL scripts executed one by one. As you can imagine, this team is on the verge of burnout.

That pretty well sums up the current state of affairs. In my next post, I’ll describe the automation pieces that have been built and are now beginning to be put in place by the teams to ease the deployment burden.