Friday, November 18, 2011

Duplicate MEF Exports When Export Has Metadata

I just discovered that the Managed Extensibility Framework will produce duplicate exports for parts that are exported with custom metadata. My scenario is pretty simple. I have created an Entity Framework DbContext subclass that is marked up with both [Export] and [DbContextMetadata] attributes. DbContextMetadataAttribute is my own custom metadata attribute.

   [Export(typeof(IDbContext))]
   [DbContextMetadata(ContextType = "Person")]
   public class PersonContext : DbContext, IDbContext
   {

The result is shown here:


[Screenshot: duplicate exports returned for PersonContext]


I verified this by simply commenting out the custom metadata attribute, after which only a single export is produced. Interestingly, this behavior does not occur when using the built-in MEF ExportMetadataAttribute. I’m planning to dig into this a little more to see why it’s happening, but it certainly was unexpected.
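
The custom attribute itself isn’t shown in the post. For context, a MEF metadata attribute and a metadata-aware import typically look something like the sketch below; the attribute body, the IDbContextMetadata view, and the consumer class are assumptions for illustration, not code from the actual project.

   using System;
   using System.Collections.Generic;
   using System.ComponentModel.Composition;

   // Hypothetical shape of the custom metadata attribute. [MetadataAttribute]
   // tells MEF to copy the attribute's public properties into the export's
   // metadata dictionary. Note that it derives from Attribute, not ExportAttribute.
   [MetadataAttribute]
   [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
   public class DbContextMetadataAttribute : Attribute
   {
       public string ContextType { get; set; }
   }

   // Assumed metadata view with properties matching the attribute.
   public interface IDbContextMetadata
   {
       string ContextType { get; }
   }

   public class DbContextConsumer
   {
       // IDbContext is the interface from the snippet above. Each exported
       // context should appear in this collection exactly once; the duplicate
       // showed up in a query like this.
       [ImportMany]
       public IEnumerable<Lazy<IDbContext, IDbContextMetadata>> Contexts { get; set; }
   }

By contrast, [ExportMetadata("ContextType", "Person")] attaches the same name/value pair directly to the export without a custom attribute class, which makes it a handy comparison point when narrowing down where the duplicate comes from.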


 



Monday, November 14, 2011

How Much UML Modeling Is Right For Your Team Or Project?

This is a question I see asked a number of times in various forums. It’s one of those “It Depends” topics, but one that I think is worth discussing here. In this post, I’ll give you my perspective on how I’ve used UML in my projects.

Probably the most important thing you have to understand is, “Why are you using UML at all?” – assuming you are in the first place, or are considering using it in the near term. As the old saying goes, a picture is worth a thousand words. A UML diagram is simply a picture of part of a (potentially) complex system, and sometimes the best way to break down the complexity is through pictures. UML diagrams convey structure or behavior through pictures of classes and their interactions, and UML supports numerous diagram types that break the system down into various views, which can be combined to represent the system from different angles and help bring perspective and clarity.


Organized in a logical way, the UML diagrams you create help to tell the story of the system you are developing. The $64,000 question is, “Who is the story’s primary audience?”. Generally, the answer to that is the development team that is building the application. However, with the success of Agile, many teams feel that UML is unnecessary, or at best, it’s used in ad hoc ways with very lightweight and high level diagrams – maybe even temporarily on whiteboards during meetings. It’s simply viewed as a means to convey a concept or outline. I certainly agree with the spirit of that. UML can help to speed up the team because it helps to bring understanding and consensus through a shared view of what they’re building. How lightweight or detailed your diagrams are, or even if they are persisted, should be a decision you make based on the team’s appetite and ultimately the overall value add to the team and project. Flexibility is the key.

And no matter what the long-term goals are, the short-term benefit of increased velocity is probably the most valuable takeaway you will realize from using UML.

In order to gauge the right amount of diagramming to provide to the team, I have used sprint retrospectives to reflect on this with the team and arrive at the right answer. The teams that I’ve worked on have all been comprised of a mixture of experience levels. Typically, the less experienced or newer team members gain the most benefit from the diagrams. The more experienced team members still benefit from them, though, as they almost always help to disambiguate the design details. The net effect is that it helps to level the playing field across the team and improves communication and ultimately productivity. I have almost always chosen to persist the diagrams in a modeling tool – even if for no one else but myself for future reference.

In a highly Agile team where practices like TDD are used, UML may well be perceived as an impediment, and you can make a well-supported argument that this viewpoint is right. The design should emerge from the creation and iterative refactoring of code and unit tests to flesh out the details, and up-front design using UML is the opposite of that approach – unless you consider it from a general, high-level architectural point of view. The UML diagrams you use can describe many higher-level aspects of the system: patterns, guidelines for organizing system components, the layers of abstraction, deployment details, and other high-level concerns. Let TDD do what it’s best at – creating flexible and resilient designs at a low level of detail.

Most development teams have a certain amount of turnover throughout their lifetime. The reasons for this are myriad, but the bottom line is that you will need to bring new team members up to speed at various points in time throughout the life of your project. If the system that they are working on is reasonably complex, the UML diagrams that you created to convey your design concepts to the original team members can be an excellent way to help the new team members understand what they are working on. I am a firm believer that every team member should get the big picture. They shouldn’t be relegated to some dark corner of the application with little or no visibility into how it all fits together. Use of tools like UML helps to bring everyone up to speed, which is a good thing.

A common argument that I have heard against using UML is that, like most documentation, it will always be stale compared to the code. I would say that this is generally true. There are many ways to deal with this, but probably the best way is to accept that fact and remember that the main use of the tool is to help people understand the system. If the diagrams drift too far out of alignment with reality, update them. Most tools support reverse engineering; use it to refresh the details and update the diagrams. Your future team members will thank you. Just don’t get bogged down in the minutiae of syncing code with models all the time. That’s one of the quickest killers of usefulness.

In the end, I will also land on an “It Depends” answer to the original question. Since no two teams and no two projects are exactly alike, you will have to gauge the right answer based on your current circumstances. For me, I’ve always relied on UML as a means of reducing complexity and organizing thought around the structure and design of complex systems. Let your team help you decide the right answer.

Happy modeling!


Thursday, November 10, 2011

Fowler Is Right On the Mark About Premature Ramp-Up

Martin Fowler’s most recent post on “PrematureRampUp” (http://martinfowler.com/bliki/PrematureRampUp.html) is right on the mark. Our team has recently concluded work on two legacy projects, which had been running in parallel for some time. A strategic greenfield project for a custom analytics platform has been slowly gaining momentum. Rather than slam the custom platform project with all the devs from the previous two projects, we have started to migrate them over in pairs. This has proven to be very beneficial in that it has allowed us to control the team members’ learning curves and ensure that they are properly acclimated to the new project environment. Bringing them over this way eliminated a great deal of hectic stress, and it also lets the newly migrated members assist with the knowledge sharing. Overall, this is a great way to grow a team to its optimal size.

Wednesday, November 2, 2011

Productivity Tip–Add Your FxCop Project to the Solution


I’m very interested in streamlining the workflow that our team uses when developing. That means I need to make the process get out of our way as much as possible. Sometimes, small changes can make a big difference in your overall productivity.

To some, code analysis is a waste of time, but our team sees it as a vital step in ensuring high quality code in the application. Obviously, I don’t want to forego code analysis, but I would like it to be as unobtrusive as possible.

As Martin Fowler says, “if it hurts, do it more often”. Committing changes that break the continuous build over something simple like code analysis violations is a waste of valuable cycles, so we run the analysis earlier and more often. We’ve simply added the .fxcop file to our solution and added all of the solution’s project outputs as targets in the FxCop project. Now, before committing our changes, we run the analysis and eliminate the violations before they break the build.


This keeps our CruiseControl projects green longer and reduces the churn of having to rebuild to fix something we shouldn’t have committed in the first place.

I suggest that in your team retrospectives you reflect on your daily practices and look for small, quick wins like this one. You’d be amazed at how much time you can reclaim throughout your day by eliminating these little friction points.


Friday, September 2, 2011

What’s It Like Working For A Startup?

OK, well the company I’m at technically isn’t a startup. It’s been around for over 10 years. But its focus has changed from being primarily a research organization to a commercial one that is productizing its successful research initiatives. For all intents and purposes it might as well be considered a startup.

Pros

Probably the greatest benefit to working in a company like this is that almost everything we do is greenfield – and I don’t mean just software development, I mean business process development too. We get the unique opportunity to form the way the business operates from the ground up. Find something that doesn’t work the way you like? Change it. Find something that works really well? Use it. Nothing is off the table.

Even though we aren’t a software development company per se, it does form a significant portion of what we are. And to that end we have the responsibility to deliver high-quality systems. In order to deliver a robust solution to the client we have to employ sound architectural and software engineering practices. And the beauty of it is that we can look to the successes of other companies that have used Agile and Lean methodologies to their advantage and incorporate the same principles here. No having to strip away layers of process in order to make a streamlined software development lifecycle.

Naturally, we’re a small company. That means that in order to get things done people are empowered. Not just empowered to do their day-to-day things, but to actually think and reflect on the way we do things and come together as a team to make real change. It takes the concept of team ownership of code to a whole new level – to the company level – where you are truly an owner in the products and the way they are delivered.

Small companies such as this also bring one very important benefit to the table – that you’re not just a number in a faceless corporate machine. People know you and you know them. It takes the benefits of an Agile team to the next level and brings a sense of camaraderie to the game that makes everyone feel that they have others that they can depend on to get the job done.

Cons

Not surprisingly, as a small company in this phase of its life, budgets are tight. Everything from server hardware costs, to developer tool costs, to travel budgets for conferences is looked at from a necessity perspective. Unlike some companies with deep pockets and seemingly bottomless pits of funds for infrastructure, we have to be vigilant and ruthlessly efficient in our expenditures.

Probably not unique to startups, but common to small companies, is the need to wear many hats. As the software architect, I often field common support needs on the network or server platforms. I participate in presales calls and lend guidance to the business development team to help bridge the gap between techies and non-techies.

Pressure to build up a sustainable and recurring revenue stream is high. A small company operating in a niche market in a tumultuous economy has a lot to worry about. The focus of each and every team member has to be like a laser to deliver a high-quality solution that the client simply can’t afford to live without. Frankly, that should be the focus anyway, so making that a negative probably isn’t fair. The point is that letting your focus slip a little can have dire effects on a small company in a fledgling state.

In Summary

Although there are a number of pros and cons to working in this sort of company, the pros far outweigh the cons. So many of the day-to-day impediments that hamstring teams and projects can be eliminated through tight teamwork and the empowerment that comes from knowing that you can make important decisions to improve things.

Funny thing is that you don’t necessarily have to work in a small start-up to have your cake and eat it too. Larger, well-established companies can achieve the same results by empowering their employees and fostering a culture of entrepreneurial spirit. The main difference is the depth of the sludge you have to dig through and how far you can go to clean it up.

Good luck!


Tuesday, August 9, 2011

Visual Studio 2010 Error–The Project File Could Not Be Loaded. Root Element is Missing.

I happened upon this gem of an error today. It is totally a red herring. I was expecting to find some kind of corruption in the XML like a missing <Project> tag or something similar; however, when I compared the .csproj file between revisions in Subversion, I found no substantial differences, only some additional files added to the project. So what gives?

Turns out that this error is caused by a problem with the .user file associated with the project. Delete the .user file and, voilà, it works. I can’t take credit for figuring it out; thanks go to the BizTalk Tips and Tricks blog for the fix.

Monday, August 8, 2011

Using Architecture to Improve Team Velocity

I’ve been following a number of discussions lately in more than one LinkedIn group that I’m a member of, all focused on an important concern – namely, the role of architecture and design in an Agile project. Many people that I’ve spoken with over the last few years are unclear how much or how little time should be spent on this activity, its relative importance to a team, and ultimately the benefit that it brings to a project. The participants in these recent discussions are asking the same questions. In this post, I will share my perspective on architecture’s role in an Agile team and how it has benefitted us on our current project. First, I’ll focus on three important things that can impact the answer.

Team Composition Impact on Architecture

A recurring theme amongst many of the discussion participants was team composition. More than once it was mentioned that a team of seasoned software developers could forego up front architectural work and “just start coding”. The architecture would then evolve sprint by sprint. Interestingly, no one discussed the scenario of a team composed of a significant percentage of more junior members.

Project Requirements’ Impact on Architecture

What I found most interesting in these discussions was the absence of emphasis on project requirements and the bearing that they may have on the need for architecture. Perhaps it was an oversight or perhaps it’s because there is little or no difference in need based on project requirements. I’m surprised because in my opinion the more complex a system is, the higher the need for a guiding architecture.

Delivery Timeline’s Impact on Architecture

Without question, nearly every team feels an intense level of pressure to deliver more in a shorter period of time with fewer bugs. It’s the universal facet of what we do – bringing business value to a company in the shortest time and in the most cost effective way. A number of participants in these discussions felt that the demand to drive out a product as quickly as possible made the concept of architecture impossible to consider. This feeling was often supported by arguments that the time spent on architecture would negatively impact the ability of the team to focus on the creation of functioning production code.

What’s the Right Answer?

One thing that I’ve found to be universally true is that no two teams and no two projects are exactly alike. And probably the greatest truth I’ve found in software development is that there is no silver bullet that will solve all of your challenges. What works extremely well on one team or for one project may be completely useless in another setting.

With that in mind, however, in my experience there are some recurring themes and enough similarities that I can say with some confidence that architectural activities in a project have proven to be more beneficial than harmful. Perhaps not to the same degree, but certainly the exercise is not a fruitless endeavor. So what about from the perspectives of the three concerns above?

While team composition can certainly affect how a team perceives the degree of need for architecture, my experience has been that even the most seasoned team can benefit from defining architectural guidance for their project. I like to think of it in terms of an orchestra: even the most talented musicians need a score to play from; it’s simply a way for them to stay in tune with one another. While the act of defining architecture is done by the role of “architect”, the person fulfilling that role may alternate within a team of pros. The net result is a team that operates at a higher level of coordination, yielding a higher overall velocity.

The need for architecture on a team with junior members is even more pronounced. Without a clear picture of what these team members need to do, the project can and will degrade into a mire of spaghetti and unmaintainable code. Architectural guidance gives these junior team members development goals and a clearer picture of what they are trying to achieve. It makes clearer the constraints that they need to operate within. This has been invaluable as a communication tool for our team, which has a number of junior team members. It helps to frame conversations and eliminates ambiguity. Rework is down substantially since going with a process that involves an architect role.

The solutions that my company develops are not your average, run-of-the-mill line-of-business applications. They are sophisticated custom solutions that bring predictive analytical models to different vertical markets. They challenge us from many angles, including data acquisition, cleansing, modeling, and delivery. Scalability, security, and flexibility all contribute additional complexity. As a result, delivery of these kinds of products requires management of the technical details, and the best way to do that is through architecture and design. However, don’t be fooled into thinking that only sophisticated projects require this level of attention. Even basic LOB applications benefit from architecture and design in order to reduce duplication, improve reuse, and generally improve maintainability.

Our company is no stranger to tight delivery timelines. Our clients need results quickly to improve their bottom line and our solutions are intended to help with that. In the end, this becomes a value proposition where architecture and design is pitted against time to market. A product that doesn’t meet its objectives because it can’t perform well, can’t interoperate well, or is overly expensive to maintain isn’t a solution – it’s an impediment. Many projects forego the effort of defining architecture and design and instead jump right to the implementation phase. While it can get you over the hump in a hurry to deliver a product, the long-term costs of this can be quite extreme. How many projects have had to be rewritten simply because of decisions like this?

Wrapping Up

This has been a long post and if you made it this far, thanks! In conclusion, I argue in favor of applying sound application architecture and design to the projects I work on because I see their benefit. That benefit comes to teams of all types, projects of varying degrees of complexity, and timelines that are either reasonable or tightly compressed. My advice to you? Take the time to define the architecture of your solution, apply sound design principles to the product, and reap the rewards. Those rewards will be overall improved team velocity and a lower defect and rework rate. These will also have the effect of improved morale on the team because they’ll come away with an improved sense of accomplishment and pride in their workmanship.

Thursday, July 28, 2011

Entity Framework 4.1 Primary Key Convention

I’m very pleased with the move to EF 4.1. Using a code first approach to implement my Repository pattern has reduced complexity and given me additional control over my code base and that makes me very happy. I want to point out one area that is a potential sticking point when implementing your entities.

In our system we have a number of entities whose tables have primary keys that are not identities. These entities have their IDs set via initial load scripts at the time the database is deployed. As a result, the ID fields are simply INTs. The repository for these entities supports adding new entities and requires the caller to set the IDs on their entities prior to calling save.

The point of contention comes from the fact that Entity Framework, by default, considers properties marked as [Key] to be identities. You must supply an additional data annotation attribute to override this behavior. The attribute you must apply to your key field is:

[DatabaseGenerated(DatabaseGeneratedOption.None)]

Forgetting to apply this attribute will cause Entity Framework to generate INSERT statements that expect the key to be auto-generated by an identity column. The result is an error when you attempt to insert new data, telling you that your ID field cannot be NULL.
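
A minimal sketch of the pattern (the entity and property names here are placeholders, not types from the project):

   using System.ComponentModel.DataAnnotations;

   public class LookupItem
   {
       // The key is a plain INT with no IDENTITY on the table, so tell EF
       // not to expect a store-generated value. In EF 4.1, both attributes
       // live in the System.ComponentModel.DataAnnotations namespace.
       [Key]
       [DatabaseGenerated(DatabaseGeneratedOption.None)]
       public int Id { get; set; }

       public string Name { get; set; }
   }

With the attribute in place, callers set Id themselves before calling save, which matches the repository contract described above.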

Using data annotations on your entities allows you to customize numerous behaviors that Entity Framework provides, as well as behaviors of other consumers of your entities, including data binding for your UI.

Wednesday, June 22, 2011

Troubleshooting NAntContrib FxCop Task Error

I’m working on setting up my continuous integration process for a new application. I’m nearing the end of that work, but one outstanding step needs to be done – getting FxCop working against the code base to ensure good code is being committed.

At first, I set up my NAnt script to simply ‘exec’ the fxcopcmd.exe application. While that worked and I did get my results output to the file system, the NAnt script continued on happily even when I had messages that needed to be addressed. I found a few blogs that suggested that I could parse the return code from fxcopcmd.exe, but it always seemed to return 0 – even when there were messages.

That’s when I decided to use the NAntContrib FxCop task in my script. It’s easy enough to integrate. Simply add a <loadtasks> element that points to the nant.contrib.tasks.dll and then add the <fxcop> task to your script. When I did this, I was stymied by a new error:

Error creating FileSet.
    Request for the permission of type 'System.Security.Permissions.EnvironmentPermission, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

Turns out that this issue is caused by a rather simple oversight. When you download the .zip file that contains the NAntContrib DLLs, you have to be sure to unblock them; otherwise the O/S refuses to load them, and hence you get the error above. To make matters worse, when I first extracted the files, I hadn’t unblocked the zip, so of course all of the contained files were blocked. When I unblocked the zip and re-extracted over the top of the existing files, it didn’t automatically unblock them – I had to go through them one by one and unblock each. Once that nuisance was out of the way, the process began working.

Now on to other, more pressing matters.

 

[Edit]

After I posted this, I realized that while FxCopCmd was running, none of the results were being saved to the output file and consequently the build wasn’t failing. In order for FxCop to save the results, make sure your .fxcop project has these settings:

<SaveMessages>
   <Project Status="None" NewOnly="False" />
   <Report Status="Active" NewOnly="False" />
</SaveMessages>

The important one is the <Report Status="Active" …> element.

Thursday, June 2, 2011

Reflecting on a Successful Software Release

What goes into a successful software release? To really answer that, I’d have to fall back on a rather abstract answer – “It depends.” In this blog post, I’ll give you a quick look at what went into making our most recent release a success.

First a little background …

The project that was just delivered was several years in the making. From concept to delivery took a LONG time. The reasons behind this were numerous; some were technical, but the majority were business-related.

The team members that worked on the project in the early days were not the same members that ended up delivering the solution. A couple of hangers-on were there for the whole duration, but largely it was a new team. Even the product owner from the client side was a new person. The team was also a geographically-dispersed multi-national team with language and time zone challenges.

The project’s development methodology changed over time. The process that persisted the longest and the one that was in place at the time of its delivery was a Scrum-hybrid that was chiefly centered on daily stand-ups. It lacked some of the important best practices that successful Scrum practitioners rely on. For example, it wasn’t until the last few weeks that the team had a well-defined and prioritized backlog.

Key Drivers for Success? I’ll give you the top 3 reasons – at least from my point of view, which was as a latecomer to the project.

The background of the project paints a pretty challenging picture for any team that finds itself facing the pressure to deliver a product. And this is certainly true for the team that delivered our product. So what made it possible to pull this off?

#1 Reason – Commitment

The team that delivered the project to the satisfaction of the client was determined to make it happen. Everyone involved knew the stakes and knew that success depended on each of their individual contributions to pull it off. I’ve been involved with projects in the past, and almost without exception, those that failed to deliver, or at least to deliver on time, failed as a direct result of a lack of personal commitment from the entire team.

#2 Reason – A Spirited Team Leader

I’m convinced that what brought the team’s commitment into focus and pulled things together at the end was the efforts of a new Scrum master. The team’s newest business analyst was a success catalyst. Her efforts to wrestle the last set of features and defects into a manageable backlog and focus the team’s efforts onto the right areas was a critical piece of the puzzle that helped the team cross the finish line.

#3 Reason – A Little Bit of Luck

Let’s face it, the project background that I gave at the beginning set the stage for another disastrous failure. However, as luck would have it, a number of events came about near the end that made success possible. For example, a new product owner became more engaged near the end and helped to focus the attention on must-haves. The team gained some new members that had experience with delivering results. They pulled together with the original team members and raised the bar for everyone. And lastly, the work of the team was largely usable and had relatively few “big ticket” issues. Had there been some time bombs that were unrealized by the new team, it could have been a very different result.

What was missing that would have made things better?

My top two things on this list are pretty solid in my mind: First – a well-groomed product backlog with priorities set by the product owner. Having that view gives the team a picture of what defines success; anything short of that and it’s like shooting at a moving target. Second – team continuity. Churn on a team can be a killer. When people leave, they take with them something whose value can’t be overstated – knowledge. If that person is a pivotal member of the team, it can spell doom. Sometimes that knowledge can be rebuilt – other times not. Reduce your churn to improve your team’s velocity.

So what now? Well, our team held a retrospective where a lot of great feedback was gathered and plans were made to put improvements in place. We’ll be relying on Scrum best practices to help guide our new process, which will hopefully reduce or eliminate some of these challenges earlier in the cycle.

Good luck on your next project!

Friday, April 29, 2011

Importing DBF Files Into SQL Server 2008

I’m currently working on a new solution to allow geocoding and data augmentation with demographic data. The demographic data set that we are working with is provided to us as several gigabytes of data separated into logical groupings of .DBF files each with 200+ columns of data and an accompanying data dictionary document in Word format.

I needed to upload all of this data into a new SQL Server 2008 R2 database so that we can do some aggregation and other processing against it. Unfortunately, SQL Server 2008 R2 does not support a direct import of the data found in the .DBF files. One option is to convert the .DBF files into Excel files, and Excel handles that conversion easily. Unfortunately, in my case, the tables are simply too wide for the Microsoft Office Data Access Engine to handle properly. When I attempt to import the resulting Excel files into the database, I am greeted with a crash error dialog after a lengthy wait.

The second option, which is the one that I had to go with, is to use Microsoft Access to do the conversion. Fortunately, Access will open the .DBF files with no problem. I chose to use the import approach as opposed to the linked tables approach. Due to the size of the data I was dealing with, it was actually faster to load the respective tables into Access, and then use the SQL Server upsizing wizard in Access to push the data into a local SQLExpress instance. Once that had completed, I detached the database, copied the .MDF and .LDF files to the server, and reattached them.

The only glitch I’ve had with this process is that for some reason Access will fail to copy data into the SQL Server tables. It’s erratic and I’ve not determined why it does this. There are no errors produced during the export process. It creates the table structure with no problem, but occasionally it will fail to copy the data – even though it runs for several minutes. That’s a battle for another time.

Now I’m off to resume my work on the geocoding functionality that will ultimately pull in this data.

Tuesday, April 26, 2011

Quick Tip on SQL Server 2008 Database Projects

I just imported the scripts from an existing SQL Server 2008 database. All went well, except that some of the stored procedures made reference to objects from system databases, which the newly imported scripts couldn’t resolve.

Database projects support making references to other databases in a few different ways. One way is via .dbschema files that you can generate from your projects. This is a loosely coupled approach that improves reuse across your projects.

In the case of the master or msdb databases, Microsoft ships .dbschema files for exactly this purpose. Simply add a database reference to your project and browse to the directory where Microsoft installs these files: [Program Files]\Microsoft Visual Studio 10.0\VSTSDB\Extensions\SqlServer\Version\DBSchemas, where Version is the version of SQL Server that you are using (such as 2005 or 2008).

For more info on using database references, here’s the link to MSDN:

http://msdn.microsoft.com/en-us/library/bb386242.aspx

Wednesday, April 20, 2011

New Digs

It’s been a while since my last post. The last couple of months have been a whirlwind of activity. Since I last wrote about Continuous Delivery, I’ve switched jobs. The contrasts between the two are innumerable. I went from big corporate to small and personal, and I found a place where the project work is genuinely interesting and challenging and, above all, the people are great to work with.

The beauty of it all is that I get to apply all of the Agile goodness I’ve come to love about my job. In fact, I find all the work on Continuous Delivery to be as important now as ever. It’s not every day that you find a team that’s beginning greenfield development, that loves the Agile approach (even if they aren’t terribly experienced with it), and that is genuinely concerned about building good software, dependable software, and the pursuit of quality. I have to admit that I was beginning to become skeptical that you could find a team like that – like I was chasing the white whale! But find it I did!

So, now that I’m settled in, I’ll be posting a lot more about the day-to-day challenges of working in a fast-paced, dynamic, and demanding team environment. I’m looking forward to the new challenge and I’m really excited to see the transformation of the new team.

Stay tuned!


Thursday, February 24, 2011

On the Path to Continuous Delivery–Part 2

In my last post on this topic, I gave a bit of background on our work in progress to develop a continuous delivery practice. In this post, I’m going to describe the first part of that effort, which focused on the automated deployment of the compiled and tested software.

You might be wondering why we started at the back end of the process rather than getting continuous integration working and addressing issues at the front end first. There were two primary reasons for this. One, deployments of our software to our production systems were the most manual, error-prone, and problematic part of our process. Two, to a certain extent, and with the exception of the database components, we already had build processes in place that did a decent job of automating the compilation, packaging, and signing of our software (even if it wasn’t done on a continuous basis). It seemed that our biggest immediate benefit would come from automating deployments. It was also felt that once that piece was in place, we could optimize the front end of our process and feed its output into our new deployment process, thereby closing the loop and providing a full end-to-end solution.

When we put down our requirements for automated deployments we came up with this short list of must-haves:

  1. The process must support all projects irrespective of their current build environments (TFS 2008 or TFS 2010).
  2. A deployment must define a set of features that will be deployed together as a unit. In our terms, a feature represents a piece of the application stack such as the client UI, the web services middle tier, or the SQL database scripts. As a stack, all of the features defined in the deployment must install successfully, or none of them will be deployed. We treat this as a logical installation transaction.
  3. The deployments must be re-runnable. Our ultimate goal was to make this step of the delivery process integrate with automated processes in development and testing, so we wanted the deployments to be able to be run over and over again. We also felt this would allow us to iron out issues with the deployment prior to production thereby ensuring a higher degree of reliability.
  4. The deployment process must be a click-button automated process that requires no human intervention to fully execute.
  5. The process must support a rollback capability in order to allow our platform administrators to back out a deployment and restore a previous version in case of a problem.
  6. The packaging of application components and the automation around their deployment must provide a standardized structure that supports not only current feature types, but also new, as-yet-unimagined ones. For example, we knew we wanted to support deployments of SSRS reports, Informatica workflows, and other types of components.

My approach to satisfying these requirements was to develop a standardized directory structure into which application features would be dropped as well as a set of master scripts that had knowledge of the drop zone structure and the types of features that we wanted to deploy.

The drop zone directory structure is organized around three basic concepts: the Scrum team delivering the product; the deployment feature set; and the lifecycle phase of the deployment; i.e., DEV, TEST, or PROD. This allows each team to work independently and build up deployments for each of their products separately. Further, by supporting the lifecycle phase, it allows deployments to be rerun in each phase and tested prior to the final push to production.

The master script itself is little more than a wrapper around a set of feature-specific scripts that are responsible for the push of individual features. The master script is responsible for the basic flow of logic for the overall deployment orchestrating the callouts to each feature-specific script as well as logging and emailing results. I wrote all of these scripts in MSBuild, but that is really not important. I could have also done it in NAnt or PowerShell and achieved the same results. In my case, MSBuild seemed to be the best choice because others were familiar with it and we could ultimately integrate with TFS very cleanly.

Each of the feature-specific scripts follow a basic pattern: Initialization, Backup, Deployment, Finalization, and optionally Rollback. The specifics of how each are implemented vary of course, but the basic pattern of logic allows the master script to orchestrate the features in a consistent way. Finally, in order to allow the features themselves to be defined externally from the scripts, I separated their definitions into a separate MSBuild project file that the master script imports. This allows the logic to simply enumerate the features and push them out one by one while allowing a separate process to define the list. This is important because it will allow our build processes in the future to populate the feature list.

The following shows the basic structure of the MSBuild proj files:

[Figure: basic structure of the MSBuild .proj files]
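
The original figure isn’t available here, but as a rough, hypothetical reconstruction of the split described above (the file, item, target, and property names are assumptions, not the actual scripts), the master script and the imported feature list might look something like this:

   <!-- Features.proj: the externally maintained feature list that the master script imports. -->
   <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
     <ItemGroup>
       <Feature Include="ClientUI">
         <Script>Features\ClientUI.deploy.proj</Script>
       </Feature>
       <Feature Include="WebServices">
         <Script>Features\WebServices.deploy.proj</Script>
       </Feature>
     </ItemGroup>
   </Project>

   <!-- Master.proj: orchestrates the feature-specific scripts in a fixed order. -->
   <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Deploy">
     <Import Project="Features.proj" />
     <Target Name="Deploy">
       <!-- Runs once per Feature item. Each feature script implements the same
            targets: Initialize, Backup, Deploy, Finalize (and optionally Rollback). -->
       <MSBuild Projects="%(Feature.Script)"
                Targets="Initialize;Backup;Deploy;Finalize"
                Properties="Phase=$(Phase);Team=$(Team)" />
     </Target>
   </Project>

The point of the split is simply that the build process can regenerate the feature list without touching the orchestration logic.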

And here is a simple drop zone structure example:

[Figure: example drop zone directory structure]

Within each feature group, you would expect to see 1..n feature directories; e.g., one for the ClickOnce client, one or more web services, and one or more database folders.

I won’t go into the scripting specifics in this post, though I might decide to do a separate post to describe some lessons learned in MSBuild while developing these. Suffice it to say that the most important aspect of this initiative was to be ruthless in looking for ways to automate each and every piece of the application deployment.

In the next post in this series, I’ll describe the steps taken to bring our database developer group into the process and how we worked out a way whereby their development scripts were integrated into our deployments. As a part of that post, I’ll take you through some of the complications that have prevented this from happening sooner and how we overcame them.

Thursday, February 10, 2011

On the Path to Continuous Delivery–Part 1

In this post, I’m going to begin describing the way that I’ve been addressing our pain points associated with delivery of our software. This will be the first of a number of posts that delve into the specific ways that we are altering our methodology to be able to build and deploy our software faster and better each iteration.

First, a bit of background…

When you look at the process that was in place when I joined the company, it was somewhat disjointed. Some parts of it were working pretty well, while other parts were a complete mess (like the deployment process). That’s probably not all that different from the situation at many companies (assuming they have a process at all). But given that our company uses Scrum and considers itself Agile, it seemed the process wasn’t allowing us to realize the full benefits of our methodology.

The phrase “Continuous Delivery” was new to me until I attended Agile2010 in Orlando last year; however, “continuous integration” was old news. I’d been using CI for quite some time and found it to be an invaluable part of the software development process. I attended Jez Humble’s session on Continuous Delivery. This practice focuses on making your entire product continuously ready for deployment at any time, all the time. That means all parts of your application – application code, database schema, and any data needed by the app to function. A quick poll of the session’s attendees showed a wide variance in their delivery iteration lengths. The least frequent was once a year and the most frequent was once every six hours. Both were extreme, but it made me realize that if you can deliver your working software every six hours, you could certainly do it once a sprint, once a week, or even daily if needed.

Jez’s talk started me thinking more deeply about how continuous integration represents only the first step in the overall process. What we needed was to apply some of the same principles from CI into how we package and deploy our software as well. What we needed was a way to go from requirements to delivery as continuously and quickly as possible. What we needed was more automation and more repeatability.

Martin Fowler also attended Jez’s session and added some commentary. One of the tenets that he spoke of was bringing pain points forward in your process and doing them often, the idea being that this forces you to address them and smooth them out. For us, that significant pain was in our deployments. So I began at the back end of the process (deployments) and worked my way backward into the development build/test/stage process. This allowed me to tackle our biggest pain first, look for ways to improve it, and from that learn what the earlier parts of the process needed to produce in order to make deployments easier.

We are faced with a number of complications that make our transition difficult. What fun would there be if it was simple? In particular, we have six development teams (not counting the database development group) and two sets of development tools/environments (Visual Studio 2008/2010 and TFS 2008/2010). Up until this point, the database developer group did not use version control and had their own set of tools, so you could really say we had three sets.

Each team has a slightly different methodology, with only one using CI, though the others have some similarities in their TFS build project structure. These six teams are building numerous applications on differing sprint schedules with varying degrees of cross-team integration. They are also responsible for maintenance of their applications, which the business expects to be delivered intra-sprint, particularly for the more critical bug fixes.

As each team completes their various updates, including testing, they funnel all of their deployment requests to a single team (of two people right now) that reviews, accepts, and performs the requests to push the updates into the production environment. Today, those steps look very different from team to team and from request to request. And to make it worse, a significant portion of the work is manual – from file copies to individual SQL script executions run one by one. This team is on the verge of burnout, as you can imagine.

That pretty well sums up the current state of affairs. In my next post, I’ll describe the automation pieces that have been built and are now beginning to be put in place by the teams to ease the deployment burden.

Tuesday, January 25, 2011

Unable to Add Entity Model to Silverlight 4 Business Application

I decided to try out the Silverlight 4 Business Application template in VS2010. I just wanted to see what the template produced out of the box. When I created the “BusinessApplication1” project it created both the Silverlight and host web projects for me which I expected.

Next, I wanted to add an entity model to the web project to support a new domain service. However, when I went to add the new item to the project, I received the following error: “The project’s target framework does not contain Entity Framework runtime assemblies.” That’s interesting, since out of the box the template targeted the .NET 4 framework.

I found that the only way to clear up this error was to do the following:

  1. Change the target framework for the web project to .NET 3.5 and rebuild. The compile fails because the new code requires features from .NET 4.0.
  2. Change the target framework back to .NET 4.0 and rebuild. The compile now succeeds.
  3. Now I can add the entity model to the project.

Looks like there’s a small problem in the business application template in terms of the web application project file.

Monday, January 17, 2011

CAB Event Publishers and Subscribers

CAB Event publishers and subscribers allow your application to be designed in a very modular and decoupled way. That’s a good thing, but it can bite you if you’re unprepared. In this post, I want to describe a situation that recently snagged me while working on a CAB-based application project.

One of the advantages of using CAB, or other composite application frameworks for that matter, is the fact that your code becomes much more loosely coupled. It helps to isolate your classes, allowing you to unit test them more effectively. It makes it possible to organize the development of your application functionality into discrete units. In order to support this modularization, CAB brings a number of important features to the table that allow your loosely coupled code to share information.

While this modularization and loose coupling is a big benefit in the big scheme of things, it also puts a burden on you to design your application modules with certain things in mind. In particular, each module will not have direct knowledge of other modules loaded at runtime. In our case, modules are organized into their own projects and do not have any references to one another. At most, they will share some references to common projects that provide some base functionality. If your modules are interested in sending or receiving information from each other, there are a number of possible ways to approach this. The most common way and the way that leverages the publish/subscribe pattern is the use of CAB events.

Using CAB events is very straightforward. In your publisher, simply declare the event that your class will raise. You add the CAB EventPublicationAttribute to the event. The constructor for this attribute takes two parameters: a topic string and an enum for the publication scope.

   [EventPublication(ConstantsEvent.CurrentLeadSummaryChanged, PublicationScope.Global)]
   public event EventHandler<LeadSummaryEventArgs> CurrentLeadSummaryChanged;

We define a set of string constants for our event topics. In the case above, the publication scope is defined as Global so that CAB notifies everyone that the selection of a lead has changed.



The subscribers to the event simply declare their event handler and apply the EventSubscriptionAttribute. This attribute has a couple of different constructors. One simply takes the string topic ID and the other takes the topic ID and a ThreadOption enum, which allows you to control marshaling of the event data. This is useful when your publisher raises their events from a different thread and you need to marshal it to the UI thread for instance.





   [EventSubscription(ConstantsEvent.CurrentLeadSummaryChanged)]
   public void CurrentLeadChanged(object sender, LeadSummaryEventArgs e)
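
For example, a subscriber that needs its handler marshaled back to the UI thread can use the second constructor (assuming CAB’s standard ThreadOption.UserInterface value):

   [EventSubscription(ConstantsEvent.CurrentLeadSummaryChanged, ThreadOption.UserInterface)]
   public void CurrentLeadChanged(object sender, LeadSummaryEventArgs e)
   {
       // Safe to touch UI controls here; CAB marshals the call onto the UI thread.
   }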




At development time, you must take the initiative to make sure that the signatures of the two (publisher and subscriber) match. Your code will happily compile even if the two have differing EventArgs types. If you fail to make them match, you will be presented with an ArgumentException at runtime indicating that one cannot be converted to the other.



Admittedly, the fix for the situation that I just ran into is pretty straightforward, but I think it bears pointing out that your development methodology should take this into consideration. If you miss one, probably the easiest way to locate all of the places where the event is used is to simply search the solution by the topic ID. This will allow you to verify that the signatures match.



Longer term, add integration tests that validate that publishers and subscribers work together. This will help prevent future mismatches and confirm that the communication between the two is working correctly.
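
One way to automate that check is a reflection-based test that pairs publications and subscriptions by topic and compares their EventArgs types. The sketch below is hypothetical: it assumes an NUnit test project, that the CAB attributes expose their topic string via a Topic property, and that ConstantsEvent lives in the assembly being scanned.

   using System;
   using System.Linq;
   using Microsoft.Practices.CompositeUI.EventBroker; // assumed CAB namespace for the attributes
   using NUnit.Framework;

   [TestFixture]
   public class EventWiringTests
   {
       [Test]
       public void PublishersAndSubscribersUseMatchingEventArgs()
       {
           var assembly = typeof(ConstantsEvent).Assembly;

           // Topic -> EventArgs type taken from the publisher's EventHandler<T>.
           var published = assembly.GetTypes()
               .SelectMany(t => t.GetEvents())
               .Select(e => new
               {
                   Attr = (EventPublicationAttribute)Attribute.GetCustomAttribute(e, typeof(EventPublicationAttribute)),
                   Args = e.EventHandlerType.GetGenericArguments().FirstOrDefault()
               })
               .Where(x => x.Attr != null)
               .ToLookup(x => x.Attr.Topic, x => x.Args);

           // Every subscriber's second parameter must match what the publisher raises.
           foreach (var method in assembly.GetTypes().SelectMany(t => t.GetMethods()))
           {
               var sub = (EventSubscriptionAttribute)Attribute.GetCustomAttribute(method, typeof(EventSubscriptionAttribute));
               if (sub == null) continue;

               foreach (var argsType in published[sub.Topic])
               {
                   Assert.AreEqual(argsType, method.GetParameters()[1].ParameterType,
                       "Subscriber " + method.Name + " does not match the publisher for topic " + sub.Topic);
               }
           }
       }
   }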



Hope this helps if you find yourself in this situation…

Thursday, January 13, 2011

Troubleshooting “Exception has been thrown by the target of an invocation”

You’ve probably run across this Exception in a number of different situations. In my case, I ran into it most recently while doing some plumbing changes on our application, which is a composite application built using CAB (Composite UI Application Block). One of the most common failures we run into when wiring up new views or controllers is when an Exception occurs within the initialization logic of one or both of these.

CAB utilizes ObjectBuilder2 to resolve dependencies. You will most often get the above exception when your object is constructed, because ObjectBuilder2 goes through Activator to instantiate the object you asked for.
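
As a minimal illustration of why the message is so generic (the types below are made up for the example, not from the application):

   using System;
   using System.Reflection;

   public interface ILeadService
   {
       string Name { get; }
   }

   public class LeadsController
   {
       private readonly int nameLength;

       public LeadsController(ILeadService service)
       {
           // A null dependency blows up inside the constructor...
           nameLength = service.Name.Length;
       }
   }

   public static class Demo
   {
       public static void Main()
       {
           try
           {
               // ...and Activator reports it as "Exception has been thrown by the
               // target of an invocation." With plain Activator the original
               // NullReferenceException is preserved in InnerException; as noted
               // below, in the CAB/ObjectBuilder2 case it came through as null,
               // which left the stack trace as the main clue.
               Activator.CreateInstance(typeof(LeadsController), new object[] { null });
           }
           catch (TargetInvocationException ex)
           {
               Console.WriteLine(ex.Message);
               Console.WriteLine(ex.InnerException);
           }
       }
   }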

The problem with this Exception is that it masks the actual problem that is occurring. In my most recent case, it was caused by a null reference that wasn’t being checked. Unit testing and checking for a null reference would have solved this problem prior to doing the wiring (obviously); however, lacking those in the code base I’m working in currently, the best option was to break on the Exception in the debugger and see what was up.

It’s somewhat annoying that the InnerException is null in this case. The stack trace did however yield some insight into the problem and helped to solve the problem.

In my case, starting at the top of the chain with the constructor of the new object helped to ferret out the problem. From there, I was able to go back and properly check my state and handle the null situation without it being a problem.

Hope this helps…