Tuesday, April 26, 2011

Quick Tip on SQL Server 2008 Database Projects

I just imported the scripts from an existing SQL Server 2008 database. All went well, except that some of the stored procedures made reference to objects from system databases, which the newly imported scripts couldn’t resolve.

Database projects support referencing other databases in a few different ways. One way is via .dbschema files that you can generate from your projects. This approach is loosely coupled and improves reuse across your projects.

In the case of the master and msdb databases, Microsoft ships .dbschema files for exactly this purpose. Simply add a database reference to your project and browse to the directory where Microsoft installs these files: [Program Files]\Microsoft Visual Studio 10.0\VSTSDB\Extensions\SqlServer\Version\DBSchemas, where Version is the version of SQL Server that you are using (such as 2005 or 2008).

For more info on using database references, here’s the link to MSDN:

http://msdn.microsoft.com/en-us/library/bb386242.aspx

Wednesday, April 20, 2011

New Diggs

It’s been a while since my last post. The last couple of months have been a whirlwind of activity. Since I last wrote about Continuous Delivery, I’ve switched jobs. The contrasts between the two are innumerable: I went from big corporate to small and personal. I found a place where the project work is genuinely interesting and challenging and, above all, the people are great to work with.

The beauty of it all is that I get to apply all of the Agile goodness I’ve come to love. In fact, I find all the work on Continuous Delivery to be as important now as ever. It’s not every day that you find a team that’s beginning greenfield development, who loves the Agile approach (even if they aren’t terribly experienced with it), and who is genuinely concerned about building good, dependable software and pursuing quality. I have to admit that I was beginning to become skeptical that you could find a team like that – like I was chasing the white whale! But find it I did!

So, now that I’m settled in, I’ll be posting a lot more about the day-to-day challenges of working in a fast-paced, dynamic, and demanding team environment. I’m looking forward to the new challenge and I’m really excited to see the transformation of the new team.

Stay tuned!

Thursday, February 24, 2011

On the Path to Continuous Delivery–Part 2

In my last post on this topic, I gave a bit of background on our work in progress to develop a continuous delivery practice. In this post, I’m going to describe the first part of that effort, which focused on the automated deployment of the compiled and tested software.

You might be wondering why we started at the back end of the process rather than getting continuous integration working and addressing issues at the front end. There were two primary reasons for this. One, deployments of our software to our production systems were the most manual, error-prone, and problematic part of our process. Two, to a certain extent (the database components excepted), we had build processes in place that did a decent job of automating the compilation, packaging, and signing of our software, even if it wasn’t done on a continuous basis. It seemed that our biggest immediate benefit would come from automating deployments. We also felt that once that piece was in place, we could optimize the front end of our process and feed its output into the new deployment process, thereby closing the loop and providing a full end-to-end solution.

When we put down our requirements for automated deployments we came up with this short list of must-haves:

  1. The process must support all projects irrespective of their current build environments (TFS 2008 or TFS 2010).
  2. A deployment must define a set of features that will be deployed together as a unit. In our terms, a feature represents a piece of the application stack such as the client UI, the web services middle tier, or the SQL database scripts. As a stack, all of the features defined in the deployment must install successfully, or none of them will be deployed. We treat this as a logical installation transaction.
  3. The deployments must be re-runnable. Our ultimate goal was to make this step of the delivery process integrate with automated processes in development and testing, so we wanted the deployments to be able to be run over and over again. We also felt this would allow us to iron out issues with the deployment prior to production thereby ensuring a higher degree of reliability.
  4. The deployment process must be a push-button, automated process that requires no human intervention to fully execute.
  5. The process must support a rollback capability in order to allow our platform administrators to back out a deployment and restore a previous version in case of a problem.
  6. The packaging of application components and the automation around their deployment must provide a standardized structure that supports not only current feature types, but also new, as-yet-unanticipated ones. For example, we knew we wanted to support deployments of SSRS reports, Informatica workflows, and other types of components.

My approach to satisfying these requirements was to develop a standardized directory structure into which application features would be dropped as well as a set of master scripts that had knowledge of the drop zone structure and the types of features that we wanted to deploy.

The drop zone directory structure is organized around three basic concepts: the Scrum team delivering the product; the deployment feature set; and the lifecycle phase of the deployment; i.e., DEV, TEST, or PROD. This allows each team to work independently and build up deployments for each of their products separately. Further, by supporting the lifecycle phase, it allows deployments to be rerun in each phase and tested prior to the final push to production.

The master script itself is little more than a wrapper around a set of feature-specific scripts that are responsible for the push of individual features. The master script is responsible for the basic flow of logic for the overall deployment orchestrating the callouts to each feature-specific script as well as logging and emailing results. I wrote all of these scripts in MSBuild, but that is really not important. I could have also done it in NAnt or PowerShell and achieved the same results. In my case, MSBuild seemed to be the best choice because others were familiar with it and we could ultimately integrate with TFS very cleanly.

Each of the feature-specific scripts follows a basic pattern: Initialization, Backup, Deployment, Finalization, and, optionally, Rollback. The specifics of how each is implemented vary, of course, but the common pattern allows the master script to orchestrate the features in a consistent way. Finally, in order to allow the features themselves to be defined externally from the scripts, I separated their definitions into a separate MSBuild project file that the master script imports. This allows the logic to simply enumerate the features and push them out one by one while allowing a separate process to define the list. This is important because it will allow our build processes in the future to populate the feature list.

The following shows the basic structure of the MSBuild proj files:

[Image: basic structure of the MSBuild .proj files]
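Since the original diagram may not survive, here is a rough sketch of the shape of those proj files. This is a reconstruction, not the actual scripts; the file, target, and item names are illustrative only:

```xml
<!-- Features.proj: maintained separately (eventually by the build process);
     defines the list of features in this deployment -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <Feature Include="ClientUI">
      <Script>ClientUI.Deploy.targets</Script>
    </Feature>
    <Feature Include="WebServices">
      <Script>WebServices.Deploy.targets</Script>
    </Feature>
    <Feature Include="Database">
      <Script>Database.Deploy.targets</Script>
    </Feature>
  </ItemGroup>
</Project>

<!-- Master.proj: imports the feature list and orchestrates the deployment,
     calling out to each feature-specific script in turn -->
<Project DefaultTargets="Deploy"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="Features.proj" />
  <Target Name="Deploy">
    <!-- Each feature script implements the same target pattern:
         Initialize, Backup, Deploy, Finalize (Rollback on failure) -->
    <MSBuild Projects="%(Feature.Script)" Targets="Deploy" />
  </Target>
</Project>
```

The %(Feature.Script) reference triggers MSBuild batching, so the MSBuild task is invoked once per feature; that is what lets the master script treat the feature list purely as data.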

And here is a simple drop zone structure example:

[Image: example drop zone structure]

Within each feature group, you would expect to see 1..n feature directories; e.g., one for the ClickOnce client, one or more web services, and one or more database folders.
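To make that concrete, a drop zone for one team's product might look something like this (the team, product, and feature names here are made up for illustration):

```
DropZone\
  TeamAlpha\                  <- Scrum team
    CustomerPortal\           <- deployment / feature set
      DEV\                    <- lifecycle phase (also TEST\ and PROD\)
        ClientUI\
          PortalClickOnce\
        WebServices\
          OrderService\
          CustomerService\
        Database\
          CustomerDbScripts\
```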

I won’t go into the scripting specifics in this post, though I might decide to do a separate post to describe some lessons learned in MSBuild while developing these. Suffice it to say that the most important aspect of this initiative was to be ruthless in looking for ways to automate each and every piece of the application deployment.

In the next post in this series, I’ll describe the steps taken to bring our database developer group into the process and how we worked out a way whereby their development scripts were integrated into our deployments. As a part of that post, I’ll take you through some of the complications that have prevented this from happening sooner and how we overcame them.

Thursday, February 10, 2011

On the Path to Continuous Delivery–Part 1

In this post, I’m going to begin describing the way that I’ve been addressing our pain points associated with delivery of our software. This will be the first of a number of posts that delve into the specific ways that we are altering our methodology to be able to build and deploy our software faster and better each iteration.

First, a bit of background…

When you look at the process that was in place when I joined the company, it was somewhat disjointed. Some parts of it were working pretty well, while other parts were a complete mess (like the deployment process). That’s probably not all that different from what the situation is at many companies (assuming they have a process at all). But considering our company uses Scrum and considers itself Agile, it seemed the process wasn’t allowing us to realize the full benefits of our methodology.

The phrase “Continuous Delivery” was new to me until I attended Agile2010 in Orlando last year; however, “continuous integration” was old news. I’d been using CI for quite some time and found it to be an invaluable part of the software development process. I attended Jez Humble’s session on Continuous Delivery. This practice focuses on making your entire product continuously ready for deployment at any time, all the time. That means all parts of your application – application code, database schema, and any data needed by the app to function. A quick poll of the session’s attendees showed a wide variance in their delivery iteration lengths. The least frequent was once a year and the most frequent was once every six hours. Both were extreme, but it made me realize that if you can deliver your working software every six hours, you could certainly do it once a sprint, once a week, or even daily if needed.

Jez’s talk started me thinking more deeply about how continuous integration represents only the first step in the overall process. What we needed was to apply some of the same principles from CI into how we package and deploy our software as well. What we needed was a way to go from requirements to delivery as continuously and quickly as possible. What we needed was more automation and more repeatability.

Martin Fowler also attended Jez’s session and added some commentary. One of the tenets he spoke of was bringing pain points forward in your process and doing them often, the idea being that this forces you to address them and smooth them out. For us, that significant pain was in our deployments. So I began at the back end of the process (deployments) and worked my way backward into the development build/test/stage process. This allowed me to tackle our biggest pain first, look for ways to improve it, and learn from that what the earlier parts of the process needed to produce in order to make deployments easier.

We are faced with a number of complications that make our transition difficult. What fun would there be if it were simple? In particular, we have six development teams (not counting the database development group) and two sets of development tools/environments (Visual Studio 2008/2010 and TFS 2008/2010). Up until this point, the database developer group did not use version control and had its own set of tools, so you could really say we had three sets.

Each team has a slightly different methodology (only one uses CI), with some similarities in their TFS build project structure. These six teams are building numerous applications on differing sprint schedules with varying degrees of cross-team integration. They are also responsible for maintenance of their applications, which the business expects to be delivered intra-sprint, particularly for the more critical bug fixes.

As each team completes its various updates, including testing, it funnels all of its deployment requests to a single team (of two people right now) that reviews, accepts, and performs each request to push the update into the production environment. Today, those steps look very different from team to team and from request to request. And to make it worse, a significant portion of the work is manual – from file copies to individual SQL scripts executed one by one. As you can imagine, this team is on the verge of burnout.

That pretty well sums up the current state of affairs. In my next post, I’ll describe the automation pieces that have been built and are now beginning to be put in place by the teams to ease the deployment burden.

Tuesday, January 25, 2011

Unable to Add Entity Model to Silverlight 4 Business Application

I decided to try out the Silverlight 4 Business Application template in VS2010. I just wanted to see what the template produced out of the box. When I created the “BusinessApplication1” project it created both the Silverlight and host web projects for me which I expected.

Next, I wanted to add an entity model to the web project to support a new domain service. However, when I went to add the new item to the project, I received the following error: “The project's target framework does not contain Entity Framework runtime assemblies.” That’s interesting, since out of the box the template targets the .NET 4 framework.

I found that the only way to clear up this error was to do the following:

  1. Change the target framework for the web project to .NET 3.5 and rebuild. The compile fails because the new code requires features from .NET 4.0.
  2. Change the target framework back to .NET 4.0 and rebuild. The compile now succeeds.
  3. Now I can add the entity model to the project.

Looks like there’s a small problem in the business application template in terms of the web application project file.

Monday, January 17, 2011

CAB Event Publishers and Subscribers

CAB Event publishers and subscribers allow your application to be designed in a very modular and decoupled way. That’s a good thing, but it can bite you if you’re unprepared. In this post, I want to describe a situation that recently snagged me while working on a CAB-based application project.

One of the advantages of using CAB, or other composite application frameworks for that matter, is that your code becomes much more loosely coupled. It helps to isolate your classes, allowing you to unit test them more easily. It makes it possible to organize the development of your application functionality into discrete units. In order to support this modularization, CAB brings a number of important features to the table that allow your loosely coupled code to share information.

While this modularization and loose coupling is a big benefit in the big scheme of things, it also puts a burden on you to design your application modules with certain things in mind. In particular, each module will not have direct knowledge of other modules loaded at runtime. In our case, modules are organized into their own projects and do not have any references to one another. At most, they will share some references to common projects that provide some base functionality. If your modules are interested in sending or receiving information from each other, there are a number of possible ways to approach this. The most common way and the way that leverages the publish/subscribe pattern is the use of CAB events.

Using CAB events is very straightforward. In your publisher, simply declare the event that your class will raise. You add the CAB EventPublicationAttribute to the event. The constructor for this attribute takes two parameters: a topic string and an enum for the publication scope.

    [EventPublication(ConstantsEvent.CurrentLeadSummaryChanged, PublicationScope.Global)]
    public event EventHandler<LeadSummaryEventArgs> CurrentLeadSummaryChanged;

We define a set of string constants for our event topics. In the case above, the publication scope is defined as Global so that CAB notifies everyone that the selection of a lead has changed.

The subscribers to the event simply declare their event handler and apply the EventSubscriptionAttribute. This attribute has a couple of different constructors. One simply takes the string topic ID, and the other takes the topic ID and a ThreadOption enum, which allows you to control marshaling of the event data. This is useful when your publisher raises its events from a different thread and you need to marshal them to the UI thread, for instance.

    [EventSubscription(ConstantsEvent.CurrentLeadSummaryChanged)]
    public void CurrentLeadChanged(object sender, LeadSummaryEventArgs e)

At development time, you must take the initiative to make sure that the signatures of the two (publisher and subscriber) match. Your code will happily compile even if the two have differing EventArgs types. If you fail to make them match, you will be presented with an ArgumentException at runtime indicating that one cannot be converted to the other.

Admittedly, the fix for the situation that I just ran into is pretty straightforward, but I think it bears pointing out that your development methodology should take this into consideration. If you miss one, probably the easiest way to locate all of the places where the event is used is to simply search the solution by the topic ID. This will allow you to verify that the signatures match.

Long term, add checks to your integration tests to validate that publisher and subscribers work together. This will help prevent future mismatches and confirm that the communication between the two is working well.

Hope this helps if you find yourself in this situation…

Thursday, January 13, 2011

Troubleshooting “Exception has been thrown by the target of an invocation”

You’ve probably run across this Exception in a number of different situations. In my case, I ran into it most recently while doing some plumbing changes on our application, which is a composite application built using CAB (Composite UI Application Block). One of the most common failures we run into when wiring up new views or controllers is when an Exception occurs within the initialization logic of one or both of these.

CAB utilizes ObjectBuilder2 for dependency injection. You will most often see this exception during construction of your object, because ObjectBuilder2 goes through Activator to instantiate the object you asked for.

The problem with this Exception is that it masks the actual problem that is occurring. In my most recent case, it was caused by a null reference that wasn’t being checked. Unit testing and checking for a null reference would have solved this problem prior to doing the wiring (obviously); however, lacking those in the code base I’m working in currently, the best option was to break on the Exception in the debugger and see what was up.

It’s somewhat annoying that the InnerException is null in this case. The stack trace did, however, yield some insight into what was happening and helped to solve the problem.

In my case, starting at the top of the chain with the constructor of the new object helped to ferret out the problem. From there, I was able to go back and properly check my state and handle the null situation without it being a problem.

Hope this helps…

Thursday, December 30, 2010

Just Read “Making Too Much of TDD”

As I read Michael Feathers’ points in this blog post, I found myself agreeing so many times that I felt I had to link to it.

http://www.typepad.com/services/trackback/6a00d8341d798c53ef0147e1235b4c970b

I work in a company where the “scientist” in me is constantly challenged and where most in the developer group fall into the engineer group. I think it’s an excellent example of a polarity that exists (strongly) in our company. The business puts extraordinary pressure on the developers to deliver a working solution in the shortest time possible. The natural response is to forego practices that many in the Agile community consider “best practice” in order to just keep up with the pace of change and new features.

I personally get the same sense of satisfaction when patterns just emerge from the iterative process I follow, which is not strictly TDD, but is more or less a hybrid of it. It bothers me that refactoring is not given much attention in our daily development activities, but I am finding myself now questioning many of the beliefs which I held to be incontrovertible (am I being dogmatic?).

I think my most important takeaway from Michael’s post is the unmistakable position that we MUST question our approaches regularly to improve and to seek new approaches that make us better. Out with the old, in with the new. Constantly focus on pragmatism. If doing something doesn’t improve your product or your time to market, drop it.

I also want to better understand his concept of “language granularity”. I believe it has an important impact on how we do development here, and it touches on a very important business pattern: namely, the constant churn our teams find themselves in and the costs associated with the changes required to satisfy the latest and greatest requirements.

Great read Michael …

Tuesday, November 16, 2010

When Team Velocity is King

Our team met yesterday to do a walk-through of a project that was developed by a group of contractors for a high-priority initiative. A significant portion of the project architecture relied on patterns like dependency injection and factories to gain a high degree of loose coupling. This was motivated by the ever-changing requirements from the product owner and the demand that the system be easily extensible.

At various points during the review, some of the team members who had been on the team the longest commented that the design was too abstract, that “we’d never do a system this way.” When we came across a set of tests that had been commented out, the response was, “Good, we didn’t want to maintain tests anyway.”

Being the new guy on the team, I wanted to understand why they felt that way. The general opinion of the team was that abstraction and unit tests were simply too time consuming to implement and yielded too little value to consider for their applications. This position intrigued me – considering how much adoption and support Agile practices for software engineering have across the industry.

I believe this opinion is rooted in the business’s belief that “better is the enemy of ‘good enough.’” They are more interested in getting applications out quickly, with as few bugs as possible, but when bugs do occur, they are VERY tolerant of them. The cost of fixing bugs, even those found in the field by end users, is not seen as a significant reason to adopt stricter development practices to prevent them.

Instead, they rely heavily on business analysts acting as QA and end users to ferret out the defects that are most critical and fix them then and there. More esoteric bugs that don’t dramatically affect the usability of the applications are glossed over and may be fixed in the future when time allows (or may not).

All of this is motivated by the belief that “going fast is the single most important requirement for our development teams.”

Velocity is King here – and it’s good to be the King.

Wednesday, September 8, 2010

Manually Deploying a WCF Service to IIS

I recently downloaded the WCF and WF samples to begin looking at the Federation sample. The topic of federated security in WCF is an interesting one and I will write about my experience with it in a future post; however, I wanted to address a more basic situation that I ran into while working with the sample project.

I opened the solution in Visual Studio 2010 and compiled it. After successfully compiling, I attempted to run the client project, which is configured as the startup project. The app started, but the browse-books capability failed because I had not deployed the web service that the application required in order to get data. (In the interest of disclosure, I did not follow the recommendation to use setup.bat to create and deploy the web site and related applications that host the STS and app services, because I wanted control over where and how they were created.)

The information in this post is admittedly pretty basic, but in my experience these simple steps are sometimes missed, making it impossible to test your service. Therefore, in this post I will list the simple steps needed to publish/deploy a WCF service to IIS.

If you need more information on WCF, just check out the Beginner’s Guide to Windows Communication Foundation. There you can find a lot of useful information on writing WCF services and exploring the various hosting options available to you.

Create a New Web Site

After writing your service or compiling the sample you’re working with, you need a web site to host it. Assuming you have a working service implementation…

1) Click Start\Run and type ‘inetmgr’ (assuming you already have IIS installed).

2) Right-click Sites and add a new site. In my case, I created the BookStore site with the physical path C:\inetpub\wwwroot\Bookstore.

3) I then created a subdirectory for web services, C:\inetpub\wwwroot\Bookstore\Services, and a bin folder for the binaries you just compiled. You should end up with a directory structure like this: C:\inetpub\wwwroot\<yoursite>\Services\bin.

Copy the Content to the New Site

4) Copy the DLLs from your services bin\debug or bin\release folder to the …\<yoursite>\Services\bin folder.

5) Copy the web service’s .svc and web.config files into the …\<yoursite>\Services folder. If your service needs any additional files to function, copy them to that folder as well. The Book Store example needed a .txt file for its data.
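As a point of reference, an .svc file is just a one-line host directive, and the service's web.config declares the endpoint. The snippet below is a minimal, hand-written sketch; the BookStore type and contract names are placeholders, not the sample's actual names:

```xml
<!-- Service.svc: tells IIS which service type to host -->
<%@ ServiceHost Language="C#" Service="BookStore.BookStoreService" %>

<!-- web.config (system.serviceModel section only) -->
<system.serviceModel>
  <services>
    <service name="BookStore.BookStoreService">
      <endpoint address="" binding="basicHttpBinding"
                contract="BookStore.IBookStoreService" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Enables the metadata/help page you get when browsing the .svc -->
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```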

Create a New Application to Host the Service

6) In the IIS Manager, create a new application for your services folder. Do this by right-clicking the Services folder and selecting “Convert to Application”. Select your newly created Services folder as the physical folder and click OK. Make note of the application pool that you selected.

7) In the application pool’s advanced settings, select the .NET Framework 4.0 (assuming you’re using 4.0; if not, be sure the right version of .NET is selected).

Set the ACL on Your Physical File Folders

8) Grant the application pool identity access to your file folder. The identity will be IIS AppPool\<AppPoolName>; in my case it was IIS AppPool\BookStore. Be sure to grant it Read & execute, List folder contents, and Read.

If you happen to miss this step, IIS cannot load your service assemblies.


9) That’s it! To test your service, select the .svc file in the IIS Manager and click Browse; you should get a page that describes your service.

Good luck!

/imapcgeek