Thursday, December 30, 2010

Just Read “Making Too Much of TDD”

As I read Michael Feathers’s points in this blog post, I found myself agreeing so many times that I felt I had to link to it.

http://www.typepad.com/services/trackback/6a00d8341d798c53ef0147e1235b4c970b

I work in a company where the “scientist” in me is constantly challenged and where most in the developer group fall into the engineer group. I think it’s an excellent example of a polarity that exists (strongly) in our company. The business puts extraordinary pressure on the developers to deliver a working solution in the shortest time possible. The natural response is to forego practices that many in the Agile community consider “best practice” in order to just keep up with the pace of change and new features.

I personally get the same sense of satisfaction when patterns just emerge from the iterative process I follow, which is not strictly TDD, but is more or less a hybrid of it. It bothers me that refactoring is not given much attention in our daily development activities, but I am finding myself now questioning many of the beliefs which I held to be incontrovertible (am I being dogmatic?).

I think my most important takeaway from Michael’s post is the unmistakable position that we MUST question our approaches regularly to improve and to seek new approaches that make us better. Out with the old, in with the new. Constantly focus on pragmatism. If doing something doesn’t improve your product or your time to market, drop it.

I also want to better understand his concept of “language granularity”. I believe it has an important impact on how we do development here, and it touches on a very important business pattern: namely, the constant churn our teams find themselves in and the costs associated with the changes required to satisfy the latest and greatest requirements.

Great read, Michael…

Tuesday, November 16, 2010

When Team Velocity is King

Our team met yesterday to do a walk-through of a project that was developed by a group of contractors for a high-priority initiative. A significant portion of the project architecture relied on patterns like dependency injection and factories to achieve a high degree of loose coupling. This was motivated by the ever-changing requirements from the product owner and the demand that the system be easily extensible.

At various points during the review, some of the members who had been on the team the longest commented that the design was too abstract – that “we’d never do a system this way.” When we came across a set of tests that had been commented out, the response was, “Good, we didn’t want to maintain tests anyway.”

Being the new guy on the team, I wanted to understand why they felt that way. The general opinion was that abstraction and unit tests were simply too time-consuming to implement and yielded too little value for their applications. This position intrigued me, considering how much adoption and support Agile software engineering practices have across the industry.

I believe this opinion is rooted in the business’s belief that “better is the enemy of ‘good enough’”. They are more interested in getting applications out quickly with as few bugs as possible, but when bugs do occur, they are VERY tolerant of them. The cost of fixing bugs, even those found in the field by end users, is not seen as a sufficient reason to adopt stricter development methodologies to prevent them.

Instead, they rely heavily on business analysts acting as QA, and on end users, to ferret out the most critical defects, which are fixed then and there. More esoteric bugs that don’t dramatically affect the usability of the applications are glossed over and may be fixed in the future when time allows (or may not).

All of this is motivated by the belief that “going fast is the single most important requirement for our development teams.”

Velocity is King here – and it’s good to be the King.

Wednesday, September 8, 2010

Manually Deploying a WCF Service to IIS

I recently downloaded the WCF and WF samples to begin looking at the Federation sample. The topic of federated security in WCF is an interesting one and I will write about my experience with it in a future post; however, I wanted to address a more basic situation that I ran into while working with the sample project.

I opened the solution in Visual Studio 2010 and compiled it. After successfully compiling, I attempted to run the client project, which is configured as the startup project. The app started, but the browse books capability failed because I had not deployed the web service that the application required in order to get data. * In the interest of disclosure, I did not follow the recommendation of using the setup.bat to create and deploy the web site and related applications to host the STS and app services because I wanted control over where/how they were created.

The information in this post is admittedly pretty basic, but in my experience these simple steps are sometimes missed, making it impossible to test your service. Therefore, I will list the simple steps needed to publish and deploy a WCF service to IIS.

If you need more information on WCF, just check out the Beginner’s Guide to Windows Communication Foundation. There you can find a lot of useful information on writing WCF services and exploring the various hosting options available to you.

Create a New Web Site

After writing your service or compiling the sample you’re working with, you need a web site to host it. Assuming you have a working service implementation…

1) Click Start\Run and type ‘inetmgr’ (assuming you already have IIS installed)

2) Right-click Sites and add a new site. In my case, I created the BookStore site with the physical path C:\inetpub\wwwroot\Bookstore.

3) I then created a subdirectory for web services, C:\inetpub\wwwroot\Bookstore\Services, and a bin folder underneath it for the binaries you just compiled. You should end up with a directory structure like this: C:\inetpub\wwwroot\<yoursite>\Services\bin.

Copy the Content to the New Site

4) Copy the DLLs from your services bin\debug or bin\release folder to the …\<yoursite>\Services\bin folder.

5) Copy the web service’s .svc and web.config files into the …\<yoursite>\Services folder. If your service has any additional files that it needs to function, copy them to that folder as well. The Book Store example needed a .txt file for its data.
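For reference, the .svc file itself is just a one-line ServiceHost directive, and the web.config carries the endpoint definitions. Here is a minimal sketch – the BookStore names are hypothetical stand-ins, not the sample’s actual types:

<%@ ServiceHost Language="C#" Service="BookStore.BookStoreService" %>

<!-- web.config fragment; service and contract names are illustrative -->
<system.serviceModel>
  <services>
    <service name="BookStore.BookStoreService" behaviorConfiguration="MetadataBehavior">
      <endpoint address="" binding="basicHttpBinding" contract="BookStore.IBookStoreService" />
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="MetadataBehavior">
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>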

Create a New Application to Host the Service

6) In the IIS Manager, create a new application for your services folder. Do this by right-clicking the Services folder and selecting “Convert to Application”. Select your newly created Services folder for the physical folder and click OK. Make note of the application pool that you selected.

7) Change the application pool’s advanced settings to select the .NET Framework 4.0 (assuming you’re using 4.0 – if not, be sure the right version of .NET is selected).

Set the ACL on Your Physical File Folders

8) Grant the application pool identity access to your file folder. The identity will be IIS AppPool\<AppPoolName>. In my case it was IIS AppPool\BookStore. Be sure to grant it Read & execute, List folder contents, and Read.
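If you prefer the command line, the same grant can be applied with icacls (the path and pool name below match this example):

icacls C:\inetpub\wwwroot\Bookstore\Services /grant "IIS AppPool\BookStore":(OI)(CI)RX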

If you happen to miss this step, IIS cannot load your service assemblies.


 

9) That’s it! To test your service, you should be able to select the .svc file in the IIS Manager, click Browse, and get a screen that describes your service.

Good luck!

/imapcgeek

Friday, September 3, 2010

What is TDD and BDD and How Do They Relate to One Another?

In this post I will attempt to define TDD and BDD and show how they are similar. I’ll also describe how each applies to software development and where I’ve seen shortcomings in how TDD is often applied.

TDD - Test Driven Development/Design

Which of the two terms you’ll see – Development or Design – depends on the age of the material you’re reading; newer literature refers to it as Design more often than not. The reason for the change is to emphasize that the primary benefit of the practice is the design of your software, not the testing.

Can you describe TDD in a sentence?

TDD is the practice of using automated unit tests to drive the creation of classes and methods that satisfy a set of requirements for a software application.

On a team, who does TDD?

TDD is performed by the software developers that are responsible for the delivery of the code that satisfies the requirements of the application. It is not performed by QA or others responsible for quality control of an application.

Why are developers writing these tests? Shouldn’t testers write tests?

Because the primary purpose of TDD isn’t testing per se. It’s strange that it’s called Test Driven Design when its primary purpose isn’t testing. Perhaps it’s a matter of semantics, but in my opinion, the simple unit tests created as a part of this process are a byproduct of the effort. The real output from this process is a design that embodies two very important traits: cohesiveness and loose coupling. For more on the SOLID design principles, see http://www.butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod, which has excellent coverage of them.

Cohesiveness refers to the degree to which a class’s interface is well organized into a single set of related responsibilities. This maps to the ‘S’ and the ‘I’ in the SOLID principles that are widely accepted as good object-oriented design principles.

Loose coupling refers to the degree to which classes that work together to solve a problem have concrete knowledge of each other – in other words, how abstract the relationships are. Looser coupling is desirable. This maps to the ‘L’ and ‘D’ and a bit of the ‘O’ of the SOLID principles.
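To make these two traits concrete, here is a minimal C# sketch (all names invented for illustration): the consumer depends only on a small, cohesive abstraction, so implementations can be swapped without touching it.

// A small, cohesive interface: one related set of responsibilities.
public interface INotificationSender
{
    void Send(string recipient, string message);
}

// One possible implementation; an SmsSender could be substituted freely.
public class EmailSender : INotificationSender
{
    public void Send(string recipient, string message)
    {
        // SMTP details would live here.
    }
}

// Loosely coupled: OrderProcessor knows only the abstraction.
public class OrderProcessor
{
    private readonly INotificationSender _sender;

    public OrderProcessor(INotificationSender sender)
    {
        _sender = sender;
    }

    public void Complete(string customerEmail)
    {
        _sender.Send(customerEmail, "Your order has shipped.");
    }
}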

How does TDD fit into an SDLC?

In my opinion, before writing the first TDD test and subsequently any code, developers come together and through a general understanding of the high-level functional and non-functional requirements make a determination of the high-level architecture. They should be able to do this based on past experience with applications that are similar in nature. This decision process includes deciding what technologies will be used (WinForms vs. WPF vs. Mobile; SQL Server; etc.) and what high-level layers exist in the application (UI, Business Logic, Data Access). They should also generally have some ideas about how these layers will interact. For instance, you might determine that because you must support desktop clients, web clients, and automated processing that you wish to separate business logic into a separate layer and make it accessible via a web service. The latter might be skipped to begin with, but I think it’s helpful to form these mental maps early on.

Once the developers have this high-level roadmap, it becomes easier to proceed with work in specific areas, but the next part can be somewhat tricky: the team needs to figure out where to start working. Most TDD examples are very simplistic and in my opinion don’t help much in making this decision. Often, a developer will decide, “I know how I want the database to look, so I’ll do the data model and procs and then code up my data access objects.”

If that’s where they wish to start, they would create a test fixture for the DAO and then write tests that drive the new methods the DAO must provide. As each test is written, logic is added to the DAO to satisfy it. These tests are often very, very small and simple (that’s a good thing). The developer may try to “guess” all the corner cases that a consumer would hit against the DAO, create a lot of tests to validate those, and put code into the DAO to handle them. As you can imagine, you end up with many of these small unit tests. That’s OK, since unit tests are supposed to be fast, atomic, and repeatable; it’s not painful to rerun them many, many times. The end result is a DAO with a well-defined public API for getting your data and a unit test fixture that exercises that logic based on the “rules” the developer established for the DAO.
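As a deliberately tiny sketch of one such test – the BookDao class and its connection-string helper are hypothetical names, with NUnit assumed as the framework:

using NUnit.Framework;

[TestFixture]
public class BookDaoTests
{
    [Test]
    public void GetBookById_ReturnsNull_WhenTheIdDoesNotExist()
    {
        // A DAO pointed at an empty test database (hypothetical helper).
        var dao = new BookDao(TestDatabase.ConnectionString);

        var book = dao.GetBookById(-1);

        // The "rule" this test pins down: missing ids yield null.
        Assert.IsNull(book);
    }
}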

The above pattern would then repeat itself at the web service layer, the business logic layer, and potentially up into the UI layer. All during this time, you make numerous “assumptions” about how the code will be used by a consumer. As you move higher and higher through the layers of your application, you may be required to tweak classes in lower layers, but that’s OK. You have the benefit of unit tests to help you if your assumptions are wrong and you have to go back and refactor. Each change is supported by a compilation and execution of tests to ensure you didn’t break something along the way. This entire process should be iterative, fast, and focused on small sets of functionality.

However, the above scenario is where I’ve seen TDD fail. Why? Because the view is that the reiteration of changes as you move up the layers is seen to be expensive. It can be costly to go back and touch several layers lower down in the design. You often have to: 1) Identify a missing or incorrect assumption; 2) Identify where to make the change; 3) Change the test(s); 4) Change the code; 5) Recompile; 6) Rerun the tests. If your requirements are constantly in flux or you make lots of assumptions that are wrong, this cycle can be expensive.

Enter BDD…

BDD

Behavior Driven Development/Design

Similar to TDD, the emphasis in this practice is on Design, not development. In contrast to TDD, where examples and real-world projects often begin at the bottom of the logical application stack, BDD focuses on “behavior” – what the user experiences and sees from the application at the logical “top” of the stack. That’s why I refer to it as a top-down approach and to TDD as a bottom-up approach.

Not surprisingly, it shares TDD’s principles: 1) use tests to drive design; 2) use small, fast iterations – identify a requirement, codify it in a test, implement code to make the test pass, repeat. Why so similar? Because BDD evolved from TDD. I believe many practitioners of TDD realized that the most important aspect of the apps we write is what the user experiences. As a result, the emphasis shifted from the bottom of the stack to the top, and BDD was born.

So rather than starting with writing tests for a DAO, you would start with writing tests for a UI-related class that defines the behavior the user wishes to have. Depending on the UI technology you chose, you may have in mind various design patterns that are popular with it. For instance, Model-View-Controller (MVC) is very popular with web technologies, Model-View-ViewModel (MVVM) is popular with WPF, and Model-View-Presenter is often applied to WinForms. Strictly speaking, when you first begin writing your UI tests and supporting code, it’s too early to choose one design pattern over another, but I believe it’s useful to have them in mind as you code to see where the design is leading you. Most importantly, let the tests and the resulting behavior drive which pattern is the winner.
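As a tiny illustration of starting at the top, the first test can describe a screen’s behavior before any of the supporting classes exist – the presenter, view interface, and fake below are names invented for this sketch:

using NUnit.Framework;

[TestFixture]
public class BookListPresenterTests
{
    [Test]
    public void Load_DisplaysAllBooks_ToTheUser()
    {
        // None of these types exist yet; writing this test first
        // is what drives their creation.
        var view = new FakeBookListView();
        var presenter = new BookListPresenter(view);

        presenter.Load();

        Assert.IsTrue(view.DisplayBooksWasCalled);
    }
}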

So why is BDD better than TDD?

Well, I believe that they are very closely related, and in reality BDD is nothing more than a refinement in how to approach TDD. In my opinion, BDD is superior because it focuses on the most important aspect of the system you’re building – the thing the user experiences. Secondly, while I don’t have empirical evidence at hand to prove this, I believe it reduces the churn that is often experienced when you build systems from the bottom up. As long as you design and code in small chunks, focus on YAGNI (you ain’t gonna need it), and let the top-level classes drive the requirements for the lower-level classes, the amount of churn is less. Most importantly, you don’t need to make ASSUMPTIONS about what is needed. Your top-level classes, as consumers, TELL the lower-level classes what is needed, and you code that and nothing else. YAGNI wins.

Where do Business Analysts fit into this process?

The emergence of DSLs (Domain Specific Languages) and the tooling to support them has opened up a number of possibilities. They allow the Business Analyst (or subject matter expert, SME) to define specifications for how an application should behave in a language they are familiar with. The tooling then “translates” that into executable code, and with the help of the developer the required BDD unit tests are born, thus driving the development of the app. This is often referred to as “executable specifications”.

One example of such a tool is SpecFlow, a .NET tool that supports creating executable specs by integrating with numerous unit test frameworks like NUnit, xUnit, etc. The SME writes specifications in English sentences that conform to a predetermined structure outlining pre- and post-conditions for each “test”, and the tool converts those into actual unit tests, which the developer then implements.
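As a rough sketch of how that looks with SpecFlow – the scenario text and step class below are invented, while the [Binding], [Given], [When], and [Then] attributes are part of SpecFlow itself:

// The SME writes this in a .feature file:
//   Given a cart with 2 books
//   When the customer checks out
//   Then an order is created
using TechTalk.SpecFlow;
using NUnit.Framework;

[Binding]
public class CheckoutSteps
{
    private int _booksInCart;
    private bool _orderCreated;

    [Given(@"a cart with (\d+) books")]
    public void GivenACartWithBooks(int count)
    {
        _booksInCart = count;
    }

    [When(@"the customer checks out")]
    public void WhenTheCustomerChecksOut()
    {
        // In a real project this would call into the application code.
        _orderCreated = _booksInCart > 0;
    }

    [Then(@"an order is created")]
    public void ThenAnOrderIsCreated()
    {
        Assert.IsTrue(_orderCreated);
    }
}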

@bradwilson of Microsoft did a demo of this tool at #agile2010 and showed how he was able to integrate the resulting test fixtures with a web-automation tool to drive the web UI of a project. The end result was a set of specifications written in English, backed by executable tests written in the developer’s language, that drove the creation of the web UI the end user wanted.

I hope this has helped to clarify the differences between TDD and BDD and how they might apply to your development initiatives.

/imapcgeek

Monday, August 16, 2010

Agile 2010 - Day 1 - Evolutionary Web Development with ASP.NET MVC (Part 1)

Presented by Brad Wilson, Senior Developer Microsoft ASP.NET MVC Team

Part 1 – TDD

The first session for me at the Agile 2010 Conference in Orlando was “Evolutionary Web Development with ASP.NET MVC”, given by Brad Wilson (@bradwilson). The first 90 minutes of the three-hour session were dedicated to Test Driven Development (TDD) basics. Brad and his sidekick Scott not only demonstrated the fundamentals of how testing drives design, but also gave a good example of how pair programming works. For me, the jury is still out on the effectiveness of pairing, but then I’ve never been on a team where it was encouraged or embraced enough to sway me.

Although TDD isn’t practiced at my current company, I do use TDD in my personal programming projects, and this was a good reminder of the fundamentals. Repetition is what helps to keep our skills fresh.

A few key points from the first half that resonated with me were:

1) When developing, think in terms of component development that facilitates isolation, making it possible to test your units independently

2) Use your tests to drive public APIs – tests should only exercise publicly visible behavior

3) Your tests should be small, simple, and fast – that makes it easier to run them over and over again. If anything, I’d add that tests should also be repeatable, though that wasn’t discussed (maybe it was just that obvious).

4) The basic workflow in TDD is Green –> Red –> Green – although I’ve often seen the workflow start with Red, where my tests initially fail until I provide the requisite implementation to make them pass. In fact, James Newkirk’s book on TDD starts off with the Red -> Green -> Refactor workflow.

5) Refactor, refactor, refactor – if you aren’t regularly refactoring, your code probably (almost surely) has a lot of smells. The beauty of the unit tests is that they provide a safety net and give confidence that while refactoring you aren’t breaking the contract that you promised to provide in your implementation.

6) Replace your dependencies with “test doubles” (mock objects) – Brad’s demo touched lightly on Moq (http://code.google.com/p/moq/), which I hadn’t used before. It looks like a good framework and I’ll have to experiment with it myself, but having used NMock and others, it appears to address some of their shortcomings (like mocking classes, not just interfaces).
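For a flavor of what Moq offers, here is a minimal sketch (the repository and service types are invented for illustration):

using Moq;
using NUnit.Framework;

[TestFixture]
public class BookServiceTests
{
    [Test]
    public void Find_ReturnsTheBook_FromTheRepository()
    {
        // Replace the real data-access dependency with a test double.
        var repository = new Mock<IBookRepository>();
        repository.Setup(r => r.GetById(42)).Returns(new Book { Id = 42 });

        var service = new BookService(repository.Object);

        Assert.AreEqual(42, service.Find(42).Id);
    }
}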

7) TDD is more about design than it is about testing. The conscious act of creating tests that define the specifications for the public API is a radical departure for anyone who has never tried TDD. It can represent quite a leap in style and thought process, but one that can yield excellent results. Most people who aren’t practicing TDD say they believe the process would slow them down. I believe (though I have no hard evidence to support my argument) that it’s quite the opposite. If nothing else, it certainly yields a cleaner design that improves maintainability, reducing time spent over the long haul.

8) A side effect of #7 is that the unit tests aren’t the goal; they’re an artifact of the process. This can be a distraction for some – myself included. Until you realize that the unit tests are a means to an end, you may lose focus on what they are supposed to be doing for you: a) driving your design and b) providing a safety net for when you refactor (you want to refactor, don’t you?).

9) Refactoring should focus on small, incremental changes. In my second session of the day (which I’ll cover in another post), Joshua Kerievsky (pretty sure it was him) spoke up about the importance of refactoring to patterns to improve overall design.

I came away with a few little catchy tidbits as well.

1) xUnit uses the word “fact” to avoid the stigma that developers have against testing. I began experimenting with xUnit after the conference and find it as usable as NUnit, and it also emphasizes some best practices that NUnit missed (as James Newkirk’s blog points out, xUnit was developed after considerable experience with “programmer testing”).

2) “Do the simplest thing that can make it work.”

3) TDD drives a very small incremental development style. Add test, watch it fail, add code, watch it pass.

4) In the real world you would write the acceptance tests first – BDD then TDD. The acceptance tests will fail for quite some time.

5) TDD tests are the documentation for other developers. BDD tests are the documentation for the business.

6) @bradwilson, “TDD is like the scientific method.” You form a hypothesis, write a test to represent it, and then see if your assumption is correct.

7) The basic test structure should be similar to this (shown here as a small xUnit-style sketch; the Calculator class is a hypothetical stand-in):

[Fact]
public void Add_ReturnsTheSumOfItsArguments()
{
    // Arrange – set up the conditions for the test
    var calculator = new Calculator();

    // Act – do whatever work is required
    int result = calculator.Add(2, 3);

    // Assert – validate that the results are what you expected
    Assert.Equal(5, result);
}

See (http://c2.com/cgi/wiki?ArrangeActAssert)

That pretty much sums up the first part of that session. In closing, the important takeaways were using TDD and refactoring to drive your designs and improve code readability and overall health. Though this session talked about using mock objects as a means to loose coupling, a later session with Arlo Belshee discussed other design patterns that can also help to achieve loose coupling, but that don’t necessarily have the same overhead that mocks bring. I’ll cover more on that in a future post.

Wednesday, July 28, 2010

Troubleshooting a Failed Deployment of WCF Services on IIS7

In this blog post I will describe a problem that I recently experienced while developing a new WCF service and deploying it to a local IIS installation on Windows 7. The details of the service itself are unimportant; rather, the specific problems are related to the deployment of the service and with the setup and installation of IIS.

When attempting to browse to the newly deployed service (which was nothing more than a stub at this point), I experienced two problems. First, there was no handler mapping for the .svc extension. After I rectified that, I received the following error:

HTTP Error 500.21 – Internal Server Error
Handler “svc-Integrated” has a bad module “ManagedPipelineHandler” in its module list

According to the error page that was presented, the most likely causes were:

  1. Managed handler is used; however, ASP.NET is not installed or is not installed completely.
  2. There is a typographical error in the configuration for the handler module list.

How I got here:

  1. I installed the Internet Information Services through the Control Panel \ Windows Features dialog.

  2. I then manually created a new Application Pool to host the new services web site.

  3. From there I created a new Web Site to host the service.

  4. Last, I set my publication settings in Visual Studio to push the service and its implementation to the newly created website. Executing the publish from the IDE, all went well.

 

When I browsed to the service in the IIS Manager, I received the dreaded error above.

The root cause of this is mainly a timing issue: depending on the order in which you install .NET and IIS, the WCF and ASP.NET handlers and related modules may not be properly configured. Fortunately, resolving the issue is fairly straightforward.

  1. To resolve the problem with a lack of a .svc handler mapping, run the following:
    “%windir%\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe -i”

    or if you are running with .NET 4, then the path is:
    “%windir%\Microsoft.NET\Framework\v4.0.30319\ServiceModelReg.exe -i”

    * Note that if you’re running on a 64-bit O/S you will need to change the path to use “Framework64” instead of “Framework”.
  2. Lastly, to resolve the error above, re-run the ASP.NET registration using this command:
    “%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i”

Once you’ve done that, you should be able to access your WCF service. I hope this helps if you run into a similar situation.


Thursday, July 15, 2010

Expression Blend 4

I’m not sure how I missed it, but Expression Blend 4 is now out. I recently went back to create a SketchFlow project to model some stories I’m working on for a proof-of-concept application and found that Expression Blend 3 does *not* integrate with Visual Studio 2010. I posted a #fail message on Twitter and, lo and behold, @mfcollins3 and @unnir were nice enough to point out that Blend 4 was not only out but *did* work with Visual Studio 2010.

Thanks for the info. I’m downloading and installing Blend 4 as I write this and will hopefully be back to sketching and designing again very soon.

Wednesday, June 30, 2010

Lessons Learned from a Failed Vendor Engagement

We just terminated a POC engagement with a software vendor (who shall remain unnamed to protect the innocent :)) because of their inability to deliver a workable solution for our needs. From the beginning, the relationship between the two companies was somewhat iffy. I thought I’d share some insight into what we experienced and see if there are key takeaways that might prove useful for the future.

The Problem

Our company does a tremendous amount of custom application development in support of our primary line of business. One area we have identified as having room for improvement is monitoring the runtime health of our applications. We were seeking an enterprise-scale solution that would allow us to instrument and monitor our applications and to be preemptive in identifying problems in production environments before they become outages affecting our end users.

The Process

Beginning with a failed LiveMeeting where the vendor completely missed the mark on what we were looking for, we finally got a high-level demo of the “product” that they felt would serve our needs. A number of hour-long telephone conversations were held, and lengthy email discussions and questionnaires were completed, all in a fact-finding and discovery mission to ensure that the vendor thoroughly understood our problem and our environment, including network and hardware infrastructure, application architecture, etc. This concluded with an agreement by the vendor to be on-site to assist with standing up a POC test environment with their components monitoring our applications. Our goal was to spend two days installing and configuring the system, wrap up with a lunchtime demo of the installed environment, and then spend the remainder of the week exercising our applications in that context to understand how the solution would help us. We never got there.

The Failure

The reps from the vendor arrived as scheduled on Monday morning and began the process of installation and configuration. After a couple of hours, it became clear that there was confusion on their part about our environment, which forced me to walk through additional whiteboard discussions of what the environment was and how things were organized (reiterating many points that had been made in the countless phone calls preceding the POC). To be clear, our environment is not complicated; compared to many larger enterprises it must be considered quite simple, actually. So for a vendor positioned to serve significantly larger enterprises, this already presented some concerns for us. Then came the confusion about which of their products we wanted to demo. Again, those details had been previously discussed, but confusion and lack of clarity reigned when trying to nail down which of their products would really solve the problems we had.

As the second day was winding down, we found that they had completely forgotten an entire tier of our architecture and didn’t understand how it played into the big picture. That left us with a half-complete install as they were scheduled to fly out. The best they could offer was to demo what they had and promise to call back and, via a remote session, try to configure the last pieces – at the end of the week, with teams of people left twiddling their thumbs, unable to test what we had planned.

The demo was the most frustrating of all. It was fragmented, bouncing between screens and products to try to get the “big picture” of system health. When trying to drill into specific areas, we would get “well this is how it should have worked – you get the idea” rather than really nailing the working example. Under the gun and with time running out before their flights, they packed up and headed out.

Lessons Learned

Some key takeaways for me in the aftermath of this process are enumerated below (in no particular order):

  1. If you supply a product, it is a MUST that you understand your own product. If you can’t install, configure, or use the product to its fullest don’t bother showing up.
  2. You MUST listen to your customer. Remember the old adage “The customer is always right.” Take the time to thoroughly understand the needs.
  3. Do your homework before committing not only your time and money to something, but more importantly the time and money of your customer.
  4. Don’t be afraid to say, “I don’t know the answer to that. I’ll get back to you.” Honesty is always the best policy.
  5. Be honest when you know your product has gaps or is imperfect. Knowing the limitations helps make a more informed decision.
  6. Technology is not a silver bullet; it will not solve all of your problems. It can help you make better informed decisions, but still needs a personal touch to make it work. Having competent people that understand not only your product but your customers and their environments is a MUST.
  7. As a consumer, especially when dealing with products this large and complex, get recommendations BEFORE engaging the vendor in a POC.

Some of these thoughts are pretty common sense, and at least the last one would have saved us the headache and cost of this endeavor had I followed it. I hope that others benefit from this experience and the feedback.

Good luck!

Thursday, June 24, 2010

How Do You Manage Multiple Dev Environments?

I am often required to work on multiple versions of a product, some of which span different versions of .NET or require different versions of Visual Studio, 3rd-party controls, etc. Other times, I need a different version of the O/S for development or testing. What’s the best way to manage all those differences and keep from stepping on one environment or another?

My preferred way of managing this juggle is to use Virtual PC. The beauty of using Virtual PC for this is that I can configure a completely self-contained environment that has everything I need for the project I’m working on. I can quickly replicate it for the next project, change what is unique, and be up and running with an environment customized specifically to my needs, without breaking the first one – which allows me to return to it when needed.

One caveat with using Virtual PC for this: you need a pretty well-configured machine to run it on, especially if you want to load up resource-intensive tools like VS.NET, ReSharper, etc. I highly recommend a minimum of 8GB of RAM – more if you can get it. In some cases, I’ll have more than one virtual machine spun up at a time, and having LOTS of RAM is essential.

Perhaps the other tip I can offer here is to take the time to organize your virtual images based on O/S and tool sets installed. It will help you to duplicate an environment quickly and customize it to your needs.

Wednesday, June 23, 2010

How Does VSTS Recognize Which Projects Have Test Classes?

I recently ran into an interesting gotcha with VSTS 2010. I added a library project to my solution that I intended to populate with integration tests. When I created the project, I chose the Class Library template. I then added a class to the project that would represent my test fixture. Expecting VSTS to leverage the [TestClass] attribute that I applied to the class, I went about adding test cases and attributing them with [TestMethod].

When I went to my Test View, the methods that I’d added didn’t show up :( The problem was that I expected VSTS to behave like NUnit, which reflects over classes looking for the [TestFixture] attribute to figure out which ones contain tests. VSTS does not work that way; rather, you have to modify the .csproj file to include a new element:

<ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
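For context, that element belongs inside the project’s first <PropertyGroup>; the first GUID marks the project as a test project and the second identifies it as a C# project. The surrounding fragment ends up looking roughly like this (other properties omitted):

<PropertyGroup>
  ...
  <ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
</PropertyGroup>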

Once I added that element to the project, the Test View happily found my new tests.

Hope this helps you if you run into a similar problem.


Thursday, March 18, 2010

Free eBook: Programming Windows Phone 7 Series (DRAFT Preview)

While we were at MIX, we were given an opportunity to peek into the upcoming book from Charles Petzold on Windows Phone development. He has released a preview of the book that you can download now.

Check it out here:

http://blogs.msdn.com/microsoft_press/archive/2010/03/15/free-ebook-programming-windows-phone-7-series-draft-preview.aspx

Enjoy!

Key Takeaways from MIX 2010

I had an opportunity to attend MIX this year in Las Vegas. I hadn’t been before, so I really didn’t know what to expect. Marketing it as a conference for the web doesn’t do it justice – or maybe it’s just that the technologies Microsoft now brings to the table for web development are so rich and diverse – but there is a LOT of information there even for folks who aren’t doing web development.

So what were the big takeaways? The message that I came away with was that there were four big areas of focus:

Silverlight

Impressive stuff here. Silverlight 4 really has stepped up to be a first-class desktop citizen. Its ability to run as an out-of-browser application with elevated privileges really bridges the gap between older versions and WPF. I am honestly asking myself if we are seeing Microsoft make WPF irrelevant. While at MIX they demoed Silverlight running on the Windows Phone 7 platform and the Xbox, all with the ability to leverage the Expression Blend designer tool, which is an extremely powerful tool in its own right – probably unparalleled for XAML layout.

A big announcement from Microsoft was that they were making the development tools for Windows Phone development FREEEE. You can download them from here.

Cloud Computing

While I got the opportunity to attend only one session on Windows Azure, it was enough to convince me that it will make a big impact on design decisions for companies that want a fast, low-cost entry into a secure, reliable, and scalable environment in which to host their apps. This is a very new area, particularly for me, but the advantages are immediately obvious when you look at them: reliable, secure, scalable, and built on open standards.

One new service that will really make this shine is AppFabric, which allows Azure to integrate with on-premise applications. Imagine being able to run your LOB applications in the cloud and take advantage of its capabilities, but also allow them to reach back into your network for secure data or other internal resources. Very powerful.

Open Data Protocol

While the last thing we want (or need) is another data-access acronym to go along with ADO and all of its predecessors, OData (short for Open Data Protocol) represents Microsoft’s commitment to data integration and interoperability. Many of the MIX sessions demonstrated consuming data using OData syntax through the browser and through RESTful web services. I expect this will be a game changer when it comes to how we write data services.
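To give a flavor of the syntax, a query can be composed right in the browser’s address bar. This illustrative URL asks the public Northwind sample service for the first five products ordered by name:

http://services.odata.org/Northwind/Northwind.svc/Products?$orderby=ProductName&$top=5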

During the conference, numerous demos gave the same message – OData will make it possible to mash together data from numerous disparate sources and provide a seamless and compelling user experience. Imagine pulling data from Netflix, the local movie theaters in your area, and your own personal movie database to provide a single application on your phone to search movies. Now imagine doing that using a standards-based syntax that makes it equally easy to consume any/all of those data sources. Now imagine that you want to serve up your data using OData and being able to do it completely from within your web browser! All demoed this week!

IE9

The sessions I attended didn’t have anything to do with the vNext version of Internet Explorer; however, the Day 2 keynote gave us some peeks into what’s in store for IE9. First – it’s FAST. Second – it’s standards-based. Third – did I mention it was fast? It was amazingly fast for an early developer build. It features a completely redesigned rendering engine that leverages the power of multi-core CPUs. The demonstration of its polygon-rendering capabilities and performance was quite compelling.

Its support for HTML5 and video was amazing. The other browsers will finally have something to catch up to. Watching streaming HD video at full frame rate with 50% of the CPU utilization of competitors was impressive – especially since the others were dropping frames like hot rocks.

That said, I’m still leery of the security issues that have plagued IE in the past. Let’s hope they’ve resolved those (or soon will).

Summary

Collectively, these technologies come together to provide a rich development, deployment, and support environment. I expect them to go WAY beyond the development of just web applications.

I plan on writing more about each of these as I get the opportunity to explore them and employ them in my projects.

Friday, March 5, 2010

Starting Down the Path of Android Development

A while back I decided to begin doing some Windows Mobile development. Overall, it’s been a good experience, and the tool support in VS2008 is excellent. The ability to debug your mobile app in a virtual device emulator is extremely helpful for visualizing how your app will run on the target device platform.

However, now that I’ve had the chance to begin using the Motorola Droid and to see its marketplace application, which gives access to scores of free and paid apps, I’m convinced that I need to begin looking at developing for this platform as well. In the short time that I’ve used this new phone, it’s clear that the user experience is far superior to WM6. I haven’t seen more than a couple of screenshots of the new Windows Phone 7 platform, but it does look like WM7 could be a strong competitor.

My next posts will be about ramping up and getting started with a basic application for this platform. It should be exciting and hopefully I can help share some of the stumbling blocks that I run across to save you a bit of your own frustration.


Saturday, February 20, 2010

Where Scrum Falls Short and Software Engineering Has to Pick Up

For any company that adopts Scrum, there will be an initial period where productivity dips a bit while everyone learns the process, but then within a very short time productivity will spike higher. Ayende Rahien, in a recent post, goes into some excellent detail about where Scrum as a “product” development process falls short when developing software.

The reasons for this are pretty obvious when you look at Scrum side-by-side with what we know to be best practices for software engineering. It isn’t until you understand what each brings to the table that you can see their shortcomings and then blend the two together to create a process that brings the strengths of both to bear on the problem.

Scrum’s short iterations, frequent feedback, and quick turn-around are clear winners. It shortens the feedback cycle and makes staying focused on the highest priorities for the Product Owner much easier. It also allows for changing direction when requirements change. However, as Ayende points out, there is nothing in this that is software-specific. That’s where software engineering practices like Test Driven Development, Continuous Integration, and Refactoring come into play. These work together to ensure that the “product” is built using best practices that focus on the actual product and the idiosyncrasies of its development.

It is critical that these best practices be interwoven into your software development process – especially an Agile process like Scrum – because without them, you are missing the aspects that emphasize quality. Over time, as quality declines, so too will your velocity. Ayende describes this:

“You hit the Scrum wall when you adopt Scrum and everything goes well, then, after a few Sprints things don’t work any more - to use an English expression, they go pear shaped. You can’t keep your commitments, you can’t release software, your customers get annoyed and angry, it looks like Scrum is broken.”

How many times have you gotten a few sprints into your project, and then had to put on the brakes to introduce a “go back and fix stuff” sprint? For our team, it happened a lot early on before we adopted those best practices. After adopting them, we rarely had to go back to code due to quality issues; rather, we were more likely to have to revisit code due to changes in requirements or additional features being added. Clearly this is much better – we’re now focusing on business drivers and less on technical debt.

Bottom line – don’t forget to roll the details of these software engineering best practices into your process. Let Scrum manage what it does well – requirements and deliverables. When you include the best of each, your product will benefit greatly.

Friday, February 19, 2010

Exporting Table Definitions from SQL Compact Edition Database Files

While working on a mobile app that uses SQLCE, I was unpleasantly surprised to find that SQL Server 2008 Management Studio has no built-in capability to export table definitions into a script file. So when I found that @ErikEJ had written this little plug-in on CodePlex, I was quite happy.

Thanks, Erik, for a very useful little utility :)


Friday, February 12, 2010

Writing Stored Procedures for SQL Server 2008 in C#

This is my first foray into writing stored procedures for SQL Server in managed code. I decided to check it out since I was already doing some other SQL Server work in support of a WPF-based Agile project management application and thought this would be a good opportunity to explore a little and see how it might apply.

Let me be clear: this post is really to capture my thoughts and experience along the way. I’ll include links and quotes where applicable.

Decision Points

From “Overview of CLR Integration” - http://msdn.microsoft.com/en-us/library/ms131045.aspx

Choosing Between Transact-SQL and Managed Code

When writing stored procedures, triggers, and user-defined functions, one decision you must make is whether to use traditional Transact-SQL, or a .NET Framework language such as Visual Basic .NET or Visual C#. Use Transact-SQL when the code will mostly perform data access with little or no procedural logic. Use managed code for CPU-intensive functions and procedures that feature complex logic, or when you want to make use of the BCL of the .NET Framework.

Choosing Between Execution in the Server and Execution in the Client

Another factor in your decision about whether to use Transact-SQL or managed code is where you would like your code to reside, the server computer or the client computer. Both Transact-SQL and managed code can be run on the server. This places code and data close together, and allows you to take advantage of the processing power of the server. On the other hand, you may wish to avoid placing processor intensive tasks on your database server. Most client computers today are very powerful, and you may wish to take advantage of this processing power by placing as much code as possible on the client. Managed code can run on a client computer, while Transact-SQL cannot.

* Note: I have some reservations about the suggestions made in the last section. While many business-class PCs today have more processing power, and while scenarios may exist where it would be nice to distribute computational work to the client, it is often unreasonable to place the burden of processing on the client PC because of the need to transfer the required data to it. This seems like a corner case that probably doesn’t come up often.

Advantages

While reading through the various articles, I came across this list of advantages for using managed code instead of Transact-SQL.

  • Enhanced programming model
  • Enhanced Safety and Security
  • User-Defined Types and Aggregates
  • Common Development Environment
  • Better Performance (for computational sorts of logic; see the point above regarding straight data access)
  • Language richness
  • Reusability of code
  • Extensibility
  • Leverage existing skills
  • Richer development experience
  • Stability and reliability

Step-by-Step

1) Create the database project in your solution. Click the database category and choose the “SQL Server Project”. Give it a name and choose the directory where you want it created.

The wizard will create a .SQL script from which you can test the database objects that you create.

2) I decided to create a subfolder in which I’ll place my new stored procedures.

3) After that, right-click the new folder and add a new stored procedure; give it a descriptive name. Use a naming convention that lets you know what the stored procedure does. I prefer to prefix my stored procedures with the name of the module of the app that it applies to; e.g., Products_InsertNewProduct or Products_SelectProductById


4) The stored procedure template will give you a basic outline for a stored procedure. The first thing you’ll notice is that it creates a static method on a partial class. Also, the method is marked up with the Microsoft.SqlServer.Server.SqlProcedure attribute.

5) Modify the signature of the method to include the parameters that you want.

6) The implementation of this is pretty much plain-vanilla ADO.NET; there are a couple of minor differences to make note of:

a. The connection string for the SqlConnection is based on the current context that the stored procedure is running under: “context connection = true”

b. When returning data, you use the SqlPipe, which is accessible from the SqlContext.

7) Next, compile and deploy the project. Both of these commands are available from the “Build” menu.
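Putting steps 4 through 6 together, a minimal managed stored procedure looks roughly like this (the Products table and column names are invented for illustration):

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void Products_SelectProductById(int productId)
    {
        // "context connection=true" runs on the caller's own connection.
        using (SqlConnection connection = new SqlConnection("context connection=true"))
        {
            connection.Open();

            SqlCommand command = new SqlCommand(
                "SELECT ProductId, Name, Price FROM Products WHERE ProductId = @id",
                connection);
            command.Parameters.AddWithValue("@id", productId);

            // SqlPipe streams the result set back to the client.
            SqlContext.Pipe.ExecuteAndSend(command);
        }
    }
}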

That’s about it for now. You can use the Test.sql script generated in the project to test your procedures. Going forward, I see lots of potential for code reuse. My next post on managed code for SQL Server will focus on triggers and user-defined types.

Have fun!

Related Links

Introduction to SQL Server CLR Integration (ADO.NET)

Overview of CLR Integration

Creating SQL Server Objects in Managed Code


Sunday, January 31, 2010

BDD Tooling

In my previous post on BDD, I alluded to tools that would come along to help with bridging the gap between specifications and test code.

"...new advances in DSLs would provide a potential bridge to generate
functional code to validate the requirements have been met. It seems to
me this would provide a much higher value from a testing and code
quality perspective as you are now writing tests that are 1) driven
directly by the requirements; 2) oriented at a piece of functionality
that directly affects the user."


A tweet from @ryanlanciaux today pointed me to SpecFlow. This tool looks to have a lot of promise in doing just what I described above.

You can read more about SpecFlow and download it from here: http://specflow.org/home.aspx

I plan on downloading SpecFlow this week and giving it a look. I’ll post more on my experiences and thoughts regarding this tool after I’ve had some time with it.