Wednesday, June 30, 2010

Lessons Learned from a Failed Vendor Engagement

We just terminated a proof-of-concept (POC) engagement with a software vendor (to remain unnamed to protect the innocent :)) because of their inability to deliver a workable solution for our needs. From the beginning, the relationship between the two companies was somewhat iffy. I thought I’d share some insight into what we experienced, along with some key takeaways that might prove useful in the future.

The Problem

Our company does a tremendous amount of custom application development in support of our primary line of business. One area we have identified as having room for improvement is monitoring the runtime health of our applications. We were seeking an enterprise-scale solution that would let us instrument and monitor our applications and identify problems in production environments preemptively, before they become outages affecting our end users.

The Process

After a failed LiveMeeting in which the vendor completely missed the mark on what we were looking for, we finally got a high-level demo of the “product” they felt would serve our needs. We held a number of hour-long telephone conversations and completed lengthy email discussions and questionnaires, all in a fact-finding and discovery mission to ensure that the vendor thoroughly understood our problem and our environment, including network and hardware infrastructure, application architecture, etc. This concluded with the vendor agreeing to come on-site to help stand up a POC test environment with their components monitoring our applications. Our goal was to spend 2 days installing and configuring the system, wrap up with a lunchtime demo of the installed environment, and then spend the remainder of the week exercising our applications in that context to understand how the solution would help us. We never got there.

The Failure

The vendor’s reps arrived as scheduled on Monday morning and began the process of installation and configuration. After a couple of hours, it became clear that there was confusion on their part about our environment. This forced me to walk through additional whiteboard discussions of what the environment was and how things were organized, reiterating many points that had been made in the countless phone calls preceding the POC. To be clear, our environment is not complicated; compared to many larger enterprises, it is actually quite simple. For a vendor positioned to serve significantly larger enterprises, this was already cause for concern. Then came the confusion about which of their products we wanted to demo. Again, those details had all been discussed previously, but confusion and lack of clarity reigned when we tried to nail down which of their products would really solve our problems.

As the second day was winding down, we found that they had completely forgotten an entire tier of our architecture and didn’t understand how it played into the big picture. That left us with a half-complete install just as they were scheduled to fly out. The best they could offer was to demo what they had and promise to call back at the end of the week to configure the last pieces via a remote session – leaving teams of people twiddling their thumbs, unable to run the tests we had planned.

The demo was the most frustrating part of all. It was fragmented, bouncing between screens and products to try to convey the “big picture” of system health. When we tried to drill into specific areas, we got “well, this is how it should have worked – you get the idea” rather than a real working example. Under the gun and with time running out before their flights, they packed up and headed out.

Lessons Learned

Some key takeaways for me in the aftermath of this process are enumerated below (in no particular order):

  1. If you supply a product, it is a MUST that you understand your own product. If you can’t install, configure, or use the product to its fullest, don’t bother showing up.
  2. You MUST listen to your customer. Remember the old adage “The customer is always right.” Take the time to thoroughly understand their needs.
  3. Do your homework before committing not only your own time and money to something, but more importantly your customer’s.
  4. Don’t be afraid to say, “I don’t know the answer to that. I’ll get back to you.” Honesty is always the best policy.
  5. Be honest when you know your product has gaps or is imperfect. Knowing the limitations helps your customer make a more informed decision.
  6. Technology is not a silver bullet; it will not solve all of your problems. It can help you make better-informed decisions, but it still needs a personal touch to make it work. Having competent people who understand not only your product but also your customers and their environments is a MUST.
  7. As a consumer, especially when dealing with products this large and complex, get recommendations BEFORE engaging the vendor in a POC.

Some of these thoughts are pretty common sense, and following the last one alone would have saved us the headache and cost of this endeavor. I hope that others benefit from this experience and the feedback.

Good luck!

Thursday, June 24, 2010

How Do You Manage Multiple Dev Environments?

I’m often required to work on multiple versions of a product, some of which span different versions of .NET or require different versions of Visual Studio, 3rd-party controls, etc. Other times, I need a different version of the O/S for development or testing. What’s the best way to manage all those differences and keep one environment from stepping on another?

My preferred way of juggling all this is to use Virtual PC. The beauty of using Virtual PC is that I can configure a completely self-contained environment that has everything I need for the project I’m working on. I can quickly replicate it for the next project, change what is unique, and be up and running with an environment customized specifically to my needs – all without breaking the first one, so I can return to it when needed.
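As a rough sketch of what that replication looks like in practice (the paths and names here are purely hypothetical), it can be as simple as copying the base image’s virtual hard disk and attaching the copy to a new virtual machine:

    mkdir D:\VPC\Projects\WidgetApp-v2
    copy D:\VPC\Base\Win2003-VS2005.vhd D:\VPC\Projects\WidgetApp-v2\WidgetApp-v2.vhd
    REM Create a new VM in the Virtual PC console, point it at the copied .vhd,
    REM then install or adjust only what is unique to this project.

From there, only the project-specific pieces need to change inside the copy; the base image stays pristine.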

One caveat with using Virtual PC for this: you need a pretty well-configured machine to run it on, especially if you want to load up resource-intensive tools like VS.NET, ReSharper, etc. I highly recommend a minimum of 8GB of RAM – more if you can get it. In some cases, I’ll have more than one virtual machine spun up at a time, and having LOTS of RAM is essential.

Perhaps the other tip I can offer is to take the time to organize your virtual images based on the O/S and tool sets installed. That will help you duplicate an environment quickly and customize it to your needs.
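For example (again, the names are purely illustrative), a layout like this makes it obvious which base image to clone for a new project:

    D:\VPC\
      Base\
        Win2003-VS2005.vhd      (clean O/S + tools – keep pristine)
        Win7-VS2010.vhd
      Projects\
        WidgetApp-v2\           (copied from Win2003-VS2005)
        GadgetApp-v3\           (copied from Win7-VS2010)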

Wednesday, June 23, 2010

How Does VSTS Recognize Which Projects Have Test Classes?

I recently ran into an interesting gotcha with VSTS 2010. I added a library project to my solution that I intended to populate with integration tests. When I created the project, I chose the Class Library template. I then added a class to the project to represent my test fixture. Expecting VSTS to leverage the [TestClass] attribute I applied to the class, I went about adding test cases and attributing them with [TestMethod].
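The class looked something like this (a minimal sketch – the class and method names are illustrative, and the project needs a reference to Microsoft.VisualStudio.QualityTools.UnitTestFramework):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    namespace MyApp.IntegrationTests
    {
        [TestClass]
        public class OrderServiceTests
        {
            [TestMethod]
            public void SubmitOrder_PersistsTheOrder()
            {
                // ...exercise the service and assert on the results...
            }
        }
    }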

When I went to my Test View, the methods I’d added didn’t show up :(  The problem was that I expected VSTS to behave like NUnit, which reflects over the classes looking for the [TestFixture] attribute to figure out which classes contain tests. VSTS does not work that way; instead, you have to modify the .csproj file to include a new ProjectTypeGuids element:

<ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
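For reference, the first GUID marks the project as a test project and the second identifies it as a C# project. The element belongs in the first PropertyGroup of the .csproj, roughly like this (a trimmed sketch with other properties elided):

    <PropertyGroup>
      <OutputType>Library</OutputType>
      <ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
      <!-- other properties elided -->
    </PropertyGroup>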

Once I added that element to the project, the Test View happily found my new tests.

Hope this helps you if you run into a similar problem.
