Monday, November 25, 2013

Upgrading to Visual Studio 2013

Although VS 2012 is the best version of Visual Studio that I've worked with to date, a few features lack a little bit of polish. In particular, the code review features are so-so. For example, code review simply doesn't work on a file that has been renamed. Nope. Not gonna do it!

I've purchased my Resharper v8.x upgrade and I'm set to go.

First things first. Time to uninstall VS 2012. This kicks off a lengthy system restore point process. Since I have used this version for quite a while, numerous updates have been applied, and there are countless Visual Studio-related apps in Control Panel that I'm not sure I need to uninstall. We'll soon find out.

Once the uninstall of VS 2012 completed (which, surprisingly, didn't prompt me for a reboot), I began the installation of VS 2013. A full install is projected to use 12 GB of space - not a small app by any stretch.

Funny thing is that while this is running, I'm using SharpDevelop 4.3 to continue my coding and unit testing. Need to stay busy! I actually like SharpDevelop for simpler things, but there are a few things that I miss. For example, our projects use the Microsoft test framework, and the refactoring support from Resharper is awesome.

As the VS 2013 installer wrapped up, it let me know that Hyper-V is turned on and that it had added me to the Hyper-V Administrators group. It looks like this was necessary for Windows Phone Emulator support.

After a lengthy reboot due to updates being applied to Windows, Visual Studio was ready to run. One interesting thing to note is the introduction of a sign-in process, which looks to support syncing settings across development machines.

I was initially stymied by some errors while trying to sign in. A quick reboot and restart of Visual Studio resolved them. Once logged in, your settings are associated with your Microsoft account and will follow you from one development machine to another.

Now I'm off to try out some of the other features and to see if the Code Review functionality performs as I'm hoping for. As an aside, I notice that the Team Explorer interface is much more refined.

Edit: Just a quick note that you will also need to update your version of the TFS Power Tools if you use them. They can be found here: http://visualstudiogallery.msdn.microsoft.com/f017b10c-02b4-4d6d-9845-58a06545627f 
Saturday, November 23, 2013

Trying to Decide On What's My Next Game Console

I have to admit, I don't have a lot of time for gaming these days. I have even less desire to waste money on the next wave of content whose format won't live more than a couple of years. It's one of the reasons I'm starting to second-guess my choice of buying Blu-ray movies when cloud-based content is starting to take off. Frankly, the only use case that doesn't push me more completely in that direction is the sometimes-connected state I find myself in. If that were to go away, I'd be completely cloud-based, though I have to admit the pricing model for that content still doesn't match my expectations.

With the holidays coming up, I am beginning to wonder if it isn't time for a new gaming console. I have both an Xbox 360 and a PS/3. The latter was purchased primarily as a Blu-ray player and secondarily as a gaming console. To be honest, I own very few games for the PS/3. As a gaming platform it just never took off; at least not for me. The Xbox 360, on the other hand, offered good games. So alas I ended up with two consoles. Not my preference, but done out of necessity to get the best of both worlds.

So now I'm back on my quest to decide between the latest and greatest platforms. Should I get the Xbox One or the new PlayStation? Not sure yet, but the fact that the Xbox One will play Blu-ray tells me I can consolidate down to one console. Finally - Yes! I'll be doing some more reviewing and considering, but I know me - in the end I'll probably stick with Xbox because let's face it - I pretty much use every other piece of Microsoft technology out there, so what's one more?

Edit:
Seems that Vudu.com is helping to make my decision. They are an Xbox One partner, so delivery of my cloud movies is now available via the Xbox. Nice!

Friday, November 22, 2013

Higher Education Adoption of the Cloud

Business Cloud News writes on the slow move to the cloud by higher education in "Ovum: Higher education lagging in cloud adoption". The focus in the article is on the use of Learning Management Systems for these institutions.

While I agree with the article's general premise regarding the rate of cloud adoption by higher ed, I would argue that the use of cloud platforms will not be driven by LMS usage alone, but will extend much further across their enterprises. Student information systems, housing systems, CRM, etc. will also drive them into the cloud - particularly where commodity platforms can be operated much more cost-effectively there.

Our customers are tech-savvy folks, but they are also pragmatic. The single biggest impediment to adoption, in my opinion, isn't the desire to move to the cloud, but the cloud's maturity. As the next generation of services becomes available and stability and reliability go up, that's when you'll see adoption take off.

Bottom line: Until public cloud providers dramatically improve their product stability and make it a true value add you won't see the education sector move to it in large scale.

Thursday, November 21, 2013

Preparing for the Cloud

Summary
In this post I'm going to discuss three common pitfalls that you should be wary of when planning to put your first application into the cloud. I'll be focusing on Windows Azure, but the points I make are equally applicable to other public cloud providers.
On-Premises Apps vs Cloud Apps
If you're like many people that have not yet put a production application into the cloud, then you may have a lot of questions about what that means. Developing apps for the cloud often uses tools and languages that you're already familiar with. However, the approach that you take to designing your app should be significantly different.
Probably the most significant way in which these two environments differ is the volatility of the services and infrastructure that you can depend on. Keep in mind that when you are running in a public cloud environment, you are running in a distributed, virtualized data center, and are sharing your resources with many other tenants. The effect of this can be quite severe in some cases.
Transient Errors
The cloud is a highly distributed environment. The resources that you depend on will likely not reside on the same host, or even in the same rack or data center, that your app is running in. Much of the infrastructure that supports a cloud provider's resources is highly redundant and configured for high availability. But that doesn't mean that blips in connectivity and availability don't occur. Sometimes those blips can extend long enough to become bleeps - usually from you at 2 a.m. when you get that support call.
An important part of your application design when building for the cloud is transient error handling. What exactly does this mean? At its simplest, it means that you need to add retry logic to your code that interacts with services that are external to your app. Which ones? ALL OF THEM. Candidates for this are caching, database, storage, and other services that are probably critical to your app. This can even mean your own services. Consider your web site's service agents when they call out to that SOAP or REST service to get or update data.
Take database connectivity as an example. Typical on-premises applications assume that the database connection will work when you ask for it. It's often considered a hard failure when you are unable to connect to the database. In the cloud, however, this is a situation that can happen regularly when the environment is under stress. Those connections are valuable, and in order to keep as many resources available as possible, the environment will periodically close connections in the pool. Transient error handling in this case means adding retry logic around opening your connection. A good practice here is to employ sliding retry logic: rather than retrying immediately after each failed attempt, increase the amount of time you wait between retries. A Fibonacci or exponential scale can help. Set a reasonable maximum number of retries before your code gives up.
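The sliding retry described above can be sketched in a few lines. This is a language-agnostic illustration in Python rather than anything Azure-specific, and TransientError is a hypothetical stand-in for whatever transient fault your database driver actually raises:

```python
import random
import time

class TransientError(Exception):
    """Hypothetical stand-in for a driver-specific transient fault,
    such as a pooled connection closed while the environment is under stress."""

def with_retries(operation, max_attempts=5, base_delay=0.5):
    """Run operation(), retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # reasonable maximum reached: give up
            # Wait longer after each failed attempt: 0.5s, 1s, 2s, 4s, ...
            # plus a little jitter so clients don't all retry in lockstep.
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1))
```

Opening a database connection would then be wrapped as something like `conn = with_retries(open_connection)`, with the same pattern applied to every external service your app calls.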
Service Outages
In spite of the best efforts of cloud providers to deliver five 9's of service, the reality is something much less. Windows Azure currently runs at around three 9's of availability. What does this mean to you as the consumer of those services? It means you need to think redundancy. For instance, if your app has a requirement for high availability, you should consider building out your deployments in more than one data center. Windows Azure provides tools to help with this. Whether you are using Platform as a Service or Infrastructure as a Service VMs, it is very straightforward to deploy and configure multiple instances of your application in more than one data center. A service such as Windows Azure Traffic Manager lets you stand up a highly available load balancer in front of your web application or web services quickly and easily. Depending on your needs, you can configure WATM in a fail over or round robin mode; the latter lets you get some benefit out of your backup deployments. Don't forget to factor into your budget the increased cost of the additional hosted services you stand up for redundancy.
Oftentimes, failures in cloud services will only affect a portion of your application, which makes it extremely important to be able to fail over just that portion of the app. Consider a traditional 3-tier application with a web UI, services for business logic, and a SQL database. If your services tier experiences issues, it is critical for your web application users that your UI be able to switch to another set of services. Just as you can use WATM to handle fail over for your web UI, you can use it to fail over your services as well. Alternatively, your application can handle the fail over to a redundant set of resources itself, depending on your implementation. One layer is particularly difficult to fail over due to its statefulness: the resource layer, which is often a SQL database but can also involve file or other persistence mechanisms. Failing it over properly often involves multi-write/read logic to handle near real time redundancy, with logic to identify and manage a master and slave relationship between data providers. This is not a trivial exercise. Microsoft provides Data Sync for SQL Azure; its window for replication is 5 minutes, which may or may not serve your needs. If you need something more granular than that, be prepared to design for it.
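Where your application handles the fail over itself rather than leaning on WATM, the core idea is simply an ordered list of redundant deployments. A minimal sketch, again in Python for illustration; the endpoint URLs and ServiceUnavailable exception are hypothetical placeholders for your actual deployments and your HTTP client's failure mode:

```python
class ServiceUnavailable(Exception):
    """Hypothetical stand-in for a connection failure or HTTP 5xx response."""

def call_with_failover(endpoints, call):
    """Try each redundant deployment in order and return the first success.

    endpoints is an ordered list, primary deployment first;
    call(endpoint) performs the actual request against one deployment.
    """
    last_error = None
    for endpoint in endpoints:
        try:
            return call(endpoint)
        except ServiceUnavailable as err:
            last_error = err  # this deployment is down; fall through to the next
    raise last_error  # every deployment failed; surface the last error
```

This is the easy, stateless case - exactly why the web and services tiers fail over cheaply while the resource layer, as noted above, does not.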
Throttling
Windows Azure and the services you have access to are multi-tenant. Throttling is a way of life in the cloud and is used to ensure that the environment is not overrun with load. IT administrators are very aware of the effects a VM instance can have on its host when it is allowed to consume too many resources: it usually means resource starvation for the other VMs on that host. Throttling limits are set by the cloud provider in their infrastructure to limit the effects of resource starvation. They are out of your control. The single most important thing to remember about throttling is that it is meant to protect the cloud provider's resources - not your app. The effects of throttling can be insignificant or debilitating depending on the way your application is designed. In severe cases, resources that you depend on may be temporarily unavailable due to throttling. Just like service outages, throttling can make it appear to your app that a critical service is unavailable or experiencing transient errors. One way to mitigate this is to make the most efficient use of resources possible and to employ caching wherever possible to minimize the number of trips to expensive resources such as the database.
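The caching mitigation above is the classic cache-aside pattern: check a local cache first, and only go to the expensive (and potentially throttled) resource on a miss. A minimal sketch, assuming an in-process dictionary cache and an illustrative TTL; a real app would more likely use a distributed cache service:

```python
import time

class CacheAside:
    """Minimal cache-aside helper: serve repeat reads from a local cache,
    going to the expensive resource (e.g. the database) only on a miss
    or after the cached entry expires."""

    def __init__(self, fetch, ttl_seconds=60):
        self.fetch = fetch        # callable that actually hits the resource
        self.ttl = ttl_seconds
        self._store = {}          # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]       # fresh cache hit: no trip to the resource
        value = self.fetch(key)   # miss or stale: one trip, then remember it
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value
```

Every read served from the cache is one fewer request counted against whatever throttling limits the provider enforces.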
Summary
This post provides a high-level view of some of the pitfalls that application developers can run into when deploying their application into the cloud. Follow-on posts will tackle these in detail and look at specific ways to mitigate them.