Start the Day By Planning the Day

It’s a new day! What should I work on? Where should I dedicate my time?

Taking 10 or 15 minutes every morning to plan the day is the most important step. This is your chance to look at everything on your plate and determine what needs to be done today. You cannot do everything, so prioritization is critical.

I have multiple inputs to my morning planning:

  1. Emails. I know, I know, everyone says don’t look at your emails first thing! But, I pretty much ignore my email for the rest of the day, so the first thing I do is get to Inbox Zero every morning. The email either gets deleted or it ends up in my todo list to be prioritized.

  2. Help Desks. We have an internal help desk system that contains requests that could be from another department, or from a customer through our support department. I check this every morning and add any new items to my todo list.

  3. Team Kanban Board. This is where my team’s actual projects live, and it’s where I’d prefer to spend most of my time. This includes new projects and bugs.

  4. Todoist. I’m a huge Todoist fan (I’ll talk more about this in a future post). It’s just a Todo app, though… use your Todo app of choice. The important thing is that it contains everything that I need to get done that is not in the Team Kanban board. Any emails, help desks, or work that just showed up at my door goes here. It’s the one source to look at for all my open work (except for my kanban board, which I do not duplicate here…)

So, now all of the emails and help desks have been consolidated into my todo list, along with anything left over from a previous day or scheduled for today. I’m ready to prioritize.

Each todo item can be organized as:

  1. Today – These items are important and urgent and need to be accomplished today.
  2. Future – These items are important but not urgent, so I reschedule them in Todoist to show up on a future day (either tomorrow or some other future date).
  3. Delegate – These items are important but don’t need to be completed by me. They can be assigned to a teammate or other employee. I assign them out and, if important, I set up a todo to check in on it in a few days.
  4. Delete – These items are no longer important and can be completely removed.

I now have a list of everything that needs to be done today outside of what I consider my team’s work. It’s a great day when this list is empty!

I then look at the Kanban board and determine today’s priorities. I always start at the right side of the Kanban board and move to the left. If an item needs to be deployed to Production, that is most important. Then QA bug fixes, then QA deployments, then new development, then design.

I now take all these items (todo list and kanban board priorities) and reprioritize them on my office white board. I enjoy being able to glance at my whiteboard and see where I’m at on the list I created first thing this morning. The final whiteboard list drives my day.

Any new interruptions that come in do not get worked on unless they are critical. Interruptions get sent to my todo list to be prioritized the next morning.

At the end of the day, I re-synchronize my todo list by removing everything I’ve accomplished that day. Anything that was not completed gets rescheduled for tomorrow, so that it can be reprioritized the next morning.

Working and prioritizing this way has helped me keep stress under control by always knowing that I am working on what I planned instead of constantly reacting to every interruption during the day. It has also greatly improved my reliability: it is very difficult for a task to slip through the cracks. If it’s important, it will get done.

In future posts, I’ll give more details on how I use Todoist.

Let me know how you plan your day!

Enums & APIs

Enums are a double-edged sword. They are extremely useful to create a set of possible values, but they can be a versioning problem if you ever add a value to that enum.

In a perfect world, an enum represents a closed set of values, so versioning is never a problem because you never add a value to an enum. However, we live in the real, non-perfect world and what seemed like a closed set of values often turns out to be open.

So, let’s dive in.

Beer API

My example API is a Beer API!

I have a GET that returns a Beer, and a POST that accepts a Beer.

[HttpGet]
public ActionResult<Models.Beer> GetBeer()
{
    return new ActionResult<Models.Beer>(new Models.Beer()
    {
        Name = "Hop Drop",
        PourType = Beer.Common.PourType.Draft
    });
}

[HttpPost]
public ActionResult PostBeer(Models.Beer beer)
{
    return Ok();
}

The Beer class:

public class Beer
{
    public string Name { get; set; }

    public PourType PourType { get; set; }
}

And the PourType enum:

public enum PourType
{
    Draft = 1,
    Bottle = 2
}

The API also converts all enums to strings instead of integers, which I recommend as a best practice.

services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2)
    .AddJsonOptions(options =>
    {
        options.SerializerSettings.Converters.Add(new Newtonsoft.Json.Converters.StringEnumConverter());
    });

So, the big question comes down to this definition of PourType in the Beer class.

public PourType PourType { get; set; }

Should it be this instead?

public string PourType { get; set; }

We’re going to investigate this question by considering what happens if we add a new value to PourType: Can = 3.

Let’s look at the pros/cons.

Define As Enum

Pros

When you define PourType as an Enum on Beer, you create discoverability and validation by default. When you add Swagger (as you should do), it defines the possible values of PourType as part of your API. Even better, when you generate client code off of the Swagger, it defines the Enum on the client-side, so they can easily send you the correct value.

Cons

Backwards compatibility is now an issue. When we add Can to PourType, we have created a new value that the client does not know about. So, if the client requests a Beer and we return a Beer with a PourType of Can, deserialization will fail on the client.
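
To make this concrete, here is a rough sketch (not from the API code above) of a client whose generated Beer model predates Can. The JSON literal is hypothetical, but it matches the shape the API would return once Can exists; Json.NET throws when it cannot map "Can" onto the old enum.

using System;
using Newtonsoft.Json;

// Client-side contract, generated before Can was added.
public enum PourType
{
    Draft = 1,
    Bottle = 2
}

public class Beer
{
    public string Name { get; set; }
    public PourType PourType { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // The newer API starts returning the new value.
        var json = "{\"Name\":\"Hop Drop\",\"PourType\":\"Can\"}";

        try
        {
            var beer = JsonConvert.DeserializeObject<Beer>(json);
        }
        catch (JsonSerializationException ex)
        {
            // Error converting value "Can" to type 'PourType'...
            Console.WriteLine(ex.Message);
        }
    }
}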

Define As String

Pros

This allows new values to be backwards compatible with clients as far as deserialization goes. This will work great in cases where the client doesn’t actually care about the value or the client never uses it as an enum.

However, from the API’s perspective, you have no idea whether that is true. It could easily cause a runtime error anyway. If the client attempts to convert the string to an enum, it will throw. If the client uses the value in an if or switch statement, a new value will lead to unexpected behavior and possibly an error.
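
As a quick illustration, here is some hypothetical client code for the string version of PourType; the HandleBeer method and its branches are made up for this example.

// Hypothetical client code when Beer.PourType is exposed as a plain string.
public void HandleBeer(Beer beer)
{
    // Throws an ArgumentException the first time the API returns "Can".
    var pourType = (PourType)Enum.Parse(typeof(PourType), beer.PourType);

    // And a switch on the raw string silently falls into the default branch.
    switch (beer.PourType)
    {
        case "Draft":
            // pour from the tap...
            break;
        case "Bottle":
            // grab a bottle opener...
            break;
        default:
            // "Can" lands here, possibly unhandled.
            break;
    }
}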

Cons

The biggest issue is that discoverability is gone. The client has no idea what the possible set of values is; it has to pass a string, but has no idea which string.

This could be handled with documentation, but documentation is notoriously out of date and defining it on the API is a much easier process for a client.

So What Do We Do?

Here’s what I’ve settled on.

Enum!

The API should describe itself as completely as possible, including the possible values for an enum property. Without them, the client has no idea what values are allowed.

So, a new enum value should be considered a version change to the API.

There are a couple ways to handle this version change.

Filter

The V1 controller could now filter the Beer list to remove any Beers that have a PourType of Can. This may be okay if the Beer only makes sense to clients that understand the PourType.
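
A minimal sketch of what that might look like on a hypothetical V1 list endpoint (the GetBeers action and the _repository field are made up for illustration):

[HttpGet]
public ActionResult<IEnumerable<Models.Beer>> GetBeers()
{
    // V1 clients don't know about Can, so hide those beers entirely.
    var v1Beers = _repository.GetBeers()
        .Where(b => b.PourType != Beer.Common.PourType.Can)
        .ToList();

    return new ActionResult<IEnumerable<Models.Beer>>(v1Beers);
}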

Unknown Value

The Filter method will work in some cases, but in other cases you may still want to return the results because that enum value is not a critical part of the resource.

In this case, make sure your enum has an Unknown value. It will need to be there at V1 for this to work. When the V1 controller gets a Beer with a Can PourType, it can change it to Unknown.

Here’s the enum for PourType:

public enum PourType
{
    /// <summary>
    /// Represents an undefined PourType, could be a new PourType that is not yet supported.
    /// </summary>
    Unknown = 0,
    Draft = 1,
    Bottle = 2
}
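
The V1 controller’s mapping can then be as small as this sketch (it assumes the server-side enum has since gained Can = 3, as described earlier):

// The server now knows about Can, but the V1 contract only advertised
// Unknown, Draft, and Bottle. Map anything newer back to Unknown before returning.
if (beer.PourType == Beer.Common.PourType.Can)
{
    beer.PourType = Beer.Common.PourType.Unknown;
}

return new ActionResult<Models.Beer>(beer);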

Because Unknown was listed in the V1 API contract, all clients should have anticipated Unknown as a possibility and handled it. The client can determine how to handle this situation… it could have no impact, it could have a UI to show the specific feature is unavailable, or it could choose to error. The important thing is that the client should already expect this as a possibility.

Resource Solution

One thing that should be considered in this situation is that the enum is actually a resource.

PourType is a set of values that could expand as more ways to drink Beer are invented (Hooray!). It may make more sense to expose the list of PourType values from the API. This prevents any version changes when the PourType adds a new value.

This works well when the client only cares about the list of values (e.g. displaying the values in a combobox). But if the client needs to write logic based on the value it can still have issues with new values, as they will land in the default case.

Exposing the enum as a resource also allows additional behavior to be added to the value, which can help with client logic. For example, we could add a property to PourType for RequiresBottleOpener, so the client could make logic decisions without relying on the “Bottle” value, but just on the RequiresBottleOpener property.

The PourType resource definition:

public class PourType
{
    public string Name { get; set; }

    public bool RequiresBottleOpener { get; set; }
}

The PourType controller:

[HttpGet]
public ActionResult<IEnumerable<PourType>> GetPourTypes()
{
    // In real life, store these values in a database.
    return new ActionResult<IEnumerable<PourType>>(
        new List<PourType>
        {
            new PourType { Name = "Draft" },
            new PourType { Name = "Bottle", RequiresBottleOpener = true },
            new PourType { Name = "Can" }
        });
}
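
On the client side, the payoff is that logic can key off the capability flag instead of hard-coded names. A hypothetical client snippet (GetPourTypesAsync stands in for whatever generated client method you use):

// No switch on "Draft"/"Bottle"/"Can" strings needed.
var pourTypes = await client.GetPourTypesAsync();

foreach (var pourType in pourTypes)
{
    if (pourType.RequiresBottleOpener)
    {
        // e.g. show a "bring a bottle opener" hint in the UI
    }
}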

However, this path does increase complexity at the API and client, so I do not recommend this for every enum. Use the resource approach when you have a clear case of an enum that will have additional values over time.

Conclusion

I have spent a lot of time thinking about this and I believe this is the best path forward for my specific needs.

If you have tackled this issue in a different way, please discuss in the comments. I don’t believe there is a perfect solution to this, so it’d be interesting to see others’ solutions.

The Great Azure DevOps Migration – Part 6: Import

This is it! We’ve made it to the import step! This is when we finally move our data into Azure DevOps Service.

If you missed the earlier posts, start here.

I highly recommend Microsoft’s Azure DevOps Service Migration Guide.

Detach Collection

First, you need to detach the collection from TFS. Don’t detach the database in SQL Server; detach the collection in Azure DevOps Server.

To detach the collection, open the Azure DevOps Management Tool, go to Collections, and choose Detach on the collection that is going to be imported.

Generate the Database Backup

If you have managed to keep your import under 30 GB, this step is fairly easy. If not, you are in for a harder import because you now need to move your database to a SQL Server Database in Azure. I won’t cover the SQL Server migration as I did not do this step, but here is the guide on how to do this.

So, if you are going the under 30 GB route, you need to create a DACPAC that is going to be imported to Azure DevOps Service. You should be able to run the DACPAC tool from your Developer Command Prompt for Visual Studio or from the following location:

C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\150

Here is the packaging command:

SqlPackage.exe /sourceconnectionstring:"Data Source=localhost;
Initial Catalog=[COLLECTION_NAME];Integrated Security=True"
/targetFile:C:\dacpac\Tfs_DefaultCollection.dacpac
/action:extract
/p:ExtractAllTableData=true
/p:IgnoreUserLoginMappings=true
/p:IgnorePermissions=true
/p:Storage=Memory
After the packaging is completed, you will have a new DACPAC at C:\dacpac\ with all your import data.

Upload the Package

We’re not going to upload the package directly into Azure DevOps Service. First, we need to upload it to Azure itself. And then we’ll point Azure DevOps Service at the DACPAC in Azure.

The easiest way to do this is to install the Azure Storage Explorer.

  1. Open the Azure Storage Explorer app.
  2. Choose Add Azure Account.
  3. Login to your Azure Account.
  4. Go to Azure Storage Container.
  5. Create a new Blob Container named DACPAC.
  6. Upload the DACPAC file created by SqlPackage.exe.

Create the SAS Key

You need to create a secret key that will allow Azure DevOps Service to access the DACPAC.

In Azure Storage Explorer, right-click the DACPAC folder and choose Get Shared Access Signature…

  1. Set the expiration to one week from today.
  2. Give it read/list rights, nothing else.
  3. Copy the URL for the SAS Key.

This SAS URL should be placed in the import.json file that was in the Logs folder from earlier. Set it in the Source.Location field.

Import

That’s it! We are ready to start the import!

Run the following command from the Data Migration Tool folder:

Migrator import /importfile:[IMPORT-JSON-LOCATION]

The import will begin and the command will provide a link to view the status of your import.

It does take a few minutes before you can even see the import page, so don’t panic.

Once the import began, it took about two hours to complete… so this is a good time to take a break.

Validation

You did it! Your migration to Azure DevOps is completed. You should now verify that everything is working correctly.

Users

First, verify your list of users. You can find your users in the Organization Settings. I had to eliminate a lot of users that did not need access to the service. You should then set the correct Access Level for your actual users. We have a number of VS Enterprise subscriptions that I used for most of my developers, and our contractors received Basic access. Most importantly, make sure all users are listed that should be.

This is a great chance to see how much Azure DevOps Service is actually going to cost you, so make sure you set this up just like your Production environment will be.

Source Control

Because you moved your GIT source control, you don’t actually need to re-clone it; you can just redirect your existing local repo to the new location.

You can change your local repo origin with the following command (you can find the REMOTE_GIT_REPO in the Clone button in Azure DevOps Service – Repos – Files).

git remote set-url origin [REMOTE_GIT_REPO]

Billing

Make sure your Billing Account is configured for the service. When you do your Production migration, this is important. You won’t be billed till the first of the next month, so make sure you have Billing and Users set up by the end of the month.

Build / Release Agents

Any local Build / Release agents will need to be reconfigured. I only had about 10 agents running locally, so I chose to just remove them and reinstall them after the final Production run. The Powershell command makes this very easy.

I did not test this with the Dry Run; I simply reconfigured the agents after the Production migration and everything worked smoothly.

Final Import

And that is it!
We had very few other issues, the dry run went well and the Production migration a few weeks later went very smoothly.

For the final migration, I simply repeated the steps of this Guide and changed the import.json to use Production instead of Dry-Run.

I turned off our local TFS server and am keeping it around but off in case we need the legacy code.

The main thing that came up after final migration was setting Permissions for Users correctly, but I simply adjusted these settings as we went.

Some users had issues with non-Visual Studio tools being unable to connect to the remote repo, but setting their GIT Credentials in Azure DevOps Service – Repos – Files – Clone fixed the issue.

I hope you have learned from my efforts, and if you have any questions, let me know!

The Great Azure DevOps Migration – Part 5: Prepare

We’ve validated that our data is ready for import. Now, we need to prepare the data to be imported!
This is a short step, so let’s enjoy the ease of this one.

If you missed the earlier posts, start here.

I highly recommend Microsoft’s Azure DevOps Service Migration Guide.

Prepare Command

In the same way that we used the Migrator validate command earlier, we need to run a Migrator prepare command. This re-runs the validation but also creates a .json file that will be used for the actual import process.

So, open Powershell to the directory that contains the Migrator.exe file (in the DataMigrationTool download). Execute the cmd below:

Migrator prepare /collection:[COLLECTION_ADDRESS] /tenantdomainname:[AZURE_TENANT_NAME] /region:CUS

I recommend using the localhost address for your collection to verify that you are pointed at the right server. The tenant domain name is the Azure Active Directory that your newly imported data will connect to. The region must be from a narrow list of Azure regions, so make sure you choose a supported one. View the full list here.

Execute the command and you will see results similar to the validation run earlier.

If all goes well, you will find the new import.json file in the Logs folder of the DataMigrationTool. Inside Logs, open the newest folder, and open the import.json file in a text editor.

There are a bunch of fields in this file, but we only care about the ones at the very top. Update the following fields:

  • Target.Name – The name of your organization that will be created at Azure DevOps.
  • Properties.ImportType – DryRun for this initial test.

The two source fields will be updated in the next post.

Azure Storage

Next, you need to setup an Azure Storage Container. This is the location you will move the file containing all of your TFS data to before importing it into Azure DevOps Service.

In Azure, you just need to create a new Standard Storage Container. This container has to be created in the same data center region as you set in the import.json file. So make sure you pay attention to that!

I simply created a Standard Storage Container in Central US, easy.

What’s Next?

We’re so close! Our data is now prepared for import!

In the next step, we’ll push the data to the Storage container and begin the import process!

The Great Azure DevOps Migration – Part 4: Validation

We have the staging server setup. We’ve cleaned out the data that we don’t want to import. We’re almost ready!

We need to run the Azure DevOps Service validation on our local server to verify that there are no issues before importing. This validation will alert us to any issues that need to be resolved before the actual import.

If you missed the earlier posts, start here.

I highly recommend Microsoft’s Azure DevOps Service Migration Guide.

Get the Tool

Start by downloading the Data Migration Tool. This tool contains the Guide, which you should definitely read, and the actual Migration tool.

Copy the .zip for the Data Migration Tool to the staging server and unzip it to the C: drive.

Run Validation

Open a command prompt and change directory to the unzipped DataMigrationTool folder. This folder contains the Migrator.exe file.

To execute the validation, run Migrator validate /collection:[COLLECTION_NAME] from the command prompt. Make sure you are executing this on your staging server; use localhost:8080 in the collection address to make sure you are pointed at the right server.

The validation only takes a few minutes to run and it creates a number of log files with the results.

Analyze the Results

You can view the results of the validation in the command prompt or in the log file stored in the Logs folder of the DataMigrationTool. Open Logs, select the Collection you validated, then click the latest folder (one is made for each migration validation), then open DataMigrationTool.log.

I had a few issues that needed to be resolved, which I’ll explain below. You’ll probably get different ones which you can lookup in the Migration Troubleshooting Guide. None of the issues I ran into were especially hard, just had to read up on the fix.

VS403443 Error

This validation error means that you need to rename a work item field. This seems to happen with old databases that have been updated over time. The schema needs to be tweaked to get in sync with the service.

I had about 8 of these to fix, which I was able to do with the witadmin tool inside the Developer Command Prompt.

witadmin changefield /collection:[COLLECTION_NAME] /n:System.IterationId /name:"Iteration Id"

The important thing (which I screwed up at first) is that the /n parameter is the field’s reference name, and /name is the new name that you change it to.

ISVError:100014 Error

This error means that one of the built-in groups is missing required permissions. It needs to be re-added using the TFSSecurity.exe tool.

Use the instructions at Migration Troubleshooting to resolve this issue.

Users

You may get some validation issues about users, but in my test run I fixed my users after the import by making sure the correct users have access and removing users that didn’t belong.

You won’t get charged till the 1st of the following month after import, so you will have time to address any user import issues.

Space

If you get a warning that your import is too large and needs to be done by importing to an Azure SQL Database first, your import is about to get a lot harder. I initially had this warning, and it is the reason that I cleaned out more of the data in my import (in our previous step). If you can get under this limit, it will make your life easier. If you can’t, you’ll need to do a few extra steps on the import which I won’t be doing, but I’ll provide a link to the guide on how to do it.

What’s Next?

We have successfully validated that the Import is ready to go!

Next, we will prepare the actual Migration package.

The Great Azure DevOps Migration – Part 3: Clean

Before migrating the TFS data into Azure DevOps, it’s a good idea to eliminate any data that you don’t need to move into the new service. Ten years of TFS has accumulated a huge amount of code, and I really only need to bring my latest repos forward.

This part will show which data to eliminate and the quickest way to do it.

If you missed the earlier posts, start here.

I highly recommend Microsoft’s Azure DevOps Service Migration Guide.

Team Projects

In my case, I had about 50 Team Projects. Each Team Project contains its own process template, code repository, and work items.

I had about 50 of these because we imported from SourceSafe over 10 years ago and the import process set it up this way. We actually only use one of these projects on an ongoing basis. Over the last 10 years, we migrated the code for active projects into this main project so we could share work items and project templates.

Because of this, I have about 50 extra projects that are old and rarely (if ever) worked on. None of them have their own work items. I don’t want to bring any of these projects into the Azure DevOps service.

The future plan is that if we need to access the code for one of these projects, we’ll import it as a new GIT repository into our Azure DevOps Service project.

Delete Team Projects

So, for Step 1, delete the unnecessary Team Projects. This is most easily done through the Web UI for TFS.

Make sure you are on the Staging TFS Web UI!

In the Web UI, you need to access the Collection’s Settings. In the breadcrumb trail, at the top of the UI, click the root (mine is DefaultCollection). Then click Admin Settings at the bottom left corner. This will show you the full list of projects in your collection. Click the ellipsis next to each project (except for the ones you want to keep) and click Delete.

If any of these projects have a large code-base, this will take a long time. One of my big ones took over an hour, so be prepared to wait.

Team Foundation Version Control

Before GIT, we had TFVC. TFVC was the only source control that TFS supported in the beginning, so if you’ve been using it for long, you probably have lingering TFVC repositories.

We now exclusively use GIT, so I don’t want to migrate any of the TFVC repositories to Azure DevOps Service. If you are using TFVC, you can migrate these repositories… but I recommend you move to GIT anyway because it’s awesome.

My current project that I’m migrating contains both GIT and TFVC, so I want to purge the TFVC before migration. You can’t actually destroy the TFVC repository, it is there forever… but you can clear everything inside of it.

Delete Workspaces

First, delete all the workspaces. TFS won’t let you delete the code until the attached workspaces are gone.

The best way I found to do this was using a blast from the past: TFS Sidekicks!

TFS Sidekicks is a handy tool for TFVC but has fallen away as GIT has taken over. However, it still works in Azure DevOps Server 2019.

Install it onto your staging server and run it. Go to the Workspaces tab, and highlight and delete all the workspaces. Easy!

Delete Code

Now for the code. The best way to delete the code from the database for good is with a command line “tf destroy”. This will eliminate the code completely from the database.

It is also very important that you include the /startcleanup parameter as that will tell the database to remove it immediately. Otherwise, it can take up to five days to be removed.

There is one caveat: the tf destroy command will fail if it takes too long to run. So, if you have an enormous amount of code, you will need to do it in smaller chunks.

I had a ton of branches persisted in TFVC, so I had to do it one branch at a time. It took a while, so maybe put on a TV show while you do this…

The tf destroy command needs to be run from the Developer Command Prompt for Visual Studio. Type that into Start to find it.

Then run tf destroy $/[REPOSITORY]/[FOLDER] /startcleanup

If your repository is small enough, skip [FOLDER] and attempt to destroy it all in one run.

Summary

Your old data is cleaned out! This may seem unnecessary, but there are two reasons to do this.

  1. We really want to be under 30 GB before migration to have the simplest migration possible. More on this later…
  2. This is a great opportunity to cut loose clutter that you no longer need. If you think you will need this code in the future, keep it. But in my case, I’m 99% sure I will never need it again. And if I do need it again, I want to migrate it to GIT anyway.

What’s Next?

In the next post, we begin the Validation.

The Great Azure DevOps Migration – Part 2: Setup

The first step to a successful Azure DevOps migration is to setup your staging VM. I want to completely isolate my migration from my live TFS server.

If you missed the Introduction post, get caught up here.

I highly recommend Microsoft’s Azure DevOps Service Migration Guide.

Why a Staging VM?

  • I need to delete some code from my TFS repos before importing to Azure DevOps.
  • I need to make several changes to the project templates to pass validation. I don’t want any of these changes to affect my live TFS server in case I need to rollback.
  • I want the live TFS server to become my backup for the code I’m deleting, in case we ever need to access it again.

These requirements make it safest to completely isolate my staging VM from my live VM.

But First…

The very first thing that needs to be done is to update your existing live TFS server to the latest version of Azure DevOps Server. You need to be within the latest two versions to successfully migrate. So, before even starting this process, get your live TFS server up to date.

Setup the New VM

The first step is to setup a VM that will host the migration. The safest path is to not clone the existing VM, but setup a new VM from scratch. This is because cloning the VM will pull across paths pointing at your live TFS VM.

Server

Save yourself some pain later and just install the latest Windows Server version. I originally used the version that my live TFS server was using (2012), but you’re going to need some tooling later on that requires later versions of Windows Server. So, just install the latest now, it’ll work fine.

You’re also going to need a ton of hard drive space. I ended up increasing the size of my server about 4 times during the process as I kept realizing I needed more. Just give it a ton of space to start. I set it to 1 TB for the final import.

SQL Server

Install the exact same version of SQL Server that the live TFS server is using. Install the Management Tools, too. You will need those later.

Install IIS

IIS is needed to access the new TFS install. Install this from Windows Features.

Install Azure DevOps Server

It is critical that you install the exact same Azure DevOps Server version that you are using on your live TFS server. I initially installed 2019, and then had to go back and install 2019.0.1. They have to match exactly.

After installation, close the startup dialog; we’re going to use a backup.

Take Full Backup of Live TFS Server

The best way to do this is to use the built-in SQL Backup tooling. I don’t actually use this on the live TFS server, but I wanted to use it for this process because it is easiest.

To make it work, just change the TFS databases on the live TFS server to FULL backup temporarily (if they are not already). In Scheduled Backups (on the live TFS server), use Take Full Backup Now.

When the backup is complete, copy it to the C drive of your new staging server.

Be sure to change your live TFS server’s backup settings to their initial settings if necessary.

Hosts File (If You’re Scared…)

If you are scared of accidentally impacting your live TFS server (like I definitely was), you can be extra safe by blocking access to your live TFS server from the staging migration server.

Open the hosts file in C:\Windows\System32\Drivers\Etc\ and add two lines:

127.0.0.1 LIVE_SERVER_NAME
127.0.0.1 LIVE_SERVER_NAME.DOMAIN.local

This is going to point any calls to your live TFS server back to your staging server.

Whew…. feels safer already!

Still Scared? Shutdown the Live TFS Server

If you’re still scared, you can also shutdown the live TFS server while you make the big changes, like deleting old code.

My biggest fear is being accidentally logged into the wrong VM and running commands on the live TFS server, or clicking a browser bookmark that unknowingly takes me to the live TFS server.

For those reasons, it is probably worth shutting down the live TFS server during this process (if possible). If you can’t do that, just be very careful when deleting code.

Restore Backups

Back on the staging server, restore the backups from the C: drive using the Azure DevOps Management Tool.

In Scheduled Backups, choose Restore Databases. This will handle the SQL restore for you.

Configure Azure DevOps Server

Now, in the Azure DevOps Configuration Center, configure the installation. Use the existing databases and, when asked, choose Configure As A Clone. This will adjust all the URLs inside the database so they do not point to your live TFS server.

I used the same service users for this TFS install as I used on my live TFS server; however, to be extra safe, you could set up new users for this staging server.

Visual Studio 2019

The last step is to install Visual Studio 2019. This will be needed later to get access to the TFS commands and the SSDT commands.

What’s Next?

We now have our staging migration server setup and we’re ready to start cleaning up the data to be imported.

In the next step, I’m going to eliminate the data that we don’t want to move to our new Azure DevOps Service. This will reduce the size of our import and give us a cleaner final setup.

The Great Azure DevOps Migration – Part 1: Introduction

This series is going to describe the process I went through to migrate my company’s on-premises TFS setup to Azure DevOps in the cloud. The process did turn out to be much more time-consuming than I anticipated, so hopefully this can help future migrators!

This guide will cover the issues I ran into with my setup; you should look at the Microsoft docs for any of your own specific issues.

The guide will cover a full dry-run of the migration, and then the final live migration. You must do a dry run first!

On Premises

I’ll start by describing my current on-premises setup, and what I expect my final migrated setup to look like.

Version

We transitioned from Visual SourceSafe to TFS 2008 about 10 years ago. Since the initial installation, we have been really good about updating to the latest version of TFS as it was released.

So, our current version of on-premises TFS is running Azure DevOps Server 2019.0.0. TFS was renamed to Azure DevOps Server this year, but it’s still just TFS with a fancier name.

If you haven’t kept your TFS version up to date, you are going to need to upgrade it to the latest version of on-premises TFS before starting this process.

Collections

We have a single TFS collection named DefaultCollection. When migrating to Azure DevOps, each collection gets migrated as a separate account, so having a single collection is the easiest path forward if you have a small team.

Projects

Inside each Collection, you can have multiple Projects. Each Project can have its own Process template and Project settings. We have about 50 projects, but we actually only use one. When we migrated from SourceSafe to TFS 10 years ago, the migration tool converted each project (per application) into an individual TFS project.

Over time, we consolidated the active projects into a single TFS project. So, only one of the 50 projects is under active development, the rest are legacy apps or abandoned apps that are never modified.

For the move to Azure DevOps, I will be moving only the active TFS Project and leaving the old projects behind. If we need those projects in the future, I will move just their current code base into a new Azure DevOps GIT repository.

GIT vs TFVC

GIT did not exist in TFS when we started using it, so about 4 years ago (?) we migrated all of our active code-bases in the active project to use GIT repos instead of TFVC. We did this inside the current project, so that we could maintain our existing Work Items. That means we currently have a Project containing a TFVC repository and multiple GIT repositories.

For the move to Azure DevOps, I am going to only migrate the GIT repositories, so I will need to remove the TFVC repository before completing the import.

Azure

We already have an account in Azure. We don’t host our entire application infrastructure in Azure, but we do host some services there, so we have active subscriptions.

Build Servers

We host two TFS build servers locally. For a few of our applications, we have third-party dependencies that need to be installed on the build server, but the majority of our applications could be built from a non-custom build server.

For now, I plan to use our existing local build servers for all builds, but long-term we should be able to migrate many of our apps to be built in the Azure DevOps service.

Process

We have made some changes to the TFS Process. Mostly, we’ve added fields to work items, modified the work item UI, and we’ve added a few extra work item types.

I believe it will all import smoothly, as we have not made any extreme changes to the Process.

What’s Next?

In the next post, I will cover setting up the new VM to host the staged TFS installation for migration.

  1. Introduction
  2. Setup the Staging VM
  3. Purge Unnecessary Data
  4. Validate the Migration
  5. Prepare the Migration
  6. Migration Dry Run
  7. Live Migration
  8. Conclusion

Too Much Choice

My four-year-old daughter recently started having trouble making choices. We were picking a new board game to purchase, and the massive number of choices would overwhelm her. When I asked her why she didn’t want to just pick one, she responded that she did want to, but she wasn’t sure which one was the right one. What if the one she picked wasn’t the most fun one?

At first I thought, just pick one! It’s just a board game… But soon I realized that this same indecision plagues me. And probably plagues a lot of us.

She realizes that she isn’t just choosing what she wants to do, she’s choosing what she doesn’t want to do.

What Has Netflix Done?

I remember the simplicity of TV when I was a child. I’d turn on the TV at night and choose between a few channels. I would watch X-Files because it was on at 9 PM.

That limited choice was frustrating at the time, but there was comfort in that simplicity.

Now, I turn on the TV, and flip to Netflix. I can spend 15 minutes just flipping through the lists of shows trying to decide what I should watch. There are so many shows, so many movies… Do I want to invest in starting a new series? Do I want to invest in a 2 hour movie?

I can also switch to HBO, Hulu, or Amazon to find even more endless lists of choices.

I’m not just choosing which show I want to watch. I’m choosing which shows I’m not going to watch.

And Now It Has Happened to Gaming…

Gaming has been headed this route for a while now. But with XBOX Game Pass being introduced, I find myself in a similar scenario.

I have over 100 games at my fingertips. I will never have time to play them all. Which one should I dedicate my time to?

And games have gotten so big! Playing The Witcher 3 isn’t just a few hours. Some of these games are dozens of hours of open world gameplay.

How do I choose which one to play? Which is the best investment of my time?

I Don’t Always Want Choices

This stresses me out. And I suspect it stresses many of us out.

When I don’t want to make a choice, I just slip back into what’s comfortable to me. Overwatch, which I can play for a couple hours, or I can watch a match of Overwatch League for 90 minutes. It’s an easy choice for me, and one that I make often.

Last night, I was choosing a new XBOX game to start. I strongly considered The Division II, which I had played a demo of. But the open-world nature of it, the promise of over 50 hours of gameplay, seems exhausting to me. It’s a huge commitment!

I instead went with Wolfenstein: The New Order. I chose it because of its simpler design. I’m following a path. I play through the levels. I don’t need to explore the open-world. The game will tell me when I’m done.

I love open-world games. I love games that go on for dozens of hours. But sometimes there is comfort in a game that tells me what to do. A game that has a definitive end in sight.

Wrap Up

I tell my daughter that decisions are hard, and we will probably make wrong decisions. But we can also make another decision tomorrow. We just have to make a choice and make the best of it.

I wish I had better advice than that for her. But the truth is that we are a society that has access to everything, except for the time to do everything.

Tracking My Custom Processing Metrics with Application Insights

I have a lot of back-end applications that run processing jobs for users. These jobs run all day and for the most part they are pretty quick (less than a few seconds), but sometimes everything bogs down and all the jobs come to a halt. Users are left with a loading spinner on their client app as they wonder what is happening.

What I want is an easy to use dashboard to show me how many of these jobs are running, which user is running them, and how long they are taking. Then I can look at the dashboard and see if there is a spike of total jobs, a spike in job execution time, or a spike in a single user’s jobs.

I have some custom data like userId, accountId (for which account is being processed). I need those fields so I can filter and group the data.

I could do this by writing a record to the database at the end of each execution, but then I have to connect it to PowerBI and write my own reporting queries. I also need to do all this work every time I add a new process to report on. I want something easy!

App Insights, maybe?

I’ve used App Insights in the past. I plugged in a set of service-to-service HTTP calls as request/dependency tracing, but it created a fire hose of data that was difficult to extract any useful information from.

This time instead of a fire hose, I’m going to push in the exact data I want and attempt to build a dashboard to show me the data I want.

Prototype

My prototype solution is going to simulate a process running for different users for various lengths of time. I’m going to push that data to App Insights, and generate reports and a dashboard that I can glance at to see how the system is behaving.

I created a console C# app with the .NET Framework (4.7.2). The only Nuget package I need is Microsoft.ApplicationInsights.

Here’s the app without the metrics code. Simple, just processing jobs that take between 1 and 5 seconds for users that range from 1 to 10.

using System;
using System.Threading.Tasks;

internal class Program
{
    private static readonly Random _random = new Random();

    public static async Task Main(string[] args)
    {
        while (true)
        {
            // Next's upper bound is exclusive, so this produces user IDs 1 through 10.
            var userId = _random.Next(1, 11);

            await Process(userId);
            await Task.Delay(1000);
        }
    }

    private static async Task Process(int userId)
    {
        // Simulate a job that takes between 1 and 5 seconds.
        var processingTime = _random.Next(1, 6);
        await Task.Delay(processingTime * 1000);
    }
}

To log to App Insights, I need a reference to a TelemetryClient. TelemetryClient handles the communication to App Insights and I can create a single instance of the class to share for this entire operation.

    private static readonly TelemetryClient _telemetryClient = new TelemetryClient();

I also need to set the InstrumentationKey on the TelemetryClient. The InstrumentationKey comes from your Azure App Insights portal. Without this key, it won’t know where to store your data. You can copy it from the front page of your App Insights in Azure.

    _telemetryClient.InstrumentationKey = "-- ENTER YOUR KEY HERE --";

I need to track the length of the operation on my own using a Stopwatch. There are other ways to let App Insights track the length for me, but I want to keep this simple for now.

    var stopWatch = new Stopwatch();
    stopWatch.Start();

    // Simulated work, 1-5 seconds (Next's upper bound is exclusive).
    var processingTime = _random.Next(1, 6);
    await Task.Delay(processingTime * 1000);

    stopWatch.Stop();

Now, I’m going to use the TelemetryClient’s TrackEvent method to store the custom Event to AppInsights. TrackEvent takes in an EventName string, which I’ll use to identify this operation at App Insights. I’m calling it “ProcessExecution”.

You can also add two other sets of data to your event: Properties and Metrics.

Properties are the things that you may want to filter and group by. So, in my case UserId is a property. Properties are always strings.

Metrics are the things that you want to report on. In this case, ProcessingTime is the metric. Metrics are always numeric.

    var properties = new Dictionary<string, string> { { "userId", userId.ToString() } };
    var metrics = new Dictionary<string, double> { { "processingTime", stopWatch.ElapsedMilliseconds } };

    _telemetryClient.TrackEvent("ProcessExecution", properties, metrics);

And that’s it! This should give me everything I need to report on how long this processing is taking.
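
Putting the pieces together, the instrumented Process method ends up looking roughly like this (a sketch that reuses the _random and _telemetryClient fields defined above):

    private static async Task Process(int userId)
    {
        var stopWatch = new Stopwatch();
        stopWatch.Start();

        // Simulated work, 1-5 seconds.
        var processingTime = _random.Next(1, 6);
        await Task.Delay(processingTime * 1000);

        stopWatch.Stop();

        // Properties are the filter/group-by fields (always strings);
        // metrics are the numeric values to report on.
        var properties = new Dictionary<string, string> { { "userId", userId.ToString() } };
        var metrics = new Dictionary<string, double> { { "processingTime", stopWatch.ElapsedMilliseconds } };

        _telemetryClient.TrackEvent("ProcessExecution", properties, metrics);
    }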

I’ll run the app and let it go for a few minutes, then login to Azure to find my data.

Azure Reporting

In Azure, I go to Application Insights, and choose Analytics from the top menu. This takes me to a query tool that looks SQL-y but it’s not SQL. My immediate reaction is ugh… I don’t want to learn a new query language.

But this is actually pretty easy. I read through this Get Started document on the Kusto language and it taught me everything I needed to know for this:
https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/get-started-queries. Take the time to read through it; it will save you so much time in the long run.

The table we’re interested in is customEvents and we want the records that are named ProcessExecution.

customEvents
| where name == "ProcessExecution"

And there’s our data! (I can’t believe it worked on the first try…)



Now, what I want to see is the average processing time for each minute of the day. Easy with my new Kusto skills!

customEvents
| where name == "ProcessExecution"
| extend processingTime = toint(customMeasurements.processingTime)
| summarize avg(processingTime) by bin(timestamp, 1m)
| order by timestamp desc

This filters the custom events to our ProcessExecution events. Extend grabs the processingTime out of the metrics data that we added (notice it is a string, so we need to cast it to an integer). Summarize lets us average the processingTime over 1 minute intervals by using the bin(timestamp, 1m). That gives me this awesome report.

And now I can even display it as a Chart by clicking on the Chart button above the results.

I can even quickly add this chart to a Dashboard by using the Pin button at the top right of the screen.

This was pretty easy, much easier than I anticipated. I got this done in about an hour, and then got my actual application using it in about 2 more hours. I now have a running dashboard that shows me average execution time, executions per user, and execution failure rates over time.

I’m going to let this run in Production for a while and see how much data it consumes because with App Insights you pay for the data you use. There is also the ability to add Alerts, so I could get an Alert anytime a threshold is hit. So if ExecutionTime goes over 30 seconds, I could alert my development team. I may play with this in the future, but the costs rise when you add Alerts.

I’m pretty excited that this was so easy. Writing this from scratch with a database would have been pretty quick, too, but I wouldn’t have been able to get the quick reporting data and I’d have had to manage all the storage of this data on my own. If this goes well over the next few weeks I’ll add it to even more of my services.