
Lucas is Das Bloggin'

Creating a custom bitrate ladder from Azure Media Services Transcoding

When you submit a transcoding job to Azure Media Services with Media Encoder Standard, the documentation tells you to use one of the provided presets, like this:

string configuration = File.ReadAllText(@"c:\supportFiles\preset.json");

// Create a task
ITask task = job.Tasks.AddNew("Media Encoder Standard encoding task", processor, configuration, TaskOptions.None);

//https://docs.microsoft.com/en-us/azure/media-services/media-services-mes-presets-overview

or to use Adaptive Streaming by adding a task like this:

ITask task = job.Tasks.AddNew("My encoding task", processor, "Adaptive Streaming", TaskOptions.None);

In the first example, you are creating multi-bitrate MP4s all the way up to 1080p, or even 4K if that is the preset you selected.  In the latter example, what is happening under the covers is great: you are telling AMS to create the bitrate ladder on the fly based on the input, and letting Microsoft work its magic.  But there are limitations to using Adaptive Streaming from C#; for example, you can’t add thumbnails in the same job.

So what if you want a little more control?  I’ve created a fluent interface for building your own presets, producing a bitrate ladder that doesn’t “up-encode” past the quality of the original video.

First, we need to define an EncodingPreset class that will eventually be converted to JSON in valid MES preset format:

public class EncodingPreset
     {
         private EncodingPreset()
         {
             Codecs = new List<Codec>();
             Outputs = new List<Output>();
         }

        public double Version { get; set; }
         public List<Codec> Codecs { get; set; }
         public List<Output> Outputs { get; set; }

        public static EncodingPreset GetBaseEncodingPreset()
         {
             var preset = new EncodingPreset
                          {
                              Version = 1.0d
                          };

            preset.Codecs.Add(Codec.GetH264Codec());
             preset.Outputs.Add(Output.GetMp4Output());

            return preset;
         }

        public EncodingPreset AddNormalAudio()
         {
             Codec codec = Codecs.FirstOrDefault(c => c.Type == "AACAudio");
             if (codec == null)
             {
                 Codec audioCodec = Codec.GetNormalAudioCodec();

                Codecs.Add(audioCodec);
             }

            return this;
         }

        public EncodingPreset AddHDAudio()
         {
             Codec codec = Codecs.FirstOrDefault(c => c.Type == "AACAudio");
             if (codec == null)
             {
                 Codec audioCodec = Codec.GetHDAudioCodec();
                 Codecs.Add(audioCodec);
             }

            return this;
         }

        public EncodingPreset AddBitrateLadder(int width, int height, int bitrate)
         {
             IList<ResolutionInfo> orderedLadder = BitrateLadder.OrderedLadder; //lowest to highest resolution
             int originalPixels = width * height;
             var bitrateTolerance = .05;

            var layersToGenerate = new List<ResolutionInfo>
                                    {
                                        new ResolutionInfo // add the original
                                        {
                                            Width = width,
                                            Height = height,
                                            Bitrate = bitrate
                                        }
                                    };
             foreach (ResolutionInfo step in orderedLadder)
             {
                 if (step.Pixels <= originalPixels)
                 {
                     int min = Math.Min(step.Bitrate, bitrate);
                     layersToGenerate.Add(new ResolutionInfo
                                          {
                                              Width = step.Width,
                                              Height = step.Height,
                                              Bitrate = min
                                          });
                 }
             }

            // make the bitrates distinct - not sure i like this
             List<ResolutionInfo> orderedLayersToGenerate = layersToGenerate.OrderBy(info => info.Pixels).ThenBy(info => info.Bitrate).ToList();
             for (var i = 0; i < orderedLayersToGenerate.Count - 1; i++)
             {
                 foreach (ResolutionInfo layerToGenerate in orderedLayersToGenerate.Where(layerToGenerate => orderedLayersToGenerate.Any(info => info.Bitrate == layerToGenerate.Bitrate && info.Pixels != layerToGenerate.Pixels)))
                 {
                     layerToGenerate.Bitrate = layerToGenerate.Bitrate - 1;
                 }
             }

            foreach (ResolutionInfo layerToGenerate in orderedLayersToGenerate.Where(layerToGenerate => !HasExistingStepWithinTolerance(layerToGenerate.Width, layerToGenerate.Height, layerToGenerate.Bitrate, bitrateTolerance)))
             {
                 AddVideoLayer(layerToGenerate.Width, layerToGenerate.Height, layerToGenerate.Bitrate);
             }

            return this;
         }

        private bool HasExistingStepWithinTolerance(int width, int height, int bitrate, double bitrateTolerance)
         {
             Codec codec = Codecs.FirstOrDefault(c => c.Type == "H264Video");
             if (codec == null)
             {
                 return false;
             }

             return codec.H264Layers.Any(layer => layer.Width == width && layer.Height == height && Math.Abs((layer.Bitrate - bitrate) / (double) layer.Bitrate) <= bitrateTolerance);
         }

        public EncodingPreset AddVideoLayer(int width, int height, int bitrate)
         {
             H264Layer h264Layer = H264Layer.GetVideoLayer(width, height, bitrate);
             Codec codec = Codecs.FirstOrDefault(c => c.Type == "H264Video");
             if (codec == null)
             {
                 codec = Codec.GetH264Codec();
                 Codecs.Add(codec);
             }

            if (!codec.H264Layers.Any(layer => layer.Width == width && layer.Height == height && layer.Bitrate == bitrate))
             {
                 codec.H264Layers.Add(h264Layer);
             }

            return this;
         }

        public EncodingPreset AddPngThumbnails()
         {
             Codec codec = Codecs.FirstOrDefault(c => c.Type == "PngImage");
             if (codec == null)
             {
                 PngLayer pngLayer = PngLayer.Get640x360Thumbnail();

                Codec thumbnailCodec = Codec.GetPngThumbnailCodec();
                 thumbnailCodec.Start = "00:00:01";
                 thumbnailCodec.Step = "00:00:01";
                 thumbnailCodec.Range = "00:00:58";
                 thumbnailCodec.Type = "PngImage";
                 thumbnailCodec.PngLayers.Add(pngLayer);

                Codecs.Add(thumbnailCodec);

                Outputs.Add(Output.GetPngThumbnailOutput());
             }

            return this;
         }
     }

With supporting classes for the collections and other classes:

 

public class Codec
     {
         private Codec()
         {
         }

        public string KeyFrameInterval { get; set; }
         public List<H264Layer> H264Layers { get; set; }
         public string Type { get; set; }
         public List<PngLayer> PngLayers { get; set; }
         public string Start { get; set; }
         public string Step { get; set; }
         public string Range { get; set; }
         public string Profile { get; set; }
         public int? Channels { get; set; }
         public int? SamplingRate { get; set; }
         public int? Bitrate { get; set; }
         public string Condition { get; set; }

        public static Codec GetH264Codec()
         {
             return new Codec
                    {
                        Type = "H264Video",
                        KeyFrameInterval = "00:00:02",
                        H264Layers = new List<H264Layer>()
                    };
         }

        public static Codec GetNormalAudioCodec()
         {
             return new Codec
                    {
                        Type = "AACAudio",
                        Profile = "AACLC",
                        Channels = 2,
                        SamplingRate = 48000,
                        Bitrate = 128,
                        Condition = "InsertSilenceIfNoAudio"
                    };
         }

        public static Codec GetHDAudioCodec()
         {
             return new Codec
                    {
                        Type = "AACAudio",
                        Profile = "AACLC",
                        Channels = 6,
                        SamplingRate = 48000,
                        Bitrate = 384,
                        Condition = "InsertSilenceIfNoAudio"
                    };
         }

        public static Codec GetPngThumbnailCodec()
         {
             return new Codec
                    {
                        Type = "PngImage",
                        Start = "00:00:01",
                        Step = "00:00:01",
                        Range = "00:00:58",
                        PngLayers = new List<PngLayer>()
                    };
         }
     }

public class Output
  {
      private Output()
      {
      }

     public string FileName { get; set; }
      public Format Format { get; set; }

     public static Output GetMp4Output()
      {
          return new Output
                 {
                     Format = new Format
                              {
                                  Type = "MP4Format"
                              },
                     FileName = "{Basename}_{Width}x{Height}_{VideoBitrate}{Extension}"
                 };
      }

     public static Output GetPngThumbnailOutput()
      {
          return new Output
                 {
                     Format = new Format
                              {
                                  Type = "PngFormat"
                              },
                     FileName = "{Basename}_{Index}{Extension}"
                 };
      }
  }

public class H264Layer
    {
        private H264Layer()
        {
        }

       public string Profile { get; set; }
        public string Level { get; set; }
        public int Bitrate { get; set; }
        public int MaxBitrate { get; set; }
        public string BufferWindow { get; set; }
        public int Width { get; set; }
        public int Height { get; set; }
        public int BFrames { get; set; }
        public int ReferenceFrames { get; set; }
        public bool AdaptiveBFrame { get; set; }
        public string Type { get; set; }
        public string FrameRate { get; set; }

       public static H264Layer GetVideoLayer(int width, int height, int bitrate)
        {
            return new H264Layer
                   {
                       Profile = "Auto",
                       Level = "auto",
                       Bitrate = bitrate,
                       MaxBitrate = bitrate,
                       BufferWindow = "00:00:05",
                       Width = width,
                       Height = height,
                       BFrames = 3,
                       ReferenceFrames = 3,
                       AdaptiveBFrame = true,
                       Type = "H264Layer",
                       FrameRate = "0/1"
                   };
        }
    }

public class PngLayer
    {
        private PngLayer()
        {
        }

       public string Type { get; set; }
        public int Width { get; set; }
        public int Height { get; set; }

       public static PngLayer Get640x360Thumbnail()
        {
            return new PngLayer
                   {
                       Height = 360,
                       Width = 640,
                       Type = "PngLayer"
                   };
        }
    }

public class Format
    {
        public string Type { get; set; }
    }

a class to hold our original video information to compare to our ideal ladder:

public class ResolutionInfo
    {
        public int Width { get; set; }
        public int Height { get; set; }
        public int Bitrate { get; set; }

       public long Pixels
        {
            get
            {
                return Width * Height;
            }
        }
    }

and an extension method to convert the preset to JSON properly for this case:

public static class EncodingPresetExtensions
     {
         public static string ToJson(this EncodingPreset preset)
         {
             return JsonConvert.SerializeObject(preset,
                                                new JsonSerializerSettings
                                                {
                                                    NullValueHandling = NullValueHandling.Ignore
                                                });
         }
     }
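As a quick sanity check, here is roughly what a minimal preset serializes to; this is just the Json.NET output of the classes above (abbreviated, with property order following the declaration order):

string json = EncodingPreset.GetBaseEncodingPreset()
                            .AddNormalAudio()
                            .AddVideoLayer(640, 360, 1000)
                            .ToJson();

// {
//   "Version": 1.0,
//   "Codecs": [
//     { "KeyFrameInterval": "00:00:02",
//       "H264Layers": [ { "Profile": "Auto", "Level": "auto", "Bitrate": 1000, "MaxBitrate": 1000, ... } ],
//       "Type": "H264Video" },
//     { "Type": "AACAudio", "Profile": "AACLC", "Channels": 2, "SamplingRate": 48000,
//       "Bitrate": 128, "Condition": "InsertSilenceIfNoAudio" }
//   ],
//   "Outputs": [ { "FileName": "{Basename}_{Width}x{Height}_{VideoBitrate}{Extension}",
//                  "Format": { "Type": "MP4Format" } } ]
// }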

and finally our ideal bitrate ladder:

public static class BitrateLadder
    {
        private static readonly IList<ResolutionInfo> Ladder = new List<ResolutionInfo>();

       static BitrateLadder()
        {
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 20000,
                           Width = 4096,
                           Height = 2304
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 18000,
                           Width = 3840,
                           Height = 2160
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 16000,
                           Width = 3840,
                           Height = 2160
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 14000,
                           Width = 3840,
                           Height = 2160
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 12000,
                           Width = 2560,
                           Height = 1440
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 10000,
                           Width = 2560,
                           Height = 1440
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 8000,
                           Width = 2560,
                           Height = 1440
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 6000,
                           Width = 1920,
                           Height = 1080
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 4700,
                           Width = 1920,
                           Height = 1080
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 3400,
                           Width = 1280,
                           Height = 720
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 1500,
                           Width = 960,
                           Height = 540
                       });
            Ladder.Add(
                       new ResolutionInfo
                       {
                           Bitrate = 1000,
                           Width = 640,
                           Height = 360
                       });
        }

        /// <summary>The ladder ordered from lowest to highest resolution, then from lowest to highest bitrate.</summary>
        public static IList<ResolutionInfo> OrderedLadder
        {
            get
            {
                return Ladder.OrderBy(pair => pair.Pixels).ThenBy(info => info.Bitrate).ToList();
            }
        }
    }

Note that I have set some defaults in these classes for my particular use case.

So let’s talk about the AddBitrateLadder function:

It takes in the width, height, and bitrate from the original media file so as not to wastefully “up-encode” it.  Then it creates a ladder, making the “top” layer the original specs, and steps down from there using our ideal bitrate ladder as a guide.  I should also note that AMS keys off the bitrate, so you cannot have 2 different resolutions with the same bitrate; that is why there is code in that method to merely subtract 1 from each bitrate to make them unique when the original video quality is too low to fit into our specified ladder.  Lastly, it includes a tolerance so that you don’t create 2 layers that are virtually identical.
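To make that concrete, here is roughly what it produces for a hypothetical 1280x720 source at 2000 kbps, run against the ideal ladder defined above:

EncodingPreset preset = EncodingPreset.GetBaseEncodingPreset()
                                      .AddBitrateLadder(1280, 720, 2000);

// Ladder entries above 1280x720 are skipped, entries whose bitrate exceeds the
// source are capped at 2000, and the duplicate top rung falls out via the
// tolerance check, leaving three H264 layers:
//   640 x 360  @ 1000
//   960 x 540  @ 1500
//   1280 x 720 @ 2000  (the original specs become the top rung)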

So now I can use this to generate a custom bitrate ladder with normal audio and thumbnails, for example:

EncodingPreset.GetBaseEncodingPreset()
.AddNormalAudio()
.AddPngThumbnails()
.AddBitrateLadder(playlistItem.AVFile.Width, playlistItem.AVFile.Height, playlistItem.AVFile.Bitrate);

or HD Audio with no thumbnails:

EncodingPreset.GetBaseEncodingPreset()
.AddHDAudio()
.AddBitrateLadder(playlistItem.AVFile.Width, playlistItem.AVFile.Height, playlistItem.AVFile.Bitrate);

etc. etc.

And it’s totally testable:

[TestMethod]
       public void CalcLayers1920x1080at266()
       {
           List<H264Layer> layers = CalcLayers(1920, 1080, 266);
           Assert.AreEqual(4, layers.Count);

           H264Layer layer1 = layers[0];
           Assert.AreEqual(263, layer1.Bitrate);
           Assert.AreEqual(360, layer1.Height);
           Assert.AreEqual(640, layer1.Width);

           H264Layer layer2 = layers[1];
           Assert.AreEqual(264, layer2.Bitrate);
           Assert.AreEqual(540, layer2.Height);
           Assert.AreEqual(960, layer2.Width);

           H264Layer layer3 = layers[2];
           Assert.AreEqual(265, layer3.Bitrate);
           Assert.AreEqual(720, layer3.Height);
           Assert.AreEqual(1280, layer3.Width);

           H264Layer layer4 = layers[3];
           Assert.AreEqual(266, layer4.Bitrate);
           Assert.AreEqual(1080, layer4.Height);
           Assert.AreEqual(1920, layer4.Width);
       }

 private static List<H264Layer> CalcLayers(int width, int height, int bitrate)
       {
           EncodingPreset preset1 = EncodingPreset.GetBaseEncodingPreset()
                                                  .AddNormalAudio()
                                                  .AddPngThumbnails()
                                                  .AddBitrateLadder(width, height, bitrate);
           return preset1.Codecs.Where(codec => codec.Type == "H264Video")
                         .SelectMany(codec => codec.H264Layers)
                         .ToList();
       }

Then, when it is time to submit my job, I can:

ITask task = job.Tasks.AddNew("My encoding task", processor, myPreset.ToJson(), TaskOptions.None);

Boom! Now we have the power of Adaptive Streaming with the benefit of more control over the ideal ladder, as well as other functions of AMS.


Auto scaling Media Reserved Units in Azure Media Services

When you spin up an Azure Media Services instance in Azure, you are prompted with a choice: how many Media Reserved Units do you want, and what horsepower do you want behind them?

Well, what exactly does that mean?

Reserving a unit means that when you submit a job to Media Services, it won’t sit in a public queue waiting to start.  This is important, because if the public queue is busy, it could take quite a while for your job to get picked up.  If you have all the time in the world for your job to complete, this isn’t a big deal, but if you are like me with a customer waiting on the job, speed is a priority.  You can choose from 1-10 reserved units (you can request more via a support request), and they come at a cost.  Also, when you reserve a unit, it has to be a specific speed (S1, S2, or S3).


So if you want to have 10 reserved units at all times, and you want S3 so jobs complete the fastest that Azure offers, that is 80 cents an hour, and that can add up over time.  I should also note that you can NOT reserve zero S2 or S3 units; if you want to be in the public pool, it has to be S1.  Therefore, you are paying 4 cents an hour at the very least if you want an immediate response time for your jobs by reserving one S1.  Also note that if you made a support request to get more than 10 units, changing the speed of those reserved units resets MaxReservableUnits back to 10, and your support request is essentially lost.  I have spoken with Azure support on this, and while they don’t call it a bug, it is something they are addressing in a future release of AMS.

So, the solution I came up with was to auto scale our units with C#.

When a message is sent to my worker role to work with Azure Media Services, I reserve (currently reserved units + 1) S3 units, and when it is done I decrement one S3 unit.  When I hit 0 units, I set the speed back to S1 (because, remember, you can only have zero units if you are set to S1).

internal static async Task ReserveMediaEncodingUnit(MediaContextBase context, int amount)
      {
          if (ConfigurationProvider.AutoScaleMRU())
          {
              IEncodingReservedUnit encodingReservedUnit = context.EncodingReservedUnits.FirstOrDefault(); //there is always only one of these (https://github.com/Azure/azure-sdk-for-media-services/blob/dev/test/net/Scenario/EncodingReservedUnitDataTests.cs)
              if (encodingReservedUnit != null)
              {
                   encodingReservedUnit.CurrentReservedUnits = Math.Min(amount,
                                                                       ConfigurationProvider.MaxMRUProvisioned() == 0
                                                                           ? encodingReservedUnit.MaxReservableUnits
                                                                           : ConfigurationProvider.MaxMRUProvisioned());
                   encodingReservedUnit.ReservedUnitType = ReservedUnitType.Premium;
                   await encodingReservedUnit.UpdateAsync();
               }
           }
       }

ConfigurationProvider.MaxMRUProvisioned() is a setting I have that is equal to 10.  I did that because I initially put in the service request to get more than 10, only to find out it gets reset back to 10 if you change the speed.  If Microsoft changes this behavior, I can set my setting to 0 and use their variable MaxReservableUnits, without any code changes.

Deallocating units:

 

 internal static async Task DeallocateMediaEncodingUnit(MediaContextBase context, int amount)
       {
           if (ConfigurationProvider.AutoScaleMRU())
           {
               IEncodingReservedUnit encodingReservedUnit = context.EncodingReservedUnits.FirstOrDefault(); //there is always only one of these (https://github.com/Azure/azure-sdk-for-media-services/blob/dev/test/net/Scenario/EncodingReservedUnitDataTests.cs)

              if (encodingReservedUnit != null)
               {
                   encodingReservedUnit.CurrentReservedUnits = Math.Max(0, amount);
                   encodingReservedUnit.ReservedUnitType = encodingReservedUnit.CurrentReservedUnits == 0
                                                               ? ReservedUnitType.Basic
                                                               : ReservedUnitType.Premium;

                  await encodingReservedUnit.UpdateAsync();
               }
           }
       }

If I hit 0 units I can reset:

 

    private static async Task ResetMediaEncodingUnits(MediaContextBase context)
       {
           if (ConfigurationProvider.AutoScaleMRU())
           {
               IEncodingReservedUnit encodingReservedUnit = context.EncodingReservedUnits.FirstOrDefault(); //there is always only one of these (https://github.com/Azure/azure-sdk-for-media-services/blob/dev/test/net/Scenario/EncodingReservedUnitDataTests.cs)

              if (encodingReservedUnit != null)
               {
                   encodingReservedUnit.CurrentReservedUnits = 0;
                   encodingReservedUnit.ReservedUnitType = ReservedUnitType.Basic;
                   await encodingReservedUnit.UpdateAsync();
               }
           }
       }
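Putting those three together, the flow in my worker role looks something like this; a minimal sketch, where ProcessTranscodeMessage and its wiring are hypothetical names of mine, and only the Reserve/Deallocate helpers come from the code above:

internal static async Task ProcessTranscodeMessage(MediaContextBase context, IJob job)
       {
           int current = context.EncodingReservedUnits.First().CurrentReservedUnits;

           // Scale up before the job runs so it never waits in the public queue.
           await ReserveMediaEncodingUnit(context, current + 1);
           try
           {
               // Block until AMS reports the job finished (or failed).
               await job.GetExecutionProgressTask(CancellationToken.None);
           }
           finally
           {
               // Scale back down; DeallocateMediaEncodingUnit flips the type to S1/Basic at zero units.
               int remaining = context.EncodingReservedUnits.First().CurrentReservedUnits;
               await DeallocateMediaEncodingUnit(context, remaining - 1);
           }
       }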

So now, when my users aren’t transcoding anything and my AMS instance is sitting idle, I incur no cost.  And when they submit a job, I allocate a unit to avoid going to the public pool, so the job gets submitted right away and completed at premium speeds.  I can’t guarantee this hack will work forever; when speaking with MS, they told me this code has prompted them to think about how reserved units work in AMS, and they may change this behavior in the future.

Happy transcoding!


Automating Publish of UWP Application to the Windows Store

You have published your app to the store, hooray!  Now you want to automate that deployment with VSTS…

If you are having issues with Microsoft’s documentation for accomplishing this task, here is a summary of how I got it to work.  (I should note that you must have put an app in the store manually first for this to work.)

1.  Go to your Dev Center Dashboard (https://developer.microsoft.com/en-us/dashboard/apps/overview).

2.  Click the gear icon in the top right hand corner > Manage Users

3.  “Add Azure AD Application” > “New Azure AD Application”

4.  Give it a name, like “Windows Store Connection”; the reply URL and App ID URL can be anything at this stage.  IMPORTANT: it must be part of the Developer role so that it can publish to the Store.


5.  Click Save, and you will be taken to a confirmation page.  Click Manage Users to see your newly created application in the grid with a guid.  Click that one to edit it.


6.  Under Keys, click Add New Key.  Make note of the ClientId, Key, and Azure Tenant Id; you will not be able to see that key again after you leave this page.  Upon confirmation, click Manage Users to go back to your list, and then click on the Connection again to confirm that it looks like this:

[screenshot]

Now we move on to the build step.

1.  Add Step Windows Store – Publish

2.  For the Service Endpoint, click New

Name your connection, something like “WindowsStoreConnection”

Windows Store API Url: https://manage.devcenter.microsoft.com

Azure Tenant Id: what you noted from your creation of the Azure AD Application, or you can find it in the portal (https://portal.azure.com), clicking on the Azure Active Directory > Properties > Directory ID

ClientID: what you noted from the creation of the Azure AD Application

Client Secret: your key that you noted from the key creation of the Azure AD Application


3.  Click OK and choose your newly created service endpoint from the dropdown

4.  Application identification method: ID

5.  Application ID: get this from your Dev Center Dashboard > App Management >  App Identity > Store ID

Now, when you run this build step, it will publish your app to the store and poll the service until it is finished (so keep in mind this could consume one of your build agents for up to 3 business days).


TL;DR

The MS docs led me to believe that I could create my Azure AD Application from the portal via the App Registrations.  When I did that, I got the dreaded “503 Service Unavailable” error on my build publish step.  The trick was to create the Azure AD Application from the Windows Dev Center, give it Developer permissions, and tie that application back to my Windows Dev Center connection endpoint.


Creating an snk for signing your assemblies with Visual Studio

There are a number of reasons you may (or may not) want to sign your assemblies, but if you do, here is a simple way of doing it in Visual Studio.

  • Create a snk file by opening a Visual Studio command prompt as Administrator <—IMPORTANT (https://msdn.microsoft.com/en-us/library/ms229859%28v=vs.110%29.aspx)
    • sn -k <YOUR SNK FILE NAME>.snk
  • Then create a Public Key
    • sn -p <YOUR SNK FILE NAME>.snk <YOUR PUBLIC KEY FILE NAME>.PublicKey
  • Get your Public Key Token
    • sn -tp <YOUR PUBLIC KEY FILE NAME>.PublicKey
    • this will output your public key and token to the console; make note of them

Full output from the console will look like this:

[screenshot]

  • Next, go to the properties of the project containing your assembly you want to sign, and click on the Signing tab.
  • Check Sign the assembly
  • Click the dropdown and Browse to the newly created snk file that you created in Step 1


The reason you want to note your public key and public key token is for use in your app.config or for InternalsVisibleTo.

For example, if the assembly you have signed needs to be specified in an InternalsVisibleTo in the assemblyinfo file, you would specify it like this:

[assembly: InternalsVisibleTo("MySignedAssemblyName, PublicKey=0024000004800000940000000602000000240000525341310004000001000100155b8d9138457a0be37b064f4f0fa70ceb948f08a7855122f1d6fe9cb89e74b68d60853358a061482d5e62423881caf1cf276d82b11a2e6075939181ab9e1c3dadfcf23082b04d15fb5f9ca20da5bc99b29f830e5c5d23ae9d3dee6f609d0980ed8ba584f348d48921055e13e66c987f5c5712e15285235cb649f0a1e65c0bb2")]

Or, if you were referencing the assembly in your app.config for a custom Logging handler using Enterprise Library, it would look like this:

<exceptionTypes>
    <add name="All Exceptions"
               type="System.Exception, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
               postHandlingAction="NotifyRethrow">
        <exceptionHandlers>
            <add
                type="MyCustomLogExceptionHandlerClassName, MySignedCustomLogHandlerAssemblyName, Culture=neutral, PublicKeyToken=<font style="background-color: #ffff00">0b4def2ce7bdd21a</font>"
                name="LogExceptionHandler" />
        </exceptionHandlers>
    </add>
</exceptionTypes>

Strongly Typing TempData in your MVC Application with Extension Methods

As I’ve stated before, and for those that know me, when working with C# I try to use the compiler as often as I can and keep things strongly typed.  When I started working in MVC, I didn’t like the fact that TempData was defined like this:

public class TempDataDictionary : IDictionary<string, object>, ICollection<KeyValuePair<string, object>>, IEnumerable<KeyValuePair<string, object>>, IEnumerable

TempData and ViewData are potentially valuable things, but <string, object>?  Really?

Here is how I get around that and use the compiler to my advantage with some simple extension methods:

public static class TempDataExtensions
{
    public static T Get<T>(this TempDataDictionary tempData, string key)
    {
        if (tempData[key] is T)
        {
            var tempDataItem = (T)tempData[key];
            return tempDataItem;
        }
        throw new InvalidCastException(string.Format("Temp Data does not contain type {0} for key {1}", typeof(T), key));
    }
 
    public static void Set<T>(this TempDataDictionary tempData, string key, T value)
    {
        tempData[key] = value;
    }
}

So, in your controller, you can Set to TempData and Get from TempData like this:

public ActionResult Index()
{
    TempData.Set("SomeObjectKey", new SomeObject());
    TempData.Set("SomeBoolKey", true);
    TempData.Set("SomeStringKey", "test");
     
    TempData.Get<SomeObject>("SomeObjectKey"); // returns SomeObject
    TempData.Get<bool>("SomeBoolKey"); // returns a boolean true
    TempData.Get<string>("SomeStringKey"); // returns the string "test"
 
    return View();
}

You can also do the same with ViewData:

public static class ViewDataExtensions
{
    public static T Get<T>(this ViewDataDictionary viewData, string key)
    {
        if (viewData[key] is T)
        {
            var viewDataItem = (T)viewData[key];
            return viewDataItem;
        }
        throw new InvalidCastException(string.Format("View Data does not contain type {0} for key {1}", typeof(T), key));
    }
 
    public static void Set<T>(this ViewDataDictionary viewData, string key, T value)
    {
        viewData[key] = value;
    }
}
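Usage mirrors the TempData version; for example:

public ActionResult Details()
{
    ViewData.Set("PageTitle", "Details");

    string title = ViewData.Get<string>("PageTitle"); // strongly typed, no cast at the call site

    return View();
}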

I know what you are thinking: this doesn’t stop you from setting TempData the <string, object> way, and you are correct.  To get and set using strong types, you have to have the discipline to use these extension methods.  But with these tools, you can give yourself a fighting chance.


Don’t litter your code with stringly typed settings, mkay?


When using C#, I am kind of a strongly typed bigot and like to use the compiler as much as I can.  Since practically every application I have ever worked on has had some sort of setting access from a config file, I felt that there had to be a better way.

So, given this config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="StringSetting" value="filepath"/>
    <add key="BoolSetting" value="true"/>
    <add key="StringListDelimitedSetting" value="one;two;three"/>
  </appSettings>
</configuration>

I don’t want to litter my code with this everywhere:

//BAD
string value = System.Configuration.ConfigurationManager.AppSettings["StringSetting"];
if (value == "SOMETHING")
{
    //do something
}
  
//WORSE?
string boolValue = System.Configuration.ConfigurationManager.AppSettings["BoolSetting"];
if (boolValue == "YES")
{
    //do something
}
  
//PRODUCES STRONG TYPE BUT EVEN MORE CODE
string someOtherBoolValue = System.Configuration.ConfigurationManager.AppSettings["SomeOtherBoolSetting"];
bool strongBoolValue;
if (Boolean.TryParse(someOtherBoolValue, out strongBoolValue))
{
    if (strongBoolValue)
    {
        //do something
    }
}

So, this is what I do to keep my “stringly” typed settings in one place, strongly typed, and easily accessible in my code:

public static class AppSettingsExtensions
{
    public static string StringSetting(this NameValueCollection settings)
    {
        string setting = settings["StringSetting"];
        if (!string.IsNullOrWhiteSpace(setting))
        {
            return setting;
        }
  
        return string.Empty;
    }
  
    public static bool BoolSetting(this NameValueCollection settings)
    {
        string setting = settings["BoolSetting"];
        if (!string.IsNullOrWhiteSpace(setting))
        {
            bool test;
            if (Boolean.TryParse(setting, out test))
            {
                return test;
            }
        }
  
        return false;
    }
  
    public static IEnumerable<string> StringListDelimitedSetting(this NameValueCollection settings)
    {
        string setting = settings["StringListDelimitedSetting"];
        if (!string.IsNullOrWhiteSpace(setting))
        {
            return setting.Split(';', ',').ToList();
        }
  
        return Enumerable.Empty<string>();
    }
}

Accessing settings in code now is simple and gives you a strong type:

//GOOD
string stringSetting = ConfigurationManager.AppSettings.StringSetting();
if (stringSetting == "SOMETHING")
{
    //do something
}
 
//OR
bool boolSetting = ConfigurationManager.AppSettings.BoolSetting();
if (boolSetting)
{
    //do something
}
 
//OR
IEnumerable<string> listSettings = ConfigurationManager.AppSettings.StringListDelimitedSetting();
foreach (string setting in listSettings)
{
    //do something
}

And yes, this works for connection strings as well, just change the type of the extension:

public static class ConnectionStringExtensions
{
    public static string SomeConnectionString(this ConnectionStringSettingsCollection settings)
    {
        ConnectionStringSettings setting = settings["SomeConnectionString"];
        if (setting != null)
        {
            string connectionString = setting.ConnectionString;
            if (!string.IsNullOrWhiteSpace(connectionString))
            {
                return connectionString;
            }
        }
 
        return string.Empty;
    }
}

Accessed like:

string connectionString = ConfigurationManager.ConnectionStrings.SomeConnectionString();

And there you have it, that is a tool I like to keep in my toolbox when working with configuration files.


How I missed Codemash 2014 and still learned something

I have to stop having kids around Codemash time, or rather, roughly 9 months before Codemash time.

Every year I attend, I usually write a follow-up blog post saying how it rejuvenates my love for the profession.  I have missed 2 incarnations of Codemash in its history due to my 2 beautiful daughters being born within months of the event (read: WORTH IT), but this year was a little different.  This time I wasn’t able to attend, but I still had that same fire ignited inside me as in the years that I was there.

What happened?  Well, two things.

1.  I was able to send a young and talented developer in my place who had never been.

2.  The twitter feed.

Let’s start with #1.  I found that “prepping” someone who was as excited to go as I was my first time made me feel like I was there again.  “Make sure you do this”, “listen to this speaker, he/she is great”, “participate in an open space”, “eat lots of bacon” (he was paleo, so he liked this part), “MEET PEOPLE”, etc. etc.  In the week leading up to the event, we talked about it every day.

Secondly, I have probably never used twitter as much as I did that week, trying to follow along with the events.  Since I was home with the baby and up a lot, I was spending a lot of time at night working on the website for my new business.  I was challenging myself to step out of my .NET comfort zone and build something solely with HTML and JavaScript (maybe it was Codemash from a distance inspiring me).  The problem I was having was that my first cut at it had a lot of HTML that I was repeating across pages (navigation bar, header, footer, images, etc.), and I thought there has to be a better way.  My first thought was MVC to deliver a base layout and render a body dynamically, but that broke my rule of trying something new, so that was out.  In comes the Codemash twitter feed, where I was hearing a lot about AngularJS, specifically from my friend who went in my place, who had attended a class on it and was tweeting about it.  The lights went on.  That would handle my situation AND be something new (but still kinda comfortable, since I was used to the controller concept of MVC!).  A couple of nights of reading and doing and voila!, Holeshot Software was finally out there.  Is it spectacular?  Not really, but it was something I had been wanting to do, it challenged me, and I succeeded, and my business has a landing page with my contact information.

So, all of the reasons that I love Codemash so much were still present, even in its absence, including the interaction with the most important part, the people (only this time it was strictly via Twitter).

Unless something happens around April again, I will see you all next year!


Opinions are like Utility.dll, everybody’s got one…

You know what I’m talking about.  You want to put that method that sends an email, or that Key/Key Pair dictionary thing you stole from Jon Skeet’s blog, somewhere for all of your colleagues to use and bask in your reusable API glory.  The question is, where do you put it?  On a server share?  Checked in to source control?  On a mapped drive that everyone has?  I suppose, but how are you going to version it?  How are you going to let the people referencing it upgrade?  How are you going to handle breaking changes?  Dependency Management?  Well, here is my opinion: Nuget to the rescue.  If you are sharing any code in your organization, hosting an internal Nuget feed is a great way to do that.  Get this set up early, have your CI build continue to publish packages, and you can add and share code as quickly as your build server can build it (and run the tests, of course).

I have used the integrated Nuget server in TeamCity before, and I have to admit, it’s pretty awesome (minus managing packages, as of this date that still sucks since you can’t issue Nuget.exe commands to it).  For the purposes of this post, however, for those that aren’t running TeamCity or want a process that isn’t married to a third party, we will talk about setting one up from scratch.

Before we get into putting the package somewhere, let’s go over creating the package in the first place.  So, you have your assembly:

[screenshot]

The first thing you need to do is create a nuspec file.  You can generate this by running “nuget spec” in the same folder as your csproj (I just put a copy of Nuget.exe in the same folder to make it easy):

[screenshots]

All that does is generate a nuspec file for you, but if you ask me, it’s pretty weak in terms of actually doing anything for you specific to this project.  Maybe it’s more useful if, as part of your build process, you are generating a new nuspec file every time and using this text as wildcards for what you would replace.  Honestly, for now, you can just copy this and start from there:

<?xml version="1.0"?>
<package >
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <title>$title$</title>
    <authors>$author$</authors>
    <owners>$author$</owners>
    <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>$description$</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright 2013</copyright>
    <tags>Tag1 Tag2</tags>
  </metadata>
</package>

Those $variable$ replacement tokens come from your AssemblyInfo.cs, so make sure that is populated, or just hard code the values instead (we will pass the version into the pack command):

[screenshot]

Also, IT IS IMPORTANT THAT THE NUSPEC FILE HAS THE SAME NAME AS THE CSPROJ.  That will come into play when we package this sucker.  It is a convention thing, and it bit me pretty good when I tried to do this for the first time.

Time to pack it up.  There are MSBuild targets for this as well, but since we have that Nuget.exe right there, we can call “nuget pack” (if you don’t have that, you can pass it a csproj after the word ‘pack’).  In this case we are going to use a few of the optional command line arguments to: 1. build the package before packaging; 2. produce debug symbols; 3. build in release mode; and 4. specify a version.  You can specify an OutputDirectory, but it will use the current directory if you don’t specify one.

So, “nuget pack -Build -Symbols -Properties Configuration=Release -Version 1.0.0.0”


You will have some warnings that you haven’t filled everything out properly, and you can take care of that if you’d like.

Voila!  We have a package! (and symbols, which is great for an internal Nuget feed since you most likely own the code and will want to debug)


Now, where to put it?

Let’s start with setting up an internal server. There are 2 ways to do this:

1. a network share (simple, but may have some performance and security complications)

2. a remote feed through an IIS website (probably best, but has a little bit more of a startup effort to implement)

#1 – A Network Share

Create a share somewhere and put your packages in it.

In Visual Studio, go to Tools > Options > Package Manager > Package Sources.  Add a name and a UNC share location:

[screenshot]

Now you can consume it in your project that needs it.  First things first, make sure you have this checked:

[screenshot]

Then you can right click on your project, and Manage Nuget Packages:

[screenshots]

When you install, you will get a reference to Holeshot.Utility, a packages.config, and a .nuget folder (if you have Restore Nuget Packages on, which I think you will).  Open the NuGet.targets and make sure RestorePackages is true and DownloadNuGetExe is true.  Make sure that the Nuget.exe is NOT checked in to source control, as it will download a new copy every time if it needs to.


There you have it: the next time a new Holeshot.Utility is put in the network share, your Holeshot.ProjectThatNeedsUtility will notify you that there has been an update to your reference, and you have the option to take the new version.  Herein lies one of the biggest advantages of this process: it puts the product owners back in charge.  Don’t want to take the upgrade now because the Minor version changed, indicating a possible breaking change?  Then don’t take it.  When you are ready, take it, correct any compilation errors (if any), run your tests, and you are in business with a potentially effortless upgrade.


#2 – A remote feed in IIS

If you have a build server, TFS server, or some other computer that is publicly accessible, that will do the trick.  It’s not like it needs to be a beefy machine.

First, create an Empty ASP.NET Web Application, and install Nuget.Server from the Manage Nuget Packages console.  Notice how it resolves all of its own dependencies (Elmah, Ninject, about 25 others, etc.).  You can have that too in your packages using this process and specifying dependencies in your nuspec file, but that is another post.


Now you will have this.  Notice the packages folder; that is where your packages need to go now.

[screenshot]

Go to the web.config and specify an API key:

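From memory it is just an appSettings entry; something like this (the exact key names can vary by Nuget.Server version, so check the web.config that the package drops in):

<appSettings>
  <add key="requireApiKey" value="true" />
  <add key="apiKey" value="YOUR_API_KEY_HERE" />
</appSettings>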

Publish and browse to the web site:

[screenshot]

Now we can either push to the feed, or just copy the packages to the folder specified above:

[screenshot]
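If you go the push route, it is the standard nuget push against your feed URL with the API key you configured; something like this (the URL and key here are placeholders):

nuget push Holeshot.Utility.1.0.0.0.nupkg YOUR_API_KEY_HERE -Source http://yourserver/

Otherwise, copying the .nupkg straight into the packages folder works just as well.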

BAM!


There you have it.  I did all of this, while documenting it, in a matter of 2 hours.  Well worth the effort if you ask me.


You Need to Wrap That S*** Up B

I was recently writing a test for an engine that takes in 2 points and calculates the mileage between them.  I had already been given 2 distinct lists of points (zips in this case) as input, but for the purposes of this demo I will just stub them out.  It is really irrelevant what the engine takes in; I just wanted to showcase this “new to me” method on Enumerable called Zip.

[Test]
public void NonParallelMileageTest()
{
   var originPoints= PointHelper.GetPoints(PointType.Origin);
   var destinationPoints = PointHelper.GetPoints(PointType.Destination);

   IEnumerable<Point> origin100Points = originPoints.Take(100);
   IEnumerable<Point> destination100Points = destinationPoints.Take(100).Reverse(); //to ensure most of the points are different
   List<Tuple<string, string>> originDestinationPair = origin100Points.Zip(destination100Points, (origin, destination) => new Tuple<string, string>(origin.Zip, destination.Zip)).ToList(); //contains 100 elements
   foreach (Tuple<string, string> pair in originDestinationPair)
   {
      var mileage = MileageHelper.GetMileage(pair.Item1, pair.Item2);
      Assert.Greater(mileage, 0);
   }
}

One enumerable can “zip” another enumerable, and the inputs are:
1.  the other enumerable, and
2.  a lambda expression that takes an item from each enumerable and defines how you want to create your new object.
In this case I am creating a Tuple (it’s ok, I’m in a test).

Pretty cool.

One other thing to be aware of about the method: if the 2 enumerables you are “zipping up” don’t contain the same number of elements, it will use the lower of the two collection counts.  See the comment in the sample above noting that the resulting list contains 100 elements.
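As a quick illustration of that truncation (a throwaway snippet, not part of the mileage test):

int[] numbers = { 1, 2, 3 };
string[] letters = { "a", "b" };

// Zip stops at the shorter input, so this yields ["1a", "2b"]; two elements, not three.
List<string> zipped = numbers.Zip(letters, (n, l) => n + l).ToList();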

So the following example only contains 99 elements:

[Test]
public void ParallelMileageTest()
{
   var originPoints= PointHelper.GetPoints(PointType.Origin);
   var destinationPoints = PointHelper.GetPoints(PointType.Destination);
   IEnumerable<Point> origin100Points = originPoints.Take(100);
   IEnumerable<Point> destination100Points = destinationPoints.Take(99).Reverse(); //to ensure most of the points are different      
   List<Tuple<string, string>> originDestinationPair = origin100Points.Zip(destination100Points, (origin, destination) => new Tuple<string, string>(origin.Zip, destination.Zip)).ToList(); //contains 99 elements
   Parallel.ForEach(originDestinationPair, pair => MileageHelper.GetMileage(pair.Item1, pair.Item2));
}

I have really been meaning to blog about the Parallel namespace as I have been using it quite a bit lately and love it.  More on that later.


Codemash 2012, Is there anything else like it?

I bet there is not.  If there is, I want to see it.

Seriously, Codemash is a “down-home/local” conference that isn’t “down-home” or “local”, and one that brings the likes of Scott Hanselman (@shanselman) and other big names in the industry just to give an hour-long talk, not to mention the excellent keynotes, which are always good.  When I got home and started to read the twitter feeds on #codemash, the general consensus was HUGS!!!  I may be wrong here, but the twitter feed from #build did not give off the same aura of love.  Codemash is also a small world.  Not only did I run into a few buddies from college that I haven’t seen in years (Bruce Hubbard, @brucehubbard, and Wes Grollmus, @wesg92), I found out that Jon Kruger (@jonkruger) is married to one of my wife’s friends that I shared a locker with for 4 years in high school.  You walk out of this conference realizing that what were once just twitter handles are now actual friends.

With all of that touchy-feely stuff aside, the content is just downright amazing.  Codemash has “the law of two feet”, which translates to: if you aren’t learning or contributing… leave, without recourse.  Well, not once in the whole 3 days did I feel that way.

If you have been to the conference but haven’t attended the precompiler, please make an effort to do so.  I actually think it is the most important day of the week.  As I have said before in previous Codemash posts, the people are what make Codemash special, and the precompiler is probably the best place to meet them in a semi-professional setting (as opposed to room parties, of course).

/* if you want to read about how I spent my time at this event

This Codemash was slightly different for me this year, as my work is embarking on some technology that isn’t as familiar to me as what I have been doing for the past 10 years in .NET.  I tried to go to sessions where I wasn’t very familiar with the subject matter.  That, of course, is the whole premise of this conference, but I have always spent most of my time strengthening the things I knew, instead of exploring things I didn’t.  This year I spent the morning session of the precompiler with Leon Gersing (@rubybuddha) and Scott Walker (@pragma_tech) as they walked us through what was essentially 53 examples of JavaScript gotchas.  They were, in fact, proponents of the language, but wanted to point out some of the nuances that would otherwise not seem to make sense to someone like me, and that was extremely helpful.  The afternoon session I spent with Clark Sell (@csell5) and Brandon Satrom (@BrandonSatrom) while they walked us through where HTML5 is, and gave us a wealth of labs and experiments to try stuff on our own.  This class was great; my only complaint was that it should have been a full day, there was so much good content, and the 2 speakers spent quite a bit of time putting together these labs.  Four hours just wasn’t enough.  I explained this to Clark and he agreed, but I would have come on Tuesday if I had known I could have had that much hands-on stuff to go through.

Thursday was equally as informative.  I actually had so many sessions I wanted to see, it was HARD to narrow down which to go to.  For the morning session, I knew I was going to see Scott Hanselman’s (@shanselman) talk on the Web Stack of Love.  There is a reason this guy is a sought-after speaker, and there is a reason there is standing room only in his talks.  He is that good.  Next, Rich Dudley (@rj_dudley) showed us how to build applications in Windows 8 with HTML.  He was very energetic (which I am told is baseline) and fun and informative.  Glenn Block’s (@gblock) talk on Node.js and Azure was next, and man, Node is pretty cool.  The talk seemed to be more about Node than Azure, which was fine with me given that the likelihood of my current work involving Azure is small, but cool nonetheless.  The rest of the day I learned about CoffeeScript from Brandon Satrom (@BrandonSatrom) and Roslyn from Dustin Campbell (@dcampbell).  CoffeeScript was very intriguing to me as someone who doesn’t write a lot of JavaScript.  The language seemed to make more sense to me, and it guards you from some of the gotchas that I learned about on Wednesday.  While CoffeeScript is not a replacement for learning JavaScript, I can see it as a valuable tool in doing so.  The Roslyn stuff was also pretty awesome.  It left me wondering if something like this will improve such things as Resharper, or make it harder for them to provide value if a lot of what they do is baked in to Visual Studio…

Friday.  The bittersweet last day of Codemash.  The day you have a “hung over eagerness” to continue from what you learned earlier in the week.  Thankfully, the content was still just as good as ever, and I actually had up to 4 classes per session that I wanted to attend.  Phil Japikse (@skimedic) gave a talk using JustMock that I think I can apply to my current work, and that is always exciting.  Next I attended another Scott Hanselman (@shanselman) talk, on Dealing with Information Overload.  This was basically a class on lessons learned by Scott on managing your life and your work.  The biggest takeaway from this session was “if there is something in your life that isn’t improving it or making you money, delete it”.  He talked about how a large number of developers have trouble sleeping (myself included), and that is because we are doing a for loop in our heads of the things we didn’t get done and the things we want to do.  The second takeaway was that (paraphrasing here) “every developer should have a blog, I don’t care how mundane the content”.  Every year Codemash reignites my writing in this blog, and this year was no different.  Lastly, I attended a class from Bill Wagner (@billwagner) entitled “C# Stunt Coding”.  Caching the expression tree of a reflection call and compiling it on the fly for subsequent calls to eliminate the performance hit!  Oh my!

*/

I just can’t say enough good things about this event.  It is truly that good.  My company sent 10 developers this year, and to quote Michael Letterle (@mletterle) in reference to his company sending just as many: “#thatishowyoudoit”.  I’m already counting down to next year.  Big thanks to the organizers and attendees that make it awesome.
