Step 2 - Climbing the ladder...
# Thursday, July 4, 2013

Way back in March, the new NHS Health and Symptom Checker app was released across the two major smartphone platforms on the 21st. As part of the NHS's new 111 helpline rollout, the existing NHS Direct apps needed to be rebranded to reflect the new service and given updated content.

As I was responsible for the Android release, and this was the first (official) time I'd published an app on the Google Play store, I wanted to share a few insights into the process of publishing the app and document it for my own future benefit.

Reading the official signing procedures outlined by Google can lead you to think the process is a little involved, but if you're using Eclipse and take Google's advice, the export wizard built into the IDE makes the whole process entirely painless.

Exporting and signing your APK file

Inside Eclipse (ADT v21.0.1-543035):

  1. First, update the version numbers (versionCode and versionName) inside the package's manifest file. This ensures the automatic upgrade process happens correctly; the store will also reject an update whose versionCode hasn't increased.
  2. Right-click the project you want to publish in the package explorer and select Android tools -> Export signed application package... This will open the Export App wizard.
  3. Specify the project to export and click next. I assume most packages will only have one project.
  4. The next step is to specify the Java keystore file which contains the private key you wish to sign your APK file with. In order to publish the app on the Google Play store your app must be signed with a private key that has an expiry period of at least 25 years. This is to ensure the app can be upgraded with the same key in the future.
  5. Next specify which alias to use (alias is the name used to identify the key when it was added to the keystore) and the password for it.
  6. Finally, give the wizard a location and filename where to save the signed APK file.
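
For reference, the version numbers from step 1 live on the root element of AndroidManifest.xml — something along these lines (package name and values are placeholders):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.myapp"
    android:versionCode="2"
    android:versionName="1.1">
    <!-- versionCode must increase with every store update;
         versionName is the human-readable version shown to users -->
</manifest>
```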

Once the app has been exported and signed into an APK file it needs to be uploaded to the store.

  1. Log into the Android developer console. These steps are based on the version 2 console.
  2. As this was an upgrade the next step for me was to select the existing app in the list.
  3. Click the APK link in the sub menu beneath the app's icon on the left.
  4. Now click Upload new APK at the top and browse to (or drag) the signed APK into this dialog.
  5. Once uploaded, click the Publish now button to submit the APK to the store. Apparently you used to be able to upload an APK without publishing, to give you time to get everything ready prior to officially releasing the app, but I was unable to find this feature this time round.
  6. Finally you can go to the Store Listing section and amend anything that needs to change, probably at least the Recent changes section.

I've been quite granular here with the steps but the whole thing can be done in less than 10 minutes once you know your way around.

Thursday, July 4, 2013 1:32:37 PM (GMT Daylight Time, UTC+01:00)

Posted under: Programming | Technology

# Wednesday, June 12, 2013

After adopting Perforce as my source control solution at home I wanted to quickly establish a working automated backup process before I started committing too much hard work to the depot. I did some googling and found some good scripts in the Perforce public depot but the project home page seems to be down at time of writing (12/06/13). I couldn't use these scripts in the end because they used specific backup tools which I wasn't interested in.

So I opted to try to write my own: A) because I wanted to familiarise myself with how the Perforce system operates, B) because I wanted an excuse to learn PowerShell, and C) because I had just bought a new home server and wanted to put it to good use.

The eventual plan I wanted to implement was:

  1. Take the necessary steps to 'snapshot' the Perforce depot.
  2. Compress the potentially large files into a single archive, testing everything along the way.
  3. Upload them to the cloud (for now, until I get a better storage option in place).

Readers may prefer to change the last step to a local copy to another drive etc, but I had a Box.com account with 15 GB spare and it supports WebDAV so I thought it might be cool to try and get PowerShell to connect and stick the file up there.

Step 1. Snapshot the Perforce depot

I strongly urge any readers attempting to back up a Perforce depot to read the guidelines set out by the company here; there are some very important concepts that Perforce operates within which backup operators need to keep in mind when attempting to back up and reliably restore a snapshot.

The main points to take away are:

  • There are two kinds of files that Perforce maintains: versioned files, which are the internal representation of the actual files submitted to the VCS by users; and database files (metadata), which track the state of the depot for Perforce's benefit (things like changelists, checked-out files, etc.).
  • Checkpoints are files which capture a snapshot of the depot at a given point in time.
  • The Journal is a log file containing all the transactions from the last checkpoint.
  • A checkpoint file coupled with a copy of the versioned files and an optional journal file constitutes a Perforce backup. These files are what you need in order to backup and restore a Perforce depot.

The first step recommended by Perforce is to verify the integrity of your depot. This is just good practice and can be achieved with the command:

p4 -u USERNAME -P PASSWORD verify -q //...

Replacing USERNAME and PASSWORD with the details of a valid Perforce user (which I created with backup privileges). -q makes the operation run quietly and reduces output. The final parameter //... tells Perforce which depot (or view) you want to target. Here I'm telling it to just use the root.

Once verified you want to actually create the checkpoint by issuing:

p4d -jc

This will truncate the current journal file and create a new checkpoint with an incremented checkpoint number (checkpoint.n) in your server's root folder (P4ROOT).

Once this operation has been confirmed to have run successfully you'll now have a checkpoint file and an accompanying .MD5 file that you must then cross-reference with the md5 hash of the actual checkpoint file to ensure it was written to disk ok. I'll show you how I achieved this in the PowerShell script in a moment.

Step 2. Compress the backup files

I'm a heavy user of 7-Zip as it has some straightforward command line options that are great for problems like this, so I run the files through the following command:

7z.exe a -t7z DESTINATION SOURCEFILES

a informs 7-Zip to create an archive, DESTINATION is the filename you want to save to, and SOURCEFILES is the content to compress; I pointed it at a sub-directory containing all the files.

Once packed I then tested the archive to ensure integrity with:

7z.exe t TARGETFILE -r

t tells 7-Zip to test TARGETFILE, and -r includes all sub-directories in the archive.

Step 3. Upload the file via WebDAV

The final step is specific to PowerShell so I'll list the full code of my WebDAV upload function here:

function UploadToWebDAV($destinationFile, $username, $password, $sourceFile)
{
    write-host "Uploading $sourceFile to $destinationFile `r`n"

    # Read the source file into a byte array via an ADO binary stream
    $objADOStream = New-Object -ComObject ADODB.Stream
    $objADOStream.Open()
    $objADOStream.Type = 1 # adFileTypeBinary
    $objADOStream.LoadFromFile("$sourceFile")
    $arrbuffer = $objADOStream.Read()

    # Timeouts: resolve, connect, send (10 minutes for the upload itself), receive
    $objXMLHTTP = New-Object -ComObject MSXML2.ServerXMLHTTP
    $objXMLHTTP.setTimeouts(1000 * 60 * 1, 1000 * 60 * 1, 1000 * 60 * 10, 1000 * 60 * 1)

    # Synchronous PUT with basic authentication
    $objXMLHTTP.Open("PUT", $destinationFile, $False, $username, $password)
    $objXMLHTTP.send($arrbuffer)

    # Fail loudly if the server didn't accept the upload
    if ($objXMLHTTP.status -ge 300)
    {
        throw "Upload failed with HTTP status $($objXMLHTTP.status)"
    }
}

Simple stuff: read the file, open the connection, and send.
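
The same PUT-with-basic-auth approach translates to most languages; here's a rough Python equivalent using only the standard library (the host, path, and credentials are placeholders):

```python
import base64
from http.client import HTTPSConnection

def basic_auth_header(username, password):
    """Build the Authorization header WebDAV servers expect for basic auth."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": "Basic " + token}

def upload_to_webdav(host, remote_path, username, password, source_file):
    """PUT a local file to a WebDAV server; returns the HTTP status code."""
    with open(source_file, "rb") as f:
        body = f.read()
    conn = HTTPSConnection(host, timeout=600)
    conn.request("PUT", remote_path, body, basic_auth_header(username, password))
    return conn.getresponse().status

# e.g. upload_to_webdav("dav.box.com", "/dav/backup.7z", "user", "pass", "backup.7z")
```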

Wrapping it all up

I wrote two PowerShell scripts to take care of these steps: one runs every night to take a copy of the journal file and upload it to the cloud, and the other runs every week to perform the steps outlined above.

Here's the script which I may eventually get round to sharing on GitHub as a Gist or something.

###############################################################################
##
## Perforce depot backup script
## Weekly
##
## Steps
## 1) Verify depot
## 2) Make a checkpoint file
## 3) Verify checkpoint
## 4) Verify checkpoint by comparing hashes
## 
## This file is then uploaded as part of the nightly backup script.
##
## Perforce backup procedure taken from:
## http://www.perforce.com/perforce/doc.current/manuals/p4sag/02_backup.html
##
###############################################################################


cls

. .\funcs.ps1


$now = get-date

Function Print($text)
{
    write-host "`r`n"
    write-host "============================================================`r`n"
    write-host "== $text`r`n"
    write-host "============================================================`r`n"
    write-host "`r`n"
}




$username = [Security.Principal.WindowsIdentity]::GetCurrent().Name
Print("$username performing weekly perforce backup ($now)")



###############################################################################
##
##1. Verify the integrity of your server and add MD5 digests and file length
##   metadata to any new files
##
###############################################################################

Print("Verifying depot...")

$result = p4 -u backupuser -P password verify -q //... 2>&1 | out-string

if ($result)
{
    write-host "Depot failed verification: $result"
    exit 1
}
else
{
    write-host "Depot successfully verified.`r`n"
}

###############################################################################
##
##2. Make a checkpoint by invoking p4d with the -jc (journal-create) flag, or
##   by using the p4 admin command
##
###############################################################################

Print("Creating checkpoint...")

$result = p4d -jc 2>&1 | out-string
write-host "$result`r`n"

###############################################################################
##
##3. Ensure that the checkpoint has been created successfully before backing up
##   any files.
##
###############################################################################

if ($LASTEXITCODE -ne 0)
{
    exit 2
}

###############################################################################
##
##3a. Determine filenames
##
###############################################################################

try
{
    $index1 = $result.IndexOf("(")
    $index2 = $result.IndexOf(")")
    $checkpointFile = $result.Substring($index1 + 1, $index2 - $index1 - 1)
    write-host "Checkpoint file = $checkpointFile`r`n"

    $index1 = $result.LastIndexOf(" ")
    $index2 = $result.LastIndexOf("...")
    $journalFile = $result.Substring($index1 + 1, $index2 - $index1 - 1)
    write-host "Journal file = $journalFile`r`n"

    $md5File = $checkpointFile + ".md5"
    write-host "MD5 file = $md5File`r`n"
}
catch [Exception]
{
    write-host "Unable to parse perforce output for requisite files:"
    write-host $_.Exception.Message
    exit 3
}

###############################################################################
##
##4. Confirm that the checkpoint was correctly written to disk by comparing
##   the MD5 checksum of the checkpoint with the .md5 file created by p4d -jc.
##
###############################################################################

Print("Confirming checkpoint...")

try
{
    $server = "D:\"

    $index1 = $result.IndexOf("= ")
    $md5 = $result.Substring($index1 + 2, 32)
    write-host "Checksum = $md5`r`n"

    $result = Get-Content $server\$md5File
    write-host "$result`r`n"

    $result = $result.Contains($md5)
    if ($result -eq $False)
    {
        write-host "Checkpoint MD5 checksum failed. $md5 not found in $md5File"
        exit 4
    }
    else
    {
        write-host "Checksum verified for $checkpointFile`r`n"
    }
}
catch [Exception]
{
    write-host "Failed to compare checkpoint MD5:"
    write-host $_.Exception.Message
    exit 5
}

###############################################################################
##
## Compress
##
###############################################################################


Print("Compressing backup...")



$server = "D:\"
$depot = "depot"


$compression = "-mx9"

$zipdestination = "D:\Backup\Intermediary\"

$zipname = $now.ToString("yyyy-MM-dd-HHmmss") + "_perforce_W_backup.7z"

$result = Archive $zipdestination$zipname $server\$depot\*, $server\$checkpointFile, $server\$checkpointFile.md5, $server\$journalFile $compression


###############################################################################
##
##4a. Upload
##
###############################################################################

Print("Uploading backup...")



$webdav = "https://dav.box.com/dav/" + $zipname
$username = "YOURUSERNAME"
$password = "YOURPASSWORD"

$result = UploadToWebDAV $webdav $username $password $zipdestination$zipname

write-host "Done`r`n"

#TODO verify copy


###############################################################################
##
##4b. Delete intermediary
##
###############################################################################
Print("Cleaning up...")

Remove-Item $zipdestination$zipname

write-host "Done`r`n"

Print("Weekly backup complete")
exit 0
        

You'll notice at the end there I clean up the intermediary local zip file, and there's additional work needed to verify the upload once complete. But hopefully this will get a lot of you started if you're looking to back up your depot automatically.

Leave a comment if there are any steps which aren't clear and I'll either write a follow-up or amend this post.

Caveat!!

I have yet to test the backups produced by this process.

I'm in the process of moving house, and once settled I will be running the collection of weekly backups through a restore process. As the old adage goes:

"Backups usually succeed, it's restores that often fail."

If I get time I'll write a follow-up article covering the restore process.

Wednesday, June 12, 2013 1:36:43 PM (GMT Daylight Time, UTC+01:00)

Posted under: Game development | Programming | Rise | Technology

# Wednesday, May 1, 2013

This'll be a quick blog post. I've just needed to unit test that certain Controller actions are decorated with the AuthorizeAttribute and that its Roles property contains the correct roles.

My first stab was a convoluted process of trying to stub out the HttpContext and returning certain users that may or may not be a member of the target roles. Then I was firing the Controller's action and testing the ActionResult for the correct redirection.

This was as stupid as it sounds.

The correct (not to mention succinct) approach I found on Stack Overflow:

        var attributes = typeof(SomeController)
            .GetMethod("SomeMethod", new Type[] {  }).GetCustomAttributes(true);
        var attribute = attributes.OfType<AuthorizeAttribute>().Single();

        Assert.AreEqual("Some role, some other role", attribute.Roles);
    

One extension I'm thinking of adding to this is to loop over a bunch of parameterised methods, testing each for its expected roles.

Wednesday, May 1, 2013 6:56:31 AM (GMT Daylight Time, UTC+01:00)

Posted under: Programming

# Friday, April 26, 2013

After seeing many different attempts at managing third party library references in large scale applications, I've seen lots of things that simply don't work and very few things that almost work.

The solution du jour at work was originally to reference everything out of a lib folder beneath the solution under source control. This at least meant a get latest/build was possible (although a little clunky) and everything was version controlled. It was all just a little manual and prone to error as a result.

Eventually we moved across to NuGet for third party libs, which solved almost all of those problems overnight. The only remaining problem (which continued to be solved via the lib folder) was internal libraries written by our team.

Serendipitously I had recently attended a talk discussing how to leverage NuGet within the enterprise, which covered how to set up a NuGet package server internal to your network. This could then be used to serve your libraries benefiting from all the great features of NuGet, without having to make them public.

How to set up a NuGet server

Following the great advice detailed over at docs.nuget.org I was able to get a server setup and serving packages in less than an hour.

The basic steps you need to take are:

  1. Start by creating a new empty web application in Visual Studio and adding a reference to the NuGet.Server package. This essentially turns your web app into a NuGet package server.
  2. Optionally tweak the folder where your packages will be stored by editing the packagesPath appSetting.
  3. From this point the server is good to go, but you'll probably want to start adding packages so that they are available for your users to install. Simply copy packaged .nupkg files into the server's package folder for them to show up in the feed.
  4. Deploy this site as you would any other and check you receive a message along the lines of "You are running NuGet.Server vXXX" when visiting it.
  5. Finally, you (and anyone who wants to pull from this server) need to add the site's URL to your local NuGet package manager. Click Tools -> Library Package Manager -> Package Manager Settings, give your new source a name, point it at the URL created in the previous step, and click Add.
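
For step 2, the packagesPath override sits in the site's web.config and looks something like this (the path is just an example):

```xml
<appSettings>
    <!-- Folder the NuGet.Server feed reads its packages from -->
    <add key="packagesPath" value="D:\NuGetPackages" />
</appSettings>
```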

That's it. The packages within your new local NuGet server should now show up when adding reference from the NuGet dialogue.

How to create a NuGet package

There are only seven straightforward steps you need to take in order to turn your project into a .nupkg:

  1. Create a specification file by running the command nuget spec from your project root. This will create a kind of manifest file that NuGet will use to describe your package.
  2. Review and amend the nuspec file created to ensure it reflects your project's particulars.
  3. Copy over any content you want copied upon installation of your package into the content folder.
  4. If any of your code requires pre-processing when on the target machine add .pp to the file names within the content folder and replace anything required with the relevant macros.
  5. Do the same for the web.config.
  6. Optionally add an install.ps1 and/or uninstall.ps1 to the tools folder to take care of any actions needed on install or uninstall.
  7. Finally issue a nuget pack command to crush all of that into a .nupkg file ready for submission.
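
The nuspec produced in step 1 (and tidied in step 2) ends up broadly like this — all values here are placeholders:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.MyLibrary</id>
    <version>1.0.0</version>
    <authors>Your Team</authors>
    <description>Internal helper library.</description>
  </metadata>
</package>
```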

Other types of NuGet private repositories

There are other ways of hosting package files locally. You could just point NuGet at a local path containing your packages. This approach has the benefit of being very straight forward but may not scale well when you factor in permissions and network connectivity.

Another option is a commercially hosted NuGet server; several companies offer this as a paid service.

One thing to bear in mind when hosting your packages with external companies is to double-check the small print. If you require complete ownership of your product, be sure you don't relinquish that ownership when uploading to these providers. Also think about security when signing up: who has access to what?

Are there any downsides to hosting your own NuGet server?

Obviously you need to put the time in to get things up and running, and to keep them running, although once on its feet the server should need minimal administration.

Also this approach requires dedicated buy-in from the entire team. Unless you can force this approach onto your team, they'll need to follow your lead when getting their packages in this way. Don't underestimate this point. All the best will in the world is useless in the face of developers not prepared to move forward.

One final point is that NuGet.Server ultimately just exposes a read-only feed, so publishing is entirely manual. More control, but more hassle.

Some parting advice: do yourself a favour if you're venturing into the world of publishing or consuming NuGet packages and download NuGet Package Explorer. It'll let you browse the contents of a package prior to installing it. Don't forget NuGet packages can execute PowerShell scripts, so unless you completely trust the publisher it's often prudent to scan through the contents before running anything against your machine.

Friday, April 26, 2013 9:50:17 AM (GMT Daylight Time, UTC+01:00)

Posted under: Programming

# Tuesday, April 16, 2013

It's now been some time since I had TalkTalk's fibre optic broadband installed so I thought I'd write a quick article about how I've found the service and hardware so far.

As I mentioned in my previous post about what to expect from the installation I was very happy with how everything went and the performance I was getting. I'm pleased to say that the performance has been reasonably consistent up to this point with an average download speed of ~16Mbps and upload <2Mbps. Although the upload is a little slow I haven't needed to push it too far yet.

Router technical details

The router itself is actually a rebranded Huawei HG533 which I understand is a far cry from the less attractive, older, white D-Link model.

The router sports a handy WPS button which gives easy connection to the WLAN for devices that support it. Rather than trying to connect to the WLAN and punching in the key when prompted, WPS allows you to simply press the button on the side of the router to put it into negotiation mode. This then allows your device to establish a connection automatically, bypassing the need to manually provide a key.

One thing that did annoy me about the router was the obligatory 'basic mode' splash screen you have to click through once you've logged in, which is compounded by a confirmation dialog when 'advanced mode' is clicked.


However, once safely inside the advanced section the options become much more usable. I must say, the options inside the advanced section of the router's administration interface are considerably more granular than those of the routers I'm used to, so this was a major plus for me. I have also read that these routers support bridging, which is something I've been keen to incorporate into my home network ever since I reconfigured the layout.

Gaming performance

I've finally had a chance to run the line through some online gaming to see if those elusive <20ms pings could be achieved, and on a good server I can confirm very good speeds on the line. Maybe even the best on the servers I was playing. With these low latencies I now have only myself to blame for a missed shot!

In conclusion, this router has so far surpassed a lot of my expectations. It's by no means perfect and there is still a lot to look through and get to grips with, but it is easily one of the better routers I've had to work with.

Technical specifications

Dimensions: 143mm x 186mm x 38mm
Weight: ~250g
WLAN: 802.11b/g/n
Wireless transmission speeds: b: 11Mbit/s, g: 54Mbit/s, n: 300Mbit/s

Tuesday, April 16, 2013 4:41:29 PM (GMT Daylight Time, UTC+01:00)

Posted under: Technology

# Tuesday, March 19, 2013

I like enums. Yet I've seen a lot of developers prefer to litter their code with magic numbers seemingly oblivious to the pain this will cause the developers who need to maintain it.

Enums are just one approach to helping readers of your code know precisely why the value 42 is important when comparing it against some value in a database.
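
To illustrate the point (sketched in Python here, rather than the C# the rest of this post uses; the names are made up), compare a bare magic number with a named value:

```python
from enum import IntEnum

class OrderStatus(IntEnum):
    PENDING = 1
    SHIPPED = 42

status_from_db = 42  # value read back from some database column

# Opaque: why is 42 special?
assert status_from_db == 42

# Self-documenting: the comparison explains itself
assert status_from_db == OrderStatus.SHIPPED
```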

Enums however, as we all know, come with a breed of problems which may sometimes need to be architected around:

  • They can be assigned unexpected values. var badValue = (MyEnum)123; will force the value 123 into the variable which for all intents and purposes could be out of range.
  • They have a default value. Because enums are backed by integral types they have a default value when declared, meaning they will be set to zero by default, which necessitates a default field assigned the zero value. Something like enum Direction { None = 0, North, East, South, West }. Note the explicit zero is optional in this example.

There are lots of ways to solve the issues inherent to enums. For example, if your enum exists to list the states of an object or system, consider using the state pattern. If you're listing actions then the command pattern may be more appropriate. And if the enum ends up controlling some code branching then the strategy pattern could be a better choice.

Enumeration classes

If you really need enums there is a final pattern which I've seen used to great effect in many code bases and is gaining popularity with me. The idea is to turn the enum into a class and add static readonly fields to it to access each of the supported values.

So in the example of colours you might see something like this:

    public class Colour<T> //yes I'm British, move on
    {
        public T Value { get; set; }
        public string Name { get; set; }

        public static readonly Colour<int> Red = new Colour<int> { Value = 1, Name = "Red" };
        public static readonly Colour<int> Green = new Colour<int> { Value = 2, Name = "Green" };
        public static readonly Colour<int> Blue = new Colour<int> { Value = 3, Name = "Blue" };
    }
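
The same "class with static instances" idea, sketched in Python for comparison (not from the post; the reverse-lookup dictionary anticipates the richer pattern discussed below):

```python
class Colour:
    _by_value = {}  # reverse lookup from value back to the instance

    def __init__(self, value, name):
        self.value = value
        self.name = name
        Colour._by_value[value] = self

    @classmethod
    def from_value(cls, value):
        """Convert a raw value (e.g. from a database) back to an instance."""
        return cls._by_value[value]

# The "enum" members are just well-known instances hung off the class
Colour.RED = Colour(1, "Red")
Colour.GREEN = Colour(2, "Green")
Colour.BLUE = Colour(3, "Blue")
```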

Once I became familiar with this technique I started to see ways in which I could improve it (like supporting comparisons etc.) so I researched further and found this little piece of knowledge by Jimmy Bogard.

Jimmy proposes extending this pattern by introducing a base class which provides much of the functionality to support a 'rich-enum' design:

public abstract class Enumeration : IComparable
{
    private readonly int _value;
    private readonly string _displayName;

    protected Enumeration()
    {
    }

    protected Enumeration(int value, string displayName)
    {
        _value = value;
        _displayName = displayName;
    }

    public int Value
    {
        get { return _value; }
    }

    public string DisplayName
    {
        get { return _displayName; }
    }

    public override string ToString()
    {
        return DisplayName;
    }

    public static IEnumerable<T> GetAll<T>() where T : Enumeration, new()
    {
        var type = typeof(T);
        var fields = type.GetFields(BindingFlags.Public | BindingFlags.Static | BindingFlags.DeclaredOnly);

        foreach (var info in fields)
        {
            var instance = new T();
            var locatedValue = info.GetValue(instance) as T;

            if (locatedValue != null)
            {
                yield return locatedValue;
            }
        }
    }

    public override bool Equals(object obj)
    {
        var otherValue = obj as Enumeration;

        if (otherValue == null)
        {
            return false;
        }

        var typeMatches = GetType().Equals(obj.GetType());
        var valueMatches = _value.Equals(otherValue.Value);

        return typeMatches && valueMatches;
    }

    public override int GetHashCode()
    {
        return _value.GetHashCode();
    }

    public static int AbsoluteDifference(Enumeration firstValue, Enumeration secondValue)
    {
        var absoluteDifference = Math.Abs(firstValue.Value - secondValue.Value);
        return absoluteDifference;
    }

    public static T FromValue<T>(int value) where T : Enumeration, new()
    {
        var matchingItem = parse<T, int>(value, "value", item => item.Value == value);
        return matchingItem;
    }

    public static T FromDisplayName<T>(string displayName) where T : Enumeration, new()
    {
        var matchingItem = parse<T, string>(displayName, "display name", item => item.DisplayName == displayName);
        return matchingItem;
    }

    private static T parse<T, K>(K value, string description, Func<T, bool> predicate) where T : Enumeration, new()
    {
        var matchingItem = GetAll<T>().FirstOrDefault(predicate);

        if (matchingItem == null)
        {
            var message = string.Format("'{0}' is not a valid {1} in {2}", value, description, typeof(T));
            throw new ApplicationException(message);
        }

        return matchingItem;
    }

    public int CompareTo(object other)
    {
        return Value.CompareTo(((Enumeration)other).Value);
    }
}

Lovely, SOLID stuff, although my first change was to generalise the class.

Further improvements

I asked a question on Stack Overflow back in 2008 about implicit conversions of enums and why C# doesn't support them (C++ does). The above pattern clearly allows this, and the answers I received extend Jimmy's ideas and provide a very rich set of features for working with classes that behave like enums.

Here's the TL;DR version:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Diagnostics;
using System.Linq;
using System.Reflection;
using System.Resources;

namespace NMatrix
{

    [DebuggerDisplay("{Value} ({Name})")]
    public abstract class RichEnum<TValue, TDerived>
                : IEquatable<TDerived>,
                  IComparable<TDerived>,
                  IComparable, IComparer<TDerived>
        where TValue : struct, IComparable<TValue>, IEquatable<TValue>
        where TDerived : RichEnum<TValue, TDerived>
    {
        #region Backing Fields

        /// <summary>
        /// The value of the enum item
        /// </summary>
        public readonly TValue Value;

        /// <summary>
        /// The public field name, determined from reflection
        /// </summary>
        private string _name;

        /// <summary>
        /// The DescriptionAttribute, if any, linked to the declaring field
        /// </summary>
        private DescriptionAttribute _descriptionAttribute;

        /// <summary>
        /// Reverse lookup to convert values back to local instances
        /// </summary>
        private static readonly SortedList<TValue, TDerived> _values = new SortedList<TValue, TDerived>();

        #endregion

        #region Constructors

        protected RichEnum(TValue value)
        {
            this.Value = value;
            _values.Add(value, (TDerived)this);
        }

        #endregion

        #region Properties

        public string Name
        {
            get
            {
                return _name;
            }
        }

        public string Description
        {
            get
            {
                if (_descriptionAttribute != null)
                    return _descriptionAttribute.Description;

                return _name;
            }
        }

        #endregion

        #region Initialization

        static RichEnum()
        {
            var fields = typeof(TDerived)
                .GetFields(BindingFlags.Static | BindingFlags.GetField | BindingFlags.Public)
                .Where(t => t.FieldType == typeof(TDerived));

            foreach (var field in fields)
            {
                /*var dummy =*/ field.GetValue(null); // forces static initializer to run for TDerived

                TDerived instance = (TDerived)field.GetValue(null);
                instance._name = field.Name;
                instance._descriptionAttribute = field.GetCustomAttributes(true).OfType<DescriptionAttribute>().FirstOrDefault();
            }
        }

        #endregion

        #region Conversion and Equality

        public static TDerived Convert(TValue value)
        {
            return _values[value];
        }

        public static bool TryConvert(TValue value, out TDerived result)
        {
            return _values.TryGetValue(value, out result);
        }

        public static implicit operator TValue(RichEnum<TValue, TDerived> value)
        {
            return value.Value;
        }

        public static implicit operator RichEnum<TValue, TDerived>(TValue value)
        {
            return _values[value];
        }

        public static implicit operator TDerived(RichEnum<TValue, TDerived> value)
        {
            // The explicit cast uses a reference conversion, which avoids
            // recursively re-invoking this user-defined conversion operator
            return (TDerived)value;
        }

        public override string ToString()
        {
            return _name;
        }

        #endregion

        #region IEquatable<TDerived> Members

        public override bool Equals(object obj)
        {
            if (obj != null)
            {
                if (obj is TValue)
                    return Value.Equals((TValue)obj);

                if (obj is TDerived)
                    return Value.Equals(((TDerived)obj).Value);
            }
            return false;
        }

        bool IEquatable<TDerived>.Equals(TDerived other)
        {
            return other != null && Value.Equals(other.Value);
        }


        public override int GetHashCode()
        {
            return Value.GetHashCode();
        }

        #endregion

        #region IComparable Members

        int IComparable<TDerived>.CompareTo(TDerived other)
        {
            return Value.CompareTo(other.Value);
        }

        int IComparable.CompareTo(object obj)
        {
            if (obj != null)
            {
                if (obj is TValue)
                    return Value.CompareTo((TValue)obj);

                if (obj is TDerived)
                    return Value.CompareTo(((TDerived)obj).Value);
            }
            return -1;
        }

        int IComparer<TDerived>.Compare(TDerived x, TDerived y)
        {
            return (x == null) ? -1 :
                   (y == null) ? 1 :
                    x.Value.CompareTo(y.Value);
        }

        #endregion

        public static IEnumerable<TDerived> Values
        {
            get
            {
                return _values.Values;
            }
        }

        public static TDerived Parse(string name)
        {
            foreach (TDerived value in Values)
                if (string.Equals(value.Name, name, StringComparison.OrdinalIgnoreCase))
                    return value;

            return null;
        }
    }
}
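To illustrate how the class is consumed, here's a minimal sketch of a derived type. The `Status` type, its values and the `"Awaiting review"` text are all hypothetical, and the sketch assumes the `RichEnum<TValue, TDerived>` class above is in scope:

```csharp
using System.ComponentModel;

// Hypothetical derived rich enum; the names and values are illustrative only
public sealed class Status : RichEnum<int, Status>
{
    [Description("Awaiting review")]
    public static readonly Status Pending = new Status(0);
    public static readonly Status Approved = new Status(1);

    private Status(int value) : base(value) { }
}

// Usage:
// Status s = Status.Convert(0);              // the Pending instance
// int raw = Status.Approved;                 // implicit conversion to the underlying int
// string text = Status.Pending.Description;  // "Awaiting review", via the attribute
// Status p = Status.Parse("pending");        // case-insensitive name lookup
```

Note that the base class's static constructor does the name and attribute wiring via reflection, which is why the members must be declared as public static readonly fields of the derived type.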

Enums aren't always evil

Please don't read this post as an attempt to bad-mouth the humble enum; as I mentioned at the beginning, I'm a fan. My advice is simply that their usage should be tempered with an informed understanding of their strengths and weaknesses, which will hopefully lead to cleaner, more coherent and more manageable code.

Tuesday, March 19, 2013 4:50:15 PM (GMT Standard Time, UTC+00:00)  #    Comments [0]

Posted under: Programming

# Friday, January 25, 2013

After discovering the potential of Microsoft's latest language offering TypeScript, I feverishly set about researching whether or not I could get it working in the office.

Unfortunately I'm stuck in the 80's running Windows Server 2003 which, needless to say, slows me down at every conceivable turn. Installing bleeding-edge tech on a decade-old OS is like trying to talk a 2-year-old into taking advantage of a lie-in every morning!

My first port of call was to find out if it was even possible. It turns out the installer for the TypeScript plugin won't even run on anything below Windows 7 (including Vista?), so I needed to find another way around.

After futile attempts to get WebMatrix installed (along with the plugin) I gave up that idea and dug around a bit more.

Fortunately I remembered I had a Windows 7 VM installed on my machine, which meant I could theoretically install Visual Studio 2012 Express for Web plus the plugin on that, compile the .ts files back out to my project repo, and consume the .js files as normal.

Once downloaded and installed, I grabbed the plugin and checked everything was working. The next and final step was to run the compiler (surely the TypeScript 'compiler' is really a transpiler? Never mind.). Currently I'm running tsc.exe from the command prompt, which means I need to point it at my project root via a UNC share.

If you try to run tsc against \\someshare you'll be greeted with cmd's "CMD does not support UNC paths as current directories." However! A little-known cmd feature is the ability to push and pop UNC paths using the pushd and popd commands. pushd maps the share to a temporary drive letter and makes it your current directory for the session, letting you 'compile' to and from that directory as normal.
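In practice the session looks something like this. The share name and file names are hypothetical, and the drive letter is whatever cmd picks (shown here as Z: for illustration):

```
C:\> pushd \\someshare\projects\myapp
Z:\> tsc app.ts --out app.js
Z:\> popd
C:\>
```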

So now I'm developing TypeScript inside a Windows 7 VM, running Visual Studio 2012 Express for Web, and outputting the translated .js into my project's directory ready for use.

Less than elegant, but a workable workaround until the upgrade fairies look upon me kindly.

Friday, January 25, 2013 9:02:30 AM (GMT Standard Time, UTC+00:00)  #    Comments [0]

Posted under: Programming

# Wednesday, January 23, 2013

I recently had to figure out a design which required setting immutable state from chained constructors, with some trivial logic performed prior to initialisation.

When performing common logic across chained constructors, the usual recommendation is to isolate the shared work into a private Initialise() method which is invoked from the last (master) constructor in the chain.

Something like:

public MyClass(int data) : this(data, null)
{
    
}

public MyClass(int someNumber, string someText)
{
    Initialise(someNumber, someText);
}

private void Initialise(int someNumber, string someText)
{
    //some logic prior to initialising the object's state
}

Simple stuff. It gets slightly more complicated when the object's fields are declared as readonly, because the C# compiler will only let you assign them in a constructor or a field initialiser.

If some centralised (due to chaining) logic is required prior to their initialisation, the Initialise() method described above will produce a compile error when it tries to assign them.

One solution, which certainly isn't the most aesthetic, is to call the Initialise method passing each readonly field as an out parameter. C# permits a readonly field to be passed by ref or out from within a constructor, so this allows you to perform your logic and initialise the readonly fields outside the constructors themselves.

Example:

private readonly int _someNumber;
private readonly string _someText;

public MyClass(int someNumber) : this(someNumber, null)
{
    
}

public MyClass(int someNumber, string someText)
{
    Initialise(out _someNumber, someNumber, out _someText, someText);
}

private void Initialise(out int _someNumber, int someNumber, out string _someText, string someText)
{
    //some logic prior to initialising the object's state,
    //then assign the out parameters (they must be assigned before the method returns)
    _someNumber = someNumber;
    _someText = someText;
}

Further reading:
SO question which helped me solve the issue.

Wednesday, January 23, 2013 10:54:52 PM (GMT Standard Time, UTC+00:00)  #    Comments [0]

Posted under: Programming

# Saturday, January 19, 2013

Currently I'm at the point in Rise where I'm trying to take the data stored in the map file and translate it to a structure that allows me to efficiently render it to screen.

I have to store each tile in a sort of 'stack' which holds all the height information for things like walls and buildings. My first attempt stored these tiles as an array within the larger array of all the map tiles.

I optimised my Tile type down to 16 bytes and _was_ planning on having maps as big as 1024x1024. With this structure my current in memory footprint for all this map data was weighing in at an easy 300+ meg! Clearly a re-think was required.
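The arithmetic behind that figure, roughly (assuming one 16-byte Tile per stack entry):

```
1024 × 1024 cells × 16 bytes ≈ 16 MiB for a single flat layer
300 MB ÷ 16 MiB ≈ 19
```

So a 300+ MB footprint implies something like 19 stacked tiles per cell on average once walls and buildings add entries to each cell's array.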

The problem I see with this approach is the redundant tiles stored for areas of the map which are completely flat. And let's face it, that will be most of the map (on average). My next approach will therefore be to store the tiles as more of a linked list and forego the ability to perform random access.
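Purely as a sketch of the idea, not the game's actual design: flat cells could store a single ground tile inline, with only cells containing walls or buildings paying for extra linked nodes. All the type and field names here are hypothetical:

```csharp
// Hypothetical sketch: a 16-byte tile plus an optional linked stack per cell.
// Flat ground costs one Tile; only tall cells allocate stack nodes.
public struct Tile
{
    public ushort TypeId;
    public ushort Flags;
    public int Height;
    public long Payload; // remaining tile data, padding the struct to ~16 bytes
}

public sealed class TileNode
{
    public Tile Tile;
    public TileNode Above; // next tile up in this cell's stack, or null
}

public sealed class MapCell
{
    public Tile Ground;    // every cell has a ground tile
    public TileNode Stack; // null for the (common) flat case
}
```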

I'll try to keep you posted on exactly how I end up doing this.

Saturday, January 19, 2013 8:12:24 AM (GMT Standard Time, UTC+00:00)  #    Comments [0]

Posted under: Rise

# Tuesday, January 15, 2013
Dan Maxwell's The fairy snag

After many attempts at working out an art workflow for Rise, I decided to call upon the talents of a good friend of mine to help with the asset creation. Dan Maxwell is an accomplished digital artist who's been working on game art for the past few years, particularly the cross-platform series of Gamebook Adventures titles that are proving increasingly popular on handheld devices.

With experience in web design and print as well, I'm confident Dan has the expertise to deliver the visuals I'm looking for in Rise, as well as the professionalism to help me bring the project to completion.

You can see examples of Dan's work on his Behance portfolio, and further contact details via LinkedIn.

Hopefully, I'll soon be able to start posting some visuals as the project starts to take shape.

Tuesday, January 15, 2013 10:30:08 PM (GMT Standard Time, UTC+00:00)  #    Comments [0]

Posted under: Rise

About the author/Disclaimer

Disclaimer
The opinions expressed herein are the personal opinions of Adam Naylor and do not represent my current or previous employer's view in any way.

© Copyright 2014
Adam Naylor
Creative Commons Licence
© 2014, Adam Naylor