Gulp, Bower, Visual Studio and Azure

That’s a lot of technology choices in one headline. But, as you’ve probably guessed, I’ve been spending some time trying to get all of these working seamlessly together, and, sadly, I can only report that it doesn’t quite work.

Scott Hanselman introduced gulp, grunt, bower and NPM support for Visual Studio 2013 a few days ago, and I can happily say that these plugins work really well. I wanted to change our project to use Bower for front-end script management, and to automatically be able to download those scripts using Gulp – all, of course, tied to our build process. This is pretty simple. Here’s the package.json file that I used:

{
    "name": "Tradies-Web",
    "version": "1.0.0.0",
    "devDependencies": {
        "gulp": "^3.8.8",
        "gulp-bower": "0.0.6",
        "preen": "^1.1.2"
    }
}

I used Angular and Angular-UI-Select as my two test packages for bower, which resulted in the following bower.json file:

{
    "name": "My Awesome App",
    "private": true,

    "dependencies": {
        "angular": "1.3.0",
        "angular-ui-select": "^0.8.3"
    },
    "preen": {
        "angular": [ "angular.js" ],
        "angular-ui-select": [ "dist/select.css", "dist/select.js" ]
    }
}

One thing I’ve found is that the packages from Bower give you a lot of cruft, which is why I’m using preen to remove everything that I don’t want.

Anyway – the final piece of the puzzle is the gulpfile.js which ties it all together. I wanted to use gulp to grab all of my scripts, then to preen them back to only the files I wanted. This took a bit of trial and error to figure out; I was initially trying to pipe the output from bower into preen, but that didn’t work. It turns out that the trick is to define the steps as two different tasks, then make the preen task dependent on the bower task. As follows:

var bower = require("gulp-bower"),
    gulp = require("gulp"),
    preen = require("preen");

gulp.task("get-scripts", function () {
    // gulp-bower returns a stream, which tells gulp when the task is done
    return bower();
});

gulp.task("clean-scripts", ["get-scripts"], function (callback) {
    return preen.preen({}, callback);
});

gulp.task("default", ["get-scripts", "clean-scripts"]);

Once I’d installed Task Runner Explorer, running this inside Visual Studio was simply a matter of right-clicking on the default task and hitting “Run”. Even better, I could make this a pre-build step. Huzzah!
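
As an aside, newer versions of Task Runner Explorer persist these build bindings as a special comment at the top of your gulpfile; it ends up looking something like this (the exact syntax may differ between versions):

/// <binding BeforeBuild='default' />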

Gulp support inside Visual Studio

It all works brilliantly.

That is, of course, unless you’re trying to run your website inside a web role using the Azure Emulator. All of a sudden, Azure seems to take a long time to start the web role.

Azure Roles starting

After this friendly dialog has been showing for a minute (or so), you’re then greeted with this not-quite-as-friendly message:

Azure Roles stalled

Let me tell you now that continuing to wait is not the right choice. Believe me, I tried!

After much searching and diagnosing, I finally figured out that the problem is the node_modules folder that gets created when you run npm install in your website folder. Simply having this folder in your website will completely break the emulator. The only reference I could find to this problem was this Stack Overflow question, which at the time of writing still has no solution.

So, at this point, I’m left with no choice but to not use gulp, though I’m still happy to use bower (which can be installed globally via node) to manage my client-side scripts. It’s one step forward, I guess!
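
If you go the same way, bower works happily from the command line once it’s installed globally via npm; from your website folder, restoring your scripts looks like this:

npm install -g bower
bower install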

Wake on Lan, Wake on Internet

While trying to reduce my power bill, I decided that I needed to reduce the power that my PC at home was using. The obvious solution is to turn the machine off when I’m not using it. However, I need to remote into my machine from work every now and then, and having the machine off makes this impossible.

I then tried tinkering with the Windows power and sleep settings, which worked well in powering down the machine after a set time of inactivity; but once the machine is sleeping, my TeamViewer (or RDP) connection is no longer available.

On the suggestion of a friend, I then tried SmartPower, which allowed me to have my machine awake and available for set time periods – for example between 8am and 5pm, i.e. business hours – and let it sleep the rest of the time. The only problem with this approach is that my machine was drawing power for nine hours a day when, ninety percent of the time, I didn’t need it available.

My next stop was to have my computer go to sleep after five minutes of inactivity and have it wake on demand. I wanted to wake my computer using my phone, not only from the local Wi-Fi network but also over the internet.

The following outlines how I achieved this on a Windows 8 machine:

Step 1 – Configure Your Network Connection to allow Wake On Lan – Magic Packets

In Device Manager, open your network adapter’s properties: on the Power Management tab, tick “Allow this device to wake the computer”, and on the Advanced tab, enable the “Wake on Magic Packet” property (the exact wording varies between drivers).

Step 2 – Open the Wake on Lan port in your firewall settings
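
The details vary by firewall, but if you’re using the built-in Windows Firewall, a rule along the lines of the following (run from an elevated command prompt) should do the trick – Wake on Lan conventionally uses UDP port 9:

netsh advfirewall firewall add rule name="Wake on Lan" dir=in action=allow protocol=UDP localport=9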

Step 3 – Create a Port Forwarding Rule on your Router

It is near impossible for me to describe the steps necessary to create a port forwarding rule without knowing the details of your router, so I won’t even try. I suggest you google how to create a port forwarding rule for your model of router, and then forward port 9 to the machine you want to wake up.

On my Telstra-provided ADSL modem, I created a port forwarding rule like so:

Step 4 – Download a Wake on Lan (WOL) app for your Smart Phone

I have an iPhone and found an iOS 7 app called WakeUp. It costs 99c USD and is very easy to use. There are tons of apps that can meet your needs, and some are free; I wanted one that was not only functional but also easy to use and well designed.

Step 5 – Set up your Smart Phone app to wake your computer from the Wi-Fi and internet

The information you need to enter depends on the app and how it works. Generally, though, you will need the MAC address of the machine you want to wake, its IP address on your local network (for Wake on Lan), and the IP address or host name of your network on the internet.

To get the local network (LAN) IP address and MAC address (physical address) details, open a command prompt in Windows and type: ipconfig /all

To get the internet IP address of your network, you will need to either:

a) use the external IP address, which you can get through your modem settings, or

b) use an external dynamic DNS service (such as No-IP or DynDNS) to get a host name for your dynamic IP address.

Depending on your ISP connection, you may have a dynamic IP address (one that changes regularly) or a static IP address. If you have a dynamic IP address, it is best to set up an account with a dynamic DNS provider, which will give you a human-readable host name that stays up-to-date with whatever IP address your ISP assigns you. Setting up a dynamic DNS provider account is beyond the scope of this article, as the configuration for each router is different; a quick google search may assist you here.

Once you have all the settings needed to wake your computer, either over your Wi-Fi connection or over the internet, you just have to enter them into your chosen smart phone app.
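
For the curious, there’s no magic in a magic packet: it’s just a UDP broadcast containing six 0xFF bytes followed by the target machine’s MAC address repeated sixteen times, conventionally sent to port 9. A minimal C# sketch (the MAC address below is a placeholder for your own):

using System.Linq;
using System.Net;
using System.Net.Sockets;

// Placeholder MAC address of the machine to wake
byte[] mac = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };

// Magic packet: six 0xFF bytes, then the MAC repeated sixteen times
byte[] packet = Enumerable.Repeat((byte)0xFF, 6)
    .Concat(Enumerable.Range(0, 16).SelectMany(i => mac))
    .ToArray();

using (var client = new UdpClient())
{
    client.EnableBroadcast = true;
    client.Send(packet, packet.Length, new IPEndPoint(IPAddress.Broadcast, 9));
}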

I hope this article helps jump-start you on your journey to reducing your computer’s power usage.

WebAPI, Autofac, and Lifetime Scopes

One of the hardest things to get your head around when using Autofac (or really any other dependency injection container) is lifetimes. As a primer, I’d suggest you read Nicholas Blumhardt’s excellent article: An Autofac Lifetime Primer. Bookmark that link; you’ll need to read it more than once to fully digest it!

Specifically with Autofac, the question I’ve seen asked on multiple occasions is:

What’s the difference between InstancePerLifetimeScope, InstancePerHttpRequest, and InstancePerApiRequest?

I’ve often wondered this myself, and I haven’t found a good answer to it anywhere. So, I’ll attempt to answer that question here.

tl;dr

  • Use InstancePerLifetimeScope if you want your dependency to be resolvable from any scope. Your dependency will be disposed of when the lifetime is disposed. In the case of the root scope, this will not be until the application terminates.
  • Use InstancePerApiRequest/InstancePerHttpRequest if your dependency should only be resolvable from the context of an HTTP/API request. Your dependency will be disposed of at the conclusion of that request. (See the sketch below.)
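
To make the difference concrete, here’s a minimal registration sketch (the service and implementation names are invented for illustration):

using Autofac;
using Autofac.Integration.WebApi; // provides InstancePerApiRequest()

var builder = new ContainerBuilder();

// Resolvable from any lifetime scope, including the root scope; disposed
// when the scope it was resolved from is disposed.
builder.RegisterType<CachingService>()
       .As<ICachingService>()
       .InstancePerLifetimeScope();

// Only resolvable from the lifetime scope that Web API spins up for each
// request; disposed as soon as that request completes.
builder.RegisterType<DocumentRepository>()
       .As<IDocumentRepository>()
       .InstancePerApiRequest();

var container = builder.Build();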

Continue reading

When @echo off doesn’t turn echo off

Ignoring the debate about the relevance of batch files compared to a proper scripting language, I recently ran into an “interesting” problem with a batch file where using @echo off wasn’t actually turning command echo off. Here are the first few lines of the batch file I was using:

@echo off
cls

REM -- Compiles FSQL scripts into single SQL executable --
IF EXIST Scripts\UPGRADE.sql del /Q Scripts\UPGRADE.sql

Running the code produced the following output:

Batch file clearing screen

As you can see, echo is definitely not off! I ignored the problem for a while, since – let’s be honest – it’s fairly inconsequential; but after revisiting the script again a week later, the engineer in me could simply no longer handle the fact that @echo off wasn’t working!

It turns out the solution was quite simple, but clearing the screen at the start of the script was preventing me from seeing what the problem was. When I removed the call to cls, my output window changed to this:

Batch file without clearing screen

Oh hello, what are those weird characters in front of the @echo off call? Why, it looks suspiciously like a byte order mark! Why yes; it is a byte order mark! For some reason, the file had been saved as UTF-8 with BOM, and since cmd.exe doesn’t understand the BOM, those extra bytes stop the first line from being recognised as @echo off. I have no idea what editor the file was originally created in, or why such a “fancy” encoding was used – but changing it to plain old UTF-8 (without the BOM) solved the issue and put me right back into batch file nirvana.
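
If your editor doesn’t give you control over the encoding, a couple of lines of C# will re-save a file without the BOM (a quick sketch – point it at your own file):

using System.IO;
using System.Text;

// ReadAllText detects and strips any BOM on the way in; writing with
// new UTF8Encoding(false) ensures no BOM goes back out.
string text = File.ReadAllText("build.bat");
File.WriteAllText("build.bat", text, new UTF8Encoding(false));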

Evaluating a Linq to Entities IQueryable anonymous Select projection outside of the anonymous scope

That title is a mouthful but I think it’s correct.

I’ll demonstrate through code:

public IQueryable GetDocumentIds() {
  var query = from document in dbContext.Documents
              select new { Id = document.DocumentId };
  return query;
}

...

var docIds = GetDocumentIds().ToList();

Anonymous types lose their static type information when they leave their declaring scope – e.g. when returned from a method.
As such, you can’t call ToList() on the IQueryable, because the compiler doesn’t know what element type to use.

You might think that supplying the type Object or dynamic will work, but you’d be wrong:

GetDocumentIds().ToList<Object>();
GetDocumentIds().ToList<dynamic>();

or that casting to Object or dynamic will solve your problems, but no…

GetDocumentIds().Cast<object>().ToList();
GetDocumentIds().Cast<dynamic>().ToList();

The only way I’ve found to evaluate and iterate over an anonymously-typed “Linq to Entities” projection outside of its scope is to enumerate over the IQueryable:

IList<dynamic> list = new List<dynamic>();

foreach (var item in GetDocumentIds()) {
    list.Add(item);
}

This is not an ideal solution, but it’s the only one I’ve found to date for this exact scenario, where the result type is unknown at compile time.

NOTE: If you can use a typed IQueryable<T>, your life will be more joyous.

Or, if you are looking for a more succinct syntax that deceives you into thinking it’s a better solution:

IList<dynamic> results = Enumerable.Cast<object>(query).Cast<dynamic>().ToList();
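
To expand on the note above: if you can give the projection a name, the whole problem evaporates. A minimal sketch (DocumentIdDto is a type I’ve invented for this example):

public class DocumentIdDto
{
    public int Id { get; set; }
}

public IQueryable<DocumentIdDto> GetDocumentIds()
{
    return from document in dbContext.Documents
           select new DocumentIdDto { Id = document.DocumentId };
}

// The caller can now evaluate the query directly:
var docIds = GetDocumentIds().ToList();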

Integrating Google Tag Manager and Google Analytics in a Single Page Application

Marketing folks love their Google Tag Manager (GTM), and developers love their Google Analytics (GA), and thankfully, the good folks at Google have made it easy enough to get the two working together. However, if you’re one of those bleeding-edge types that only writes Single Page Applications (SPA), getting useful data to flow through GTM into GA is a bit daunting! But, with a little bit of setup work, it’s actually quite simple.

For this example, I’m working with Durandal as my SPA framework, but whatever you’re using should be fine, too. The important thing to know is how to intercept your “page navigation” equivalent event.

Step 1: Add GTM to your application’s “root” page.

Every SPA framework (that I know of!) has some sort of “root” page which is the launching point for everything else. In Durandal, this is Index.cshtml by default. The first thing you need to do is load the GTM container snippet into this root page. (You can find your Container Snippet in your GTM account, under Users & Settings –> Settings).

<body>
    @if (!HttpContext.Current.IsDebuggingEnabled)
    {
        <!-- Google Tag Manager -->
        <!-- Script trimmed for brevity -->
        <!-- End Google Tag Manager -->
    }

    <!-- page body -->
</body>

I’ve trimmed the script here because it’s ugly to look at; but effectively what it does is asynchronously injects the GTM script (from Google’s server) directly into your webpage at runtime.

Step 2: Push Page Change events to GTM via the dataLayer

Since a hash change doesn’t trigger a full page load (and so GTM never sees it as a page view), you have to manually push your page changes to GTM. If you’re using Durandal’s Router plugin, the easiest place to do this is in the onNavigationComplete event. Here’s how we did it:

var recordPageView = function (url) {
    if (!url)
        return;

    var dataLayer = window.dataLayer;
    if (!dataLayer)
        return;

    // Push page view event through the GTM dataLayer
    dataLayer.push({
        event: "virtualPageView",
        virtualPage: url
    });
};

var onNavigationComplete = router.onNavigationComplete;

router.onNavigationComplete = function (routeInfo, params, module) {
    // Call default onNavigationComplete
    onNavigationComplete.call(this, routeInfo, params, module);

    // Do all of your other things...
    // ...

    // Analytics shizzle
    recordPageView(routeInfo.url);
};

Note that we’re calling the default onNavigationComplete event too, just in case Durandal relies on it to do anything special. (In the current version, all it does is set the page title.)

Note also that you can call your event whatever you want (we’ve used “virtualPageView”), and you can change the property name to whatever you want (we’ve used “virtualPage”). Actually, you can pass anything you want into the dataLayer – but it will only be useful if you also tell Google Tag Manager what to expect.
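
For example, you could push extra properties alongside the page URL, but GTM will simply ignore them unless a macro is mapped to them (the orderTotal property here is purely hypothetical):

// Extra properties are harmless, but useless without a matching GTM macro
window.dataLayer.push({
    event: "virtualPageView",
    virtualPage: "/orders/complete",
    orderTotal: 42.50
});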

Step 3: Setup a Rule in Google Tag Manager to listen for your event

Now that you’re publishing your page change events, you need to tell GTM to listen for them.

From the Overview page, click the “New Rule” button. You can name the rule whatever you like; we’ve gone with, “Virtual Page View event”. The important part is the rule matching. Effectively, we need to tell GTM that the rule is a match when it sees the event that we’re sending. As such, you should set your condition to {{event}} | equals | virtualPageView.

If you used a different event name in your code (back in Step 2), that’s what you should be using here. Here’s how it looks when it’s configured.

GTM Rule

Step 4: Setup a Macro in Google Tag Manager to extract your page URL

Now you need to tell GTM how to access your virtual page URL, using a Macro.

From the Overview page, click “New Macro”. You can call your new macro whatever you like – we’ve gone with the name “Virtual Page View”. The important part of this step is ensuring that your Data Layer Variable Name corresponds to the property name that you’re publishing from your page. So, if you’re using “virtualPage” like we did above, that’s what your variable name should be. Here’s how it should look when it’s all configured:

GTM Virtual Page View macro

Step 5: Setup your Analytics tag in Google Tag Manager

Almost there! Now you just have to tell GTM to activate your analytics tag whenever it hears your new virtual page event. The way you do this will depend on whether you’re using traditional Analytics or the newer Universal Analytics. We’ll cover both.

To get started (for either method), click New Tag from your GTM account overview page. Once again, you can name your tag whatever you like; we’ve gone with the nice and simple title, “Analytics”.

  • Select either Google Analytics or Universal Analytics as appropriate for your Tag Type
  • Use your Google Analytics account ID (e.g. UA-12345678-1) as the Web Property ID (for Traditional Analytics) or the Tracking ID (for Universal Analytics).
  • Leave Track Type as “Page View”.
  • Near the bottom of the page, click the +Add Rule to Fire Tag button. In the dialog that comes up, you should see the rule that you configured in Step 3 – select that. Do not select the “All Pages” rule!

Depending on whether you’re using Traditional Analytics or Universal Analytics, the last piece of configuration is slightly different.

  • Under More Settings, expand Basic Configuration.
  • If you are using Traditional Analytics, click “Virtual Page Path”. Click the +{{macro}} button, and you should see the macro you configured in Step 4 – select this.
  • If you are using Universal Analytics, click the +{{macro}} button next to Document Path, and select your macro from Step 4.

Here’s how it should look when you’re all configured (note: this is the Traditional Analytics setup, but the Universal Analytics setup looks almost the same).

GTM Traditional Analytics 1

GTM Traditional Analytics 2

Step 6: Publish your Google Tag Manager container

You need to tell GTM that you want to go live with your setup. From the Overview page, simply click Create Version, and then click Publish.

That’s it! You should now have your Analytics data being fed via Tag Manager.

To test it out, simply go to your website and start clicking around. If you’ve done everything correctly, you should see your page views coming in to your Analytics account.

Credit where credit’s due: I found this Google forums post quite helpful for pointing me in the right direction.

How to: Get anchor navigation working with Durandal

I’ve started using Durandal as my SPA framework of choice, and so far I really like it. I had actually written a (much less mature) framework to do lots of the same stuff – essentially, getting Knockout, Sammy, Require, jQuery, (etc) all working together – so I’m glad that I can now just use Durandal to do it all for me!

One disadvantage of the Single Page App model is that it “breaks” the traditional anchor navigation of HTML, since it uses the URL hash to define the route to be activated. This is a fair enough thing to lose given everything that you gain from using a single page app; however, it would be great if you could get the benefits of both – for example, to allow you to use a plugin like Bootstrap’s ScrollSpy.

tl;dr

  • You can use a regular expression which includes a bookmark hash when you’re configuring your URL;
  • You need to defer scrolling to your bookmark’s location until after the view has been attached;
  • You can use the routeInfo.hash passed to your view model’s activate method to generate your bookmark URLs dynamically.

Continue reading

Passing your ASP.NET MVC application’s base URL to javascript using RequireJS

I’ve been using RequireJS in conjunction with ASP.NET MVC on my latest project at work, and the combination is quite good! One thing I’ve struggled with though is passing the base URL of the application into my javascript files. Traditionally, I’ve done this using a property like UrlBase on the javascript window object, but now that just feels dirty.

Thankfully, there’s a fairly easy and clean way to accomplish this goal, which I’ll share with you now. First, you need to establish what the base URL is. Lots of people seem to have their own way of doing this, but here’s the one that I find most reliable:
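
The gist of it is to build the URL up from the current request in a controller (or base controller) and push it through the ViewBag – a sketch only, and the property name is my own:

// Compute the site's base URL and expose it to the view
ViewBag.UrlBase = string.Format("{0}://{1}{2}",
    Request.Url.Scheme,
    Request.Url.Authority,
    Url.Content("~/"));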


Update: Well, you learn something new every day, and OJ has pointed out that there’s a far easier way, simply using Request.ApplicationPath. This simplifies things a lot, as we no longer need to push anything through the ViewBag; instead we can write the application path directly into the page when we’re defining our appSettings module.


Now, the “tricky” part – hooking this up using RequireJS. Scanning through the RequireJS documentation reveals that you can name your modules when they’re defined. Generally this is discouraged as it limits portability, but that’s fine for us here, as portability is pretty much off the table.

The other key point is that you must not try to define your module before the require configuration has loaded. As you can see below, we’re defining a new module named appSettings after we’ve configured RequireJS:

<!-- Contains the require configuration -->
<script src="@Url.Content("~/Scripts/require.js")"
        data-main="@Url.Content("~/scripts/application")" 
        type="text/javascript"></script>
    
<!-- Inject an "appSettings" module -->
<script type="text/javascript">
    define("appSettings", function () {
        return {
            urlBase: "@Request.ApplicationPath"
        };
    });
</script>

From articles I’ve read on the web, there is a possible race condition here – require may not have finished configuring before the appSettings module is defined – but I’ve yet to see it occur.

Anyway – now that we’ve got our appSettings module, we can just include it as a dependency for other modules when we need to:

define(["appSettings"], function (appSettings) {
    console.log(appSettings.urlBase);

    // Define your module!
});

And, once again, all is well with the world.

Unity, stop throwing SynchronizationLockException

I always like to call a spade a spade. So when it comes to having exceptions thrown, I’d like to think they pop up in exceptional circumstances. Unfortunately, that doesn’t hold true when using the Microsoft Unity IoC container.

Go ahead and turn on first-chance exceptions, and you will notice that every time you call RegisterInstance, Unity throws and catches a SynchronizationLockException.

The easiest way around this is to update Unity via NuGet, since it has apparently been fixed in 2.1.505.2. But for those of us using it as part of Enterprise Library 5, or who don’t have the luxury of upgrading the component, I prefer the following workaround proposed by Koen VV on Stack Overflow:

/// <summary>
/// KVV 20110502
/// Fix for bug in Unity throwing a synchronizedlockexception at each register
/// </summary>
class LifeTimeManager : ContainerControlledLifetimeManager
{
    protected override void SynchronizedSetValue(object newValue)
    {
        base.SynchronizedGetValue();
        base.SynchronizedSetValue(newValue);
    }
}

Now just use this lifetime manager when registering your instance and you are good to go:

container.RegisterInstance(myInstance, new LifeTimeManager());

MVC 3 SEO Friendly URLs

Continuing on from my previous article on adding an XML sitemap, today we will be looking at adding SEO-friendly URLs to MVC 3.

Rather than re-inventing the wheel here, I would instead redirect you to this awesome article. One of the reasons I like this article is that it still lets you use your database indexes, since you’re still fetching products by their primary key.

The only thing I would add to the article is the ability to redirect mysite.com/Product/1/RandomString to mysite.com/Product/1/Product-Name, since we already know which product the user is trying to access even if there’s a typo in the product name. This is also handy if you have already deployed your website with the previous routes.

string expectedName = product.Name.ToSeoUrl();
string actualName = (productName ?? "").ToLower();

// permanently redirect to the correct URL
if (!string.Equals(expectedName, actualName))
{
    return RedirectToActionPermanent("Details", new { productName = expectedName });
}
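
For reference, ToSeoUrl() is an extension method defined in the linked article; if you’re rolling your own, a minimal sketch might look like this:

using System.Text.RegularExpressions;

public static class SeoExtensions
{
    // Lower-case the value and collapse anything that isn't a letter or
    // digit into single dashes, e.g. "My Product!" -> "my-product"
    public static string ToSeoUrl(this string value)
    {
        return Regex.Replace(value.ToLower(), "[^a-z0-9]+", "-").Trim('-');
    }
}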

Now all we need to do is update our MVCSiteMapProvider from before:

var node = new DynamicNode();
node.Title = product.Name.ToSeoUrl();
node.RouteValues.Add("id", product.Id);