The importance of build numbers

If I were to make a prediction, I would say that build numbers are rarely treated as important in the agency world of web development. That's not to say milestone releases aren't given names like "Phase 2", "August Release" or a major feature name, but for every build/release of a project in between, I'd guess build numbers are largely either ignored or never created.

It’s also easy to see why: after all, it’s not like we’re producing software that’s going out to the masses to be installed. The solution essentially ends up as a single install on a set of servers. When a new version is built it replaces everything that came before it, and if a bug is found we generally roll forward and fix it rather than ever reverting back.

Why use build numbers?

So when we’re constantly coding and improving applications in an agile world, why should we care about and use build numbers?

To put it quite simply, it’s just an easy way to identify a snapshot of code that has actually been built and then released to a server. This becomes hugely useful in scenarios such as:

  • A bug being reported by an end user
  • An issue being identified by some performance monitoring
  • An issue being picked up in some functionality downstream of the site, e.g. in an integration

Without build numbers, the only way to react to these scenarios is to look at commit dates in source control, or at whatever manual release notes may have been created, and try to work out when an issue was introduced and what changed at that time. If the issue has subsequently been fixed, you also can’t really say in which version it was fixed, other than giving a rough date.

Other advantages of build numbers can include:

  • Being able to reference a specific version that has been pen tested
  • Referencing a version that’s been tested with integrations
  • Having approval to release a specific version rather than just the latest on master
  • Anywhere you want to have a conversation referencing releases

Build numbers for deploys

The first step to using build numbers, and with the rise of CI possibly the one thing most people are already doing, is to start creating build numbers via a build server. Any build server will give you build numbers, which instantly gives you a way to know when a build was created and which commits were new within it.

Add an automated deployment setup, either using your build server or other tools like Octopus Deploy, and you will also start to get a record of when each build was deployed to each server.

Now you have an easy way to not only reference what build was on each environment and when, through the deploy history, but also a way to see what went into a build through the build server’s change log.

Tag builds in source control

Being able to see the changes that went into each build on your build server is all very good, but it’s still not an ideal situation for finding the exact code version a build relates to.

Thankfully, if you’re using TeamCity it’s really easy to set it up to create a tag in your source control with each build number. Simply go to the build features section of your project’s configuration and add a feature called “VCS Labeling”. This is a step that happens post-build in the background and will create a tag in source control including the build number. It has lots of other configuration options, so if you need different tag formats for different branches it’s got you covered.

If you’re using GitHub, once this is turned on you will be able to see a list of all the tags in the releases section.

GitHub Releases Screen

Update Assembly info

Being able to identify a build in source control and view a history of what should have been on a server at a particular time is all very good, but it’s also a good idea to be able to easily identify the build of a published version of the code. That way, just by looking at the code on a server you can tell which build version it is, and not have to rely on your deployment tool being correct.

If you’re using TeamCity this is also made super simple through a build feature called “AssemblyInfo patcher”. When using this, the build number will automatically get patched in without you having to edit AssemblyInfo.cs.
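Once the version is being patched in, the running site can report its own build number. As a rough sketch (assuming you simply want the raw version string from the compiled assembly):

using System.Reflection;

public static class BuildInfo {
    // Reads the version the build server patched into the assembly, so it can
    // be shown in a footer, status page or log entry.
    public static string GetBuildNumber() {
        return Assembly.GetExecutingAssembly().GetName().Version.ToString();
    }
}

Dropping that value into the footer of test environments makes it trivial to check which build a tester is actually looking at.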

Conclusion

By following these tips you will now be able to identify a version by looking at the published code, see a history of when each version was not only built but also released to each environment, and have an easy way to find the exact source for that build.

The build number can then be used in any conversations around when a bug was introduced, and can be referenced in release notes so everyone can keep track of which versions included which fixes in a simple to understand format.

How to create a new instance of a dependency injected object

I decided to write this post after seeing it done wrong on a number of occasions, and also not finding many great examples of how it should be done. Most articles on DI or IoC tend to focus on objects such as a logger being injected and then calling functions on that object.

Take for instance, this example in the Simple Injector documentation:

using System;
using System.Web.Mvc;

public class UserController : Controller {
    private readonly IUserRepository repository;
    private readonly ILogger logger;

    public UserController(IUserRepository repository, ILogger logger) {
        this.repository = repository;
        this.logger = logger;
    }

    [HttpGet]
    public ActionResult Index(Guid id) {
        this.logger.Log("Index called.");
        IUser user = this.repository.GetById(id);
        return this.View(user);
    }
}

Here we can see the implementations of IUserRepository and ILogger being passed into the constructor and then used in the Index method.

However, what would the example look like if we wanted to create a new user and call the user repository to save it? Creating a new instance of IUser from our controller becomes an issue, as we don’t know what the implementation of IUser is.

The incorrect implementation I usually see is someone adding a reference to the implementation and just creating a new instance of it. A bit like this:

public class UserController : Controller {
    [HttpPost]
    public ActionResult SaveUser(UserDetails newUserInfo) {

        IUser user = new User();
        user.FirstName = newUserInfo.FirstName;
        user.LastName = newUserInfo.LastName;
        this.repository.SaveUser(user);
        return this.View(user);
    }
}

This will work, but it fundamentally defeats the point of using dependency injection, as we now have a dependency on the user object’s specific implementation.

The solution I use is to add a factory method to our User Repository which takes care of creating a User object.
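The IUserRepository interface isn’t shown in this post, but for the factory approach to work it needs to expose the creation method too. A minimal sketch, based on how the repository is used in the snippets above:

using System;

public interface IUserRepository {
    IUser CreateUser();
    IUser GetById(Guid id);
    void SaveUser(IUser user);
}

The repository implementation then takes care of newing up the concrete User: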

public class UserRepository : IUserRepository {
    public IUser CreateUser() {
        return new User();
    }
}

The responsibility for creating the object now stays with the implementation, and our controller no longer needs a reference to anything other than the interface.

public class UserController : Controller {
    [HttpPost]
    public ActionResult SaveUser(UserDetails newUserInfo) {

        IUser user = this.repository.CreateUser();
        user.FirstName = newUserInfo.FirstName;
        user.LastName = newUserInfo.LastName;
        this.repository.SaveUser(user);
        return this.View(user);
    }
}

Redirect to https using URL Rewrite

There have always been reasons for pages to be served over https rather than http, such as login pages, payment screens and so on. Now more than ever it’s advisable to have entire sites running over https. Server speeds have increased to a level where the extra processing involved in encrypting page content is less of a concern, and Google now also gives a small boost to a page’s ranking (not necessarily significant, but every little helps).

If all your pages work over both https and http you’ll also need to make sure one redirects to the other, otherwise rather than getting the small ranking boost from Google, you’ll be suffering from having duplicate pages on your site.

Redirecting to https with URL Rewrite

To set up a rule to redirect all pages from http to https is relatively simple: just add the following to your IIS URL Rewrite rules.

<rule name="Redirect to HTTPS" stopProcessing="true">
  <conditions>
    <add input="{HTTPS}" pattern="^OFF$" />
  </conditions>
  <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" appendQueryString="false" />
</rule>

The conditions will ensure any page not on https will be caught and the redirect will do a 301 to the same page but on https.

301 Moved Permanently or 303 See Other

I’ve seen some posts, examples and discussions about whether the redirect type should be a 301 or a 303 when you redirect to https.

Personally I would choose 301 Moved Permanently, as you want search engines etc. to update and point to the new URL. You’ve decided that your URL from now on should be https; it’s not a temporary redirection, and you want any link ranking to be transferred to the new URL.

Excluding some URLs

There’s every chance you don’t actually want every URL to redirect to https. You may have a specific folder that can be accessed over either, for compatibility with some other “thing”. This can be accomplished by adding a match rule that is negated, e.g.

<rule name="Redirect to HTTPS" stopProcessing="true">
  <match url="images" negate="true" />
  <conditions>
    <add input="{HTTPS}" pattern="^OFF$" />
  </conditions>
  <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" appendQueryString="false" />
</rule>

In this example any URL containing the word “images” would be excluded from the rewrite rule.

Increasing the Maximum file size for Web.Config

Web.config exceeds maximum file size error

This can happen in any ASP.NET web application, but as Sitecore 8’s default web.config file is now 246 KB, it is extremely susceptible to exceeding the default 250 KB limit.

To change the size limit you need to modify/create the following registry keys:

HKLM\SOFTWARE\Microsoft\InetStp\Configuration\MaxWebConfigFileSizeInKB  (REG_DWORD)

On 64-bit machines you may have to update the following key as well:

HKLM\SOFTWARE\Wow6432Node\Microsoft\InetStp\Configuration\MaxWebConfigFileSizeInKB (REG_DWORD)

You will probably find that these keys need to be created, rather than just being updated. After changing them you will also need to reset IIS.

Alternatively

Alternatively you can leave the default value at 250 KB and split the web.config into separate files.

More information on doing this can be found here:

http://www.davidturvey.com/blog/index.php/2009/10/how-to-split-the-web-config-into-mutliple-files/

My personal preference for Sitecore projects is to update the max file size, as this allows keeping the web.config file as close to the default install as possible. The benefit of doing this is that it makes upgrades easier, rather than needing to know why your web.config doesn’t match the installation instructions.

Setting IP restrictions in IIS

It’s a frequent scenario that a website you’re in the process of building needs to be accessible over the internet before it should actually be publicly available. This can come in the form of clients needing to review staging sites before they’re live, test sites needing to be accessible to testers who may not be in a location that can access private servers, or working jointly with other suppliers.

This scenario presents a lot of dangers: the URL of a site could get leaked early, ruining a marketing strategy, or the site could end up in Google, destroying the SEO value of the client’s current site and, even worse, actually attracting real customers to it.

There are only two real methods of protecting test/staging sites. One is adding authentication to the site, restricting access to people with a valid username and password. The other is IP whitelisting, so only people from a valid IP can access the site.

In the past I’ve seen people suggest using a robots.txt file to tell search engines to ignore the site. This is guaranteed to fail; Google will still index a site whose robots file says not to. Your robots file may say don’t crawl, but that auto-generated sitemap will be obeyed and the files indexed. There will also come a time when the robots file gets copied live, de-indexing the live site, or someone forgets the file on staging and the staging site gets indexed.

Using IIS to set up IP restrictions

Using IIS to set up IP restrictions is quick and easy, and what’s best about it is you can set it at the server level and not worry about people forgetting to add it to new sites. Better still you can also easily add configuration at a website level to allow certain people to see certain sites rather than the whole box.

Installing the Feature

First you need to make sure you have the feature installed on IIS. To do this on Windows Server 2012:

IP and Domain Restrictions

  1. Go to Server Manager and click “Add roles and features”
  2. Click next to take you from the Before you begin page to Installation Type
  3. Leave Role-based selected and click next
  4. On the Server Selection screen the server you’re on should be auto selected. Click next
  5. On the Server Roles screen scroll down to “Web Server (IIS)”. IP and Domain Restrictions is located under Web Server (IIS) > Web Server > Security
  6. Click the check box on IP and Domain Restrictions if it’s not already selected and complete the wizard to install the features.

Configuring IIS

To set up an IP restriction in IIS do the following:

  1. Open IIS and select your server in the left hand treeview. Alternatively, if you want to add the restrictions to an individual site, select that site.
  2. Within the IIS section you should have an item titled IP Address and Domain Restrictions

    IP and Domain Restrictions IIS

  3. The configured IP addresses will be listed out. To add a new one, click the “Add Allow Entry” action on the right.
    IP and Domain Restrictions IIS Setting IPs
  4. This screen allows you to set up allow and deny lists, but the restrictions don’t actually have an effect until you edit the feature settings.
    IP and Domain Restrictions IIS Feature Settings
  5. On this screen you need to set the access for unspecified clients to deny. You can also specify a deny action type which alters the status code between unauthorized, forbidden, not found and abort.

What this doesn’t do

What this won’t do is block all traffic to your server that isn’t in the allow list. It only covers IIS, so if you have other services running on your box, like SQL Server, Mongo or Apache, they will all still be publicly available.

Bundling and Minification error with IIS7

.NET’s standard tools for bundling and minification are a great asset to the platform, solving a lot of problems with only a few lines of code.
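For context, a typical setup with System.Web.Optimization looks something like the sketch below (the bundle names and file paths are just placeholders):

using System.Web.Optimization;

public class BundleConfig {
    public static void RegisterBundles(BundleCollection bundles) {
        // Combine and minify the site's JavaScript into one virtual file
        bundles.Add(new ScriptBundle("~/bundles/site-js").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/site.js"));

        // Combine and minify the stylesheets
        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));
    }
}

The bundles are then rendered in the layout with Scripts.Render("~/bundles/site-js") and Styles.Render("~/Content/css"), and RegisterBundles is called from Application_Start.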

However if you’re using IIS 7.0 you may run into a strange issue where the path to your bundles gets created but a 404 is returned whenever they’re accessed. The functionality is obviously installed and working, otherwise the URL wouldn’t be created, but a 404 clearly isn’t what you want.

The solution lies in your web.config file: set runAllManagedModulesForAllRequests to true.

<system.webServer>
    <modules runAllManagedModulesForAllRequests="true">
    </modules>
</system.webServer>

Two Google Maps Tips

Centre a map on a collection of pins

The basic way to centre a Google Map is to give it the co-ordinates of where you want the map to centre. But what if you have a collection of pins/markers and you want to show all of them, without knowing beforehand where they will be?

The solution is to create a LatLngBounds object and for each of your pins call the extend method of your bounds object. Once this is done call fitBounds on your map.

var bounds = new google.maps.LatLngBounds();

$.each(mapMarkers(), function(index, value) {
    bounds.extend(value.marker.position);
});

map.fitBounds(bounds);

Loading a map in a hidden div

The reason for doing this could be that you have a set of tabs and a non-visible one contains the Google Map. If you instantiate a Google Map when it isn’t visible you end up with the smallest map size possible.

One popular solution for this is to only create the map when the tab is being displayed, which is a good option as it means the map is only loaded when it’s viewed. However if you’re using something like Knockout to bind your views to a model it may not be possible to attach an event to the tab change.

Google Maps actually has an event for just this scenario, called resize. You simply need to trigger it at the point at which the map becomes visible.

google.maps.event.trigger(map, 'resize')

What’s your source control strategy?

I’ve seen companies that have no form of source control, companies using backups as source control, and people still using SourceSafe, along with a whole bunch of genuinely good source control solutions such as Git and SVN. But when I say “what’s your source control strategy” I don’t mean what tool you are using; I mean how you are using it.

Checking your code into source control adds many benefits such as merging, versioning, reverting etc., but to get the most out of it you really need a strategy to define how you will use it. There is no one right way, and the strategy you use ultimately depends on your team size and what you are doing.

Here are a few example strategies:

The “didn’t know about a strategy” approach

You’ve got a source control solution, you’re regularly checking your code in but you’ve never made a branch.

This is actually quite common. Everyone is developing against the same branch, you’ve got all the benefits of seeing who changed which piece of code, you can revert when you need to revert and when you do an update there’s some help with merging.

What you don’t get, though, is the ability to have work happening in parallel with one piece being released before the other. If you need to do a release, you either have to have a way of hiding the unfinished updates as the code is pushed to live, or some way of reverting just those bits.

A branch for every feature

The other extreme is to have all development work happen on a new branch. When it’s ready to be released the code is merged back into the trunk and then released (plus some testing along the way).

The advantages of this are that your trunk is always the same as what’s on live, meaning emergency fixes can be made without trying to find the last release in the source control history. Developers can also work separately without stepping on each other’s toes, and can release independently.

However with more than one piece of development happening at the same time, whoever merges second could have a big job on their hands. All that branching is also going to be time-consuming.

Updating branches often

So you’re using branches and, to avoid a tricky merge, you keep each branch up to date regularly, possibly daily. Anyone who’s tried to merge a feature branch after three months of development can testify that it doesn’t always go smoothly and can take some time.

Only updating the branch at the end

You’ve decided to accept the large merge at the end of the project. You don’t like it, but figure all those small merges during the project actually add up to more time than one big one at the end. You can then also concentrate on getting the feature built without the regular merge distraction.

A tag for each release

If you’ve ever needed to find a previous release with only the commit history to go on, you’ll know it can be hard. If there’s no comment that says “released to live”, you are basically guessing based on the release date and the last commit.

A tag is basically the same as a branch, and if it’s part of your release plan you’ve got a good way of finding each release to live.

A branch for live, a branch for dev

In this setup you have a live branch that always matches what’s on the live site. There’s less need for tags as commits to this branch are normally also a release to live, and because the branch is always in sync with live, emergency fixes are taken care of.

The downside with this approach is everyone is working on the dev branch, so exceptions should be made and feature branches used when it makes sense.

So…

So those are some of the strategies I’ve come across, but there will be plenty more. What you do ultimately depends on the kind of work you do, how frequently you release and what size your team is. But the one thing you must do with all of them is write a comment on each commit. If you don’t, source control becomes virtually useless.

How Green is your code?

Talk of climate change is something you can’t have missed. This year’s winter is further evidence that, irrespective of the debate about the cause, something is happening. Many of us now use energy-saving bulbs, buy appliances with high energy ratings and ensure our homes are sufficiently insulated, all to cut down on our energy consumption and the effect it has on the world.

But as developers, what about the code that we produce? Do you ever consider how much pollution is caused by the energy required to deliver web pages to users or for an app to function, and what could be done to minimise that pollution?

Ironically many of the things that we can do will ultimately also improve things for our users (obviously throwing more servers at a performance issue isn’t going to reduce power consumption, so there are exceptions).

Here are some examples:

Reducing the number of web requests

Quite simply, the fewer requests a browser needs to make for files, the less power it uses to do so. The benefit for the user is that a browser will generally only make a handful of simultaneous requests (typically six to eight), so fewer requests also means a faster page. We can easily do this by:

  • Bundling CSS and JavaScript files
  • Using image sprites rather than multiple images
  • Ensuring caching is set properly so that files aren’t repeatedly being requested when the browser already has a local version

Reducing the amount of data that we send

If we don’t send as many bytes then it’s going to take fewer packets to send them, which will ultimately require less power. It will also take the user less time to receive them.

  • Optimise images to a sensible size. There’s no point sending an image at ten times the resolution it’s going to be viewed at, and there’s no point using a format with a larger file size when the image looks the same
  • Minify CSS and JavaScript files to make them as small as possible
  • Don’t include CSS and JavaScript that isn’t being used. How much of that jQuery UI framework are you actually using? Use the tools available to only include the bits you need
  • Write APIs that only include the data that is needed, or give the consumer parameters to choose which fields they get back. If you’re calling an API, only request the data you need (see the sketch after this list)
  • Use JSON services over XML; they have less mark-up
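As an example of only sending the data that’s needed, the sketch below (the entity and field names are made up) projects to a lightweight shape rather than serialising the whole entity:

using System.Collections.Generic;
using System.Linq;

// Hypothetical entity with far more columns than any consumer needs.
public class Product {
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public string InternalNotes { get; set; }
    public byte[] OriginalImage { get; set; }
}

// The lightweight shape the API actually returns.
public class ProductSummary {
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public static class ProductMapper {
    // Project only the fields the consumer needs, leaving the heavy columns
    // (notes, original images) out of the response entirely.
    public static List<ProductSummary> ToSummaries(IEnumerable<Product> products) {
        return products
            .Select(p => new ProductSummary { Id = p.Id, Name = p.Name, Price = p.Price })
            .ToList();
    }
}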

Think about what your code is doing

Lastly just think about what your code is doing:

  • Are you making multiple calls to a database for the same data?
  • Are you posting a list to a server to sort it when the client could do it instead?
  • Are you re-loading an entire page just to sort a list?
  • How many objects are you needlessly creating on the server?
  • Are you appending string objects when you should be using a StringBuilder? (See the sketch below)
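To make the last point concrete, here’s a minimal sketch of the two approaches (building a comma-separated string is just an illustration):

using System.Collections.Generic;
using System.Text;

public static class CsvHelper {
    // String concatenation allocates a brand new string on every iteration,
    // all of which then need to be garbage collected.
    public static string JoinWithConcatenation(IEnumerable<string> values) {
        string result = "";
        foreach (var value in values) {
            result += value + ",";
        }
        return result;
    }

    // A StringBuilder appends into a single buffer, creating far fewer
    // intermediate objects along the way.
    public static string JoinWithStringBuilder(IEnumerable<string> values) {
        var builder = new StringBuilder();
        foreach (var value in values) {
            builder.Append(value).Append(",");
        }
        return builder.ToString();
    }
}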

All these things will ultimately improve the performance of your code as well as reducing power consumption. You may be in a position where you haven’t done any of this because performance isn’t an issue, but power consumption still is. The savings you make may be tiny, but tiny changes made by thousands of developers, affecting the power used by millions of people, can have a profound effect.

Also turn your computer and monitor off when you go home!

UI resources for developers

Having a good UI is possibly one of the most important aspects of any development. It doesn’t matter how perfectly the code executes, if the thing looks awful people won’t use it. But it is an area that a lot of developers struggle with.

There have always been things like website skins that you can buy, but I’ve never been a huge fan of these. It’s always seemed odd that you can use an open source CMS for free that has had hundreds of man hours put into it, but a decent skin would cost you £100, and even then would still need a lot of work doing on it.

Thankfully there are some free resources that not only help with the CSS and making a site responsive, but also include some fairly decent fonts and layouts.

Bootstrap


Bootstrap is possibly the most popular CSS framework for building sites with right now. Even Microsoft have adopted it into all the templates that ship with Visual Studio 2013.

One of the main things Bootstrap gives you is a standard set of CSS classes for doing things like grids and responsive layouts. When people start using a common set of classes to achieve the same thing, things get a lot more compatible, and you can already see that starting to happen with Bootstrap.

Bootstrap’s grid system works on the illusion that your page is divided up into 12 columns. You then have a set of classes to assign to a div that contain a number; that number is how many columns the div should span, a bit like a colspan on a table.

These grids are responsive though so as your page shrinks down to a tablet and mobile size it will automatically recognise that the columns won’t fit horizontally and start rearranging them underneath each other.

As a starting point Bootstrap also has some templates of common layouts to get you started.

Bootstrap also has default classes for forms, form fields, headings and lists that will give your site an initial face lift.

Foundation

Foundation is in many ways very similar to Bootstrap. It has a grid system for layouts and helps with making a site responsive. There are default styles for headings, lists and forms, but it also goes a step further and starts to encroach on jQuery UI’s territory with things like tabs and dialog windows.

I haven’t heard of as much industry support, but their site is full of documentation and videos on how to use the framework.

Pure

Another CSS framework with yet another grid system. Pure appears to be much simpler than the first two and offers many of the same features. Their site has some good templates that in some ways cover more scenarios than Bootstrap’s. Personally, out of the three I would go with Bootstrap as it appears to have much higher adoption.

Normalize


If the CSS frameworks seem a little overkill for what you’re after, have a look at Normalize. The concept is simple: by including this CSS file as the first stylesheet on your site, it overrides the default browser styles to create consistency and something that looks a little nicer.

There have been many occasions where I’ve seen CSS produced that includes a style for every single HTML element to try to overcome the differences between browsers, which is a good idea (this is basically what Normalize does, except someone’s written it for you), but the styles have all been set to the same thing, generally margin:0, padding:0. On some elements this is fine; on lists, though, not so much.

Another option I’ve seen is to define a style on the universal selector (*), which is just as bad.

FitText

Like it or not people are accessing sites from all kinds of devices these days with all kinds of screen sizes. If your site doesn’t scale then you’re going to lose visitors. One issue you will ultimately face at some point is font sizes. These can easily be changed using media queries but another option is to use FitText.

FitText is a really simple bit of JavaScript that will scale your text to fit its containing element. You do have to call a function for each element you want to scale, and it only works on the width rather than taking the height into account. But it is very cool. Just make sure you have a look at the code; it’s so small that it isn’t something you’ll want in a separate JS file.