Bundling with Gulp in TeamCity

Like most front end developers, ours write CSS in LESS or SASS and then use Gulp to compile the result. This is great, but up until recently both the compiled files and the source files would end up in our source control repository.

While this isn't a major issue, it was an annoyance that when doing a merge the compiled file would need to be merged as well as the source files, and any conflicts in bundled/minified files can be problematic to resolve. It also just seems wrong to have both files in a repo, effectively duplicating the content. After all, we wouldn't put compiled DLLs into a repo with their source.

Our solution was to get the build server to start running the gulp tasks to produce the bundled files.

Step 1 – Install Node on the build server

To start we need NodeJS installed on the build server. This allows packages to be installed via NPM (Node Package Manager), which is a similar idea to NuGet.


Step 2 – Install the TeamCity plugin for NodeJS

To add build steps for Node and Gulp we need to install a plugin to make them available. Luckily there is one that does exactly that here: https://github.com/jonnyzzz/TeamCity.Node

The actual build of the plugin can be downloaded from JetBrains' own TeamCity server here: https://teamcity.jetbrains.com/viewType.html?buildTypeId=bt434. Just log in as guest and then download the latest zip from the artifacts of the last build.

To install the plugin you need to copy the zip to Team City's plugin folder. For me this was C:\ProgramData\JetBrains\TeamCity\plugins. If you're having trouble finding yours, go to Administration > Global Settings in Team City and it will tell you the data directory; the plugins folder will be in there.

Restart the TeamCity server and the plugin should now show under Administration > Plugins List

NodeJS Plugin

Step 3 – Add a NPM Setup build step

NPM Build Step

An NPM step with the command set to install will restore the dependencies listed in the project's package.json, pulling down Gulp and its plugins onto the build agent.
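For the install command to have something to restore, those packages need to be listed as devDependencies in the project's package.json. A minimal sketch (the plugin names and versions here are just examples; yours will match whatever your gulp file uses):

{
  "name": "my-website",
  "private": true,
  "devDependencies": {
    "gulp": "^3.9.1",
    "gulp-less": "^3.3.2",
    "gulp-concat": "^2.6.1",
    "gulp-clean-css": "^3.9.0",
    "gulp-uglify": "^2.1.2"
  }
}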

Step 4 – Add a Gulp build step

Gulp Build Step

In your gulp step add the path to the gulp file and the tasks in your gulp file that need to be run. I'm using a gulp file that our front end devs had already created for the solution, which contained a task for bundling CSS and another for bundling JS.
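If you've not seen one before, a gulp file with bundling tasks along these lines is roughly what this step ends up running. This is only a sketch; the task names, paths and plugins are illustrative rather than the exact file our devs wrote:

var gulp = require('gulp');
var less = require('gulp-less');
var concat = require('gulp-concat');
var cleanCss = require('gulp-clean-css');
var uglify = require('gulp-uglify');

// Compile LESS, bundle and minify into a single stylesheet
// (paths are illustrative)
gulp.task('bundle-css', function () {
    return gulp.src('Content/less/*.less')
        .pipe(less())
        .pipe(concat('site.min.css'))
        .pipe(cleanCss())
        .pipe(gulp.dest('Content/bundles'));
});

// Bundle and minify the scripts
gulp.task('bundle-js', function () {
    return gulp.src('Scripts/src/*.js')
        .pipe(concat('site.min.js'))
        .pipe(uglify())
        .pipe(gulp.dest('Content/bundles'));
});

// Run both bundling tasks by default (gulp 3 syntax)
gulp.task('default', ['bundle-css', 'bundle-js']);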

Step 5 – Including bundled files in a MSBuild

As the bundled files are no longer included in our Visual Studio solution, it also means they aren't included in the set of files which will be published when MSBuild runs.

To overcome this, update the .csproj file with a Target with BeforeTargets set to BeforeBuild and list your bundled files as content. In my example I'm including the whole Content\bundles folder:

<Target Name="BundlesBeforeBuild" BeforeTargets="BeforeBuild">
  <ItemGroup>
    <Content Include="Content\bundles\**" />
  </ItemGroup>
</Target>

Using compile options for version compatibility

Here's the scenario: you're building a module and it needs to be compatible with different versions of a platform, e.g. Sitecore, and everything's great up until the day you need to call different methods in different versions of the platform. You'd rather not drop support for the old versions, and nor do you want to start maintaining two code bases. So what do you do?

C# Preprocessor Directives

Preprocessor directives provide a way to give the compiler instructions to follow while it's compiling a project. By using them we can tell the compiler to compile different versions in different ways, allowing us to maintain one codebase but produce builds for different versions of the platform, e.g. one for Sitecore 8.0 and another for Sitecore 9.0.

#if, #else and #endif

When the compiler encounters an #if followed by an #endif, it will only compile the code between the two if the specified symbol has been defined.

#if DEBUG
    Console.WriteLine("Debug version");
#else
    Console.WriteLine("Non Debug version");
#endif

Defining a preprocessor symbol

For the if directive to work, you're going to need to define the symbol which is being evaluated.

This can be included in code as follows

#define YOURSYMBOL

A more useful way of defining this, however, is to pass it to the compiler as part of the build (this is particularly useful when using a build server).

-define:name[;name2]
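For example, assuming a symbol named SC90 (the file and solution names below are just placeholders), the symbol can be passed straight to the C# compiler, or to MSBuild via the DefineConstants property:

csc -define:SC90 Program.cs

msbuild MySolution.sln /p:DefineConstants=SC90

Bear in mind that a value passed on the command line like this replaces whatever DefineConstants the project file sets for that configuration.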

If you're compiling from Visual Studio, an easier solution is to set up a new build configuration with a conditional compilation symbol.

  1. Right click your solution item in Solution Explorer and select Properties
  2. Click Configuration Properties on the left and then Configuration Manager on the right
  3. In the pop up window click the Active solution configuration drop down and then click New
  4. Enter the name of the build config. In my example above I have SC82 for Sitecore 8.2 and SC90 for Sitecore 9.0.
  5. Click Ok and close all the windows you just opened
  6. Right click the project that you're going to build and select Properties
  7. Select the Build tab
  8. Select your build configuration from the configuration at the top
  9. Enter the symbol you're using for the #if directives
    Conditional Compilation Symbols

Reference different versions of an assembly

Adding conditions to our code is good, but for this to fully work we also need to reference different versions of the assemblies that are causing the issue in the first place.

There's no way of doing this through Visual Studio, but by editing the .csproj file manually we can update the hint path on a reference to include the configuration name as a variable:

<Reference Include="Sitecore.Kernel">
  <HintPath>..\libraries\$(Configuration)\Sitecore.Kernel.dll</HintPath>
  <Private>False</Private>
</Reference>

This example shows how different versions of the Sitecore Kernel can be referenced by keeping each version in a subfolder that corresponds with the build configuration name.

As well as different versions of assemblies, it may also be necessary to target different versions of the .NET Framework. This can be done in the .csproj file by including additional property groups that have a condition on the configuration name.

<PropertyGroup Condition=" '$(Configuration)' == 'SC82' ">
  <OutputPath>bin\SC82\</OutputPath>
  <DefineConstants>TRACE;SC82</DefineConstants>
  <Optimize>true</Optimize>
  <TargetFrameworkVersion>v4.5.2</TargetFrameworkVersion>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)' == 'SC90' ">
  <OutputPath>bin\SC90\</OutputPath>
  <DefineConstants>TRACE;SC90</DefineConstants>
  <Optimize>true</Optimize>
  <TargetFrameworkVersion>v4.6.2</TargetFrameworkVersion>
</PropertyGroup>

In this example I'm targeting .NET 4.5.2 for my Sitecore 8.2 configuration and 4.6.2 for my Sitecore 9 configuration.
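With configurations like these in place, a build server just needs to build the solution once per configuration to produce both flavours (the solution name here is a placeholder):

msbuild MySolution.sln /p:Configuration=SC82
msbuild MySolution.sln /p:Configuration=SC90

Each build then picks up the matching DefineConstants, target framework and output path, and resolves references from the corresponding $(Configuration) subfolder of the libraries directory.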

Useful Links

C# preprocessor directives
-define (C# Compiler Options)

Adding Build Statuses to Pull Requests with TeamCity and GitHub

I’m always looking for ways to improve our build server setup and improve our overall efficiency. So a recent change I’ve made is to get Team City to start building pull requests and pushing the resulting status back to GitHub.

This improves our dev flow by eliminating the need to do any testing on a pull request if we can already see it will fail a build. Previously someone doing a code review would only find out once they'd checked out the change and built it locally, or even worse, after approving the request and then breaking the build.

Pull Request Build Status

What’s particularly good with this setup, is it’s testing the resulting merge rather than just the branch being merged in.

Team City Setup

As this is covering a different scenario to our normal build processes which are focused on preparing a build version to be deployed, I set this up as a second build configuration on our projects.

Version Control Settings Root (VCS Root)

The VCS Root needs to be configured to fetch each pull request that is created in GitHub. To do this, you will need to add a Branch specification which will tell Team City to monitor additional branches rather than just the default branch specified.

Branch Specification

I'm using the branch specification +:refs/pull/(*/merge).

This syntax tells Team City to monitor the refs/pull references that GitHub creates for pull requests; the * matches any pull request number, and merge indicates that we only want the resulting merge of the pull request.

When you create a pull request in GitHub, this merge reference is automatically created for what the resulting merge would look like.
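These merge refs aren't specific to Team City either; if you want to see locally exactly what would be built, you can fetch one into a branch (123 here is a hypothetical pull request number):

git fetch origin refs/pull/123/merge:pr-123-merge
git checkout pr-123-merge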

In the projects list, builds will now get labels indicating what they were for:

Pull Requests on dashboard

Build Steps

I created my build configurations by duplicating the existing ones we have that take care of creating builds to be passed onto Octopus Deploy for release. If you do this, it’s important to remember to disable all the steps you no longer need.

Build Steps

The fewer steps you have, the quicker your build will run and the quicker the pull request will be updated with a status. Ideally you want the process to finish before someone starts doing a code review! Steps like running Inspections may prove counterproductive if the builds are never finished in time.

Triggers

Having a build running automatically for your releases can be a drain on server resources, particularly if you never have any intention of actually doing a deploy for most of them. For this reason our builds are set to manual.

However, for statuses to be of any use, these builds are going to need to run automatically so that the status is ready for the code reviewer, so we need to add a VCS Trigger.

Triggers

Build Features

To get Team City to start posting status updates back to GitHub we need to add a build feature. If you're on a version of TeamCity between 7.1 and 10 then there is a plugin you can grab here: https://github.com/jonnyzzz/TeamCity.GitHub. If you're on a newer version of TeamCity, i.e. 10+, then the build feature is now built in and is called Commit Status Publisher. The built-in version also has support for Bitbucket, Gerrit, GitLab, JetBrains Upsource and Visual Studio Team Services.

Add the build feature and fill in the config settings.

Commit Status Publisher settings

And that’s it. Your pull requests will now automatically build and have the status sent back to GitHub.

Pull Request Status: Pass

Not only will you be able to see this status in GitHub, you’ll also be able to click a details link to see the build. Useful in the event that it’s failed and you want to see why.

 

The importance of build numbers

If I were to make a prediction, I would say that build numbers are something that is rarely treated as important in the agency world of web development. That's not to say milestone releases aren't given names like "Phase 2", "August Release" or a major feature name, but for every build / release of a project in between, I'd guess build numbers are largely either ignored or never created.

It's also easy to see why; after all, it's not like we're producing software that's going out to the masses to be installed. The solution essentially ends up as a single install on a set of servers. When a new version is built, it replaces everything that came before it, and if a bug is found we generally roll forward and fix the bug rather than ever reverting back.

Why use build numbers?

So when we’re constantly coding and improving applications in an agile world why should we care about and use build numbers?

To put it quite simply, it's just an easy way to identify a snapshot of code that has actually been built and potentially released to a server. This becomes hugely useful in scenarios such as:

  • A bug being reported by an end user
  • An issue being identified by some performance monitoring
  • An issue being picked up by functionality further on from the site, e.g. in an integration

Without build numbers the only way to react to these scenarios is to look at commit dates in source control, or any manual release notes that may have been created, to try and work out where an issue may have been introduced and what changed at that time. If the issue has subsequently been fixed, you also can't really say in which version it was fixed other than giving a rough date.

Other advantages of build numbers can include:

  • Being able to reference a specific version that has been pen tested
  • Referencing a version that’s been tested with integrations
  • Having approval to release a specific version rather than just the latest on master
  • Anywhere you want to have a conversation referencing releases

Build numbers for deploys

The first step towards using build numbers, and with the rise of CI possibly the one thing most people are already doing, is to start creating build numbers via a build server. By using any type of build server you will end up with build numbers. This instantly gives you a way to know when a build was created and which commits were new within the build.

Start involving an automated deployment setup either using your build server or with other tools like Octopus Deploy and you will now start to get a record of when each build was deployed to each server.


Now you have an easy way not only to reference what build was on each environment and when, through the deploy history, but also to see what went into a build through the build server's change log.

Tag builds in source control

Being able to see the changes that went into each build on your build server is all very good, but it’s still not an ideal situation for finding the exact code version a build relates to.

Thankfully if you're using Team City it's really easy to set it up to create a tag in your source control with each build number. Simply go to the build features section of your project's configuration and add a feature called "VCS Labeling". This is a step that happens post build in the background and will create a tag in source control including the build number. It has lots of other configuration options, so if you need different tag formats for different branches it's got you covered.

If you're using GitHub, once this is turned on you will be able to see a list of all the tags in the releases section.

GitHub Releases Screen

Update Assembly info

Being able to identify a build in source control and view a history of what should have been on a server at a particular time is all very good, but it's also a good idea to be able to easily identify the build for a published version of code. That way, just by looking at the code on a server you can tell which build version it is, and not rely on your deployment tool being correct.

If you're using Team City this is also made super simple through a build feature called "Assembly info patcher". When using this the build number will automatically get patched in without having to edit AssemblyInfo.cs.

Conclusion

By following these tips you will now be able to identify a version by looking at the published code, see a history of when each version was not only built but also released to each environment, and have an easy way to find the exact source for that build.

The build number can then be used in any conversations around when a bug was introduced, and also be referenced in release notes so everyone can keep track of which versions included which fixes in a simple to understand format.

Sitecore: Setting up Mongo with Authentication

One of the new features in Sitecore 7.5 was the xDB, a new analytics database that rather than using SQL Server is Mongo based. If your main job is as a Sitecore dev, like me, this may be the first time you've ever had to use Mongo, or at least the first time you've ever had to use it commercially.

Installing Mongo DB is a relatively painless experience; Mongo provide a decent enough guide to installing on Windows. Half way through you may be wondering why it's running in a console window, but eventually you get to instructions for setting it up as a service.

However there are a couple of gotchas:

  1. Sitecore doesn't work with the latest version of Mongo. At the time of writing the latest version of Sitecore is 8.0 rev. 150427 and Mongo is 3.0, but Sitecore only supports version 2.6.x of Mongo.
  2. None of the installation instructions include any details on authentication, which while fine for your local dev machine is not so great for a live site.

So here’s my guide for getting up and running with Mongo DB and Sitecore.

Installing Mongo

First off get Mongo DB installed as a service on your machine. I’m not going to include instructions here for this bit, just follow the guide on the Mongo DB website.

One thing to note though: the Windows installer for Mongo 2.6 will by default install Mongo to C:\Program Files\MongoDB 2.6 Standard\, whereas the instructions for setting up Mongo include example commands expecting it to be installed in C:\mongodb\.

Setting Up Authentication

Once you’ve got Mongo up and running you’ll need to create a user.

  1. Open a command prompt and connect to MongoDB
    C:\Program Files\MongoDB 2.6 Standard\bin\mongo.exe
    
  2. Switch to the admin db by doing
    use admin
    
  3. Create the user using
    db.createUser({user: "admin_mongo",pwd: "your password",roles: [ { role: "userAdminAnyDatabase", db:"admin" }, { role: "root", db:"admin" } ]  })
    
  4. To verify the user has been created you can use the following commands
    db.auth("admin_mongo", "your password")
    db.getUsers()
    

Mongo Create Admin User

Next update your config file to require authentication by adding auth=true. You will also need to restart the service for the change to take effect.

logpath=c:\data\log\mongod.log
dbpath=c:\data\db
auth=true
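Assuming Mongo was installed as a Windows service with the default name of MongoDB, it can be restarted from an elevated command prompt:

net stop MongoDB
net start MongoDB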

Sitecore uses 4 DBs in Mongo: analytics, tracking_live, tracking_history and tracking_contact. We need to create a user in each of them.

Open a mongo shell again, switch to the admin db and connect with your admin login.

Now create each of the users as follows:

use analytics
db.createUser({user: "mongo_user",pwd: "your password",roles: [ { role: "readWrite", db:"analytics" } ]  })

use tracking_live
db.createUser({user: "mongo_user",pwd: "your password",roles: [ { role: "readWrite", db:"tracking_live" } ]  })

use tracking_history
db.createUser({user: "mongo_user",pwd: "your password",roles: [ { role: "readWrite", db:"tracking_history" } ]  })

use tracking_contact
db.createUser({user: "mongo_user",pwd: "your password",roles: [ { role: "readWrite", db:"tracking_contact" } ]  })
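Before pointing Sitecore at the databases, it's worth quickly checking that the new user can authenticate against each one, e.g.:

use analytics
db.auth("mongo_user", "your password")

db.auth returns 1 if the login succeeded.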

Sitecore Connection Strings

All that’s left now is to update the connection strings in Sitecore.

The default connection strings Sitecore give you look like this:

<add name="analytics" connectionString="mongodb://localhost/analytics" />
<add name="tracking.live" connectionString="mongodb://localhost/tracking_live" />
<add name="tracking.history" connectionString="mongodb://localhost/tracking_history" />
<add name="tracking.contact" connectionString="mongodb://localhost/tracking_contact" />

The format for a connection string with authentication is

mongodb://[username:password@]host1[:port1]/database

Your connection strings should now look something like this:

<add name="analytics" connectionString="mongodb://mongo_user:yourpassword@localhost/analytics" />
<add name="tracking.live" connectionString="mongodb://mongo_user:yourpassword@localhost/tracking_live" />
<add name="tracking.history" connectionString="mongodb://mongo_user:yourpassword@localhost/tracking_history" />
<add name="tracking.contact" connectionString="mongodb://mongo_user:yourpassword@localhost/tracking_contact" />

Setting IP restrictions in IIS

It's a frequent scenario that a website you're in the process of building needs to be accessible over the internet before it should actually be publicly available. This can come in the form of clients needing to review staging sites before they're live, test sites needing to be accessible to testers who may not be in a location that can access private servers, or working jointly with other suppliers.

This scenario presents a lot of dangers: the URL of a site could get leaked early, ruining a marketing strategy, or the site could end up in Google, destroying the SEO value of the client's current site and, even worse, actually getting real customers visiting it.

There are only 2 real methods of protecting test/staging sites. One is adding authentication to the site restricting access to people with a valid username and password. The other is IP white-listing so only people from a valid IP can access the site.

In the past I've seen people suggest using a robots.txt to tell search engines to ignore the site. This is guaranteed to fail; Google will index a site even with a robots file saying not to. Your robots file may say don't crawl, but that auto generated sitemap will be obeyed and the files indexed. There will also come a time when the robots file gets copied live, de-indexing the live site, or someone forgets the file on staging and the staging site is indexed.

Using IIS to set up IP restrictions

Using IIS to set up IP restrictions is quick and easy, and what’s best about it is you can set it at the server level and not worry about people forgetting to add it to new sites. Better still you can also easily add configuration at a website level to allow certain people to see certain sites rather than the whole box.

Installing the Feature

First you need to make sure you have the feature installed on IIS. To do this on Windows Server 2012:

IP and Domain Restrictions

  1. Go to Server Manager and click “Add roles and features”
  2. Click next to take you from the Before you begin page to Installation Type
  3. Leave Role-based selected and click next
  4. On the Server Selection screen the server you're on should be auto selected. Click next
  5. On the Server Roles screen scroll down to “Web Server (IIS)”. IP and Domain Restrictions is located under Web Server (IIS) > Web Server > Security
  6. Click the check box on IP and Domain Restrictions if it's not already selected and complete the wizard to install the features.

Configuring IIS

To set up an IP restriction in IIS, do the following (a config-file equivalent is sketched after the list):

  1. Open IIS and select your server in the left hand treeview. Alternatively if you wanted to add the restrictions to an individual site, select that site.
  2. Within the IIS section you should have an item titled IP Address and Domain Restrictions
    IP and Domain Restrictions IIS
  3. The configured IP addresses will be listed out. To add a new one click the "Add Allow Entry" action on the right.
    IP and Domain Restrictions IIS Setting IPs
  4. This screen allows you to set up allow and deny lists, but the restrictions don’t actually have an effect until you edit the feature settings.
    IP and Domain Restrictions IIS Feature Settings
  5. On this screen you need to set the access for unspecified clients to deny. You can also specify a deny action type which alters the status code between unauthorized, forbidden, not found and abort.
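If you'd rather not click through the IIS dialogs, the same site-level settings live under system.webServer/security/ipSecurity and can be set in a web.config. A rough sketch of what that section might look like (the IP address is just a placeholder):

<configuration>
  <system.webServer>
    <security>
      <ipSecurity allowUnlisted="false">
        <!-- allow a single trusted address; everything unlisted gets the deny action -->
        <add ipAddress="203.0.113.10" allowed="true" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>

Note that the ipSecurity section is locked at server level by default, so it may need unlocking in applicationHost.config before a site-level web.config can override it.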

What this doesn’t do

What this won't do is block all traffic not in the allow list from reaching your server. It only covers IIS, so if you have other services running on your box like SQL Server, Mongo, Apache etc, these will all still be publicly available.

Team Development for Sitecore (TDS) with Github

If you're using Team Development for Sitecore (TDS) and GitHub or Git as your source control, you may experience an issue where TDS is unable to create/update some of the items in Sitecore, due to a content length issue.

The error will look something like this:

Failed to load version 1 for language en
Length of field content does not match the content-length attribute. File name: name, field id: {id}

What's happening comes down to how Git handles line endings. If your item contains a Rich Text field you can end up with data that has been serialized with both CRLF and LF as line endings, and both are included in the content length. However when you push to Git, the CRLF values can be normalized away, making the content length value incorrect.

To overcome this issue you need to update your .gitattributes file to treat these files differently. Just add this to your file:

# TDS files should be treated as binary
*.item -text
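You can check the rule is being picked up for a given file with git check-attr (the path below is just an example from a typical TDS project):

git check-attr text -- sitecore/content/Home.item

which should report text: unset for .item files once the .gitattributes change is in place.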

How do I create a .gitattributes file?

If you don't have a .gitattributes file you may run into an issue on Windows where it won't let you create one, due to requiring a file name rather than just an extension.

To create the file:

  1. Create the text file gitattributes.txt
  2. Open it in a text editor and add your rules, then save and close
  3. Hold SHIFT, right click the folder you’re in, then select Open command window here
  4. Then rename the file in the command line, with ren gitattributes.txt .gitattributes

Alternatively you can download my .gitattributes file here

Sitecore Continuous Integration with Team City and TDS

CI Process

There are a lot of articles around on how to do automated deployments / continuous integration with Sitecore, which if you’re new to the tools will likely leave you slightly baffled. This article will hopefully show you exactly what you need to do and explain why.

Solution Overview

  1. TDS is used by developers to serialize their Sitecore item changes and push them into source control
  2. Team City is used to detect the changes and run a build script
  3. Team City uses Web Deploy to push the code changes to the web server
  4. Team City calls MSBuild, which triggers TDS (installed on the server) to deploy the Sitecore items to the destination server

Prerequisites

  • You have a build server with Team City installed and know how to set it up to do a web deploy
  • You are already using TDS and have your Sitecore items serialized in source control
  • Essentially you know how to do the first 3 steps and just need help with Step 4

Step by Step

UNC Share

On your web server you need to set up a UNC Share on your website’s folder. When TDS does a deploy it will install a web service on your website through this share, do the item deployment and then remove the web service again.

The share needs to give permission to the user that your Team City Build Agent runs as. To find out which user your Build Agent is using:

  1. Open the list of services and find TeamCity Build Agent in the list
  2. Right click and select "Properties"
  3. In the "Log On" tab you will be able to see which Windows User is being used

Team City Config

In your Team City’s build configuration settings for your project, add a new build step with the following config:

Runner Type: MSBuild
Step Name: Publish TDS Items (or some other identifier)
Build file path: Path to your projects .sln file
Command line parameters:

  • SitecoreDeployFolder: TDS will use this file path to install a web-service on your site to publish the items through.
  • SitecoreWebUrl: This is the url of the site you are going to update. TDS will use this when it tries to call the web service it installed.
  • IsDesktopBuild: false
  • GeneratePackage: false
  • RecursiveDeployAction: Delete
/p:IsDesktopBuild=false;GeneratePackage=false;RecursiveDeployAction=Delete;SitecoreWebUrl=URL OF SITE;SitecoreDeployFolder="UNC PATH TO YOUR SITECORE SITE"

Setting the command line parameters here will take precedence over any that have been included in your TDS project's solution file (which are liable to be overwritten by a developer).

TDS Build Settings

That’s it!

It’s that easy. If you run your build script now your items should all be published to Sitecore.

Alternatives

This certainly isn't the only way to set up automated deployments, and nor is it without issues. The fact a share needs to be set up between the web server and the build server could cause a security concern and may just not be possible if you're using a cloud server.

Rather than using TDS to deploy the Sitecore items you could use TDS to create a .update package. These would normally be installed through an admin webpage (not great for CI) but there is an open source project called Sitecore Ship that will expose a REST endpoint for the package to be posted to. Brad Curtis has written an excellent guide to this setup here (http://www.bradcurtis.com/sitecore-automated-deployments-with-tds-web-deploy-and-sitecore-ship/), however at the time of writing Sitecore Ship isn’t compatible with Sitecore 7.5 or 8.

Another alternative to installing the update package is to use the TDS Package Installer. This is a tool Hedgehog provide alongside TDS for installing the update package. In this scenario you would need the tool installed on your web server and some way to call it. Jason Bert has written a setup guide for this example (http://www.jasonbert.com/2013/11/03/continuous-integration-deployment-with-sitecore/) however as well as Team City, you will also need Octopus Deploy to call the package installer. Octopus Deploy works by having what it calls Tentacles on each server you deploy to, making it easy to set up scripts to call programs on that server.

Sticking with the example using just TDS, you could also use TDS to deploy the solution's files as well as the Sitecore items, rather than using Web Deploy. However the downside here is that TDS is unable to modify your Web.config file, which is one reason to stick with Web Deploy.

What’s your source control strategy?

I've seen companies that have no form of source control, companies using backups as source control, and people still using SourceSafe, along with a whole bunch of genuinely good source control solutions such as Git and SVN. But when I say "what's your source control strategy" I don't mean what tool you are using, I mean how you are using it.

Checking your code into source control adds many benefits such as merging, versioning, reverting etc, but to get the most out of it you really need a strategy to define how you will use it. There is no one right way, and the strategy you use ultimately depends on your team size and what you are doing.

Here are a few example strategies:

The didn’t know about a strategy approach

You’ve got a source control solution, you’re regularly checking your code in but you’ve never made a branch.

This is actually quite common. Everyone is developing against the same branch, you’ve got all the benefits of seeing who changed which piece of code, you can revert when you need to revert and when you do an update there’s some help with merging.

What you don't get though is the ability to have work happening in parallel with one piece being released before the other. If you needed to do a release, either you have to have a way of hiding the unfinished updates as the code is pushed to live, or some way of reverting just those bits.

A branch for every feature

The other extreme is to have all development work happen on a new branch. When it's ready to be released the code is merged back into the trunk and then released (plus some testing along the way).

The advantages of this are that your trunk is always the same as what's on live, meaning emergency fixes can be made without trying to find the last release in the source control history. Developers can also work separately without stepping on each other's toes and can release independently.

However with more than one development happening at the same time, whoever merges second could have a big job on their hands. All that branching is also going to be time consuming.

Updating branches often

So you're using branches, and to avoid a tricky merge you're keeping the branch up to date regularly, possibly daily. Anyone who's tried to merge a feature branch after 3 months of development can testify that it doesn't always go smoothly and can take some time.

Only updating the branch at the end

You've decided to accept the large merge at the end of the project. You don't like it but figure all those small merges during the project actually add up to more time than one big one at the end. You can then also concentrate on getting the feature built without the regular merge distraction.

A tag for each release

If you've ever needed to find a previous release and only have the history to go on, it can be hard. If there's no comment that says "released to live" you are basically guessing based on release date and last commit.

A tag is basically the same as a branch, and if it's part of your release plan you've got a good way of finding each release to live.

A branch for live, a branch for dev

In this setup you have a live branch that always matches what's on the live site. There's less need for tags as commits to this branch are normally also a release to live, and because the branch is always in sync with live, emergency fixes are taken care of.

The downside with this approach is everyone is working on the dev branch, so exceptions should be made and feature branches used when it makes sense.

So…

So there are some of the strategies I’ve come across, but there will be plenty more. What you do ultimately relates to what kind of work you do, how frequently you release and what kind of team size you have. But the one thing you must do with all of them is make comments on each commit. If you don’t then source control becomes virtually useless.

IIS Where are my log files?

This is one of those things that once you know is very very simple, but finding out can be very very annoying.

IIS by default will store a log file for each site that it runs. This gives you valuable details on each request to the site that can help when errors are being reported by users.

When you go searching for them, your initial thought may be to go to IIS and look at the site you want the files for. There you will see an item called Logging. Excellent, you think, this will tell you all you need to know.

IIS Log Files

There’s even a button saying “View Log File…”, but once you click it you realise things aren’t so simple. The link and folder path on the logging page both take you to a folder, containing more folders. In those folders are the logs, but there’s a folder for each site in IIS and they’ve all got a weird name. How do you know which folder has the log files for the site you want?

IIS Log Files Folders

Back on the IIS logging screen there’s nothing to say which folder the files will be in. There isn’t any indication anywhere.

The answer however is very easy. Each folder has the Site ID in its name (the folders are named W3SVC followed by the ID, e.g. W3SVC2 for the site with ID 2). You can find the Site ID for your site in IIS either by looking at the sites list

IIS Sites List

or clicking on a site and clicking advanced settings

IIS Advanced Settings