Deploying Changes with Kentico

Modern development techniques involve moving change between environments; a typical life cycle is Local -> Development -> QA/Staging -> Production. Being able to package up a set of changes, apply a version number, and then migrate that package along the environment path is crucial to a resilient and reliable development process.

There is a wide range of tooling available for migrating the .NET code used within a Kentico project between environments. These tools can be used to move the changes to the website file system along with any bespoke code added to the Visual Studio solution.

Crucially, Kentico also stores structural information in the database which the file system changes depend upon, and this needs to be migrated between environments in parallel with the file system changes.

The challenge in building an effective Continuous Integration process for Kentico has been synchronising file system and database change from developers' local environments onto an integrated development environment, then on to QA and Production.

Kentico has been providing more and more tooling over recent versions to migrate change between environments; the following are some of the possible approaches.


Manual Change Management

The simplest way to manage change is to record the changes made to a Kentico environment in a spreadsheet or equivalent and then manually reapply them to the other environments when required. This technique is not recommended as it is time consuming and error prone; the environments are likely to become inconsistent very quickly.

There are situations when this may be the only option available, for example restricted access to environments or applying a very small change to a legacy site, but it is not a practical solution for a team of developers producing multiple changes.


Import and Export

The first of the automated tools is the Kentico Import/Export process, which has been in Kentico since the early versions. This can be accessed from the Kentico Admin section and allows everything from a single object to a full site to be extracted into a collection of data files. These files can then be imported into the required environments.

This is an established approach to migration and is currently the best way to extract and migrate an entire site.

It is a manual process though, which can be slow for smaller changes. It is difficult to isolate a single developer's change in the extract, so the file contains more objects than required. This can lead to conflicts when importing, the files are very large to store in a version control system, and there is no ordering process available for applying the import files.

For more information on the Import and Export process, see the Kentico Import and Export Documentation.

Content Staging


Version 8 of Kentico introduced Content Staging. This process tracks changes to content and Kentico objects and then allows those changes to be selectively migrated to other environments. The process is bi-directional, so content created on a QA environment can be synchronised down to the development environment.

The process is based on web services, so each of the environments needs to be able to communicate with the next one in the chain via HTTP. This is usually fine for local and development environments, but can prove more challenging for QA and Production environments where access may be restricted.

When using Content Staging it is important to turn it on as soon as possible so it can track change; it is possible to do this without the target server being available. The URL can be added at a later date and the change synchronised.

Content Staging tracks a mix of developer and content creator change, so filtering is available to allow each of the target change sets to be separated out and migrated. Version 9 introduced the concept of grouping changes and then migrating that group to another environment.

By default Content Staging captures a wide range of change, and the tracked list can grow significantly over time. It is advisable to housekeep this list to include only required change. The tracked change is order dependent, so a Create task must come before an Edit task, for example. It is possible to programmatically filter out unrequired change so that managing the migrations becomes more efficient; this document describes the process in more detail.
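As a rough sketch of how that programmatic filtering can be wired up, a handler can cancel staging tasks before they are logged. This is based on Kentico's StagingEvents API; it requires the Kentico assemblies, the event and property names should be verified against the documentation for your version, and the excluded object type here is only an example:

```csharp
using CMS;
using CMS.DataEngine;
using CMS.Synchronization;

[assembly: RegisterModule(typeof(StagingTaskFilterModule))]

// Custom module that stops unrequired staging tasks from being logged.
public class StagingTaskFilterModule : Module
{
    public StagingTaskFilterModule() : base("StagingTaskFilter") { }

    protected override void OnInit()
    {
        base.OnInit();

        // Cancel logging of staging tasks for object types we never migrate.
        StagingEvents.LogTask.Before += (sender, e) =>
        {
            if (e.Task.TaskObjectType == "cms.scheduledtask")
            {
                e.Cancel();
            }
        };
    }
}
```

Because the handler runs before the task is logged, tasks for the excluded object types never appear in the staging list, which keeps it small and the migrations manageable.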


Continuous Integration

Continuous Integration was introduced in version 9 and improved in version 10. This works on a similar principle to Content Staging in that changes are automatically tracked, but differs in that the changes are serialized into XML files on the file system. These files can then be checked into the source control system and shipped with the rest of the file system changes. In the bin folder of the Kentico website is a command line tool that can be used to apply these change files to the database.

This is a developer focused tool which enables developers to use a local copy of the Kentico database and synchronise changes between themselves, and then on up to development. It also supports working on multiple branches on the same local environment, with the Kentico database changes being reapplied or removed when switching between branches. It has a negative impact on performance so should not be used on production environments.

The golden rule of working with Continuous Integration is to run the command line tool whenever a branch is changed, committed or pulled.
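In practice that means a sequence like the following (the site path is a placeholder for your own project; the tool and its -r restore switch are described in the Kentico Continuous Integration documentation):

```shell
# Run from the bin folder of the Kentico web project after
# switching, committing or pulling a branch.
cd C:\inetpub\MyKenticoSite\CMS\bin

# -r restores the serialized XML change files into the database
ContinuousIntegration.exe -r
```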

On my projects we use Continuous Integration to manage the change between local copies of Kentico and then, as part of the build and deployment process, up to the development environment. This is achieved by running the command line tool on the web server after the deployment, but it does restrict us to just one web server for the development environment. This is because the CI process is server specific, so in a web farm scenario there are two servers pointing to one database, which the CI process does not currently support. Theoretically it is possible to use the CI restore process on a development web farm if the CI process is disabled on the development database, although this is untested by the author.

For the environments after development we use Content Staging and have to perform this synchronisation as a manual step following the build and deployment process. It is a quick process and does not cause a lengthy delay, but it means the build process cannot be fully automated.

Continuous Integration is a great tool but does need to be set up correctly and used with care; the Kentico instructions should be followed carefully.




Compare for Kentico

Compare for Kentico is a partner developed tool which allows the visual comparison of two instances of Kentico, and facilitates the synchronisation of the environments.

This tool is extremely useful when inheriting a previously unknown Kentico application or applying the previously discussed change control techniques to a legacy system. In these scenarios it is important to understand the differences between the environments and ensure they are synchronised before implementing a change control pipeline.

It is also very useful for debugging environments that are behaving differently. Compare requires agents to be installed on each of the environments to work.

The Search for Compare tool is free to use, but the Compare for Kentico tool is paid for.


Kentico Draft


Kentico Draft is an offering from the Kentico Cloud suite. It is not strictly a tool for managing change, but it does allow content to be created outside of the Kentico application and then migrated over.

It is a productivity tool that allows development teams and content creators to define structures and create content in a cloud based tool while the main application is being developed. When the main application is ready the content can be migrated over.

One of the major benefits of using the Kentico Cloud tooling is that content can be managed in one place and accessed via an API. This means the content can be used across systems, not just Kentico. Kentico Cloud Management acts as a content hub, so in the context of this article it could be used to move content between different environments of the same application, or to import or migrate content from other systems.

The Kentico Delivery API allows the content to be accessed via a RESTful API enabling it to be easily moved between environments. The other exciting feature is the ability to act as a Headless CMS allowing content to be accessed from any language or platform.

When using Kentico Draft it is possible to move the file system changes via a CI pipeline and then move content via the Kentico Delivery API.
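As a minimal sketch of that content retrieval (the project ID is a placeholder, and the endpoint shape should be verified against the Kentico Cloud documentation), fetching items from the Delivery API is a plain HTTP GET returning JSON:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class DeliveryApiSample
{
    // Placeholder project ID - substitute your own Kentico Cloud project.
    private const string ProjectId = "00000000-0000-0000-0000-000000000000";

    public static async Task<string> GetItemsJsonAsync()
    {
        using (var client = new HttpClient())
        {
            // The Delivery API returns content items as JSON.
            var url = $"https://deliver.kenticocloud.com/{ProjectId}/items";
            return await client.GetStringAsync(url);
        }
    }
}
```

Because the API is plain HTTP, the same content can be pulled into any environment or any non-Kentico system that can make a web request.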


Content Staging and Continuous Integration, when used together, have vastly improved the process of managing change between environments. This has facilitated the creation of Continuous Integration and Delivery pipelines for Kentico projects.

There are still some challenges that cannot be met; versioning a change set is difficult. Tools such as Octopus Deploy and Microsoft Release Manager can be used to create packages that have version numbers assigned, which can then be migrated up through environments. For Kentico there still needs to be a manual selection of the change, and this cannot be included in the packages.

Fully automated one click deployments are also not currently possible when using Content Staging, as this step has to be done manually.

The ideal scenario is that a change set of both file system and Kentico change can be grouped together, given a version number, and then deployed to each environment. This would provide confidence in exactly what each release will change, and resilience in that the change will have been tested on previous environments.

Fully automated pipelines like this allow release control to be taken away from the development team and empower the test and product teams to approve releases. One click tools mean that a user authorises a release with one click and it is applied to the environment.

With the recent developments in Kentico change control, hopefully this functionality will arrive in the near future and true Continuous Deployment can be achieved.



Faking and Seeding Data with Bogus

I was searching for a tool a few weeks back to help me seed some data and I found a fantastic NuGet package called Bogus. This tool lets you set up rules for a class's properties with various different data generation options.

For example, here are some sample rules I created for a simple class where the Id is generated as a unique index but the HashedCollection is passed in as a variable:

            var comparerFake =
                new Faker<Comparer>().RuleFor(u => u.Id, f => f.UniqueIndex)
                    .RuleFor(u => u.ItemCreatedWhen, f => f.Date.Recent())
                    .RuleFor(u => u.HashedCollection, f => HashedCollection);

Once the rules have been defined (I usually snaffle them away in a private method), an instance or a collection of the class can be generated:

// Generate a single instance
var singleInstance = comparerFake.Generate();

// Generate a collection of ten instances
var fakeCollection = comparerFake.Generate(10);
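For context, the rules above assume a class shaped something like the following. This is a hypothetical sketch, as the real Comparer class is not shown and the HashedCollection type is a guess:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical class matching the Faker<Comparer> rules above.
public class Comparer
{
    public int Id { get; set; }
    public DateTime ItemCreatedWhen { get; set; }
    public IEnumerable<string> HashedCollection { get; set; }
}
```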


The API supports many different data types and covers most of the ones likely to be needed for most scenarios.

So far I have used this library for seeding a database with data, testing complicated AutoMapper mappings, and unit tests which operate on collections.

Testing Web Config IP Restrictions Locally

I was up against a hard deadline recently and needed to test some IP restrictions locally before deploying to a dev server. I have never had to do this locally before and had to jump through a few hoops to get it working. 

We wanted to put the restrictions in the config so they are applied to any environment the site is hosted on. This is easily achieved using the <ipSecurity> element in the <security> section.

The sample XML below shows a configuration where all IPs are blocked except the ones listed. It is possible to invert this by setting the allowUnlisted attribute to true; when this is the case, the IPs in the list are not allowed to access the site.

      <ipSecurity allowUnlisted="false" denyAction="Forbidden">
        <!-- this line blocks everybody, except those listed below -->
        <!-- removes all upstream restrictions -->
        <clear />
        <!-- allow requests from the local machine -->
        <add ipAddress="127.0.0.1" allowed="true" />
        <!-- Allowed IPs -->
        <add allowed="true" ipAddress="" subnetMask="" />
      </ipSecurity>

This article details the options for IP restriction configuration, as well as other options.

To get this working on my local machine I had to do three other steps, although one may not have been required!


1 - Turn on the IIS role service for IP security via the Turn Windows Features On and Off dialog - the steps are detailed here

2 - Allow the Security section to be overridden at the application level. Within IIS the Security section is locked for override and needs to be set to Read/Write in the Feature Delegation settings. This is available at the server level in IIS and is covered in this blog post

3 - Allow override in the applicationHost.config files. There are two of these: one for the machine, located at %windir%\system32\inetsrv\config\applicationHost.config, and another in the project folder at .vs\config. I set both to be safe, but it may be that only the project level one is required.

This is an example of the required setting:

<section name="ipSecurity" overrideModeDefault="Allow" />

This is the Stack Overflow post that pointed me in the right direction for the applicationHost.config



Being a Digital Squirrel

I have noticed that over the last four years I have developed a habit, a habit of squirrelling away information. I have become a hoarder of digital information, a digital squirrel.

The habit started when I moved from being a Senior Developer to Technical Architect, the demands moved from in depth knowledge for a topic to a breadth of knowledge across a range of technologies and techniques.

I started consuming more and more information, emailing myself links from social media to review and bookmark in chrome. I followed more and more blogs and again bookmarking them in chrome in case they might be useful.

Then I discovered the aggregate blogs like the The Morning Brew and Visual Studio Top Ten which provided even more fuel for my bookmarking habit.

A colleague of mine saw me trying to find a particular bookmark in this sea of information and asked if I had bookmarked the entire internet. I did reach a point where I had too many bookmarks to manage and have had to reorganise a few times.

As well as the development community, the workplace is also a rich source of information sharing. Google+ and Slack are used at work for colleagues to share useful links and articles, adding more volume to my bookmark library.

My collection of ebooks and white papers has grown exponentially; some of these are purchased but the majority are free. The purchased ones tend to be very useful, but the free ones supplied for marketing purposes, where you supply an email and company information in return for a book, are less useful and form clutter.

Every conference I have attended has resulted in physical hand outs and courses have provided reams of material which ended up clogging the desk and drawers in the office, so if there is not a digital option these end up in the recycling.

Facebook and LinkedIn are a source of infographics, which look great and get a message over quickly. I love a good cheat sheet and they are a great reference when working with an unfamiliar tool or technology, but I now have duplicates and cannot lay my hands on the ones I want when I need them, so I began snaffling these away. Facebook is a surprisingly good source for articles; the Microsoft Developer pages provide great quality articles.

The quantity of free, good quality information available at present is immense, and the broad spectrum of technology subjects required for modern development multiplies that even further. In my roles I am required to be able to suggest technologies and techniques to solve requirements in meetings, calls and passing conversations. In depth knowledge is still required for implementation on projects, and this has narrowed down to a smaller subset of technologies.

In order to filter the information available and select what is useful to me, I use a number of techniques.

I use Feedly to follow blogs, categorised into Aggregate Blogs, Architecture, Development Community and Development. This allows me to scan through posts quickly, bookmarking by subject area any reference material which may be useful, and reading any posts of interest.

This process allows me to see the trends in development and architecture and look for more in-depth information for any gaps in my knowledge.

I set up a blog about a year ago and try to write a post a month. I am going to start creating summary posts for technologies and trends that I can use as a memory tool, and also as a jumping off point if I need to do more in-depth research.

The ebooks I collect are categorised by topic and put on cloud storage, so I can access them anywhere and refer people to them for personal training. By far the best source of reading material I have is my subscription to Safari Books Online. This has more books available than I will ever be able to read and I can queue them up; the mobile app allows me to quickly access a book while travelling and make the best use of small pockets of time in the day.

To summarise, the approach is to scan multiple sources of information daily, identifying articles of use or new technologies and trends, and either bookmark these for reference or research further, collecting books. This approach allows me to maintain the broad spectrum of knowledge required as an Architect in a modern development team, and also a library of digital material I can draw on quickly.

This information provides me with the confidence that I can draw on resources quickly during the rapid pace of Agile development.

This process is never complete and the collection requires regular maintenance to be effective, but as the number and complexity of technologies used increases, refining the skills to process information and store what's useful becomes even more critical.







Consistent Redirects and Session State

One thing I have learnt from experience to check when getting unexpected behaviour with sessions in ASP.NET is the redirects. I have worked on sites where the HTTP and HTTPS redirects are inconsistent, and this can cause issues.
One of the more serious cases I experienced was where a 'www' redirect was missing and a redirect was being used to transfer to a third party site. In this case the user started a session using the address http://websitedomain... and then transferred to the third party site.
When the user was redirected to the full web address http://www.websitedomain... a new session was started by IIS and the user lost their session and all information associated with it. This severely affected the user experience on the site.
Many issues with lost sessions are difficult to diagnose due to the symptoms being misleading. Fortunately the fix is quick and easy: simply specifying the redirects in the web.config for the site ensured a consistent URL for all sessions on the site.
These rules can be put into IIS directly, but they can be lost when the web.config is overwritten on a deployment. If the rules are in the web.config then there is the benefit of versioning in source control and consistency across web farms.

Here is an example web.config entry for http to https and non-www to www redirects;

        <rule name="Redirect landing request to trailing slash" stopProcessing="true">
          <match url="landing\/(.*[^\/])$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" />
          </conditions>
          <action type="Redirect" url="{R:0}/" redirectType="Permanent" />
        </rule>
        <rule name="Redirect to HTTPS" enabled="true" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:0}" redirectType="Permanent" />
        </rule>
        <rule name="Redirect non www to www" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^www\." negate="true" />
          </conditions>
          <action type="Redirect" url="https://www.{HTTP_HOST}/{R:0}" redirectType="Permanent" />
        </rule>

Certified Scrum Product Owner Course

I attended the Certified Scrum Product Owner course this month run by Agil8.

To be honest I was a bit sceptical, as I have worked with some really great Agile practitioners, but I came away feeling like I had my Agile compass recalibrated!

The instructor was David Hicks, who had a great teaching technique, was very good at energising the room, and had broad experience of working in Agile.

The syllabus is available on the Agil8 website here.

The main takeaway points for me from the course were:

  1. The first question should always be why?
  2. Agile is an empirical process: defining a goal and moving towards it in steps.
  3. Three pillars of empirical process control - Transparency, Inspection & Adaptation.
  4. At the start of the process uncertainty is high - with each sprint it reduces.
  5. It is important to stop if a product is not going to work; failing fast is one of the main benefits of the MVP process.
  6. User stories should always focus on business value - even technical ones.
  7. Need to inspect all the time and to measure value!
  8. Value is offered by having a potentially shippable product - you don't have to ship it, but you can get feedback from customers and prove ROI early.
  9. Scrum is a mirror showing the state of the project from the data - it removes the politics and emotions.
  10. Being a good product owner is hard.

The course provides the Scrum Alliance Certification and two years membership.

I can really recommend this course to anyone. I am now looking at all the projects I am working on from a product owner perspective and trying to unlearn the Agile bad habits that have formed over the last few years.

Kentico CMS.Tests Library - Unit Testing for Kentico Objects

As I was writing this post it was added to my work's blog too - with much better formatting!

One of the big difficulties I have found when working on bespoke code for a Kentico based website is writing unit tests where the code under test uses Kentico objects, for example the Kentico AddressInfo object being used in a custom code method.

When writing the test, an AddressInfo object would need to be created and passed in as a parameter by the test. However, the creation of the object would fail without a connection string, even though the database is not required.

I have seen a few approaches where people have written wrappers around the provider classes, but if you want to work with the Kentico objects you still cannot create a new object.

Starting in Kentico 8 there is a hidden gem of a library included called CMS.Tests which provides the solution to this problem. There is not much documentation on it, but this Kentico Article covers everything. There is also a Class Reference document which shows all the methods and properties.

There are three test types available:

  1. Unit Tests
  2. Integration Tests
  3. Isolated Integration Tests

Unit Tests

As you would expect these isolate the code from the database and allow for the tests to run quickly with provided fake data.

The key thing to do is make sure the tests inherit from the UnitTests base class in CMS.Tests, otherwise the Fake<> methods cannot be used.

public class IamAUnitTest : UnitTests


Once that is set up, it is a simple case of creating a Fake<> for each Kentico object and its provider that the test requires:

Fake<AddressInfo, AddressInfoProvider>()
    .WithData(
        new AddressInfo()
        {
            AddressGUID = _billingAddressGuid,
            AddressLine1 = "AddressLine1",
            AddressLine2 = "AddressLine2",
            AddressCity = "AddressCity",
            AddressZip = "Postcode"
        },
        new AddressInfo()
        {
            AddressGUID = _deliveryAddressGuid,
            AddressLine1 = "AddressLine1",
            AddressLine2 = "AddressLine2",
            AddressCity = "AddressCity",
            AddressZip = "Postcode"
        });


You can provide as many objects as you want to the .WithData() method, and then the fake provider will let you look them up by id, providing realistic scenarios:

var billingAddress = AddressInfoProvider.GetAddressInfo(_billingAddressGuid);


This has been really useful when testing the logic in custom methods without having to spin up the website and navigate through the UI. It is also great for edge case and exception testing, to see how the logic performs when you get something unexpected back from Kentico.

There is a full list of the fakeable Kentico Objects here.

Integration Tests

These allow the code to be run against a database but don't require a Kentico website, as long as all the required Kentico binaries are referenced and an App.config file with the database connection string is present.

These will be slower than unit tests but allow for more realistic scenarios. Care should be taken if the tests write data to the database, as this will be harder to reset at the end of the test and could affect database integrity.

As above, the tests have to inherit from the IntegrationTests base class, but there is no need to call the Fake<> method as these tests will be accessing the database.

public class IamAIntegrationTest : IntegrationTests


Isolated Integration Tests

These tests are similar to the integration tests above, except that a copy of the database is created using SQL Server 2012 Express LocalDB to protect against database writes affecting the integrity of the data. The local database gets refreshed with each test, so these will be slow to run; they are ideal for longer integration tests, perhaps on a nightly build.

The logic can be tested against a clean seeded database for each test which will be repeatable on each run.
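Following the same pattern as the other test types, these tests inherit from the IsolatedIntegrationTests base class in CMS.Tests:

```csharp
public class IamAnIsolatedIntegrationTest : IsolatedIntegrationTests
```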

CMS Asserts

There is also a CMSAssert set of methods that allow Kentico specific assertions; these can be used alongside the test framework's assertions. An example assertion taken from the Kentico article is the QueryEquals method, which checks two SQL statements are equal.

CMSAssert.QueryEquals(q.ToString(), "SELECT UserID FROM CMS_User");


Blog Rebuild with HTTPS all the time

OK, so it was time for the blog to receive some TLC. I am running BlogEngine.Net version 2.9 as an Azure web app with an Azure SQL database, and version 3.3 of BlogEngine.Net was available, with a lot of improvements to the user interface and editor. I was also running a multi user and multi blog set up, which made the database upgrades trickier, so I opted for a full clean install and then to migrate the data over. I am not a prolific author so there was not too much to migrate.

Also, with the push towards HTTPS for all pages and Chrome marking pages insecure in 2017, I wanted to upgrade to HTTPS. The best way to do this was to use a Let's Encrypt certificate, which is a free automated certificate authority.

Clean Installation of BlogEngine.Net

This was the easiest part: simply download the zip file, then extract the files. The web.config needed to be updated to use the DbBlogProvider and a connection string added.

    <blogProvider defaultProvider="DbBlogProvider" fileStoreProvider="XmlBlogProvider">
        <providers>
            <add description="Xml Blog Provider" name="XmlBlogProvider" type="BlogEngine.Core.Providers.XmlBlogProvider, BlogEngine.Core" />
            <add connectionStringName="BlogEngine" description="Sql Database Provider" name="DbBlogProvider" type="BlogEngine.Core.Providers.DbBlogProvider, BlogEngine.Core" />
        </providers>
    </blogProvider>


The membership provider and role provider also needed to be updated to use the database, although this is less important as I am sticking to single user mode.

    <membership defaultProvider="DbMembershipProvider">
        <providers>
            <clear />
            <add name="XmlMembershipProvider" type="BlogEngine.Core.Providers.XmlMembershipProvider, BlogEngine.Core" description="XML membership provider" passwordFormat="Hashed" />
            <add name="DbMembershipProvider" type="BlogEngine.Core.Providers.DbMembershipProvider, BlogEngine.Core" passwordFormat="Hashed" connectionStringName="BlogEngine" />
        </providers>
    </membership>
    <roleManager defaultProvider="DbRoleProvider" enabled="true" cacheRolesInCookie="false">
        <providers>
            <clear />
            <add name="XmlRoleProvider" type="BlogEngine.Core.Providers.XmlRoleProvider, BlogEngine.Core" description="XML role provider" />
            <add name="SqlRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="BlogEngine" applicationName="BlogEngine" />
            <add name="DbRoleProvider" type="BlogEngine.Core.Providers.DbRoleProvider, BlogEngine.Core" connectionStringName="BlogEngine" />
        </providers>
    </roleManager>


Once the blog started up, the admin section had a link to update to 3.3.5, which upgraded automatically from the admin section without me having to touch a file. Very slick.

Adding LetsEncrypt

I have been wanting to try Let's Encrypt out for a while and this was ideal for my blog, although I wasn't sure how to get a certificate onto an Azure Web App using a custom domain. After some googling, this Azure extension stood out a mile. Not only does it provide a UI for adding the certificate, it also sets up a web job to renew certificates when they expire. Let's Encrypt certificates expire after 90 days; this is for security and to encourage automated renewal (see the LetsEncrypt Expiration Notes).

I found this great detailed blog post which explained a lot of the reasons for the different steps. However, the 'Register a Service Principal' step in PowerShell looked a bit trickier than the time I had available, so I jumped back over to the wiki documentation for the extension, which covered how to do it in the Azure Control Panel.

I found the extension failed on the last step the first time it ran; I ran it again and it worked. I wasn't able to find out why, and it may have just been an Azure refresh delay on one of the setup steps.


Adding an HTTPS redirect

Again, a nice easy one: just add the redirect rule into the web.config. This Blog Post on Azure covers how to add a certificate to a custom domain; as I had done this with the Azure extension I just needed to skip to the 'Enforce HTTPS' section.

Adding the following section in the <system.webServer> section of the web.config moves all sessions to HTTPS:

        <!-- BEGIN rule TAG FOR HTTPS REDIRECT -->
        <rule name="Force HTTPS" enabled="true">
          <match url="(.*)" ignoreCase="false" />
          <conditions>
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" appendQueryString="true" redirectType="Permanent" />
        </rule>
        <!-- END rule TAG FOR HTTPS REDIRECT -->


Migrate old data 

The final step was to migrate the data over from the old blog. I had created a new SQL Azure database, so I just needed to migrate three tables: be_Posts, be_Categories and be_PostCategory. I wrote a slightly funky query to generate formatted insert statements and ran it in SSMS with results to text, then ran the output against the new database. It worked for all but three posts, where the post data exceeded the results-to-text limit of SSMS; as there were only three I moved those posts manually. Not the cleanest of solutions, I admit, but I wanted a fresh install of the blog software and to make future upgrades easier by using the default blog GUID as the blog id.

-- char(39) is a single quote; REPLACE doubles any embedded quotes in the
-- text columns so the generated statements stay valid
SELECT 'INSERT INTO [dbo].[be_Posts] VALUES (' + CHAR(13) + CHAR(10)
      + char(39) + '27604F05-86AD-47EF-9E05-950BB762570C' + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + convert(varchar(50),[PostID]) + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + REPLACE([Title], char(39), char(39) + char(39)) + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + REPLACE([Description], char(39), char(39) + char(39)) + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + '*' + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + convert(varchar(50),[DateCreated]) + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + convert(varchar(50),[DateModified]) + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + REPLACE([Author], char(39), char(39) + char(39)) + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + convert(varchar(50),[IsPublished]) + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + convert(varchar(50),[IsCommentEnabled]) + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + convert(varchar(50),[Raters]) + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + convert(varchar(50),[Rating]) + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + [Slug] + char(39) + CHAR(13) + CHAR(10)
      ,',' + char(39) + convert(varchar(50),[IsDeleted]) + char(39) + CHAR(13) + CHAR(10)
      ,')'
  FROM [blogjongregorydb].[dbo].[be_Posts]
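Run with results to text, each row of output is one complete statement ready for the target database, shaped roughly like this (all values here are illustrative, and the VALUES list relies on matching the be_Posts column order, which holds when both databases are the same BlogEngine schema version):

```sql
-- Illustrative output only: one generated row, values made up
INSERT INTO [dbo].[be_Posts] VALUES (
'27604F05-86AD-47EF-9E05-950BB762570C'
,'D6C1B5E3-1111-2222-3333-444444444444'
,'My First Post'
,'A short description'
,'*'
,'Jan 1 2018 12:00AM'
,'Jan 1 2018 12:00AM'
,'Jon'
,'1'
,'1'
,'0'
,'0'
,'my-first-post'
,'0'
)
```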



Writing this first post has been so much easier - the editor in 3.3.5 is much better, and the ability to add links is far quicker. The code window has more formatting options and is easier to edit, and code snippets can be added in directly. Although I still prefer the formatting of .

Adding the Let's Encrypt extension and certificate was really convenient, and I just need to check on the renewal of the certificate in 90 days' time.

Both the blog and my personal CV website are now marked as secure, and I feel confident using this approach on other websites.

C# Importance of Checking for NULLs

This week I was reviewing some C# code where the developer had used 'ToString()' throughout on properties without checking for nulls. 

The result of this was a log file full of 'System.NullReferenceException: Object reference not set to an instance of an object.' entries, and it was difficult to trace where the error had occurred and where the logic stopped executing.
So I put together a few simple samples of how to safely check for nulls and work with collections. I can distribute them to the trainee developers, and I hope to add more to this project over time. The code is available here;
Some of the samples are below.
Example ways to handle null values
AS Operator

var asOperator = classWithNulls.IamNull as string;
Console.WriteLine("Null is returned by 'as' with no exception : " + asOperator);
Convert Class

var convertToString = Convert.ToString(classWithNulls.IamNull);
Console.WriteLine("The Convert class returns an empty string if null, with no exception : " + convertToString);
Coalesce Operator

var coalesce = classWithNulls.IamNull ?? "DefaultValueIfNull";
Console.WriteLine("DefaultValue is returned if null : " + coalesce);
Conditional Operator

// ReSharper disable once MergeConditionalExpression
var conditional = classWithNulls.IamNull != null
    ? classWithNulls.IamNull.ToString()
    : "DefaultValueIfNull";

Console.WriteLine("DefaultValue is returned if null : " + conditional);

// C# 6 and later null propagation operator where the null value is passed up the chain without an exception

var conditionalCSharp6 = classWithNulls.IamNull?.ToString();

Console.WriteLine("Null is propagated and returned without an Exception : " + conditionalCSharp6);

// This can be combined with Coalesce to remove the null and provide a default value
var conditionalCSharp6DefaultValue = classWithNulls.IamNull?.ToString() ?? "DefaultValueIfNull";
Console.WriteLine("Default Value is returned without an Exception : " + conditionalCSharp6DefaultValue);
Extension Method
public static class Extension
{
    public static string ToStringOrEmpty(this object value)
    {
        return value == null ? "" : value.ToString();
    }
}

// A custom extension method attached to object to handle null values, see file ToStringOrEmpty.cs

var extensionMethod = classWithNulls.IamNull.ToStringOrEmpty();
Console.WriteLine("String.Empty is returned with no exception : " + extensionMethod);

SSL v3.0 - Disable the POODLE Threat on IIS

On a recent pen test, the SSL v3 POODLE vulnerability was picked up on the server.

To test a website, this handy page reports on any URL visible to the browser.

I assumed this would be a setting in IIS, but the advice from Microsoft is a quick registry edit - detailed in the 'Disable SSL 3.0 in Server Software' section of this article:

Disable SSL 3.0 in Windows For Server Software

You can disable support for the SSL 3.0 protocol on Windows by following these steps:

  1. Click Start, click Run, type regedt32 or type regedit, and then click OK.
  2. In Registry Editor, locate the following registry key:
    HKey_Local_Machine\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server

    Note If the complete registry key path does not exist, you can create it by expanding the available keys and using the New -> Key option from the Edit menu.

  3. On the Edit menu, click Add Value.
  4. In the Data Type list, click DWORD.
  5. In the Value Name box, type Enabled, and then click OK.

    Note If this value is present, double-click the value to edit its current value.

  6. In the Edit DWORD (32-bit) Value dialog box, type 0.
  7. Click OK. Restart the computer.


Note This workaround will disable SSL 3.0 for all server software installed on a system, including IIS.
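The same change can be scripted rather than clicked through. A minimal PowerShell sketch of the manual steps above (run from an elevated prompt, and restart the machine afterwards):

```powershell
# Sketch only: equivalent of the manual registry steps above.
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server'

# Create the key path if it does not already exist (the step 2 note)
New-Item -Path $path -Force | Out-Null

# Create or overwrite the Enabled DWORD value as 0 (steps 3 to 7)
New-ItemProperty -Path $path -Name 'Enabled' -Value 0 -PropertyType DWord -Force | Out-Null
```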