Consuming RSS Feeds in C#

I have never consumed an RSS feed in code before and assumed it would be a bespoke process, but the System.ServiceModel.Syndication library has everything you need in a few lines of code:

using System;
using System.ServiceModel.Syndication;
using System.Web;
using System.Xml;

string url = "http://blog.jongregory.net/syndication.axd";

XmlReader reader = XmlReader.Create(url);
SyndicationFeed feed = SyndicationFeed.Load(reader);
reader.Close();

foreach (SyndicationItem item in feed.Items)
{
    Console.WriteLine("Title {0}", item.Title.Text);

    // The summary is URL decoded before being written out
    var summary = HttpUtility.UrlDecode(item.Summary.Text);
    Console.ForegroundColor = ConsoleColor.White;
    Console.WriteLine("Summary {0}", summary);
}

Quick and simple, and the same library can also be used to create RSS feeds.
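A rough sketch of the reverse direction, building and writing a feed, is shown below; the titles, URLs and output file name are placeholder values rather than anything from a real feed.

using System;
using System.Collections.Generic;
using System.ServiceModel.Syndication;
using System.Xml;

// Build a feed with a single item (all values below are placeholders)
var feed = new SyndicationFeed(
    "Example Blog",
    "Posts from an example blog",
    new Uri("http://example.com/"))
{
    Items = new List<SyndicationItem>
    {
        new SyndicationItem(
            "First post",
            "Summary of the first post",
            new Uri("http://example.com/first-post"))
    }
};

// Write the feed out as RSS 2.0; an Atom10FeedFormatter could be used instead
using (var writer = XmlWriter.Create("feed.rss"))
{
    new Rss20FeedFormatter(feed).WriteTo(writer);
}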

Binary Search of a Repository

Sometimes an innocuous change can cause issues on a project at a later date, especially when working with an application such as a CMS that contains a lot of black box code.

A useful technique when an issue is occurring and there is no obvious cause is to employ a Binary Search Algorithm on the repository.

This technique, also known as a half-interval search, involves reverting the repository to the last known working point and verifying the issue is not present.

Then move to the commit halfway between the last known working point and the latest commit. If the issue is present, the cause is in the first half of the commits; if it is not, the cause is in the second half.

The same process is then applied to the halfway point of the half that contains the issue, repeating until the number of commits has been reduced to a manageable level.

A tool like Git Extensions can then be used to compare the few commits where the issue is known to start, to see what has changed.

This approach works well when there are no clues as to what is causing the issue, and it can save a lot of time compared with trial and error.

It is recommended to use this technique on a completely fresh checkout to maintain the integrity of the local repository.
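As a minimal sketch of the halving logic (the commit list and the issueIsPresent check below are purely illustrative; in practice the check is a revert-and-test of the candidate commit):

using System;
using System.Collections.Generic;

public static class CommitBisector
{
    // commits is ordered from the last known working commit (index 0)
    // through to the newest commit, which is known to show the issue
    public static string FindFirstBadCommit(IList<string> commits, Func<string, bool> issueIsPresent)
    {
        int good = 0;                    // last known working commit
        int bad = commits.Count - 1;     // earliest commit known to show the issue

        while (bad - good > 1)
        {
            int mid = good + (bad - good) / 2;

            if (issueIsPresent(commits[mid]))
            {
                bad = mid;               // the cause lies in the older half
            }
            else
            {
                good = mid;              // the cause lies in the newer half
            }
        }

        return commits[bad];             // the commit that introduced the issue
    }
}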

Real World Scenario


On a recent project the Web Forms async setting had been set on the master page in February. In May it was discovered that a resource string helper provided by the CMS was not working. Using the binary search technique helped us to zero in on this change and then understand why it had broken the resource code.

Deploying Changes with Kentico

Modern development techniques involve moving change between environments; a typical life cycle is Local -> Development -> QA/Staging -> Production. Being able to package up a set of changes, apply a version number, and then move that package along the environment path is crucial to a resilient and reliable development process.

There is a wide range of tooling available for migrating change between environments for the .Net code used within the Kentico project. These tools can be used to move the changes to the website file system along with any bespoke code added to the Visual Studio solution.

Crucially, Kentico also stores structural information in the database which the file system changes depend upon, and this needs to be migrated between environments in parallel with the file system changes.

The challenge in building an effective Continuous Integration process for Kentico has been synchronising file system and database change from developers' local environments onto an integrated development environment, then onto QA and Production.

Kentico has been providing more and more tooling over recent versions for migrating change between environments; the following are some of the possible approaches.

Manual

The simplest way to manage change is to record the changes made to a Kentico environment in a spreadsheet or equivalent and then manually reapply them to the other environments when required. This technique is not recommended as it is time consuming and error prone; the environments are likely to become inconsistent very quickly.

There are situations where this may be the only option available, for example when access to environments is limited or when applying a very small change to a legacy site, but it is not a practical solution for a team of developers producing multiple changes.

Imports/Exports

The first of the automated tools is the Kentico Import/Export process, which has been in Kentico since the early versions. This can be accessed from the Kentico Admin section and allows everything from a single object to a full site to be extracted into a collection of data files. These files can then be imported into the required environments.

This is an established approach to migration and is currently the best way to extract and migrate an entire site.

It is a manual process though, which can be slow for smaller changes, and it is difficult to isolate a single developer's change in the extract, so the file contains more objects than required. This can lead to conflicts when importing; the files are also very large to store in a version control system, and there is no ordering process available for applying the import files.

For more information on the Import and Export process, see the Kentico Import and Export Documentation.

Content Staging

 

Version 8 of Kentico introduced Content Staging. This process tracks changes to content and Kentico objects and then allows that change to be selectively migrated to other environments. The process is bi-directional, so content created on a QA environment can be synchronised down to the development environment.

The process is based on web services, so each environment needs to be able to communicate with the next one in the chain via HTTP. This is usually fine for local and development environments, but can prove more challenging for QA and Production environments where access may be restricted.

When using Content Staging it is important to turn it on as soon as possible so it can track change; this can be done without the target server being available. The URL can be added at a later date and the change synchronised.

Content Staging tracks a mix of developer and content creator change, so filtering is available to allow each of the target change sets to be separated out and migrated. Version 9 introduced the concept of grouping change and then migrating that group to another environment: https://docs.kentico.com/k9/deploying-websites/content-staging/synchronizing-the-content

By default Content Staging captures a wide range of change, and the tracked list can grow significantly over time. It is advisable to housekeep this list so that it only includes required change. The tracked change is order dependent, so a Create task must come before an Edit task, for example. It is possible to programmatically filter out unrequired change so that managing the migrations becomes more efficient; this document describes the process in more detail: https://docs.kentico.com/k9/custom-development/handling-global-events/excluding-content-from-staging-and-integration

 

Continuous Integration

Continuous Integration was introduced in version 9 and improved in version 10. It works on a similar principle to Content Staging in that changes are automatically tracked, but differs in that the changes are serialized into XML files on the file system. These files can then be checked into the source control system and shipped with the rest of the file system changes. In the bin folder of the Kentico website there is a command line tool that can be used to apply these change files to the database.

This is a developer-focused tool which has enabled developers to use a local copy of the Kentico database and synchronise changes between themselves, and then on up to development. It also supports working on multiple branches in the same local environment, with the Kentico database change being reapplied or removed when switching between branches. It has a negative impact on performance, so it should not be used on production environments.

The golden rule of working with Continuous Integration is to run the command line tool whenever a branch is changed, committed or pulled.

On my projects we use Continuous Integration to manage the change between local copies of Kentico and then, as part of the build and deployment process, up to the development environment. This is achieved by running the command line tool on the web server after the deployment, but it does restrict us to just one web server for the development environment. This is because the CI process is server specific, so in a web farm scenario there are two servers pointing to one database, which the CI process does not currently support. Theoretically it is possible to use the CI restore process on a development web farm if the CI process is disabled on the development database, although this is untested by the author.

For the environments after development we use Content Staging and have to perform this synchronisation as a manual step following the build and deployment process. It is a quick process and does not cause a lengthy delay, but it means the build process cannot be fully automated.

Continuous Integration is a great tool but it does need to be set up correctly and used with care; the Kentico instructions should be followed carefully: https://docs.kentico.com/k10/developing-websites/preparing-your-environment-for-team-development/setting-up-continuous-integration

 

Compare

 

Compare for Kentico is a partner-developed tool which allows the visual comparison of two instances of Kentico and facilitates the synchronisation of the environments.

This tool is extremely useful when inheriting a previously unknown Kentico application or when applying the previously discussed change control techniques to a legacy system. In these scenarios it is important to understand the differences between the environments and ensure they are synchronised before implementing a change control pipeline.

It is also very useful for debugging environments that are behaving differently. Compare requires agents to be installed on each of the environments in order to work.

The Search for Compare tool is free to use, but Compare for Kentico is a paid-for tool.

 

Kentico Draft

 

Kentico Draft is an offering from the Kentico Cloud suite. It is not strictly a tool for managing change, but it does allow content to be created outside of the Kentico application and then migrated over.

It is a productivity tool that allows development teams and content creators to define structures and create content in a cloud based tool while the main application is being developed. When the main application is ready the content can be migrated over.

One of the major benefits of using the Kentico Cloud tooling is that content can be managed in one place and accessed via an API. This means the content can be used across systems, not just Kentico. Kentico Cloud Management acts as a content hub, so in the context of this article it could be used to move content between different environments of the same application, or to import or migrate content from other systems.

The Kentico Delivery API allows the content to be accessed via a RESTful API, enabling it to be easily moved between environments. The other exciting feature is the ability to act as a headless CMS, allowing content to be accessed from any language or platform.

When using Kentico Draft it is possible to move the file system changes via a CI pipeline and then move content via the Kentico Delivery API.
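As a rough sketch of the kind of call involved (the endpoint format and project id below are assumptions based on the public Kentico Cloud Delivery API documentation, not values from a real project), content items can be pulled down with a simple HTTP GET and then pushed into the target environment:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class DeliveryApiSample
{
    public static async Task ListContentItemsAsync(string projectId)
    {
        using (var client = new HttpClient())
        {
            // Assumed endpoint format for listing a project's content items
            var json = await client.GetStringAsync(
                $"https://deliver.kenticocloud.com/{projectId}/items");

            // The raw JSON could now be deserialized and imported into the target site
            Console.WriteLine(json);
        }
    }
}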

Future

Content Staging and Continuous Integration, when used together, have vastly improved the process of managing change between environments. This has facilitated the creation of Continuous Integration and Delivery pipelines for Kentico projects.

There are still some challenges that cannot be met; versioning a change set is difficult. Tools such as Octopus Deploy and Microsoft Release Manager can be used to create packages that have version numbers assigned, which can then be migrated up through environments. For Kentico there still needs to be a manual selection of the change, and this cannot be included in the packages.

Fully automated one click deployments are also not possible currently when using Content Staging as this step has to be done manually.

The ideal scenario is that a change set of both file system and Kentico change can be grouped together, given a version number, and then deployed to each environment. This would provide confidence in exactly what each release will change, and resilience in that the change will have been tested on previous environments.

Fully automated pipelines like this allow release control to be taken away from the development team and empower the test and product teams to approve releases. One-click tools mean that a user authorises a release with one click and it is applied to the environment.

With the recent developments in Kentico change control, hopefully this functionality will arrive in the near future and true Continuous Deployment can be achieved.

Faking and Seeding Data with Bogus

I was searching for a tool a few weeks back to help me seed some data and I found this fantastic NuGet package called Bogus. This tool lets you set up rules for a class's properties with various data generation options.

For example, here are some sample rules I created for a simple class, where the Id is generated as a unique index but the HashedCollection is passed in as a variable:

var comparerFake = new Faker<Comparer>()
    .RuleFor(u => u.Id, f => f.UniqueIndex)
    .RuleFor(u => u.ItemCreatedWhen, f => f.Date.Recent())
    .RuleFor(u => u.HashedCollection, f => HashedCollection);

Once the rules have been defined (I usually snaffle them away in a private method), an instance or a collection of the class can be generated:

//Generate a single instance
var classToPopulate = new ClassToPopulate();
comparerFake.Populate(classToPopulate);

//Generate a collection of ten instances
var fakeCollection = comparerFake.Generate(10);

 

The API supports many different data types and covers most of the ones likely to be needed in typical scenarios.

So far I have used this library for seeding a database with data, testing complicated AutoMapper mappings, and unit tests which operate on collections.
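For the seeding scenario, a sketch along the following lines works well; the Person class and its properties are purely illustrative, not part of Bogus or any real project.

using System.Collections.Generic;
using Bogus;

// Illustrative entity to be seeded
public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string Email { get; set; }
}

public static class SeedData
{
    public static List<Person> CreatePeople(int count)
    {
        // Rules for each property, using Bogus's built-in data sets
        var personFake = new Faker<Person>()
            .RuleFor(p => p.Id, f => f.UniqueIndex)
            .RuleFor(p => p.FirstName, f => f.Name.FirstName())
            .RuleFor(p => p.Email, f => f.Internet.Email());

        // Generate the requested number of fake people for the seeding routine
        return personFake.Generate(count);
    }
}

The generated list can then be handed to whatever persistence layer the project uses.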

Testing Web Config IP Restrictions Locally

I was up against a hard deadline recently and needed to test some IP restrictions locally before deploying to a dev server. I had never had to do this locally before and had to jump through a few hoops to get it working.

We wanted to put the restrictions in the config so they are applied to any environment the site is hosted on; this is easily achieved using the <ipSecurity> element in the <security> section.

The sample XML below shows a configuration where all IPs are blocked except the ones listed. It is possible to invert this by setting the allowUnlisted attribute to true, in which case the IPs in the list are not allowed to access the site.

    <security>
      <ipSecurity allowUnlisted="false" denyAction="Forbidden">
        <!-- this line blocks everybody, except those listed below -->
        <!-- removes all upstream restrictions -->
        <clear/>
        <!-- allow requests from the local machine -->
        <add ipAddress="127.0.0.1" allowed="true"/>
        <!-- Allowed IP's-->
        <add allowed="true" ipAddress="0.0.0.0" subnetMask="255.255.0.0" />
      </ipSecurity>
    </security>

This iis.net article details the options for configuring IP restrictions as well as other settings: https://www.iis.net/configreference/system.webserver/security/ipsecurity?showTreeNavigation=true#005

To get this working on my local machine I had to do three other steps, although one may not have been required!

 

1 - Turn on the IIS role service for IP security via Turn Windows Features On or Off - the steps are detailed here: https://www.iis.net/configreference/system.webserver/security/ipsecurity?showTreeNavigation=true#003

2 - Allow the Security section to be overridden at the application level. Within IIS the Security section is locked for override and needs to be set to Read/Write in the Feature Delegation settings. This is available at the server level in IIS and is covered in this blog post: https://www.iis.net/learn/manage/managing-your-configuration-settings/an-overview-of-feature-delegation-in-iis#02

3 - Allow override in the ApplicationHost.config files. There are two of these: one for the machine, located at %windir%\system32\inetsrv\config\applicationHost.config, and another in the project folder at .vs\config. I set both to be safe, but it may be that only the project-level one is required.

This is an example of the required setting:

<section name="ipSecurity" overrideModeDefault="Allow" />

This is the Stack Overflow post that pointed me in the right direction for the ApplicationHost.config: http://stackoverflow.com/questions/16220819/internal-server-error-with-web-config-ipsecurity

Being a Digital Squirrel

I have noticed that over the last four years I have developed a habit, a habit of squirrelling away information. I have become a hoarder of digital information, a digital squirrel.

The habit started when I moved from being a Senior Developer to Technical Architect; the demands moved from in-depth knowledge of a topic to a breadth of knowledge across a range of technologies and techniques.

I started consuming more and more information, emailing myself links from social media to review and bookmark in Chrome. I followed more and more blogs, again bookmarking them in Chrome in case they might be useful.

Then I discovered aggregate blogs like The Morning Brew and Visual Studio Top Ten, which provided even more fuel for my bookmarking habit.

A colleague of mine saw me trying to find a particular bookmark in this sea of information and asked if I had bookmarked the entire internet. I did reach a point where I had too many bookmarks to manage and have had to reorganise a few times.

As well as the development community, the workplace is also a rich source of information sharing; Google+ and Slack are used at work for colleagues to share useful links and articles, adding more volume to my bookmark library.

My collection of ebooks and white papers has grown exponentially; some of these are purchased but the majority are free. The ones from https://leanpub.com/ tend to be very useful, but the free ones supplied for marketing purposes, where you supply an email and company information in return for a book, are less useful and form clutter.

Every conference I have attended has resulted in physical handouts, and courses have provided reams of material which ended up clogging the desk and drawers in the office, so if there is not a digital option these end up in the recycling.

Facebook and LinkedIn are a source of infographics, which look great and get a message over quickly. I love a good cheat sheet, and they are a great reference when working with an unfamiliar tool or technology, but I now have duplicates and cannot lay my hands on the ones I want when I need them, so I began snaffling these away too. Facebook is a surprisingly good source for articles; the Microsoft Developer pages provide great quality articles.

The quantity of free, good quality information available at present is immense, and the broad spectrum of technology subjects required for modern development multiplies that even further. In my role I am required to be able to suggest technologies and techniques to solve requirements in meetings and calls, and also in passing conversations. In-depth knowledge is still required for implementation on projects, and this has narrowed down to a smaller subset of technologies.

In order to filter the information available and select what is useful to me, I use a number of techniques.

I use Feedly to follow blogs, and these are categorised into Aggregate Blogs, Architecture, Development Community and Development. This allows me to scan through blogs quickly, bookmarking by subject area any reference posts which may be useful, and to read any posts of interest.

This process allows me to see the trends in development and architecture and look for more in-depth information for any gaps in my knowledge.

I set up a blog about a year ago and try to write a post a month. I am going to start creating summary posts for technologies and trends that I can use as a memory tool and also as a jumping-off point if I need to go into more in-depth research.

The ebooks I collect are categorised by topic and put on cloud storage, so I can access them anywhere and refer people to them for personal training. By far the best source of reading material I have is my subscription to Safari Books Online. This has more books available than I will ever be able to read and I can queue them up; the mobile app allows me to quickly access a book while travelling and make the best use of small pockets of time in the day.

To summarise, the approach is to scan multiple sources of information daily, identifying useful articles or new technologies and trends, and either bookmark these for reference or research them further, collecting books along the way. This approach allows me to maintain the broad spectrum of knowledge required as an architect in a modern development team, and also a library of digital material I can draw on quickly.

This information provides me with the confidence that I can draw on resources quickly during the rapid pace of Agile development.

This process is never complete and the collection requires regular maintenance to be effective, but as the number and complexity of technologies used increases, refining the skills to process information and store what's useful becomes even more critical.

Consistent Redirects and Session State

One thing I have learnt from experience to check, when getting unexpected behaviour with sessions in ASP.Net, is the redirects. I have worked on sites where the HTTP and HTTPS redirects are inconsistent, and this can cause issues.
 
One of the more serious cases I experienced was where a 'www' redirect was missing and a redirect was being used to transfer to a third party site. In this case the user started a session using the address http://websitedomain... and then transferred to the third party site.
 
When the user was redirected to the full web address http://www.websitedomain... a new session was started by IIS and the user lost their session and all information associated with it. This severely affected the user experience on the site.
 
Many issues with lost sessions are difficult to diagnose due to the symptoms being misleading. Fortunately the fix is quick and easy: simply specifying the redirects in the web.config for the site ensured a consistent URL for all sessions on the site.
 
These rules can be put into IIS, but they can be lost in some cases when the web.config is overwritten on a deployment. If the rules are in the web.config then there is the benefit of versioning in source control and consistency across web farms.
 
 

Here is an example web.config entry for HTTP to HTTPS and non-www to www redirects:

   <rewrite>
      <rules>
        <rule name="Redirect landing request to trailing slash" stopProcessing="true">
          <match url="landing\/(.*[^\/])$" />
          <action type="Redirect" url="{R:0}/" redirectType="Permanent" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" />
          </conditions>
        </rule>
        <rule name="Redirect to HTTPS" enabled="true" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:0}" redirectType="Permanent" />
        </rule>
        <rule name="redirect non www to www" stopProcessing="true">
          <match url=".*" />
          <conditions  logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^domain.com$" />
          </conditions>
          <action type="Redirect" url="https://www.domain.com/{R:0}" />
        </rule>
      </rules>
    </rewrite>

Certified Scrum Product Owner Course

I attended the Certified Scrum Product Owner course this month run by Agil8.

To be honest I was a bit sceptical, as I have worked with some really great Agile practitioners, but I came away feeling like I had my Agile compass recalibrated!

The instructor was David Hicks, who had a great teaching technique, was very good at energising the room, and has broad experience of working in Agile.

The syllabus is available on the Agil8 website here.

The main takeaway points for me from the course were:

  1. The first question should always be why?
  2. Agile is an empirical process, defining a goal and moving towards it in steps.
  3. Three pillars of Empirical Process Control - Transparency, Inspection & Adaptation
  4. At the start of the process uncertainty is high - with each sprint it reduces
  5. It is important to stop if a product is not going to work; failing fast is one of the main benefits of the MVP process
  6. User stories should always focus on business value - even technical ones
  7. Need to inspect all the time and to measure value!
  8. Value is offered by having a potentially shippable product - you don't have to ship it, but you can get feedback from customers and prove ROI early.
  9. Scrum is a mirror showing the state of the project from the data - it removes the politics and emotions
  10. Being a good product owner is hard.

The course provides the Scrum Alliance certification (https://www.scrumalliance.org/community/profile/jgregory21) and two years' membership.

I can really recommend attending this course. I am now looking again at all the projects I am working on from a product owner perspective and trying to unlearn the Agile bad habits that have formed over the last few years.

Kentico CMS.Tests Library - Unit Testing for Kentico Objects

As I was writing this post it was added to my work's blog too - with much better formatting! http://www.mmtdigital.co.uk/blog/october-2016/unit-testing-for-kentico-objects?platform=hootsuite

One of the big difficulties I have found when working on bespoke code for a Kentico-based website is writing unit tests where the code under test uses Kentico objects, for example the Kentico AddressInfo object being used in a custom code method.

When writing the test an AddressInfo object would need to be created and passed in as a parameter by the test. However the creation of the object would fail without a connection string even though the database is not required. 

I have seen a few approaches where people have written wrappers around the provider classes, but if you want to work with the Kentico objects themselves you still cannot create a new object.

Starting in Kentico 8 there is a hidden gem of a library included called CMS.Tests, which provides the solution to the problem. There is not much documentation on it, but this Kentico article covers everything. There is also a Class Reference document which shows all the methods and properties.

There are three test types available:

  1. Unit Tests
  2. Integration Tests
  3. Isolated Integration Tests

Unit Tests

As you would expect these isolate the code from the database and allow for the tests to run quickly with provided fake data.

The key thing to do is make sure the tests inherit from the UnitTests base class in CMS.Tests, otherwise the Fake<> methods cannot be used.

[TestClass]
public class IamAUnitTest : UnitTests
{
.....

 

Once that is set up it is a simple case of creating a Fake<> for each Kentico object and its provider that the test requires:

Fake<AddressInfo, AddressInfoProvider>()
    .WithData(
        new AddressInfo()
        {
            AddressGUID = _billingAddressGuid,
            AddressLine1 = "AddressLine1",
            AddressLine2 = "AddressLine2",
            AddressCity = "AddressCity",
            AddressZip = "Postcode"
        },
        new AddressInfo()
        {
            AddressGUID = _deliveryAddressGuid,
            AddressLine1 = "AddressLine1",
            AddressLine2 = "AddressLine2",
            AddressCity = "AddressCity",
            AddressZip = "Postcode"
        });

 

You can provide as many objects as you want to the .WithData() method, and then the fake provider will let you look objects up by their ids, providing realistic scenarios:

var billingAddress = AddressInfoProvider.GetAddressInfo(_billingAddressGuid);

 

This has been really useful when testing the logic in custom methods without having to spin up the website and navigate through the UI. It is also great for edge case and exception testing, to see how logic performs when you get something unexpected back from Kentico.
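For example, a quick edge case check is to look up an address that was never supplied to WithData; the assumption here is that the fake provider simply returns null for unknown ids, so the error-handling path of the custom code can be exercised:

// Assumes the fake AddressInfoProvider returns null for a GUID it has no data for
var unknownAddress = AddressInfoProvider.GetAddressInfo(Guid.NewGuid());
Assert.IsNull(unknownAddress);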

There is a full list of the fakeable Kentico Objects here.

Integration Tests

These allow the code to be run against a database, but don't require a Kentico website, as long as all the required Kentico binaries are referenced and an App.config file with the database connection string is present.

These will be slower than a unit test but allow for more realistic scenarios. Care should be taken if the tests write data to the database, as this will be harder to reset at the end of the test and could affect database integrity.

As above, the tests have to inherit from the IntegrationTests base class, but there is no need to call the Fake<> method as these tests will be accessing the database.

[TestClass]
public class IamAIntegrationTest : IntegrationTests
{
.....

 

Isolated Integration Tests

These tests are similar to the integration tests above, except that a copy of the database is created using SQL Server 2012 Express LocalDB to prevent any database writes from affecting the integrity of the data. The local database gets refreshed with each test, so these will be slow to run; they are ideal for longer integration tests, perhaps on a nightly build.

The logic can be tested against a clean, seeded database for each test, which will be repeatable on each run.
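The tests follow the same pattern as the other two types and inherit from a base class in CMS.Tests; the base class name below is taken to be IsolatedIntegrationTests, which should be verified against the Class Reference linked above.

[TestClass]
public class IamAnIsolatedIntegrationTest : IsolatedIntegrationTests
{
.....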

CMS Asserts

There is also a CMSAssert set of methods that allow Kentico-specific assertions; these can be used alongside the test framework's assertion provider. An example assertion taken from the Kentico article is the QueryEquals method, which checks two SQL statements are equal.

CMSAssert.QueryEquals(q.ToString(), "SELECT UserID FROM CMS_User");