Why I use SonarQube...

SonarQube (http://www.sonarqube.org/) is a fantastic open-source tool for measuring code quality across multiple projects. I use it to provide an "at-a-glance" view of the size and quality of the projects I am involved in.

Where ReSharper or an equivalent tool gives the developer a point-of-creation view of code issues, SonarQube provides a historical and aggregated view.

SonarQube provides customisable dashboards, historical views, multi-language support and much more. The features page (http://www.sonarqube.org/features/) describes these in more detail, and the fantastic Nemo site shows a working instance of SonarQube: https://nemo.sonarqube.org/

I gave a 15-minute talk introducing SonarQube at my local user group DotNetNotts, and the slides are here: http://www.slideshare.net/JonathanGregory4/code-quality-llightning-talk

Useful Applications

These are just some of the situations in which I have found SonarQube useful over the last few years.

  • Automating the mundane parts of the code review process
  • Identifying training needs in the team
  • Providing return-on-investment metrics for technical debt work to management
  • Targeting precious refactoring time at the most beneficial areas
  • Applying standards consistently
  • Providing an incentive to write cleaner code
  • Mapping a team's improvement over time
  • Providing a basis for code reviews

Running an Analysis

To get a project into SonarQube it is necessary to run an analysis application, which reads a sonar-project.properties file in the root of the project.
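
As a minimal sketch, a sonar-project.properties file needs little more than a key, a name, a version and the source location (the key and name below are hypothetical placeholders):

# Hypothetical minimal sonar-project.properties
sonar.projectKey=mycompany:myproject
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=.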

There are several ways to integrate this analysis runner into your CI process; more details are here: http://docs.sonarqube.org/display/SONAR/Analyzing+Source+Code

If you are lucky enough to be using TFS, it can be integrated into MSBuild and the TFS flow: http://docs.sonarqube.org/display/SONAR/Analyzing+with+SonarQube+Scanner+for+MSBuild
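
For reference, the MSBuild scanner wraps the build in a begin and an end step, roughly like this (the project key, name, version and solution name are placeholders):

MSBuild.SonarQube.Runner.exe begin /k:"myproject-key" /n:"My Project" /v:"1.0"
MSBuild.exe MySolution.sln /t:Rebuild
MSBuild.SonarQube.Runner.exe end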

I am currently working on multiple projects across many different CI processes, so I have set up a dedicated SonarQube server in Azure to bring all the projects together.

For the analyser I used the command line runner: http://docs.sonarqube.org/display/SONAR/Analyzing+with+SonarQube+Scanner

I wrote some bespoke PowerShell to do the following (sketched below):

  1. Read a config file of projects and their repository URLs
  2. Check if it is the first run
  3. Do a pull for updates
  4. If there are any changes, run the SonarQube analysis
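
A minimal sketch of that loop, assuming a hypothetical one-project-per-line "name,repositoryUrl" config format (this is not the actual SonarCaller script, which is linked below):

# Sketch of the daily analysis loop; config format and paths are hypothetical
$configPath = "C:\SonarCaller\projects.config"
$workRoot = "C:\SonarCaller\repos"

ForEach ($line in Get-Content $configPath) {
    $name, $repoUrl = $line -split ','
    $repoPath = Join-Path $workRoot $name
    $changed = $false

    If (-not (Test-Path $repoPath)) {
        # First run: clone the repository and analyse it
        git clone $repoUrl $repoPath
        $changed = $true
    }
    Else {
        # Pull for updates and check whether anything new arrived
        Push-Location $repoPath
        $before = git rev-parse HEAD
        git pull
        $changed = (git rev-parse HEAD) -ne $before
        Pop-Location
    }

    If ($changed) {
        # Only analyse projects that have been active
        Push-Location $repoPath
        sonar-scanner   # the command line runner from the link above
        Pop-Location
    }
}
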
I fire this off daily using the task scheduler on the server, so only projects that have been active are analysed. This is far more efficient as I am analysing multiple smaller projects, and I can also see in the dashboard view when projects were last active.
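
Scheduling it is a one-liner with the Windows task scheduler; for example (task name and paths are hypothetical):

schtasks /Create /SC DAILY /ST 02:00 /TN "SonarCaller" /TR "powershell.exe -NoProfile -File C:\SonarCaller\SonarCaller.ps1"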

The main advantage of this script is that any developer can add a project to the config file and check it into source control; on the next run the updated config file is pulled and the project analysed.

I have put this script on GitHub as it may be a useful base for others: https://github.com/jongregory/SonarCaller

Conclusion

SonarQube is very quick and easy to set up, with no licence costs. There is a two-minute getting started guide (http://docs.sonarqube.org/display/SONAR/Get+Started+in+Two+Minutes) and a more detailed setup guide for Microsoft projects (https://github.com/SonarSource-VisualStudio/sonar-.net-documentation).

I would recommend it to anyone, and setting it up is now one of the first things I do on a new project to start recording its quality.

Visual Studio Website Projects - Restore References without a build

I recently had to deal with a multiple-missing-references issue on the CI build server for a Visual Studio website project.

Building the website was not an option: it was a CMS project, the build took over ten minutes, and it was not possible to convert it to a web application either.

TeamCity was being used to build the class library projects, and the NuGet restore step was skipping the website project from the solution file because it was called local.<websitename> in the *.sln but on the CI server it was set up as dev.<websitename>.

I needed a generic process where new NuGet packages and class library projects could be added without having to make large changes to the established CI process and project, and I was restricted by the limitations of the legacy CMS.

The solution to the two reference scenarios was quite simple in the end.

Scenario 1 – Adding a reference to a project in the Visual Studio solution

  1. Add the reference to the website as usual in Visual Studio by right clicking on the project
  2. In the class library project that is being referenced, add a custom build step; this will push the dll to the website \bin folder for each build on any machine:
    • Right click on the class library project
    • Build Events
    • Add the following line to the 'Post-build event command line'

xcopy /Y "$(ProjectDir)$(OutDir)$(TargetFileName)" "$(ProjectDir)..\CMS\Bin"

Using this technique means the project becomes responsible for putting its output dll into the website, so the website does not need to be compiled and nothing needs to be configured on the CI server. If the project is being built in Visual Studio, the /Y option forces an overwrite so the build does not fail.

Scenario 2 – Referencing NuGet Packages in a Visual Studio Website

Add the following PowerShell into a script in the website project root directory and commit it to source control.

# Locate the directory this script lives in (the website root)
$scriptpath = $MyInvocation.MyCommand.Path
$dir = Split-Path $scriptpath

# Find the *.dll.refresh files NuGet leaves in the website's Bin folder
$files = Get-ChildItem $dir\Bin\*.refresh

ForEach ($file in $files) {
    # Each refresh file contains the relative path to the real dll
    $content = Get-Content $file
    $filename = Split-Path $content -leaf
    Write-Host "Restoring File : " $dir\$content " To " $dir\bin\$filename
    Copy-Item $dir\$content $dir\bin\$filename -force
}

This script was then run in the CI process after the package restore; it finds the *.dll.refresh files and copies the dlls out of the packages folder and into the \bin of the website project.
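
As a usage sketch, the CI build step only needs to invoke the script from the website root (the file name here is hypothetical):

powershell.exe -NoProfile -ExecutionPolicy Bypass -File RestoreRefreshFiles.ps1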

When adding a NuGet package to the website project, the refresh file is added to source control, but not the dll file itself. The refresh file is a text file used by Visual Studio to manage the location of the files for refreshing on builds.
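
For illustration, a refresh file for a hypothetical Newtonsoft.Json package would be Bin\Newtonsoft.Json.dll.refresh, containing a single relative path back to the packages folder:

..\packages\Newtonsoft.Json.9.0.1\lib\net45\Newtonsoft.Json.dll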

This approach was generic and maintenance-free, and saved having to build the website project unnecessarily.

Passing TOGAF 9.1 Certification

After a four-day training course with http://enterprisearchitects.com/ and then two weeks of hard revision, I am very relieved to have passed both the TOGAF 9.1 Level 1 and Level 2 certification exams. This post is about why I decided to take the TOGAF exams, along with some resources and techniques I found invaluable.

Why Study for TOGAF 9.1 Certification

Having previously been a Senior Developer and then moved into a Technical Architect role, I am still close to the code. I was very keen to improve my architecture modelling skills with a framework certification.

I looked at three certifications: TOGAF, Zachman and the British Computer Society. TOGAF by The Open Group stood out to me as a widely adopted architectural framework with a worldwide footprint. The BCS certification also recognises TOGAF certification, so I have the option to do one exam with the BCS to get Practitioner-level certification.

TOGAF is an Enterprise Architecture framework, so it has helped me understand how to work at higher levels of abstraction and modelling, and provides foundation skills for progressing along the architecture career path.

I could see three main components of TOGAF having an immediate benefit in my current role.

The Architecture Development Method - a set of guidelines and techniques for architecture development

The Architecture Capability Framework - reference materials for establishing an architecture function in an enterprise

The Architecture Content Framework - a location and classification mechanism for architectural output

Useful Resources

I found the following invaluable in passing the TOGAF certification exams:

  1. Attending a revision course; the Enterprise Architects TOGAF 9.1 Level 1 & 2 course was really good, but there was no exam at the end.
  2. Reading the study guides by Rachel Harrison; the Foundation Study Guide and Part 2 Study Guide are very good.
  3. Downloading the practice exams and reference cards from The Open Group Publications study materials section, and doing them over and over again.
  4. I was advised not to buy any practice exams other than The Open Group ones, and I found I didn't need to.
  5. I did try the free questions on The Open Arch but found them significantly harder and confidence-shattering.
  6. Reading this Exam Study Guide by Nik Ansell; a really useful tip for navigating the Level 2 exams was: "The best technique for answering a long and sometimes overwhelming question like this, is to use the paper/scratch pad the testing centre provides. Write each answer down, then mark off which concern is addressed by each answer. The one that addresses all the concerns is obviously the right answer."

The Result

After all the hard work and travel, I was very pleased to have passed with an overall mark of 81%, largely due to the course and the tips above.

Be prepared to travel to do the exam if you are not near a major city. I am based in Nottingham, and the only place I could get into an exam in a reasonable time was London, which meant two trips: one for the course and then a day of travel for the exams.

Welcome to my blog

Welcome to my blog! After many years consuming blogged information and finding it invaluable in my professional career, I felt it was about time I blogged myself.

I have worked in IT for 15 years and recently moved from a development role to a technical architecture role. During my career I have been fortunate enough to work on mainframes, Oracle and Unix, and for the last seven years on .NET: web, Windows and server-side applications.

This blog will hopefully be a mix of coding challenges, useful resources and my experiences trying to develop and improve my architecture skills.