DevOps Dissection – Source Sanctuary


Welcome back to the dissection lab! This week we travel into the realm of source control, taste the fruits of repository branching and sip a cup of wisdom. If you're developing software you should be using some sort of version control system, otherwise known as source code management.

Source Code Management

SCM allows easy collaboration on a project: a whole group of developers can work together towards a common goal. Versioning records every change to the source code, facilitating easy reverts and guarding against hard drive failure. There are two main varieties of SCM: client-server and distributed.

The client-server model uses a server as the single repository for the source code. A developer synchronises with that repository in order to make changes: simple pulling and pushing. The distributed model may still have a central server, but every developer also keeps a full local copy of the repository on their own machine. Subversion and Git are examples of each model, and each has certain advantages and disadvantages.


SVN follows the client-server model. It is very easy to learn: at a basic level all you need to do is check out the repository, make a change and commit it. A team of developers can pick it up very quickly with just three commands, and anything more advanced (branching, say) can be incorporated on the fly. Unfortunately, due to the centralised nature of SVN it requires network access, so if you're on the move without an internet connection you cannot commit any changes. On the plus side, newcomers can pick up SVN with a really good GUI, TortoiseSVN.
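Those three commands in full, as a sketch. A real team would check out from a server URL; here a throwaway local repository created with `svnadmin` stands in for the central server:

```shell
# Throwaway local repository; a real team would use a server URL instead.
base=$(mktemp -d)
svnadmin create "$base/repo"

# 1. Check out a working copy.
svn checkout -q "file://$base/repo" "$base/wc"
cd "$base/wc"

# 2. Make a change.
echo "hello" > hello.txt
svn add -q hello.txt

# 3. Commit it back to the central repository.
svn commit -q -m "First commit"
```

Note that the commit step talks to the repository, which is exactly why SVN needs network access when the repository lives on a remote server.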


Git is the absolute boss when it comes to SCM. The commands are a little more in-depth, there are more of them than in SVN, and the error messages can be quite unfriendly, but it allows development anywhere. If you had a server at home and a laptop on the road you could keep working, because you'd have a local copy of the repository; when you regain network access you push your local changes to the server for the entire team. Additionally, every development machine becomes a local backup of the repository, protecting against server failure. Git is generally faster than SVN too: it was built for the Linux kernel and easily scales to hundreds of developers, although since you clone the entire repository history, the initial download can be long. It's a little unfriendly and the GUIs are awful; to use Git you've just got to take the command-line plunge (the command line is better anyway).
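A quick sketch of that offline workflow (the repository and identity here are throwaway demo values): everything below runs with no network at all, because the full repository lives on your machine.

```shell
# A full local repository: committing and viewing history need no network.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # identity just for this demo
git config user.name "Dev"

echo "hello" > README.md
git add README.md
git commit -qm "First commit, made entirely offline"

git log --oneline   # history is available locally, no server needed

# Later, with network access restored, you'd share it (remote is hypothetical):
# git push origin master
```

Only the final push needs connectivity; everything else happens against your local copy.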

Branching and Merging Strategy

No matter which SCM tool you go for, it is best to follow some standard practices and properly manage the repository. Disasters can occur when developers are working on the same files concurrently; branching can be utilised to counter this. Branches are copies of the code within the repository, effectively duplicates which allow work to be separated out. The general idea is to preserve a master branch on the repository: this branch has passed integration tests and can be passed on to release. All development happens separately from this branch, possibly across many branches depending on the nature of the coding.

For instance, take a master branch at version 1.0.0. A new feature for the product will have a feature branch created and designated 1.1.0, while a hotfix to the master may be designated 1.0.1. Once a branch has been completed and tested it is merged into the master, re-versioning the master accordingly. If the above hotfix is incorporated before the feature is completed, the new master is merged into the feature branch to keep it up to date. An illustration is perhaps the best way to explain:
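The same flow can be sketched with Git commands (a throwaway local repository; branch names and version tags follow the example above):

```shell
# Throwaway local repo demonstrating the feature/hotfix branching flow.
work=$(mktemp -d)
cd "$work"
git init -q
git symbolic-ref HEAD refs/heads/master     # ensure the initial branch is 'master'
git config user.email "dev@example.com"
git config user.name "Dev"

printf 'v1.0.0\n' > version
git add version && git commit -qm "Release 1.0.0"
git tag 1.0.0

git checkout -qb feature/1.1.0              # feature branch for 1.1.0
echo "new feature" > feature.txt
git add feature.txt && git commit -qm "Add feature"

git checkout -q master                      # meanwhile, a hotfix lands on master
printf 'v1.0.1\n' > version
git add version && git commit -qm "Hotfix 1.0.1"
git tag 1.0.1

git checkout -q feature/1.1.0               # keep the feature up to date with master
git merge -q -m "Merge hotfix 1.0.1 into feature" master

git checkout -q master                      # feature done and tested: merge it back
git merge -q feature/1.1.0
git tag 1.1.0                               # master is re-versioned as 1.1.0
```

Because the feature branch absorbed the hotfix before the final merge, the last merge into master is conflict-free.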


This is quite rudimentary, and it can get more complicated if you have more feature branches in flight, but as a starting platform it's quite nice. Ideally, for long-term development the features and hotfixes should be periodically merged with the master anyway (at the end of sprints, say), which surfaces conflicts early. This removes some of the pain of parallel development and enables easy tracking of new features and hotfixes.

Thanks for reading, tune in next time for a breakdown of Issue Tracking with JIRA!


DevOps Dissection – Project Pursuit


Welcome back to the dissection lab! This week we’ll be taking a look at issue and project management, primarily with the tool JIRA, developed by Atlassian.

Issue Tracking and Project Management

When developing software you need a clearly defined workflow. Who is working on which feature? How much technical debt do you have? Which business needs are of highest priority? To allow maximum collaboration it is best to centralise information, ideally into a single resource the entire team uses to keep track of the state of the project. The team I am part of uses JIRA: it lets us see the amount of work a project requires and how all of our different roles come together to achieve our goal.



Atlassian's JIRA has been around since 2002 and allows an entire team to work from a single source. Each team member has a profile and can create work issues or be assigned jobs to do. Tools like this bring a lot of transparency and traceability. The main JIRA dashboard gives you an overview of the JIRA instance (e.g. your assigned issues) and an activity feed displaying the latest changes people have been making. Real-time reporting is fantastic.


Say a company wants to design a new piece of software; using JIRA they'd create a new project. When a project is created it is assigned an issue tag: a series of letters which is then prefixed to every issue within that project, allowing issues to be easily referenced across the platform. For instance, a project called 'Demo Test' may have an issue tag of DT; an issue within that project will be assigned the issue key DT-1, another DT-2 and so on. If two issues are related to each other you can create dependency links between their issue keys, or simply reference one issue from another's ticket. In order to manage projects they are broken down into a number of components, starting with epics.
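The numbering scheme itself is simple; as a toy sketch (not JIRA's actual implementation), a per-project counter is all it takes:

```shell
# Toy issue-key generator mimicking JIRA's DT-1, DT-2, ... scheme.
counter=0
next_issue_key() {            # $1 = project tag, e.g. DT
  counter=$((counter + 1))
  echo "$1-$counter"
}

next_issue_key DT    # prints DT-1
next_issue_key DT    # prints DT-2
```

The value of the scheme is that "DT-2" is a stable, human-readable handle you can drop into any comment, commit message or dependency link.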


An epic is a particular objective with such a level of complexity that it will require many tasks to be completed over a substantial amount of time. A general example could be automating testing or developing a web portal. Long, strenuous undertakings. To make managing these easier they're broken down into user stories.

User Stories

Each story has a description stating a number of scenarios the story must fulfil; an acceptance criteria list dictating everything that must be completed and verified for the story to be considered done; and a general summary of the problem in a format like: 'As a business owner I want to add the ability to take credit card payments to the website, so that the customer has an alternative method of payment.' This concisely states who, what and why. Normally a slightly more in-depth description follows. Here is an example of a user story I have worked on:

[Screenshot: an example user story]

This particular story is part of an 'automation' epic and is assigned to the current sprint. Alongside the description, the story is assigned a priority and a reporter; in this case the reporter is my boss.

As a project will end up with potentially hundreds of user stories, sprints are used to bundle stories into batches of work. A typical sprint lasts two or four weeks, and as a team we decide which stories should be part of the upcoming sprint. The stories to choose from sit in a backlog, like so:

[Screenshot: the project backlog]

We size up each story by who will be required and the amount of time needed. This way we try to keep a constant pace from sprint to sprint and distribute work fairly. We can also assign each story a time estimate, which makes it easier to refine stories down and get the fit right for a sprint.

To break stories down into manageable chunks we use sub-tasks.


In a similar vein, each sub-task has a description which explains what the task is, why it is necessary and how to complete it. Every time progress is made on a task it is updated, so other team members can go straight to JIRA to check progress.

[Screenshot: sub-tasks on a story]

Team Monitoring

From all this added organisation it becomes very easy to glean statistics about a team, such as the number of user stories they take on during a sprint and the number they complete. This can indicate whether a team is committing to too much or not taking on enough. JIRA comes packed with tools which can analyse sprints and produce burn-down charts (the amount of work remaining in a sprint), velocity charts (the amount of work completed per sprint), cumulative flow diagrams (exact issue statuses across a time range) and sprint reports (an issue list). These are great in sprint retrospectives, where teams discuss what worked well and what didn't. This all complements the Agile way of working: try things out and continually improve.

JIRA goes even deeper than this, but for now I hope this is enough to sink your teeth into. The best way to become familiar is to head over to Atlassian, grab a free trial of JIRA (Cloud for a quick play, Server if you want to tool around setting it up yourself) and mess around.

Thanks for reading, and please join me next time for some Continuous Integration!

Microservices in the workplace


Hi, my name is Gareth Andrews and I am a Consultant at QA Consulting. I am currently deployed on client site as a developer, and an occasional Scrum Master when my services are required. Alongside this I am also training for a qualification in Information Security Management Principles. My current role as a developer has an emphasis on the production of Microservices, which brings me to the topic of this blog.

So what are these Microservices, and why are companies, which can be gigantic entities with hundreds of employees working on software, looking to include them in their systems?

“Microservices” is a term a lot of companies use, but the description that is widely accepted is ‘where complex applications are comprised of multiple independent processes’.

The typical downfall for many companies is that they keep expanding their systems, offering more and more without considering just what would happen if one component fell down. Imagine a large tower with many floors: what would happen if you suddenly decided you wanted to move the bedroom from the 1st floor to the 5th? With Microservices, your architecture allows you to move, replace and update as you go along, with support for both new and old features, enabling you to create a stable and adaptable framework.

Companies like Netflix and Amazon use Microservices in order to help scale up their products, with new features and applications simply requiring you to add rather than adjust.

Microservices need to communicate with one another, and how they do so varies depending on how you want the systems to work. What better way to communicate than with HTTP web calls, the same way you would go about loading up YouTube or Facebook after a long day at work?

RESTful web services are written with the internet at heart. These services allow you to navigate to an address and get responses, much like when you search for that ticket website in a rush. Exposing Microservices through calls to addresses means they are all accessible to one another, with none having to know anything about the others except their web addresses.
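A tiny sketch of that idea: here Python's built-in HTTP server stands in for one service, serving a static health document, and another component consumes it with a plain HTTP call. The port and path are made up for the demo; the point is that the consumer knows only an address.

```shell
# 'Service A': Python's built-in HTTP server standing in for a microservice.
dir=$(mktemp -d)
printf '{"status":"ok"}' > "$dir/health.json"
python3 -m http.server 8123 --directory "$dir" >/dev/null 2>&1 &
server_pid=$!
sleep 1   # give the server a moment to start

# 'Service B' consumes it knowing only the address, nothing about internals.
response=$(curl -s http://localhost:8123/health.json)
echo "$response"

kill "$server_pid"
```

Swap the implementation behind that address for anything else, and the consumer is none the wiser, which is exactly the decoupling Microservices are after.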

I think this quote by Richard Branson is a great place to finish: 'Complexity is your enemy. Any fool can make something complicated. It is hard to make something simple.'

Going faster with ChatOps

It has become widely recognised that adopting a DevOps culture leads to more stable, regular releases and increased collaboration between all those involved in the software delivery process. However, there is often still much room for improvement in increasing collaboration, closing feedback loops and reducing friction in processes.

What is ChatOps?

ChatOps is about taking the age-old and natural process of people collaborating and chatting, making this available for everyone to partake in, and then injecting the tools and automation to achieve tasks into the chat.

The great thing for teams who have adopted DevOps practices is that they will likely already have a high level of automation in many of their processes, such as testing, deploying, provisioning and resolving infrastructure issues. Integrating these into the chat means they are driven in a collaborative, visible manner, with feedback given to all.

In short, ChatOps is about aggregating information and triggers, making discussions open, and making the actions taken democratic and visible.

How is it done?

The most common pattern is to use a bot such as GitHub’s Hubot, Err, or Lita. These are integrated with real-time messaging tools like Slack, Atlassian’s HipChat or IRC. Hubot adaptors are available for almost any widely used chat tool.

In Hubot’s case, scripts are written which control the bot’s behaviour, and how it responds to messages in the chat. This allows actions to be triggered as a result of users interacting with the bot in the chat room. For example “@bot deploy latest version of app to UAT” could be configured to deploy the latest version of the code to the UAT environment. The possibilities are endless; if it is not possible, or sensible, to implement the action within the Hubot scripts, they can be configured to make HTTP calls to external services, which can drive more complicated actions. For example integrations can be made with build servers, CI pipelines and configuration management tools.
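Hubot scripts themselves are written in JavaScript/CoffeeScript, but the underlying pattern of matching a message and triggering a handler can be sketched in a few lines of shell (the command strings and responses here are invented for illustration):

```shell
# Toy dispatcher: map a chat message to an action, the way a bot script
# matches incoming messages and runs the corresponding handler.
handle_message() {
  case "$1" in
    "deploy latest version of app to UAT")
      echo "Deploying latest app build to UAT..." ;;   # a real bot would call the CI/CD API here
    "status of UAT")
      echo "UAT is up." ;;
    *)
      echo "Sorry, I don't know that command." ;;
  esac
}

handle_message "deploy latest version of app to UAT"
```

A real Hubot script does the same thing with a regex listener, and its handler can shell out or make HTTP calls to drive the heavier actions.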

Hubot can also be configured to provide HTTP endpoints for external services to integrate with – Hubot can therefore keep the chat updated with external events. This can facilitate a great workflow where the bot alerts the chat to issues received from monitoring of the application or infrastructure; users discuss and undertake actions in the chat to fix the issue; and positive feedback is received in the chat when the issue is fixed.

Why should you do it?

It will open up processes and practices for all to see, empowering your team members and spreading the knowledge.

  • It reduces silos, tightens feedback loops and encourages collaboration.
  • It encourages further automation of the underlying tasks.