QA Consulting sponsors the DevOps Summit

DevOps Summit_2016-01.png

QA Consulting are excited to announce that we are a Bronze sponsor of the 2016 DevOps Summit. This year's event takes place on the 5th of July at the Hilton Tower Bridge Hotel in London.

The DevOps Summit is a great opportunity for attendees to hear from the best DevOps practitioners, learn what works and what doesn't, and get tips to accelerate their organisation's IT transformation whilst avoiding the pitfalls suffered by others.

This year, QA Consulting will be joined by representatives from QA Learning who are happy to discuss how they can train your workforce to realise the benefits of ‘better, stronger, faster’ software delivery.

If you are attending the DevOps Summit and are interested in finding out how our services can help your organisation on its DevOps journey, stop by our stand and speak with one of our team, or send an email to Consulting@QA.com to organise a meeting.

 

 

QA Consulting sponsors Cloud & DevOps World 2016

Cloud & DevOps World_2016-01

We are pleased to announce that QA Consulting are Premium Exhibitors at Cloud & DevOps World 2016.

This year's event takes place at Kensington Olympia, London, bringing together the industry's leading technologists and innovators to discuss the future of Cloud Computing.

This year's Cloud & DevOps event will focus on the strategies, business models and technologies which can activate the Cloud and drive new opportunities for your organisation. Cloud & DevOps World provides you with the opportunity to drive your business into the digital economy and realise the potential of Cloud Computing.

If you are attending Cloud & DevOps World and are interested in finding out how our services can help your organisation with its Cloud & DevOps projects, stop by stand G10 and speak with one of our representatives, or alternatively email us at Consulting@QA.com to organise a meeting.

DevOps Dissection – Source Sanctuary

Ed_Tech_Bytes

Welcome back to the dissection lab! This week let us travel into the realm of Source, taste the fruits of repository branching and have a cup of wisdom. If you’re ever developing software you must use some sort of version control system, otherwise known as source code management.

Source Code Management

SCM is a means of allowing easy collaboration on a project: a whole group of developers can work together towards a common goal. Versioning protects changes to the source code, facilitating easy reverts, and guards against hard drive failure. There are two varieties of SCM: client-server and distributed.

The client-server model uses a server as a single repository for source code. A developer synchronises with that single repository in order to make changes: very simple pulling and pushing. The distributed model typically still has a central server, but every developer also holds a local copy of the repository on their own hard drive. Examples of each are Subversion and Git respectively, each with certain advantages and disadvantages.

Subversion

SVN follows the client-server model. It is very easy to learn, as at a basic level all you need to do is check out the repository, make a change and commit it. A team of developers can pick it up very quickly with just three commands, and anything more advanced (branching) can be incorporated on the fly. Unfortunately, due to the centralised nature of SVN it requires network access, so if you're on the move without an internet connection you cannot commit any changes. On the plus side, newcomers can pick up SVN with a really good GUI, TortoiseSVN.
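To give a feel for how little there is to learn, here is a minimal sketch of that loop driven from Python (the repository URL and file names are illustrative, and an SVN client is assumed to be installed):

```python
import subprocess

def run(*cmd):
    """Run a command and raise an error if it fails."""
    subprocess.run(cmd, check=True)

# Check out the central repository (illustrative URL).
run("svn", "checkout", "https://svn.example.com/repo/trunk", "my-project")

# ...edit files under my-project/ ...

# Register any new files, then commit straight to the central server
# (this is the step that needs network access).
run("svn", "add", "my-project/new_module.py")
run("svn", "commit", "my-project", "-m", "Add new_module")
```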

Git

Git is the absolute boss when it comes to SCM. The commands are a little more in-depth and there are more of them than in SVN, with quite unfriendly error messages, but it allows development anywhere. If you had a server at home and a laptop on the road you would be able to continue working, as you'd have a local copy of the repository; when you regain network access you can push your local changes to the server for the entire team. Additionally, every development machine becomes a local backup of the repository, protecting against server failure. In general Git is faster than SVN anyway: it was built for the Linux kernel and can easily scale up to hundreds of developers, although as you're cloning the entire repository you could be waiting on a long download. It's a little unfriendly and the GUI is awful; to use Git you've just got to make the command-line plunge (the command line is just better anyway).
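For comparison, here is a sketch of the same day-to-day loop with Git (again with an illustrative remote URL): the commits land in the local copy of the repository, and only the final push needs a network connection.

```python
import subprocess

def git(*args, cwd="app"):
    """Run a git command inside the local working copy."""
    subprocess.run(("git",) + args, check=True, cwd=cwd)

# One-off, while online: clone the full repository history to your own disk.
subprocess.run(["git", "clone", "https://git.example.com/team/app.git"], check=True)

# Offline on the train: stage and commit against the local repository.
git("add", "feature.py")
git("commit", "-m", "Add feature while offline")

# Back online: publish the local commits to the shared server for the whole team.
git("push", "origin", "master")
```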

Branching and Merging Strategy

No matter what SCM tool you go for, it is best to follow some standard practices and properly manage the repository. Disasters can occur when developers are working on the same files concurrently; to counter this, branching can be utilised. Branches are copies of the repository, within the repository: effectively a duplicate which allows work to be separated out. The generic idea is to preserve a master branch on the repository; this branch has passed integration tests and can be passed on to release. All development happens separately from this branch and can occur across many branches, depending on the nature of the coding.

For instance, take a master branch at version 1.0.0. A new feature for the product will have a feature branch created and designated 1.1.0, while a hotfix to the master may be designated 1.0.1. During development, once a branch has been completed and tested it is merged into the master, re-versioning the master as that release. If the above hotfix is incorporated before the feature is completed, then the new master is merged into the feature branch to keep it up to date. An illustration is perhaps the best way to go:

123.png

This is quite rudimentary and it can get more complicated if you have more feature branches in flight, but as a starting platform it's quite nice. Ideally, for long-term development, features and hotfixes should be periodically merged into the master anyway (at the end of sprints), which surfaces conflicts early. This removes some of the pain of parallel development and enables easy tracking of new features and hotfixes.
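As a concrete sketch of the flow above (assuming Git, with illustrative branch names that mirror the 1.0.0 / 1.1.0 / 1.0.1 example):

```python
import subprocess

def git(*args):
    subprocess.run(("git",) + args, check=True)

# Master sits at 1.0.0; cut a feature branch (targeting 1.1.0) and a hotfix branch (1.0.1).
git("checkout", "-b", "feature/1.1.0", "master")
git("checkout", "-b", "hotfix/1.0.1", "master")

# The hotfix is completed and tested first, so merge it into master and tag the release.
git("checkout", "master")
git("merge", "--no-ff", "hotfix/1.0.1")
git("tag", "1.0.1")

# Bring the new master into the long-lived feature branch to surface conflicts early.
git("checkout", "feature/1.1.0")
git("merge", "master")
```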

Thanks for reading, tune in next time for a breakdown of Issue Tracking with JIRA!

 

DevOps Dissection – Project Pursuit

Ed_Tech_Bytes

Welcome back to the dissection lab! This week we’ll be taking a look at issue and project management, primarily with the tool JIRA, developed by Atlassian.

Issue Tracking and Project Management

When developing software you need a clearly defined workflow: who is working on what feature, how much technical debt do you have, and which business needs are of highest priority? To allow maximum collaboration it is best to centralise information, ideally into a single resource that the entire team uses to keep track of the state of the project. The team I am part of uses JIRA; it allows us to see the amount of work that a project requires and how all of our different roles come together to achieve our goal.

JIRA

Jira 

Atlassian's JIRA has been around since 2002 and allows an entire team to work from a single source. Each team member has a profile, and they can create work issues or be assigned jobs to do. Tools like this bring a lot of transparency and traceability. On the main JIRA dashboard you have an overview of the JIRA instance (e.g. assigned issues) and an activity feed which displays the latest changes that people have been making. Real-time reporting is fantastic.

Projects

Say that a company wants to design a new piece of software; using JIRA they'd create a new project. When a project is created it is assigned an issue tag. This tag is a series of letters which is then attached to every issue within that project, allowing different issues to be easily referenced across the platform. For instance, a project called 'Demo Test' may have an issue tag of DT; an issue within that project will be assigned the issue key DT-1, another one DT-2, and so on. If two issues are related to each other you can create dependency links between their issue keys, or simply reference one issue on a different ticket. In order to manage projects they are broken down into a number of components, starting with epics.
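As an aside for the more technically minded, issues can also be created programmatically. Here is a minimal sketch against JIRA's REST API using the third-party requests library; the base URL, credentials and the 'Demo Test' / DT project are illustrative, and the exact endpoint and fields can differ between JIRA versions:

```python
import requests

JIRA_BASE = "https://jira.example.com"          # illustrative instance URL
AUTH = ("my.username", "my-password-or-token")  # illustrative credentials

# Create a new story in the 'Demo Test' project (issue tag DT);
# JIRA allocates the next issue key automatically, e.g. DT-3.
payload = {
    "fields": {
        "project": {"key": "DT"},
        "summary": "Take credit card payments on the website",
        "description": "As a business owner I want to take credit card payments...",
        "issuetype": {"name": "Story"},
    }
}

response = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload, auth=AUTH)
response.raise_for_status()
print("Created issue", response.json()["key"])
```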

Epics

An epic is a particular objective with such a level of complexity that it will require many tasks to be completed across a substantial amount of time. A general example could be to automate testing or to develop a web portal. Long, strenuous tasks. To make managing these easier they're broken down into User Stories.
Epics

User Stories

Each story has a description which states a number of scenarios the story must fulfil, an acceptance criteria list which dictates everything that must be completed and verified for the story to be considered done, and a general summary of the problem in a format like: 'As a business owner I want to add the ability to take credit card payments to the website, so that the customer has an alternate method of payment'. This succinctly says who, what and why. Normally a slightly more in-depth description will follow; here is an example of a user story I have worked on:

User stories

This particular story is part of an 'automation' epic and is assigned to the current sprint. Alongside this description, a priority and a reporter are assigned to the story; in this case the reporter is my boss.

As a project will end up with potentially hundreds of user stories, sprints are used to bunch stories into batches of work. A typical sprint will last two or four weeks, and as a team we decide which stories should be part of the upcoming sprint. The stories to choose from sit in a backlog like so:

User stories 2

We size up stories by who will be required and the amount of time needed. This way we try to keep a constant pace from sprint to sprint and distribute work fairly. We can also assign each story an estimated length of time, which makes it easier to refine stories down and get the fit right for a sprint.

To break stories down into manageable chunks we use sub-tasks.

Sub-Tasks

In a similar vein each sub-task has a description which explains what the task is, why it is necessary and how to complete it. Every time progress is made on the task it is updated so that other team members can go straight to JIRA to check the progress.

sub tasks

Team Monitoring

From all the added organisation it becomes very easy to glean statistics about a team from the number of user stories they take on during a sprint and the number that are completed. This can indicate if a team is committing to too much or not taking on enough. JIRA comes packed with tools which can analyse sprints and produce burn-down charts (the amount of work remaining in a sprint), velocity charts (the amount of value delivered per sprint), cumulative flow diagrams (exact issue statuses across a time range) and sprint reports (issue lists). These are great in post-sprint retrospectives, where teams can discuss what worked well and what didn't. This all complements the AGILE way of working: to try things out and continually improve.
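As a toy illustration of what a burn-down chart actually plots (the numbers below are made up, not from a real sprint), the remaining story points are recorded each day and compared against an ideal straight-line burn to zero:

```python
# Remaining story points at the end of each day of a 10-day sprint (illustrative data).
sprint_days = 10
committed_points = 40
remaining = [40, 38, 38, 33, 30, 26, 20, 14, 9, 3, 0]  # index 0 is the sprint start

for day, left in enumerate(remaining):
    ideal = committed_points * (1 - day / sprint_days)
    print(f"Day {day:2d}: remaining = {left:2d}, ideal = {ideal:5.1f}")
```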

JIRA goes even deeper than this but for now I hope this is enough to dig your teeth into. The best way of becoming familiar with this is to just head over to Atlassian (https://www.atlassian.com/), grab a free trial of JIRA (cloud for a quick play, server to tool around setting it up yourself) and mess around.

Thanks for reading and please join me next time for some Continuous Integration!

DevOps Dissection – Welcome to the Party!

Ed_Tech_Bytes

Hello and welcome to my DevOps Dissection! My name is Ed, I'm a first-year DevOps Analyst at QA Consulting and I am currently deployed at a specialist insurance firm within the FTSE 250 index. I'm here to pull apart and examine the reasonably new and growing field of DevOps.

DevOps?

The realm of Development Operations is quite new. It was born into our world during an AGILE conference in 2008 and started its teething during 2009 (i.e., it started screaming and clamouring for attention). Here in 2016 we now have a cynical child, questioning its surroundings and trying to grasp an understanding of how the world works. This cynicism is a force for good; IT practices must change. Traditionally, trying to get developers and operational IT staff to continually build, test, and frequently deliver small software changes has been like trying to perform open heart surgery with garden shears, jump leads and a car battery. You’re going to have blood everywhere from different IT teams fighting and some very angry stakeholders that wanted fast delivery, not a stagnant and dead business.

If DevOps were a celestial body you could consider it as an exciting new resource-rich planet orbiting the star of Information Technology. At its core we have a solid and defiant mass of cultural change, the surrounding mantle consists of rapid, continually flowing currents of communication and feedback, and last, the crust is a pleasant wrapping of technical implementation and know-how.

In theory this comes together to form a bridge across IT teams, aiding communication and collaboration.

Success is measured by rapid feedback mechanisms between developers, testers, management and infrastructure; fast delivery to different IT teams and the end user; and open integration, visibility and communication across every facet of development, testing, delivery and leadership. Often the means of achieving this success will involve incrementally adding automation and breaking cultural barriers between how different groups like to operate. It is an uphill march.

NB: There is a subtle difference between feedback and communication; feedback is more product-oriented (build failure, test failure) and what’s working, what’s not; communication is synchronisation between different people/teams working together.

Cultural Change

Perhaps the most challenging obstacle is trying to persuade individuals to adopt new methods and technologies which will facilitate continuous integration, continuous delivery and work practice changes – these individual topics will be covered in future.

Unfortunately human nature is a stickler for stability and consistency, or at least for what someone may believe is unwavering and offers security. This makes workflow advancements very difficult to implement, as everyone loves the status quo. There is plenty I have resisted myself, only to find that embracing new ideas and thoughts truly satisfies the human desire to search for knowledge. The foundation for DevOps lies in establishing a river of cultural change. In due time I imagine I'll be writing many more blog posts about my own successes and failures at invoking this change.

Communication and Feedback

Simply the lifeblood of a high-performing agile team. Siloed teams must be avoided as they lead to stagnant, toxic pools. Having two teams which are dependent on one another but operate separately is a recipe for resentment and malice. Ideally, different specialisations need to be glued together in order to develop T-shaped people (people who have a broad understanding of the various stages of the software life-cycle, while having a well-defined spike in a particular area). For instance, if a developer were paired with a tester they may distribute knowledge between themselves, leading to an understanding of each other's roles. This will be a much deeper understanding than if the development team and testing team were split apart and siloed. This sharing of knowledge allows communication to take place and, with a little technical assistance, creates an environment where feedback can be absorbed and challenged head on.

Technology

The wrapping to all of this lies in the toys which this blog will be focussing on for the next few weeks. There is a plethora of technologies which allow the right flows of information and bring people together. Beyond communication, we have all sorts of gizmos that allow for rapid, reliable, and repeatable build and deployment processes. The categories of these tools span source code management, continuous integration, deployment, configuration management, and tracking and monitoring, among others.

So here we are. This is my understanding of DevOps and the role it plays. We have 3 key components that combine into a formidable strategy for growth and improvement. Enforcing cultural change, opening up communication and using technology to help is the way forward for software development and promoting learning and evolution.

In my next blog I will be taking you on a journey through the different tools involved in the particular brand of DevOps I fulfil on site, beginning with Source Code Management!

Going faster with ChatOps

David_Wilcox_Tech_Bytes-05

It has become widely recognised that adopting a DevOps culture leads to more stable, regular releases and increases collaboration between all those involved in the software delivery process. However, there is often still much room for improvement in increasing collaboration, closing feedback loops, and reducing friction in processes.

What is ChatOps?

ChatOps is about taking the age-old and natural process of people collaborating and chatting, making this available for everyone to partake in, and then injecting the tools and automation to achieve tasks into the chat.

The great thing for teams who have adopted DevOps practices is that they will likely already have a high level of automation in many of their processes, such as testing, deploying, provisioning, and resolving infrastructure issues. Integrating these into the chat means they are driven in a collaborative, visible manner, with feedback given to all.

In short, ChatOps is about aggregating information and triggers, making the discussions open, and making the actions taken democratic and visible.

How is it done?

The most common pattern is to use a bot such as GitHub’s Hubot, Err, or Lita. These are integrated with real-time messaging tools like Slack, Atlassian’s HipChat or IRC. Hubot adaptors are available for almost any widely used chat tool.

In Hubot's case, scripts are written which control the bot's behaviour and how it responds to messages in the chat. This allows actions to be triggered as a result of users interacting with the bot in the chat room. For example, “@bot deploy latest version of app to UAT” could be configured to deploy the latest version of the code to the UAT environment. The possibilities are endless; if it is not possible, or sensible, to implement the action within the Hubot scripts, they can be configured to make HTTP calls to external services, which can drive more complicated actions. For example, integrations can be made with build servers, CI pipelines and configuration management tools.
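Hubot scripts themselves are usually written in CoffeeScript or JavaScript, but the pattern is easy to sketch in Python: watch the chat for a trigger phrase, hand the real work to an external service over HTTP, and post the result back into the room. Everything below (the bot phrase, the CI server URL and its /deploy endpoint) is illustrative rather than a real integration:

```python
import re
import requests

DEPLOY_PATTERN = re.compile(r"@bot deploy latest version of (\S+) to (\S+)", re.IGNORECASE)
CI_SERVER = "https://ci.example.com"  # illustrative build/deployment service

def handle_chat_message(message: str) -> str:
    """Turn a chat message into an action and return the reply to post in the room."""
    match = DEPLOY_PATTERN.search(message)
    if not match:
        return "Sorry, I didn't understand that."
    app, environment = match.groups()
    # Delegate the heavy lifting to an external service (CI pipeline, config management, ...).
    response = requests.post(f"{CI_SERVER}/deploy", json={"app": app, "env": environment})
    response.raise_for_status()
    return f"Deploying the latest version of {app} to {environment}..."

print(handle_chat_message("@bot deploy latest version of app to UAT"))
```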

Hubot can also be configured to provide HTTP endpoints for external services to integrate with – Hubot can therefore keep the chat updated with external events. This can facilitate a great workflow where the bot alerts the chat to issues received from monitoring of the application or infrastructure; users discuss and undertake actions in the chat to fix the issue; and positive feedback is received in the chat when the issue is fixed.
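The inbound direction can be sketched in the same spirit: a small HTTP endpoint that monitoring tools can POST alerts to, which the bot then relays into the chat room. The endpoint, port and chat-posting function below are illustrative stand-ins, not Hubot's actual API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def post_to_chat(text: str) -> None:
    """Stand-in for whatever posts a message into the team's chat room."""
    print(f"[chat] {text}")

class AlertWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        # External monitoring POSTs JSON alerts here; relay them into the chat.
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length) or b"{}")
        post_to_chat(f"ALERT: {alert.get('service', 'unknown')} - {alert.get('message', '')}")
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertWebhook).serve_forever()
```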

Why should you do it?

  • It will open up processes and practices for all to see, empowering your team members and spreading the knowledge.
  • It will reduce silos, tighten feedback loops and encourage collaboration.
  • It will encourage further automation of the underlying tasks.

Ricky Savjani excelling onsite

Ricky_Excelling_on_site-01

It’s always fantastic to receive such great feedback from our clients. This week we would like to congratulate Ricky Savjani on outstanding work on client site.

Ricky completed his degree in Computer Science at the University of the West of England (UWE Bristol) in 2011. He then joined the QA Academy in 2012, specialising in Oracle SOA and Java training, before starting on an internal project.

Ricky was deployed to his current client site in January 2014, where he began working with the DevOps technology stack, specifically Puppet and Linux. In the last year he has been concentrating on infrastructure-based work with networking, Linux and Puppet. Alongside this he has been helping to design environments and looking after boundary controls where firewalls and security devices are deployed.

Our client, a large government body, commented: “In context of the recent PRP1 issues, I wanted to provide highly positive feedback on Ricky. He was more than forthcoming in solving the issue and very collaborative throughout, alongside this he demonstrated technical depth and clearly ‘knows his stuff’ – he is always assertive in debugging and logical/dynamic in diagnosing the exact problem. All in all he is a great asset to the team and the programme”.

Well done Ricky on your accomplishments on site and keep up the good work!

If you are interested in a career with QA Consulting, check out our Academy website for more information on how you can build your future in tech.

Home Office wins award for ‘Best Use of Cloud Services’ at the 2015 Computing/BCS UK IT Industry Awards

CUKp8NtU8AEh2e3

This November QA Consulting joined the UK's leading technology companies and professionals at London's Battersea Park for the 2015 UK IT Industry Awards.

The awards focus on the contribution of individuals, projects, organisations and technologies that have excelled in the use, development and deployment of IT in the past 12 months. With over 1,300 guests in attendance, the awards bring together the industry’s leading players for the biggest night of the year.

The Home Office won the award for the Best Use of Cloud Services for their EBSA solution. This is a project that QA Consulting (incorporating NETbuilder Academy) have been involved in delivering since its inception two years ago. They were able to leverage QA Consulting's expertise in DevOps and partnership with Puppet Labs to design and build out this enterprise-grade platform.

QA Consulting would like to congratulate our customer, the Home Office, on their win at the 2015 Computing/BCS UK IT Industry Awards. It was great to see Jackie Keane and Neil Butler collect the award on the Home Office's behalf.

QA Consulting to host the first Puppet User Group in the North

puppet user group

This December QA Consulting will be hosting the first Puppet User Group in the UK outside of London.

The User Group will take place on the 2nd of December at our Academy in Salford Quays, Manchester, and will provide you with the perfect opportunity to network with the QA Consulting and Puppet Labs teams, as well as with many other Puppet users.

We're aiming for an informal event to gauge the technical community's interest in further events and what those might look like. There will be a short talk on what Puppet Labs are up to right now in the UK, a presentation from QA Consulting's DevOps Tech Lead on “Puppet at Scale: Managing 1000s of nodes and maintaining performance”, and a talk from PuppetConf regular Sam Bashton, Open Source Cloud Computing Expert at Bashton Ltd, who will be presenting “Puppet on Amazon Web Services”.

The event will start at 6.30 PM and you can expect to stay until around 8.30. We will have pizza, beers and some soft drinks for you all to relax with after a hard day's work.

We will be looking to spread the Puppet User Group across the North to areas such as Leeds in the not too distant future.

If you are interested in attending the Puppet User Group or are interested in any future Puppet User Groups in the North please follow this link to our meet up page.

QA Consulting attends the London Puppet Camp

Puppet_Labs

Puppet Labs' second London-based camp of 2015 was held last week at Cavendish Square. It was an opportunity for Puppet to demonstrate the product to new and existing users and show off some new features, such as Application Orchestration, whilst also giving Puppet users the chance to demonstrate how they are using Puppet and to extend their network with other Puppet users, Puppet employees and partners.

The day held three key themes: the first, as you would expect, was for Puppet to show new features and demonstrate how they are used; the second was their quest to be able to automate anything connected to a network; and the third was the management of the development lifecycle of Puppet code.

The day kicked off with a keynote from Ryan Coleman, a Puppet product manager, who gave an overview of Puppet, the thinking behind it and its core features, such as the node cycle, Puppet Forge, PuppetDB, MCollective and R10K. His keynote also touched on new and improved features, such as support for bare-metal provisioning, Docker provisioning and automation, the new more resilient and monitorable JVM-based Puppet server, and the node graph view in the console.

Another new feature that I was looking forward to was Application Orchestration. I work on a project that has built its own application deployment tool around Puppet, as it seems other Puppet users have too, so I was excited to see how Puppet's implementation would work. The keynote's coverage of Application Orchestration was very high level, so it was good to see that they had picked up on people's interest and organised a live demo for later in the day.

The Application Orchestration demo really helped to show how Puppet's deployment mechanism works and how it stays true to Puppet's origins by relying on the state and relationships of resources, except now we can do that between different machines. The demonstration showed the deployment of an application similar to the Puppet Forge, with Puppet running on various machines and waiting for actions on others to complete before continuing. It also showed how Application Orchestration can be used in applications that need to scale, which is very important in an increasingly cloud-based computing environment.

Application Orchestration isn't finished yet either; still to come are features that will aid zero-downtime deployments, dynamic node hashes to allow you to scale dynamically, and improved node and resource failure management.

Over the course of the day we were reminded about the number and variety of platforms now supported by Puppet, including all the major Linux distributions, a range of Unix distributions such as Solaris, HP-UX and OS X, Windows, and bare metal (using Razor in release 2015.2). This was also demonstrated by Matt Peterson from Cumulus Networks, who are using Puppet to automate network devices.

Another interesting presentation was from Mark Boddington of Brocade, who demonstrated how he has managed to produce an easily maintainable Puppet module for their API-based services. It works by ‘walking through’ the API and reproducing it as a module. It can even replicate the configuration of a machine into a manifest, so that you can quickly and easily use Puppet to maintain the configuration of existing Brocade instances. I can see this being extremely useful with other API-based applications.

Aside from learning about new Puppet features and seeing some demos from the experts, one of the key takeaways for me was seeing how other people across the industry are using Puppet, and nothing shows this better than the guest speakers who came and presented how they are doing so. Many of the talks touched on how to control a development workflow for Puppet. There is an increasing move in the industry to treat infrastructure as code, and at the same time (at a slower speed) to introduce, manage and scale a development workflow around that code – many of the speakers touched on how they are managing their workflow.

John Faultly, a Unix Engineer at Fidelity, demonstrated how Fidelity's first iteration of a workflow meant that it could take days or weeks for new code to reach production, even though continuous integration tests could be completed in less than 45 minutes! However, by refactoring this workflow they managed to radically shorten the time it takes to deploy fixes, whilst also being able to stabilise releases. A combination of lints, code reviews, spec testing and deployment to fresh test server instances helps ensure that there is a lot of trust in the code long before it finds its way into production. Interestingly, the use of lints, spec tests and catalog compilation means that 80-90% of errors are caught early!
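The details of Fidelity's pipeline weren't shared, but the early checks mentioned are all tools you can run yourself. A rough sketch of such a pre-merge gate, assuming a conventional Puppet module layout with puppet, puppet-lint and an rspec-puppet suite already set up, might chain them like this:

```python
import subprocess
import sys

# Illustrative pre-merge checks for a Puppet module: syntax, style, then spec tests.
CHECKS = [
    ["puppet", "parser", "validate", "manifests/init.pp"],  # cheap syntax check
    ["puppet-lint", "manifests/"],                          # style and common mistakes
    ["bundle", "exec", "rake", "spec"],                     # rspec-puppet unit tests
]

for check in CHECKS:
    print("Running:", " ".join(check))
    if subprocess.run(check).returncode != 0:
        sys.exit(f"Check failed: {' '.join(check)}")

print("All checks passed - ready for code review and a test-server deployment.")
```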

Another interesting talk came from Iain Adams, IT Build Manager at LV=, who spoke about how he developed a SonarQube plugin for monitoring Puppet code. It includes features for identifying duplicated code, areas of complexity and syntax errors. Still to come are a unit testing metric and metrics for custom Puppet modules. At the moment it only supports Puppet 3.8. Selfishly, as I'd like to be able to upgrade the version of Puppet being used on my current project, I'd like to see it cover a broader range of versions; this way it could monitor your current code quality and technical debt, but it could also be used as a good indicator of the technical debt you'd inherit if you were to upgrade versions.

I'd definitely recommend attending a Puppet Camp if you get the chance; you'll be able to see first-hand how Puppet is used and hopefully get some inspiration on how best to use Puppet in your environment. You'll also get to see the new features in action and network with the experts, which is always an invaluable experience.

Nathan McLean, DevOps Consultant