Puppet Labs’ second London-based camp of 2015 was held last week at Cavendish Square. It was an opportunity for Puppet Labs to demonstrate Puppet to new and existing users and to show off some new features, such as Application Orchestration, whilst also giving Puppet users the chance to show how they are using Puppet and to extend their network of fellow users, Puppet Labs employees and partners.
The day had three key themes: the first, as you would expect, was for Puppet Labs to show new features and demonstrate how they are used; the second was their quest to automate anything connected to a network; and the third was managing the development lifecycle of Puppet code.
The day kicked off with a keynote from Ryan Coleman, a Puppet Project Manager, who gave an overview of Puppet, the thinking behind it and its core features, such as the node lifecycle, Puppet Forge, PuppetDB, MCollective and r10k. His keynote also touched on new and improved features, such as support for bare-metal provisioning, Docker provisioning and automation, the new, more resilient and monitorable JVM-based Puppet Server, and the node graph view in the console.
Another new feature that I was looking forward to was Application Orchestration. I work on a project that has built its own application deployment tool around Puppet, as it seems other Puppet users have too, so I was excited to see how Puppet Labs’ implementation would work. The keynote’s coverage of App Orchestration was very high level, so it was good to see that they had picked up on people’s interest and organised a live demo for later in the day.
The App Orchestration demo really helped to show how Puppet’s deployment mechanism works and how it stays true to the tool’s origins by relying on the state and relationships of resources, only now those relationships can span different machines. The demonstration deployed an application similar to the Puppet Forge, showing how Puppet runs on various machines and waits for actions on others to complete before continuing. It also showed how App Orchestration can be used for applications that need to scale, which is increasingly important in a cloud-based computing environment.
App Orchestration isn’t finished yet, either; still to come are features that will aid zero-downtime deployments, dynamic node hashes to let you scale dynamically, and improved node and resource failure management.
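To make the cross-machine relationships described above more concrete, here is a rough sketch of what an orchestrated application can look like in the Puppet language introduced with this feature. The application and component names (`myapp`, `myapp::db`, `myapp::web`) and node names are purely illustrative, not from the demo itself:

```puppet
# Hypothetical two-tier application using Application Orchestration.
# The database component exports an Sql capability resource; the web
# component consumes it, so the web tier is not configured until the
# database tier has converged.
application myapp (
  String $db_user = 'myapp',
) {
  myapp::db { $name:
    user   => $db_user,
    export => Sql[$name],   # produces the Sql capability
  }
  myapp::web { $name:
    consume => Sql[$name],  # waits on the database component
  }
}

# The site block maps application components onto real nodes.
site {
  myapp { 'demo':
    nodes => {
      Node['db.example.com']  => Myapp::Db['demo'],
      Node['web.example.com'] => Myapp::Web['demo'],
    },
  }
}
```

The key idea is the one the demo highlighted: ordering between machines is expressed declaratively through capability resources rather than scripted imperatively.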
Over the course of the day we were reminded of the number and variety of platforms now supported by Puppet, including all the major Linux distributions, a range of Unix platforms such as Solaris, HP-UX and OS X, Windows, and bare metal (using Razor, in release 2015.2). This reach was also demonstrated by Matt Peterson from Cumulus Networks, who are using Puppet to automate network devices.
Another interesting presentation came from Mark Boddington of Brocade, who demonstrated how he has managed to produce an easily maintainable Puppet module for their API-based services. It works by ‘walking’ the API and reproducing it as a module. It can even replicate the configuration of a machine into a manifest, so that you can quickly and easily use Puppet to maintain the configuration of existing Brocade instances. I can see this approach being extremely useful for other API-based applications.
Aside from learning about new Puppet features and seeing demos from the experts, one of the key takeaways for me was seeing how other people across the industry are using Puppet, and nothing showed this better than the guest speakers who presented how they are doing so. Many of the talks touched on how to control a development workflow for Puppet code: there is an increasing move in the industry to treat infrastructure as code, and, at a slower pace, to introduce, manage and scale a development workflow around that code.
John Faultly, a Unix Engineer at Fidelity, demonstrated how Fidelity’s first iteration of a workflow meant that it could take days or weeks for new code to reach production, even though continuous integration tests could be completed in less than 45 minutes! By refactoring this workflow they managed to radically shorten the time it takes to deploy fixes, whilst also stabilising releases. A combination of linting, code reviews, spec testing and deployment to fresh test server instances helps ensure there is a lot of trust in the code long before it finds its way into production. Interestingly, the use of linting, spec tests and catalog compilation means that 80–90% of errors are caught early!
Another interesting talk came from Iain Adams, IT Build Manager at LV=, who spoke about how he developed a SonarQube plugin for monitoring Puppet code. It includes features for identifying duplicated code, areas of complexity and syntax errors. Still to come are a unit-testing metric and metrics for custom Puppet modules. At the moment it only supports Puppet 3.8. Selfishly, as I’d like to upgrade the version of Puppet used on my current project, I’d like to see it cover a broader range of versions: that way it could monitor your current code quality and technical debt, but it could also serve as a good indicator of the technical debt you’d inherit if you were to upgrade versions.
I’d definitely recommend attending a Puppet Camp if you get the chance: you’ll see first-hand how Puppet is used and hopefully find some inspiration for how best to use it in your own environment. You’ll also get to see the new features in action and network with the experts, which is always an invaluable experience.
Nathan McLean, DevOps Consultant