Dec 29 2011

Installing Vagrant, on Ubuntu Natty

(Warning: some Ubuntu ranting ahead)

  1. apt-get install virtualbox-ose
  2. apt-get install rubygems
  3. gem install vagrant

That's what I assumed it would take to install Vagrant on a spare Ubuntu (Natty) laptop.

Well, it didn't. After that I was greeted with some weirdness.

  1. $vagrant
  2. vagrant: command not found...

Yet gem list --local showed the vagrant gem installed.

  1. $ruby
  2. ruby: command not found

I looked twice, checked again, and indeed it seems you can install rubygems on Natty with no Ruby installed. #dazedandconfused

So, unlike other distros, Ubuntu doesn't add the RubyGems binary path to its default PATH.
After adding that to my .bashrc things started working better.
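
The fix itself is a one-liner. A sketch, assuming the gem binary path Natty used at the time; check `gem environment` on your own system:

```shell
# Append the RubyGems binary directory to the PATH via ~/.bashrc.
# /var/lib/gems/1.8/bin is where Natty's rubygems put gem binaries
# at the time; verify with `gem environment` on your own system.
echo 'export PATH="$PATH:/var/lib/gems/1.8/bin"' >> ~/.bashrc

# Make it take effect in the current session too:
export PATH="$PATH:/var/lib/gems/1.8/bin"
```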

The attentive reader will have noticed that by now half of the Twittersphere was pointing me to the solution I had already implemented above, and the other half was telling me not to install rubygems using apt-get, or to use rvm for all my RubyGems troubles.

Apart from the point that you shouldn't need tools like rvm to fix things that are fundamentally broken, the fact is that the average Java developer doesn't want to be bothered with RubyGems hell; he just wants to do apt-get install vagrant and get on with his real work. And that's exactly what I'd expect from Linux for human beings.

I'd expect any junior guy to be able to go to vagrantup.com, read the four commands on the main page, and be up and running.
Because that's how it works on my bleeding-edge enterprise development distro, the one I usually would not advise those people (and my mother) to use.

Dec 19 2011

How I like my Java

This is a repost of my article, posted earlier at Jordan Sissel's awesome SysAdvent.

After years of working in Java-based environments, there are a number of things that I like to implement together with the teams I'm working with. The application doesn't matter much: whether it's plain Java, Tomcat, JBoss, etc., these deployment strategies will help your ops and dev teams build more manageable services.

Packaging

The first step is to have native operating system packages as the build artifacts rolling out of your continuous integration server. No .ear, .war, or .jar files: I want rpms or debs. With things like fpm or the maven rpm plugin this should not be an extra hassle, and the advantages you get from doing this are priceless.

What advantages? Most native package systems support dependency resolution, file verification, and upgrades (or downgrades). These are things you would have to implement yourself or cobble together from multiple tools. As a bonus, your fellow sysadmins are likely already comfortable with the native package tool used on your systems, so why not do it?
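
As an illustration, building such a package with fpm might look like the sketch below. The package name, version, and paths are made up, and the fpm call is guarded since it assumes fpm is installed:

```shell
# Stage the files exactly as they should land on the target system
# (myapp.war is a placeholder for the real CI build artifact).
mkdir -p build/opt/myapp
touch myapp.war
cp myapp.war build/opt/myapp/

# Roll the staged tree into an rpm with fpm, if available.
if command -v fpm >/dev/null 2>&1; then
  fpm -s dir -t rpm -n myapp -v 1.0 -C build .
else
  echo "fpm not installed, skipping package build"
fi
```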

Proxied, not running as root

Shaken, not stirred

Just like any other daemon, for security reasons, I prefer to run Tomcat or JBoss as its own user, rather than as root. In most cases, however, only root can bind to ports below 1024, so you need to put a proxy in front. This is a convenient requirement, because proxying (with something like Apache) can be used to terminate SSL connections, gives improved logging (access logs, etc.), and provides the ability to run multiple Java application server instances on the same infrastructure.
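
As a sketch, the Apache side of such a setup might look like this; the hostname and backend port are assumptions, and mod_proxy and mod_proxy_http need to be enabled:

```apache
<VirtualHost *:80>
    ServerName app.example.com

    # Hand all requests to the Tomcat/JBoss instance that runs as
    # its own user on an unprivileged port.
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/

    # Access logging at the proxy, one of the extra benefits.
    CustomLog /var/log/apache2/app_access.log combined
</VirtualHost>
```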

Service Management

Lots of Java application servers ship with a semi-functional shell script that allows you to start the service. Often, these services don't daemonize in a clean way, which is why I prefer to use the Java Service Wrapper from Tanuki to manage most Java-based services. With a small config file, you get a clean way to stop and start Java as a service, and even the possibility to add some more monitoring to it.
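
A minimal wrapper.conf sketch for such a service might look like this; the class names and paths are hypothetical, not from any real application:

```ini
# Hypothetical wrapper.conf sketch: adjust the classpath and main
# class for your own service.
wrapper.java.command=java
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp
wrapper.java.classpath.1=/opt/myapp/lib/*.jar
wrapper.app.parameter.1=com.example.MyAppMain
wrapper.logfile=/var/log/myapp/wrapper.log
```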

However, there are some problems the Java Service Wrapper leaves unsolved. For example, after launching the service, the wrapper can return with a successful exit code while your service is not ready yet. The application server might be ready, but your applications themselves are still starting up. If you are monitoring these applications (e.g. for high availability), you really only want to treat them as 'active' when the application is ready, so you don't want your wrapper script to return "OK" before the application has been deployed and is ready. Otherwise, you end up with false positives, or nodes that fail over before the application has ever started. It's pretty easy to create a ping-pong service-flapping scenario on a cluster this way.

One application per host

I prefer to deploy one application per host, even though you can easily deploy multiple applications within a single Java VM. With one per host, management becomes much easier. Given the availability and popularity of good virtualization, the overhead of launching multiple Linux VMs for different applications is so low that the benefits outweigh the disadvantages.

Configuration

What about configuration of the application? Where should remote API URLs, database settings, and other tunables go? A good approach is to create a standard location for all your applications, like /etc/$vendor/app/, where you place the appropriate configuration files. Volatile application configuration must live outside the artifact that comes out of the build (.ear, .jar, .war, .rpm). The content of these files should be managed by a configuration management tool such as puppet, chef, or cfengine. The developers should be given some basic training so they can provide the systems team with the appropriate configuration templates.

Logs

Logs are pretty important too, and very easy to neglect. There are plenty of tools around to log from a Java application: Log4j, Logback, etc. Use them, and make sure they are configured to log to syslog; then logs can be collected centrally and parsed by tools much more easily than if they were spread all over the filesystem.
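
With Log4j, for example, routing everything to the local syslog daemon is a small configuration change; the facility and pattern below are just an illustration:

```ini
# log4j.properties sketch: send all logging to the local syslog
# daemon on the LOCAL0 facility.
log4j.rootLogger=INFO, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.facility=LOCAL0
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=myapp: %-5p %c - %m%n
```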

Monitoring

You also want your application to have some ways to monitor it besides just checking whether it is running; it is usually insufficient to simply check whether a TCP server is listening. A nice solution is a simple plain-text page with a list of critical services and whether they are OK or not (true/false), for example:

  1. someService: true
  2. otherService: false

This benefits humans as well as machines. Tools like mon, heartbeat, or load balancers can just grep for "false" in the file. If the file contains "false", the tool reports a failure and fails over. This page should live at a standard location for all your applications, following a pattern like http://host/servicename/health.html, for example http://10.0.129.10:8080/mrs-controller/health.html. The page should be accessible as soon as the app is deployed.

This true/false health report should not be a static HTML file; it should be a dynamically generated page. Plain text means you can also use curl, wget, or any command-line tool or browser to check the status of your service.
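
A monitoring check against such a page then boils down to a grep. A minimal sketch, using a local file to stand in for the fetched page (in practice you would curl the health URL first):

```shell
# Stand-in for the fetched health page; in practice something like:
#   curl -s http://host:8080/myapp/health.html > health.txt
cat > health.txt <<'EOF'
someService: true
otherService: false
EOF

# Fail as soon as any service reports false, which is exactly the
# signal a load balancer or failover tool wants.
if grep -q 'false' health.txt; then
  echo "UNHEALTHY"    # prints UNHEALTHY for the sample page above
else
  echo "HEALTHY"
fi
```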

The 'health.html' page should report honestly about health, executing any code necessary to compute 'health' before yielding a result. For example, if your app is a simple calculator, it should verify health by doing internal tests, like adding up some numbers, before sharing 'myCalculator: true' in the health report.

This kind of approach can also be used to provide you with metrics you can't get from the JVM, such as the number of concurrent users, or other application metadata valid for measurement and trending purposes.

Conclusion

If you can't convince your developers, then maybe more data can help: check out Martin Jackson's presentation on Java deployments, Automated Java Deployments with RPM.

With good strategies for packaging, deployment, logging, and monitoring, you are in a good position to have an easily manageable, reproducible, and scalable environment. You'll give your developers the opportunity to focus on writing the application, and they can use the same setup on their local development boxes (e.g. by using Vagrant) as you are using in production.

Dec 14 2011

Lisa 2011

Last week I was in Boston for my first, and their 25th, edition of the Large Installation System Administration conference.
LISA was pretty much all I expected from it: old Unix wizards with long hair and white beards, the usual suspects, and a mix of devops practitioners at a devops-themed conference, with on one side awesome and well-positioned content, and on the other side absolutely basic stuff.

On Tuesday I had a devops BoF scheduled for two hours.

My goal for the session was to not talk myself, but to let the audience figure out the four key components of devops as documented by @botchagalupe and @damonedwards: Culture, Automation, Measurement, and Sharing. I have to admit it took me a while to get them to that point, but they figured it out themselves. The BoF was standing room only, and there was a good discussion going on.

On Wednesday I gave my talk, titled Devops: The Future is Here, It's Just Not Evenly Distributed Yet.

During my talk I realized that the crowd needed some more explanation of Vagrant, so I proposed a BoF on that topic too. I used @patrickdebois's awesome slides and hosted a small BoF on Vagrant on Thursday evening.

Friday morning I was scheduled to be on a panel featuring a #devops guy, a storage guy, and a network guy.
As my voice was starting to break down I wasn't really confident; however, by the time the panel started I could talk normally again :)
The setup was weird: basically three people with totally different backgrounds discussing a variety of topics. There were no really opposing views; mostly we agreed with each other, so I'm not really sure the audience was entertained :)

Anyhow, two BoFs, a talk, and a panel later, I was exhausted and ready to fly back to Belgium.

Tomorrow I have another presentation, together with Patrick, at the BeJUG. Problem is, I'm still looking for my voice ;(

So worst case, I'm just going to turn on the recording that the Usenix folks made of my talk...

I must admit, I've given better talks...

Nov 13 2011

QR Encode

For future reference, and to prevent me from googling it a third time:

  1. qrencode "http://www.krisbuytaert.be/" -o qrcode.png -s 10

or

  1. cat Kris_Buytaert.vcf | qrencode -o vcf.png -s 5

Oct 30 2011

A different shade of green

Back in late 1997 I had spent way too much time helping people build websites, and was fed up with customers wanting a different shade of green for the background of their website. I was fed up with graphic artists who didn't want to understand the concept of a color palette and browser-safe colors, and didn't understand the differences between print and web. So I decided to stop working for the wannabe web experts and do some real software.

Fast forward 15 years, and I find myself discussing different shades of green with developers... maybe it's time for some radical change again :)

You got to love Geek & Poke

Sep 24 2011

Fall , Winter and Spring Conference Season 2011 - 2012

Patrick posted his upcoming conference schedule for the next couple of months.
As you can see, there are a couple of overlapping conferences :)

Conferences I'm speaking at or likely to attend are:

  • The first week of October I'll be in the Valley. I'll be late for Jenkinsconf, but I hope to pick up some events while I'm there; suggestions are welcome. I'm also heading back to Europe earlier than planned, so I will miss BadCamp :( ...
  • Devopsdays Goteborg, Sweden: October 14-15. The yearly European devops event is happening in Goteborg this time. It's going to be really exciting, as the theme is inclusive: exploring the boundaries of devops. I'm once again part of the organization of this conference.
  • T-Dose 2011, The Technical Dutch Open Source Event, on 5 and 6 November 2011. I will be talking again about my experiences with complex Puppet setups.
  • Citconf, London: November 11-12. All you ever wanted to know about Continuous Integration. Registered, but haven't booked flights yet.
  • Cloudcamp Belgium: November 21. I'm looking forward to this year's event, as there will likely be more practitioners and fewer marketing folks.
  • Lisa 2011, Boston, US. I'm giving an invited talk titled Devops: The Past and Future Are Here, It's Just Not Evenly Distributed (Yet), and I'll be on a panel titled What Will Be Hot Next Year. Really looking forward to this one :)
  • Fosdem.org will take place on 4 and 5 February 2012, and as every year since its inception, I'll be there.
  • The UKUUG rebranded to FlossUK; they are hosting their annual Spring conference from 20th to 22nd March in Edinburgh. Given their refound focus it will be even more interesting!
  • And as announced earlier this week, Loadays.org will take place in Antwerp again this year, on 31/3/2012 and 1/4/2012. As in previous years, I'm co-organizing this conference.

And yes, I do work from time to time. It's just that these conferences are a great way to capture and share new ideas. All worth it!

Aug 24 2011

Using Veewee

With @dancarley and @patrickdebois just discussing the origin of the name Veewee, I figured I still had that piece of documentation I wrote up for myself flying around...

So, with no other reason than having my docs mirrored on the internet:

  1. gem install veewee

  1. veewee templates

shows you which templates are available.

  1. $veewee init natty ubuntu-11.04-server-amd64
  2. Init a new box natty, starting from template ubuntu-11.04-server-amd64
  3. The basebox 'natty' has been successfully created from the template ''ubuntu-11.04-server-amd64'
  4. You can now edit the definition files stored in definitions/natty
  5. or build the box with:
  6. vagrant basebox build 'natty'

As noted, this will generate the definition for your natty box.
It will create a definition.rb file which describes your box,
a preseed (or kickstart or similar) file, and a postinstall file.

The next step is to use vagrant to build this basebox:

  1. $ vagrant basebox build natty
  2.  
  3. Verifying the isofile ubuntu-11.04-server-amd64.iso is ok.
  4. Creating vm natty : 384M - 1 CPU - Ubuntu_64
  5. Creating new harddrive of size 10140
  6. VBoxManage createhd --filename '/home/sdog/VirtualBox VMs/natty/natty.vdi' --size '10140' --format vdi > /dev/null
  7. Attaching disk: /home/sdog/VirtualBox VMs/natty/natty.vdi
  8. Mounting cdrom: /home/sdog/iso/ubuntu-11.04-server-amd64.iso
  9. Waiting for the machine to boot
  10.  
  11. Typing:[1]: <Esc><Esc><Enter>
  12. Typing:[2]: /install/vmlinuz noapic preseed/url=http://192.168.10.101:7122/preseed.cfg
  13. Typing:[3]: debian-installer=en_US auto locale=en_US kbd-chooser/method=us
  14. Typing:[4]: hostname=natty
  15. Typing:[5]: fb=false debconf/frontend=noninteractive
  16. Typing:[6]: keyboard-configuration/layout=USA keyboard-configuration/variant=USA console-setup/ask_detect=false
  17. Typing:[7]: initrd=/install/initrd.gz -- <Enter>
  18. Done typing.
  19.  
  20. Starting a webserver on port 7122
  21. Serving file /home/sdog/definitions/natty/preseed.cfg
  22.  
  23. Waiting for ssh login with user vagrant to sshd on port => 7222 to work
  24. .....................................................................................................................................................Transferring /tmp/vbox.version20110822-6766-1xcca1e-0 to .vbox_version
  25. ..
  26.  
  27.  
  28. Step [0] was successfully - saving state
  29.  
  30. Waiting for ssh login with user vagrant to sshd on port => 7222 to work
  31. .Transferring /home/sdog/definitions/natty/postinstall.sh to postinstall.sh

Plenty more output here !

Be very patient; you will see VirtualBox launch a VM and start installing it.

The next steps are clear; vagrant tells you what you can do next:

  1. Now you can:
  2. - verify your box by running : vagrant basebox validate natty
  3. - export your vm to a .box file by running : vagrant basebox export natty

So after validating it, you can export the basebox and share it with other people.

The next step is to actually use that box in your own Vagrant setup; for that you need to import the box into your box collection:

  1. $ vagrant box add 'natty' 'natty.box'
  2. [vagrant] Downloading with Vagrant::Downloaders::File...
  3. [vagrant] Copying box to temporary location...
  4. [vagrant] Extracting box...
  5. [vagrant] Verifying box...
  6. [vagrant] Cleaning up downloaded box...

To verify, just run:

  1. $ vagrant box list
  2. Centos6
  3. MyCentOS2
  4. debian
  5. natty

Your freshly imported box should be in the list.

You can now use

  1. config.vm.box = "natty"
to refer to the freshly imported box in your Vagrantfile, a file that can be created by running vagrant init, or by copying another Vagrant template.

After that, the regular Vagrant fun starts: up, provision, provision, provision, destroy, and so forth.

Aug 21 2011

Devops for Drupal, the survey,

Devops is gaining momentum. The idea that developers and operations should work much closer together, and that one should automate as much as possible in both infrastructure and release process, brings along a lot of questions, ideas, and tools that need to be integrated into your daily way of working.

Drupal has one of the biggest development communities in the open source world; being part of both communities, we are trying to bridge the gap.

At Inuits we are building tools and writing best practices to close the gap, but we are not alone in this world, and we would like to gather some feedback on how other people are deploying and managing their Drupal environments.

Working with Drupal, or building with Drupal in mind: how do you release your sites? That's what we are trying to figure out, for everybody else to learn from.

Oh, and you can win some items from our brand new fashion line!

The survey is here; please spend a bit of your time helping us better understand the needs of the community.

Aug 21 2011

Back from opendbcamp and Froscon

I'm back from my second opendbcamp this year, and my first Froscon :)

With Sankt Augustin being only a 2.5-hour drive away, it was one of the only conferences in Europe so close to home that I hadn't visited yet. Overall it's a good conference: a good-sized, not-too-crowded event which attracted a bunch of interesting speakers.

Sadly, they made way too many last-minute changes to the schedule, which were not reflected in their Android app, so I ended up in the wrong room multiple times. Also sadly, German conferences still tend to have way too many presentations in German; for foreign speakers from different parts of the world that limits the choice of talks they can visit.

I gave two talks today in Sankt Augustin. I opened the Devops track with a general introduction to devops; the talk was fairly well attended given that it was on a Sunday morning right after the social event, and plenty of people had questions.

My second talk of the day was about my experience puppetizing SipX. It felt pretty weird having to give an introduction to Puppet in the slot after James. Apart from the session chair telling me I had only 5 minutes left about 25 minutes into my talk, and the microphone breaking twice on me, it went fairly well; it was even attended by a dog.

Time permitting, Froscon is a conference I might visit again :)

Jul 17 2011

Drupal and Configuration Mgmt, we're getting there ...

For those who haven't noticed yet: I'm into devops. I'm also a little bit into Drupal (blame my last name..), so one of the frustrations I've been having with Drupal (and much other software) is the automation of deployment and upgrades of Drupal sites...

So for the past couple of days I've been trying to catch up on the ongoing discussion regarding the results of the configuration mgmt sprint. I've been looking at it mainly from a systems point of view, with the use of Puppet, Chef, or similar tools in mind. I know I'm late to the discussion, but hey, some people take holidays in this season :) So below you can read a bunch of my comments and thoughts on the topic.

First of all, to me JSON looks like a valid option.
Initially there was a plan to wrap the JSON in a PHP header for "security" reasons, but that seems to be gone, even though nobody mentioned the problems it would have caused for external configuration management tools.
When thinking about external tools that should be capable of mangling the file: plenty of them support JSON, but they won't be able to recognize a JSON file with a weird header (thinking e.g. about Augeas (augeas.net)). I'm not talking about IDEs, GUIs, etc. here; I'm talking about system-level tools and libraries that are designed to mangle standard files. For Augeas we could create a separate lens to manage these files, but other tools might have bigger problems with the concept.

As catch suggests, a clean .htaccess should be capable of preventing people from accessing the .json files. There are other methods to figure out whether files have been tampered with; I'm not sure this even fits within Drupal (I'm thinking about reusing existing CA setups rather than having yet another security setup to manage).

In general, to me, tools such as puppet should be capable of modifying config files and then activating that config with no human interaction required. Obviously drush is a good candidate here to trigger the system after the config files have been changed, but contrary to what some people think, having to browse to a web page to confirm the changes is not an acceptable solution. Just think about having to do this on multiple environments; manual actions are error-prone.
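
From a deploy script or a puppet exec, that trigger might look like the following sketch. The exact drush commands depend on your setup, and the calls are guarded since the sketch assumes drush is installed:

```shell
# After configuration management has written the new config files,
# let drush activate them without any browser interaction.
if command -v drush >/dev/null 2>&1; then
  drush cache-clear all   # flush caches so the new config is read
  drush updatedb -y       # run pending update hooks, no prompts
else
  echo "drush not installed, skipping"
fi
```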

Apart from that, I also think the storing of the certificates should not be part of the file. What about a meta file with the appropriate checksums? (Also, if I'm using Puppet or any other tool to manage my config files, then the security, i.e. preventing tampering with these files, is already covered by the configuration management tool.) I do understand that people want to build Drupal in the most secure way possible, but I don't think this belongs in any web application.

When I look at other similar discussions that wanted to provide a similarly secure setup, they ran into a lot of end-user problems with these kinds of setups. An alternative approach is to make this configurable and/or pluggable. The default should be to have it enabled, but more experienced users should have the opportunity to disable it, or replace it with another framework. Making it pluggable upfront solves a lot of hassle later.

Someone in the discussion noted:
"One simple suggestion for enhancing security might be to make it possible to omit the secret key file and require the user to enter the key into the UI or drush in order to load configuration from disk."

Requiring the user to enter a key in the UI or drush would be counterproductive to the goal one wants to achieve; the last thing you want when automating setups is a requirement for manual/human interaction. Therefore a feature like this should never be implemented.

Luckily there seems to be a new idea around that doesn't plan on mangling the JSON file:
instead of storing the config files in a standard place, we store them in a directory that is named using a hash of your site's private key, like sites/default/config_723fd490de3fb7203c3a408abee8c0bf3c2d302392. The files in this directory would still be protected via .htaccess/web.config, but if that protection failed, the files would still be essentially impossible to find. This means we could store pure, native .json files everywhere instead, and still get the benefits of JSON (human-editable, syntax-checkable, interoperable with external configuration management tools, native and speedy encoding/decoding functions), without the confusing and controversial PHP wrapper.

Figuring out the directory name for the configs from a configuration mgmt tool could then be done by something similar to:

  1. cd sites/default/conf/$(ls sites/default/conf|head -1)

In general I think the proposed setup looks acceptable; it definitely goes in the right direction of providing systems people with a way to automate the deployment of Drupal sites and applications at scale.

I'll be keeping an eye on both the direction they are heading and the evolution of the code!