Three ways to optimize Puppet

Quick tips for optimizing your Puppet usage.

By Jo Rhett
April 25, 2016
Departure board at night (source: James Creegan via Flickr)

The following are good practices that you should employ to keep your Puppet servers and agents running smoothly.

Indenting Heredoc

The heredoc format can strip leading whitespace, allowing you to indent the heredoc body along with the surrounding code without that indentation appearing in the resulting string. Note that the tag must be double-quoted, as in @("END"), for variable interpolation to work inside the heredoc:

   else {
     if( $keeping_up_with_jones ) {
       $message_text = @("END")
         This is a bit of a rat race, don't you think $username?
         All this running about, for $effort and $gain,
         the results of which we can't see at all.
         |- END
     }
   }
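
The pipe character on the final line sets the left margin: whitespace up to that column is removed from every line, while deeper indentation is preserved. Here's a minimal sketch (the networking fact shown is standard in Facter 3):

$motd_text = @("EOT")
    Welcome to ${facts['networking']['fqdn']}.
        This line keeps four spaces of relative indent.
    | EOT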

Furthermore, you can tag a heredoc with the syntax of its contents. Puppet itself validates only its own (pp) syntax at parse time, but editors such as Geppetto use the tag to display the block with the appropriate syntax scheme:

$certificates = @(END:json)
   {
     "private_key": "-----RSA PRIVATE KEY----...",
     "public_key": "-----BEGIN PUBLIC KEY-----...",
     "certificate": "-----BEGIN CERTIFICATE-----..."
   }
   | END
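
Since Puppet can only check its own language, a heredoc tagged pp is validated for Puppet syntax errors when the manifest is parsed. A small, hypothetical example:

$motd_fragment = @(END:pp)
   file { '/etc/motd':
     ensure => file,
   }
   | END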

Splaying Puppet Agent Cron Jobs

If you run Puppet from cron and your agents use a shared Puppet server or any shared data source, you shouldn't run them all at the same time or you will overload that source. With only a few hundred nodes, you can simply use the splay feature and the random distribution will be spread out enough:

cron { 'puppet-agent':
  ensure      => present,
  environment => 'PATH=/usr/bin:/opt/puppetlabs/bin',
  command     => 'puppet agent --onetime --no-daemonize --splay --splaylimit 30m',
  user        => 'root',
  minute      => 0,
}
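
If the agent runs as a daemon rather than from cron, the same behavior is available through the standard splay settings in puppet.conf:

[agent]
splay      = true
splaylimit = 30m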

In my experience, after a few hundred nodes you’ll find that a quarter of them end up in the same three minutes and you’ll experience the “thundering herd” phenomenon. At that point, you’ll want a mechanism to guarantee a more consistent spread of hosts.

The way to do this is to have the server calculate a different minute number for each node, and add the cron resource to its catalog with that minute. The number selection must be consistent, as random numbers will change on each connection.

The easiest way, with zero development on your part, is the fqdn_rand(limit, string) function, which always returns the same value for a given node: an integer between 0 and one less than the limit. The node's name is always part of the seed; any string you provide is mixed in as additional salt, and any value that doesn't change on the node will do. Here are some of my favorite sources for the string:

  • Any unique identifier for each node, such as a serial number.
  • The node’s IP address. The unique fe80:: address is always good.
  • Node certificate name or fully qualified domain name.
  • A unique identifier for each node from an internal datastore.

It really doesn’t matter how you generate the source number for the node, so long as it is evenly distributed and does not change often. However you generate the number, add 30 for the second run. Place both of these numbers in a resource like so:

$minute = fqdn_rand( 30, $facts['certname'] )
cron { 'puppet-agent':
  ensure  => present,
  command => '/opt/puppetlabs/bin/puppet agent --onetime --no-daemonize',
  user    => 'root',
  minute  => [ $minute, $minute + 30 ],
}
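
If your node names cluster together (sequential hostnames, for example), salting with a hardware identifier can spread the minutes more evenly. Here's a sketch using the DMI serial number from Facter 3's structured facts, falling back to the default seed where no serial is available (verify the fact path on your own hardware):

$serial = $facts.dig('dmi', 'product', 'serial_number')
$minute = $serial ? {
  undef   => fqdn_rand(30),           # virtual machines often lack a serial
  default => fqdn_rand(30, $serial),  # salt with the hardware serial
}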

Cleaning Puppet Reports

The reports created by Puppet nodes are never removed by the server. You’ll have to add jobs to clean them up. Shown here is a resource for your server to clean up reports after 90 days:

cron { 'clean-puppetserver-reports':
  command => 'find /var/opt/puppetlabs/puppetserver/reports -ctime +90 -delete',
  user    => 'root',
  minute  => '2',
  hour    => '3',
}

While it’s not likely to fill up quite so fast, the same problem does exist for the node itself. Here’s a resource to apply to every node to clean up the reports once a month:

cron { 'clean-puppet-agent-reports':
  command  => 'find /opt/puppetlabs/puppet/cache/reports -ctime +90 -delete',
  user     => 'root',
  monthday => 1,
  minute   => '2',
  hour     => '3',
}
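
If you'd rather not shell out to find, Puppet's built-in tidy type can expire old reports during each agent run instead; a minimal sketch with the same 90-day retention, matched on change time:

tidy { '/opt/puppetlabs/puppet/cache/reports':
  age     => '90d',
  recurse => true,
  type    => 'ctime',
}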

Adjust the timing of the runs and the retention length to meet your own needs. In many environments we centrally archive the reports on long-term storage, and delete reports older than two days on the node every night.
