Managing Infrastructure with Puppet by James Loope

Chapter 4. MCollective

Puppet is not the end of this journey. We can abstract even further if we begin to talk about pools of servers and virtual instances. What if we have a cluster of application nodes that must be managed as a group, or we need Facter variables reported from every node that includes a certain Puppet class? What do we do if Apache needs a kick on 25 instances out of 1,000? MCollective can do these things and more.

MCollective uses a publish/subscribe message bus to distribute commands to systems in parallel. Requests are pushed out to all of your systems at once, and each MCollective server decides which messages it should act on, based on a set of filters in the message. A good analogue is an IRC chat service: everyone in a channel receives all of the messages, but messages that are intended for us have our name attached to them.
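The filter-on-the-receiver model can be sketched in a few lines of Ruby. This toy bus is purely illustrative (none of these class or method names come from MCollective): every subscriber sees every message, and each one decides locally whether the attached filter applies to it, just as each MCollective server evaluates a request's filters.

```ruby
# Toy publish/subscribe bus: every subscriber receives every message
# and decides locally whether it matches, mimicking how MCollective
# servers evaluate the filters attached to a request.
class ToyBus
  def initialize
    @subscribers = []
  end

  def subscribe(node)
    @subscribers << node
  end

  # Deliver the message to everyone; only nodes whose attributes
  # satisfy every key/value pair in the filter act on it.
  def publish(message, filter)
    @subscribers.select { |n| filter.all? { |k, v| n[k] == v } }
                .map { |n| "#{n[:name]}: ran #{message}" }
  end
end

bus = ToyBus.new
bus.subscribe(name: "A.example.com", klass: "apache2", arch: "x86_64")
bus.subscribe(name: "B.example.com", klass: "mysql",   arch: "x86_64")

puts bus.publish("service apache2 restart", klass: "apache2")
# A.example.com: ran service apache2 restart
```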

Messages that an MCollective server accepts are passed on to agent modules, which take the message parameters and do some work. Agents exist for all sorts of behaviors, such as managing running services; running Puppet; managing packages, processes, and files; and even banning IP addresses with iptables. Beyond this, agents are fairly simple to write using SimpleRPC.
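A SimpleRPC agent is essentially a set of named actions that receive a request and fill in a reply. The real base class lives in the mcollective gem (MCollective::RPC::Agent), which we can't assume here, so the following is a self-contained mock of that shape; MockRPC and ServiceAgent are illustrative names only, not part of the real API.

```ruby
# A stand-in for MCollective's SimpleRPC agent DSL: `action :name do
# ... end` registers a block, and handle() dispatches a request to it.
module MockRPC
  class Agent
    def self.actions
      @actions ||= {}
    end

    def self.action(name, &block)
      actions[name] = block
    end

    def self.handle(request)
      reply = {}
      actions.fetch(request[:action]).call(request, reply)
      reply
    end
  end
end

# An agent that "restarts" a service -- the work is only echoed here;
# a real agent would shell out or call a library.
class ServiceAgent < MockRPC::Agent
  action :restart do |request, reply|
    reply[:output] = "restarted #{request[:service]}"
  end

  action :status do |request, reply|
    reply[:output] = "#{request[:service]} is running"
  end
end

puts ServiceAgent.handle(action: :restart, service: "apache2")[:output]
# restarted apache2
```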

Getting the Software

MCollective installation is not as simple as Puppet's was. We need to set up a Stomp messaging server and configure the MCollective server on each of our hosts before we can start using it.

ActiveMQ

ActiveMQ is Apache's Java messaging server. We'll need to install the Sun Java Runtime, get the ActiveMQ package, and configure it. If you're running Ubuntu, the sun-java6-jre package is available from the partner repository. You can download an ActiveMQ tarball from http://activemq.apache.org/activemq-542-release.html.

Once you have Java installed and the tarball extracted, you'll need to edit the conf/activemq.xml file and add some authentication details to it. The example below shows the pertinent portions: the creation of an authorization user for MCollective and of the MCollective topic. These are necessary to allow MCollective servers and clients to talk to one another. You'll need these credentials for your MCollective configuration as well:

<!-- SNIP -->

<plugins>
    <statisticsBrokerPlugin/>
    <simpleAuthenticationPlugin>
    <users>
    <authenticationUser username="mcollective" password="secrets"
        groups="mcollective,everyone"/>
    <authenticationUser username="admin" password="moresecrets"
        groups="mcollective,admins,everyone"/>
    </users>
    </simpleAuthenticationPlugin>
    <authorizationPlugin>
    <map>
    <authorizationMap>
        <authorizationEntries>
        <authorizationEntry queue=">" write="admins" read="admins" admin="admins" />
        <authorizationEntry topic=">" write="admins" read="admins" admin="admins" />
        <authorizationEntry topic="mcollective.>" write="mcollective" 
            read="mcollective" admin="mcollective" />
        <authorizationEntry queue="mcollective.>" write="mcollective"
            read="mcollective" admin="mcollective" />
        <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" 
            write="everyone" admin="everyone"/>
        </authorizationEntries>
    </authorizationMap>
    </map>
    </authorizationPlugin>
</plugins>

<!-- SNIP -->

You can now start up ActiveMQ with the command bin/activemq start.

MCollective Server

The MCollective "server" is the part that you'll need to deploy on all of your nodes. The client is a sort of command console that sends messages to the servers. Installing MCollective itself is fairly straightforward, and packages are available for most distributions. You'll need at least one client and one server installed in order to execute commands. Alternatively, there is a community Puppet module that can be used to install MCollective and distribute the accompanying plug-ins.

Once it's installed, you'll need to edit the /etc/mcollective/server.cfg and /etc/mcollective/client.cfg files. Enter the MCollective user's password that you specified in the ActiveMQ configuration in the plugin.stomp.password field, and specify your Stomp hostname in the plugin.stomp.host field. The plugin.psk secret must match between the server and client, as it is used to authenticate messages. This config assumes that you have Puppet installed; it looks for the class file at the default location and sets the fact source to Facter:

# /etc/mcollective/server.cfg
topicprefix = /topic/mcollective
libdir = /usr/share/mcollective/plugins
logfile = /var/log/mcollective.log
loglevel = info
daemonize = 1

# Plugins 
securityprovider = psk
plugin.psk = mysharedsecret 

connector = stomp
plugin.stomp.host = stomp.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = secret 

# Facts
factsource = facter
# Puppet setup
classesfile = /var/lib/puppet/state/classes.txt

plugin.service.hasstatus = true
plugin.service.hasrestart = true

In order for the Facter fact source to work correctly, you will need to distribute the Facter plug-in for MCollective to the servers. The plug-in source can be fetched from GitHub at https://github.com/puppetlabs/mcollective-plugins/tree/master/facts/facter/ and installed to the server under $libdir/mcollective. Remember to restart MCollective after copying the files so that MCollective will recognize the new agent.
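MCollective can also read facts from a flat YAML file instead of Facter (factsource = yaml, with plugin.yaml pointing at the file), which avoids invoking Facter on every discovery. The sketch below writes and reads such a file; the path and the cron-style refresh are assumptions for illustration, not something MCollective does for you.

```ruby
# Simulate the YAML fact source: something like a cron job would
# periodically dump Facter output into the facts file, and MCollective
# would read it back as a flat hash of fact => value.
require "yaml"
require "tmpdir"

facts_file = File.join(Dir.mktmpdir, "facts.yaml")

# Stand-in for the periodic Facter dump:
File.write(facts_file, { "hostname" => "A", "architecture" => "x86_64" }.to_yaml)

facts = YAML.load_file(facts_file)
puts facts["architecture"]
# x86_64
```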

MCollective Client

You’ll need to install and configure the client in the same fashion. Here’s an example of the client configuration:

topicprefix = /topic/mcollective
libdir = /usr/share/mcollective/plugins
logfile = /dev/null
loglevel = info

# Plugins 
securityprovider = psk
plugin.psk = mysharedsecret 

connector = stomp
plugin.stomp.host = stomp.example.com
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = secret

Warning

These configuration files contain secrets that can be used to publish commands onto the MCollective channel. The MCollective servers necessarily run as root and execute with full privileges. It is of utmost importance that access to the secrets and the Stomp server be carefully controlled.
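To see why the shared secret must match and stay secret: with the psk security provider, each message carries a hash computed over the message body and the key, and every receiver recomputes it to validate the sender. The following is a scheme along those lines, not MCollective's exact wire format; holding the PSK is enough to forge valid commands, which is why the warning above matters.

```ruby
# Hash-based message validation with a pre-shared key. Any party that
# knows the PSK can produce a valid hash, so possession of the key is
# effectively root on every MCollective server.
require "digest"

PSK = "mysharedsecret"

def sign(body)
  Digest::MD5.hexdigest(body + PSK)
end

message = { body: "service apache2 restart",
            hash: sign("service apache2 restart") }

# A server drops any message whose hash does not validate:
valid = message[:hash] == sign(message[:body])
puts valid
# true
```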

MCollective Commands

With both the servers and a client configured, we're ready to start issuing MCollective commands. Let's start off with the mc-find-hosts command. When run without any arguments, mc-find-hosts will list all of the MCollective servers that are currently active and listening:

:> mc-find-hosts
A.example.com
B.example.com
C.example.com
D.example.com

We can also get some information about our individual MCollective nodes. mc-inventory will tell us what agents are available on a node, what Puppet classes that node is a member of, and, assuming the Facter module is installed, list all of the available Facter facts about the node:

:> mc-inventory A.example.com

Inventory for A.example.com:

Server Statistics:
    Version: 1.0.1
    Start Time: Fri May 06 11:10:34 -0700 2011
    Config File: /etc/mcollective/server.cfg
    Process ID: 22338
    Total Messages: 143365
    Messages Passed Filters: 75428
    Messages Filtered: 67937
    Replies Sent: 75427
    Total Processor Time: 162.09 seconds
    System Time: 73.08 seconds

Agents:
    discovery       filemgr        package 
    iptables        nrpe           rpcutil  
    process         puppetd        
    service

Configuration Management Classes:
    ntp             php                apache2
    mysql-5            varnish

Facts:
    architecture => x86_64
    domain => example.com
    facterversion => 1.5.7
    fqdn => A.example.com 
    hostname => A
    id => root
    is_virtual => true
    kernel => Linux
    kernelmajversion => 2.6
    kernelversion => 2.6.35

This is already a useful tool for diagnostics and inventory on all of your Puppet-managed servers, but MCollective also lets us execute agents on the target systems, filtered by any of these attributes: facts, agents, or classes. For example, if our servers run Apache and we need to restart it on all of them, we could use the mc-service agent to do this:

:> mc-service --with-class apache2 apache2 restart

This will place a message on the MCollective message bus that says: “All the servers with the apache2 Puppet class, use your service agent to restart apache2.” We can even add multiple filters like the following:

:> mc-service --with-class apache2 --with-fact architecture=x86_64 apache2 restart

This will let us restart Apache on only the 64-bit (x86_64) architecture servers that have the Puppet apache2 class. These sorts of filters make remote execution of tasks on particular subsets of servers very easy.

Of particular interest to those of us running large infrastructures is MCollective's built-in capacity to run the Puppet agent on the servers. Puppet's client-server model, in its default configuration, will poll the Puppet Master once every half hour. This is not convenient, for instance, if you would like to use Puppet to coordinate an application release on a group of servers. If you would like some control over the sequence and timing of the Puppet runs, you can use the MCollective puppetd agent and forgo the polling behavior of the agent daemon. Since MCollective can trigger Puppet directly, it is not necessary to start the Puppet agent daemon at boot either. So long as MCollective and Puppet are both installed, we can execute Puppet runs as we like.

The agent can be downloaded from GitHub at https://github.com/puppetlabs/mcollective-plugins/tree/master/agent/puppetd/ and, as with the Facter plug-in, should be copied to $libdir/mcollective on the servers, preferably using Puppet. Once it’s installed, you will be able to kick off a Puppet run on all or some of your servers with the following command:

:> mc-puppetd --with-class example runonce

If you don’t mind the default polling behavior of the Puppet agent, you can also use the puppetd MCollective agent to selectively enable or disable Puppet on sets of your instances as well as initiate one-off runs of the agent.

Note

If you still want to have Puppet run on a regular basis to ensure configuration correctness, but need to avoid polling “stampedes,” take a look at the PuppetCommander project at http://projects.puppetlabs.com/projects/mcollective-plugins/wiki/ToolPuppetcommander. It uses MCollective’s puppetd module to centrally coordinate Puppet runs so as to avoid overwhelming a Puppet Master. It will also give you the power to specify which nodes or classes to run automatically.
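The coordination idea behind PuppetCommander can be sketched simply: release runs in small batches so the Puppet Master never sees every agent at once. In this sketch the node list and the trigger are stubs; a real coordinator would discover nodes over MCollective, call the puppetd agent's runonce action, and poll its status action between batches.

```ruby
# Staggered Puppet runs: trigger at most `concurrency` nodes at a
# time, completing each batch before releasing the next.
nodes = %w[A B C D E].map { |host| "#{host}.example.com" }
concurrency = 2
triggered = []

nodes.each_slice(concurrency) do |batch|
  batch.each do |node|
    puts "runonce -> #{node}"   # stand-in for an mc-puppetd call
    triggered << node
  end
  # A real coordinator would poll the agent's status action here and
  # wait until the batch's runs finish before starting the next one.
end
```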

Finally, there is an mc-rpc command that serves as a sort of metacommand, allowing access to all of the available agents. We can execute the puppetd agent, for example, with the following syntax:

:> mc-rpc --agent puppetd --with-class example runonce

Alternatively, we can use mc-rpc to read out the documentation for a particular agent:

:> mc-rpc --agent-help puppetd
SimpleRPC Puppet Agent
======================

Agent to manage the puppet daemon

    Author: R.I.Pienaar
    Version: 1.3-sync
    License: Apache License 2.0
    Timeout: 120
Home Page: http://mcollective-plugins.googlecode.com/



ACTIONS:
========
disable, enable, runonce, status

disable action:
---------------
    Disables the Puppetd

    INPUT:

    OUTPUT:
        output:
            Description: String indicating status
            Display As: Status

runonce action:
---------------
    Initiates a single Puppet run

    INPUT:

    OUTPUT:
        output:
            Description: Output from puppetd
            Display As: Output

status action:
--------------
    Status of the Puppet daemon

    INPUT:

    OUTPUT:
        enabled:
            Description: Is the agent enabled
            Display As: Enabled

        lastrun:
            Description: When last did the agent run
            Display As: Last Run

        output:
            Description: String displaying agent status
            Display As: Status

        running:
            Description: Is the agent running
            Display As: Running

You've seen the basic features of MCollective in this chapter. It works as a great orchestration tool for Puppet, allowing you greater control over your Puppet agents and more insight into your configurations through Facter. Beyond this, the agents are fairly simple to write and can be used to accomplish any task that you might want to execute in a distributed fashion across all or part of your infrastructure. Puppet Labs provides documentation on extending MCollective with custom SimpleRPC agents at http://docs.puppetlabs.com/mcollective/simplerpc/agents.html.
