
Adding methods to a Ruby class

I’m just blogging this because it took me ages to figure out; it seems so simple now, but I guess that’s how it usually goes.

The problem I have is that I want a plugin to be able to create a method either with a normal Ruby def foo or via some DSL-ish helpers.

class Foo < Base
   register_action(:name => "do_something", :description => "foo")
 
   def do_something_action
   end
 
   register_action(:name => "do_something_else", :description => "foo") do
      # body of the action here
   end
end

The above code should give me two methods – do_something_action and do_something_else_action – and they should be indistinguishable from the outside. Here’s the base class that makes this happen correctly:

class Base
   def self.register_action(action, &block)
      name = action[:name]
 
      self.module_eval { define_method("#{name}_action", &block) } if block_given?
   end
end

It’s pretty simple: we’re just calling define_method in the scope of the class, and that does the rest.
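To sanity-check that both styles end up indistinguishable, here’s a self-contained version of the idea you can run. The string return values are just for illustration:

```ruby
# Minimal sketch: a class method that turns an options hash and an
# optional block into a regular instance method via define_method.
class Base
  def self.register_action(action, &block)
    name = action[:name]

    self.module_eval { define_method("#{name}_action", &block) } if block_given?
  end
end

class Foo < Base
  # no block: the plugin author writes the method by hand
  register_action(:name => "do_something", :description => "foo")

  def do_something_action
    "defined with def"
  end

  # block form: the helper defines do_something_else_action for us
  register_action(:name => "do_something_else", :description => "foo") do
    "defined with a block"
  end
end

f = Foo.new
f.do_something_action        # => "defined with def"
f.do_something_else_action   # => "defined with a block"
```

Both methods show up in Foo.instance_methods and are called the same way, which is exactly the point.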

MCollective Agent Introspection

With the new SimpleRPC system in MCollective we have a simple interface for creating agents. Calling an agent looks like:

$ mc-rpc service status service=httpd

This is all well and good, and easy enough, but it requires you to know a lot: you need to know there’s a status action and that it expects a service argument. Not great.

I’m busy adding the ability for an agent to register its metadata and interface so that 3rd party tools can dynamically generate useful interfaces.

A sample registration for the service agent is:

register_meta(:name        => "SimpleRPC Service Agent",
              :description => "Agent to manage services using the Puppet service provider",
              :author      => "R.I.Pienaar",
              :license     => "GPLv2",
              :version     => 1.1,
              :url         => "http://mcollective-plugins.googlecode.com/",
              :timeout     => 60)
 
["start", "stop", "restart", "status"].each do |action|
    register_input(:action      => action,
                   :name        => "service",
                   :prompt      => "Service Name",
                   :description => "The service to #{action}",
                   :type        => :string,
                   :validation  => '^[a-zA-Z\-_\d]+$',
                   :maxlength   => 30)
end

This includes all the metadata: versions, timeouts, validation of inputs, prompts and help text for every input argument.
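The post doesn’t show the registry internals, but to illustrate how metadata like this can drive generic validation, here’s a hypothetical sketch. InputRegistry and its methods are my invention for this example, not MCollective’s API:

```ruby
# Hypothetical registry: store each input's rules, then check
# user-supplied values against them before calling the agent.
class InputRegistry
  def initialize
    @inputs = {}
  end

  def register_input(spec)
    @inputs[spec[:name]] = spec
  end

  def validate!(name, value)
    spec = @inputs[name]

    raise "unknown input #{name}" unless spec
    raise "#{name} too long" if value.length > spec[:maxlength]
    raise "#{name} fails validation" unless value =~ Regexp.new(spec[:validation])

    true
  end
end

reg = InputRegistry.new
reg.register_input(:name => "service", :validation => '^[a-zA-Z\-_\d]+$', :maxlength => 30)

reg.validate!("service", "httpd")   # => true
```

A dynamic UI can use the same stored specs to render prompts and help text, which is what makes the introspection useful beyond validation.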

Using this we can now generate dynamic UIs and do something like JavaDoc-generated documentation. I’ve recorded a little video demonstrating a proof-of-concept text UI that uses this data to generate an interface dynamically. This is ripe for integration into tools like Foreman and Puppet Dashboard.

Please watch the video here, best viewed full screen.

MCollective Release 0.4.x

A few days ago I released Marionette Collective version 0.4.0 and today I released 0.4.1. This release branch introduces a major new feature called Simple RPC.

In prior releases it took quite a bit of Ruby knowledge to write an agent and client. In addition, clients all ended up implementing their own little protocols for data exchange. We’ve simplified agents and clients and created a standard protocol between the two.

A standard protocol between clients and agents means we have a standard one-size-fits-all client program called mc-rpc, and it opens the door to writing simple web interfaces that can talk to all compliant agents. We’ve made a test REST Simple RPC bridge as an example.

Writing a client can now be done without all the earlier setup, command line parsing and so forth; it can be as simple as:

require 'mcollective'
 
include MCollective::RPC
 
mc = rpcclient("rpctest")
 
printrpc mc.echo(:msg => "Welcome to MCollective Simple RPC")
 
printrpcstats

This simple client has full discovery, full --help output, and takes care of printing results and stats in a uniform way.

This should make it much easier to write more complex agents, like deployers that interact with packages, firewalls and services all in a single simple script.

We’ve also taken a better approach to presenting output from clients: instead of listing 1000 OKs on success, it will now only print what’s failing.

Output from above client would look something along these lines:

$ hello.rb
 
 * [ ============================================================> ] 43 / 43
 
Finished processing 43 / 43 hosts in 392.60 ms

As you can see we have a nice progress indicator that works for 1 or 1000 nodes. You can still see the status of every reply by running the client in verbose mode, which will also add more detailed stats at the end.

Agents are also much easier; here’s an echo agent:

class Rpctest < RPC::Agent
    def echo_action
        validate :msg, :shellsafe

        reply.data = request[:msg]
    end
end
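The post doesn’t show what validate does under the hood. As a hypothetical sketch (the rule table and helper here are my invention, not MCollective’s implementation), a helper like this just checks a request parameter against a named rule before the action uses it:

```ruby
# Hypothetical sketch of a validate-style helper: look up a named
# rule and check the request parameter against it.
VALIDATORS = {
  :shellsafe => /\A[^`$;|&><\n]*\z/   # reject common shell metacharacters
}

def validate(request, key, rule)
  value = request[key].to_s

  raise "#{key} failed #{rule} validation" unless value =~ VALIDATORS[rule]

  value
end

validate({:msg => "hello world"}, :msg, :shellsafe)   # => "hello world"
```

Centralising checks like this is what lets every agent get input validation for free instead of each one rolling its own.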

You can get full information on this new feature here. We’ve also created a lot of new wiki docs about ActiveMQ setup for use with MCollective and we’ve recorded a new introduction video here.

Exim, MCollective and speed

Usually when I describe MCollective to someone, they think it’s nice and all, but the infrastructure to install seems like quite a bit, so parallel SSH tools like cap seem a better choice. They like the discovery and so on, but the benefit isn’t all that clear.

I have a different end game in mind than just restarting services, and I’ve made a video showing how I manage a cluster of Exim servers using MCollective. It should give you some idea of the possibilities the architecture I chose brings to the table and what it can enable.

While watching the video, please note how quick and interactive everything is, and keep the following in mind while you watch the dialog-driven app:

  • I am logged in via SSH from UK to Germany into a little VM there
  • The mcollective client talks to a Germany based ActiveMQ
  • The 4 mail servers in the 2nd part of the demo are located 2 x US, 1 x UK and 1 x DE
  • I have ActiveMQ instances in each of the above countries, clustered together using the technique previously documented here.

Here’s the video then, as before I suggest you hit the full screen link and watch it that way to see what’s going on.




This is the end game: I want a framework that enables this kind of tool on the Unix CLI, complete with pipes as you’d expect. Things like the dialog interface you see here, on the web, in general shell scripts and in Nagios checks like with cucumber-nagios, all sharing an API and all talking to a collective of servers as if they were one. I want to make building these apps easy, quick and fun.

Splitting MySQL dumps by table – take 2

A few days ago I posted about splitting mysqldump files using sed and a bit of Ruby to drive it. Turns out that sucked, a lot.

I eventually killed it after 2 days of not finishing. The problem, obviously, is that sed does not seek to a position; it reads the whole file. So pulling out the last line of a 150GB file requires reading 150GB of data, and if you have 120 tables this is a huge problem.
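A rough back-of-envelope (my numbers, assuming each extraction scans on average half the file) shows why a single pass wins:

```ruby
file_gb = 150
tables  = 120

# sed approach: each extraction scans from the start of the file,
# reading on average half of it per table
sed_read_gb = tables * file_gb / 2      # => 9000

# single-pass approach: the file is read exactly once
single_read_gb = file_gb                # => 150

puts "sed: ~#{sed_read_gb} GB read, single pass: #{single_read_gb} GB"
```

That’s roughly 60x less I/O, which matches the difference between two days of not finishing and under an hour.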

The code below is a new take on it: I just read the file with Ruby and spit out the resulting files in a single read operation. Start to finish on the same data took less than an hour. When run it gives you nice output like this:

Found a new table: sms_queue_out_status
    writing line: 1954 2001049770 bytes in 91 seconds 21989557 bytes/sec
 
Found a new table: sms_scheduling
    writing line: 725 729256250 bytes in 33 seconds 22098674 bytes/sec

The new code below:

#!/usr/bin/ruby
 
if ARGV.length == 1
    dumpfile = ARGV.shift
else
    puts("Please specify a dumpfile to process")
    exit 1
end
 
STDOUT.sync = true
 
if File.exist?(dumpfile)
    d = File.new(dumpfile, "r")
 
    outfile = false
    table = ""
    linecount = tablecount = starttime = 0
 
    while (line = d.gets)
        # mysqldump marks each table with a "-- Table structure" comment
        if line =~ /^-- Table structure for table .(.+)./
            table = $1
            linecount = 0
            tablecount += 1
 
            puts("\n\n") if outfile
            outfile.close if outfile
 
            puts("Found a new table: #{table}")
 
            starttime = Time.now
            outfile = File.new("#{table}.sql", "w")
        end
 
        if table != "" && outfile
            outfile.syswrite line
            linecount += 1
            elapsed = Time.now.to_i - starttime.to_i + 1
            print("    writing line: #{linecount} #{outfile.stat.size} bytes in #{elapsed} seconds #{outfile.stat.size / elapsed} bytes/sec\r")
        end
    end
end
 
puts

Splitting MySQL dumps by table

I often need to split large mysqldump files into smaller ones so I can do selective imports, from live to dev for example, where you might not want all the data. Each time I seem to re-script some solution for the problem. So here’s my current solution: a simple Ruby script. You give it the path to a mysqldump file and it outputs a string of echo and sed commands to do the work.

UPDATE: Please do not use this code, it’s too slow and inefficient, new code can be found here.

Just pipe its output to a file and run it via the shell when you’re ready to do the splitting. At the end you’ll have a file per table in your cwd.

#!/usr/bin/ruby
 
prevtable = ""
prevline = 0
 
if ARGV.length == 1
    dumpfile = ARGV.shift
else
    puts("Please specify a dumpfile to process")
    exit 1
end
 
if File.exist?(dumpfile)
   # grep -n prefixes each match with its line number in the dump
   %x[grep -n "Table structure for table" #{dumpfile}].each_line do |line|
       if line =~ /(\d+):-- Table structure for table .(.+)./
           curline = $1.to_i
           table = $2
 
           unless prevtable == ""
               puts("echo \"\`date\`: Processing #{prevtable} - lines #{prevline - 1} to #{curline - 2}\"")
               puts("sed -n '#{prevline - 1},#{curline - 2}p;#{curline - 2}q' #{dumpfile} > #{prevtable}.sql")
               puts
           end
 
           prevline = curline
           prevtable = table
       end
   end
 
   # the last table runs to the end of the dump
   unless prevtable == ""
       puts("echo \"\`date\`: Processing #{prevtable} - lines #{prevline - 1} to end\"")
       puts("sed -n '#{prevline - 1},$p' #{dumpfile} > #{prevtable}.sql")
   end
else
   puts("Can't find dumpfile #{dumpfile}")
   exit 1
end

It’s pretty fast; the heavy lifting is all done with grep and sed, Ruby is just there to drive those commands and parse a few lines of output.

Running it produces something like this:

$ split-mysql-dump.rb exim.sql
echo "`date`: Processing domain_sender_whitelist - lines 32 to 47"
sed -n '32,47p;47q' exim.sql > domain_sender_whitelist.sql
 
echo "`date`: Processing domain_valid_users - lines 48 to 64"
sed -n '48,64p;64q' exim.sql > domain_valid_users.sql