I’m just blogging this because it took me ages to figure out; it seems so simple now, but I guess that’s how it usually goes.
The problem I have is that I want a plugin to be able to make a method either using a normal Ruby def foo or via some DSL’ish helpers.
class Foo < Base
  register_action(:name => "do_something", :description => "foo")
  def do_something_action
  end

  register_action(:name => "do_something_else", :description => "foo") do
    # body of the action here
  end
end
The above code should make me two methods, do_something_action and do_something_else_action, and from the outside they should be identical. Here’s the base class that makes this happen correctly:
class Base
  def self.register_action(input, &block)
    name = input[:name]

    self.module_eval { define_method("#{name}_action", &block) } if block_given?
  end
end
It’s pretty simple: we’re just using define_method in the scope of the module, and that does the rest.
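To see the pattern end to end, here’s a minimal, self-contained sketch; the greet action and its return value are made up for illustration:

```ruby
# Minimal end-to-end sketch of the pattern above; the "greet" action
# is made up for illustration.
class Base
  def self.register_action(input, &block)
    name = input[:name]

    # define a real instance method named "#{name}_action" from the block
    self.module_eval { define_method("#{name}_action", &block) } if block_given?
  end
end

class Foo < Base
  register_action(:name => "greet", :description => "says hello") do
    "hello"
  end
end

Foo.new.greet_action  # => "hello"
```

The method created this way is indistinguishable from one written with def: it shows up in instance_methods and responds normally to respond_to?.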
With the new SimpleRPC system in MCollective we have a simple interface for creating agents. The way to call an agent would be:
$ mc-rpc service status service=httpd
This is all well and good, and easy enough, but it requires you to know a lot: you need to know there’s a status action, and you need to know it expects a service argument. Not great.
I’m busy adding the ability for an agent to register its metadata and interface so that 3rd party tools can dynamically generate useful interfaces.
A sample registration for the service agent is:
register_meta(:name => "SimpleRPC Service Agent",
              :description => "Agent to manage services using the Puppet service provider",
              :author => "R.I.Pienaar",
              :license => "GPLv2",
              :version => 1.1,
              :url => "http://mcollective-plugins.googlecode.com/",
              :timeout => 60)
["start", "stop", "restart", "status"].each do |action|
  register_input(:action => action,
                 :name => "service",
                 :prompt => "Service Name",
                 :description => "The service to #{action}",
                 :type => :string,
                 :validation => '^[a-zA-Z\-_\d]+$',
                 :maxlength => 30)
end
This includes all the metadata: version, timeout, and for every input argument its validation, prompt and help text.
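As an illustration of what that metadata enables, here’s a hypothetical sketch of a client-side check against a registered input’s :validation and :maxlength fields; the valid_input? helper is made up for this example and is not MCollective API:

```ruby
# Hypothetical helper: validate a value against an input spec like the
# one registered above. valid_input? is illustrative, not MCollective API.
def valid_input?(value, spec)
  return false if value.length > spec[:maxlength]

  # the registered validation is a regex expressed as a string
  !!(value =~ Regexp.new(spec[:validation]))
end

spec = {:validation => '^[a-zA-Z\-_\d]+$', :maxlength => 30}

valid_input?("httpd", spec)     # => true
valid_input?("rm -rf /", spec)  # => false
```

Note that in Ruby ^ and $ anchor lines rather than the whole string, so \A and \z would be stricter anchors for real input validation.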
A few days ago I released Marionette Collective version 0.4.0 and today I released 0.4.1. This release branch introduces a major new feature called Simple RPC.
In prior releases it took quite a bit of Ruby knowledge to write an agent and a client. In addition, clients all ended up implementing their own little protocols for data exchange. We’ve simplified agents and clients, and we’ve created a standard protocol between them.
A standard protocol between clients and agents means we have a standard one-size-fits-all client program called mc-rpc, and it opens the door to writing simple web interfaces that can talk to all compliant agents. We’ve made a test REST Simple RPC bridge as an example.
Writing a client can now be done without all the earlier setup, command line parsing and so forth; it can now be as simple as:
require 'mcollective'
include MCollective::RPC
mc = rpcclient("rpctest")
printrpc mc.echo(:msg => "Welcome to MCollective Simple RPC")
printrpcstats
This simple client has full discovery and full --help output, and takes care of printing results and stats in a uniform way.
This should make it much easier to write more complex agents, like deployers that interact with packages, firewalls and services all in a single simple script.
We’ve also taken a better approach to presenting client output: instead of listing 1000 OKs on success, it now only prints what’s failing.
Output from the above client would look something along these lines:
$ hello.rb
* [ ============================================================> ] 43 / 43
Finished processing 43 / 43 hosts in 392.60 ms
As you can see there’s a nice progress indicator that works for 1 or 1000 nodes. You can still see the status of every reply by running the client in verbose mode, which also adds more detailed stats at the end.
Agents are also much easier; here’s an echo agent:
class Rpctest < RPC::Agent
  def echo_action
    validate :msg, :shellsafe

    reply.data = request[:msg]
  end
end
You can get full information on this new feature here. We’ve also created a lot of new wiki docs about ActiveMQ setup for use with MCollective and we’ve recorded a new introduction video here.
Usually when I describe MCollective to someone they think it’s nice and all, but the infrastructure to install is quite a bit, so parallel SSH tools like cap seem a better choice. They like the discovery and such, but the value isn’t all that clear.
I have a different end game in mind than just restarting services, and I’ve made a video showing how I manage a cluster of Exim servers using MCollective. It should give you some ideas about the possibilities the architecture I chose brings to the table and what it can enable.
While watching the video, please note how quick and interactive everything is, and keep the following in mind while you watch the dialog-driven app:
I am logged in via SSH from UK to Germany into a little VM there
The mcollective client talks to a Germany based ActiveMQ
The 4 mail servers in the 2nd part of the demo are based 2 x US, 1 x UK and 1 x DE
I have ActiveMQ instances in each of the above countries, clustered together using the technique previously documented here.
Here’s the video then, as before I suggest you hit the full screen link and watch it that way to see what’s going on.
This is the end game: I want a framework that enables this kind of tool on the Unix CLI (complete with pipes, as you’d expect), things like the dialog interface you see here, on the web, in general shell scripts and in Nagios checks as with cucumber-nagios, all sharing an API and all talking to a collective of servers as if they were one. I want to make building these apps easy, quick and fun.
I eventually killed it after two days of not finishing. The problem, obviously, is that sed does not seek to the position; it reads the whole file. So pulling out the last line of a 150GB file requires reading 150GB of data, and if you have 120 tables that is a huge problem.
The code below is a new take on it: I just read the file with Ruby and spit out the resulting files in a single read pass; start to finish on the same data took less than an hour. When run it gives you nice output like this:
Found a new table: sms_queue_out_status
writing line: 1954 2001049770 bytes in 91 seconds 21989557 bytes/sec
Found a new table: sms_scheduling
writing line: 725 729256250 bytes in 33 seconds 22098674 bytes/sec
The new code below:
#!/usr/bin/ruby

if ARGV.length == 1
  dumpfile = ARGV.shift
else
  puts("Please specify a dumpfile to process")
  exit 1
end

STDOUT.sync = true

if File.exist?(dumpfile)
  d = File.new(dumpfile, "r")

  outfile = false
  table = ""
  linecount = tablecount = starttime = 0

  while (line = d.gets)
    if line =~ /^-- Table structure for table .(.+)./
      table = $1
      linecount = 0
      tablecount += 1

      puts("\n\n") if outfile

      puts("Found a new table: #{table}")
      starttime = Time.now

      outfile = File.new("#{table}.sql", "w")
    end

    if table != "" && outfile
      outfile.syswrite line
      linecount += 1

      elapsed = Time.now.to_i - starttime.to_i + 1
      print(" writing line: #{linecount} #{outfile.stat.size} bytes in #{elapsed} seconds #{outfile.stat.size / elapsed} bytes/sec\r")
    end
  end
end

puts
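The single-pass technique the script relies on can be illustrated on a small in-memory sample; the table names and SQL here are made up:

```ruby
require 'stringio'

# In-memory stand-in for a mysqldump; the table headers follow the same
# format the script above matches on.
sample = <<DUMP
-- Table structure for table `users`
CREATE TABLE users (id INT);
-- Table structure for table `orders`
CREATE TABLE orders (id INT);
DUMP

tables = Hash.new { |h, k| h[k] = [] }
current = nil

# single pass: switch the destination whenever a table header appears
StringIO.new(sample).each_line do |line|
  current = $1 if line =~ /^-- Table structure for table .(.+)./
  tables[current] << line if current
end

tables.keys  # => ["users", "orders"]
```

Every line is examined exactly once, so the cost is linear in the size of the dump no matter how many tables it contains, which is the whole point versus the sed approach.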
I often need to split large mysql dumps into smaller files so I can do selective imports, from live to dev for example, where you might not want all the data. Each time I seem to re-script a solution to the problem. So here’s my current one: a simple Ruby script; you give it the path to a mysqldump and it outputs a string of echo and sed commands to do the work.
UPDATE: Please do not use this code, it’s too slow and inefficient, new code can be found here.
Just pipe its output to a file and run that via shell when you’re ready to do the splitting. At the end you’ll have a file per table in your cwd.