Splunk: Unscaling units

I'm working on a Splunk Application for Solaris.

One of the commands of interest to me is fsstat(1M). Here's its output for two filesystem types (zfs, nfs4):

solaris# fsstat zfs nfs4 1 1
 new  name   name  attr  attr lookup rddir  read read  write write
 file remov  chng   get   set    ops   ops   ops bytes   ops bytes
2.21K   881   521  585K 1.22K  1.71M 9.34K 1.66M 21.3G  765K 10.7G zfs
    0     0     0     0     0      0     0     0     0     0     0 nfs4
    0     0     0    20     0      0     0   279  997K   142  997K zfs
    0     0     0     0     0      0     0     0     0     0     0 nfs4

While Splunk is very flexible at parsing almost any output, for command output it is better to do a little pre-formatting:

- Make the headers a single line
- Drop the summary line (activity since the filesystem was loaded/mounted)
- Find a way to do stats on the auto-scaled values (K, M, G, T)

First, I wrote a script to adjust the output. The output looks like this now:

./fsstat.pl zfs nfs4
new_file name_remov name_chng attr_get attr_set lookup_ops rddir_ops read_ops read_bytes write_ops write_bytes fstype
    1     0     1     9     1     27     0   260 1.14M   145 1.18M zfs
    0     0     0     0     0      0     0     0     0     0     0 nfs4


This makes it much easier to parse the data.
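
To get this output into Splunk, the script can be wired up as a scripted input. Here is a minimal sketch of what the inputs.conf stanza might look like; the script path, arguments, and interval are assumptions, only the sourcetype name solaris_fsstat comes from the searches below:

# inputs.conf -- hypothetical scripted input for the pre-formatting script
# (path, arguments and interval are assumptions, not the app's actual settings)
[script://./bin/fsstat.pl zfs nfs4]
interval = 60
sourcetype = solaris_fsstat
disabled = false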

A Splunk search with multikv will split this into individual fields:

sourcetype="solaris_fsstat" |multikv

We now have single-line events with the fields new_file, name_remov, name_chng and so on.

The trouble is that the fsstat command automatically scales values into a human-readable format, and this cannot be disabled.

But we are lucky: Splunk can solve this problem for us. To unscale, e.g., read_ops, we add a bit of Splunk magic to the search:

| rex field=read_ops "(?<read_ops_amount>[\d\.]+)(?<read_ops_unit>\w+)?"
| eval read_ops_unscaled=case(
    isnull(read_ops_unit) OR read_ops_unit=="", read_ops_amount,
    read_ops_unit=="K", read_ops_amount*1024, read_ops_unit=="M", read_ops_amount*1024*1024,
    read_ops_unit=="G", read_ops_amount*1024*1024*1024, read_ops_unit=="T", read_ops_amount*1024*1024*1024*1024)

Now we have created a new field called read_ops_unscaled.
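
To sanity-check the extraction, you can append a table command to the search above and look at the pieces side by side:

| table read_ops read_ops_amount read_ops_unit read_ops_unscaled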

Wasn't this cool?

As this is quite hard to type, I have created macros for every field that has to be unscaled.
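
For example, a per-field macro for read_ops could be defined in macros.conf roughly like this (a sketch; the macro name unscale_read_ops is an assumption, the app's actual macro names may differ):

# macros.conf -- hypothetical per-field macro (the name is an assumption)
[unscale_read_ops]
definition = rex field=read_ops "(?<read_ops_amount>[\d\.]+)(?<read_ops_unit>\w+)?" \
| eval read_ops_unscaled=case(isnull(read_ops_unit) OR read_ops_unit=="", read_ops_amount, read_ops_unit=="K", read_ops_amount*1024, read_ops_unit=="M", read_ops_amount*1024*1024, read_ops_unit=="G", read_ops_amount*1024*1024*1024, read_ops_unit=="T", read_ops_amount*1024*1024*1024*1024)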

After this, I created a master macro called `unscale_fsstat` which calls all the other macros. Now it is trivial to run a search and do some stats on the results.
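
The master macro then just chains the per-field macros, and a search can use it directly. Again a sketch: the per-field macro names and the read_bytes_unscaled field are assumptions following the naming pattern above, only `unscale_fsstat` comes from the text.

# macros.conf -- hypothetical master macro chaining the per-field macros
[unscale_fsstat]
definition = `unscale_read_ops` | `unscale_read_bytes` | `unscale_write_ops` | `unscale_write_bytes`

With that in place, a stats search could look like:

sourcetype="solaris_fsstat" | multikv | `unscale_fsstat` | stats avg(read_bytes_unscaled) AS avg_read_bytes by fstype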

Happy Splunking!
