Splunking Oracle's ZFS Appliance Part II

In Part I, I wrote about storing long-term analytics data in Splunk.

Wouldn't it be nice to also have storage capacity tracked with Splunk?

This is how it's done:

1. Get pool properties


#!/bin/ksh

# List capacity of all pools in a system to ${outputdir}/${ipname}.pools.log
# Example: listPools.ksh /tmp 10.16.5.14

typeset outputdir=$1
typeset ipname=$2
typeset debug=$3
typeset user=monitor

if [ -z "$1" -o -z "$2" ]; then
  printf "\nUsage: $0 <output dir> <ZFSSA ipname> [ debug ]\n\n"
  exit 1
fi

mkdir -p ${outputdir}
dat=$(date +'%y-%m-%d %H:%M:%S')

# The heredoc is unquoted, so $dat expands shell-side before the script
# is sent to the appliance; \\n survives as \n for the appliance printf.
ssh -T ${user}@${ipname} << --EOF-- > ${outputdir}/${ipname}.pools.log
script
run('status');
run('storage');
var poollist=list();

printf("Time,pool,avail,compression,used,space_percentage\\n");

for(var k=0; k<poollist.length; k++) {
  run('select ' + poollist[k]);

  var space_used=get('used')/1024/1024/1024;
  var space_avail=get('avail')/1024/1024/1024;
  var compression=get('compression');
  var space_percentage=space_used/(space_used + space_avail)*100;

  printf("$dat,%s,%0.0f,%0.2f,%0.0f,%0.0f\\n",poollist[k],space_avail,compression,space_used,space_percentage);

  run('done');
}
run('done');

--EOF--

exit $?
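Run the script from cron (e.g. hourly) so Splunk gets a data point at a fixed interval; the paths in a crontab line like `0 * * * * /opt/scripts/listPools.ksh /var/log/zfssa 10.16.5.14` are of course yours to choose. The CSV can also be sanity-checked locally. A minimal sketch (the sample line is made up; the awk expression recomputes space_percentage exactly as the appliance script does, used / (used + avail) * 100):

```shell
#!/bin/sh
# Hypothetical sample line in the format the script emits:
# Time,pool,avail,compression,used,space_percentage
line='24-05-01 06:00:00,pool-0,620,1.27,310,33'

# $3 = avail (GiB), $5 = used (GiB); recompute the percentage column
echo "$line" | awk -F',' '{ printf "%s used of %s GiB total: %0.0f%%\n", $5, $3+$5, $5/($5+$3)*100 }'
# -> 310 used of 930 GiB total: 33%
```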

2. Write Splunk props.conf

As an exercise...
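For those who want a head start on the exercise: since the script writes a header line followed by CSV rows, Splunk's indexed CSV extraction fits. A minimal props.conf sketch (the sourcetype name `zfssa:pools` is my choice; the TIME_FORMAT matches the `date +'%y-%m-%d %H:%M:%S'` call in the script):

```
[zfssa:pools]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = Time
TIME_FORMAT = %y-%m-%d %H:%M:%S
SHOULD_LINEMERGE = false
```

Pair it with an inputs.conf monitor stanza watching `${outputdir}/*.pools.log` with `sourcetype = zfssa:pools`.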


3. Enjoy Splunk Dashboards:

[Dashboard screenshots]

4. Repeat for projects and shares.
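The same pattern applies one level down: instead of listing pools, the appliance-side script walks projects and the shares inside them. A rough sketch of the inner script, meant to drop into the same unquoted ssh heredoc as the pool script (hence `$dat` and the escaped `\\n`); it only runs in the ZFSSA CLI, and the `space_data` property is my assumption, so check the scripting reference for your firmware:

```
script
run('shares');
var projects=list();                      // all projects
printf("Time,project,share,space_data\\n");
for (var i=0; i<projects.length; i++) {
  run('select ' + projects[i]);
  var shares=list();                      // shares within this project
  for (var j=0; j<shares.length; j++) {
    run('select ' + shares[j]);
    printf("$dat,%s,%s,%0.0f\\n", projects[i], shares[j],
           get('space_data')/1024/1024/1024);
    run('done');
  }
  run('done');
}
run('done');
```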
PS: If I find the time, I will eventually package this into a Splunk App.
