
Splunking Oracle's ZFS Appliance Part II

In my first part I wrote about storing long-term analytics data in Splunk. Wouldn't it be nice to also have storage capacity tracked with Splunk? This is how it's done:

1. Get pool properties

#!/bin/ksh
# List the capacity of all pools in a system to ${outputdir}/${ipname}.pools.log
# Example: listPools.ksh /tmp 10.16.5.14
typeset outputdir=$1
typeset ipname=$2
typeset debug=$3
typeset user=monitor

if [ -z "$1" -o -z "$2" ]; then
  printf "\nUsage: $0 <output dir> <ZFSSA ipname> [ debug ]\n\n"
  exit 1
fi

mkdir -p ${outputdir}
dat=$(date +'%y-%m-%d %H:%M:%S')

ssh -T ${user}@${ipname} << --EOF-- > ${outputdir}/${ipname}.pools.log
script
run('status');
run('storage');
var poollist=list();
printf("Time,pool,avail,compression,used,space_percentage\\n");
for (var k=0; k < poollist.length; k++) {
  run('select ' + poollist[k]);
  var space_used=get(...
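The excerpt breaks off inside the scripting loop. As a rough sketch of where it is heading, the loop would emit one CSV line per pool, matching the header printed above. The property names passed to get(), the percentage arithmetic, and the run('done') step are my assumptions, not the original script:

for (var k=0; k < poollist.length; k++) {
  run('select ' + poollist[k]);
  // Property names assumed from the CSV header above,
  // and get() assumed to return numeric byte counts here
  var space_used  = get('used');
  var space_avail = get('avail');
  var compression = get('compression');
  // Assumed arithmetic for the space_percentage column
  var pct = Math.round(100 * space_used / (space_used + space_avail));
  // ${dat} is expanded by the shell when the heredoc is sent
  printf("${dat},%s,%s,%s,%s,%d\\n", poollist[k], space_avail, compression, space_used, pct);
  run('done');  // assumed: leave the pool context before the next iteration
}
--EOF--

Each run overwrites ${ipname}.pools.log with one fresh snapshot per pool, ready for Splunk to pick up.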

Splunking Oracle's ZFS Appliance

We have a bunch of Oracle ZFS Appliances. What I really like is their integrated DTrace-based analytics feature. However, some things are missing or cause problems:

- Storing long-term analytics data on the appliances produces a lot of data on the internal disks. This can fill up your appliance and, in the worst case, slow down the appliance software.
- Scaling the timeline out too far makes peaks invisible. This is probably a problem of the rendering software used on the appliance (JavaScript).
- Comparing all our appliances is not possible. There is no central analytics console.

As we are heavy Splunk users, I sat down with our friendly storage consultant from Oracle and we brought these two great products closer together. This is how we did it:

1. Setting up analytics worksheets

First we had to create the analytics worksheets. This is best done using the CLI interface, as the order of drilldowns should always be the same. Otherwise fields in the gener...
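The excerpt is cut off before the integration details, but the Splunk-side glue follows the same pattern throughout: a cron job runs a collector script (like listPools.ksh from the Part II post above) against each appliance, and Splunk monitors the resulting CSV files. The paths, interval, and sourcetype below are illustrative assumptions, not the post's actual configuration:

# Hypothetical crontab entry on the collector host:
# poll the appliance every 15 minutes, writing CSV to /var/log/zfssa
*/15 * * * * /usr/local/bin/listPools.ksh /var/log/zfssa 10.16.5.14

# Hypothetical $SPLUNK_HOME/etc/system/local/inputs.conf stanza
# telling Splunk to index those files:
[monitor:///var/log/zfssa/*.pools.log]
sourcetype = zfssa_pools
disabled = false

Field extraction for the comma-separated columns can then be configured in props.conf/transforms.conf, so each column becomes a searchable field.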