Showing posts from October, 2009

SLOG Latency

After reading Brendan's newest blog entry, I was curious what kind of slog latency we would see for our data-migration load.

As a reminder, only synchronous writes go to the slog SSD devices.
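"Synchronous" here means writes the application asks to be made stable immediately, e.g. a file opened with O_DSYNC or a write followed by fsync(2); those are the operations the ZFS intent log absorbs. A minimal, generic POSIX sketch (not appliance-specific) of what such a write looks like from the client side:

```python
import os
import tempfile
import time

# A plain buffered write may sit in the page cache; an O_DSYNC write
# must be stable on disk before the call returns. On ZFS with a slog,
# it is exactly this kind of write that lands on the SSD log device.
path = os.path.join(tempfile.mkdtemp(), "demo")

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o644)
t0 = time.time()
os.write(fd, b"x" * 4096)  # returns only after the data is stable
sync_latency_s = time.time() - t0
os.close(fd)

print(f"synchronous 4 KiB write took {sync_latency_s * 1e6:.0f} us")
```

On a pool without a separate log device, the same write has to wait for a rotating disk instead, which is where the latency difference below comes from.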

Since a migration is running, we see mostly NFS write operations:

In our configuration the slog SSD is mirrored (HDD4 and HDD8), hence both show the same number of IOPS:

The next picture shows the latency for our SSD slog device HDD4. Latencies start at 79 us and are mostly under 200 us. There are some outliers, but approx. 95% are under 500 us:

This matches quite well with the values Brendan blogged about (137-181 us), which include NFSv3 latency. For reference (no picture here), we see latencies of mostly about 170-500 us for NFSv4.

By the way, slog devices are mostly write-only devices, as shown here. They are only read from if things go really bad, i.e. when the intent log has to be replayed after a crash...

Resilvering progress...found!

Found the CLI menu for resilvering progress :-)

file1:configuration storage (pool-1)> show

pool = pool-1 file1 mirror_nspf log_mirror_nspf degraded

pool = pool-1
status = degraded
owner = file1
profile = mirror_nspf
log_profile = log_mirror
cache_profile = cache_stripe
scrub = resilver in progress for 4h46m, 83.85% done, 0h55m to go
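The "to go" estimate in that status line is consistent with a simple linear extrapolation from elapsed time and percent done. A quick sanity check of the numbers above:

```python
# Sanity-check the resilver ETA: assuming a constant resilver rate,
# remaining time = elapsed * (1 - done) / done.
elapsed_min = 4 * 60 + 46   # "resilver in progress for 4h46m"
done = 0.8385               # "83.85% done"

remaining_min = elapsed_min * (1 - done) / done
print(f"estimated time to go: {remaining_min:.1f} min")  # ~55 min, matching "0h55m to go"
```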

High IOPS for 1TB Disks

Wondering how 1TB disks can deliver such high IOPS.

HDD8 and HDD4 are log devices (SSD).

UPDATE: For the moment I assume these are writes to the disks' write cache.

Being bored....

...while doing 22536 NFS ops per second, and doing gzip-2 at the same time.

HSP (the Hybrid Storage Pool) really works. NFS synchronous operations get "eaten" by the SSD devices (1133 IOPS).