Sunday, July 22, 2007

ZFS Rocks

Jeff Bonwick writes about ZFS license announcements and how it rocks.

Saturday, July 21, 2007

ZFS Take-Off

Robin Harris (ZDNet) and Jörg (c0t0d0s0) are writing about RAID6 and ZFS, respectively.

RAID6 will surely be a marketing success, as most people do not know about ZFS or think that moving from RAID5 to RAID6 will solve all their problems. Those companies that have heard about ZFS have certainly had a look at it.

Most companies are rather conservative about adopting new technology. This is not a bad thing, especially when trusting your precious data to a new filesystem.

Paradoxically, without proper checksumming of the kind ZFS does, your data could be at higher risk, even though ZFS is a rather new technology.

While there have been some problems with ZFS, none of them have affected the on-disk data. This is certainly the result of ZFS being tested more thoroughly than any other filesystem (leaving real-world "testing" aside).

There are still some issues that may prevent ZFS from being deployed more broadly:

- Performance issues on storage arrays with stable storage (not ignoring cache flushes)
- No dynamic LUN resizing (not really a ZFS issue)
- Database performance may not yet be at UFS DirectIO level (work is under way)
- No long-term database performance experience available
- Booting from ZFS not yet integrated
- Third-party support missing (e.g. backup solutions are not there yet)

Sun is working on these technical issues (and I know they are), so my guess is that ZFS will really take off within a timeframe of two years. Compared with the age of UFS, that is a short chapter.

Thursday, July 19, 2007

More ZFS performance improvements for databases

There is a new fix to improve performance for databases on ZFS.

Can't wait to see OLTP benchmarks where ZFS is faster than UFS with DirectIO.
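In the meantime, one tuning step that is already common practice (independent of the fix above) is to match the dataset's recordsize to the database block size before any data files are created; a minimal sketch, assuming a pool named tank and an 8 KB database block size:

    # create a dedicated dataset for the database files
    zfs create tank/db
    # match the recordsize to the DB block size (8 KB here);
    # this only affects files written after the change
    zfs set recordsize=8k tank/db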

Oh, and did you know there is some work going on to get the ZIL onto separate devices (NVRAM or solid state disks)?

(For those who don't know what the ZIL is, look here)
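Once that work integrates, pointing the intent log at a dedicated fast device should be a one-liner; a sketch, assuming the proposed log-vdev syntax and a hypothetical device name:

    # move synchronous writes onto a dedicated NVRAM/SSD device
    zpool add tank log c2t0d0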

Xen code drop to OpenSolaris

After waiting almost a year, the Lego bricks are falling neatly into place. This means it is finally possible to run Windows under Solaris (where needed :-).

Wednesday, July 18, 2007

Splunk 3.0 with Access-Control

Yeaah!!!

Just found out that Splunk 3.0 will support access control. This means, for example, that developers can debug production problems without logging in to the host itself. They will see only the logfiles relevant to finding the problem...

This is a huge step forward, as logs often contain classified data.

Where should security happen?

As close to the data as possible!

Yesterday's post from Erik reminded me of a paradigm that came up when I implemented the SSH Tectia Suite. It's the question of where to do security properly. The answer: as close to the data as possible.

The problem with today's corporate networks is that the "enemy" is already inside. So many people (internal and external) connect to a company's network that firewalls are almost irrelevant. This effect is called deperimeterization.

Firewalls have a false reputation of protecting everything. But because so many people have access to both sides of a firewall, it isn't actually very secure.

What could be a new approach? The Jericho Forum (a security-focused group) says: "Individual hosts should be able to defend themselves".

I certainly agree with that. Most operating systems contain integrated firewalls waiting to be activated. Many applications provide extended authentication features and encryption (e.g. TLS/SSL). Not to forget the SSH protocol for managing the operating system instead of telnet.
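On Solaris 10, for example, the first steps toward such host self-defense are just a couple of SMF commands; a minimal sketch (the actual IP Filter ruleset still has to be written to match the host's role):

    # drop the cleartext login service in favor of SSH
    svcadm disable svc:/network/telnet:default
    # activate the bundled host firewall (IP Filter)
    svcadm enable svc:/network/ipfilter:default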

While in theory you could take all firewalls away and rely on host security, in practice you wouldn't do that, of course. As an analogy to real life: you would certainly still lock the gate to your stately home...

Tuesday, July 17, 2007

Paul Murphy clarifies the Sun Ray "difference"

In today's blog post, Paul Murphy lists the advantages of Sun Ray technology, as an answer to a comment by Erik Engbrecht (a regular visitor to Murph's blog).

Of course, Erik has already answered in his blog.

While Erik certainly makes some valid points, I personally believe that having desktop processing happen in the datacenter would be the best approach for a lot of companies.

I worked for several banks, and you wouldn't believe the investments made or planned to make desktops/laptops secure (harddisk encryption, USB-port-locking software, reverse firewalls, virus scanners, security audit tools, etc.).

Most of these security activities just aren't needed with a stateless device.

Another point was made that Sun Rays depend on a working network connection. But as more and more vital information is kept on servers that require online access, you already depend on a connection to your company's network anyway.

Offline work usually requires documents to be carried along in paper form (bad!) or on a harddisk. The latter would again require harddisk encryption.

Do you trust harddisk encryption made in China?

Links for 2007-07-17

Monday, July 16, 2007

SunRay Stuff

ThinGuy has a few interesting blog entries...

First of all, there are some YouTube movies about the Sun/Mitel partnership. This sounds like a great deal. The deal consists of two parts: a Multi-Instance Call Server (I guess this is something like a phone switch), and a unified solution for SunRays and the Mitel IP phone. This allows hotdesking not only between SunRays, but also between IP phones. You can find more information here.

Another entry in ThinGuy's blog also sounds promising. He will talk in August about the upcoming SunRay Software 4.2. He promises some "trendy" new features for desktop virtualisation. As always, these kinds of product releases are never just around the corner...

I really believe that in the next 1-2 years desktop virtualization will be the next big management buzzword (not in a negative sense).

The only thing Sun desperately needs, if it wants SunRays to take off, is salesmen who really want to sell thin clients. Selling SunRays is certainly harder than selling big boxes, but hey, take it as a challenge!

But maybe Sun salesmen just don't understand the real-life problems of today's desktops and how thin clients solve most of them...

Sunday, July 15, 2007

Links for 2007-07-15

Configuration Engines for Unix

As a system administrator, there is one problem that is persistent: standardizing and keeping track of configuration changes.

Standardizing begins with the installation of a system. All major Unix flavors have their own installation methods. As mainly a Solaris administrator, I'm very familiar with the Jumpstart framework.

Using plain-vanilla Jumpstart is fine if no customization beyond the OS is needed (special configurations, application installation).

For advanced customization, Sun Professional Services UK developed the JET framework, an addition to Jumpstart. The advantage of JET is its use of template files: all information about a client to be installed is kept in one file, and the framework provides a simple way to add software and make further configuration changes.
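From memory, a JET template is just a flat file of shell-style variables; the variable names below are how I remember them and may not match the current JET release exactly:

    # excerpt of a per-client JET template (variable names from memory)
    base_config_ClientArch="sun4u"
    base_config_ClientEther="0:3:ba:xx:xx:xx"
    base_config_ClientOS="Solaris10"
    # additional JET modules to run for this client
    base_config_products="custom"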

With JET it is possible to get a host running with all its settings and applications. But as soon as there are changes on a specific host that could affect standardization, those changes have to be propagated back to JET.

A couple of years ago, when we used our own framework on top of Jumpstart, we already had this discussion about getting changes back into Jumpstart. We never found a real solution to this problem. Sometimes changes were forgotten through human error (laziness?), or some changes would not fit into the framework.

Fortunately there are others who have had the same problem. After googling for some time, I found three configuration engines:

While I haven't looked at these tools in detail, all of them manage configuration files from a central host. Configuration changes are always initiated from the central host; the new configuration is either pushed out by the central host or pulled by the local host. If somebody changes configuration files locally, they will be overwritten. The local configurations can be generated dynamically by rules.
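To illustrate the pull model in its crudest form (a toy sketch, not how any of these engines actually works; the hostname and paths are made up):

    #!/bin/sh
    # toy pull-model client: mirror the centrally managed configs
    # over the local copies, discarding any local drift; run from cron
    MASTER=cfgmaster.example.com
    rsync -a --delete ${MASTER}::configs/$(hostname)/ /etc/managed/
    # a real engine would now apply rules and restart affected services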

The more I think about it, the more the right approach seems to be: install a plain-vanilla OS, then install additional software packages (the configuration engines support this too), and finally push the configuration to the host.

This would make it simple to reinstall or upgrade the OS on a host. Jumpstart could be used in a plain-vanilla configuration, with only the configuration engine needing to be installed.

I will certainly further investigate the tools above.

Thursday, July 12, 2007

Using ZFS clones with several Application instances

This was an interesting blog entry about E25Ks, DTrace and ZFS.

One thing I was also thinking about was the performance gain from ZFS clones when running several instances of applications that use mostly the same data.

Quote:

"Then came a flash of inspiration. Using clones of a ZFS snapshot of the data together with Zones it was possible to partition multiple instances of the application. But the really cool bit is that ZFS snapshots are almost instant and virtually free.

ZFS clones are implemented using copy-on-write relative to a snapshot. This means that most of the storage blocks on disk and filesystem cache in RAM can be shared across all instances. Although snapshots and partitioning are possible on other systems, they are not instant, and they are unable to share RAM."
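To make this concrete: partitioning the data for multiple instances takes only a handful of commands; a minimal sketch with hypothetical pool and dataset names:

    # freeze the shared application data once...
    zfs snapshot tank/appdata@golden
    # ...then give every instance its own copy-on-write clone
    zfs clone tank/appdata@golden tank/inst1
    zfs clone tank/appdata@golden tank/inst2
    # each clone initially shares all its blocks with the snapshot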

I had this idea when I was thinking about using Solaris as a Xen Dom0 and running several (at least initially) identical MS-Windows instances in Xen DomUs. The cloned MS-Windows images would be located on ZFS clones.

Most of the blocks would then reside only once in the memory of Dom0, which I expect would improve performance considerably.

The OS images could of course also be served via, e.g., ZFS iSCSI target devices, but the effect would be the same...
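For the iSCSI variant, a sketch of what exporting a cloned volume could look like, assuming the shareiscsi property in current OpenSolaris builds and made-up names:

    # clone a golden OS image volume and export it as an iSCSI target
    zfs clone tank/winimg@golden tank/win1
    zfs set shareiscsi=on tank/win1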

Splunk 3.0

Ever since I read about Splunk on Ben Rockwood's blog, I've been a huge fan. I even got a Splunk baseball cap and a T-shirt.

At my former employer, I've implemented Splunk to collect system logs for system monitoring and compliance checks/reporting.

Version 3.0 (still in beta) seems to be a huge step forward. Reporting, for example, is now very sophisticated, allowing one to create many kinds of reports (charts, tables).

To get a quick overview of different aspects of an environment, it is possible to create per-user/role dashboards.

While Splunk was initially meant mainly for sucking in log files, the goal has now shifted to indexing any kind of unstructured data.

I'm very interested in loading configuration files and monitoring them for changes (security monitoring/audits, anyone?). It is also possible to periodically index command output. This could be used for recording performance data (output from e.g. iostat, vmstat, etc.). The output from a config file or a command looks just like any other multi-line event.
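As a sketch of the command-output idea (the scripted-input stanza below is how I remember it being configured; the exact syntax may differ in the 3.0 beta, and the script path is made up):

    # inputs.conf: run a wrapper script every 5 minutes
    # and index whatever it prints
    [script:///opt/splunkscripts/iostat.sh]
    interval = 300
    sourcetype = iostat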

I think this feature is what differentiates Splunk from its competitors. Collecting log data is one thing; collecting and analyzing arbitrary unstructured data is more difficult. Here Splunk has unique features.

In the near future I will write more about what I think Splunk could be used for.