Showing posts from 2007

Debugging sendmail

To debug outgoing mail, run sendmail directly with debugging enabled and interactive delivery mode:

# /usr/lib/sendmail -d -f root@hostname -ODeliveryMode=i recipient@hostname
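A couple of related invocations can help narrow a problem down before turning on full debug output; the hostnames below are placeholders:

```shell
# Verify how sendmail would resolve and deliver to an address (no mail is sent)
/usr/lib/sendmail -bv recipient@hostname

# Interactive address-test mode: at the prompt, enter e.g.
#   3,0 recipient@hostname
# to see how rulesets 3 and 0 rewrite the address
/usr/lib/sendmail -bt
```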

I've got a green brain

Your Brain is Green. Of all the brain types, yours has the most balance. You are able to see all sides to most problems and are a good problem solver. You need time to work out your thoughts, but you don't get stuck in bad thinking patterns. You tend to spend a lot of time thinking about the future, philosophy, and relationships (both personal and intellectual). What Color Is Your Brain?

ZFS Rocks

Jeff Bonwick writes about ZFS license announcements and how it rocks.

ZFS Take-Off

Robin Harris (ZDNet) and Jörg (c0t0d0s0) are writing about RAID6 and ZFS, respectively. RAID6 will surely be a marketing success, as most people do not know about ZFS, or think that moving from RAID5 to RAID6 will solve all their problems. Those companies who have heard about ZFS have certainly had a look at it. Most companies are rather conservative when implementing new technology. This is not a bad thing, especially when trusting your precious data to a new filesystem. Paradoxically, though, without the end-to-end checksumming that ZFS provides, your data could be at higher risk, even if ZFS is a rather new technology. While there have been some problems with ZFS, none of them have affected the on-disk data. This is certainly the result of testing ZFS more thoroughly than any other filesystem (real-world "testing" aside). There are still some issues that may prevent ZFS from being deployed in a broader area: performance issues on storage arrays with stable storage (not ignori…
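As a rough illustration of the checksumming argument: a pool scrub walks every block and verifies it against its stored checksum, repairing from redundancy where possible. Pool and device names below are placeholders:

```shell
# Create a mirrored pool (device names are placeholders)
zpool create tank mirror c0t0d0 c0t1d0

# Verify all data in the pool against its checksums
zpool scrub tank

# Review any checksum errors that were found and repaired
zpool status -v tank
```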

Again ZFS performance improvements for databases

There is a new fix to improve performance for databases on ZFS. Can't wait to see OLTP benchmarks where ZFS is faster than UFS with DirectIO. Oh, and did you know there is some work going on to put the ZIL onto separate devices (NVRAM or solid-state disks)? (For those who don't know what the ZIL is, look here )
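Once separate log devices land, attaching an NVRAM card or SSD to an existing pool might look roughly like this; the syntax sketch and device name are assumptions based on the usual zpool conventions:

```shell
# Attach a dedicated log device (slog) to pool "tank"
zpool add tank log c2t0d0

# Confirm the log device appears in the pool layout
zpool status tank
```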

Xen code drop to OpenSolaris

After waiting almost a year, the Lego bricks are falling neatly into place. This means that it is finally possible to run Windows under Solaris (where needed :-).

Splunk 3.0 with Access-Control

Yeah!!! Just found out that Splunk 3.0 will support access control. This means, for example, that developers can debug production problems without logging in to the host: they will see only the log files relevant to finding the problem... This is a huge step forward, as logs often contain classified data.

Where should security happen?

As close to the data as possible! Yesterday's post from Erik reminded me of a paradigm that came up when I implemented the SSH Tectia Suite: the question of where to do security properly. The answer is: as close to the data as possible. The problem with today's corporate networks is that the "enemy" is already inside. So many people (internal and external) connect to a company's network that firewalls are almost irrelevant. This effect is called deperimeterization. Firewalls have a false reputation of protecting everything, but because so many people have access to both sides of a firewall, they don't actually provide much security. What could be a new approach? The Jericho Forum (a security-focused group) says: "Individual hosts should be able to defend themselves". I certainly agree with that. Most operating systems contain integrated firewalls waiting for activation. Many applications provide extended authent…
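On Solaris 10, for example, the bundled IP Filter is one of those integrated firewalls waiting for activation. A minimal sketch (interface name and ruleset are assumptions, not a recommended policy):

```shell
# Example /etc/ipf/ipf.conf (interface bge0 is a placeholder):
#   pass out quick on bge0 keep state
#   pass in quick on bge0 proto tcp from any to any port = 22 keep state
#   block in on bge0

# Enable the bundled IP Filter service
svcadm enable network/ipfilter

# Flush and reload the ruleset after editing the config
ipf -Fa -f /etc/ipf/ipf.conf
```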

Paul Murphy clarifies the Sun Ray "difference"

In today's blog, Paul Murphy lists advantages of the Sun Ray technology, as an answer to a comment by Erik Engbrecht (a regular visitor to Murph's blog). Of course, Erik has already answered in his blog . While Erik certainly makes some valid points, I personally believe that having desktop processing happen in the datacenter would be the best approach for a lot of companies. I worked for several banks, and you wouldn't believe the investments made or planned to secure desktops/laptops (hard-disk encryption, USB-port-locking software, reverse firewalls, virus scanners, security audit tools, etc.). Most of these security activities just aren't needed with a stateless device. Another point was made that Sun Rays depend on a working network connection. But as more and more vital information is kept on servers that require online access, you already depend on a connection to your company's network. Offline work usually requires docum…

Links for 2007-07-17

Comparison of open source configuration management software (Wikipedia)
Virtual Desktop Talk (Podcast on desktop virtualization)
Sun Desktop Virtualization Solution Blueprint (with VMware)
Win4Solaris (Running Windows on Solaris)

SunRay Stuff

ThinGuy has a few interesting blog entries... First of all, there are some YouTube movies about the Sun/Mitel partnership. This sounds like a great deal, consisting of two parts: a Multi-Instance Call Server (I guess this is something like a phone switch), and a unified solution for Sun Rays and the Mitel IP phone. This allows hotdesking not only between Sun Rays, but also between IP phones. You can find more information here . Another entry in ThinGuy's blog also sounds promising: in August he will talk about the upcoming Sun Ray Software 4.2, promising some "trendy" new features for desktop virtualization. As always, these kinds of product release events are never just around the corner... I really believe that in the next 1-2 years desktop virtualization will be the next big management buzzword (not in the negative sense). The only thing Sun desperately needs if they want Sun Rays to take off are salesmen who really want to sell thin clients. Selling Sun Rays is…

Links for 2007-07-15

Seattle Conference on Scalability Videos (from Storagemojo)
IT Jungle about Oracle 11g
IT Jungle about AIX 6.1 Beta
Hitachi Data Systems, upcoming webinars

Configuration Engines for Unix

As a system administrator, there is one persistent problem: standardizing configurations and keeping track of configuration changes. Standardizing begins with the installation of a system. All major Unix brands have their own installation methods. As mainly a Solaris administrator, I'm very familiar with the JumpStart framework. Using plain vanilla JumpStart is OK if no customization beyond the OS is needed (special configurations/application installation). For advanced customization, Sun Professional Services UK developed the JET framework, an addition to JumpStart. The advantage of JET lies in its use of template files: all information about a client to be installed is kept in one file, and the framework provides a simple way to add software and make further configuration changes. With JET it is possible to get a host running with all its settings and applications. But as soon as there are changes on a specific host which could affect standardization, those changes ha…
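A JET client template is a flat file of key/value variables, one per client. The fragment below is a rough sketch with made-up values (hostname, MAC address, and the custom package list are all hypothetical), not a complete or verbatim template:

```shell
# Hypothetical excerpt of a JET template for client "web01"
base_config_ClientArch=sun4u
base_config_ClientEther=0:3:ba:12:34:56
base_config_ClientOS=Solaris10
base_config_products="custom"

# Extra software layered on top of the base OS install
custom_packages="SUNWvim myapp-pkg"
```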

Using ZFS clones with several Application instances

This was an interesting blog entry about E25Ks, DTrace and ZFS. One thing I was also thinking about was the performance gain from ZFS clones when running several instances of an application that use mostly the same data. Quote: "Then came a flash of inspiration. Using clones of a ZFS snapshot of the data together with Zones it was possible to partition multiple instances of the application. But the really cool bit is that ZFS snapshots are almost instant and virtually free. ZFS clones are implemented using copy-on-write relative to a snapshot. This means that most of the storage blocks on disk and filesystem cache in RAM can be shared across all instances. Although snapshots and partitioning are possible on other systems, they are not instant, and they are unable to share RAM." I had this idea when I was thinking about using Solaris as a Xen Dom0 and running several identical (at least in the beginning) MS-Windows instances in Xen DomUs. The cloned MS-Windows images would…
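The snapshot/clone mechanics look roughly like this; the pool, dataset, and snapshot names are placeholders:

```shell
# Snapshot the master image once
zfs snapshot tank/winmaster@golden

# Create cheap copy-on-write clones, one per instance
zfs clone tank/winmaster@golden tank/win01
zfs clone tank/winmaster@golden tank/win02

# Each clone initially shares all its blocks with the snapshot,
# so REFER is large but USED is near zero
zfs list -o name,used,refer
```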

Splunk 3.0

Since I read about Splunk on Ben Rockwood's blog , I've been a huge fan. I even got a Splunk baseball cap and a T-shirt. At my former employer, I implemented Splunk to collect system logs for system monitoring and compliance checks/reporting. Version 3.0 (still in beta) seems to be a huge step forward. Reporting, for example, is now very sophisticated, allowing one to create many kinds of reports (charts, tables). To get a quick overview of different aspects of the environment, it is possible to create user/role dashboards. While Splunk was initially meant mainly for sucking in log files, the goal has now shifted to indexing any kind of unstructured data. I'm very much interested in loading configuration files and monitoring them for changes (security monitoring/audits, anyone?). It is also possible to periodically index command outputs. This could be used for recording performance data (output from e.g. iostat, vmstat, etc.). The output from a config file or a command looks…
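A scripted input for periodically indexing command output might look roughly like this in Splunk's inputs.conf; the script path, interval, sourcetype, and index below are assumptions for illustration:

```shell
# Hypothetical inputs.conf stanza: index vmstat output every 60 seconds
# (/opt/splunk/bin/scripts/vmstat.sh would wrap e.g. `vmstat 1 2`)
[script:///opt/splunk/bin/scripts/vmstat.sh]
interval = 60
sourcetype = vmstat
index = main
```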