
STEPS TO CREATE BCV DEVICES ON DMX / SYMMETRIX

1       Verify the free space available in MBs

SUN1#symconfigure  -sid  277  list  -freespace  -unit MB

  • The free space in SYMM 277 is displayed in MB, for example:

1278888 MB

2      Verify whether any configuration sessions are running

SUN1#symconfigure   -sid  277 verify

  • It verifies whether any configuration sessions are already running

 

3      Verify if any locks have been enforced upon SYM

SUN1#symcfg   -sid  277  list  -lockn

  • It displays the lock number if any lock has been enforced; locks range from 0 to 15 (example: configuration lock = 15)

 

4       To release the lock (example 15)

SUN1#symcfg  -sid  277  release  -lockn  15

  • Lock released

 

5      Now create a text file using the vi editor to submit the parameters that commit the unprotected LUN configuration

SUN1#vi  create_BCV

create dev count=8, size=958, emulation=FBA, config=unprotected;

:wq

 

 

  • Count denotes the number of devices to be created
  • Size is given in cylinders; one cylinder ≈ 0.5 MB (see the quick check after this list)
  • Emulation refers to FBA (fixed block architecture, 512-byte blocks), used in open systems
  • Config refers to the protection enforced on the device
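As a quick sanity check on the size (a sketch, assuming the usual DMX FBA geometry of 15 tracks x 64 blocks x 512 bytes, i.e. roughly 0.47 MB per cylinder):

SUN1#echo $((958 * 960 * 512 / 1024 / 1024))

449

So size=958 gives approximately the 450 MB devices called for in the lab session below.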

6     Commit the configuration

SUN1#symconfigure  -sid  277  -v  -f  create_BCV  commit  -nop

 

  • Configuration is saved and eight LUNs are created
  • Phases of this command are preview, prepare and commit (see the sketch after this list)
  • -v: verbose mode
  • -f: the file create_BCV in which the device specifications are enclosed
  • Commit: to perform the activity
  • -nop: non-interactive session, no prompting
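If you want to vet the file before committing, it can be run through the earlier phases first; a hedged sketch (preview and prepare are standard symconfigure actions, though their output varies by Enginuity level):

SUN1#symconfigure -sid 277 -f create_BCV preview -nop

SUN1#symconfigure -sid 277 -f create_BCV prepare -nop

Preview only parses the file; prepare also checks it against the current Symm state, so a failure here is cheaper than a failed commit.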

7          Discover the sym devices

SUN1#symcfg  discover

  • Solutions Enabler on the host sends an API call through the HBA to the Symm
  • The gatekeeper device receives the API call
  • The Symm configuration (IMPL.bin) is downloaded to the host
  • This configuration is stored in the /var/symapi/db directory on the host

8     Scan for new devices at the I/O level on the host

SUN1#devfsadm  -Cv

  • It scans for newly added devices and cleans up stale device links (-C), verbosely (-v)

 

9        To list the devices

SUN1#symdev  list

  • Displays the newly added devices
  • The device number is given in hexadecimal (0000 – ffff)
  • As the devices are not mapped, the directors field displays ???:???
  • In our session, suppose the addresses of the 8 LUNs are 0009 – 0010 (a hexadecimal range, so 0009, 000A, …, 0010 is 8 devices)
  • LUN 0000 is for the VCM device, 0001 – 0006 are the protected devices, and in most cases 0007 and 0008 are automatically assigned by the Symm as SFS LUNs

10                  To convert the devices to BCV devices create a configuration file

SUN1#vi  convert_BCV

convert dev 0009:0010 to BCV;

:wq

  • Unprotected standard devices larger than 5 cylinders cannot be mapped by the front-end directors unless they are converted to BCV devices; hence devices 0009 to 0010, 8 devices in all, are converted to BCV devices.

11      Commit the configuration

SUN1#symconfigure  -sid  277  -v  -f  convert_BCV  commit  -nop

  • Configuration is saved and the eight devices are converted
  • Phases of this command are preview, prepare and commit
  • -v: verbose mode
  • -f: the file convert_BCV in which the conversion specifications are enclosed
  • Commit: to perform the activity
  • -nop: non-interactive session, no prompting

12     To display the LUNs which have not been mapped

SUN1#symdev list -noport

  • Displays the sym devices which have not been mapped to any of the front-end directors and its ports
13     To display the available front-end ports

SUN1#symcfg  -sid  277  list  -connections

  • It displays the front-end directors and the ports to which the hosts are connected
14     To check the available addresses in the Symm

SUN1#symcfg  -sid  277  list  -available  -addresses  -dir  1c  -p 0

SUN1#symcfg  -sid  277  list  -available  -addresses  -dir  1d  -p 0

SUN1#symcfg  -sid  277  list  -available  -addresses  -dir  16c  -p 0

SUN1#symcfg  -sid  277  list  -available  -addresses  -dir  16d  -p 0

  • It shows the available logical unit addresses to which the Symm devices can be mapped
  • The available logical address is in the last field, preceded by the status field, which reads AVAILABLE

example :

*        - AVAILABLE -        002

  • The logical unit address is a hexadecimal number.

 

 

 

 

15      Create a file to map the devices with their specifications

SUN1# vi map_BCV

map dev 0009:0010 to dir 1c:0 starting lun=002;

map dev 0009:0010 to dir 1d:0 starting lun=002;

map dev 0009:0010 to dir 16c:0 starting lun=002;

map dev 0009:0010 to dir 16d:0 starting lun=002;

:wq

  • Maps the Symm devices through port 0 of each front-end director, with LUN addresses auto-incrementing from 002

16  Commit the activity

SUN1#symconfigure -sid 277 -v -f map_BCV commit -nop

  • Configuration is saved and the eight LUNs are mapped
  • Phases of this command are preview, prepare and commit
  • -v: verbose mode
  • -f: the file map_BCV in which the mapping specifications are enclosed
  • Commit: to perform the activity
  • -nop: non-interactive session, no prompting

17  Scan the devices once again at host level

SUN1#devfsadm  -Cv

  • Scans for the changes on the host

18     Check the controller

SUN1#fcinfo hba-port

  • Displays the HBA ports and their WWPN addresses

19     Configure the controller

SUN1#cfgadm -c configure c3

  • Configures the host controller c3 so the mapped symdevs become visible (see the note below on finding the controller number)
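If you are unsure which controller number to pass, the attachment points can be listed first (the cN number varies from host to host; c3 is just our session's example):

SUN1#cfgadm -al

Look for the fc-fabric attachment points in the listing; the Ap_Id in the first column is the controller to configure.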

20   Discover the sym devices

SUN1#symcfg discover

21   To list the devices

SUN1#symdev list

  • Displays the newly added & mapped sym devices
  • As the devices are now mapped, the directors field displays the director number and port

If the devices are not mapped properly, reboot the host once (not recommended).

STEPS TO CREATE STANDARD DEVICES WITH TWO WAY PROTECTION

  • Verify the free space available in MBs

SUN1#symconfigure  -sid  277  list  -freespace  -unit MB

  • The free space in SYMM 277 is displayed in MB, for example:

1278888 MB

  • Verify whether any configuration sessions are running

SUN1#symconfigure   -sid  277 verify

  • It verifies whether any configuration sessions are already running

 

  • Verify if any locks have been enforced upon the SYM

SUN1#symcfg   -sid  277  list  -lockn

  • It displays the lock number if any lock has been enforced; locks range from 0 to 15 (example: configuration lock = 15)

 

  • To release the lock (example: 15)

SUN1#symcfg  -sid  277  release  -lockn  15

  • Lock released

 

  • Now create a text file using the vi editor to submit the parameters that commit the LUN configuration

SUN1#vi  create_LUN

create dev count=6, size=958, emulation=FBA, config=2-way-mir;

:wq

 

  • Count denotes the number of devices to be created
  • Size is given in cylinders; one cylinder ≈ 0.5 MB
  • Emulation refers to FBA (fixed block architecture, 512-byte blocks), used in open systems
  • Config refers to the protection enforced on the device
  • Commit the configuration

SUN1#symconfigure  -sid  277  -v  -f  create_LUN  commit  -nop

 

  • Configuration is saved and six LUNs are created
  • Phases of this command are preview, prepare and commit
  • -v: verbose mode
  • -f: the file create_LUN in which the device specifications are enclosed
  • Commit: to perform the activity
  • -nop: non-interactive session, no prompting
  • Discover the sym devices

SUN1#symcfg  discover

  • Solutions Enabler on the host sends an API call through the HBA to the Symm
  • The gatekeeper device receives the API call
  • The Symm configuration (IMPL.bin) is downloaded to the host
  • This configuration is stored in the /var/symapi/db directory on the host

  • Scan for new devices at the I/O level on the host

SUN1#devfsadm  -Cv

  • It scans for newly added devices and cleans up stale device links (-C), verbosely (-v)

 

  • To list the devices

SUN1#symdev  list

  • Displays the newly added devices
  • The device number is given in hexadecimal (0000 – ffff)
  • As the devices are not mapped, the directors field displays ???:???
  • In our session, suppose the addresses of the 6 LUNs are 0001 – 0006
  • To display the LUNs which have not been mapped

SUN1#symdev list -noport

  • Displays the sym devices which have not been mapped to any of the front-end directors and their ports
  • To display the available front-end ports

SUN1#symcfg  -sid  277  list  -connections

  • It displays the front-end directors and the ports to which the hosts are connected
  • To check the available addresses in the Symm

SUN1#symcfg  -sid  277  list  -available  -addresses  -dir  1c  -p 0

SUN1#symcfg  -sid  277  list  -available  -addresses  -dir  1d  -p 0

SUN1#symcfg  -sid  277  list  -available  -addresses  -dir  16c  -p 0

SUN1#symcfg  -sid  277  list  -available  -addresses  -dir  16d  -p 0

  • It shows the available logical unit addresses to which the Symm devices can be mapped
  • The available logical address is in the last field, preceded by the status field, which reads AVAILABLE

example :

*        - AVAILABLE -        001

  • The logical unit address is a hexadecimal number.
  • Create a file to map the devices with their specifications

SUN1# vi map_LUN

map dev 0001:0006 to dir 1c:0 starting lun=001;

map dev 0001:0006 to dir 1d:0 starting lun=001;

map dev 0001:0006 to dir 16c:0 starting lun=001;

map dev 0001:0006 to dir 16d:0 starting lun=001;

:wq

  • Maps the Symm devices through port 0 of each front-end director, with LUN addresses auto-incrementing from 001
  • Commit the activity

SUN1#symconfigure -sid 277 -v -f map_LUN commit -nop

  • Configuration is saved and the six LUNs are mapped
  • Phases of this command are preview, prepare and commit
  • -v: verbose mode
  • -f: the file map_LUN in which the mapping specifications are enclosed
  • Commit: to perform the activity
  • -nop: non-interactive session, no prompting
  • Scan the devices once again at host level

SUN1#devfsadm  -Cv

  • Scans for the changes on the host
  • Check the controller

SUN1#fcinfo hba-port

  • Displays the HBA ports and their WWPN addresses

 

  • Configure the controller

SUN1#cfgadm -c configure c3

  • Configures the host controller c3 so the mapped symdevs become visible
  • Discover the sym devices

SUN1#symcfg  discover

  • To list the devices

SUN1#symdev  list

  • Displays the newly added & mapped sym devices
  • As the devices are now mapped, the directors field displays the director number and port

If the devices are not mapped properly, reboot the host once (not recommended).

LUN / METAVOLUME CREATION AND MAPPING DMX / SYMMETRIX

  • Physical disks are not visible to hosts.

 

  • Logical volumes can be created from physical disks; they are called “hyper volume extensions”.

 

  • Up to 256 hypers can be created on a physical disk.

 

  • Every hyper has its own personality.

 

  • Initially any hyper is known as a standard device.

 

  • The minimum size of a hyper is 8 MB and the maximum is 32 GB (varies according to the microcode).

 

  • A standard device can be unprotected, two-way mirrored, three-way mirrored, four-way mirrored, or RAID-5 protected.

 

  • An unprotected standard device of fewer than 5 cylinders is visible to the host and can be configured as a gatekeeper device.

 

  • A standard device is visible to the host once it is mapped to a front-end director.

 

  • Once the LUN is deleted it should be unmapped.

 

  • A standard device can be configured with different personalities, which may not be visible to the host unless it is protected.

 

  • To set the personalities, change the attributes.

 

  • To convert a standard device to a BCV device, it should be unprotected.

 

  • Once a standard device is configured as a BCV, the BCV can become two-way mirrored.

 


 

  • If a dedicated gatekeeper device is not configured, any volume can act as the gatekeeper.

 

  • Without a gatekeeper, EMC Control Center and Solutions Enabler on the host cannot operate on the SYMM.

 

  • The SYMM cannot be managed through the TCP/IP stack, as it is a Fibre Channel device.

 

  • It requires a Fibre Channel adapter (HBA) in the host.

 

  • Solutions Enabler installed on the host sends API calls through the HBA across Fibre Channel to the gatekeeper device; upon receiving the APIs, the gatekeeper allows the commands to execute on the SYMM.
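Putting the gatekeeper rules above into practice, a hedged sketch of carving a dedicated gatekeeper (the 3-cylinder size is only an assumption chosen to stay under the 5-cylinder limit described above; conventions vary by site and microcode):

SUN1#vi create_GK

create dev count=1, size=3, emulation=FBA, config=unprotected;

:wq

SUN1#symconfigure -sid 277 -v -f create_GK commit -nop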

METAVOLUME:- A larger volume created by concatenating or striping smaller hyper volumes (a hedged SYMCLI sketch follows the list below).

  • The size of a Meta volume depends on the microcode.
  • 2 to 255 hyper volumes can be joined into a Meta volume.
  • All hypers joined into a Meta volume should be of the same size.
  • They should follow the same protection mechanism.
  • A Meta volume is identified by its Meta head, i.e., the first volume of the Meta.
  • Volumes are mapped to the host through the front-end director ports.
  • The ports have flags, each either disabled or enabled.

 

  • The following flags should be enabled
  1. C bit enabled, for a common serial number on the volume
  2. SCSI-3 persistent reservations enabled, for cluster environments
  3. VCM enabled, for device masking
  • To change the status of port flags, take the director ports offline; once changed, bring the directors back online.
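Pulling the Meta volume rules above together, a hedged SYMCLI sketch of forming a striped Meta (the device numbers are illustrative, and stripe_size=1920 blocks is only a commonly quoted default; check your microcode):

SUN1#vi form_meta

form meta from dev 0009, config=striped, stripe_size=1920;

add dev 000A:0010 to meta 0009;

:wq

SUN1#symconfigure -sid 277 -v -f form_meta commit -nop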

 

LAB SESSION:-

 

  • Create six standard devices, two-way protected, 450 MB each
  • Create eight BCV devices, 450 MB each
  • Create one unprotected device of five cylinders
  • Map the devices
  • Mask the devices

 

NOTE:-

For convenience, the following names are used throughout the programme.

i)    primary host name: SUN1

ii)   backup host name: SUN2

iii)  SYMM IDs: 277 & 694

iv)   FE directors: 1c, 1d, 16c and 16d; port No = 0

How to Reset Forgotten MySQL Root Password

First things first: log in as root and stop the mysql daemon. Now let's start the daemon back up, skipping the grant tables which store the passwords.

#mysqld_safe --skip-grant-tables

You should see mysqld start up successfully. If not, well you have bigger issues. Now you should be able to connect to mysql without a password.

#mysql --user=root mysql

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2 to server version: 5.0.22

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> update user set Password=PASSWORD('new-password') where user='root';
mysql> flush privileges;
mysql> exit;

Now stop your running mysqld, then restart it normally. You should be good to go. Try not to forget your password again.
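To confirm the reset took, try logging in the normal way (a quick check; use the password you just set):

#mysql -u root -p

If the MySQL monitor prompt comes back after you enter the new password, you are good to go.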

 

RAID Theory

RAID: What it is; What it does
RAID: The Details
RAID Type: Concatenation
RAID Type: Striping (RAID-0)
RAID Type: Mirroring (RAID-1)
RAID Type: Striping plus Mirroring (RAID-0+1)
RAID Type: RAID-5 (Striping with Parity)
RAID Comparison: RAID0+1 vs RAID5
RAID History: The Lost Brothers

RAID: What it is; What it does

RAID is something all of us have heard about but very few of us understand, at least fully. So let's get off on the right foot. RAID stands for Redundant Array of Inexpensive (or Independent) Disks. There are a dozen or so theories as to why RAID was conceptualized, but the most accepted reason is that once upon a time, not long ago, disks were small and expensive. In order to provide a large amount of data you had to have a bunch of disks all mounted in a single file tree, which was a real mess. So, to solve this problem RAID was born. With RAID you could take a bunch of disks and create a big virtual disk out of them, which made administration much easier and more logical. Over time RAID grew to include new solutions for old problems, like disk performance, redundancy, and scalability. And for any skeptics out there, tell me where I can get a 10 terabyte disk drive…. that should make us all agree that RAID has a place in the universe.

Just to try and clear things up a bit more, let's see why we don't simply need RAID, but actually WANT it. Let's say we're building a production NFS server that will be used to store all of our software. We'll need this system to be extremely stable, because if it goes down no one can get or submit code. With RAID we could build a single virtual disk (volume) that would meet our need for 200G of disk. But we also want to make sure that if disks die we don't go down. So we use a mirror (another set of disks identical to the first set of disks). If a disk dies we're okey, because the mirror will take over; we essentially have 2 identical sets of the same data which are constantly kept up to date. See? Using these 2 simple RAID concepts we've achieved both availability (that's our mirror saving us from disk crashes) and increased capacity (we've got a whole bunch of disks working together, which is cheaper than buying a single 200G disk… if you can find one!).
Okey, enough of the bad examples. Let's look at the different forms of RAID in use today.

RAID: The Details

RAID Type: Concatenation

Concatenations are also known as “Simple” RAIDs. A Concatenation is a collection of disks that are “welded” together. Data in a concatenation is laid across the disks in a linear fashion from one disk to the next. So if we’ve got 3 9G (gig) disks that are made into a Simple RAID, we’ll end up with a single 27G virtual disk (volume). When you write data to the disk you’ll write to the first disk, and you’ll keep writing your data to the first disk until it’s full, then you’ll start writing to the second disk, and so on. All this is done by the Volume Manager, which is “keeper of the RAID”. Concatenation is the cornerstone of RAID.
Now, do you see the problem with this type of RAID? Because we’re writing data linearly across the disks, if we only have 7G of data on our RAID we’re only using the first disk! The 2 other disks are just sitting there bored and useless. This sucks. We got the big disk we wanted, but it’s not any better than a normal disk drive you can buy off the shelves in terms of performance. There has got to be a better way……….
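Here's that linear layout as a toy shell sketch (sizes in whole gigs, purely illustrative):

# 3 disks of 9G concatenated into one 27G volume: which disk holds a given offset?
OFFSET=7                      # 7G into the volume
DISK=$(( OFFSET / 9 ))        # integer division: 0 = first disk, 1 = second, ...
echo "offset ${OFFSET}G sits on disk ${DISK}"   # -> disk 0

With only 7G written, everything lands on disk 0, which is exactly the idle-disks problem described above.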

RAID Type: Striping (RAID-0)

Striping is similar to Concatenation because it will turn a bunch of little disks into a big single virtual disk (volume), but the difference here is that when we write data we write it across ALL the disks. So, when we need to read or write data we’re moving really really fast, in fact faster than any one disk could move. There are 2 things to know about RAID-0: stripe width, and columns. They sound scary, but they’re totally sweet, let me show you. So, if we’re going to read and write across multiple disks in our RAID we need an organized way to go about it. First, we’ll have to agree on how much data should be written to a disk before moving to the next; we call that our “stripe width”. Then we’ll need a far kooler term for each disk, a term that allows us to visualize our new RAID better….. “column” sounds kool! Alright, so each disk is a “column” and the amount of data we put on each “column” before moving to the next is our “stripe width”. Let’s solidify this. If we’re building a RAID-0 with 4 columns and a stripe width of 128k, what do I have? Picture 4 disks side by side: the 1st 128k chunk on column 1, the 2nd on column 2, the 3rd on column 3, the 4th on column 4, and the 5th wrapping back around to column 1.
Look good? So, when we start writing to our new RAID, we’ll write the first 128k to the first column, then the next 128k to the second column, then the next 128k to the third column, then the next 128k to the fourth column, THEN the next 128k to the first column, and keep going till all the data is written. See? If we were writing a 1M file we’d wrap that one file around all 4 disks almost 3 times! Can you see now where our speed-up comes from? SCSI drives can write data at about (depending on what type of drive and what type of SCSI) 20M/s. On our Striped RAID we’d be writing at 80M/s! Kool huh!?
But, now we’ve got ANOTHER problem. In a Simple RAID if we had, say, 3 9G disks, we’d have 27G of space. Now, if I only wrote 9G of data to that RAID and the third disk died, so what, there is no data on it. (See where I’m going with this?) We’d only be using one of our three disks in a Simple. BUT, in a Striped RAID, we could write only 10M of data to the RAID, and if even ONE disk failed, the whole thing would be trash because we wrote it on ALL of the disks. So, how do we solve this one?
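And here's the column math from that example as a toy shell sketch (illustrative only):

# 4 columns, 128k stripe width: where does a given byte offset land?
B=$(( 300 * 1024 ))              # byte offset 300k into the volume
WIDTH=$(( 128 * 1024 ))          # stripe width in bytes
COLUMN=$(( (B / WIDTH) % 4 ))    # stripes rotate across the 4 columns
ROW=$(( B / WIDTH / 4 ))         # full passes around the columns so far
echo "offset 300k -> column ${COLUMN}, row ${ROW}"   # -> column 2, row 0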

RAID Type: Mirroring (RAID-1)

Mirroring isn’t actually a “RAID” like the other forms, but it’s a critical component of RAID, so it was honored by being given its own number. The concept is to create a separate RAID (Simple or RAID-0) that is used to duplicate an existing RAID. So, it’s literally a mirror image of your RAID. This is done so that if a disk crashes in your RAID the mirror will take over. If one RAID crashes, then the other RAID takes its place. Simple, right? There’s not much to it. However, there is a new problem! This is expensive… really expensive. Let’s say you wanted a 27G RAID. So you bought 3 9G drives. In order to mirror it you’ll need to buy 3 more 9G drives. If you ever get depressed you’ll start thinking:
“You know, I just shelled out $400 for 3 more drives, and I don’t even get more usable space!”. Well, in this industry we all get depressed a lot so, they thought of another kool idea for a RAID……
RAID Type: Striping plus Mirroring (RAID-0+1)

When we talk about mirroring (RAID-1) we’re not explicitly specifying whether we’re mirroring a Simple RAID or a Striped (RAID-0) RAID. RAID-0+1 is a term used to explicitly say that we’re mirroring a Striped RAID. The only thing you need to know about it is this… A mirror is nothing more than another RAID identical to the RAID we’re trying to protect. So when we build a mirror we’ll need the mirror to be the same type of RAID as the original RAID. If the RAID we want to mirror is a Simple RAID, our mirror will be a Simple RAID. If we want to mirror a Striped RAID, then we’ll want another Striped RAID to mirror the first. Right? So, if you say to me, we’re building a RAID-0+1, I know that we’re going to mirror a Striped RAID, and the mirror itself is going to be striped as well.
You’ll see this term used more often than “RAID-1” simply because a mirror, in and of itself, isn’t useful. Again, it’s not really a “RAID” in the sense that we mean to use the word.

RAID Type: RAID-5 (Striping with Parity)

RAID-5 is the ideal solution for maximizing disk space and disk redundancy. It’s like Striping (RAID-0) in the fact that we have columns and stripe widths, but when we write data two interesting things happen: the data is written to multiple disks at the same time, and parity is written with the data.
Okey, let’s break it down a bit. Let’s say we build a RAID-5 out of 4 9G drives. So we’ll have 4 columns, and let’s say our stripe width is 128k again. The first 128k is written on disks one, two AND three. At the same time, a little magic number is written on each disk with the data. That magic number is called the parity. Then, the second 128k of data is written to (watch carefully) disks two, three and four. Again, a parity number is written with that data. The third 128k of data is written to disks three, four and one. (See, we wrapped around.) And data keeps being written like that. Here’s the beauty of it. Each piece of our data is on three different disks in the RAID at the same time! Let’s look back at our 4 disk RAID. We’re working normally, writing along, and then SNAP! Disk 3 fails! Are we worried? Not particularly. Because our data is being written to 3 disks per write instead of just one, the RAID is smart enough to just get the data off the other 2 disks it wrote to! Then, once we replace the bad disk with a new one, the RAID “floods” all the data back onto the disk from the data on the other 2 adjacent disks! But, you ask, how does the RAID know it’s giving you the correct data? Because of our parity. When the data was written to disk(s) that parity was written with it.
We (actually the computer does this automatically) just look at the data on disks 2 and 4, then compare (XOR) the parity written with the data, and if the parity checks out, we know the data is good. Kool huh? Now, as you might expect, this isn’t perfect either. Why? Okey, number 1, remember that parity that saves our butt and makes sure our data is good? Well, as you might expect the system’s CPU has to calculate that, which isn’t hard, but we’re still wasting CPU cycles on the RAID, which means if the system is really loaded we may need to (eek!) wait. This is the “performance hit” you’ll hear people talk about. Also, we’re writing to 3 disks at a time for the SAME data, which means we’re using up I/O bandwidth and not getting a real boost out of it.
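The XOR trick itself fits in a few lines of shell (toy byte values, purely illustrative):

# parity = data1 XOR data2; XOR parity with a survivor to rebuild the lost block
D1=$(( 0x5A )); D2=$(( 0x3C ))
P=$(( D1 ^ D2 ))          # parity written alongside the data
REBUILT=$(( P ^ D1 ))     # the disk holding D2 died; recover it from P and D1
printf 'parity=0x%02X rebuilt=0x%02X\n' $P $REBUILT   # rebuilt == 0x3C == D2

The same XOR that verifies the data is what rebuilds it onto a replacement disk.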

RAID Comparison: RAID0+1 vs RAID5

There are battles fought in the storage arena, much like the old UNIX vs NT battles. We tend to fight over RAID0+1 vs RAID5. The fact is that RAID5 is advantageous because we use fewer disks in the endeavor to provide large amounts of disk space, while still having protection. All that means is that RAID5 is inexpensive compared to RAID0+1, where we’ll need double the amount of disk we expect to use, because with RAID5 we only need a third or so more disks rather than twice as many. But, then RAID5 is also slower than RAID0+1 because of that damned parity. If you really want speed, you’ll need to bite the bullet and use RAID0+1, because even though you need more disks, you don’t need to calculate anything, you just dump the data to the disks. In my estimates (this isn’t scientific, just what I’ve noticed by experience) RAID0+1 is about 20%-30% faster than RAID5.
Now, in the real world, you rarely have much choice, and the way to go is clear. If you’re given 10 9G disks and are told to create a 60G RAID, and you can’t buy more disks, you’ll need to either go RAID5 or be unprotected. However, if you’ve got those same disks and they only want a 36G RAID, you can go RAID0+1, with the only drawback that they won’t have much room to grow. It’s all up to you as an admin, but always take growth into account. Look at what you’ve got, downtime availability to grow when needed, budget, performance needs, etc, etc, etc. Welcome to the world of capacity planning!
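That 10-disk scenario, worked as a quick shell sketch (a single RAID5 group is assumed; real layouts vary):

# usable space from N disks of S gigs each
N=10; S=9
echo "RAID0+1: $(( N * S / 2 ))G   RAID5: $(( (N - 1) * S ))G"
# -> RAID0+1: 45G   RAID5: 81G  (so the 60G requirement forces RAID5 here)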

RAID History: The Lost Brothers

Wondering what ever happened to RAID-2, RAID-3, and RAID-4? You can look in history books for the details, but they were to be hybrids of mirroring and striping: ways to include a parity with the data, for protection, but still staying away from mirroring each disk in a normal “one-to-one” mirror. One RAID type would have problems, so they would build another. RAID5, if you hadn’t guessed, was the agreed-upon solution. RAID-2 and RAID-3 died and burned and scattered into the sea of obsolescence. However, RAID-4 found a home with our friends at NetApp (www.netapp.com).
A RAID-4 volume is made up of one or more data disks which are striped, and a dedicated parity disk which maintains the write checksums of the data written on each stripe. Checksums are just numbers, so they are very small and quick to write. The problem is that generally when you write data to the volume you write a stripe, then write parity, then write the next stripe, then write parity, so on and so forth, but you have to wait for the parity to finish writing before writing the next stripe, which is a bottleneck. Add to that, from the availability side of things, if you lose your parity disk to a failure you’re running with your pants down. Sure you can rebuild the parity disk after hot-swapping or replacing the disk, but that requires re-computing parity from each stripe, which is an obviously time-consuming proposition. NetApp, however, worked around these problems by using the Filer’s onboard memory for caching writes, plus NVRAM; once a write is ready it puts down the data stripes first and then the parity, because the parity and the write pattern have already been calculated in memory. In this way we take away the parity-disk bottleneck. This is possible due to the intelligence of WAFL, the NetApp OnTap file system.
NetApp has added a new feature in OnTap 6.5, actually, called RAID-DP, Double Parity. It’s the same RAID-4 system but it employs a second parity disk. This way you can lose 2 disks in a volume (assuming one is a parity disk) and keep running. (It’s unfortunate, but NetApp Filers are the world’s fastest NFS servers. But one day Sun is going to kick their ……. never mind.)