Gluster on OpenSolaris so far…part 1.

We have been running Gluster in our production environment for about 1 month now, so I figured I would post some details about our setup and our experiences with Gluster and OpenSolaris so far.

Overview:

Currently we have a 2-node Gluster cluster using the replicate translator to provide RAID-1-style mirroring of the filesystem.  The initial requirements called for a solution that would house our digital media archive (audio, video, etc.), scale up to around 150TB, support exports such as CIFS and NFS, and be extremely stable.

It was decided that we would use ZFS as our underlying filesystem, due to its data integrity features as well as its support for filesystem snapshots, both of which were also very high on the requirement list for this project.
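Snapshots in ZFS are cheap and essentially instantaneous; for example, protecting the archive before a large import is a one-liner (the dataset name here is hypothetical):

user@server1:/# zfs snapshot datapool/media@pre-import
user@server1:/# zfs list -t snapshot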

Although FreeBSD has had ZFS support for quite some time, there were known issues (with 32- vs 64-bit inode numbers) at the time of my research that prevented us from going that route.

Just this week KQstor released their native ZFS kernel module for Linux, which as of this latest release is supposed to fully support extended filesystem attributes, a requirement for Gluster to function properly.  At the time of my research, however, the software was still in beta and did not support extended attributes, so we were unable to consider or test this configuration either.

The choice was then made to go with ZFS on OpenSolaris (2008.11 specifically, due to the 3ware drivers available at the time).  There is currently no FUSE support under Solaris, so while a Solaris variant works fine on the server side for your storage nodes, the client side requires a head node running an OS that does support FUSE.

The latest version of Gluster fully supported on the Solaris platform is 3.0.5.  The 3.1.x series introduced some nice new features, but to use it we would have to either port our storage nodes to Linux or wait until the folks at Gluster release 3.1.x for Solaris (which I am not sure will happen anytime soon).

Here is the current hardware/software configuration:

  • CPU: 2 x Intel Xeon E5410 @ 2.33GHz
  • RAM: 32 GB DDR2 DIMMs
  • Hard drives: 48 x 2TB Western Digital SATA II
  • RAID controllers: 2 x 3ware 9650SE-24M8 PCIe
  • OpenSolaris version 2008.11
  • GlusterFS version 3.0.5
  • Samba version 3.2.5 on gluster1 (share example below)
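Since CIFS access was one of the initial requirements, Samba on gluster1 exports the storage to Windows clients.  A minimal smb.conf share stanza might look like the following (the share name and path are assumptions for illustration):

[media]
    comment = Digital media archive
    path = /mnt/glusterfs
    read only = no
    browseable = yes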

ZFS Setup:

Setup for the two OS drives was pretty straightforward: we created a two-disk mirrored rpool.  This allows us to lose a disk in the root pool and still be able to boot the system.
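If the second disk is added after installation, the mirror can be created with zpool attach; a rough sketch with placeholder device names (on x86 the new disk also needs a boot block installed):

user@server1:/# zpool attach rpool c1t0d0s0 c1t1d0s0
user@server1:/# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0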

Since we have 48 disks to work with for our data pool, we created a total of 6 raidz2 vdevs, each consisting of 7 physical disks.  This setup gives us 75TB of raw space (53TB usable) per node, while leaving 6 disks available to use as spares.
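The pool creation looks roughly like the following, with hypothetical device names; the four remaining raidz2 groups follow the same 7-disk pattern, and the leftover disks are added as hot spares:

user@server1:/# zpool create datapool \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
    raidz2 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0
user@server1:/# zpool add datapool spare c3t18d0 c3t19d0 c3t20d0 c3t21d0 c3t22d0 c3t23d0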

user@server1:/# zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool     1.81T  19.6G  1.79T     1%  ONLINE  -
datapool  75.8T  9.01T  66.7T    11%  ONLINE  -

Gluster setup:

Creating the Gluster .vol configuration files is easily done via the glusterfs-volgen command:

user1@host1:/# glusterfs-volgen --name cluster01 --raid 1 server1.hostname.com:/data/path server2.hostname.com:/data/path

That command produces two volume files: ‘glusterfsd.vol’, which is used on the server side, and ‘glusterfs.vol’, which is used on the client.
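For reference, the client-side file wires the two servers into a single replicated volume.  Abridged, it looks something like this (the volume names here are illustrative; volgen’s generated names will differ slightly):

volume server1
  type protocol/client
  option transport-type tcp
  option remote-host server1.hostname.com
  option remote-subvolume brick
end-volume

volume server2
  type protocol/client
  option transport-type tcp
  option remote-host server2.hostname.com
  option remote-subvolume brick
end-volume

volume mirror-0
  type cluster/replicate
  subvolumes server1 server2
end-volume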

Starting the glusterfsd daemon on the server side is straightforward:

user1@host1:/# /usr/glusterfs/sbin/glusterfsd
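A quick way to confirm the daemon actually came up (bracketing the first letter keeps grep from matching itself):

user1@host1:/# ps -ef | grep [g]lusterfsd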

Starting the Gluster client (which mounts the volume) is straightforward as well:

user1@host2:/# /usr/glusterfs/sbin/glusterfs --volfile=/usr/glusterfs/etc/glusterfs/glusterfs.vol /mnt/glusterfs/
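Once mounted, the volume behaves like any other filesystem, and a file written through the mount should appear under /data/path on both servers:

user1@host2:/# df -h /mnt/glusterfs
user1@host2:/# touch /mnt/glusterfs/hello-world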

In a later blog post I plan to talk more about issues that we have encountered running this specific setup in a production environment.

3 thoughts on “Gluster on OpenSolaris so far…part 1.”

  1. Menno

    Hi Shain,

    I’ve got a similar setup and I’ve got a question about OpenSolaris and the 3Ware driver. Did you have any problem having the OS recognize all 24 disks you attached to each of the controllers?

On my box, I’ve configured all disks as SINGLE disks using the tw_cli tool from 3Ware, but OpenSolaris sees just the first 16 disks on each controller and not the remaining disks.

    Just wondering if you ran into something similar.

    Cheers,
    Menno

  2. shainmiley Post author

    Menno,
You will need to edit the file ‘/kernel/drv/sd.conf’ and add entries for the additional disks. For example, here are the entries for my 48-disk system, which has 24 disks per SCSI controller:

    name=”sd” class=”scsi” target=0 lun=0;
    name=”sd” class=”scsi” target=1 lun=0;
    name=”sd” class=”scsi” target=2 lun=0;
    name=”sd” class=”scsi” target=3 lun=0;
    name=”sd” class=”scsi” target=4 lun=0;
    name=”sd” class=”scsi” target=5 lun=0;
    name=”sd” class=”scsi” target=6 lun=0;
    name=”sd” class=”scsi” target=7 lun=0;
    name=”sd” class=”scsi” target=8 lun=0;
    name=”sd” class=”scsi” target=9 lun=0;
    name=”sd” class=”scsi” target=10 lun=0;
    name=”sd” class=”scsi” target=11 lun=0;
    name=”sd” class=”scsi” target=12 lun=0;
    name=”sd” class=”scsi” target=13 lun=0;
    name=”sd” class=”scsi” target=14 lun=0;
    name=”sd” class=”scsi” target=15 lun=0;
    name=”sd” class=”scsi” target=16 lun=0;
    name=”sd” class=”scsi” target=17 lun=0;
    name=”sd” class=”scsi” target=18 lun=0;
    name=”sd” class=”scsi” target=19 lun=0;
    name=”sd” class=”scsi” target=20 lun=0;
    name=”sd” class=”scsi” target=21 lun=0;
    name=”sd” class=”scsi” target=22 lun=0;
    name=”sd” class=”scsi” target=23 lun=0;

After a restart you should now be able to see all of the disks.

  3. Menno

    Thanks Shain, that worked for me.

ps: a note for others reading this post: beware not to just copy/paste the above, as the curly quotes will cause OpenSolaris not to boot. Saves you a couple of minutes of debugging 🙂

Running cat -v shows the problem with the quotes:

    $ cat -v oops
    name="sd" class="scsi" target=16 lun=0;
    name=M-bM-^@M-^]sdM-bM-^@M-^] class=M-bM-^@M-^]scsiM-bM-^@M-^] target=17 lun=0;
