We have been running Gluster in our production environment for about 1 month now, so I figured I would post some details about our setup and our experiences with Gluster and OpenSolaris so far.
Currently we have a 2-node Gluster cluster using the replicate translator to provide RAID-1 style mirroring of the filesystem. The initial requirements called for a solution that would house our digital media archive (audio, video, etc.), scale up to around 150TB, support exports such as CIFS and NFS, and be extremely stable.
It was decided that we would use ZFS as our underlying filesystem, due to its data integrity features as well as its support for filesystem snapshots, both of which were also very high on the requirement list for this project.
Although FreeBSD has had ZFS support for quite some time, there were some known issues (with 32 vs 64 bit inode numbers) at the time of my research that prevented us from going that route.
At the time of our research, KQstor's native ZFS kernel module for Linux was still in beta and did not support extended filesystem attributes, which are required for Gluster to function properly, so we were unable to consider or test that configuration either. (Just this week KQstor released a version that is supposed to fully support extended attributes.)
The choice was then made to go with ZFS on OpenSolaris (2008.11 specifically, due to the 3ware drivers available at the time). Currently there is no FUSE support under Solaris, so while a Solaris variant works without a problem on the server side, choosing it for your storage nodes means you will need a head node running an OS that does support FUSE to act as the client.
The latest version of Gluster fully supported on the Solaris platform is 3.0.5. 3.1.x introduced some nice new features, but we will have to either port our storage nodes to Linux or wait until the folks at Gluster release 3.1.x for Solaris (which I am not sure will happen anytime soon).
Here is the current hardware/software configuration:
- CPU: 2 x Intel Xeon E5410 @ 2.33GHz
- RAM: 32 GB DDR2 DIMMs
- Hard drives: 48 x 2TB Western Digital SATA II
- RAID controllers: 2 x 3ware 9650SE-24M8 PCIe
- OpenSolaris version 2008.11
- GlusterFS version 3.0.5
- Samba version 3.2.5 (Gluster1)
Setup for the two OS drives was pretty straightforward: we created a two-disk mirrored rpool. This allows us to survive a disk failure in the root pool and still be able to boot the system.
Since we have 48 disks to work with for our data pool, we created a total of 6 raidz2 vdevs, each consisting of 7 physical disks. This setup gives us 75TB of space (53TB usable) per node, while leaving 6 disks available to use as spares.
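A pool laid out this way can be sketched with a single zpool create. The device names below are illustrative placeholders, not our actual controller targets, and I've elided the middle vdevs for brevity:

```shell
# Create the data pool from 6 raidz2 vdevs of 7 disks each (42 disks total),
# plus hot spares. Device names here are placeholders -- substitute your own.
zpool create datapool \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
  raidz2 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0 \
  # ...four more raidz2 vdevs of 7 disks each... \
  spare c3t20d0 c3t21d0
```

Each raidz2 vdev tolerates two simultaneous disk failures, and ZFS stripes writes across all six vdevs.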
user@server1:/# zpool list
NAME       SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
rpool     1.81T  19.6G  1.79T    1%  ONLINE  -
datapool  75.8T  9.01T  66.7T   11%  ONLINE  -
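The datapool SIZE above lines up with the disk count once you account for drives being sold in decimal terabytes while zpool reports binary units. A quick back-of-the-envelope check (ignoring ZFS metadata overhead):

```python
TB = 10**12   # decimal terabyte, as marketed on drive labels
TIB = 2**40   # binary terabyte, as reported by zpool

vdevs = 6
disks_per_vdev = 7
parity_per_vdev = 2          # raidz2
drive_size = 2 * TB

raw = vdevs * disks_per_vdev * drive_size                        # 84 TB decimal
data = vdevs * (disks_per_vdev - parity_per_vdev) * drive_size   # 60 TB decimal

print(f"raw:  {raw / TIB:.1f} TiB")   # ~76.4, matching zpool's 75.8T within overhead
print(f"data: {data / TIB:.1f} TiB")  # ~54.6, close to the ~53T usable figure
```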
Creating the Gluster .vol configuration files is easily done via the glusterfs-volgen command:
user1@host1:/# glusterfs-volgen --name cluster01 --raid 1 server1.hostname.com:/data/path server2.hostname.com:/data/path
That command produces two volume files: ‘glusterfsd.vol’, used on the server side, and ‘glusterfs.vol’, used on the client.
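The core of the generated client volfile looks roughly like the sketch below: one protocol/client volume per storage node, tied together by the cluster/replicate translator. This is abridged (volgen also layers performance translators such as write-behind and io-cache on top), and the volume/subvolume names are illustrative:

```shell
# Abridged glusterfs.vol sketch -- names and layout are illustrative
volume server1
  type protocol/client
  option transport-type tcp
  option remote-host server1.hostname.com
  option remote-subvolume brick
end-volume

volume server2
  type protocol/client
  option transport-type tcp
  option remote-host server2.hostname.com
  option remote-subvolume brick
end-volume

volume mirror-0
  type cluster/replicate
  subvolumes server1 server2
end-volume
```

Every write from the client goes to both subvolumes, which is what gives us the RAID-1 style mirroring across the two nodes.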
Starting glusterfsd on the server side is straightforward:
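The invocation we use mirrors the client-side one below, pointing the server daemon at the generated glusterfsd.vol (paths assume the same install prefix; adjust for yours):

```shell
# Start the Gluster server daemon with the generated server-side volfile
/usr/glusterfs/sbin/glusterfsd -f /usr/glusterfs/etc/glusterfs/glusterfsd.vol
```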
Starting gluster on the client side is straightforward as well:
user1@host2:/# /usr/glusterfs/sbin/glusterfs --volfile=/usr/glusterfs/etc/glusterfs/glusterfs.vol /mnt/glusterfs/
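Once mounted, a quick sanity check is worth doing before pointing anything at the volume (file name here is just an example):

```shell
# The Gluster volume should show up as a fuse filesystem with the full capacity
df -h /mnt/glusterfs

# Write a test file through the mount; with replicate it should then
# appear in the backing /data/path directory on both storage nodes
echo "replication test" > /mnt/glusterfs/repl-test.txt
```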
In a later blog post I plan to talk more about issues that we have encountered running this specific setup in a production environment.