Today the team over at Gluster.com announced the availability of version 3.1 of their software. There are currently two different offerings available from Gluster. There is the Gluster Storage Platform, known as ‘GlusterSP’, which provides a Linux-based bare metal installer, a web-based front end, and more.
They also offer ‘GlusterFS’, which is released as open source and provides the same functionality as GlusterSP, but does not require a fresh install; instead, you can use it on an existing Linux or Solaris based system.
The 3.1 release brings the following new features:
Elastic Volume Management: logical storage volumes are decoupled from physical hardware, allowing administrators to grow, shrink and migrate storage volumes without any application downtime. As storage is added, volumes are automatically rebalanced across the cluster, so data remains available online regardless of changes to the underlying hardware.
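As a rough sketch of what that workflow looks like from the 3.1 command line (the volume name, server and brick path below are made-up examples, not from the announcement):

```shell
# Grow a running volume by adding a brick, then spread the
# existing data across all bricks -- no client downtime required.
gluster volume add-brick test-volume server5:/export/brick1
gluster volume rebalance test-volume start
gluster volume rebalance test-volume status   # watch progress
```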
New Gluster Console Manager: the Command Line Interface (CLI), Application Programming Interface (API) and shell are merged into a single interface, giving the CLI higher-level APIs and scripting capabilities. Languages such as Python, Ruby or PHP can be used to script a series of commands that are invoked through the command line. Because the tool requires no new APIs, anything that can be entered at the CLI can also be scripted, letting cloud administrators easily automate large scale operations.
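Since the console manager is an ordinary command-line tool, scripting it from a language like Python amounts to shelling out and parsing the output. A minimal sketch, assuming the typical 3.1-era `gluster volume info` field layout (adjust the parsing for your version):

```python
import subprocess

def gluster(*args):
    """Invoke the gluster CLI (assumes gluster 3.1+ is in PATH)."""
    return subprocess.check_output(("gluster",) + args).decode()

def parse_volume_info(text):
    """Parse 'gluster volume info' output into a dict per volume.

    Each 'Volume Name:' line starts a new volume; subsequent
    'Key: value' lines become entries under that volume.
    """
    volumes = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Volume Name:"):
            current = line.split(":", 1)[1].strip()
            volumes[current] = {}
        elif current and ":" in line:
            key, value = line.split(":", 1)
            volumes[current][key.strip()] = value.strip()
    return volumes
```

With this in place, something like `parse_volume_info(gluster("volume", "info"))` gives you a structure you can loop over to drive further CLI commands.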
Native Network File System (NFS): a native NFS v3 module lets NFS clients connect directly to any storage server in the cluster, with each server speaking NFS and the Gluster protocol simultaneously. NFS requires no specialized training, making it simple and easy to deploy.
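On a Linux client that means an ordinary NFS mount against any server in the cluster; something along these lines (hostname and volume name are placeholders), keeping in mind that Gluster's NFS implementation is v3 over TCP:

```shell
# Mount a Gluster volume over the built-in NFS v3 server (TCP only)
mount -t nfs -o vers=3,proto=tcp,mountproto=tcp \
    server1:/test-volume /mnt/gluster
```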
To find out more about Gluster you can visit Gluster.com, you can also visit Gluster.org if you want to get more familiar with the open source side of the Gluster house.
I came across an interesting project last week while doing some research on OpenSolaris and ZFS. The distribution is called Nexenta. The kernel of Nexenta is based on OpenSolaris, while the userspace tools are based on Debian/Ubuntu.
There is also a commercial offshoot called the Nexenta storage appliance, which is the Nexenta distribution packaged as a ZFS-based storage server. Pricing is dependent on the maximum size of the storage pool.
I have downloaded the free version and am currently planning to test this distro with Gluster as well. The FUSE project (which is required by a Gluster client to mount the filesystem) is currently not stable on OpenSolaris. However, I plan on using Nexenta for the server bricks of the Gluster cluster and using Linux as the client, since FUSE has no issues running on Linux.
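The client side of that plan is straightforward on Linux; a sketch of what it should look like (server name, volfile path and mountpoint are my placeholders):

```shell
# On the Linux client: make sure the fuse kernel module is loaded,
# then mount the Gluster volume served by the Nexenta bricks
modprobe fuse
glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs
```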
I recently went looking to see what sort of open source scalable filesystem projects existed. I wanted to see about putting together a storage solution that would scale upwards of 100 TB using open source software and commodity hardware. During the search I became reacquainted with the GlusterFS project.
I had configured a 3 brick ‘unify’ cluster a while back with one of their 1.3.x builds; however, I had not gotten an opportunity to play with it much after that.
After looking at the various other options out there, spending a considerable amount of time on IRC and reviewing their mailing lists, I ended up settling on GlusterFS due to its seemingly simple design, management, configuration and future roadmap goals.
As it turns out, a few days after I started my search the Gluster team released version 2.0 of their software. At this point I have set up a 5 brick ‘distribute’ (DHT) cluster on a few of our Proxmox (OpenVZ) servers.
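In the 2.0 series, volumes are described with volume files (hand-written or generated with glusterfs-volgen) rather than a management CLI. A minimal client-side volfile for a distribute setup over two of the bricks might look like this sketch, with hostnames and subvolume names as placeholders:

```
# One protocol/client translator per server brick
volume brick1
  type protocol/client
  option transport-type tcp
  option remote-host server1        # placeholder hostname
  option remote-subvolume posix1    # export defined in the server volfile
end-volume

volume brick2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume posix1
end-volume

# distribute (DHT) hashes files across its subvolumes
volume dht
  type cluster/distribute
  subvolumes brick1 brick2
end-volume
```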
I now have 5 independent 4GB bricks and the 20GB mountpoint they present to the client. In this case I am currently exporting CIFS (Samba) on top of the gluster mountpoint. I found some very useful instructions on setup, etc. here. I plan to test NFS as well at some point on real physical hardware, due to current OpenVZ limitations on running NFS servers inside a container.
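The Samba export on top of the gluster mountpoint is just an ordinary share definition; a minimal smb.conf fragment along these lines (share name and path are my examples, not from the guide I followed):

```
[gluster]
    path = /mnt/glusterfs
    read only = no
    browseable = yes
```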
One thing I was unable to get working at this point is having the glusterfs client and server run on the same machine. The single client/server setup worked flawlessly on my Ubuntu laptop, so I suspect that is just an OpenVZ issue that I need to work out.