Monthly Archives: January 2012

The future of GlusterFS

Now that Red Hat has purchased Gluster and is in the process of releasing its storage software appliance, many people are wondering what all this means for the GlusterFS project and the community as a whole.

John Mark Walker conducted a webinar last week entitled ‘The Future of GlusterFS’. At the beginning of the presentation John talks about the history and origins of the Gluster project, then gives a basic overview of the features provided by GlusterFS, and finally discusses what to expect from version 3.3 of GlusterFS and the GlusterFS open source community going forward.

Here are some of the talking points that were discussed during the webinar:

  • Unstructured data is expected to grow 44X by 2020
  • Scale out storage will hold 63,000 PB by 2015
  • Red Hat is aggressively hiring developers with file system knowledge
  • Moving back to an open-source model from an open-core model
  • Open source version will be testing ground for new features
  • RHSSA will be more hardened and thoroughly tested
  • Beta 3 for 3.3 due in Feb/Mar 2012
  • GlusterFS 3.3 expected in Q2/Q3 of 2012

Here is the link to the entire presentation in a downloadable .mp4 format.

Here is a link to all the slides that were presented during the talk.

A tour of btrfs by Avi Miller

Here is a YouTube video of a presentation from this year’s conference given by Avi Miller. The video covers the current state of btrfs and some of its upcoming features, and Avi also provides a demonstration of one of the filesystem recovery tools in action.

Here are a few of the highlights:

  • Lots of performance and stability fixes
  • Lots of code cleanup
  • New compression options (LZO and snappy)
  • Auto file defrag
  • Kernel 3.3 will allow larger block sizes (4k,8k,16k) for better meta-data throughput
  • A ZFS like send/receive is in the works
  • New filesystem checker (btrfsck) should be released by Feb 14th
  • Raid 5/6 code (from Intel) will go into mainline kernel after the release of btrfsck
  • Options exist/will exist to do mixed raid modes for data and meta-data
  • Btrfs will be a production filesystem in the next version of Oracle Unbreakable Linux

No doubt about it, if you are interested in the current state of btrfs you should check out this talk.
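As a quick taste of the compression options mentioned above, btrfs compression is selected at mount time. A minimal sketch, assuming a btrfs filesystem already exists on the device (the device and mount point names are placeholders):

# mount -o compress=lzo /dev/sdb1 /mnt/btrfs

Existing files are not recompressed automatically; only data written after mounting with this option is compressed.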

Mdadm cheat sheet

I have spent some time over the last few weeks getting familiar with mdadm and software RAID on Linux, so I thought I would write down some of the commands and example syntax that I have used while getting started.

1) If we would like to create a new RAID array from scratch, we can use the following example commands:

RAID1 with 2 drives:

# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

RAID5 with 5 drives:

# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

RAID6 with 4 drives and 1 spare:

# mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
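After creation the array runs an initial sync, which you can watch in /proc/mdstat, and once it exists you can put a filesystem on it like any other block device. A quick sketch (ext4 is just an example choice):

# cat /proc/mdstat
# mkfs.ext4 /dev/md0
# mount /dev/md0 /mnt/raid

The array is usable during the initial sync, though performance will be reduced until it completes.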

2) If we would like to add a disk to an existing array:

# mdadm --add /dev/md0 /dev/sdf1 (the new disk is only added as a spare)
# mdadm --grow /dev/md0 -n [new number of active disks, not counting spares] (grow the array to include it)
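Putting those two together, growing a 3-disk RAID5 onto a newly added fourth disk would look something like this (device names are illustrative):

# mdadm --add /dev/md0 /dev/sdf1
# mdadm --grow /dev/md0 -n 4

The reshape that follows can take hours on large disks; its progress shows up in /proc/mdstat.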

3) If we would like to remove a disk from an existing array:

First we need to ‘fail’ the drive:

# mdadm --fail /dev/md0 /dev/sdc1

Next it can be safely removed from the array:

# mdadm --remove /dev/md0 /dev/sdc1
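Combining steps 2) and 3) gives the usual recipe for replacing a failed drive (here /dev/sdc1 has failed and /dev/sdf1 is the replacement; both names are illustrative):

# mdadm --fail /dev/md0 /dev/sdc1
# mdadm --remove /dev/md0 /dev/sdc1
# mdadm --add /dev/md0 /dev/sdf1

Once the new disk is added, the array starts rebuilding onto it automatically.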

4) To make the array survive a reboot, you need to append its details to ‘/etc/mdadm/mdadm.conf’:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf (Debian)
# mdadm --detail --scan >> /etc/mdadm.conf (Everyone else)
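The line this appends looks roughly like the following (the UUID here is made up for illustration):

ARRAY /dev/md0 metadata=1.2 name=host:0 UUID=f9e8d7c6:b5a49382:71605f4e:3d2c1b0a

On Debian-based systems you may also need to run ‘update-initramfs -u’ afterwards so the array is assembled early at boot.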

5) To delete and remove the entire array:

First we need to ‘stop’ the array:

# mdadm --stop /dev/md0

Next it can be removed:

# mdadm --remove /dev/md0
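Note that the member disks still carry RAID metadata in their superblocks after this, and mdadm may try to re-assemble the array at the next boot. To wipe that metadata so the disks can be reused, zero the superblock on each former member (device names are illustrative):

# mdadm --zero-superblock /dev/sda1
# mdadm --zero-superblock /dev/sdb1

Also remember to delete the corresponding ARRAY line from mdadm.conf if you added one.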

6) Examining the status of your RAID array:

There are two options here:

# cat /proc/mdstat
# mdadm --detail /dev/md0
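For reference, /proc/mdstat output for a healthy two-disk RAID1 looks roughly like this (sizes and names are illustrative):

md0 : active raid1 sdb1[1] sda1[0]
      1048512 blocks [2/2] [UU]

The ‘[UU]’ means both members are up; a failed or missing member shows as an underscore, e.g. ‘[U_]’, which is the quickest way to spot a degraded array.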