{"id":1164,"date":"2012-03-06T18:19:57","date_gmt":"2012-03-06T23:19:57","guid":{"rendered":"http:\/\/www.shainmiley.com\/wordpress\/?p=1164"},"modified":"2012-03-07T16:01:45","modified_gmt":"2012-03-07T21:01:45","slug":"zfsonlinux-and-gluster-so-far","status":"publish","type":"post","link":"https:\/\/www.shainmiley.com\/wordpress\/2012\/03\/06\/zfsonlinux-and-gluster-so-far\/","title":{"rendered":"zfsonlinux and gluster so far&#8230;."},"content":{"rendered":"<p>Recently I started to revisit the idea of using ZFS on Linux (<a href=\"http:\/\/zfsonlinux.org\" target=\"_blank\">zfsonlinux<\/a>) as the basis for a server that will eventually be the foundation of our Gluster storage infrastructure. At this point we are using the OpenSolaris version of ZFS and an older (but stable) version of Gluster (3.0.5).<\/p>\n<p>The problem with staying with OpenSolaris (besides the fact that it is no longer actively supported itself) is that we would be unable to upgrade Gluster, and thus unable to take advantage of some of the new and upcoming features in later versions (such as geo-replication, active-active geo-replication, snapshots, and various other bug fixes and performance enhancements).<\/p>\n<p><strong>Hardware:<\/strong><\/p>\n<p>Here are the specs for the hardware I am currently using to test:<\/p>\n<ul>\n<li>CPU: 2 x Intel Xeon E5410 @ 2.33GHz<\/li>\n<li>RAM: 32 GB DDR2 DIMMs<\/li>\n<li>Hard drives: 48 x 2TB Western Digital SATA II<\/li>\n<li>RAID controllers: 2 x 3ware 9650SE-24M8 PCIe<\/li>\n<li>Ubuntu 11.10<\/li>\n<li>Glusterfs version 3.2.5<\/li>\n<li>1 Gbps interconnects (LAN)<\/li>\n<\/ul>\n<p><strong>ZFS installation:<\/strong><\/p>\n<p>I decided to use Ubuntu 11.10 for this round of testing. Currently the daily PPA has a lot of bug fixes and performance improvements that do not exist in the latest stable release (0.6.0-rc6), so the daily PPA is the version that should
be used until either v0.6.0-rc7 or v0.6.0 final is released.<\/p>\n<p>Here is what you will need to get ZFS installed and running:<\/p>\n<div class=\"ex\"># apt-add-repository ppa:zfs-native\/daily<br \/>\n# apt-get update<br \/>\n# apt-get install debootstrap ubuntu-zfs<\/div>\n<p>At this point we can create our first zpool. Here is the syntax used to create a 6-disk raidz2 vdev:<\/p>\n<div class=\"ex\"># zpool create -f tank raidz2 sdc sdd sde sdf sdg sdh<\/div>\n<p>Now let&#8217;s check the status of the zpool:<\/p>\n<div class=\"ex\"># zpool status tank<br \/>\npool: tank<br \/>\nstate: ONLINE<br \/>\nscan: none requested<br \/>\nconfig:<br \/>\nNAME STATE READ WRITE CKSUM<br \/>\ntank ONLINE 0 0 0<br \/>\nraidz2-0 ONLINE 0 0 0<br \/>\nsdc ONLINE 0 0 0<br \/>\nsdd ONLINE 0 0 0<br \/>\nsde ONLINE 0 0 0<br \/>\nsdf ONLINE 0 0 0<br \/>\nsdg ONLINE 0 0 0<br \/>\nsdh ONLINE 0 0 0<br \/>\nerrors: No known data errors<\/div>\n<p><strong>ZFS Benchmarks:<\/strong><\/p>\n<p>I ran a few tests to see what kind of performance I could expect out of ZFS on its own, before adding Gluster on top; that way I would have a better idea of where the bottleneck (if any) existed.<\/p>\n<p>Linux 3.3-rc5 kernel untar:<\/p>\n<div class=\"ex\">single ext4 disk: 3.277s<br \/>\nzfs 2 disk mirror: 19.338s<br \/>\nzfs 6 disk raidz2: 8.256s<\/div>\n<p>dd using block size of 4096:<\/p>\n<div class=\"ex\">single ext4 disk: 204 MB\/s<br \/>\nzfs 2 disk mirror: 7.5 MB\/s<br \/>\nzfs 6 disk raidz2: 174 MB\/s<\/div>\n<p>dd using block size of 1M:<\/p>\n<div class=\"ex\">single ext4 disk: 153.0 MB\/s<br \/>\nzfs 2 disk mirror: 99.7 MB\/s<br \/>\nzfs 6 disk raidz2: 381.2 MB\/s<\/div>\n<p><strong>Gluster + ZFS Benchmarks<\/strong><\/p>\n<p>Next I added Gluster (version 3.2.5) to the mix to see how the two performed together:<\/p>\n<p>Linux 3.3-rc5 kernel untar:<\/p>\n<div class=\"ex\">zfs 6 disk raidz2 + gluster (replication): 4m10.093s<br \/>\nzfs 6 disk raidz2 + gluster (geo replication): 1m12.054s<\/div>\n<p>dd using
block size of 4096:<\/p>\n<div class=\"ex\">zfs 6 disk raidz2 + gluster (replication): 53.6 MB\/s<br \/>\nzfs 6 disk raidz2 + gluster (geo replication): 53.7 MB\/s<\/div>\n<p>dd using block size of 1M:<\/p>\n<div class=\"ex\">zfs 6 disk raidz2 + gluster (replication): 45.7 MB\/s<br \/>\nzfs 6 disk raidz2 + gluster (geo replication): 155 MB\/s<\/div>\n<p><strong>Conclusion<\/strong><\/p>\n<p>So far, so good: I have been running the zfsonlinux port for two weeks now without any real issues. From what I understand there is still a decent amount of work left to do around dedup and compression (neither of which I necessarily require for this particular setup).<\/p>\n<p>The good news is that the zfsonlinux developers have not even really started looking into improving performance at this point, since their main focus thus far has been overall stability.<\/p>\n<p>A good deal of development is also taking place to allow Linux to boot from a ZFS &#8216;\/boot&#8217; partition. This is currently an option on several distros, including Ubuntu and Gentoo; however, the setup requires a fair amount of effort to get going, so it will be nice when this style of setup is supported out of the box.<\/p>\n<p>In terms of Gluster specifically, it performs quite well using geo-replication with larger file sizes. I am really looking forward to the active-active geo-replication feature, currently planned for v3.4, becoming fully implemented and available.
Our current production setup (using two-node replication) has a T3 (WAN) interconnect, so having the option to use geo-replication in the future should really speed up our write throughput, which is currently hampered by the bandwidth of the T3 itself.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Recently I started to revisit the idea of using ZFS on Linux (zfsonlinux) as the basis for a server that will eventually be the foundation of our Gluster storage infrastructure. At this point we are using the OpenSolaris version of ZFS and an older (but stable) version of Gluster (3.0.5). The problem with staying with [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[8,3,13,29,23,14],"tags":[],"_links":{"self":[{"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/posts\/1164"}],"collection":[{"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/comments?post=1164"}],"version-history":[{"count":79,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/posts\/1164\/revisions"}],"predecessor-version":[{"id":1255,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/posts\/1164\/revisions\/1255"}],"wp:attachment":[{"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/media?parent=1164"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/categories?post=1164"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/tags?post=11
64"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}