{"id":1501,"date":"2013-11-13T17:16:05","date_gmt":"2013-11-13T22:16:05","guid":{"rendered":"http:\/\/www.shainmiley.com\/wordpress\/?p=1501"},"modified":"2013-12-03T19:47:32","modified_gmt":"2013-12-04T00:47:32","slug":"ceph-braindump-part1","status":"publish","type":"post","link":"https:\/\/www.shainmiley.com\/wordpress\/2013\/11\/13\/ceph-braindump-part1\/","title":{"rendered":"Ceph braindump part1"},"content":{"rendered":"<p>After spending about 4 months testing, benchmarking, setting up and breaking down various Ceph clusters, I though I would spend time documenting some of the things I have learned while setting up cephfs, rbd and radosgw along the way.<\/p>\n<p>First let me talk a little bit about the details of the cluster that we will be putting into production over the next several weeks.<\/p>\n<p><strong>Cluster specs:<\/strong><\/p>\n<ul>\n<li>6 x Dell R-720xd;64 GB of RAM; for OSD nodes<\/li>\n<li>72 x 4TB SAS drives as OSD&#8217;s<\/li>\n<li>3 x Dell R-420;32 GB of RAM; for MON\/RADOSGW\/MDS nodes<\/li>\n<li>2 x Force10 S4810 switches<\/li>\n<li>4 x 10 GigE LCAP bonded Intel cards<\/li>\n<li>Ubuntu 12.04 (AMD64)<\/li>\n<li>Ceph 0.72.1 (emperor)<\/li>\n<li>2400 placement groups<\/li>\n<li>261TB of usable space<\/li>\n<\/ul>\n<p>The process I used to set- up and tear down our cluster during testing was quite simple, after installing &#8216;ceph-deploy&#8217; on the admin node:<\/p>\n<ol>\n<li>ceph-deploy new mon1 mon2 mon3<\/li>\n<li>ceph-deploy install\u00c2\u00a0 mon1 mon2 mon3 osd1 osd2 osd3 osd4 osd5 osd6<\/li>\n<li>ceph-deploy mon create mon1 mon2 mon3<\/li>\n<li>ceph-deploy gatherkeys mon1<\/li>\n<li>ceph-deploy osd create osd1:sdb<\/li>\n<li>ceph-deploy osd create osd1:sdc<br \/>\n&#8230;&#8230;&#8230;.<\/li>\n<\/ol>\n<p><strong>The uninstall process went something like this:<\/strong><\/p>\n<ol>\n<li>ceph-deploy disk zap osd1:sdb<br \/>\n&#8230;&#8230;&#8230;.<\/li>\n<li>ceph-deploy purge mon1 mon2 mon3 osd1 osd2 osd3 osd4 osd5 osd6<\/li>\n<li>ceph-deploy purgedata mon1 mon2 mon3 osd1 osd2 osd3 osd4 osd5 osd6<\/li>\n<\/ol>\n<p><strong>Additions to ceph.conf:<\/strong><\/p>\n<p>Since we wanted to configure an appropriate journal size for our 10GigE network, mount xfs with\u00c2\u00a0appropriate options and configure radosgw, we added the following to our ceph.conf (after &#8216;ceph-deploy new but before &#8216;ceph-deploy install&#8217;:<\/p>\n<p>[global]<br \/>\nosd_journal_size = 10240<br \/>\nosd_mount_options_xfs = &#8220;rw,noatime,nodiratime,logbsize=256k,logbufs=8,inode64&#8221;<br \/>\nosd_mkfs_options_xfs = &#8220;-f -i size=2048&#8221;<\/p>\n<p>[client.radosgw.gateway]<br \/>\nhost = mon1<br \/>\nkeyring = \/etc\/ceph\/keyring.radosgw.gateway<br \/>\nrgw_socket_path = \/tmp\/radosgw.sock<br \/>\nlog_file = \/var\/log\/ceph\/radosgw.log<br \/>\nadmin_socket = \/var\/run\/ceph\/radosgw.asok<br \/>\nrgw_dns_name = yourdomain.com<br \/>\ndebug rgw = 20<br \/>\nrgw print continue = true<br \/>\nrgw should log = true<br \/>\nrgw enable usage log = true<\/p>\n<p><strong>Benchmarking:<\/strong><\/p>\n<p>I used the following commands to benchmark rados, rbd, cephfs, etc<\/p>\n<ol>\n<li>rados -p rbd\u00c2\u00a0 bench 20 write &#8211;no-cleanup<\/li>\n<li>rados -p rbd\u00c2\u00a0 bench 20 seq<\/li>\n<li>dd bs=1M count=512 if=\/dev\/zero of=test conv=fdatasync<\/li>\n<li>dd bs=4M count=512 if=\/dev\/zero of=test conv=fdatasync<\/li>\n<\/ol>\n<p><strong>\u00c2\u00a0Ceph blogs worth reading:<\/strong><\/p>\n<p><a href=\"http:\/\/ceph.com\/community\/blog\/\" 
target=\"_blank\">http:\/\/ceph.com\/community\/blog\/<\/a><br \/>\n<a href=\"http:\/\/www.sebastien-han.fr\/blog\/\" target=\"_blank\">http:\/\/www.sebastien-han.fr\/blog\/<\/a><br \/>\n<a href=\"http:\/\/dachary.org\/\" target=\"_blank\">http:\/\/dachary.org\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>After spending about 4 months testing, benchmarking, setting up and breaking down various Ceph clusters, I though I would spend time documenting some of the things I have learned while setting up cephfs, rbd and radosgw along the way. First let me talk a little bit about the details of the cluster that we will [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[35,3,23,32],"tags":[],"_links":{"self":[{"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/posts\/1501"}],"collection":[{"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/comments?post=1501"}],"version-history":[{"count":23,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/posts\/1501\/revisions"}],"predecessor-version":[{"id":1530,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/posts\/1501\/revisions\/1530"}],"wp:attachment":[{"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/media?parent=1501"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/categories?post=1501"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.shainmiley.com\/wordpress\/wp-json\/wp\/v2\/tags?post=1501"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}