[Gluster-users] Cookbook for Clustered NAS

Russell Purinton russell.purinton at gmail.com
Thu Mar 31 03:40:24 UTC 2016


So far, it works terribly! At least for storing VHDs…  I have had tons of issues, both performance and stability related.  I have been trying unsuccessfully for almost 18 months to find a stable solution.  There’s a 6-page thread on the subject on the XenServer forums.

I do not use it for production VMs.  I only use it for storing backups and regular files.  Gluster servers and clients can run fine under XenServer, though.


> On Mar 30, 2016, at 11:37 PM, Pawan Devaiah <pawan.devaiah at gmail.com> wrote:
> 
> Thanks for your inputs Russell and thing
> 
> Russell: I would be interested in knowing how Gluster is working with Xen. Did you have any issues?
> 
> Cheers 
> Dev
> 
> On Thu, Mar 31, 2016 at 4:23 PM, Russell Purinton <russell.purinton at gmail.com <mailto:russell.purinton at gmail.com>> wrote:
> If High Availability is important then you really need 3 nodes, even if the 3rd node is just a 1U server for storing metadata. With only 2 nodes you will encounter split-brain conditions, which can not only crash and corrupt your VMs but can also cause you plenty of downtime while you manually resolve the split brain. I understand you’re starting with 2 nodes, but just don’t expect high availability, and do keep good backups, because a split-brain condition means different data has been written to each node. If you were dealing with, say, small pictures or text documents, that might be easy to reconcile, but it’s much harder to resolve with VHDs. Usually you have to revert to a snapshot after a split brain, otherwise the VM is left with file system corruption.
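> 
> For reference, a rough sketch of what that 3-node setup looks like (the volume name, hostnames and brick paths below are just placeholders, not anything from this thread):
> 
>     # create a replica 3 arbiter 1 volume; the arbiter brick on the 3rd node
>     # stores only file metadata, so that node needs very little disk
>     gluster volume create vmstore replica 3 arbiter 1 \
>         node1:/bricks/brick1 node2:/bricks/brick1 arb1:/bricks/brick1
>     gluster volume start vmstore
> 
>     # list any files currently in split-brain (should stay empty with an arbiter)
>     gluster volume heal vmstore info split-brain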
> 
> Also, with the 3-node (replica 3 arbiter 1) setup there’s currently a bug that results in very slow write speeds, which may make running many VMs problematic.
> 
> As far as access from Windows clients goes, I do not recommend using the Windows NFS client, as I’ve found it to be problematic: if the connection is ever lost, it can cause Windows Explorer to hang completely and require a restart of the VM. Instead, install the Samba server and access the shares over SMB. For Linux clients, you can use NFS, but you’ll probably have better results installing the actual Gluster client.
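> 
> A minimal sketch of both access paths (hostnames, volume name, mount point and share path are placeholders): the native client mount for Linux, and a plain Samba share exported from the mounted Gluster path for Windows.
> 
>     # Linux: native FUSE mount; falls back to the listed backup servers
>     # for the volfile if node1 is unreachable at mount time
>     mount -t glusterfs -o backup-volfile-servers=node2:arb1 node1:/vmstore /mnt/vmstore
> 
>     # Windows: export a directory of the mounted volume via Samba (smb.conf)
>     [backups]
>         path = /mnt/vmstore/backups
>         read only = no
>         browseable = yes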
> 
> Gluster has been pretty good for me for storing backups.
> 
> I haven’t worked at all with VMware, as I run a Citrix XenServer pool myself, so I don’t know what you might run into for issues there.
> 
> Generally speaking I do recommend having a battery-backed RAID controller with onboard DDR or NVFlash cache, as this will significantly improve write speeds compared to going without one; however, I would only recommend using RAID 0. If you use RAID 1, 5, 6, 10, etc. then you will lose a significant amount of space keeping so many copies of the data (Gluster is already replicating across nodes).
> 
> Hope this helps.
> 
> Russ
> 
> 
> On Mar 30, 2016, at 10:28 PM, Pawan Devaiah <pawan.devaiah at gmail.com <mailto:pawan.devaiah at gmail.com>> wrote:
> 
> Hi All,
> 
> I am planning to build a highly available clustered NAS using GlusterFS, which will be accessed by Windows and Linux clients on a VMware or Hyper-V hypervisor.
> I am looking for a cookbook of sorts to achieve this; since this is a new implementation, I want to do it right from the beginning.
> 
> Hardware: 2x 4U servers with 36x 4 TB drives (I understand a minimum of 3 nodes is required for a reliable cluster, but lack of space in the rack means we have to start with 2 and add additional nodes later).
> 
> Workload: Store VMware VM files and store backup data
> 
> Compatibility: VMware hypervisor
> 
> This is going to be a production system, so should I use RAID, or is EC (erasure coding) ready for production?
> 
> High Availability is the key
> 
> Any guidance will be much appreciated.
> 
> Cheers
> Dev
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org <mailto:Gluster-users at gluster.org>
> http://www.gluster.org/mailman/listinfo/gluster-users <http://www.gluster.org/mailman/listinfo/gluster-users>
> 
> 
