[Gluster-users] Gluster 3.3.0 and VMware ESXi 5

Simon Blackstein simon at blackstein.com
Wed Jun 27 03:05:01 UTC 2012


OK, figured this one out (my word, this took some time...). Found a stray
attribute set on the fourth node after running through them all again.

Back to work... after patching all nodes and removing/recreating the
volume, this now works for me. Definitely not simple, and please: if volumes
could be cleaned up after a delete (removing the .glusterfs dir and the extra
attributes across all nodes), that would be tremendously useful.

I'm going to throw a bunch of VMware VMs on my new volume and try it out.

Recommendation for you, Fernando. I had to roll my own RPMs to get close to
consistency across all nodes. Basically:

- Download glusterfs-3.3.0.tar.gz, untar into its directory
- Replace the files from Anand's patch (
http://review.gluster.com/#change,3617)
- Re-tar the tree and run 'rpmbuild -ta glusterfs-3.3.0.tar.gz'
- You'll end up with RPMs in the rpmbuild/RPMS directory from which you can
scp to other nodes and bring them all up on the new version

You may need to do the same with the Fuse client and work through some
dependencies but it should work.
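The untar/replace/re-tar flow above can be sketched as below. This is a
self-contained illustration only: the source file name is a placeholder, and
the "patched" edit stands in for copying the real files from Anand's change
(http://review.gluster.com/#change,3617).

```shell
set -e
work=$(mktemp -d); cd "$work"

# Stand-in for the downloaded glusterfs-3.3.0.tar.gz (placeholder contents)
mkdir -p glusterfs-3.3.0
echo "original" > glusterfs-3.3.0/some-source.c
tar czf glusterfs-3.3.0.tar.gz glusterfs-3.3.0

# 1. Untar into its directory
tar xzf glusterfs-3.3.0.tar.gz

# 2. Replace files with the patched versions (placeholder edit here)
echo "patched" > glusterfs-3.3.0/some-source.c

# 3. Re-tar; the resulting tarball is what 'rpmbuild -ta' consumes
tar czf glusterfs-3.3.0.tar.gz glusterfs-3.3.0

# Verify the patched file made it into the new tarball
tar xOzf glusterfs-3.3.0.tar.gz glusterfs-3.3.0/some-source.c
```

On a real build host you would then run 'rpmbuild -ta glusterfs-3.3.0.tar.gz'
on the re-packed tarball and pick up the RPMs from rpmbuild/RPMS.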

Many Thanks!

Simon

On Tue, Jun 26, 2012 at 3:37 PM, Anand Avati <anand.avati at gmail.com> wrote:

> Can you get the output of getfattr -d -e hex -m . /gfs
>
> Avati
> On Jun 26, 2012 5:08 PM, "Simon Blackstein" <simon at blackstein.com> wrote:
>
>> Thanks Brian.
>>
>> Yes, got rid of the .glusterfs and .vSphereHA directory that VMware
>> makes. Rebooted, so yes it was remounted and used a different mount
>> point name. Also got rid of attribute I found set on the root:
>>
>> setfattr -x trusted.gfid / && setfattr -x trusted.glusterfs.dht /
>>
>> Any other tips? :)
>>
>> Many Rgds,
>>
>> Simon
>>
>> On Tue, Jun 26, 2012 at 12:19 PM, Brian Candler <B.Candler at pobox.com>wrote:
>>
>>> On Tue, Jun 26, 2012 at 11:46:52AM -0700, Simon Blackstein wrote:
>>> >    Basically did all of that as previously noted:
>>>
>>> And rm -rf .glusterfs ?
>>>
>>> If /gfs is the mountpoint, you could also try
>>>   umount /gfs
>>>   rmdir /gfs
>>>   mkdir /gfs
>>> and remount.
>>>
>>
>>
