[Gluster-users] Error when creating volume

Joe Julian joe at julianfamily.org
Fri Aug 23 16:22:36 UTC 2013


On 08/23/2013 12:00 AM, Olivier Desport wrote:
> On 22/08/2013 17:04, Joe Julian wrote:
>> On 08/22/2013 07:58 AM, Olivier Desport wrote:
>>> On 22/08/2013 16:34, Joe Julian wrote:
>>>> On 08/22/2013 06:31 AM, Olivier Desport wrote:
>>>>> On 22/08/2013 15:07, Olivier Desport wrote:
>>>>>> I've removed a volume and I can't re-create it:
>>>>>>
>>>>>> gluster volume create gluster-export gluster-6:/export 
>>>>>> gluster-5:/export gluster-4:/export gluster-3:/export
>>>>>> /export or a prefix of it is already part of a volume
>>>>>>
>>>>>> I've formatted the partition and reinstalled the 4 gluster 
>>>>>> servers and the error still appears.
>>>>>>
>>>>>> Any idea?
>>>>>>
>>>>> Some more information:
>>>>>
>>>>> I've formatted the partition as OCFS2, and the GlusterFS version is 
>>>>> 3.3.1-1.
>>>>>
>>>>> I've tried to set the attributes with the setfattr command, but it 
>>>>> still doesn't work.
>>>> Isn't running a clustered filesystem on top of a clustered 
>>>> filesystem a little bit redundant?
>>>
>>> Perhaps, but can I format my iSCSI shared volume as XFS and mount it 
>>> on several machines to share with GlusterFS?
>> Wait... so are you saying that /export is the same shared filesystem 
>> on each of those servers?
>
> Yes. I want to use GlusterFS to have an HA network share.
GlusterFS *is* a clustered filesystem. Provided you use replication, it 
is an HA network share all by itself. You don't need to put it on top of 
another clustered filesystem. You especially cannot use the /same/ 
filesystem for multiple bricks.
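
For illustration (a sketch only, reusing the four bricks from your 
original command and assuming each /export is a separate local XFS 
filesystem on its own server rather than the shared iSCSI LUN):

    gluster volume create gluster-export replica 2 \
        gluster-6:/export gluster-5:/export \
        gluster-4:/export gluster-3:/export

With four bricks and replica 2 that becomes a 2x2 distributed-replicated 
volume: every file is stored on two of the servers, so any single server 
can go down without taking the share with it.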

In your other email, you suggested that you could scp to the same target 
from two clients when your bricks were XFS. If you're mounting the 
GlusterFS volume on your client(s) (mount -t glusterfs) and accessing 
the same file through those mountpoints, that wouldn't happen. Standard 
POSIX locking would apply, and only one client could write to the file 
at once.
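
A minimal sketch of that client-side mount, assuming gluster-6 resolves 
from the client (any server in the pool can be the mount source, and 
/mnt/gluster is just an example mountpoint):

    mount -t glusterfs gluster-6:/gluster-export /mnt/gluster

As for the original "/export or a prefix of it is already part of a 
volume" error: once a directory has been used as a brick, glusterd tags 
it with extended attributes (and, in 3.3, a .glusterfs directory). A 
sketch of the usual cleanup, run on each server against the brick 
directory and assuming an ordinary local brick at /export:

    setfattr -x trusted.glusterfs.volume-id /export
    setfattr -x trusted.gfid /export
    rm -rf /export/.glusterfs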


