[Gluster-users] Got one that stumped me
Mohit Anchlia
mohitanchlia at gmail.com
Fri Dec 2 18:29:21 UTC 2011
Is this normal?
Brick3: sfsccl03:/data/brick-sdc2/glusterfs/dht
Brick4: sfsccl03:/data/brick-sdd2/glusterfs/dht
Both are pointing to the same directory path, just under different brick devices (sdc2 vs. sdd2). Could that be confusing gluster?
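
A quick sanity check, assuming each brick really is its own mount under /data, would be to confirm the two paths resolve to different filesystems:

df -h /data/brick-sdc2 /data/brick-sdd2

If both report the same device, gluster is effectively being handed one brick twice.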
On Fri, Dec 2, 2011 at 10:24 AM, Joe Landman
<landman at scalableinformatics.com> wrote:
> Can't start a volume.
>
> [root at sfsccl03 ~]# gluster volume start brick1
> brick: sfsccl03:/data/brick-sdc2/glusterfs/dht, path creation failed,
> reason: No such file or directory
>
> But ...
>
>
> [root at sfsccl03 ~]# ls -alF /data/brick-sdc2/glusterfs
> total 0
> drwxr-xr-x 4 root root 27 Dec 2 13:00 ./
> drwxr-xr-x 4 root root 107 Jul 5 11:55 ../
> drwxrwxrwt 7 root root 61 Sep 15 11:35 dht/
> drwxr-xr-x 2 root root 6 Dec 2 13:00 dht2/
>
> So it is there.
>
> [root at sfsccl03 ~]# ls -alF /data/brick-sdc2/glusterfs/dht
> total 128
> drwxrwxrwt 7 root root 61 Sep 15 11:35 ./
> drwxr-xr-x 4 root root 27 Dec 2 13:00 ../
> drwxr-xr-x 1230 root root 65536 Oct 24 09:14 equity/
> drwxr-xr-x 1740 oracle root 65536 Nov 30 23:33 opra/
> drwxr-xr-x 35 oracle oinstall 501 Jul 9 17:07 tag/
> drwxr-xr-x 11 root root 126 Jul 1 08:51 taq/
> drwxr-xr-x 2 root root 34 Jul 11 19:44 test/
>
>
> And it is readable.
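>
> One more thing that might be worth checking: gluster keeps its bookkeeping
> in extended attributes on the brick directories, so stale trusted.* xattrs
> left over from an earlier volume could conceivably trip it up. Assuming the
> attr tools are installed, something like
>
> [root at sfsccl03 ~]# getfattr -d -m . -e hex /data/brick-sdc2/glusterfs/dht
>
> would show any gluster attributes sitting on the brick root.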
>
> More info:
>
> [root at sfsccl03 ~]# gluster volume info brick1
>
> Volume Name: brick1
> Type: Distribute
> Status: Stopped
> Number of Bricks: 4
> Transport-type: tcp
> Bricks:
> Brick1: sfsccl01:/data/glusterfs/dht
> Brick2: sfsccl02:/data/glusterfs/dht
> Brick3: sfsccl03:/data/brick-sdc2/glusterfs/dht
> Brick4: sfsccl03:/data/brick-sdd2/glusterfs/dht
>
> [root at sfsccl03 ~]# gluster peer status
> Number of Peers: 2
>
> Hostname: sfsccl02
> Uuid: 6e72d1a8-bdeb-4bfb-806c-7fa8b98cb697
> State: Peer in Cluster (Connected)
>
> Hostname: sfsccl01
> Uuid: 116197cd-5dfe-4881-85ad-5de2be484ba6
> State: Peer in Cluster (Connected)
>
> A volume reset doesn't help.
>
> [root at sfsccl03 ~]# gluster volume reset brick1
> reset volume successful
>
> [root at sfsccl03 ~]# gluster volume start brick1
> brick: sfsccl03:/data/brick-sdc2/glusterfs/dht, path creation failed,
> reason: No such file or directory
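>
> As far as I understand it, "gluster volume reset" only clears reconfigured
> volume options, not brick or path state, so perhaps that result isn't
> surprising. The glusterd log on this node should say why the path creation
> fails; on this build it should be something like (exact filename may vary):
>
> [root at sfsccl03 ~]# tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log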
>
> New volume creation also fails.
>
> [root at sfsccl03 ~]# gluster volume create brick2 transport tcp
> sfsccl01:/data/glusterfs/dht2 sfsccl03:/data/brick-sdc2/glusterfs/dht2
> sfsccl02:/data/glusterfs/dht2 sfsccl03:/data/brick-sdd2/glusterfs/dht2
> brick: sfsccl03:/data/brick-sdc2/glusterfs/dht2, path creation failed,
> reason: No such file or directory
>
> Not good.
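>
> Since the shell on this node can clearly see these directories but glusterd
> apparently can't, one long shot might be to bounce glusterd on 03 (if it
> came up before /data was mounted, for example, it might be working from a
> stale view) and then retry the create:
>
> [root at sfsccl03 ~]# /etc/init.d/glusterd restart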
>
> Taking out the 03 machine
>
> [root at sfsccl03 ~]# gluster volume create brick2 transport tcp
> sfsccl01:/data/glusterfs/dht2 sfsccl02:/data/glusterfs/dht2
> Creation of volume brick2 has been successful. Please start the volume to access data.
>
> I am wondering if I should remove the 03 machine's bricks from the volume,
> start it up with just 01 and 02, and then add the 03 bricks back in once the
> volume is up again. Any thoughts?
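>
> If I go that route, the sequence would presumably look something like the
> following (the remove-brick syntax differs between releases, and on a
> distribute volume the files living on removed bricks become unreachable
> through the mount until the bricks come back, so treat this as a sketch,
> not a recipe):
>
> [root at sfsccl03 ~]# gluster volume remove-brick brick1 sfsccl03:/data/brick-sdc2/glusterfs/dht sfsccl03:/data/brick-sdd2/glusterfs/dht
> [root at sfsccl03 ~]# gluster volume start brick1
> [root at sfsccl03 ~]# gluster volume add-brick brick1 sfsccl03:/data/brick-sdc2/glusterfs/dht sfsccl03:/data/brick-sdd2/glusterfs/dht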
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: landman at scalableinformatics.com
> web : http://scalableinformatics.com
> http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax : +1 866 888 3112
> cell : +1 734 612 4615