[Gluster-users] Problems repairing volume

Kevin Bridges kevin at cyberswat.com
Fri Jul 17 19:42:59 UTC 2015


Thank you for the reply.  I ran the volume heal command and then ran the stat
crawl from a client that has this volume mounted.  It does not appear that the
bricks are replicating all of the files; the file counts on the two bricks
still differ:

[root at gluster01 nmd]# find . -type f | wc -l
824371
[root at gluster02 nmd]# find . -type f | wc -l
741043
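
For reference, the stat crawl from the client was something along these lines
(the mount point /mnt/nmd below is just a placeholder for wherever the client
mounts the volume):

  find /mnt/nmd -noleaf -print0 | xargs --null stat > /dev/null 2>&1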

[root at gluster01 sdb1]# gluster volume heal nmd info
Brick gluster01.newmediadenver.com:/srv/sdb1/nmd/
Number of entries: 0

Brick gluster02.newmediadenver.com:/srv/sdb1/nmd/
Number of entries: 0
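
Heal info reports zero entries on both bricks even though the file counts
above still differ.  In case it is useful, here is what else I can pull for
the heal state (command forms as I understand them for 3.7, so please correct
me if they are off):

  gluster volume heal nmd statistics heal-count
  gluster volume heal nmd info split-brain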

[root at gluster01 nmd]# gluster volume status nmd
Status of volume: nmd
Gluster process                                   TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------------
Brick gluster01.newmediadenver.com:/srv/sdb1/nmd  49152     0          Y       27680
Brick gluster02.newmediadenver.com:/srv/sdb1/nmd  49152     0          Y       20921
NFS Server on localhost                           N/A       N/A        N       N/A
Self-heal Daemon on localhost                     N/A       N/A        Y       27673
NFS Server on 10.0.2.85                           N/A       N/A        N       N/A
Self-heal Daemon on 10.0.2.85                     N/A       N/A        Y       20914

Task Status of Volume nmd
------------------------------------------------------------------------------
There are no active volume tasks

On Fri, Jul 17, 2015 at 8:25 AM, Curro Rodriguez <curro at tyba.com> wrote:

>
> Hello Kevin,
>
> I think you can't rebalance because you are not using a distributed
> volume; you are using a replicated volume instead.  I had a similar problem
> some time ago, and gluster's self-heal is supposed to sync the replicas.
>
> You can trigger the heal with:
>
> gluster volume heal nmd full
>
>
> Anyway, after a lot of reading and googling, one member told me that we
> could resync faster by running
>
> find . -exec stat {} \;
>
> on a mounted client.  This solution wasn't ideal, but it started to resync
> faster than the self-heal daemon.  Still slow, around 5 MB/s, but better
> than nothing.
>
> I am starting with glusterfs too, but I am sure someone can help you
> better than I can.
>
> Kind regards.
>
> Curro Rodríguez.
>
> On Fri, Jul 17, 2015 at 4:15 PM, Kevin Bridges <kevin at cyberswat.com>
> wrote:
>
>> I lost one of my bricks and attempted to rebuild it.  I did not
>> understand what I was doing and created a mess.  I'm looking for guidance
>> so that I don't create a bigger mess.
>>
>> I believe that the gluster mount is relatively simple.  It's a two-brick
>> replicated volume (gluster01 & gluster02).  I lost gluster02 and attempted
>> to replace it.  Now that it is replaced, the files do not seem to match
>> what is on the brick that I did not lose.  I would like to repair these
>> bricks and then add more storage capacity to the volume.
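>>
>> For the capacity step, my understanding is that bricks get added in
>> multiples of the replica count, roughly along these lines (the new
>> hostnames and paths are placeholders, not real servers):
>>
>>   gluster volume add-brick nmd gluster03.example.com:/srv/sdb1/nmd \
>>       gluster04.example.com:/srv/sdb1/nmd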
>>
>> Below is the output of `df -h`, `gluster peer status`, and `gluster
>> volume info` for each of the servers.  I'm concerned by the output of the
>> `gluster peer status` and `gluster volume rebalance nmd status` commands.
>>
>> Any help is vastly appreciated.
>>
>> glusterfs 3.7.2 built on Jun 23 2015 12:13:13
>>
>> [root at gluster01 /]# df -h
>> Filesystem      Size  Used Avail Use% Mounted on
>> /dev/xvde       7.9G  2.3G  5.3G  30% /
>> tmpfs           3.7G     0  3.7G   0% /dev/shm
>> /dev/xvdf1       63G   38G   23G  62% /srv/sdb1
>> [root at gluster01 /]# gluster peer status
>> Number of Peers: 1
>>
>> Hostname: 10.0.2.85
>> Uuid: 5f75bd77-0faf-4fb8-9819-83326c4f77f7
>> State: Peer in Cluster (Connected)
>> Other names:
>> gluster02.newmediadenver.com
>> [root at gluster01 /]# gluster volume info all
>>
>> Volume Name: nmd
>> Type: Replicate
>> Volume ID: 62bec597-b479-4bfd-88dc-44f5bb88d737
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster01.newmediadenver.com:/srv/sdb1/nmd
>> Brick2: gluster02.newmediadenver.com:/srv/sdb1/nmd
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> [root at gluster01 /]# gluster volume info nmd
>>
>> Volume Name: nmd
>> Type: Replicate
>> Volume ID: 62bec597-b479-4bfd-88dc-44f5bb88d737
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster01.newmediadenver.com:/srv/sdb1/nmd
>> Brick2: gluster02.newmediadenver.com:/srv/sdb1/nmd
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> [root at gluster01 /]# gluster volume rebalance nmd status
>> volume rebalance: nmd: failed: Volume nmd is not a distribute volume or
>> contains only 1 brick.
>> Not performing rebalance
>>
>> [root at gluster02 /]# df -h
>> Filesystem      Size  Used Avail Use% Mounted on
>> /dev/xvde       7.9G  2.5G  5.1G  33% /
>> tmpfs           3.7G     0  3.7G   0% /dev/shm
>> /dev/xvdh1       63G   30G   31G  50% /srv/sdb1
>> [root at gluster02 /]# gluster peer status
>> Number of Peers: 1
>>
>> Hostname: gluster01.newmediadenver.com
>> Uuid: afb3e1c3-de9e-4c06-ba5c-5551b1d7030e
>> State: Peer in Cluster (Connected)
>> [root at gluster02 /]# gluster volume info all
>>
>> Volume Name: nmd
>> Type: Replicate
>> Volume ID: 62bec597-b479-4bfd-88dc-44f5bb88d737
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster01.newmediadenver.com:/srv/sdb1/nmd
>> Brick2: gluster02.newmediadenver.com:/srv/sdb1/nmd
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> [root at gluster02 /]# gluster volume info nmd
>>
>> Volume Name: nmd
>> Type: Replicate
>> Volume ID: 62bec597-b479-4bfd-88dc-44f5bb88d737
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster01.newmediadenver.com:/srv/sdb1/nmd
>> Brick2: gluster02.newmediadenver.com:/srv/sdb1/nmd
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> [root at gluster02 /]# gluster volume rebalance nmd status
>> volume rebalance: nmd: failed: Volume nmd is not a distribute volume or
>> contains only 1 brick.
>> Not performing rebalance
>>
>> Thanks,
>> Kevin Bridges
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>