[Gluster-users] Brick Sync Issue

Takemura, Won Won.Takemura at healthsparq.com
Wed Mar 9 18:57:13 UTC 2016


Configuration:
GlusterFS v3.6.1

4-server trusted storage pool
Volume: secure-magnetic
3 bricks in the volume
1 brick on each of 3 servers (sketched in CLI terms after this list):
-nfs-secure01
-nfs-secure02
-nfs-secure03
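
For context, here is a rough sketch of that layout in gluster CLI terms. I'm assuming this is a replica 3 volume (one full copy of the data on each brick); the create command below is illustrative only, not the exact one originally used:

# Illustrative only -- assumes replica 3; brick paths taken from the status output below
gluster volume create secure-magnetic replica 3 \
    nfs-secure01.abc.com:/replicate-secure-magnetic \
    nfs-secure02.abc.com:/replicate-secure-magnetic \
    nfs-secure03.abc.com:/replicate-secure-magnetic

# Confirms the actual volume type, replica count, and brick order
gluster volume info secure-magnetic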

I get the following output from 'gluster volume status secure-magnetic detail':

Status of volume: secure-magnetic
------------------------------------------------------------------------------
Brick                : Brick nfs-secure01.abc.com:/replicate-secure-magnetic
Port                 : 49158
Online               : Y
Pid                  : 29938
File System          : ext4
Device               : /dev/mapper/vg--secure--magnetic-lv--secure--magnetic
Mount Options        : rw,noatime,nobarrier,user_xattr
Inode Size           : N/A
Disk Space Free      : 740.5GB
Total Disk Space     : 5.7TB
Inode Count          : 390578176
Free Inodes          : 390395905
------------------------------------------------------------------------------
Brick                : Brick nfs-secure02.abc.com:/replicate-secure-magnetic
Port                 : 49159
Online               : Y
Pid                  : 31986
File System          : ext4
Device               : /dev/mapper/vg--secure--magnetic-lv--secure--magnetic
Mount Options        : rw,noatime,nobarrier,user_xattr
Inode Size           : N/A
Disk Space Free      : 969.7GB
Total Disk Space     : 5.7TB
Inode Count          : 390578176
Free Inodes          : 390398386
------------------------------------------------------------------------------
Brick                : Brick nfs-secure03.abc.com:/replicate-secure-magnetic
Port                 : 49158
Online               : Y
Pid                  : 2121
File System          : ext4
Device               : /dev/mapper/vg--secure--magnetic-lv--secure--magnetic
Mount Options        : rw,noatime,nobarrier,user_xattr
Inode Size           : N/A
Disk Space Free      : 2.5TB
Total Disk Space     : 5.7TB
Inode Count          : 390578176
Free Inodes          : 390398164


And from 'gluster volume status secure-magnetic':

Status of volume: secure-magnetic
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick nfs-secure01.abc.com:/replicate-secure-magnetic   49158   Y       29938
Brick nfs-secure02.abc.com:/replicate-secure-magnetic   49159   Y       31986
Brick nfs-secure03.abc.com:/replicate-secure-magnetic   49158   Y       2121
NFS Server on localhost                                 2049    Y       27013
Self-heal Daemon on localhost                           N/A     Y       27021
NFS Server on nfs-secure02                              2049    Y       32086
Self-heal Daemon on nfs-secure02                        N/A     Y       32093
NFS Server on nfs-secure01                              2049    Y       30034
Self-heal Daemon on nfs-secure01                        N/A     Y       30041
NFS Server on nfs-secure03                              2049    Y       2229
Self-heal Daemon on nfs-secure03                        N/A     Y       2236

Task Status of Volume secure-magnetic
------------------------------------------------------------------------------
There are no active volume tasks

Question:
The free disk space on the three bricks differs widely (740.5GB, 969.7GB, and 2.5TB free out of 5.7TB each), so the replicas appear to be out of sync. What is the most effective way to get the bricks back in sync?
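
For reference, these are the checks I assume would be the starting point (the standard self-heal commands in 3.6, applied to this volume); please correct me if a different approach is better here:

# List entries the self-heal daemon still needs to reconcile, per brick
gluster volume heal secure-magnetic info

# List entries the daemon cannot resolve on its own
gluster volume heal secure-magnetic info split-brain

# Force a full crawl so all files are re-checked and healed
gluster volume heal secure-magnetic full

Is running 'heal ... full' and letting the self-heal daemons catch up the right call, or is there a better option when the free-space gap between bricks is this large?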




