[Gluster-users] are they no longer syncing?

Atin Mukherjee atin.mukherjee83 at gmail.com
Mon Jan 18 06:36:32 UTC 2016


-Atin
Sent from one plus one
On Jan 18, 2016 11:41 AM, "Mark Chaney" <mail at lists.macscr.com> wrote:
>
> I have a two-node cluster set up with iSCSI, using image files stored on
> the Gluster cluster as LUNs. They do appear to be syncing, but I have a
> few questions and I appreciate any help you can give me. Thanks for your
> time!
>
> 1) Why does the second brick show as N for online?
> 2) Why is the self-heal daemon shown as N/A? How can I correct that if it
> needs to be corrected?
The SHD doesn't need to listen on any specific port, and it is showing as
online, so there is no issue there.
From the status output it looks like the brick hasn't started on the gluster2
node. Could you check and send the glusterd and brick logs from the gluster2
node?
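(For reference, a minimal sketch of how those logs could be gathered on
gluster2, assuming a stock install with logs under /var/log/glusterfs/ and the
usual 3.x file names; the brick log file name mirrors the brick path:)

  # glusterd (management daemon) log -- default location on a stock install
  tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

  # brick log -- the file name mirrors the brick path, so /var/gluster-storage
  # becomes var-gluster-storage.log
  tail -n 100 /var/log/glusterfs/bricks/var-gluster-storage.log

  # if the brick process simply failed to start, a force start respawns only
  # the bricks that are down, then re-check the status
  gluster volume start volume1 force
  gluster volume status volume1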
> 3) Should I really be mounting the Gluster volumes on each Gluster node for
> iSCSI access, or should I be accessing /var/gluster-storage directly?
> 4) If I only have about 72GB of files stored in Gluster, why is each
> Gluster host using about 155GB? Are there duplicates stored somewhere, and
> why?
>
> root@gluster1:~# gluster volume status volume1
> Status of volume: volume1
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster1:/var/gluster-storage         49152     0          Y       3043
> Brick gluster2:/var/gluster-storage         N/A       N/A        N       N/A
> NFS Server on localhost                     2049      0          Y       3026
> Self-heal Daemon on localhost               N/A       N/A        Y       3034
> NFS Server on gluster2                      2049      0          Y       2738
> Self-heal Daemon on gluster2                N/A       N/A        Y       2743
>
> Task Status of Volume volume1
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> root@gluster1:~# gluster peer status
> Number of Peers: 1
>
> Hostname: gluster2
> Uuid: abe7ee21-bea9-424f-ac5c-694bdd989d6b
> State: Peer in Cluster (Connected)
> root@gluster1:~#
> root@gluster1:~# mount | grep gluster
> gluster1:/volume1 on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
>
>
> root@gluster2:~# gluster volume status volume1
> Status of volume: volume1
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster1:/var/gluster-storage         49152     0          Y       3043
> Brick gluster2:/var/gluster-storage         N/A       N/A        N       N/A
> NFS Server on localhost                     2049      0          Y       2738
> Self-heal Daemon on localhost               N/A       N/A        Y       2743
> NFS Server on gluster1.mgr.example.com      2049      0          Y       3026
> Self-heal Daemon on gluster1.mgr.example.com  N/A     N/A        Y       3034
>
> Task Status of Volume volume1
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> root@gluster2:~# gluster peer status
> Number of Peers: 1
>
> Hostname: gluster1.mgr.example.com
> Uuid: dff9118b-a2bd-4cd8-b562-0dfdbd2ea8a3
> State: Peer in Cluster (Connected)
> root@gluster2:~#
> root@gluster2:~# mount | grep gluster
> gluster1:/volume1 on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
> root@gluster2:~#