[Gluster-users] are they no longer syncing?
Mark Chaney
mail at lists.macscr.com
Mon Jan 18 07:11:39 UTC 2016
Thanks! So that I can create a proper check for check_mk, is there a
command I can use to see whether the current brick is online, without
having to parse the full cluster status output for the volume?
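
A minimal sketch of one possible check, assuming a gluster CLI that
accepts a single-brick argument to 'volume status' and the --xml
option (the brick path here is taken from the status output quoted
below):

    # status of just this brick, instead of the whole volume
    gluster volume status volume1 gluster1:/var/gluster-storage

    # the same in XML form, which is easier for a check_mk plugin to parse
    gluster volume status volume1 gluster1:/var/gluster-storage --xml
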
How about questions 3 and 4?
On 2016-01-18 00:43, Anuradha Talur wrote:
> ----- Original Message -----
>> From: "Mark Chaney" <mail at lists.macscr.com>
>> To: gluster-users at gluster.org
>> Sent: Monday, January 18, 2016 11:21:18 AM
>> Subject: [Gluster-users] are they no longer syncing?
>>
>> I have a two-node Gluster cluster set up, with iSCSI exporting the
>> image files stored on the Gluster volume as LUNs. The bricks do appear
>> to be syncing, but I have a few questions and would appreciate any help
>> you can give me. Thanks for your time!
>>
>> 1) Why does the second brick show as N for online?
>
> 'N' means that the second brick is not online. Running 'gluster volume
> start <volname> force'
> should bring the brick up.
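>
> For instance, with the volume name from your output (a sketch; 'force'
> here just starts any brick processes that aren't already running):
>
>     gluster volume start volume1 force
>     gluster volume status volume1
>
> After that, the status output should show 'Y' under Online for both bricks.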
>
>> 2) Why is the self-heal daemon shown as N/A? How can I correct that, if
>> it needs to be corrected?
> Self-heal daemon status on both gluster1 and gluster2 is shown as
> online (Y). The N/A only appears in the port columns, since the
> self-heal daemon doesn't listen on a TCP/RDMA port, so nothing needs
> to be corrected.
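>
> If you want to confirm that the two bricks are actually in sync once
> the second brick is back up, something like the following should list
> any files still pending heal (empty output means nothing is waiting):
>
>     gluster volume heal volume1 info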
>
>> 3) Should I really be mounting the gluster volumes on each gluster node
>> for iSCSI access, or should I be accessing /var/gluster-storage
>> directly?
>> 4) If I only have about 72GB of files stored in gluster, why does each
>> gluster host show about 155GB used? Are there duplicates stored
>> somewhere, and why?
>>
>> root@gluster1:~# gluster volume status volume1
>> Status of volume: volume1
>> Gluster process                              TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gluster1:/var/gluster-storage          49152     0          Y       3043
>> Brick gluster2:/var/gluster-storage          N/A       N/A        N       N/A
>> NFS Server on localhost                      2049      0          Y       3026
>> Self-heal Daemon on localhost                N/A       N/A        Y       3034
>> NFS Server on gluster2                       2049      0          Y       2738
>> Self-heal Daemon on gluster2                 N/A       N/A        Y       2743
>>
>> Task Status of Volume volume1
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> root@gluster1:~# gluster peer status
>> Number of Peers: 1
>>
>> Hostname: gluster2
>> Uuid: abe7ee21-bea9-424f-ac5c-694bdd989d6b
>> State: Peer in Cluster (Connected)
>> root@gluster1:~#
>> root@gluster1:~# mount | grep gluster
>> gluster1:/volume1 on /mnt/glusterfs type fuse.glusterfs
>> (rw,default_permissions,allow_other,max_read=131072)
>>
>>
>> root@gluster2:~# gluster volume status volume1
>> Status of volume: volume1
>> Gluster process                              TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gluster1:/var/gluster-storage          49152     0          Y       3043
>> Brick gluster2:/var/gluster-storage          N/A       N/A        N       N/A
>> NFS Server on localhost                      2049      0          Y       2738
>> Self-heal Daemon on localhost                N/A       N/A        Y       2743
>> NFS Server on gluster1.mgr.example.com       2049      0          Y       3026
>> Self-heal Daemon on gluster1.mgr.example.com N/A       N/A        Y       3034
>>
>> Task Status of Volume volume1
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> root@gluster2:~# gluster peer status
>> Number of Peers: 1
>>
>> Hostname: gluster1.mgr.example.com
>> Uuid: dff9118b-a2bd-4cd8-b562-0dfdbd2ea8a3
>> State: Peer in Cluster (Connected)
>> root@gluster2:~#
>> root@gluster2:~# mount | grep gluster
>> gluster1:/volume1 on /mnt/glusterfs type fuse.glusterfs
>> (rw,default_permissions,allow_other,max_read=131072)
>> root@gluster2:~#
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>