[Gluster-users] mount with one alive node in replica set

Atin Mukherjee atin.mukherjee83 at gmail.com
Thu Jun 18 19:07:14 UTC 2015


Sent from one plus one
On Jun 18, 2015 8:51 PM, "Игорь Бирюлин" <biryulini at gmail.com> wrote:
>
> Sorry, I didn't check this: after rebooting my second node I only looked at
> "gluster volume info" and found "Status: Started".
> Now I've checked your recommendation and you are right!
> "gluster volume start <volname> force" didn't changed output of "gluster
volume info" but I have mounted my share!
> Thank you very much for your advice!
>
> But why did "gluster volume info" show that my volume was "Started" before
> "gluster volume start <volname> force"?
In this case glusterd stops the brick processes but doesn't mark the volume's
status as stopped.

Ravi, correct me if I am wrong.
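
A minimal sketch of how to tell the two states apart ("info" reports the
configured volume state, "status" reports the actual brick processes; the
output below is abbreviated and hypothetical):

# "info" keeps saying Started even though the bricks are down:
gluster volume info files | grep Status
Status: Started
# "status" shows the brick process is not online:
gluster volume status files
Gluster process                               Port   Online  Pid
Brick xxx1:/storage/gluster_brick_repofiles   N/A    N       N/A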
>
>
>
> 2015-06-18 14:18 GMT+03:00 Ravishankar N <ravishankar at redhat.com>:
>>
>>
>>
>> On 06/18/2015 04:25 PM, Игорь Бирюлин wrote:
>>>
>>> Thank you for your answer!
>>>
>>> I checked your recommendation:
>>> 1. On the first node I blocked all connections from the second node with
>>> iptables (see the sketch after step 2). I checked that on both nodes
>>> "gluster peer status" returned "Disconnected", and that on both nodes the
>>> share was mounted and worked like a local file system.
>>> 2. I rebooted the second node (remember, the first node was still blocked
>>> by iptables). The second node booted without problems and the glusterfs
>>> processes started:
>>> # ps aux | grep [g]luster
>>> root      4145  0.0  0.0 375692 16076 ?        Ssl  13:35   0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
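
A minimal sketch of the isolation in step 1 above (the exact rule and the
peer's address are assumptions; any rule that drops traffic from the second
node will do):

# on the first node, drop all traffic from the second node:
iptables -A INPUT -s <second-node-ip> -j DROP
# verify that the peer now shows as disconnected:
gluster peer status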
>>>
>>> "gluster peer status" return "Disconnected" and volume started on
localhost:
>>> # gluster volume info
>>>  Volume Name: files
>>> Type: Replicate
>>> Volume ID: 41067184-d57a-4132-a997-dbd47c974b40
>>> Status: Started
>>> Number of Bricks: 1 x 2 = 2
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: xxx1:/storage/gluster_brick_repofiles
>>> Brick2: xxx2:/storage/gluster_brick_repofiles
>>>
>>> But I can't mount this volume:
>>> # cat /etc/fstab |grep gluster
>>> 127.0.0.1:/files    /repo    glusterfs    rw,_netdev    0 0
>>> # mount /repo
>>> Mount failed. Please check the log file for more details.
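
A minimal sketch of how to get at that failure reason (the explicit log path
is an assumption; by default the client logs under /var/log/glusterfs/):

# mount manually with an explicit log file to capture the error:
mount -t glusterfs -o log-file=/tmp/repo-mount.log 127.0.0.1:/files /repo
# then look for the failure:
grep -iE "error|failed" /tmp/repo-mount.log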
>>>
>>> I sent part of the log in my first message.
>>>
>>> If I open the first node in iptables again I can mount without problems,
>>> but what should I do when I have lost one node and the other node may be
>>> rebooted?
>>>
>>>
>> `gluster volume start <volname> force` doesn't work?
>>
>>>
>>> 2015-06-17 18:46 GMT+03:00 Ravishankar N <ravishankar at redhat.com>:
>>>>
>>>>
>>>>
>>>> On 06/17/2015 07:04 PM, Игорь Бирюлин wrote:
>>>>>
>>>>> If we turn off one server, the other keeps working and the mounted
>>>>> volume can be used without problems.
>>>>> But if we reboot the other server while the first is turned off (or
>>>>> gluster is stopped on it), the volume cannot be mounted, even though
>>>>> glusterd has started.
>>>>
>>>> If both nodes are down and you bring up only one node, glusterd will not
>>>> start the volume (i.e. the brick, nfs and glustershd processes)
>>>> automatically. It waits for the other node's glusterd also to be up so
>>>> that they are in sync. You can override this behavior by doing a `gluster
>>>> volume start <volname> force` to bring up the gluster process only on
>>>> this node and then mount the volume.
>>>>
>>>> -Ravi
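
A minimal sketch of that recovery sequence on the surviving node (volume name
and mount point taken from the messages above):

# force-start the brick processes on this node only:
gluster volume start files force
# the fstab entry for 127.0.0.1:/files should now mount:
mount /repo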
>>>
>>>
>>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

