[Gluster-users] mount with one alive node in replica set

Игорь Бирюлин biryulini at gmail.com
Thu Jun 18 10:55:31 UTC 2015


Thank you for your answer!

I checked your recommendation:
1. On the first node I blocked all connections from the second node with
iptables. I checked that on both nodes "gluster peer status" returned
"Disconnected", and that on both nodes the share was mounted and worked
well, like a local file system.
2. I rebooted the second node (remember, the first node was still blocked by
iptables). The second node booted without problems and the glusterd process
started:
# ps aux | grep [g]luster
root      4145  0.0  0.0 375692 16076 ?        Ssl  13:35   0:00
/usr/sbin/glusterd -p /var/run/glusterd.pid

"gluster peer status" return "Disconnected" and volume started on localhost:
# gluster volume info
 Volume Name: files
Type: Replicate
Volume ID: 41067184-d57a-4132-a997-dbd47c974b40
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: xxx1:/storage/gluster_brick_repofiles
Brick2: xxx2:/storage/gluster_brick_repofiles
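
Note that "Status: Started" in the volume info does not necessarily mean the
brick process is actually running on this node (the ps output above shows
only glusterd). As far as I understand, that can be verified with the volume
status command, roughly like this:
# gluster volume status files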

But I can't mount this volume:
# cat /etc/fstab |grep gluster
127.0.0.1:/files    /repo    glusterfs    rw,_netdev    0 0
# mount /repo
Mount failed. Please check the log file for more details.

I sent part of the log in my first message.
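
For more detail, the FUSE client log for a mount of /repo is typically
/var/log/glusterfs/repo.log (the exact file name may differ on your
installation), and, if I am not mistaken, the verbosity can be raised with
the log-level mount option, for example:
# mount -t glusterfs -o log-level=DEBUG 127.0.0.1:/files /repo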

If I open the first node again in iptables, I can mount without problems,
but what should I do when I have lost one node and there is a chance that
the other node will be rebooted?



2015-06-17 18:46 GMT+03:00 Ravishankar N <ravishankar at redhat.com>:

>
>
> On 06/17/2015 07:04 PM, Игорь Бирюлин wrote:
>
>> If we turn off one server, another will be work and mounted volume will
>> be use without problem.
>> But if we rebooted our another server, when first was turned off (or
>> gluster was stopped on this server), our volume cann't mount (glusterd
>> started).
>>
> If both nodes are down and you bring up only one node, glusterd will not
> start the volume (i.e. the brick, nfs and glustershd processes)
> automatically. It waits for the other node's glusterd also to be up so that
> they are in sync. You can override this behavior by doing a `gluster volume
> start <volname> force` to bring up the gluster process only on this node
> and then mount the volume.
>
> -Ravi
>

