[Gluster-users] Quorum and reboots

David Gossage dgossage at carouselchecks.com
Fri Mar 11 00:04:15 UTC 2016


On Thu, Mar 10, 2016 at 5:30 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:

> On 11/03/2016 9:12 AM, David Gossage wrote:
>
>> Odd thing is I only rebooted the one node, so I was expecting just one
>> version to be healed (the one I had rebooted) and the other two to keep
>> handling writes during the heal process. However, that was not what happened.
>>
>
>
> Can you post your gluster settings?
>
> gluster volume info


(VM Images)
Volume Name: GLUSTER1
Type: Replicate
Volume ID: 167b8e57-28c3-447a-95cc-8410cbdf3f7f
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ccgl1.gl.local:/gluster1/BRICK1/1
Brick2: ccgl2.gl.local:/gluster1/BRICK1/1
Brick3: ccgl3.gl.local:/amanda/gluster1/BRICK1
Options Reconfigured:
performance.readdir-ahead: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
server.allow-insecure: on

(hosted engine)
Volume Name: HOST1
Type: Replicate
Volume ID: aab2e2a8-da2d-4167-9b87-b3317ce1f14d
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ccgl1.gl.local:/gluster/host1/brick1
Brick2: ccgl2.gl.local:/gluster/host1/brick1
Brick3: ccgl3.gl.local:/gluster/host1/brick1
Options Reconfigured:
server.allow-insecure: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 36
storage.owner-uid: 36
performance.readdir-ahead: on
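
For what it's worth, with replica 3 and cluster.quorum-type set to auto,
client-side quorum should require a majority of bricks (2 of 3) to be up
before writes are allowed, so rebooting a single node ought to leave the
volume writable while the returning brick heals. A rough sketch of commands
for checking this (volume name GLUSTER1 taken from the output above; run
from any node in the trusted pool, assuming gluster 3.7+ for "volume get"):

# Confirm the quorum options currently in effect.
gluster volume get GLUSTER1 cluster.quorum-type
gluster volume get GLUSTER1 cluster.server-quorum-type

# List entries still pending heal after the rebooted brick returns.
gluster volume heal GLUSTER1 info

# Per-brick counts of entries pending heal.
gluster volume heal GLUSTER1 statistics heal-count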

>
>
> --
> Lindsay Mathieson
>
>
