[Gluster-users] Strange gluster behaviour after snapshot restore

Strahil Nikolov hunter86_bg at yahoo.com
Sat Nov 9 17:37:02 UTC 2019


Hello Community,
today was the first time I had to roll back from a gluster snapshot.
Here is what I did:
1. Killed the HostedEngine VM
2. Stopped the gluster volume
3. Ran 'gluster snapshot restore <snap-from-several-minutes-ago>'
4. Started my volume
5. Created a new snapshot, as the previous one was removed (according to the docs this is expected)
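
For reference, this is roughly the command sequence (just a sketch - the snapshot names below are placeholders, and the HostedEngine VM was already powered off in step 1):

# gluster volume stop engine
# gluster snapshot restore <snapshot-name>
# gluster volume start engine
# gluster snapshot create engine-after-restore engine
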
Now I no longer see my gluster bricks as before ('/gluster_bricks/engine/engine'), but like this:

# gluster volume info engine
  
Volume Name: engine
Type: Replicate
Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded
Status: Started
Snapshot Count: 3
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/run/gluster/snaps/abe6122484624d9f85dd89652fb8d207/brick1/engine
Brick2: gluster2:/run/gluster/snaps/abe6122484624d9f85dd89652fb8d207/brick2/engine
Brick3: ovirt3:/run/gluster/snaps/abe6122484624d9f85dd89652fb8d207/brick3/engine (arbiter)
Options Reconfigured:
features.barrier: disable
cluster.choose-local: on
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
server.event-threads: 4
client.event-threads: 4
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
cluster.enable-shared-storage: enable


Also, I have noticed that 3 new options were added (features.quota, features.inode-quota, features.quota-deem-statfs).
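
To double-check them, something like 'gluster volume get' should show the current values (illustrative only, using the volume name from above):

# gluster volume get engine features.quota
# gluster volume get engine features.inode-quota
# gluster volume get engine features.quota-deem-statfs
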
So why do I see my bricks as '/run/gluster/snaps/.../brickX/engine'? I have created snapshots before and I have never seen such behaviour.
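
As far as I know, gluster snapshots are LVM thin snapshots mounted under /run/gluster/snaps/<snap-id>/, so it looks like the restored volume kept the snapshot's mount path as the brick path. Something like this should show what is actually mounted there on each node (illustrative commands only):

# mount | grep /run/gluster/snaps
# lvs -o lv_name,pool_lv,origin,lv_path
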
Best Regards,
Strahil Nikolov