[Gluster-users] Can't add-brick to an encrypted volume without the master key

Mark Wadham gu at rkw.io
Thu May 18 08:29:43 UTC 2017


Hi,

I followed this guide for setting up an encrypted volume:

https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.5/Disk%20Encryption.md

I started with 3 nodes (EC2) and this all worked fine.  My understanding 
from the article is that the master key does not need to be present on 
the glusterfs nodes, and as such is only known to the client machines.

My issue comes when trying to make this solution resilient - terminating 
a node and having it respawned by the ASG, I’m then apparently unable 
to add the brick from the new node into the existing volume.

It fails with:

# gluster volume add-brick gv0 replica 3 glusterfs2:/data/brick/gv0
volume add-brick: failed: Commit failed on glusterfs2. Please check log 
file for details.

The log on glusterfs2 shows:

# cat gv0-add-brick-mount.log
[2017-05-18 07:55:24.712211] I [MSGID: 100030] [glusterfsd.c:2454:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 
3.8.11 (args: /usr/sbin/glusterfs --volfile /tmp/gv0.tcp-fuse.vol 
--client-pid -6 -l /var/log/glusterfs/gv0-add-brick-mount.log 
/tmp/mntnwGall)
[2017-05-18 07:55:24.891563] E [crypt.c:4306:master_set_master_vol_key] 
0-gv0-crypt: FATAL: can not open file with master key
[2017-05-18 07:55:24.891591] E [MSGID: 101019] 
[xlator.c:433:xlator_init] 0-gv0-crypt: Initialization of volume 
'gv0-crypt' failed, review your volfile again
[2017-05-18 07:55:24.891603] E [MSGID: 101066] 
[graph.c:324:glusterfs_graph_init] 0-gv0-crypt: initializing translator 
failed
[2017-05-18 07:55:24.891608] E [MSGID: 101176] 
[graph.c:673:glusterfs_graph_activate] 0-graph: init failed
[2017-05-18 07:55:24.891987] W [glusterfsd.c:1327:cleanup_and_exit] 
(-->/usr/sbin/glusterfs(glusterfs_volumes_init+0xfd) [0x7fead0b9e72d] 
-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x172) [0x7fead0b9e5d2] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7fead0b9db4b] ) 0-: 
received signum (1), shutting down
[2017-05-18 07:55:24.892018] I [fuse-bridge.c:5788:fini] 0-fuse: 
Unmounting '/tmp/mntnwGall'.
[2017-05-18 07:55:24.893023] W [glusterfsd.c:1327:cleanup_and_exit] 
(-->/lib64/libpthread.so.0(+0x7dc5) [0x7feacf509dc5] 
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7fead0b9dcd5] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7fead0b9db4b] ) 0-: 
received signum (15), shutting down


This seems to be suggesting that the master key needs to be present on 
the glusterfs nodes themselves in order to add a brick, but this 
wasn’t the case when I set the cluster up.  When I set it up I did 
create the volume before enabling the encryption though.

What’s going on here?  Do the glusterfs nodes actually need the master 
key in order to work?

Thanks,
Mark


More information about the Gluster-users mailing list