[Gluster-users] Trying to recover a gluster volume upon reinstall

Pat Haley phaley at mit.edu
Fri Jun 3 14:15:25 UTC 2016


Hi,

Just to clarify, our main question is:

This is a distributed volume, not replicated. Can we delete the gluster 
volume, remove the .glusterfs folders from each brick and recreate the 
volume? Will it re-index the files on both bricks?
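
To be concrete, the sequence we have in mind is roughly the following
(the volume name, host and /mnt/brick1 are taken from the brick log
quoted below; the second brick is written as a placeholder, and whether
the setfattr steps are needed at all is our assumption, so please
correct us if this is the wrong approach):

   gluster volume stop data-volume
   gluster volume delete data-volume

   # on each brick: remove the old gluster metadata, keep the data
   rm -rf /mnt/brick1/.glusterfs
   setfattr -x trusted.glusterfs.volume-id /mnt/brick1
   setfattr -x trusted.gfid /mnt/brick1
   # (and the same on the second brick)

   gluster volume create data-volume mseas-data2:/mnt/brick1 \
       <second-brick> force
   gluster volume start data-volume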

Thanks

On 06/02/2016 04:50 PM, Pat Haley wrote:
>
>
> We have a machine that previously ran CentOS 6.8 and Gluster 3.7.10-1 
> with 2 bricks.  The machine had to be rebuilt with CentOS 6.8, and the 
> 2 bricks were not reformatted (the data on them is intact).  Gluster 
> 3.7.11 was installed with the new OS, and we can start the service, 
> create the volume with the 2 bricks, and mount the gluster share.
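>
> For reference, what we did on the rebuilt machine was roughly the 
> following (from memory, with the second brick written as a 
> placeholder):
>
>   service glusterd start
>   gluster volume create data-volume mseas-data2:/mnt/brick1 \
>       <second-brick> force
>   gluster volume start data-volume
>   mount -t glusterfs mseas-data2:/data-volume /data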
>
>
>
> The folder name (gluster-data) in the mounted share is correct, but we 
> are getting an error:
>
>  ls /data
> ls: cannot access /data/gluster-data: No such file or directory
> gluster-data
>
> The data and directories are still there (i.e. we can still see them 
> by looking at the underlying file systems), but gluster isn't serving them.
>
> Looking in the log file for each brick we see the same errors:
> [2016-06-03 04:30:07.494068] I [MSGID: 100030] 
> [glusterfsd.c:2332:main] 0-/usr/sbin/glusterfsd: Started running 
> /usr/sbin/glusterfsd version 3.7.11 (args: /usr/sbin/glusterfsd -s 
> mseas-data2 --volfile-id data-volume.mseas-data2.mnt-brick1 -p 
> /var/lib/glusterd/vols/data-volume/run/mseas-data2-mnt-brick1.pid -S 
> /var/run/gluster/aa572e87933c930cb53983de35bdccbe.socket --brick-name 
> /mnt/brick1 -l /var/log/glusterfs/bricks/mnt-brick1.log 
> --xlator-option 
> *-posix.glusterd-uuid=c1110fd9-cb99-4ca1-b18a-536a122d67ef 
> --brick-port 49152 --xlator-option data-volume-server.listen-port=49152)
> [2016-06-03 04:30:07.510671] I [MSGID: 101190] 
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started 
> thread with index 1
> [2016-06-03 04:30:07.519040] I [graph.c:269:gf_add_cmdline_options] 
> 0-data-volume-server: adding option 'listen-port' for volume 
> 'data-volume-server' with value '49152'
> [2016-06-03 04:30:07.519089] I [graph.c:269:gf_add_cmdline_options] 
> 0-data-volume-posix: adding option 'glusterd-uuid' for volume 
> 'data-volume-posix' with value 'c1110fd9-cb99-4ca1-b18a-536a122d67ef'
> [2016-06-03 04:30:07.519479] I [MSGID: 101190] 
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started 
> thread with index 2
> [2016-06-03 04:30:07.519486] I [MSGID: 115034] 
> [server.c:403:_check_for_auth_option] 0-/mnt/brick1: skip format check 
> for non-addr auth option auth.login./mnt/brick1.allow
> [2016-06-03 04:30:07.519537] I [MSGID: 115034] 
> [server.c:403:_check_for_auth_option] 0-/mnt/brick1: skip format check 
> for non-addr auth option 
> auth.login.0016d59c-9691-4bb2-bc44-b1d8b19dd230.password
> [2016-06-03 04:30:07.520926] I 
> [rpcsvc.c:2215:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: 
> Configured rpc.outstanding-rpc-limit with value 64
> [2016-06-03 04:30:07.521003] W [MSGID: 101002] 
> [options.c:957:xl_opt_validate] 0-data-volume-server: option 
> 'listen-port' is deprecated, preferred is 
> 'transport.socket.listen-port', continuing with correction
> [2016-06-03 04:30:07.523056] I [MSGID: 121050] 
> [ctr-helper.c:259:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is 
> disabled.
> [2016-06-03 04:30:07.523077] W [MSGID: 101105] 
> [gfdb_sqlite3.h:239:gfdb_set_sql_params] 
> 0-data-volume-changetimerecorder: Failed to retrieve sql-db-pagesize 
> from params.Assigning default value: 4096
> [2016-06-03 04:30:07.523086] W [MSGID: 101105] 
> [gfdb_sqlite3.h:239:gfdb_set_sql_params] 
> 0-data-volume-changetimerecorder: Failed to retrieve 
> sql-db-journalmode from params.Assigning default value: wal
> [2016-06-03 04:30:07.523095] W [MSGID: 101105] 
> [gfdb_sqlite3.h:239:gfdb_set_sql_params] 
> 0-data-volume-changetimerecorder: Failed to retrieve sql-db-sync from 
> params.Assigning default value: off
> [2016-06-03 04:30:07.523102] W [MSGID: 101105] 
> [gfdb_sqlite3.h:239:gfdb_set_sql_params] 
> 0-data-volume-changetimerecorder: Failed to retrieve sql-db-autovacuum 
> from params.Assigning default value: none
> [2016-06-03 04:30:07.523280] I [trash.c:2369:init] 
> 0-data-volume-trash: no option specified for 'eliminate', using NULL
> [2016-06-03 04:30:07.523910] W [graph.c:357:_log_if_unknown_option] 
> 0-data-volume-server: option 'rpc-auth.auth-glusterfs' is not recognized
> [2016-06-03 04:30:07.523937] W [graph.c:357:_log_if_unknown_option] 
> 0-data-volume-server: option 'rpc-auth.auth-unix' is not recognized
> [2016-06-03 04:30:07.523955] W [graph.c:357:_log_if_unknown_option] 
> 0-data-volume-server: option 'rpc-auth.auth-null' is not recognized
> [2016-06-03 04:30:07.523989] W [graph.c:357:_log_if_unknown_option] 
> 0-data-volume-quota: option 'timeout' is not recognized
> [2016-06-03 04:30:07.524031] W [graph.c:357:_log_if_unknown_option] 
> 0-data-volume-trash: option 'brick-path' is not recognized
> [2016-06-03 04:30:07.529994] W [MSGID: 113036] 
> [posix.c:2211:posix_rename] 0-data-volume-posix: found directory at 
> /mnt/brick1/.trashcan/ while expecting ENOENT [File exists]
> Final graph:
> +------------------------------------------------------------------------------+
>   1: volume data-volume-posix
>   2:     type storage/posix
>   3:     option glusterd-uuid c1110fd9-cb99-4ca1-b18a-536a122d67ef
>   4:     option directory /mnt/brick1
>   5:     option volume-id c54b2a60-ffdc-4d82-9db1-890e41002e28
>   6: end-volume
>   7:
>   8: volume data-volume-trash
>   9:     type features/trash
>  10:     option trash-dir .trashcan
>  11:     option brick-path /mnt/brick1
>  12:     option trash-internal-op off
>  13:     subvolumes data-volume-posix
>  14: end-volume
>  15:
>  16: volume data-volume-changetimerecorder
>  17:     type features/changetimerecorder
>  18:     option db-type sqlite3
>  19:     option hot-brick off
>  20:     option db-name brick1.db
>  21:     option db-path /mnt/brick1/.glusterfs/
>  22:     option record-exit off
>  23:     option ctr_link_consistency off
>  24:     option ctr_lookupheal_link_timeout 300
>  25:     option ctr_lookupheal_inode_timeout 300
>  26:     option record-entry on
>  27:     option ctr-enabled off
>  28:     option record-counters off
>  29:     option ctr-record-metadata-heat off
>  30:     option sql-db-cachesize 1000
>  31:     option sql-db-wal-autocheckpoint 1000
>  32:     subvolumes data-volume-trash
>  33: end-volume
>  34:
>  35: volume data-volume-changelog
>  36:     type features/changelog
>  37:     option changelog-brick /mnt/brick1
>  38:     option changelog-dir /mnt/brick1/.glusterfs/changelogs
>  39:     option changelog-barrier-timeout 120
>  40:     subvolumes data-volume-changetimerecorder
>  41: end-volume
>  42:
>  43: volume data-volume-bitrot-stub
>  44:     type features/bitrot-stub
>  45:     option export /mnt/brick1
>  46:     subvolumes data-volume-changelog
>  47: end-volume
>  48:
>  49: volume data-volume-access-control
>  50:     type features/access-control
>  51:     subvolumes data-volume-bitrot-stub
>  52: end-volume
>  53:
>  54: volume data-volume-locks
>  55:     type features/locks
>  56:     subvolumes data-volume-access-control
>  57: end-volume
>  58:
>  59: volume data-volume-upcall
>  60:     type features/upcall
>  61:     option cache-invalidation off
>  62:     subvolumes data-volume-locks
>  63: end-volume
>  64:
>  65: volume data-volume-io-threads
>  66:     type performance/io-threads
>  67:     subvolumes data-volume-upcall
>  68: end-volume
>  69:
>  70: volume data-volume-marker
>  71:     type features/marker
>  72:     option volume-uuid c54b2a60-ffdc-4d82-9db1-890e41002e28
>  73:     option timestamp-file 
> /var/lib/glusterd/vols/data-volume/marker.tstamp
>  74:     option quota-version 0
>  75:     option xtime off
>  76:     option gsync-force-xtime off
>  77:     option quota off
>  78:     option inode-quota off
>  79:     subvolumes data-volume-io-threads
>  80: end-volume
>  81:
>  82: volume data-volume-barrier
>  83:     type features/barrier
>  84:     option barrier disable
>  85:     option barrier-timeout 120
>  86:     subvolumes data-volume-marker
>  87: end-volume
>  88:
>  89: volume data-volume-index
>  90:     type features/index
>  91:     option index-base /mnt/brick1/.glusterfs/indices
>  92:     subvolumes data-volume-barrier
>  93: end-volume
>  94:
>  95: volume data-volume-quota
>  96:     type features/quota
>  97:     option volume-uuid data-volume
>  98:     option server-quota off
>  99:     option timeout 0
> 100:     option deem-statfs off
> 101:     subvolumes data-volume-index
> 102: end-volume
> 103:
> 104: volume data-volume-worm
> 105:     type features/worm
> 106:     option worm off
> 107:     subvolumes data-volume-quota
> 108: end-volume
> 109:
> 110: volume data-volume-read-only
> 111:     type features/read-only
> 112:     option read-only off
> 113:     subvolumes data-volume-worm
> 114: end-volume
> 115:
> 116: volume /mnt/brick1
> 117:     type debug/io-stats
> 118:     option log-level INFO
> 119:     option latency-measurement off
> 120:     option count-fop-hits off
> 121:     subvolumes data-volume-read-only
> 122: end-volume
> 123:
> 124: volume data-volume-server
> 125:     type protocol/server
> 126:     option transport.socket.listen-port 49152
> 127:     option rpc-auth.auth-glusterfs on
> 128:     option rpc-auth.auth-unix on
> 129:     option rpc-auth.auth-null on
> 130:     option rpc-auth-allow-insecure on
> 131:     option transport-type tcp
> 132:     option auth.login./mnt/brick1.allow 
> 0016d59c-9691-4bb2-bc44-b1d8b19dd230
> 133:     option 
> auth.login.0016d59c-9691-4bb2-bc44-b1d8b19dd230.password 
> b021dbcf-e114-4c23-ad9f-968a2d93dd61
> 134:     option auth.addr./mnt/brick1.allow *
> 135:     subvolumes /mnt/brick1
> 136: end-volume
> 137:
> +------------------------------------------------------------------------------+
> [2016-06-03 04:30:07.583590] I [login.c:81:gf_auth] 0-auth/login: 
> allowed user names: 0016d59c-9691-4bb2-bc44-b1d8b19dd230
> [2016-06-03 04:30:07.583640] I [MSGID: 115029] 
> [server-handshake.c:690:server_setvolume] 0-data-volume-server: 
> accepted client from 
> mseas-data2-2383-2016/06/03-04:30:07:127671-data-volume-client-0-0-0 
> (version: 3.7.11)
> [2016-06-03 04:30:40.124584] I [login.c:81:gf_auth] 0-auth/login: 
> allowed user names: 0016d59c-9691-4bb2-bc44-b1d8b19dd230
> [2016-06-03 04:30:40.124628] I [MSGID: 115029] 
> [server-handshake.c:690:server_setvolume] 0-data-volume-server: 
> accepted client from 
> mseas-data2-2500-2016/06/03-04:30:40:46064-data-volume-client-0-0-0 
> (version: 3.7.11)
> [2016-06-03 04:30:43.265342] W [MSGID: 101182] 
> [inode.c:174:__foreach_ancestor_dentry] 0-data-volume-server: per 
> dentry fn returned 1
> [2016-06-03 04:30:43.265393] C [MSGID: 101184] 
> [inode.c:228:__is_dentry_cyclic] 0-/mnt/brick1/inode: detected cyclic 
> loop formation during inode linkage. inode 
> (00000000-0000-0000-0000-000000000001) linking under itself as 
> gluster-data
> [2016-06-03 04:30:43.269197] W [MSGID: 101182] 
> [inode.c:174:__foreach_ancestor_dentry] 0-data-volume-server: per 
> dentry fn returned 1
> [2016-06-03 04:30:43.269241] C [MSGID: 101184] 
> [inode.c:228:__is_dentry_cyclic] 0-/mnt/brick1/inode: detected cyclic 
> loop formation during inode linkage. inode 
> (00000000-0000-0000-0000-000000000001) linking under itself as 
> gluster-data
> [2016-06-03 04:30:43.270689] W [MSGID: 101182] 
> [inode.c:174:__foreach_ancestor_dentry] 0-data-volume-server: per 
> dentry fn returned 1
> [2016-06-03 04:30:43.270733] C [MSGID: 101184] 
> [inode.c:228:__is_dentry_cyclic] 0-/mnt/brick1/inode: detected cyclic 
> loop formation during inode linkage. inode 
> (00000000-0000-0000-0000-000000000001) linking under itself as 
> gluster-data
>
>
> This is a distributed volume, not replicated. Can we delete the 
> gluster volume, remove the .glusterfs folders from each brick and 
> recreate the volume? Will it re-index the files on both bricks?
>
> Note:
> From the last lines of the log file, there is a soft link at 
> /mnt/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001 -> 
> ../../..
>
> We have tried removing the link and restarting the service, but there 
> was no change in behavior; the link is replaced/rebuilt on service 
> startup.
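>
> In case it is useful for reproducing the check, the link and the gfid 
> xattrs on the brick can be inspected with something like (getfattr 
> from the attr package):
>
>   ls -l /mnt/brick1/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
>   getfattr -m . -d -e hex /mnt/brick1
>   getfattr -m . -d -e hex /mnt/brick1/gluster-data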
>
> Any advice you can give will be appreciated.
>
> Thanks
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Pat Haley                          Email:  phaley at mit.edu
> Center for Ocean Engineering       Phone:  (617) 253-6824
> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
> 77 Massachusetts Avenue
> Cambridge, MA  02139-4301
>
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

-- 

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  phaley at mit.edu
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301
