[Gluster-users] Unable to access directories in wounded gluster setup
Pat Haley
phaley at MIT.EDU
Wed Mar 19 14:08:35 UTC 2014
Hi,
One thing I failed to mention in my previous emails is
that when I log into the 2 servers that are still up and
look at the directories in question in the underlying
filesystems (i.e. not going through gluster), the
directories and files look fine, so it does not appear
to be an issue with the underlying filesystems.
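(By "look at" I mean checks along these lines, run
directly on a brick, e.g. on gluster-0-0 for one of the
affected directories:

    ls -l /mseas-data-0-0/projects
    getfattr -d -m . -e hex /mseas-data-0-0/projects

The getfattr line would dump the extended attributes,
including the trusted.glusterfs.dht layout, in case
inspecting those would be useful for debugging.)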
I have tried "killall glusterfsd", followed by restarting
glusterd on both bricks, but I still cannot see the
directories in question through gluster.
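(For the record, the restart sequence on each brick was
essentially:

    killall glusterfsd
    service glusterd restart

with a `gluster volume status` check afterwards to confirm
the brick processes came back online.)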
If I tried the "reset" from my previous email (below), but
first saved a copy of the /var/lib/glusterd directory, would
I be able to use that copy to return to the previous state
in case the "reset" makes things worse?
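Concretely, I am thinking of something like the following on
gluster-data (assuming it is safe to stop glusterd first;
the .bak path name is just an example):

    service glusterd stop
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    service glusterd start

and, if the reset went badly, rolling back with:

    service glusterd stop
    rm -rf /var/lib/glusterd
    cp -a /var/lib/glusterd.bak /var/lib/glusterd
    service glusterd start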
Thanks,
Pat
>
> Hi Again,
>
> I was again going over my notes from the other times
> I've been in trouble. In particular, back in November
> I received the following "reset" procedure:
>
> ----------
> > If gluster-data is pingable from the other bricks, you could try
> > detaching and reattaching it from gluster-0-0 or 0-1.
> > 1) On gluster-0-0:
> > `gluster peer detach gluster-data`, if that fails,
> > `gluster peer detach gluster-data force`
> > 2) On gluster-data:
> > `rm -rf /var/lib/glusterd`
> > `service glusterd restart`
> > 3) Again on gluster-0-0:
> > `gluster peer probe gluster-data`
> ----------
>
> Would such a "reset" be safe to try when only 2 out of 3
> bricks are up? Would it be likely to help?
>
> Thanks.
>
>
> Pat
>
>>
>>
>> Hi,
>>
>> We are employing gluster to merge the disks from
>> 3 servers into a common name-space. We just had
>> a power outage, and when the power came back on,
>> one of the servers had its power supply damaged.
>> While working on this problem, we tried to bring the
>> gluster area (/gdata) up with the 2 working
>> servers, so that at least the files on them would be
>> available while we repair the third server. This
>> hasn't worked as expected. From past experience
>> we expected to be able to see all the subdirectories,
>> with some files simply absent. What we see instead
>> is that 4 of the top-level directories are inaccessible:
>>
>> from the client:
>> ls -lh /gdata
>> ls: cannot access /gdata/test: Invalid argument
>> ls: cannot access /gdata/projects: Invalid argument
>> ls: cannot access /gdata/harvard-data2: Invalid argument
>> ls: cannot access /gdata/temp_home: Invalid argument
>> drwxr-xr-x 6 root root 152 Dec 5 12:27 harvard
>> drwxr-xr-x 4 root root 74 Dec 5 12:27 harvard-archive
>> ?????????? ? ? ? ? ? harvard-data2
>> drwxr-xr-x 104 pierrel 8310 16K Dec 5 12:27 harvard-data3
>> drwx------ 2 root root 12 Dec 5 12:27 lost+found
>> ?????????? ? ? ? ? ? projects
>> drwxrwxr-x 8 root software 4.1K Dec 5 12:27 software
>> drwxr-xr-x 5 root root 4.1K Dec 5 12:27 src
>> ?????????? ? ? ? ? ? temp_home
>> ?????????? ? ? ? ? ? test
>>
>> We have tried
>> - testing the communications between the bricks and
>> between the client & bricks (comms are fine)
>> - restarting the gluster daemons on the bricks
>> - unmounting and remounting /gdata on the client
>> (commands after this list)
>> - mounting the gluster area directly on one of the
>> servers using `mount -t glusterfs mseas-data:/gdata /gdata`
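>>
>> (The unmount/remount on the client was essentially:
>>
>> umount /gdata
>> mount -t glusterfs mseas-data:/gdata /gdata
>>
>> i.e. the same mount command as on the server.)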
>>
>> All give the same result. The status reported by gluster
>> looks fine (given that 1 brick is down). The reports are
>> included below. The log files from the mounting/ls tests are
>> included after that.
>>
>> What should we do/look at next to solve/debug this problem?
>>
>> Thanks.
>>
>> Pat
>>
>> ---------------------------------------------------------------------
>> [root at mseas-data glusterfs]# gluster --version
>> glusterfs 3.3.1 built on Oct 11 2012 22:01:05
>> ---------------------------------------------------------------------
>> [root at mseas-data bricks]# gluster volume info
>>
>> Volume Name: gdata
>> Type: Distribute
>> Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
>> Status: Started
>> Number of Bricks: 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster-0-0:/mseas-data-0-0
>> Brick2: gluster-0-1:/mseas-data-0-1
>> Brick3: gluster-data:/data
>>
>> ---------------------------------------------------------------------
>>
>> [root at mseas-data bricks]# gluster volume status
>> Status of volume: gdata
>> Gluster process                                  Port    Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gluster-0-0:/mseas-data-0-0                24010   Y       11346
>> Brick gluster-data:/data                         24010   Y       28791
>> NFS Server on localhost                          38467   Y       28797
>> NFS Server on gluster-0-0                        38467   Y       11352
>>
>> ---------------------------------------------------------------------
>>
>> [root at mseas-data bricks]# gluster peer status
>> Number of Peers: 2
>>
>> Hostname: gluster-0-1
>> Uuid: 978e0f76-6474-4203-8617-ed5ad7d29239
>> State: Peer in Cluster (Disconnected)
>>
>> Hostname: gluster-0-0
>> Uuid: 3f73f5cc-39d8-4d9a-b442-033cb074b247
>> State: Peer in Cluster (Connected)
>> ---------------------------------------------------------------------
>>
>>
>> =============================================================
>> Log files from test mount from client:
>> --------------------------------------
>>
>> gluster-data: /var/log/glusterfs/bricks/data.log
>> -------------------------------------------------
>> [2014-03-18 10:43:57.149035] I [server.c:703:server_rpc_notify]
>> 0-gdata-server: disconnecting connectionfrom
>> compute-3-0.local-15332-2014/03/18-10:10:31:456263-gdata-client-2-0
>> [2014-03-18 10:43:57.149073] I
>> [server-helpers.c:741:server_connection_put] 0-gdata-server: Shutting
>> down connection
>> compute-3-0.local-15332-2014/03/18-10:10:31:456263-gdata-client-2-0
>> [2014-03-18 10:43:57.149127] I
>> [server-helpers.c:629:server_connection_destroy] 0-gdata-server:
>> destroyed connection of
>> compute-3-0.local-15332-2014/03/18-10:10:31:456263-gdata-client-2-0
>> [2014-03-18 10:44:10.149539] I
>> [server-handshake.c:571:server_setvolume] 0-gdata-server: accepted
>> client from
>> compute-3-0.local-15405-2014/03/18-10:44:06:129552-gdata-client-2-0
>> (version: 3.3.1)
>>
>> gluster-0-0: /var/log/glusterfs/bricks/data.log
>> -----------------------------------------------
>> [2014-03-18 10:43:57.141122] I [server.c:703:server_rpc_notify]
>> 0-gdata-server: disconnecting connectionfrom
>> compute-3-0.local-15332-2014/03/18-10:10:31:456263-gdata-client-0-0
>> [2014-03-18 10:43:57.141209] I
>> [server-helpers.c:741:server_connection_put] 0-gdata-server: Shutting
>> down connection
>> compute-3-0.local-15332-2014/03/18-10:10:31:456263-gdata-client-0-0
>> [2014-03-18 10:43:57.141299] I
>> [server-helpers.c:629:server_connection_destroy] 0-gdata-server:
>> destroyed connection of
>> compute-3-0.local-15332-2014/03/18-10:10:31:456263-gdata-client-0-0
>> [2014-03-18 10:44:10.143643] I
>> [server-handshake.c:571:server_setvolume] 0-gdata-server: accepted
>> client from
>> compute-3-0.local-15405-2014/03/18-10:44:06:129552-gdata-client-0-0
>> (version: 3.3.1)
>>
>> client: /var/log/glusterfs/gdata.log
>> -------------------------------------
>> [2014-03-18 10:43:57.139136] I [fuse-bridge.c:4091:fuse_thread_proc]
>> 0-fuse: unmounting /gdata
>> [2014-03-18 10:43:57.139423] W [glusterfsd.c:831:cleanup_and_exit]
>> (-->/lib64/libc.so.6(clone+0x6d) [0x3e6a8e5ccd]
>> (-->/lib64/libpthread.so.0() [0x3e6b0077f1]
>> (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d4d]))) 0-:
>> received signum (15), shutting down
>> [2014-03-18 10:43:57.139443] I [fuse-bridge.c:4648:fini] 0-fuse:
>> Unmounting '/gdata'.
>> [2014-03-18 10:44:06.132964] I [glusterfsd.c:1666:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.1
>> [2014-03-18 10:44:06.139237] I [io-cache.c:1549:check_cache_size_ok]
>> 0-gdata-quick-read: Max cache size is 66901987328
>> [2014-03-18 10:44:06.139290] I [io-cache.c:1549:check_cache_size_ok]
>> 0-gdata-io-cache: Max cache size is 66901987328
>> [2014-03-18 10:44:06.141718] I [client.c:2142:notify]
>> 0-gdata-client-0: parent translators are ready, attempting connect on
>> transport
>> [2014-03-18 10:44:06.144062] I [client.c:2142:notify]
>> 0-gdata-client-1: parent translators are ready, attempting connect on
>> transport
>> [2014-03-18 10:44:06.146039] I [client.c:2142:notify]
>> 0-gdata-client-2: parent translators are ready, attempting connect on
>> transport
>> Given volfile:
>> +------------------------------------------------------------------------------+
>>
>> 1: volume gdata-client-0
>> 2: type protocol/client
>> 3: option remote-host gluster-0-0
>> 4: option remote-subvolume /mseas-data-0-0
>> 5: option transport-type tcp
>> 6: end-volume
>> 7:
>> 8: volume gdata-client-1
>> 9: type protocol/client
>> 10: option remote-host gluster-0-1
>> 11: option remote-subvolume /mseas-data-0-1
>> 12: option transport-type tcp
>> 13: end-volume
>> 14:
>> 15: volume gdata-client-2
>> 16: type protocol/client
>> 17: option remote-host gluster-data
>> 18: option remote-subvolume /data
>> 19: option transport-type tcp
>> 20: end-volume
>> 21:
>> 22: volume gdata-dht
>> 23: type cluster/distribute
>> 24: subvolumes gdata-client-0 gdata-client-1 gdata-client-2
>> 25: end-volume
>> 26:
>> 27: volume gdata-write-behind
>> 28: type performance/write-behind
>> 29: subvolumes gdata-dht
>> 30: end-volume
>> 31:
>> 32: volume gdata-read-ahead
>> 33: type performance/read-ahead
>> 34: subvolumes gdata-write-behind
>> 35: end-volume
>> 36:
>> 37: volume gdata-io-cache
>> 38: type performance/io-cache
>> 39: subvolumes gdata-read-ahead
>> 40: end-volume
>> 41:
>> 42: volume gdata-quick-read
>> 43: type performance/quick-read
>> 44: subvolumes gdata-io-cache
>> 45: end-volume
>> 46:
>> 47: volume gdata-md-cache
>> 48: type performance/md-cache
>> 49: subvolumes gdata-quick-read
>> 50: end-volume
>> 51:
>> 52: volume gdata
>> 53: type debug/io-stats
>> 54: option latency-measurement off
>> 55: option count-fop-hits off
>> 56: subvolumes gdata-md-cache
>> 57: end-volume
>>
>> +------------------------------------------------------------------------------+
>>
>> [2014-03-18 10:44:06.148454] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
>> 0-gdata-client-2: changing port to 24010 (from 0)
>> [2014-03-18 10:44:06.148529] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
>> 0-gdata-client-0: changing port to 24010 (from 0)
>> [2014-03-18 10:44:09.146764] E [socket.c:1715:socket_connect_finish]
>> 0-gdata-client-1: connection to failed (No route to host)
>> [2014-03-18 10:44:10.141499] I
>> [client-handshake.c:1636:select_server_supported_programs]
>> 0-gdata-client-2: Using Program GlusterFS 3.3.1, Num (1298437),
>> Version (330)
>> [2014-03-18 10:44:10.141748] I
>> [client-handshake.c:1433:client_setvolume_cbk] 0-gdata-client-2:
>> Connected to 10.1.1.2:24010, attached to remote volume '/data'.
>> [2014-03-18 10:44:10.141759] I
>> [client-handshake.c:1445:client_setvolume_cbk] 0-gdata-client-2:
>> Server and Client lk-version numbers are not same, reopening the fds
>> [2014-03-18 10:44:10.141866] I
>> [client-handshake.c:453:client_set_lk_version_cbk] 0-gdata-client-2:
>> Server lk version = 1
>> [2014-03-18 10:44:10.143526] I
>> [client-handshake.c:1636:select_server_supported_programs]
>> 0-gdata-client-0: Using Program GlusterFS 3.3.1, Num (1298437),
>> Version (330)
>> [2014-03-18 10:44:10.143800] I
>> [client-handshake.c:1433:client_setvolume_cbk] 0-gdata-client-0:
>> Connected to 10.1.1.10:24010, attached to remote volume
>> '/mseas-data-0-0'.
>> [2014-03-18 10:44:10.143810] I
>> [client-handshake.c:1445:client_setvolume_cbk] 0-gdata-client-0:
>> Server and Client lk-version numbers are not same, reopening the fds
>> [2014-03-18 10:44:10.147967] I [fuse-bridge.c:4191:fuse_graph_setup]
>> 0-fuse: switched to graph 0
>> [2014-03-18 10:44:10.148147] I
>> [client-handshake.c:453:client_set_lk_version_cbk] 0-gdata-client-0:
>> Server lk version = 1
>> [2014-03-18 10:44:10.148251] I [fuse-bridge.c:3376:fuse_init]
>> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13
>> kernel 7.13
>> [2014-03-18 10:44:10.148993] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /. holes=1 overlaps=0
>> [2014-03-18 10:44:10.149012] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 10:48:34.335068] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 1072520554
>> [2014-03-18 10:48:34.335110] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /test
>> [2014-03-18 10:48:34.335131] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 7: LOOKUP() /test => -1 (Invalid argument)
>> [2014-03-18 10:48:34.336648] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 1092090275
>> [2014-03-18 10:48:34.336660] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /projects
>> [2014-03-18 10:48:34.336670] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 16: LOOKUP() /projects => -1 (Invalid argument)
>> [2014-03-18 10:48:34.337271] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /src. holes=1 overlaps=0
>> [2014-03-18 10:48:34.337310] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 10:48:34.337400] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 485417449
>> [2014-03-18 10:48:34.337414] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /harvard-data2
>> [2014-03-18 10:48:34.337424] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 18: LOOKUP() /harvard-data2 => -1 (Invalid argument)
>> [2014-03-18 10:48:34.337988] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /harvard. holes=1 overlaps=0
>> [2014-03-18 10:48:34.338003] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 10:48:34.338564] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /harvard-archive. holes=1 overlaps=0
>> [2014-03-18 10:48:34.338576] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 10:48:34.339173] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /harvard-data3. holes=1 overlaps=0
>> [2014-03-18 10:48:34.339184] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 10:48:34.339740] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /lost+found. holes=1 overlaps=0
>> [2014-03-18 10:48:34.339751] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 10:48:34.340290] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /software. holes=1 overlaps=0
>> [2014-03-18 10:48:34.340302] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 10:48:34.340872] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /tmp. holes=1 overlaps=0
>> [2014-03-18 10:48:34.340883] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 10:48:34.341463] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /z.rdiff-backup-data. holes=1 overlaps=0
>> [2014-03-18 10:48:34.341474] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 10:48:34.341561] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 124395703
>> [2014-03-18 10:48:34.341576] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /temp_home
>> [2014-03-18 10:48:34.341586] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 26: LOOKUP() /temp_home => -1 (Invalid argument)
>> [2014-03-18 10:48:34.342137] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /data. holes=1 overlaps=0
>> [2014-03-18 10:48:34.342152] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 10:48:34.342737] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /bibliography. holes=1 overlaps=0
>> [2014-03-18 10:48:34.342748] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [root at compute-3-0 glusterfs]#
>>
>> =============================================================
>> Log file from test mount on gluster-data:
>> -----------------------------------------
>>
>> /var/log/glusterfs/gdata.log
>> -----------------------------
>> [2014-03-18 11:29:23.549718] I [glusterfsd.c:1666:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.1
>> [2014-03-18 11:29:23.751872] I [io-cache.c:1549:check_cache_size_ok]
>> 0-gdata-quick-read: Max cache size is 8374652928
>> [2014-03-18 11:29:23.767321] I [io-cache.c:1549:check_cache_size_ok]
>> 0-gdata-io-cache: Max cache size is 8374652928
>> [2014-03-18 11:29:23.770541] I [client.c:2142:notify]
>> 0-gdata-client-0: parent translators are ready, attempting connect on
>> transport
>> [2014-03-18 11:29:23.773247] I [client.c:2142:notify]
>> 0-gdata-client-1: parent translators are ready, attempting connect on
>> transport
>> [2014-03-18 11:29:23.775734] I [client.c:2142:notify]
>> 0-gdata-client-2: parent translators are ready, attempting connect on
>> transport
>> Given volfile:
>> +------------------------------------------------------------------------------+
>>
>> 1: volume gdata-client-0
>> 2: type protocol/client
>> 3: option remote-host gluster-0-0
>> 4: option remote-subvolume /mseas-data-0-0
>> 5: option transport-type tcp
>> 6: end-volume
>> 7:
>> 8: volume gdata-client-1
>> 9: type protocol/client
>> 10: option remote-host gluster-0-1
>> 11: option remote-subvolume /mseas-data-0-1
>> 12: option transport-type tcp
>> 13: end-volume
>> 14:
>> 15: volume gdata-client-2
>> 16: type protocol/client
>> 17: option remote-host gluster-data
>> 18: option remote-subvolume /data
>> 19: option transport-type tcp
>> 20: end-volume
>> 21:
>> 22: volume gdata-dht
>> 23: type cluster/distribute
>> 24: subvolumes gdata-client-0 gdata-client-1 gdata-client-2
>> 25: end-volume
>> 26:
>> 27: volume gdata-write-behind
>> 28: type performance/write-behind
>> 29: subvolumes gdata-dht
>> 30: end-volume
>> 31:
>> 32: volume gdata-read-ahead
>> 33: type performance/read-ahead
>> 34: subvolumes gdata-write-behind
>> 35: end-volume
>> 36:
>> 37: volume gdata-io-cache
>> 38: type performance/io-cache
>> 39: subvolumes gdata-read-ahead
>> 40: end-volume
>> 41:
>> 42: volume gdata-quick-read
>> 43: type performance/quick-read
>> 44: subvolumes gdata-io-cache
>> 45: end-volume
>> 46:
>> 47: volume gdata-md-cache
>> 48: type performance/md-cache
>> 49: subvolumes gdata-quick-read
>> 50: end-volume
>> 51:
>> 52: volume gdata
>> 53: type debug/io-stats
>> 54: option latency-measurement off
>> 55: option count-fop-hits off
>> 56: subvolumes gdata-md-cache
>> 57: end-volume
>>
>> +------------------------------------------------------------------------------+
>>
>> [2014-03-18 11:29:23.778543] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
>> 0-gdata-client-2: changing port to 24010 (from 0)
>> [2014-03-18 11:29:23.778755] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
>> 0-gdata-client-0: changing port to 24010 (from 0)
>> [2014-03-18 11:29:23.981081] E [socket.c:1715:socket_connect_finish]
>> 0-gdata-client-1: connection to failed (No route to host)
>> [2014-03-18 11:29:27.736927] I
>> [client-handshake.c:1636:select_server_supported_programs]
>> 0-gdata-client-2: Using Program GlusterFS 3.3.1, Num (1298437),
>> Version (330)
>> [2014-03-18 11:29:27.737135] I
>> [client-handshake.c:1433:client_setvolume_cbk] 0-gdata-client-2:
>> Connected to 10.1.1.2:24010, attached to remote volume '/data'.
>> [2014-03-18 11:29:27.737152] I
>> [client-handshake.c:1445:client_setvolume_cbk] 0-gdata-client-2:
>> Server and Client lk-version numbers are not same, reopening the fds
>> [2014-03-18 11:29:27.737242] I
>> [client-handshake.c:453:client_set_lk_version_cbk] 0-gdata-client-2:
>> Server lk version = 1
>> [2014-03-18 11:29:27.739760] I
>> [client-handshake.c:1636:select_server_supported_programs]
>> 0-gdata-client-0: Using Program GlusterFS 3.3.1, Num (1298437),
>> Version (330)
>> [2014-03-18 11:29:27.740067] I
>> [client-handshake.c:1433:client_setvolume_cbk] 0-gdata-client-0:
>> Connected to 10.1.1.10:24010, attached to remote volume
>> '/mseas-data-0-0'.
>> [2014-03-18 11:29:27.740081] I
>> [client-handshake.c:1445:client_setvolume_cbk] 0-gdata-client-0:
>> Server and Client lk-version numbers are not same, reopening the fds
>> [2014-03-18 11:29:27.747148] I [fuse-bridge.c:4191:fuse_graph_setup]
>> 0-fuse: switched to graph 0
>> [2014-03-18 11:29:27.747322] I
>> [client-handshake.c:453:client_set_lk_version_cbk] 0-gdata-client-0:
>> Server lk version = 1
>> [2014-03-18 11:29:27.747458] I [fuse-bridge.c:3376:fuse_init]
>> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13
>> kernel 7.8
>> [2014-03-18 11:29:27.748446] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /. holes=1 overlaps=0
>> [2014-03-18 11:29:27.748467] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:30:27.543344] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 1072520554
>> [2014-03-18 11:30:27.543374] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /test
>> [2014-03-18 11:30:27.543394] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 7: LOOKUP() /test => -1 (Invalid argument)
>> [2014-03-18 11:30:27.544817] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 1092090275
>> [2014-03-18 11:30:27.544832] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /projects
>> [2014-03-18 11:30:27.544846] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 12: LOOKUP() /projects => -1 (Invalid argument)
>> [2014-03-18 11:30:27.545336] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /src. holes=1 overlaps=0
>> [2014-03-18 11:30:27.545356] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:30:27.545476] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 485417449
>> [2014-03-18 11:30:27.545491] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /harvard-data2
>> [2014-03-18 11:30:27.545505] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 14: LOOKUP() /harvard-data2 => -1 (Invalid argument)
>> [2014-03-18 11:30:27.545995] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /harvard. holes=1 overlaps=0
>> [2014-03-18 11:30:27.546010] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:30:27.546600] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /harvard-archive. holes=1 overlaps=0
>> [2014-03-18 11:30:27.546616] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:30:27.547206] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /harvard-data3. holes=1 overlaps=0
>> [2014-03-18 11:30:27.547222] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:30:27.547901] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /lost+found. holes=1 overlaps=0
>> [2014-03-18 11:30:27.547917] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:30:27.548423] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /software. holes=1 overlaps=0
>> [2014-03-18 11:30:27.548439] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:30:27.548981] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /tmp. holes=1 overlaps=0
>> [2014-03-18 11:30:27.548997] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:30:27.549500] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /z.rdiff-backup-data. holes=1 overlaps=0
>> [2014-03-18 11:30:27.549515] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:30:27.549629] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 124395703
>> [2014-03-18 11:30:27.549644] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /temp_home
>> [2014-03-18 11:30:27.549658] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 22: LOOKUP() /temp_home => -1 (Invalid argument)
>> [2014-03-18 11:30:27.550134] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /data. holes=1 overlaps=0
>> [2014-03-18 11:30:27.550149] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:30:27.550793] I [dht-layout.c:593:dht_layout_normalize]
>> 0-gdata-dht: found anomalies in /bibliography. holes=1 overlaps=0
>> [2014-03-18 11:30:27.550830] W
>> [dht-selfheal.c:875:dht_selfheal_directory] 0-gdata-dht: 1 subvolumes
>> down -- not fixing
>> [2014-03-18 11:32:00.358667] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 1072520554
>> [2014-03-18 11:32:00.358700] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /test
>> [2014-03-18 11:32:00.358717] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 35: LOOKUP() /test => -1 (Invalid argument)
>> [2014-03-18 11:32:00.359934] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 1092090275
>> [2014-03-18 11:32:00.359949] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /projects
>> [2014-03-18 11:32:00.359963] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 44: LOOKUP() /projects => -1 (Invalid argument)
>> [2014-03-18 11:32:00.360343] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 485417449
>> [2014-03-18 11:32:00.360358] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /harvard-data2
>> [2014-03-18 11:32:00.360372] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 47: LOOKUP() /harvard-data2 => -1 (Invalid argument)
>> [2014-03-18 11:32:00.362629] W [dht-layout.c:186:dht_layout_search]
>> 0-gdata-dht: no subvolume for hash (value) = 124395703
>> [2014-03-18 11:32:00.362645] E [dht-common.c:1372:dht_lookup]
>> 0-gdata-dht: Failed to get hashed subvol for /temp_home
>> [2014-03-18 11:32:00.362660] W [fuse-bridge.c:292:fuse_entry_cbk]
>> 0-glusterfs-fuse: 62: LOOKUP() /temp_home => -1 (Invalid argument)
>>
>> ===========================================================================
>>
>>
>>
>
>
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: phaley at mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA 02139-4301