[Gluster-users] Start a new volume with pre-existing directories
Dan Bretherton
d.a.bretherton at reading.ac.uk
Tue Dec 7 09:30:20 UTC 2010
> Date: Tue, 07 Dec 2010 09:15:06 +0100
> From: Daniel Zander <zander at ekp.uni-karlsruhe.de>
> Subject: Re: [Gluster-users] Start a new volume with pre-existing
> directories
> To: gluster-users at gluster.org
> Message-ID: <4CFDED0A.5030102 at ekp.uni-karlsruhe.de>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Dear all,
>
> As there have been no further questions or suggestions, I assume that
> you are just as out of ideas as I am. Maybe the logfiles of the two
> bricks will help. They were recorded while I created a new volume,
> started it, mounted it on FS8, ran `find . | xargs stat >> /dev/null
> 2>&1` and unmounted it again; then the same on FS7. Finally, I mounted
> it on a client. Here are the logfiles:
>
> -----
> FS8
> -----
>
> [2010-12-07 09:00:26.86494] W [graph.c:274:gf_add_cmdline_options]
> heal_me-server: adding option 'listen-port' for volume 'heal_me-server'
> with value '24022'
> [2010-12-07 09:00:26.87247] W
> [rpc-transport.c:566:validate_volume_options] tcp.heal_me-server: option
> 'listen-port' is deprecated, preferred is
> 'transport.socket.listen-port', continuing with correction
> Given volfile:
> +------------------------------------------------------------------------------+
> 1: volume heal_me-posix
> 2: type storage/posix
> 3: option directory /storage/8
> 4: end-volume
> 5:
> 6: volume heal_me-access-control
> 7: type features/access-control
> 8: subvolumes heal_me-posix
> 9: end-volume
> 10:
> 11: volume heal_me-locks
> 12: type features/locks
> 13: subvolumes heal_me-access-control
> 14: end-volume
> 15:
> 16: volume heal_me-io-threads
> 17: type performance/io-threads
> 18: option thread-count 16
> 19: subvolumes heal_me-locks
> 20: end-volume
> 21:
> 22: volume /storage/8
> 23: type debug/io-stats
> 24: subvolumes heal_me-io-threads
> 25: end-volume
> 26:
> 27: volume heal_me-server
> 28: type protocol/server
> 29: option transport-type tcp
> 30: option auth.addr./storage/8.allow *
> 31: subvolumes /storage/8
> 32: end-volume
>
> +------------------------------------------------------------------------------+
> [2010-12-07 09:00:30.168852] I [server-handshake.c:535:server_setvolume]
> heal_me-server: accepted client from 192.168.101.246:1023
> [2010-12-07 09:00:30.240014] I [server-handshake.c:535:server_setvolume]
> heal_me-server: accepted client from 192.168.101.247:1022
> [2010-12-07 09:01:17.729708] I [server-handshake.c:535:server_setvolume]
> heal_me-server: accepted client from 192.168.101.246:1019
> [2010-12-07 09:02:27.588813] I [server-handshake.c:535:server_setvolume]
> heal_me-server: accepted client from 192.168.101.247:1017
> [2010-12-07 09:03:05.394282] I [server-handshake.c:535:server_setvolume]
> heal_me-server: accepted client from 192.168.101.203:1008
>
>
> -----
> FS7
> -----
> [2010-12-07 08:59:04.673533] W [graph.c:274:gf_add_cmdline_options]
> heal_me-server: adding option 'listen-port' for volume 'heal_me-server'
> with value '24022'
> [2010-12-07 08:59:04.674068] W
> [rpc-transport.c:566:validate_volume_options] tcp.heal_me-server: option
> 'listen-port' is deprecated, preferred is
> 'transport.socket.listen-port', continuing with correction
> Given volfile:
> +------------------------------------------------------------------------------+
> 1: volume heal_me-posix
> 2: type storage/posix
> 3: option directory /storage/7
> 4: end-volume
> 5:
> 6: volume heal_me-access-control
> 7: type features/access-control
> 8: subvolumes heal_me-posix
> 9: end-volume
> 10:
> 11: volume heal_me-locks
> 12: type features/locks
> 13: subvolumes heal_me-access-control
> 14: end-volume
> 15:
> 16: volume heal_me-io-threads
> 17: type performance/io-threads
> 18: option thread-count 16
> 19: subvolumes heal_me-locks
> 20: end-volume
> 21:
> 22: volume /storage/7
> 23: type debug/io-stats
> 24: subvolumes heal_me-io-threads
> 25: end-volume
> 26:
> 27: volume heal_me-server
> 28: type protocol/server
> 29: option transport-type tcp
> 30: option auth.addr./storage/7.allow *
> 31: subvolumes /storage/7
> 32: end-volume
>
> +------------------------------------------------------------------------------+
> [2010-12-07 08:59:08.717715] I [server-handshake.c:535:server_setvolume]
> heal_me-server: accepted client from 192.168.101.247:1023
> [2010-12-07 08:59:08.757648] I [server-handshake.c:535:server_setvolume]
> heal_me-server: accepted client from 192.168.101.246:1021
> [2010-12-07 08:59:56.274677] I [server-handshake.c:535:server_setvolume]
> heal_me-server: accepted client from 192.168.101.246:1020
> [2010-12-07 09:01:06.130142] I [server-handshake.c:535:server_setvolume]
> heal_me-server: accepted client from 192.168.101.247:1020
> [2010-12-07 09:01:43.945880] I [server-handshake.c:535:server_setvolume]
> heal_me-server: accepted client from 192.168.101.203:1007
>
>
> Any help is greatly appreciated,
> Regards,
> Daniel
>
>
>
> On 12/03/2010 01:24 PM, Daniel Zander wrote:
>
>> Hi!
>>
>> >Can you send the output of -
>> >
>> >`gluster volume info all`
>> >`gluster peer status`
>> >
>> >from a gluster storage server and
>> >
>> >`mount` from the client?
>>
>> Certainly....
>>
>> --------------------------------------
>> root@ekpfs8:~# gluster volume info all
>> Volume Name: heal_me
>> Type: Distribute
>> Status: Started
>> Number of Bricks: 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.101.246:/storage/8
>> Brick2: 192.168.101.247:/storage/7
>> --------------------------------------
>> root@ekpfs8:~# gluster peer status
>> Number of Peers: 1
>>
>> Hostname: 192.168.101.247
>> Uuid: b36ce6e3-fa14-4d7e-bc4a-170a59a6f4f5
>> State: Peer in Cluster (Connected)
>> --------------------------------------
>> [root@ekpbelle ~]# mount
>> [ ... ]
>> glusterfs#192.168.101.246:/heal_me on /storage/gluster type fuse
>> (rw,allow_other,default_permissions,max_read=131072)
>> --------------------------------------
>>
>> Regards,
>> Daniel
>
Hello Daniel,
I have managed to export existing data successfully in the past, before
hearing about the "find . | xargs stat" self-heal method. I did
encounter problems similar to the one you describe, where some or all of
the subdirectories were missing under the GlusterFS mount point. I
found that the missing directories could be listed by manually entering
their paths relative to the mount point, after which they would be
visible permanently. I came up with the following procedure for making
sure that GlusterFS could see all the data being exported.
1) Run "find -print" on each of the backend filesystems, saving the
output to a location that can be seen from all the servers and at least
one client.
2) Under the GlusterFS mount point, "ls" every file and directory listed
in the "find" output generated earlier. I wrote a simple script that
reads each "find" file line by line, running "ls" on each path; a rough
sketch of that sort of script is included below. This isn't a very
elegant solution and certainly isn't officially recommended or
supported, but it did work for me and I can't think of any reason why it
would be risky in any way.
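
For reference, here is a rough sketch of the sort of script I mean. The
brick paths, the shared location for the "find" listings and the mount
point (/storage/gluster, taken from your "mount" output) are only
example names for illustration -- substitute whatever matches your own
setup.

#!/bin/bash
# Rough sketch only -- all paths below are examples, not my actual setup.
#
# Step 1 (run on each backend server beforehand): record every path on
# the brick, e.g.
#   cd /storage/8 && find . -print > /shared/filelists/fs8.list
#   cd /storage/7 && find . -print > /shared/filelists/fs7.list
#
# Step 2 (run on a client): "ls" each recorded path under the GlusterFS
# mount point so the missing entries become visible there.
MOUNT=/storage/gluster

for list in /shared/filelists/*.list; do
    while IFS= read -r path; do
        ls -d "$MOUNT/$path" > /dev/null 2>&1
    done < "$list"
done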
-Dan.