[Gluster-users] Missing files with unify 2.0.0rc1

Filipe Maia filipe at xray.bmc.uu.se
Fri Jan 16 09:00:22 UTC 2009


I can reproduce the problem with this simpler configuration file, using
just one server.

On the server side:

volume brick
  type storage/posix
  option directory /homes/davinci/filipe
end-volume

volume ns
  type storage/posix
  option directory /homes/ns
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option auth.addr.ns.allow *
  subvolumes brick ns
end-volume
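
To start the server for this test I run something along these lines (the volfile path is only an example, not necessarily the exact path I use):

  glusterfsd -f /etc/glusterfs/server.vol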

On the client side:

volume b1
  type protocol/client
  option transport-type tcp
  option remote-host michelangelo
  option remote-subvolume brick
end-volume

volume ns
  type protocol/client
  option transport-type tcp
  option remote-host michelangelo
  option remote-subvolume ns
end-volume

volume unify1
  type cluster/unify
  subvolumes b1
  option namespace ns
  ### ** Round Robin (RR) Scheduler **
  option scheduler rr
  option rr.limits.min-free-disk 10%          # Don't place new files on a brick whose free space is below 10%
  option rr.refresh-interval 10               # Re-check the bricks' free space every 10 seconds
end-volume
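
The client is then mounted with something like this (again, the volfile path and mount point are only examples):

  glusterfs -f /etc/glusterfs/client.vol /mnt/unify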

I will see if I can discover any pattern in the missing files.
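
Roughly, I plan to compare the brick's backend directory with the unify mount, something like the sketch below (it assumes the unify volume is mounted at /mnt/unify on the machine that exports /homes/davinci/filipe; the list file names are just examples):

  # Files present on the backend brick but not visible through unify
  ( cd /homes/davinci/filipe && find . | sort ) > /tmp/backend.list
  ( cd /mnt/unify && find . | sort ) > /tmp/unify.list
  comm -23 /tmp/backend.list /tmp/unify.list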

On Fri, Jan 16, 2009 at 09:36, Filipe Maia <filipe at xray.bmc.uu.se> wrote:
> Pre-existing.
> The problem does not occur if I remove the unify (if I export just one
> volume and import that one volume).
>
> On Thu, Jan 15, 2009 at 22:38, Anand Avati <avati at zresearch.com> wrote:
>> Filipe,
>>  how did you populate data into your volumes? Was it pre-existing data
>> or did you copy in all data into a freshly created empty volume?
>>
>> avati
>>
>> 2009/1/16 Filipe Maia <filipe at xray.bmc.uu.se>:
>>> Hi,
>>>
>>> I'm trying to use unify to replace my NFS servers but I have some problems.
>>> In my tests I only see about a quarter of the files that I see on NFS.
>>> I also get the following errors in my glusterfsd.log:
>>>
>>> Version      : glusterfs 2.0.0rc1 built on Jan 15 2009 00:02:28
>>> TLA Revision : glusterfs--mainline--3.0--patch-844
>>> Starting Time: 2009-01-15 13:50:01
>>> Command line : glusterfsd
>>> given volfile
>>> +-----
>>>  1: volume disk
>>>  2:   type storage/posix
>>>  3:   option directory /homes/davinci
>>>  4: end-volume
>>>  5:
>>>  6: volume disk-rs
>>>  7:   type features/filter
>>>  8:   option root-squashing enable
>>>  9:   subvolumes disk
>>>  10: end-volume
>>>  11:
>>>  12: volume iot
>>>  13:   type performance/io-threads
>>>  14:   subvolumes disk-rs
>>>  15:   option thread-count 4
>>>  16: end-volume
>>>  17:
>>>  18: volume brick
>>>  19:   type performance/write-behind
>>>  20:   subvolumes iot
>>>  21:   option window-size 2MB
>>>  22:   option aggregate-size 1MB
>>>  23: end-volume
>>>  24:
>>>  25: # Volume name is server
>>>  26: volume server
>>>  27:   type protocol/server
>>>  28:   option transport-type tcp
>>>  29:   option auth.addr.brick.allow *
>>>  30:   subvolumes brick
>>>  31: end-volume
>>> +-----
>>>
>>> 2009-01-15 13:50:01 W [xlator.c:382:validate_xlator_volume_options] brick: option 'aggregate-size' is deprecated, preferred is 'block-size', continuing with correction
>>> 2009-01-15 13:50:01 W [xlator.c:382:validate_xlator_volume_options] brick: option 'window-size' is deprecated, preferred is 'cache-size', continuing with correction
>>> 2009-01-15 14:01:42 E [socket.c:104:__socket_rwv] server: readv failed (Connection reset by peer)
>>> 2009-01-15 14:01:42 E [socket.c:566:socket_proto_state_machine] server: socket read failed (Connection reset by peer) in state 1 (192.168.1.235:1020)
>>> 2009-01-15 14:04:07 W [posix.c:1042:posix_link] disk: link /filipe/.Xauthority-n to /filipe/.Xauthority failed: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/.dbus: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/.dbus/session-bus: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Defaults: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Library: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Library/WindowMaker: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Library/WindowMaker/Backgrounds: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Library/WindowMaker/IconSets: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Library/WindowMaker/SoundSets: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Library/WindowMaker/Pixmaps: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Library/Icons: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Library/WindowMaker/Sounds: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Library/WindowMaker/Styles: File exists
>>> 2009-01-15 14:04:08 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/GNUstep/Library/WindowMaker/Themes: File exists
>>> 2009-01-15 14:08:57 E [socket.c:104:__socket_rwv] server: readv failed (Connection reset by peer)
>>> 2009-01-15 14:08:57 E [socket.c:566:socket_proto_state_machine] server: socket read failed (Connection reset by peer) in state 1 (192.168.1.235:1019)
>>> 2009-01-15 15:34:44 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/.kde: File exists
>>> 2009-01-15 15:34:46 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/.kde/share: File exists
>>> 2009-01-15 15:34:46 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/.kde/share/config: File exists
>>> 2009-01-15 15:34:46 W [posix.c:928:posix_symlink] disk: symlink of /filipe/.kde/socket-gauguin --> /tmp/ksocket-filipe: File exists
>>> 2009-01-15 15:34:46 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/.qt: File exists
>>> 2009-01-15 15:34:47 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/.kde/share/apps: File exists
>>> 2009-01-15 15:35:02 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/.mcop: File exists
>>> 2009-01-15 15:35:57 W [posix.c:796:posix_mkdir] disk: mkdir of /filipe/.ssh: File exists
>>> 2009-01-15 15:37:58 E [socket.c:104:__socket_rwv] server: readv failed (Connection reset by peer)
>>> 2009-01-15 15:37:58 E [socket.c:566:socket_proto_state_machine] server: socket read failed (Connection reset by peer) in state 1 (192.168.1.235:1016)
>>> 2009-01-15 16:04:32 E [socket.c:104:__socket_rwv] server: writev failed (Connection reset by peer)
>>> 2009-01-15 16:05:16 E [write-behind.c:1150:wb_flush] brick: returning EBADFD
>>> 2009-01-15 16:05:17 E [write-behind.c:1150:wb_flush] brick: returning EBADFD
>>> 2009-01-15 16:05:17 E [write-behind.c:1150:wb_flush] brick: returning EBADFD
>>> 2009-01-15 16:05:17 E [write-behind.c:1150:wb_flush] brick: returning EBADFD
>>> 2009-01-15 16:05:17 E [write-behind.c:1150:wb_flush] brick: returning EBADFD
>>> 2009-01-15 16:05:17 E [write-behind.c:1150:wb_flush] brick: returning EBADFD
>>> 2009-01-15 16:05:17 E [write-behind.c:1150:wb_flush] brick: returning EBADFD
>>> 2009-01-15 16:05:17 E [write-behind.c:1150:wb_flush] brick: returning EBADFD
>>>
>>>
>>> I don't think I have any hardware problems as I can cat all the files
>>> in my home directory without any problem.
>>>
>>> I tried to reproduce the problem with a smaller setup, but without much
>>> luck, unfortunately.
>>>
>>> Here is the client file:
>>>
>>> volume tintoretto
>>>  type protocol/client
>>>  option transport-type tcp
>>>  option remote-host tintoretto
>>>  option remote-subvolume brick
>>> end-volume
>>>
>>> volume giotto
>>>  type protocol/client
>>>  option transport-type tcp
>>>  option remote-host giotto
>>>  option remote-subvolume brick
>>> end-volume
>>>
>>> volume michelangelo
>>>  type protocol/client
>>>  option transport-type tcp
>>>  option remote-host michelangelo
>>>  option remote-subvolume brick
>>> end-volume
>>>
>>> volume donatello
>>>  type protocol/client
>>>  option transport-type tcp
>>>  option remote-host donatello
>>>  option remote-subvolume brick
>>> end-volume
>>>
>>> volume ns
>>>  type protocol/client
>>>  option transport-type tcp
>>>  option remote-host tintoretto
>>>  option remote-subvolume ns
>>> end-volume
>>>
>>> volume bricks
>>>  type cluster/unify
>>>  option namespace ns # this will not be a storage child of unify.
>>>  subvolumes tintoretto michelangelo giotto donatello
>>> #  option self-heal foreground # foreground off # default is foreground
>>> #  option self-heal background # foreground off # default is foreground
>>> ### ** Round Robin (RR) Scheduler **
>>>  option scheduler rr
>>> # A server is not used if its free disk space drops below 15%.
>>>  option scheduler.limits.min-free-disk 15%
>>> end-volume
>>>
>>> volume bricks-rs
>>>  type features/filter
>>>  option root-squashing enable
>>>  subvolumes bricks
>>> end-volume
>>>
>>> volume iot
>>>  type performance/io-threads
>>>  subvolumes bricks-rs
>>>  option thread-count 4
>>> end-volume
>>>
>>> volume wb
>>>  type performance/write-behind
>>>  subvolumes iot
>>>  option flush-behind off    # default value is 'off'
>>>  option window-size 2MB
>>>  option aggregate-size 1MB # default value is 0
>>> end-volume
>>>
>>> ### The 'IO-Cache' translator is best used on the client side when a filesystem has files
>>> # which are not modified frequently but are read several times. For example, while
>>> # compiling a kernel, the *.h files are read while compiling every *.c file; in this
>>> # case the io-cache translator comes in very handy, as it keeps the whole file content
>>> # in the cache and serves it from there.
>>> # One can also set per-file cache priorities.
>>>
>>> volume ioc
>>>  type performance/io-cache
>>>  subvolumes wb
>>>  option page-size 1MB      # 128KB is default
>>>  option cache-size 64MB    # 32MB is default
>>>  option cache-timeout 5 # 1second is default
>>>  option priority *.c:2,*.h:1 # default is *:0
>>> end-volume
>>>
>>>
>>> ### 'Read-Ahead' translator is best utilized on client side, as it prefetches
>>> # the file contents when the first read() call is issued.
>>> volume ra
>>>  type performance/read-ahead
>>>  subvolumes ioc
>>>  option page-size 1MB         # default is 256KB
>>>  option page-count 4          # default is 2
>>>  option force-atime-update no # default is 'no'
>>> end-volume
>>>
>>>
>>> Filipe
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>>
>>
>



