[Gluster-devel] Re: big problem with unify in rc2

Dan Parsons dparsons at nyip.net
Wed Feb 25 22:01:52 UTC 2009


Further information: I can't find any rhyme or reason as to why some files
have this problem and others don't. It's not every file on dht or every
file on stripe. I don't think I've found any broken files on stripe;
everything that's broken is on dht, but not every dht file is broken.
Please advise.

Dan


On Wed, Feb 25, 2009 at 1:47 PM, Dan Parsons <dparsons at nyip.net> wrote:

> Upon further examination, it looks like unify may be expecting that .pni
> file to be handled by stripe: according to the error message, it expects
> to find the file on more than one server. It's supposed to be handled by
> dht, though, and I've verified that this file exists on just one server,
> as it should.
> Did the syntax for specifying file extensions to 'option scheduler switch'
> change between rc1 and rc2?
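>
> In case it helps, here's the minimal shape of what I'm using, abbreviated
> from the full config in my earlier message below (the case list is
> truncated here; .pni intentionally isn't in it, so those files should fall
> through to dht0):
>
> volume unify
>    type cluster/unify
>    option namespace unify-switch-ns
>    option self-heal off
>    option scheduler switch
>    option scheduler.switch.case *.phr*:stripe0;*.psq*:stripe0;*.pin*:stripe0
>    subvolumes stripe0 dht0
> end-volume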
>
> Dan
>
>
>
> On Wed, Feb 25, 2009 at 1:44 PM, Dan Parsons <dparsons at nyip.net> wrote:
>
>> After upgrading to rc2, I'm getting unify errors for a lot of files. If I
>> try to read one of these files, I get an I/O error. Here are the
>> corresponding lines from the gluster log:
>> 2009-02-25 13:42:49 E [unify.c:1239:unify_open] unify:
>> /bio/db/blast/blastp-nr_v9/blastp-nr.19.pni: entry_count is 1
>> 2009-02-25 13:42:49 E [unify.c:1242:unify_open] unify:
>> /bio/db/blast/blastp-nr_v9/blastp-nr.19.pni: found on unify-switch-ns
>> 2009-02-25 13:42:49 E [unify.c:1246:unify_open] unify: returning EIO as
>> file found on onlyone node
>> 2009-02-25 13:42:49 E [fuse-bridge.c:667:fuse_fd_cbk] glusterfs-fuse:
>> 4152: OPEN() /bio/db/blast/blastp-nr_v9/blastp-nr.19.pni => -1 (Input/output
>> error)
>>
>> No errors on any of the servers, and I've verified that the files DO exist
>> on the server. Here's my client config:
>>
>> volume unify-switch-ns
>>    type protocol/client
>>    option transport-type tcp
>>    option remote-host 10.8.101.51
>>    option remote-subvolume posix-unify-switch-ns
>> end-volume
>>
>> #volume distfs01-ns-readahead
>> #   type performance/read-ahead
>> #   option page-size 1MB
>> #   option page-count 8
>> #   subvolumes distfs01-ns-brick
>> #end-volume
>>
>> #volume unify-switch-ns
>> #   type performance/write-behind
>> #   option block-size 1MB
>> #   option cache-size 3MB
>> #   subvolumes distfs01-ns-readahead
>> #end-volume
>>
>> volume distfs01-unify
>>    type protocol/client
>>    option transport-type tcp
>>    option remote-host 10.8.101.51
>>    option remote-subvolume posix-unify
>> end-volume
>>
>> volume distfs02-unify
>>    type protocol/client
>>    option transport-type tcp
>>    option remote-host 10.8.101.52
>>    option remote-subvolume posix-unify
>> end-volume
>>
>> volume distfs03-unify
>>    type protocol/client
>>    option transport-type tcp
>>    option remote-host 10.8.101.53
>>    option remote-subvolume posix-unify
>> end-volume
>>
>> volume distfs04-unify
>>    type protocol/client
>>    option transport-type tcp
>>    option remote-host 10.8.101.54
>>    option remote-subvolume posix-unify
>> end-volume
>>
>> volume distfs01-stripe
>>    type protocol/client
>>    option transport-type tcp
>>    option remote-host 10.8.101.51
>>    option remote-subvolume posix-stripe
>> end-volume
>>
>> volume distfs02-stripe
>>    type protocol/client
>>    option transport-type tcp
>>    option remote-host 10.8.101.52
>>    option remote-subvolume posix-stripe
>> end-volume
>>
>> volume distfs03-stripe
>>    type protocol/client
>>    option transport-type tcp
>>    option remote-host 10.8.101.53
>>    option remote-subvolume posix-stripe
>> end-volume
>>
>> volume distfs04-stripe
>>    type protocol/client
>>    option transport-type tcp
>>    option remote-host 10.8.101.54
>>    option remote-subvolume posix-stripe
>> end-volume
>>
>> volume stripe0
>>    type cluster/stripe
>>    option block-size *.jar,*.pin:1MB,*:2MB
>>    subvolumes distfs01-stripe distfs02-stripe distfs03-stripe distfs04-stripe
>> end-volume
>>
>> volume dht0
>>    type cluster/dht
>>    subvolumes distfs01-unify distfs02-unify distfs03-unify distfs04-unify
>> end-volume
>>
>> volume unify
>>    type cluster/unify
>>    option namespace unify-switch-ns
>>    option self-heal off
>>    option scheduler switch
>>    # send *.phr/psq/pnd etc to stripe0, send the rest to hash
>>    # extensions have to be *.foo* and not simply *.foo or rsync's tmp file
>>    # naming will prevent files from being matched
>>    option scheduler.switch.case *.phr*:stripe0;*.psq*:stripe0;*.pnd*:stripe0;*.psd*:stripe0;*.pin*:stripe0;*.nsi*:stripe0;*.nin*:stripe0;*.nsd*:stripe0;*.nhr*:stripe0;*.nsq*:stripe0;*.tar*:stripe0;*.tar.gz*:stripe0;*.jar*:stripe0;*.img*:stripe0;*.perf*:stripe0;*.tgz*:stripe0;*.fasta*:stripe0;*.huge*:stripe0
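>>    # Example, with a hypothetical rsync temp name: while rsync copies
>>    # blastp-nr.19.pin it writes to something like .blastp-nr.19.pin.Gb3xQz,
>>    # which a plain '*.pin' pattern would miss but '*.pin*' still matches,
>>    # so files written via rsync still land on stripe0.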
>>    subvolumes stripe0 dht0
>> end-volume
>>
>> volume ioc
>>    type performance/io-cache
>>    subvolumes unify
>>    option cache-size 3000MB
>>    option cache-timeout 3600
>> end-volume
>>
>> volume filter
>>    type features/filter
>>    option fixed-uid 0
>>    option fixed-gid 900
>>    subvolumes ioc
>> end-volume
>>
>>
>> Dan
>>
>
>

