[Gluster-devel] My experience with patch-287

DeeDee Park deedee6905 at hotmail.com
Tue Jul 10 02:21:44 UTC 2007


I ran across some problems with the patch-287 build (patch-249 was the tree-id 
I used before).

I'm running a test with only 2 bricks, and 1 client.

*) The original client config is set up for 4 bricks, but two of the machines 
were shut off. With the client still configured for all 4 bricks, a couple of 
times it wouldn't let me do an 'ls', complaining about not finding ".". After 
some time, I was able to do an ls.

*) I did a "cd /glusterfs/somedir; rm -rf .SomeDir*", and it erased *most* 
of the directories/files. I did it again and it removed more, but again not 
all of the files. It ended up leaving one stubborn directory that I couldn't 
erase.

*) I did a "df -kh" on a setup with 1 client and 2 servers -- 40GB and 750GB 
-- and it showed only the total disk space of 1 of the bricks (40GB). This 
used to work in earlier versions.

Config:
Server1 (40GB): posix, iothreads, server
Server2 (750GB):
    volume brick
    volume brick-ns (this one is only about 6GB)
    volume iothreads-brick
    volume iothreads-brick-ns
    volume server
        subvolumes iothreads-brick iothreads-brick-ns
        auth.ip.brick.allow
        auth.ip.brick-ns.allow
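
Written out as a full spec file, the Server2 side would be roughly the 
following -- the directory paths and the "*" allow addresses are placeholders 
here, not my real values:

    volume brick
        type storage/posix
        option directory /export/brick          # 750GB data export (path is a placeholder)
    end-volume

    volume brick-ns
        type storage/posix
        option directory /export/brick-ns       # ~6GB namespace export (path is a placeholder)
    end-volume

    volume iothreads-brick
        type performance/io-threads
        subvolumes brick
    end-volume

    volume iothreads-brick-ns
        type performance/io-threads
        subvolumes brick-ns
    end-volume

    volume server
        type protocol/server
        option transport-type tcp/server
        subvolumes iothreads-brick iothreads-brick-ns
        option auth.ip.brick.allow *            # placeholder address
        option auth.ip.brick-ns.allow *         # placeholder address
    end-volume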

client:
    vol server2-namespace
        remote-host server2
        remote-subvolume brick-ns
    vol server2-brick
        remote-host server2
        remote-subvolume brick
    vol server1
    vol unify
        subvolumes server2-namespace server2-brick server1
        scheduler alu
    (NOTE: no AFR)
    vol writeback
    vol readahead
    (NOTE: no stat-prefetch)
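
And the client spec, spelled out, is roughly this -- server1's 
remote-subvolume name is a guess on my part, and the namespace brick is 
hooked in through unify's namespace option:

    volume server2-namespace
        type protocol/client
        option transport-type tcp/client
        option remote-host server2
        option remote-subvolume brick-ns
    end-volume

    volume server2-brick
        type protocol/client
        option transport-type tcp/client
        option remote-host server2
        option remote-subvolume brick
    end-volume

    volume server1
        type protocol/client
        option transport-type tcp/client
        option remote-host server1
        option remote-subvolume brick           # guessing server1 exports a volume named "brick"
    end-volume

    volume unify
        type cluster/unify
        option namespace server2-namespace
        option scheduler alu                    # alu tuning options left out here
        subvolumes server2-brick server1
    end-volume

    volume writeback
        type performance/write-behind
        subvolumes unify
    end-volume

    volume readahead
        type performance/read-ahead
        subvolumes writeback
    end-volume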
