[Gluster-devel] 1.3.0pre2

Anand Avati avati at zresearch.com
Fri Mar 2 22:46:32 UTC 2007


Brent,
  first off, thank you for trying glusterfs. Can you give a few more
  details -
  
  * is the log from the server or the client?
  * the log messages from the other one as well.
  * if possible, a backtrace from the core of the one that died.
  
  can you also tell us what I/O pattern triggered the crash? was it
  heavy I/O on a single file? creation of a lot of files? metadata
  operations? and is it possible to reproduce it consistently with some
  steps?
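  
  A sketch of how to pull that backtrace out of the core (the binary
  path and core location are assumptions; adjust them to your install):
  
  ```
  $ ulimit -c unlimited            # allow core dumps before reproducing
  $ gdb /usr/sbin/glusterfs core   # or glusterfsd, if the server died
  (gdb) bt full                    # backtrace with local variables
  (gdb) thread apply all bt        # in case it crashed off the main thread
  ```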
  
  Also, we recently uploaded the pre2-1 release tarball. It has a couple
  of bug fixes, but I need your answers to tell whether those fixes
  apply to your case as well.
  
  Please attach your spec files as well.
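  
  For reference, a minimal pair of spec files for this release looks
  roughly like the following (hostnames, paths, and option names here
  are from memory of the 1.3 series, so check them against the examples
  shipped in the tarball):
  
  ```
  # server.vol
  volume brick
    type storage/posix
    option directory /export/brick
  end-volume
  
  volume server
    type protocol/server
    option transport-type tcp/server
    subvolumes brick
    option auth.ip.brick.allow *
  end-volume
  
  # client.vol
  volume client
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.0.1
    option remote-subvolume brick
  end-volume
  ```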
  
  regards,
  avati
  
  On Fri, Mar 02, 2007 at 04:05:17PM -0500, Brent A Nelson wrote:
> So, I compiled 1.3.0pre2 as soon as it came out (nice, trouble-free 
> standard configure and make), and I found it very easy to set up a 
> GlusterFS with one node mirroring 16 disks to another, all optimizers 
> loaded.
> 
> However, it isn't stable under load.  I get errors like the following and 
> glusterfs exits:
> 
> [Mar 02 14:23:29] [ERROR/common-utils.c:52/full_rw()] 
> libglusterfs:full_rw: 0 bytes r/w instead of 113
> 
> I thought it might be because I was using the stock fuse module with my 
> kernel, but I replaced it with the 2.6.3 fuse module and it still dies in 
> this way.
> 
> Is this a bug, or is it that my setup is poor (one node serves 16 
> individual shares through a single glusterfsd, the mirror node does the 
> same, and the servers are also acting as my test clients), or that I'm 
> not using the deadline scheduler (yet), or...?
> 
> Thanks,
> 
> Brent
> 
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
> 

-- 
Shaw's Principle:
        Build a system that even a fool can use,
        and only a fool will want to use it.




