[Gluster-users] Bug of write-behind client side

Manhong Dai daimh at umich.edu
Fri Aug 29 13:12:11 UTC 2008


Hi,


	Thanks a lot for looking into this problem.


	Actually, the problem for me is that the output file stops growing, and
CPU utilization stays at 100% forever.
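	For reference, this is roughly how I reproduce and watch it, assuming
the GlusterFS mount point is the current directory (the writer command is the
same one quoted in my original message below):

    # write through the client mount that loads the write-behind translator
    yes abcdefghijklmn | while read l; do echo $l; done > a

    # in a second terminal: once the output is around the 1MB aggregate-size,
    # "a" stops growing while the glusterfs client process stays at 100% CPU
    watch -n 1 'ls -l a'
    top -p "$(pgrep -o glusterfs)"

	(A sketch of the smaller aggregate-size change you suggest is at the
bottom of this message, below the quoted thread.)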


Best,
Manhong

On Fri, 2008-08-29 at 11:10 +0400, Raghavendra G wrote:
> Hi,
> 
> I too ran into 100% CPU utilization with patch-796, but it was due to a
> badly configured aggregate-size. An aggregate-size of 1KB kept the CPU
> usage at around 54%.
> 
> regards,
> On Thu, Aug 28, 2008 at 5:12 PM, Dai, Manhong <daimh at umich.edu> wrote:
>         Hi,
>         
>         
>         Repository revision: glusterfs--mainline--2.5--patch-795 is the
>         version I am using.
>         
>         
>         Our cluster is in use, so I cannot confirm this problem on other
>         releases. Since removing write-behind can get rid of this problem,
>         I can live with that.
>         
>         
>         
>         Best,
>         Manhong
>         
>         
>         
>                
>         On Thu, 2008-08-28 at 07:48 +0400, Raghavendra G wrote:
>         > Hi,
>         >
>         > What patch are you using? With glusterfs--mainline--3.0--patch-329
>         > and a basic setup of write-behind over protocol/client, glusterfs
>         > CPU usage never went above 65% in my tests. Can you please confirm
>         > whether the problem persists in patch-329?
>         >
>         > regards,
>         >
>         > On Thu, Aug 28, 2008 at 6:27 AM, Dai, Manhong <daimh at umich.edu> wrote:
>         >         Hi,
>         >        
>         >         client.vol is
>         >
>         >         volume unify-brick
>         >           type cluster/unify
>         >           option scheduler rr # round robin
>         >           option namespace muskie-ns
>         >         #  subvolumes muskie-brick pike1-brick pike2-brick pike3-brick
>         >           subvolumes muskie-brick pike1-brick pike3-brick
>         >         end-volume
>         >
>         >         volume wb
>         >           type performance/write-behind
>         >           option aggregate-size 1MB
>         >           option flush-behind on
>         >           subvolumes unify-brick
>         >         end-volume
>         >
>         >         The command "yes abcdefghijklmn | while read l; do echo $l; done > a"
>         >         would cause the glusterfs process to become 100% busy and the file
>         >         system to hang when the output size is around the aggregate-size.
>         >
>         >         Removing the write-behind translator gets rid of the problem.
>         >        
>         >        
>         >        
>         >        
>         >         Best,
>         >         Manhong
>         >        
>         >        
>         >        
>         > --
>         > Raghavendra G
>         >
>         > A centipede was happy quite, until a toad in fun,
>         > Said, "Prey, which leg comes after which?",
>         > This raised his doubts to such a pitch,
>         > He fell flat into the ditch,
>         > Not knowing how to run.
>         > -Anonymous
>         >
>         
> 
> 
> 
> -- 
> Raghavendra G
> 
> A centipede was happy quite, until a toad in fun,
> Said, "Prey, which leg comes after which?",
> This raised his doubts to such a pitch,
> He fell flat into the ditch,
> Not knowing how to run.
> -Anonymous
> 
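	Following up on the aggregate-size point above: if I keep write-behind
instead of dropping it, the only change to the client.vol quoted above would
be the wb volume, roughly like this (untested on my side; the 1KB value and
the ~54% CPU figure are from Raghavendra's report, not my own measurement):

    volume wb
      type performance/write-behind
      # 1KB instead of 1MB; reported to keep CPU usage around 54%
      option aggregate-size 1KB
      option flush-behind on
      subvolumes unify-brick
    end-volume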



