[Gluster-devel] Re: ALU Scheduler ?

Angel clist at uah.es
Thu Jan 17 10:04:17 UTC 2008


Hi,

You can refer to
http://www.gluster.org/docs/index.php/GlusterFS_Translators_v1.3#ALU_Scheduler
for complete information.

ALU is Adaptive Least Usage, an intelligent scheduler built from submodules.



volume bricks
  type cluster/unify
  subvolumes brick1 brick2 brick3 brick4
  option scheduler alu   # use the ALU scheduler
  option alu.limits.min-free-disk  5%      # Don't create files on a volume with less than 5% free disk space
  option alu.limits.max-open-files 10000   # Don't create files on a volume with more than 10000 files open
  
  # When deciding where to place a file, first look at the disk-usage, then at  
  # read-usage, write-usage, open files, and finally the disk-speed-usage.
  option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
  option alu.disk-usage.entry-threshold 2GB   # Kick in if the discrepancy in disk-usage between volumes is more than 2GB
  option alu.disk-usage.exit-threshold  60MB   # Don't stop writing to the least-used volume until the discrepancy drops to 1988MB (2GB - 60MB)
  option alu.open-files-usage.entry-threshold 1024   # Kick in if the discrepancy in open files is 1024
  option alu.open-files-usage.exit-threshold 32   # Don't stop until 992 files have been written to the least-used volume (1024 - 32)
# option alu.read-usage.entry-threshold 20%   # Kick in when the read-usage discrepancy is 20%
# option alu.read-usage.exit-threshold 4%   # Don't stop until the discrepancy has been reduced to 16% (20% - 4%)
# option alu.write-usage.entry-threshold 20%   # Kick in when the write-usage discrepancy is 20%
# option alu.write-usage.exit-threshold 4%   # Don't stop until the discrepancy has been reduced to 16%
# option alu.disk-speed-usage.entry-threshold # NEVER SET IT. SPEED IS CONSTANT!!!
# option alu.disk-speed-usage.exit-threshold  # NEVER SET IT. SPEED IS CONSTANT!!!
  option alu.stat-refresh.interval 10sec   # Refresh the statistics used for decision-making every 10 seconds
# option alu.stat-refresh.num-file-create 10   # Refresh the statistics used for decision-making after creating 10 files
end-volume
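
To use it, save this as the client spec and mount it. A minimal sketch, assuming a 1.3.x-style invocation; the paths are just examples of mine:

  # mount the unified volume on the client
  glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs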


On Thursday, 17 January 2008 03:15, An. Dinh Nhat wrote:
> 
> I have a question. I don't understand the parameters below. Could you please explain them to me? Thank you.
> 
>    *  ALU Scheduler Volume example 
> 
>   volume bricks
> 
>   type cluster/unify
> 
>   subvolumes brick1 brick2 brick3 brick4
> 
>   option scheduler alu   # use the ALU scheduler
> 
>   option alu.limits.min-free-disk  5%      
> 
>   option alu.limits.max-open-files 10000   
> 
>   option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
> 
>   option alu.disk-usage.entry-threshold 2GB   
> 
>   option alu.disk-usage.exit-threshold  60MB    
> 
>   option alu.open-files-usage.entry-threshold 1024   
> 
>   option alu.open-files-usage.exit-threshold 32   
> 
>   option alu.read-usage.entry-threshold 20%   
> 
>   option alu.read-usage.exit-threshold 4%   
> 
>   option alu.stat-refresh.interval 10sec   
> 
>   option alu.stat-refresh.num-file-create 10   
> 
>  
> 
> Thanks & Best Regards,
> 
> Đinh Nhật An
> 
> System Engineer
> 
> System Operation - Vinagame JSC
> 
> Email:andn at vinagame.com.vn - Yahoo:atuladn
> 
> Vinagame JSC - 459B Nguyễn Đình Chiểu, Q3, HCMC, Vietnam
> 
> Office phone: 8.328.426 Ext 310
> 
>  
> 
>  
> 
> -----Original Message-----
> From: Angel [mailto:clist at uah.es] 
> Sent: Thursday, January 17, 2008 3:37 AM
> To: An. Dinh Nhat
> Cc: gluster-devel at nongnu.org
> Subject: Re: AFR Translator have problem
> 
>  
> 
>  
> 
> I see, the GlusterFS developers have this point in mind on the roadmap:
> 
>  
> 
> the roadmap for the 1.4 release says:
> 
>  
> 
> active self-heal - log and replay failed I/O transactions 
> 
> brick hot-add/remove/swap - live storage hardware maintenance
> 
>  
> 
> So until then, we users have to figure out how to force lazy AFRs into doing their job :-)
> 
>  
> 
> One positive aspect is that this way you can control how many resources are devoted to AFR: the more files you touch, the more replication occurs, and
> 
> in the event of high network or CPU pressure, lowering the touch rate should reduce AFR's load. (See the sketch below.)
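> 
> A minimal sketch of that (untested; the mount path, and the assumption that a small read is enough to trigger the heal, are mine):
> 
>   find /mnt/glusterfs -type f -print0 |
>   while IFS= read -r -d '' f; do
>       head -c1 "$f" > /dev/null   # opening/reading the file should trigger AFR self-heal
>       sleep 0.1                   # throttle to keep net/cpu pressure down
>   done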
> 
>  
> 
> Your mileage may vary. :-P
> 
>  
> 
> Perhaps the GlusterFS client (or maybe the servers) should talk to a housekeeping daemon to accomplish these tasks, instead of over-engineering the code to do as many things as required...
> 
>  
> 
> Let's wait and see what the developers have to say about this issue...
> 
>  
> 
> Regards, Angel
> 
>  
> 
> On Wednesday, 16 January 2008, An. Dinh Nhat wrote:
> 
> > Thanks for your answer.
> 
> > 
> 
> > I understand that touching files after server 3 comes up works around the AFR issue. However, suppose I have 2 servers, then edit glusterfs-client.vol to add one more server, and the mount point holds 40000 files totalling 800 GB. How can AFR replicate the files to server3 automatically?
> 
> > 
> 
> > -----Original Message-----
> 
> > From: Angel [mailto:clist at uah.es] 
> 
> > Sent: Wednesday, January 16, 2008 11:16 PM
> 
> > To: gluster-devel at nongnu.org
> 
> > Cc: An. Dinh Nhat
> 
> > Subject: Re: AFR Translator have problem
> 
> > 
> 
> > I think AFR replication occurs on file access.
> 
> > 
> 
> > Try touching all the files from the client; that should trigger replication onto server3.
> 
> > 
> 
> > client --> creates files on AFR(server1,server2)
> 
> > 
> 
> > server 3 goes up; now we have AFR(server1,server2,server3)
> 
> > 
> 
> > you won't see any files on server3 yet
> 
> > 
> 
> > 
> 
> > now touch files from the client; AFR will be triggered
> 
> > 
> 
> > now you will see the touched files on server3
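> 
> > For clarity, the client spec for that scenario might look roughly like this (the hostnames and the remote subvolume name are placeholders of my own):
> 
> >   volume server1
> >     type protocol/client
> >     option transport-type tcp/client
> >     option remote-host 192.168.0.1
> >     option remote-subvolume brick
> >   end-volume
> >   # server2 and server3 are defined the same way, pointing at their own hosts
> >   volume afr
> >     type cluster/afr
> >     subvolumes server1 server2 server3
> >   end-volume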
> 
> > 
> 
> > I've made a similar test in a local scenario: client --> local AFR(dir1,dir2)
> 
> > 
> 
> > I copied a file test.pdf to my mount point and it got replicated to both 'remote' dirs. Next I deleted one copy from the exported 'remote' directories (dir1).
> 
> > After that, I opened the PDF file on the mount point; it opened fine, and I could see that dir1 was storing a new copy of test.pdf again.
> 
> > 
> 
> > Looking at the code, it seems to me that things mostly happen on file operations, because xlators work by intercepting FUSE calls along the path down to the posix modules.
> 
> > 
> 
> > My tests showed things occurring like this...
> 
> > 
> 
> > 
> 
> > Regards Angel
> 
> > On Wednesday, 16 January 2008, Anand Avati wrote:
> 
> > > Dinh,
> 
> > >  can you post your spec files, mentioning the order of events in terms of
> 
> > > subvolumes?
> 
> > > 
> 
> > > thanks,
> 
> > > avati
> 
> > > 
> 
> > > ---------- Forwarded message ----------
> 
> > > From: An. Dinh Nhat <andn at vinagame.com.vn>
> 
> > > Date: 16-ene-2008 16:07
> 
> > > Subject: AFR Translator have problem
> 
> > > To: gluster-devel-owner at nongnu.org
> 
> > > 
> 
> > >  Hi.
> 
> > > 
> 
> > > I set up 3 servers using GlusterFS <http://www.gluster.org/docs/index.php/GlusterFS>.
> 
> > > At first I started 2 servers; then, from the client, I mounted GlusterFS and copied 10 files onto the gluster volume.
> 
> > > After that I started 'server 3'; however, I don't see any files on 'server 3', so I think the AFR translator has a problem.
> 
> > > 
> 
> > > 
> 
> > > 
> 
> > > [root@client examples]# glusterfs -V
> 
> > > 
> 
> > > glusterfs 1.3.7 built on Dec 18 2007
> 
> > > 
> 
> > > 
> 
> > > 
> 
> > > 
> 
> > > 
> 
> > > Thanks & Best Regards,
> 
> > > Đinh Nhật An
> 
> > > System Engineer
> 
> > > 
> 
> > > System Operation - Vinagame JSC
> 
> > > Email:andn at vinagame.com.vn - Yahoo:atuladn
> 
> > > Vinagame JSC - 459B Nguyễn Đình Chiểu, Q3, HCMC, Vietnam
> 
> > > 
> 
> > > Office phone: 8.328.426 Ext 310
> 
> > > 
> 
> > > 
> 
> > > 
> 
> > > 
> 
> > > 
> 
> > 
> 
> > 
> 
> > 
> 
>  
> 
>  
> 
>  
> 

-- 
----------------------------
Clister UAH
----------------------------




