[Gluster-users] Question regarding tiering

Dan Lambright dlambrig at redhat.com
Mon Oct 5 23:00:37 UTC 2015


> Subject:         [Gluster-users] Question regarding tiering
> Date:         Thu, 1 Oct 2015 17:48:41 +0530
> From:         Ameet Pyati <ameet.pyati at virident.net>
> To:         gluster-users at gluster.org
> 
> 
> 
> Hi,
> 
> I am trying to attach a cache tier to a normal distributed volume. I am
> seeing write failures when the cache brick becomes full. Following are
> the steps:
> 
> 
> >> 1. create volume using hdd brick
> 
> root@host:~/gluster/glusterfs# gluster volume create vol
> host:/data/brick1/hdd
> volume create: vol: success: please start the volume to access data
> root@host:~/gluster/glusterfs# gluster volume start vol
> volume start: vol: success
> 
> >> 2. mount and write one file of size 1G


The tech preview version of tiering does not gracefully handle a full hot tier. When the feature is out of tech preview (later this fall?) a watermarking feature will exist. It will aggressively move data off the hot tier when its utilization crosses the watermark.

The watermark's value is expressed as a percentage of the hot tier's capacity. So if you set the watermark to 80%, then when the hot tier is 80% full, the system will begin aggressively moving data off the hot tier to the cold tier.
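As a rough illustration only: the option name below is an assumption about what the released interface might look like, not something available in the tech preview. With the 9.4G SSD brick from this thread, an 80% watermark would start demotions at roughly 7.5G used:

  # hypothetical CLI once watermarking ships; the option name is an
  # assumption, not part of the current tech preview
  gluster volume set vol cluster.watermark-hi 80
  # 80% of the 9.4G hot brick is ~7.5G; crossing that point would
  # trigger aggressive demotion to the cold tier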

There are some other mechanisms being explored to buttress watermarking:

- take a statfs of the hot tier every X number of I/Os, so we discover the system is "in the red zone" sooner (a shell approximation is sketched after this list).

- check the return value of a file operation for "out of space", and redirect that file operation to the cold tier if this happens (ideal, but may be complex).

Together these ideas should eventually provide a more resilient and responsive system.
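A minimal shell sketch of the first idea, polling the hot brick's utilization from userspace the way a periodic statfs inside the I/O path would; the brick path and the 80% threshold are assumptions chosen to match this thread:

  #!/bin/sh
  # poll the hot brick's used percentage, as a stand-in for a statfs
  # issued from the I/O path every X operations
  BRICK=/data/brick2    # hot tier brick from this thread (assumption)
  THRESHOLD=80          # illustrative watermark

  # df -P prints "Use%" in column 5 of the second line; strip the '%'
  USED=$(df -P "$BRICK" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
  if [ "$USED" -ge "$THRESHOLD" ]; then
      echo "hot tier at ${USED}%: in the red zone, demote aggressively"
  else
      echo "hot tier at ${USED}%: below the watermark"
  fi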

> 
> root@host:~/gluster/glusterfs# mount -t glusterfs host:/vol /mnt
> root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file1 bs=1G count=1
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB) copied, 1.50069 s, 715 MB/s
>
> root@host:~/gluster/glusterfs# du -sh /data/brick*
> 1.1G    /data/brick1
> 60K     /data/brick2
> 
>
> >> 3. attach ssd brick as tier
>
> root@host:~/gluster/glusterfs# gluster volume attach-tier vol
> host:/data/brick2/ssd
> Attach tier is recommended only for testing purposes in this release.
> Do you want to continue? (y/n) y
> volume attach-tier: success
> volume rebalance: vol: success: Rebalance on vol has been started
> successfully. Use rebalance status command to check status of the
> rebalance process.
> ID: dea8d1b7-f0f4-4c17-94f5-ba0e263bc561
>
> root@host:~/gluster/glusterfs# gluster volume rebalance vol tier status
> Node        Promoted files   Demoted files   Status
> ---------   ---------        ---------       ---------
> localhost   0                0               in progress
> volume rebalance: vol: success
> 
>
> >> 4. write data to fill up cache tier
> 
> root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file2 bs=1G
> count=9 oflag=direct
> 9+0 records in
> 9+0 records out
> 9663676416 bytes (9.7 GB) copied, 36.793 s, 263 MB/s
> root@host:~/gluster/glusterfs# du -sh /data/brick*
> 1.1G    /data/brick1
> 9.1G    /data/brick2
> root@host:~/gluster/glusterfs# gluster volume rebalance vol tier status
> Node        Promoted files   Demoted files   Status
> ---------   ---------        ---------       ---------
> localhost   0                0               in progress
> volume rebalance: vol: success
> root@host:~/gluster/glusterfs# gluster volume rebalance vol status
> Node        Rebalanced-files   size     scanned   failures   skipped   status        run time in secs
> ---------   -----------        ------   -------   --------   -------   -----------   ----------------
> localhost   0                  0Bytes   0         0          0         in progress   112.00
> volume rebalance: vol: success
>
> root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file3 bs=1G
> count=5 oflag=direct
> dd: error writing '/mnt/file3': No space left on device
> dd: closing output file '/mnt/file3': No space left on device
>
> root@host:~/gluster/glusterfs# du -sh /data/brick*
> 1.1G    /data/brick1
> 9.3G    /data/brick2
> 
> >>>> there is a lot of space free in the cold brick but writes are failing...
> 
> root@vsan18:~/gluster/glusterfs# df -h
> <cut>
> /dev/sdb3       231G  1.1G  230G   1% /data/brick1
> /dev/ssd        9.4G  9.4G  104K 100% /data/brick2
> host:/vol       241G   11G  230G   5% /mnt
> 
> Please let me know if I am missing something.
> Is this behavior expected? Shouldn't the files be rebalanced?
>

