[Gluster-users] Question regarding tiering

Ameet Pyati ameet.pyati at virident.net
Thu Oct 1 12:18:41 UTC 2015


Hi,

I am trying to attach a cache tier to a normal distributed volume. I am
seeing write failures when the cache brick becomes full. The following are
the steps:


>> 1. create volume using HDD brick

root@host:~/gluster/glusterfs# gluster volume create vol host:/data/brick1/hdd/
volume create: vol: success: please start the volume to access data
root@host:~/gluster/glusterfs# gluster volume start vol
volume start: vol: success
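
(Before mounting, the layout can be sanity-checked; a quick illustrative
check using nothing beyond the standard CLI:)

# confirm volume type, status and brick list
gluster volume info vol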

>> 2. mount and write one file of size 1G

root@host:~/gluster/glusterfs# mount -t glusterfs host:/vol /mnt
root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file1 bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.50069 s, 715 MB/s

root@host:~/gluster/glusterfs# du -sh /data/brick*
1.1G    /data/brick1
60K     /data/brick2


>> 3. attach SSD brick as tier

root@host:~/gluster/glusterfs# gluster volume attach-tier vol host:/data/brick2/ssd/
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: vol: success: Rebalance on vol has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: dea8d1b7-f0f4-4c17-94f5-ba0e263bc561

root@host:~/gluster/glusterfs# gluster volume rebalance vol tier status
Node                 Promoted files       Demoted files        Status
---------            ---------            ---------            ---------
localhost            0                    0                    in progress
volume rebalance: vol: success
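
As an aside, how quickly files get demoted off the hot tier is governed by
the tier daemon's volume options. A minimal sketch of the relevant knobs;
the watermark options only appear in later 3.7.x builds, so whether they
exist on this version is an assumption on my part:

# how often (in seconds) the tier daemon considers files for demotion
gluster volume set vol cluster.tier-demote-frequency 3600
# on builds with watermark support: demotion starts above the low
# watermark; above the high watermark promotion stops and demotion
# becomes aggressive (values are percent-full of the hot tier)
gluster volume set vol cluster.watermark-low 75
gluster volume set vol cluster.watermark-hi 90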


>> 4. write data to fill up the cache tier

root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file2 bs=1G count=9 oflag=direct
9+0 records in
9+0 records out
9663676416 bytes (9.7 GB) copied, 36.793 s, 263 MB/s
root@host:~/gluster/glusterfs# du -sh /data/brick*
1.1G    /data/brick1
9.1G    /data/brick2
root@host:~/gluster/glusterfs# gluster volume rebalance vol tier status
Node                 Promoted files       Demoted files        Status
---------            ---------            ---------            ---------
localhost            0                    0                    in progress
volume rebalance: vol: success
root@host:~/gluster/glusterfs# gluster volume rebalance vol status
      Node    Rebalanced-files    size      scanned    failures    skipped    status         run time in secs
 ---------    ----------------    ------    -------    --------    -------    -----------    ----------------
 localhost    0                   0Bytes    0          0           0          in progress    112.00
volume rebalance: vol: success

root@host:~/gluster/glusterfs# dd if=/dev/zero of=/mnt/file3 bs=1G count=5 oflag=direct
dd: error writing ‘/mnt/file3’: No space left on device
dd: closing output file ‘/mnt/file3’: No space left on device

root@host:~/gluster/glusterfs# du -sh /data/brick*
1.1G    /data/brick1
9.3G    /data/brick2

>>>> there is a lot of free space on the cold brick, but writes are failing...

root@vsan18:~/gluster/glusterfs# df -h
<cut>
/dev/sdb3       231G  1.1G  230G   1% /data/brick1
/dev/ssd        9.4G  9.4G  104K 100% /data/brick2
host:/vol       241G   11G  230G   5% /mnt
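
For anyone reproducing this: one way to recover once the hot tier is full
is to drain it with detach-tier, which migrates files back to the cold
brick. A sketch of the documented workflow:

# start migrating files off the hot (SSD) tier
gluster volume detach-tier vol start
# poll until the migration shows completed
gluster volume detach-tier vol status
# then remove the tier brick from the volume
gluster volume detach-tier vol commit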

Please let me know if I am missing something.
Is this behavior expected? Shouldn't the files be rebalanced?

Thanks,
Ameet

