[Bugs] [Bug 1259312] New: Data Tiering:File create and new writes to existing file fails when the hot tier is full instead of redirecting/flushing the data to cold tier

bugzilla at redhat.com bugzilla at redhat.com
Wed Sep 2 11:58:47 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1259312

            Bug ID: 1259312
           Summary: Data Tiering:File create and new writes to existing
                    file fails when the hot tier is full instead of
                    redirecting/flushing the data to cold tier
           Product: GlusterFS
           Version: 3.7.3
         Component: tiering
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: nchilaka at redhat.com
        QA Contact: bugs at gluster.org
                CC: bugs at gluster.org



Description of problem:
======================
Given that the hot tier is usually built on costly storage, it is very likely
to have much less disk capacity than the cold tier.
In this problem, while I am writing to a file and the hot tier becomes full,
the new writes fail. I am also unable to create any more new files, even
though the cold tier is largely free.



Version-Release number of selected component (if applicable):
=========================================================
[root at nag-manual-node1 brick999]# gluster --version
glusterfs 3.7.3 built on Aug 27 2015 01:23:05
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
[root at nag-manual-node1 brick999]# rpm -qa|grep gluster
glusterfs-libs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-fuse-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-server-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-api-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-cli-3.7.3-0.82.git6c4096f.el6.x86_64
python-gluster-3.7.3-0.82.git6c4096f.el6.noarch
glusterfs-client-xlators-3.7.3-0.82.git6c4096f.el6.x86_64
[root at nag-manual-node1 brick999]# 


How reproducible:
=====================
easily

Steps to Reproduce:
====================
1. Have a cold tier with plenty of space and a hot tier with only about 1GB of space.
2. Turn on CTR and set the demote frequency to a large value such as 3600 seconds
   (1 hour); example commands are sketched below.
3. Fill the volume with about 990MB of data (which will go to the hot tier).
4. Create a file of about 100MB (for example with dd).
5. While the file is being written, the writes start failing after about 10MB
   because the hot tier is full.
6. Also try to create new files (to check whether new files go to the cold tier,
   since the hot tier is full).
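
A minimal sketch of the setup for steps 1-2, assuming a pre-existing volume named
"testvol" and the standard 3.7 tiering options; the volume name and brick path are
placeholders, not taken from the test setup above:

# Attach a small hot tier to an existing volume (brick path is a placeholder)
gluster volume attach-tier testvol node1:/bricks/hot/brick1
# Enable the change-time recorder and stretch the demote cycle to 1 hour
gluster volume set testvol features.ctr-enabled on
gluster volume set testvol cluster.tier-demote-frequency 3600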


Actual results:
===============
Writes to existing files and new file creates fail when the hot tier is full, even
though the cold tier is largely free.

Expected results:
=================
New writes and file creates should go to the cold tier when the hot tier is full,
or the relatively cold files in the hot tier should be flushed to accommodate new
files, irrespective of the demote frequency.

Additional info:


Work-Around:
==========
Wait for the next CTR promote/demote cycle to kick in
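
If waiting is not acceptable, one possible way (an assumption, not verified as part
of this report) is to temporarily lower the demote frequency so the migration daemon
runs sooner, then restore the original value; "testvol" is a placeholder:

# Temporarily shrink the demote interval so colder files are flushed sooner
gluster volume set testvol cluster.tier-demote-frequency 120
# ...wait for the demotion cycle to finish, then restore the original setting
gluster volume set testvol cluster.tier-demote-frequency 3600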




CLI Log:
==========
[root at nag-manual-nfsclient1 srt]# dd if=/dev/urandom of=junkrandom.120m bs=1024
count=120000
120000+0 records in
120000+0 records out
122880000 bytes (123 MB) copied, 30.1008 s, 4.1 MB/s
[root at nag-manual-nfsclient1 srt]# ll
total 1320000
-rw-r--r--. 1 root root 1228800000 Sep  2 22:10 junkrandom
-rw-r--r--. 1 root root  122880000 Sep  2 22:13 junkrandom.120m
[root at nag-manual-nfsclient1 srt]# du -sh *
1.2G    junkrandom
118M    junkrandom.120m
[root at nag-manual-nfsclient1 srt]# cp junkrandom.120m junkrandom.120m1
======== creating a large file (dd count=3000000, i.e. ~3GB requested) when there is hardly 35MB free
[root at nag-manual-nfsclient1 srt]# dd if=/dev/urandom of=bricklimit bs=1024
count=3000000
dd: writing `bricklimit': No space left on device
160892+0 records in
160891+0 records out
164752384 bytes (165 MB) copied, 35.6738 s, 4.6 MB/s
[root at nag-manual-nfsclient1 srt]# dd if=/dev/urandom of=bricklimit.1 bs=1024
count=3000000
dd: opening `bricklimit.1': No space left on device
[root at nag-manual-nfsclient1 srt]# 
[root at nag-manual-nfsclient1 srt]# 
[root at nag-manual-nfsclient1 srt]# 
[root at nag-manual-nfsclient1 srt]# ls -l
total 1475652
-rw-r--r--. 1 root root   47190016 Sep  2 22:17 bricklimit
-rw-r--r--. 1 root root 1228800000 Sep  2 22:10 junkrandom
-rw-r--r--. 1 root root  122880000 Sep  2 22:13 junkrandom.120m
-rw-r--r--. 1 root root  122880000 Sep  2 22:14 junkrandom.120m1
[root at nag-manual-nfsclient1 srt]# touch newf1
touch: cannot touch `newf1': No space left on device
[root at nag-manual-nfsclient1 srt]# touch newf1
touch: cannot touch `newf1': No space left on device
[root at nag-manual-nfsclient1 srt]# touch newf1
touch: cannot touch `newf1': No space left on device
[root at nag-manual-nfsclient1 srt]# du -sh *
35M    bricklimit
1.2G    junkrandom
118M    junkrandom.120m
118M    junkrandom.120m1
[root at nag-manual-nfsclient1 srt]#
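
For completeness, free space can be checked directly on the bricks to confirm that
only the hot tier is exhausted while the cold tier still has room; the brick paths
below are placeholders for this setup:

# On the hot tier node: expect usage close to 100%
df -h /bricks/hot/brick1
# On a cold tier node: expect plenty of free space
df -h /bricks/cold/brick1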
