[Bugs] [Bug 1219048] Data Tiering:Enabling quota command fails with "quota command failed : Commit failed on localhost"
bugzilla at redhat.com
Thu May 14 06:35:52 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1219048
Joseph Elwin Fernandes <josferna at redhat.com> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
         Status    |ASSIGNED                    |ON_QA
--- Comment #4 from Joseph Elwin Fernandes <josferna at redhat.com> ---
I tested this with glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild for Fedora 21,
updated on 13 May 2015:
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/fedora-21-x86_64/glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild/
And it works!
[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]# gluster v quota test enable
volume quota : success
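With quota enabled, a natural follow-up check is to set and list a limit. A minimal sketch, not part of the original report, assuming the same volume `test` and a hypothetical directory `/dir1` that already exists on the mounted volume:

```shell
# Set a 10 GB hard limit on /dir1 (the path is relative to the volume root)
gluster volume quota test limit-usage /dir1 10GB

# List the configured limits together with current usage
gluster volume quota test list
```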
Here is the volume info:
[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]# gluster volume info
Volume Name: test
Type: Tier
Volume ID: a64cdd30-aaaa-4692-8cb2-2f94659a4d13
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: rhs-srv-08:/home/ssd/s2
Brick2: rhs-srv-09:/home/ssd/s2
Brick3: rhs-srv-08:/home/ssd/s1
Brick4: rhs-srv-09:/home/ssd/s1
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: rhs-srv-09:/home/disk/d1
Brick6: rhs-srv-08:/home/disk/d1
Brick7: rhs-srv-09:/home/disk/d2
Brick8: rhs-srv-08:/home/disk/d2
Options Reconfigured:
features.inode-quota: on
features.quota: on
cluster.read-freq-threshold: 4
cluster.write-freq-threshold: 4
features.record-counters: on
performance.io-cache: off
performance.quick-read: off
cluster.tier-promote-frequency: 180
cluster.tier-demote-frequency: 180
features.ctr-enabled: on
performance.readdir-ahead: on
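The tiering- and quota-related options above are configured per volume with `gluster volume set`; for example, the promote/demote frequencies shown could have been set like this (a sketch using the values from the info output above):

```shell
# Run the tier promote/demote cycles every 180 seconds
gluster volume set test cluster.tier-promote-frequency 180
gluster volume set test cluster.tier-demote-frequency 180
```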
Note that quotad is running:
[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]# ps -ef | grep gluster
root 24472 1 0 May13 ? 00:00:02 /usr/sbin/glusterd -p /var/run/glusterd.pid
root 24682 1 0 00:24 ? 00:00:01 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f44ab838f2da2718654348889bbe6dfb.socket --xlator-option *replicate*.node-uuid=316a021d-44b9-4fc0-b454-5b3c68a927f8
root 25168 1 2 00:30 ? 00:00:02 /usr/sbin/glusterfs -s localhost --volfile-id test -l /var/log/glusterfs/quota-mount-test.log -p /var/run/gluster/test.pid --client-pid -5 /var/run/gluster/test/
root 25181 1 2 00:30 ? 00:00:02 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/gluster/e8a9003c9022266961a6f2768b238291.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
root 25236 24055 0 00:31 pts/0 00:00:00 grep --color=auto gluster
[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]#
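Besides ps, the daemons can also be checked through the CLI; a sketch, assuming the same volume (quotad shows up as "Quota Daemon" in the status output):

```shell
# Lists bricks plus auxiliary daemons such as the self-heal daemon and quotad
gluster volume status test
```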
Which build did you use to reproduce the issue?
Moving the bug back to QA, as it's fixed in the latest build.
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
More information about the Bugs mailing list