[Bugs] [Bug 1540376] New: Tiered volume performance degrades badly after a volume stop/ start or system restart.
bugzilla at redhat.com
Tue Jan 30 23:18:48 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1540376
Bug ID: 1540376
Summary: Tiered volume performance degrades badly after a
volume stop/start or system restart.
Product: GlusterFS
Version: 3.12
Component: tiering
Severity: high
Assignee: bugs at gluster.org
Reporter: jbyers at stonefly.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org
Tiered volume performance degrades badly after a volume
stop/start or system restart.
The degradation is very significant, leaving an SSD hot-tiered volume
at a fraction of the performance the HDD alone delivered before
tiering.
Stopping and starting the tiered volume triggers the problem;
stopping and starting the Gluster services does as well. Nothing in
the tier is being promoted or demoted: the volume starts empty, a
file is written, then read, then deleted, so the file(s) only ever
exist on the hot tier.
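Condensed from the full transcript below, the reproduction amounts to
the following (brick paths and the server address are this test
lab's; adjust for your environment):

# gluster volume create volume-1 transport tcp 192.168.101.226:/exports/brick-hdd/volume-1
# gluster volume start volume-1
# gluster volume tier volume-1 attach 192.168.101.226:/exports/brick-ssd/volume-1
# mount -t glusterfs 127.0.0.1:/volume-1 /samba/volume-1
  (write, read, and delete a test file: throughput is at tiered SSD speed)
# gluster volume stop volume-1
# gluster volume start volume-1
  (repeat the same test: throughput is now far below even the plain HDD)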
This affects GlusterFS FUSE mounts, and also NFSv3 NFS mounts.
The problem has been reproduced in two test lab environments.
The issue was first seen using GlusterFS 3.7.18, and retested
with the same result using GlusterFS 3.12.3.
I'm using the default tiering settings, no adjustments.
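To double-check that the defaults are in effect, something like the
following can be used (a sketch; 'gluster volume get <vol> all' may
not exist on older releases, in which case 'gluster volume info'
suffices):

# gluster volume get volume-1 all | grep -E 'tier|ctr'

The volume info output below shows only cluster.tier-mode: cache and
features.ctr-enabled: on, both of which attach-tier sets by itself.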
Nothing of any significance appears in the GlusterFS logs.
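For anyone reproducing this, a quick way to scan the logs for error-
and warning-level lines (assuming the stock /var/log/glusterfs
layout; the severity letter follows the timestamp in each log line):

# grep -E ' [EW] \[' /var/log/glusterfs/glusterd.log /var/log/glusterfs/bricks/*.log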
Summary:
Before SSD tiering, HDD performance on a FUSE mount was 130.87
MB/sec writes, 128.53 MB/sec reads.
After SSD tiering, performance on a FUSE mount was 199.99 MB/sec
writes, 257.28 MB/sec reads.
After a GlusterFS volume stop/start, SSD tiering performance on the
FUSE mount dropped to 35.81 MB/sec writes and 37.33 MB/sec reads, a
very significant reduction.
Detaching and reattaching the SSD tier restores the good
tiered performance.
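The full workaround sequence, as exercised at the end of this report:

# gluster volume tier volume-1 detach start
# gluster volume tier volume-1 detach status
  (repeat until the status shows "completed")
# gluster volume tier volume-1 detach commit
# gluster volume tier volume-1 attach 192.168.101.226:/exports/brick-ssd/volume-1

The detach commit warns about possible data loss, so wait for the
detach status to report completed before committing.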
Details below:
#####################################
### Create the volume and mounts.
#####################################
# gluster volume create volume-1 transport tcp
192.168.101.226:/exports/brick-hdd/volume-1
volume create: volume-1: success: please start the volume to access data
# gluster volume set volume-1 allow-insecure on
volume set: success
# gluster volume set volume-1 nfs.disable off
Gluster NFS is being deprecated in favor of NFS-Ganesha Enter "yes" to continue
using Gluster NFS (y/n) y
volume set: success
# gluster volume start volume-1
volume start: volume-1: success
# gluster volume info
Volume Name: volume-1
Type: Distribute
Volume ID: 24e958c7-b39f-441b-90eb-2120260ee8d1
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.101.226:/exports/brick-hdd/volume-1
Options Reconfigured:
server.allow-insecure: on
transport.address-family: inet
nfs.disable: off
# glusterfs --acl --volfile-server=127.0.0.1 --volfile-id=/volume-1
/samba/volume-1
# mount -o "soft,vers=3,tcp" 192.168.101.226:/volume-1 /mnt/volume-1
#####################################
### FUSE Mount read and write performance with HDD without tiering.
#####################################
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/dev/zero
of=/samba/volume-1/testfile.dat count=16384
time to transfer data was 16.409684 secs, 130.87 MB/sec
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/samba/volume-1/testfile.dat
of=/dev/null count=16384
time to transfer data was 16.707668 secs, 128.53 MB/sec
# gluster volume profile volume-1 info
Brick: 192.168.101.226:/exports/brick-hdd/volume-1
--------------------------------------------------
Cumulative Stats:
Block Size: 4096b+ 8192b+ 16384b+
No. of Reads: 0 0 0
No. of Writes: 3 2 4
Block Size: 32768b+ 65536b+ 131072b+
No. of Reads: 0 0 16384
No. of Writes: 16 16 49161
Block Size: 262144b+ 524288b+ 1048576b+
No. of Reads: 0 2 2047
No. of Writes: 11 25 2018
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 2 FORGET
0.00 0.00 us 0.00 us 0.00 us 4 RELEASE
0.00 23.00 us 23.00 us 23.00 us 1 STATFS
0.00 15.00 us 14.00 us 16.00 us 2 FLUSH
0.00 17.00 us 12.00 us 22.00 us 2 STAT
0.00 51.00 us 15.00 us 87.00 us 2 ACCESS
0.00 165.00 us 165.00 us 165.00 us 1 CREATE
0.00 399.00 us 399.00 us 399.00 us 1 OPEN
0.00 53.72 us 28.00 us 174.00 us 18 LOOKUP
0.06 154611.00 us 154611.00 us 154611.00 us 1 UNLINK
99.94 8174.57 us 56.00 us 227368.00 us 32766 WRITE
Duration: 377 seconds
Data Read: 4294967296 bytes
Data Written: 8589934592 bytes
#####################################
### NFS Mount read and write performance with HDD without tiering.
#####################################
# rm -f /samba/volume-1/testfile.dat /mnt/volume-1/testfile.dat
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# gluster volume profile volume-1 start clear
Starting volume profile on volume-1 has been successful
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/dev/zero of=/mnt/volume-1/testfile.dat
count=16384
time to transfer data was 41.936227 secs, 51.21 MB/sec
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/mnt/volume-1/testfile.dat of=/dev/null
count=16384
time to transfer data was 30.138106 secs, 71.25 MB/sec
# gluster volume profile volume-1 info
Brick: 192.168.101.226:/exports/brick-hdd/volume-1
--------------------------------------------------
Cumulative Stats:
Block Size: 4096b+ 8192b+ 16384b+
No. of Reads: 0 0 0
No. of Writes: 9 10 56
Block Size: 32768b+ 65536b+ 131072b+
No. of Reads: 0 0 16384
No. of Writes: 180 35 49190
Block Size: 262144b+ 524288b+ 1048576b+
No. of Reads: 0 4 4094
No. of Writes: 20 62 4016
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 5 RELEASE
0.00 24.00 us 23.00 us 25.00 us 2 STATFS
0.00 51.00 us 51.00 us 51.00 us 1 GETXATTR
0.00 33.60 us 15.00 us 87.00 us 5 ACCESS
0.00 35.17 us 12.00 us 82.00 us 6 STAT
0.00 168.00 us 165.00 us 171.00 us 2 CREATE
0.00 399.00 us 399.00 us 399.00 us 1 OPEN
0.00 57.31 us 23.00 us 183.00 us 26 LOOKUP
0.02 2799.59 us 14.00 us 236711.00 us 108 FLUSH
0.03 202899.00 us 154611.00 us 251187.00 us 2 UNLINK
31.06 190240.58 us 8753.00 us 632660.00 us 2049 READ
68.89 24641.36 us 35.00 us 20466767.00 us 35088 WRITE
Duration: 674 seconds
Data Read: 6442450944 bytes
Data Written: 10737418240 bytes
#####################################
### Attach the hot tier SSD disk.
#####################################
# gluster volume tier volume-1 attach
192.168.101.226:/exports/brick-ssd/volume-1
volume attach-tier: success
# gluster volume info
Volume Name: volume-1
Type: Tier
Volume ID: 24e958c7-b39f-441b-90eb-2120260ee8d1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 1
Brick1: 192.168.101.226:/exports/brick-ssd/volume-1
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 1
Brick2: 192.168.101.226:/exports/brick-hdd/volume-1
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
server.allow-insecure: on
transport.address-family: inet
nfs.disable: off
#####################################
### FUSE Mount read and write performance with HDD with SSD tiering.
#####################################
# rm -f /samba/volume-1/testfile.dat /mnt/volume-1/testfile.dat
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# gluster volume profile volume-1 start clear
Starting volume profile on volume-1 has been successful
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/dev/zero
of=/samba/volume-1/testfile.dat count=16384
time to transfer data was 10.737773 secs, 199.99 MB/sec
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/samba/volume-1/testfile.dat
of=/dev/null count=16384
time to transfer data was 8.346831 secs, 257.28 MB/sec
# gluster volume profile volume-1 info
Brick: 192.168.101.226:/exports/brick-ssd/volume-1
--------------------------------------------------
Cumulative Stats:
Block Size: 131072b+
No. of Reads: 16384
No. of Writes: 16384
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 1 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1 RELEASEDIR
0.00 18.00 us 18.00 us 18.00 us 1 FLUSH
0.00 44.82 us 30.00 us 90.00 us 11 LOOKUP
0.00 763.00 us 763.00 us 763.00 us 1 CREATE
0.46 29.09 us 23.00 us 366.00 us 16361 STAT
1.34 4664.81 us 14.00 us 123500.00 us 294 STATFS
11.71 732.17 us 234.00 us 2151.00 us 16384 READ
86.49 5410.15 us 146.00 us 245867.00 us 16384 WRITE
Duration: 158 seconds
Data Read: 2147483648 bytes
Data Written: 2147483648 bytes
Brick: 192.168.101.226:/exports/brick-hdd/volume-1
--------------------------------------------------
Cumulative Stats:
Block Size: 4096b+ 8192b+ 16384b+
No. of Reads: 0 0 0
No. of Writes: 9 10 56
Block Size: 32768b+ 65536b+ 131072b+
No. of Reads: 0 0 16384
No. of Writes: 180 35 49190
Block Size: 262144b+ 524288b+ 1048576b+
No. of Reads: 0 4 4094
No. of Writes: 20 62 4016
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 4 FORGET
0.00 0.00 us 0.00 us 0.00 us 5 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1 RELEASEDIR
0.00 51.00 us 51.00 us 51.00 us 1 GETXATTR
0.00 24.00 us 22.00 us 26.00 us 4 STATFS
0.00 33.60 us 15.00 us 87.00 us 5 ACCESS
0.00 35.17 us 12.00 us 82.00 us 6 STAT
0.00 168.00 us 165.00 us 171.00 us 2 CREATE
0.00 399.00 us 399.00 us 399.00 us 1 OPEN
0.00 1044.00 us 1044.00 us 1044.00 us 1 SETATTR
0.00 53.62 us 23.00 us 183.00 us 47 LOOKUP
0.00 3380.00 us 3380.00 us 3380.00 us 1 MKNOD
0.00 4686.00 us 39.00 us 9333.00 us 2 IPC
0.02 2799.59 us 14.00 us 236711.00 us 108 FLUSH
0.03 202899.00 us 154611.00 us 251187.00 us 2 UNLINK
31.06 190240.58 us 8753.00 us 632660.00 us 2049 READ
68.89 24641.36 us 35.00 us 20466767.00 us 35088 WRITE
Duration: 977 seconds
Data Read: 6442450944 bytes
Data Written: 10737418240 bytes
#####################################
### NFS Mount read and write performance with HDD with SSD tiering.
#####################################
# rm -f /samba/volume-1/testfile.dat /mnt/volume-1/testfile.dat
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# gluster volume profile volume-1 start clear
Starting volume profile on volume-1 has been successful
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/dev/zero of=/mnt/volume-1/testfile.dat
count=16384
time to transfer data was 33.214713 secs, 64.65 MB/sec
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/mnt/volume-1/testfile.dat of=/dev/null
count=16384
time to transfer data was 7.944743 secs, 270.30 MB/sec
# gluster volume profile volume-1 info
Brick: 192.168.101.226:/exports/brick-ssd/volume-1
--------------------------------------------------
Cumulative Stats:
Block Size: 4096b+ 8192b+ 16384b+
No. of Reads: 0 0 0
No. of Writes: 9 5 14
Block Size: 32768b+ 65536b+ 131072b+
No. of Reads: 0 0 16384
No. of Writes: 22 147 16643
Block Size: 262144b+ 524288b+ 1048576b+
No. of Reads: 0 2 2047
No. of Writes: 15 46 1954
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 1 FORGET
0.00 0.00 us 0.00 us 0.00 us 2 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1 RELEASEDIR
0.00 28.00 us 28.00 us 28.00 us 1 ACCESS
0.00 75.00 us 75.00 us 75.00 us 1 GETXATTR
0.00 50.14 us 30.00 us 110.00 us 14 LOOKUP
0.00 731.50 us 700.00 us 763.00 us 2 CREATE
0.00 142.94 us 13.00 us 2008.00 us 104 FLUSH
0.08 29.09 us 22.00 us 366.00 us 16364 STAT
14.47 93985.20 us 14.00 us 20325659.00 us 900 STATFS
24.67 7827.09 us 234.00 us 81162.00 us 18433 READ
60.78 18847.35 us 129.00 us 20634527.00 us 18855 WRITE
Duration: 361 seconds
Data Read: 4294967296 bytes
Data Written: 4294967296 bytes
Brick: 192.168.101.226:/exports/brick-hdd/volume-1
--------------------------------------------------
Cumulative Stats:
Block Size: 4096b+ 8192b+ 16384b+
No. of Reads: 0 0 0
No. of Writes: 9 10 56
Block Size: 32768b+ 65536b+ 131072b+
No. of Reads: 0 0 16384
No. of Writes: 180 35 49190
Block Size: 262144b+ 524288b+ 1048576b+
No. of Reads: 0 4 4094
No. of Writes: 20 62 4016
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 4 FORGET
0.00 0.00 us 0.00 us 0.00 us 5 RELEASE
0.00 0.00 us 0.00 us 0.00 us 1 RELEASEDIR
0.00 48.50 us 46.00 us 51.00 us 2 GETXATTR
0.00 22.00 us 14.00 us 26.00 us 6 STATFS
0.00 32.14 us 12.00 us 82.00 us 7 STAT
0.00 41.83 us 15.00 us 87.00 us 6 ACCESS
0.00 168.00 us 165.00 us 171.00 us 2 CREATE
0.00 399.00 us 399.00 us 399.00 us 1 OPEN
0.00 544.50 us 45.00 us 1044.00 us 2 SETATTR
0.00 53.53 us 23.00 us 183.00 us 51 LOOKUP
0.00 1765.00 us 150.00 us 3380.00 us 2 MKNOD
0.00 4766.50 us 38.00 us 9656.00 us 4 IPC
0.02 2799.59 us 14.00 us 236711.00 us 108 FLUSH
0.03 202899.00 us 154611.00 us 251187.00 us 2 UNLINK
31.06 190240.58 us 8753.00 us 632660.00 us 2049 READ
68.89 24641.36 us 35.00 us 20466767.00 us 35088 WRITE
Duration: 1180 seconds
Data Read: 6442450944 bytes
Data Written: 10737418240 bytes
#####################################
### Stop and start the GlusterFS volume.
#####################################
# gluster volume stop volume-1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n)
y
volume stop: volume-1: success
# gluster volume start volume-1
volume start: volume-1: success
# umount /samba/volume-1/
# umount /mnt/volume-1/
# mount -t glusterfs -o acl 127.0.0.1:/volume-1 /samba/volume-1/
# mount -o "soft,vers=3,tcp" 192.168.101.226:/volume-1 /mnt/volume-1
#####################################
### FUSE Mount read and write performance with HDD with SSD tiering after
volume stop/start.
#####################################
# rm -f /samba/volume-1/testfile.dat /mnt/volume-1/testfile.dat
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# gluster volume profile volume-1 start clear
Starting volume profile on volume-1 has been successful
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/dev/zero
of=/samba/volume-1/testfile.dat count=16384
time to transfer data was 59.973165 secs, 35.81 MB/sec
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/samba/volume-1/testfile.dat
of=/dev/null count=16384
time to transfer data was 57.523719 secs, 37.33 MB/sec
# gluster volume profile volume-1 info
Brick: 192.168.101.226:/exports/brick-hdd/volume-1
--------------------------------------------------
Cumulative Stats:
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.56 34.00 us 19.00 us 49.00 us 2 STATFS
0.71 85.00 us 85.00 us 85.00 us 1 SETATTR
6.31 36.19 us 24.00 us 131.00 us 21 LOOKUP
17.56 2116.00 us 2116.00 us 2116.00 us 1 MKNOD
74.87 2256.00 us 32.00 us 6098.00 us 4 IPC
Duration: 326 seconds
Data Read: 0 bytes
Data Written: 0 bytes
Brick: 192.168.101.226:/exports/brick-ssd/volume-1
--------------------------------------------------
Cumulative Stats:
Block Size: 131072b+
No. of Reads: 16384
No. of Writes: 16384
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 1 RELEASE
0.00 237.00 us 237.00 us 237.00 us 1 FLUSH
0.00 44.55 us 28.00 us 92.00 us 11 LOOKUP
0.00 7954.00 us 7954.00 us 7954.00 us 1 CREATE
0.10 40.04 us 22.00 us 386.00 us 16382 STAT
0.60 5850.17 us 11.00 us 195272.00 us 662 STATFS
16.67 6587.80 us 3201.00 us 16650.00 us 16384 READ
82.63 32659.94 us 3559.00 us 366309.00 us 16384 WRITE
Duration: 327 seconds
Data Read: 2147483648 bytes
Data Written: 2147483648 bytes
#####################################
### NFS Mount read and write performance with HDD with SSD tiering after volume
stop/start.
#####################################
# rm -f /samba/volume-1/testfile.dat /mnt/volume-1/testfile.dat
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# gluster volume profile volume-1 start clear
Starting volume profile on volume-1 has been successful
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/dev/zero of=/mnt/volume-1/testfile.dat
count=16384
time to transfer data was 59.720392 secs, 35.96 MB/sec
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/mnt/volume-1/testfile.dat of=/dev/null
count=16384
time to transfer data was 33.495813 secs, 64.11 MB/sec
# gluster volume profile volume-1 info
Brick: 192.168.101.226:/exports/brick-hdd/volume-1
--------------------------------------------------
Cumulative Stats:
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.09 21.00 us 21.00 us 21.00 us 1 STAT
0.20 47.00 us 47.00 us 47.00 us 1 GETXATTR
0.27 32.50 us 31.00 us 34.00 us 2 ACCESS
0.87 105.00 us 85.00 us 125.00 us 2 SETATTR
0.94 56.50 us 19.00 us 95.00 us 4 STATFS
3.90 37.56 us 24.00 us 131.00 us 25 LOOKUP
9.43 1135.00 us 154.00 us 2116.00 us 2 MKNOD
84.31 2537.38 us 32.00 us 7024.00 us 8 IPC
Duration: 543 seconds
Data Read: 0 bytes
Data Written: 0 bytes
Brick: 192.168.101.226:/exports/brick-ssd/volume-1
--------------------------------------------------
Cumulative Stats:
Block Size: 4096b+ 8192b+ 16384b+
No. of Reads: 0 0 0
No. of Writes: 21 4 67
Block Size: 32768b+ 65536b+ 131072b+
No. of Reads: 0 0 16384
No. of Writes: 214 296 16840
Block Size: 262144b+ 524288b+ 1048576b+
No. of Reads: 0 2 2047
No. of Writes: 94 99 1834
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 1 FORGET
0.00 0.00 us 0.00 us 0.00 us 2 RELEASE
0.00 31.00 us 31.00 us 31.00 us 1 ACCESS
0.00 61.00 us 61.00 us 61.00 us 1 GETXATTR
0.00 50.14 us 28.00 us 98.00 us 14 LOOKUP
0.00 7588.00 us 7222.00 us 7954.00 us 2 CREATE
0.01 2799.44 us 16.00 us 44718.00 us 87 FLUSH
0.03 40.04 us 22.00 us 386.00 us 16385 STAT
2.21 29997.91 us 11.00 us 4715716.00 us 1428 STATFS
28.28 29731.49 us 3201.00 us 697284.00 us 18433 READ
69.47 69158.05 us 2748.00 us 20388039.00 us 19469 WRITE
Duration: 544 seconds
Data Read: 4294967296 bytes
Data Written: 4294967296 bytes
#####################################
### Detach tier, performance goes back to HDD normal.
#####################################
# rm -f /samba/volume-1/testfile.dat /mnt/volume-1/testfile.dat
# gluster volume tier volume-1 detach start
volume detach tier start: success
ID: 224f1b4b-376a-4623-b901-5e429866fbd2
# gluster volume tier volume-1 detach status
volume detach tier status: success
Node        Rebalanced-files       size    scanned    failures    skipped       status    run time in h:m:s
---------   ----------------    -------    -------    --------    -------    ---------    -----------------
localhost                  1      2.0GB          2           0          0    completed              0:00:18
# gluster volume tier volume-1 detach commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach tier commit: success
#####################################
### Detached tier, performance goes back to HDD normal.
#####################################
# rm -f /samba/volume-1/testfile.dat /mnt/volume-1/testfile.dat
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/dev/zero
of=/samba/volume-1/testfile.dat count=16384
time to transfer data was 16.588293 secs, 129.46 MB/sec
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/samba/volume-1/testfile.dat
of=/dev/null count=16384
time to transfer data was 16.504866 secs, 130.11 MB/sec
#####################################
### Reattach tier, performance goes back to SSD tiered normal.
#####################################
# gluster volume tier volume-1 attach
192.168.101.226:/exports/brick-ssd/volume-1
volume attach-tier: success
#####################################
### Reattached tier, performance goes back to SSD tiered normal.
#####################################
# rm -f /samba/volume-1/testfile.dat /mnt/volume-1/testfile.dat
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/dev/zero
of=/samba/volume-1/testfile.dat count=16384
time to transfer data was 10.512222 secs, 204.28 MB/sec
# sync;sync; echo 3 > /proc/sys/vm/drop_caches
# sgp_dd time=1 thr=1 bs=128K bpt=1 if=/samba/volume-1/testfile.dat
of=/dev/null count=16384
time to transfer data was 8.374527 secs, 256.43 MB/sec