<div dir="ltr">Hi,<div><br></div><div>while doing some tests to compare performance, I've found some weird results. I've seen this in different tests, but probably the clearest and easiest to reproduce is using the smallfile tool to create files.</div><div><br></div>The test command is:<br><br><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><font face="monospace, monospace"># python smallfile_cli.py --operation create --files-per-dir 100 --file-size 32768 --threads 16 --files 256 --top <mountpoint> --stonewall no</font></blockquote><div><br></div><div>I've run this test 5 times sequentially, starting from the same initial conditions each time (at least as far as I can tell): bricks cleared, all gluster processes stopped, volume destroyed and recreated, caches emptied.</div><div><br></div><div>This is the data I've obtained for each execution:</div><div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><font face="monospace, monospace">Time   us   sy   ni   id   wa   hi   si   st   read    write   use<br></font><font face="monospace, monospace"> 435  1.80  3.70  0.00 81.62 11.06  0.00  0.00  0.00  32.931  608715.575  97.632<br></font><font face="monospace, monospace"> 450  1.67  3.62  0.00 80.67 12.19  0.00  0.00  0.00  30.989  589078.308  97.714<br></font><font face="monospace, monospace"> 425  1.74  3.75  0.00 81.85 10.76  0.00  0.00  0.00  37.588  622034.812  97.706<br></font><font face="monospace, monospace"> 320  2.47  5.06  0.00 82.84  7.75  0.00  0.00  0.00  46.406  828637.359  96.891<br></font><font face="monospace, monospace"> 365  2.19  4.44  0.00 84.45  7.12  0.00  0.00  0.00  45.822  734566.685  97.466</font></blockquote><br><div>Time is in seconds. us, sy, ni, id, wa, hi, si and st are the CPU times, as reported by top. read and write are the disk throughput in KiB/s. use is the disk utilization percentage.</div><div><br></div><div>Based on this we can see that there's a big difference between the best and the worst cases. 
But what seems even more relevant is that, in the faster runs, disk utilization and CPU wait time were actually a bit lower.</div><div><br></div>The disk is an NVMe and I used a recent commit from master (2b86da69). The volume is a replica 3 with 3 bricks.<div><br></div><div>I'm not sure what could be causing this. Any ideas? Can anyone try to reproduce it, to see whether it's a problem in my environment or a common problem?</div><div><br></div><div>Thanks,</div><div><br></div><div>Xavi</div></div>
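<div><br></div><div>For reference, a quick sketch of how the run-to-run spread can be quantified from the five wall-clock times above (this only summarizes the numbers already reported; it adds no new measurements):</div>

```python
import statistics

# Wall-clock times (seconds) of the five runs reported above
times = [435, 450, 425, 320, 365]

mean = statistics.mean(times)                           # 399.0 s
stdev = statistics.pstdev(times)                        # ~48.9 s (population std dev)
cv = stdev / mean * 100                                 # ~12.3 % run-to-run variation
spread = (max(times) - min(times)) / min(times) * 100   # best vs worst: ~40.6 %

print(f"mean={mean:.0f}s stdev={stdev:.1f}s cv={cv:.1f}% spread={spread:.1f}%")
```

<div>A ~40% gap between the fastest and slowest run, under what should be identical initial conditions, is what makes the results look suspicious.</div>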