From gluster-jenkins at redhat.com Mon Oct 3 01:18:08 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Mon, 3 Oct 2022 01:18:08 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 03/10/2022 Test Status: PASS (4.64%)
Message-ID: <802530828.129.1664759888892@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20220930.fb6e641-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               15476        15692     1
ls-l                229378       237142     3
chmod                24180        24575     1
stat                 35121        35699     1
read                 29017        29392     1
append               13818        13805     0
rename                 962          991     3
delete-renamed       22363        22603     1
mkdir                 3180         3258     2
rmdir                 2652         3584    35
cleanup               9525         9829     3
=================================================================

CPU Usage (%) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            128.85   126.39    -1.95     322.89   319.21    -1.15
ls-l                9.07     9.29     2.37      20.47    20.23    -1.19
chmod             107.04   104.04    -2.88     273.72   273.25    -0.17
stat               86.60    87.01     0.47     212.06   212.76     0.33
read               62.01    61.39    -1.01     227.10   218.20    -4.08
append            132.05   127.97    -3.19     292.08   285.76    -2.21
rename             40.95    40.80    -0.37      87.01    86.17    -0.97
delete-renamed    187.21   183.65    -1.94     332.34   332.86     0.16
mkdir             235.22   231.92    -1.42     392.84   391.92    -0.23
rmdir             233.31   241.39     3.35     396.89   384.48    -3.23
cleanup            79.43    78.77    -0.84     208.93   202.30    -3.28
-----------------------------------------------------------------
NOTE: CPU usage is per brick process, averaged across the servers and clients.

Memory Usage (MB) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            564.94   596.65     5.31      69.71    72.44     3.77
ls-l              564.88   596.66     5.33      70.47    72.86     3.28
chmod             564.88   596.67     5.33      78.62    78.59    -0.04
stat              564.87   596.68     5.33      79.44    78.78    -0.84
read              564.86   596.68     5.33      79.98    79.16    -1.04
append            564.85   596.68     5.33      79.10    78.34    -0.97
rename            564.90   596.69     5.33      78.77    78.44    -0.42
delete-renamed    564.88   596.69     5.33      78.42    78.15    -0.35
mkdir             568.25   597.09     4.83      82.06    81.42    -0.79
rmdir             566.11   598.54     5.42      77.90    77.76    -0.18
cleanup           565.88   598.53     5.46      66.20    66.08    -0.18
-----------------------------------------------------------------
NOTE: Memory usage is per brick process, averaged across the servers and clients.
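For reference, the per-FOP "Baseline vs DailyBuild" column and the headline "PASS (x.xx%)" figure can be reproduced from the published numbers. The sketch below is reverse-engineered from this report's data, not the actual report generator:

    # Per-FOP throughput change of the daily build relative to the baseline,
    # truncated to an integer (int() truncates toward zero, which matches
    # e.g. rmdir: (3584 - 2652) / 2652 * 100 = 35.1 -> 35).
    baseline = {"create": 15476, "ls-l": 229378, "chmod": 24180,
                "stat": 35121, "read": 29017, "append": 13818,
                "rename": 962, "delete-renamed": 22363, "mkdir": 3180,
                "rmdir": 2652, "cleanup": 9525}
    daily = {"create": 15692, "ls-l": 237142, "chmod": 24575,
             "stat": 35699, "read": 29392, "append": 13805,
             "rename": 991, "delete-renamed": 22603, "mkdir": 3258,
             "rmdir": 3584, "cleanup": 9829}

    delta = {fop: int((daily[fop] - baseline[fop]) / baseline[fop] * 100)
             for fop in baseline}

    # The headline figure is the mean of the truncated per-FOP deltas:
    # (1+3+1+1+1+0+3+1+2+35+3) / 11 = 4.64 for the 03/10 run above.
    overall = sum(delta.values()) / len(delta)
    print(f"Test Status: PASS ({overall:.2f}%)")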
From sajmoham at redhat.com Mon Oct 3 02:32:49 2022
From: sajmoham at redhat.com (sajmoham at redhat.com)
Date: Mon, 03 Oct 2022 02:32:49 +0000
Subject: [Gluster-devel] Gluster Code Metrics Weekly Report
Message-ID: <000000000000da4db305ea1828c0@google.com>

Gluster Code Metrics

Metric        Value
Clang Scan    62
Coverity      16
Line Cov      70.9 %
Func Cov      84.7 %

Trend Graph
Check the latest run: Coverity | Clang | Code Coverage
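The Line Cov and Func Cov rows are line and function coverage percentages for the test suite. A minimal sketch of how such figures are commonly produced for a gcov-instrumented build using lcov; the working directory and the exact pipeline the Jenkins job runs are assumptions:

    # Capture coverage counters from a gcov-instrumented build tree and
    # print the line/function summary that lcov reports.
    import subprocess

    subprocess.run(["lcov", "--capture", "--directory", ".",
                    "--output-file", "coverage.info"], check=True)
    summary = subprocess.run(["lcov", "--summary", "coverage.info"],
                             capture_output=True, text=True, check=True)
    # lcov prints lines such as "lines......: 70.9% (...)" and
    # "functions..: 84.7% (...)"
    for line in summary.stdout.splitlines():
        if "lines" in line or "functions" in line:
            print(line.strip())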
From gluster-jenkins at redhat.com Tue Oct 4 01:18:55 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Tue, 4 Oct 2022 01:18:55 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 04/10/2022 Test Status: PASS (3.82%)
Message-ID: <172371398.133.1664846335826@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20220930.fb6e641-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               15476        15665     1
ls-l                229378       229915     0
chmod                24180        24346     0
stat                 35121        35241     0
read                 29017        29173     0
append               13818        13871     0
rename                 962          998     3
delete-renamed       22363        22766     1
mkdir                 3180         3251     2
rmdir                 2652         3562    34
cleanup               9525         9638     1
=================================================================

CPU Usage (%) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            128.85   127.07    -1.40     322.89   318.38    -1.42
ls-l                9.07     8.90    -1.91      20.47    20.51     0.20
chmod             107.04   103.50    -3.42     273.72   271.39    -0.86
stat               86.60    86.03    -0.66     212.06   212.75     0.32
read               62.01    61.10    -1.49     227.10   216.25    -5.02
append            132.05   128.17    -3.03     292.08   287.88    -1.46
rename             40.95    40.53    -1.04      87.01    86.59    -0.49
delete-renamed    187.21   183.29    -2.14     332.34   332.39     0.02
mkdir             235.22   229.50    -2.49     392.84   392.48    -0.09
rmdir             233.31   237.90     1.93     396.89   384.16    -3.31
cleanup            79.43    75.37    -5.39     208.93   202.03    -3.42
-----------------------------------------------------------------
NOTE: CPU usage is per brick process, averaged across the servers and clients.

Memory Usage (MB) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            564.94   596.43     5.28      69.71    72.33     3.62
ls-l              564.88   596.44     5.29      70.47    72.69     3.05
chmod             564.88   596.44     5.29      78.62    78.30    -0.41
stat              564.87   596.44     5.29      79.44    78.82    -0.79
read              564.86   596.44     5.29      79.98    79.58    -0.50
append            564.85   596.44     5.30      79.10    78.66    -0.56
rename            564.90   596.44     5.29      78.77    78.34    -0.55
delete-renamed    564.88   596.44     5.29      78.42    77.93    -0.63
mkdir             568.25   597.08     4.83      82.06    81.32    -0.91
rmdir             566.11   598.64     5.43      77.90    77.59    -0.40
cleanup           565.88   598.63     5.47      66.20    65.89    -0.47
-----------------------------------------------------------------
NOTE: Memory usage is per brick process, averaged across the servers and clients.
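The "Tool: smallfile" entries refer to the smallfile distributed metadata/I-O benchmark, whose operation names match the FOPs column above. A sketch of driving one run per FOP; thread count, file count, file size (KB), and the mount point are illustrative guesses, as the reports do not publish the workload parameters:

    # Run each FOP from the report once against a mounted Gluster volume.
    import subprocess

    FOPS = ["create", "ls-l", "chmod", "stat", "read", "append", "rename",
            "delete-renamed", "mkdir", "rmdir", "cleanup"]

    for op in FOPS:
        subprocess.run(["python3", "smallfile_cli.py",
                        "--top", "/mnt/glustervol",  # FUSE mount (placeholder)
                        "--operation", op,
                        "--threads", "8",
                        "--files", "10000",
                        "--file-size", "64"],        # in KB
                       check=True)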
From gluster-jenkins at redhat.com Wed Oct 5 01:18:54 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Wed, 5 Oct 2022 01:18:54 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 05/10/2022 Test Status: PASS (4.64%)
Message-ID: <835910181.137.1664932734795@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20220930.fb6e641-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               15476        15876     2
ls-l                229378       233392     1
chmod                24180        24548     1
stat                 35121        35563     1
read                 29017        29501     1
append               13818        14187     2
rename                 962          992     3
delete-renamed       22363        22722     1
mkdir                 3180         3266     2
rmdir                 2652         3589    35
cleanup               9525         9730     2
=================================================================

CPU Usage (%) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            128.85   130.90     1.57     322.89   320.25    -0.82
ls-l                9.07     8.86    -2.37      20.47    19.89    -2.92
chmod             107.04   103.91    -3.01     273.72   272.84    -0.32
stat               86.60    85.96    -0.74     212.06   211.62    -0.21
read               62.01    61.35    -1.08     227.10   216.81    -4.75
append            132.05   130.48    -1.20     292.08   288.93    -1.09
rename             40.95    40.66    -0.71      87.01    86.22    -0.92
delete-renamed    187.21   183.41    -2.07     332.34   330.37    -0.60
mkdir             235.22   230.08    -2.23     392.84   390.12    -0.70
rmdir             233.31   238.32     2.10     396.89   384.41    -3.25
cleanup            79.43    79.62     0.24     208.93   203.98    -2.43
-----------------------------------------------------------------
NOTE: CPU usage is per brick process, averaged across the servers and clients.

Memory Usage (MB) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            564.94   597.58     5.46      69.71    71.70     2.78
ls-l              564.88   597.60     5.48      70.47    72.29     2.52
chmod             564.88   597.60     5.48      78.62    78.33    -0.37
stat              564.87   597.60     5.48      79.44    78.87    -0.72
read              564.86   597.60     5.48      79.98    79.54    -0.55
append            564.85   597.60     5.48      79.10    78.45    -0.83
rename            564.90   597.60     5.47      78.77    78.31    -0.59
delete-renamed    564.88   597.60     5.48      78.42    77.87    -0.71
mkdir             568.25   598.00     4.97      82.06    81.32    -0.91
rmdir             566.11   599.46     5.56      77.90    77.66    -0.31
cleanup           565.88   599.45     5.60      66.20    65.99    -0.32
-----------------------------------------------------------------
NOTE: Memory usage is per brick process, averaged across the servers and clients.

From gluster-jenkins at redhat.com Thu Oct 6 01:17:55 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Thu, 6 Oct 2022 01:17:55 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 06/10/2022 Test Status: PASS (4.00%)
Message-ID: <125746219.140.1665019075305@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20220930.fb6e641-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               15476        15793     2
ls-l                229378       232965     1
chmod                24180        24485     1
stat                 35121        35375     0
read                 29017        29333     1
append               13818        13950     0
rename                 962          999     3
delete-renamed       22363        22671     1
mkdir                 3180         3239     1
rmdir                 2652         3540    33
cleanup               9525         9689     1
=================================================================

CPU Usage (%) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            128.85   131.47     1.99     322.89   327.84     1.51
ls-l                9.07     8.40    -7.98      20.47    19.92    -2.76
chmod             107.04   103.31    -3.61     273.72   273.73     0.00
stat               86.60    85.68    -1.07     212.06   212.46     0.19
read               62.01    61.06    -1.56     227.10   215.57    -5.35
append            132.05   128.50    -2.76     292.08   287.25    -1.68
rename             40.95    40.50    -1.11      87.01    86.53    -0.55
delete-renamed    187.21   181.68    -3.04     332.34   333.57     0.37
mkdir             235.22   228.45    -2.96     392.84   390.79    -0.52
rmdir             233.31   232.71    -0.26     396.89   381.65    -3.99
cleanup            79.43    75.20    -5.63     208.93   203.16    -2.84
-----------------------------------------------------------------
NOTE: CPU usage is per brick process, averaged across the servers and clients.

Memory Usage (MB) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            564.94   597.79     5.50      69.71    71.78     2.88
ls-l              564.88   597.79     5.51      70.47    72.21     2.41
chmod             564.88   597.80     5.51      78.62    78.46    -0.20
stat              564.87   597.81     5.51      79.44    78.84    -0.76
read              564.86   597.83     5.51      79.98    79.38    -0.76
append            564.85   597.83     5.52      79.10    78.57    -0.67
rename            564.90   597.84     5.51      78.77    78.51    -0.33
delete-renamed    564.88   597.84     5.51      78.42    78.04    -0.49
mkdir             568.25   598.15     5.00      82.06    81.57    -0.60
rmdir             566.11   599.38     5.55      77.90    77.82    -0.10
cleanup           565.88   599.38     5.59      66.20    65.95    -0.38
-----------------------------------------------------------------
NOTE: Memory usage is per brick process, averaged across the servers and clients.
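All of the Smallfile runs above report "Volume type: Replica-3" with no volume options set. A sketch of provisioning such a volume with the standard gluster CLI; the volume name, host names, and brick paths are placeholders:

    # Create and start a 3-way replicated volume across three servers.
    import subprocess

    bricks = [f"server{i}:/bricks/brick1/perfvol" for i in (1, 2, 3)]
    subprocess.run(["gluster", "volume", "create", "perfvol",
                    "replica", "3", *bricks], check=True)
    subprocess.run(["gluster", "volume", "start", "perfvol"], check=True)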
From gluster-jenkins at redhat.com Fri Oct 7 04:27:04 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 7 Oct 2022 04:27:04 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 07/10/2022 Test Status: PASS (4.09%)
Message-ID: <797206356.144.1665116824611@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221006.9865145-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               15476        15796     2
ls-l                229378       228727     0
chmod                24180        24336     0
stat                 35121        34944     0
read                 29017        29328     1
append               13818        13857     0
rename                 962          997     3
delete-renamed       22363        22798     1
mkdir                 3180         3275     2
rmdir                 2652         3548    33
cleanup               9525         9813     3
=================================================================

CPU Usage (%) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            128.85   126.92    -1.52     322.89   331.71     2.66
ls-l                9.07     8.72    -4.01      20.47     1.51  -1255.63
chmod             107.04   103.48    -3.44     273.72   270.95    -1.02
stat               86.60    84.87    -2.04     212.06   215.53     1.61
read               62.01    61.01    -1.64     227.10   207.75    -9.31
append            132.05   127.93    -3.22     292.08   291.91    -0.06
rename             40.95    40.75    -0.49      87.01    86.00    -1.17
delete-renamed    187.21   183.10    -2.24     332.34   330.04    -0.70
mkdir             235.22   230.52    -2.04     392.84   389.37    -0.89
rmdir             233.31   234.38     0.46     396.89   384.27    -3.28
cleanup            79.43    77.62    -2.33     208.93   205.15    -1.84
-----------------------------------------------------------------
NOTE: CPU usage is per brick process, averaged across the servers and clients.

Memory Usage (MB) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            564.94   597.36     5.43      69.71    69.26    -0.65
ls-l              564.88   597.37     5.44      70.47    69.64    -1.19
chmod             564.88   597.37     5.44      78.62    75.59    -4.01
stat              564.87   597.37     5.44      79.44    76.15    -4.32
read              564.86   597.37     5.44      79.98    76.78    -4.17
append            564.85   597.37     5.44      79.10    75.86    -4.27
rename            564.90   597.37     5.44      78.77    76.22    -3.35
delete-renamed    564.88   597.37     5.44      78.42    75.75    -3.52
mkdir             568.25   597.60     4.91      82.06    78.82    -4.11
rmdir             566.11   599.13     5.51      77.90    75.37    -3.36
cleanup           565.88   599.12     5.55      66.20    63.34    -4.52
-----------------------------------------------------------------
NOTE: Memory usage is per brick process, averaged across the servers and clients.
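Note the ls-l client CPU entry of -1255.63 above: the CPU and memory delta columns appear to be computed relative to the current value rather than the baseline (reverse-engineered from the published rows), so a collapse in the current figure (1.51 vs 20.47 here) inflates the magnitude; relative to the baseline the same drop would read about -93%. A sketch of the apparent convention:

    # "Base vs Current" for the CPU and memory tables, normalized by the
    # current value. This reproduces the published rows, e.g.
    # (596.65 - 564.94) / 596.65 * 100 = 5.31 for create/server memory.
    def base_vs_current(base: float, current: float) -> float:
        return round((current - base) / current * 100, 2)

    print(base_vs_current(128.85, 126.39))  # -1.95 (create server CPU, 03/10)
    print(base_vs_current(20.47, 1.51))     # -1255.63 (ls-l client CPU above)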
From gluster-jenkins at redhat.com Fri Oct 7 19:48:54 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 7 Oct 2022 19:48:54 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 08/10/2022 Test Status: PASS (4.55%)
Message-ID: <2034774647.146.1665172134658@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221006.9865145-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               15476        15748     1
ls-l                229378       233649     1
chmod                24180        24402     0
stat                 35121        35109     0
read                 29017        29447     1
append               13818        14014     1
rename                 962         1000     3
delete-renamed       22363        22870     2
mkdir                 3180         3287     3
rmdir                 2652         3588    35
cleanup               9525         9825     3
=================================================================

CPU Usage (%) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            128.85   131.90     2.31     322.89   329.76     2.08
ls-l                9.07     8.91    -1.80      20.47     1.46  -1302.05
chmod             107.04   103.94    -2.98     273.72   270.78    -1.09
stat               86.60    85.55    -1.23     212.06   215.99     1.82
read               62.01    61.49    -0.85     227.10   207.67    -9.36
append            132.05   128.67    -2.63     292.08   294.49     0.82
rename             40.95    40.56    -0.96      87.01    86.36    -0.75
delete-renamed    187.21   184.09    -1.69     332.34   332.06    -0.08
mkdir             235.22   231.09    -1.79     392.84   396.77     0.99
rmdir             233.31   238.21     2.06     396.89   384.06    -3.34
cleanup            79.43    81.08     2.04     208.93   205.21    -1.81
-----------------------------------------------------------------
NOTE: CPU usage is per brick process, averaged across the servers and clients.

Memory Usage (MB) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            564.94   598.05     5.54      69.71    68.90    -1.18
ls-l              564.88   598.06     5.55      70.47    69.39    -1.56
chmod             564.88   598.06     5.55      78.62    75.24    -4.49
stat              564.87   598.06     5.55      79.44    75.77    -4.84
read              564.86   598.06     5.55      79.98    76.42    -4.66
append            564.85   598.07     5.55      79.10    75.33    -5.00
rename            564.90   598.07     5.55      78.77    75.79    -3.93
delete-renamed    564.88   598.07     5.55      78.42    75.47    -3.91
mkdir             568.25   598.45     5.05      82.06    78.24    -4.88
rmdir             566.11   599.92     5.64      77.90    74.65    -4.35
cleanup           565.88   599.93     5.68      66.20    63.26    -4.65
-----------------------------------------------------------------
NOTE: Memory usage is per brick process, averaged across the servers and clients.

From gluster-jenkins at redhat.com Fri Oct 7 21:02:45 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 7 Oct 2022 21:02:45 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Disperse] Performance report for Gluster Upstream - 08/10/2022 Test Status: PASS (11.45%)
Message-ID: <1433127569.148.1665176565395@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221006.9865145-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Disperse
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               20121        20380     1
ls-l                137612       140043     1
chmod                32549        32632     0
stat                 76100        75058    -1
read                 19584        19644     0
append               19598        20379     3
rename                1005         1035     2
delete-renamed       24506        24130    -1
mkdir                 2683         2708     0
rmdir                 2447         5415   121
cleanup              21294        21267     0
=================================================================

From gluster-jenkins at redhat.com Fri Oct 7 21:50:36 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 7 Oct 2022 21:50:36 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Replica-3] Performance report for Gluster Upstream - 08/10/2022 Test Status: PASS (2.75%)
Message-ID: <1589466135.150.1665179436184@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221006.9865145-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
random-write           654          688     5
random-read           1802         1870     3
sequential-read       6186         6433     3
sequential-write      2451         2447     0
=================================================================
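The Largefile reports use "Tool: fio", and the four FOPs map directly onto fio's basic rw modes. A sketch of an equivalent run; block size, file size, ioengine, and mount point are illustrative, since the actual fio job file is not published with these reports:

    # One fio job per largefile FOP against the mounted volume.
    import subprocess

    RW_MODES = {"random-write": "randwrite", "random-read": "randread",
                "sequential-read": "read", "sequential-write": "write"}

    for fop, rw in RW_MODES.items():
        subprocess.run(["fio", f"--name={fop}",
                        "--directory=/mnt/glustervol",  # placeholder mount
                        f"--rw={rw}",
                        "--bs=1M", "--size=4G",
                        "--ioengine=libaio", "--direct=1"],
                       check=True)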
From gluster-jenkins at redhat.com Fri Oct 7 22:30:15 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 7 Oct 2022 22:30:15 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Disperse] Performance report for Gluster Upstream - 08/10/2022 Test Status: PASS (1.50%)
Message-ID: <102872244.152.1665181815401@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221006.9865145-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Disperse
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
random-write          1067         1096     2
random-read           1334         1368     2
sequential-read       6631         6778     2
sequential-write      4855         4855     0
=================================================================

From gluster-jenkins at redhat.com Fri Oct 7 23:17:01 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 7 Oct 2022 23:17:01 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Replica-3 with Shard] Performance report for Gluster Upstream - 08/10/2022 Test Status: PASS (0.00%)
Message-ID: <1959936700.156.1665184621983@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221006.9865145-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Replica-3
Volume Option: Shard

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
random-write          1746         1778     1
random-read           2779         2738    -1
sequential-read       7249         7220     0
sequential-write      2351         2364     0
=================================================================
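The "Volume Option: Shard" run differs from the plain Replica-3 largefile run only in enabling the shard translator before testing, which splits large files into fixed-size pieces spread across bricks. A sketch using the standard volume options; 64MB is glusterfs's default shard block size and is shown only for illustration:

    # Enable sharding on an existing volume.
    import subprocess

    subprocess.run(["gluster", "volume", "set", "perfvol",
                    "features.shard", "on"], check=True)
    subprocess.run(["gluster", "volume", "set", "perfvol",
                    "features.shard-block-size", "64MB"], check=True)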
From gluster-jenkins at redhat.com Mon Oct 10 01:18:22 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Mon, 10 Oct 2022 01:18:22 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 10/10/2022 Test Status: PASS (4.18%)
Message-ID: <1607861821.164.1665364702854@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221006.9865145-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               15476        15793     2
ls-l                229378       231333     0
chmod                24180        24488     1
stat                 35121        35305     0
read                 29017        29413     1
append               13818        13919     0
rename                 962          996     3
delete-renamed       22363        22605     1
mkdir                 3180         3239     1
rmdir                 2652         3559    34
cleanup               9525         9824     3
=================================================================

CPU Usage (%) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            128.85   129.56     0.55     322.89   328.57     1.73
ls-l                9.07     8.78    -3.30      20.47     1.46  -1302.05
chmod             107.04   104.13    -2.79     273.72   273.48    -0.09
stat               86.60    86.03    -0.66     212.06   219.33     3.31
read               62.01    61.41    -0.98     227.10   207.95    -9.21
append            132.05   128.81    -2.52     292.08   294.45     0.80
rename             40.95    40.73    -0.54      87.01    86.80    -0.24
delete-renamed    187.21   183.84    -1.83     332.34   333.44     0.33
mkdir             235.22   231.49    -1.61     392.84   395.52     0.68
rmdir             233.31   241.05     3.21     396.89   381.82    -3.95
cleanup            79.43    78.47    -1.22     208.93   204.38    -2.23
-----------------------------------------------------------------
NOTE: CPU usage is per brick process, averaged across the servers and clients.

Memory Usage (MB) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            564.94   596.45     5.28      69.71    69.37    -0.49
ls-l              564.88   596.46     5.29      70.47    69.93    -0.77
chmod             564.88   596.47     5.30      78.62    76.03    -3.41
stat              564.87   596.48     5.30      79.44    76.31    -4.10
read              564.86   596.48     5.30      79.98    77.04    -3.82
append            564.85   596.48     5.30      79.10    76.15    -3.87
rename            564.90   596.52     5.30      78.77    76.17    -3.41
delete-renamed    564.88   596.53     5.31      78.42    75.63    -3.69
mkdir             568.25   596.86     4.79      82.06    79.17    -3.65
rmdir             566.11   598.22     5.37      77.90    75.59    -3.06
cleanup           565.88   598.24     5.41      66.20    64.08    -3.31
-----------------------------------------------------------------
NOTE: Memory usage is per brick process, averaged across the servers and clients.

From sajmoham at redhat.com Mon Oct 10 02:32:44 2022
From: sajmoham at redhat.com (sajmoham at redhat.com)
Date: Mon, 10 Oct 2022 02:32:44 +0000
Subject: [Gluster-devel] Gluster Code Metrics Weekly Report
Message-ID: <000000000000783afd05eaa4f9b7@google.com>

Gluster Code Metrics

Metric        Value
Clang Scan    62
Coverity      16
Line Cov
Func Cov

Trend Graph
Check the latest run: Coverity | Clang | Code Coverage

From jstrunk at redhat.com Mon Oct 10 12:04:53 2022
From: jstrunk at redhat.com (jstrunk at redhat.com)
Date: Mon, 10 Oct 2022 12:04:53 +0000
Subject: [Gluster-devel] Updated invitation: Gluster Community Meeting @ Monthly from 05:00 to 06:00 on the second Tuesday (EDT) (gluster-devel@gluster.org)
Message-ID: <000000000000a04cf905eaacf721@google.com>

This event has been updated.

Gluster Community Meeting
When: Monthly from 05:00 to 06:00 on the second Tuesday, Eastern Time - New York
Bridge: meet.google.com/cpu-eiue-hvk
Join by phone: (US) +1 574-400-8405, PIN: 291845177
Description: UPDATED THE GOOGLE MEET LINK - meet.google.com/cpu-eiue-hvk
Schedule: Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Meeting notes: https://docs.google.com/document/d/1gnan3tNRsv09wGyADjhxnWFxyO4vA6VkkVzEF5kTIb0/edit
Minutes: https://hackmd.io/@wUUmx3WsTRerQyHUZbuy6A/SkR6Otqsq/edit
Organizer: nladha at redhat.com
Your attendance is optional.

From jstrunk at redhat.com Mon Oct 10 12:04:53 2022
From: jstrunk at redhat.com (jstrunk at redhat.com)
Date: Mon, 10 Oct 2022 12:04:53 +0000
Subject: [Gluster-devel] Updated invitation: Gluster Community Meeting @ Monthly from 05:00 to 06:00 on the second Tuesday from Tue Jul 12 to Mon Oct 10 (EDT) (gluster-devel@gluster.org)
Message-ID: <000000000000a52b0005eaacf721@google.com>

This event has been updated (changed: time).

Gluster Community Meeting
When: Monthly from 05:00 to 06:00 on the second Tuesday, from Tuesday Jul 12 to Monday Oct 10, Eastern Time - New York
Bridge: meet.google.com/cpu-eiue-hvk
Join by phone: (US) +1 574-400-8405, PIN: 291845177
Description: UPDATED THE GOOGLE MEET LINK - meet.google.com/cpu-eiue-hvk
Schedule: Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Meeting notes: https://docs.google.com/document/d/1gnan3tNRsv09wGyADjhxnWFxyO4vA6VkkVzEF5kTIb0/edit
Minutes: https://hackmd.io/@wUUmx3WsTRerQyHUZbuy6A/SkR6Otqsq/edit
Organizer: nladha at redhat.com
Your attendance is optional.

From gluster-jenkins at redhat.com Tue Oct 11 01:18:11 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Tue, 11 Oct 2022 01:18:11 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 11/10/2022 Test Status: PASS (4.18%)
Message-ID: <1038651694.167.1665451091843@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221006.9865145-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               15476        15711     1
ls-l                229378       231289     0
chmod                24180        24524     1
stat                 35121        35481     1
read                 29017        29267     0
append               13818        14000     1
rename                 962          993     3
delete-renamed       22363        22769     1
mkdir                 3180         3262     2
rmdir                 2652         3582    35
cleanup               9525         9638     1
=================================================================

CPU Usage (%) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            128.85   128.95     0.08     322.89   327.56     1.43
ls-l                9.07     8.77    -3.42      20.47     1.44  -1321.53
chmod             107.04   103.63    -3.29     273.72   272.61    -0.41
stat               86.60    86.78     0.21     212.06   218.47     2.93
read               62.01    61.49    -0.85     227.10   208.84    -8.74
append            132.05   129.46    -2.00     292.08   294.37     0.78
rename             40.95    40.75    -0.49      87.01    85.72    -1.50
delete-renamed    187.21   183.52    -2.01     332.34   331.58    -0.23
mkdir             235.22   230.91    -1.87     392.84   395.46     0.66
rmdir             233.31   239.28     2.49     396.89   385.21    -3.03
cleanup            79.43    76.28    -4.13     208.93   205.50    -1.67
-----------------------------------------------------------------
NOTE: CPU usage is per brick process, averaged across the servers and clients.

Memory Usage (MB) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            564.94   597.41     5.44      69.71    68.95    -1.10
ls-l              564.88   597.41     5.45      70.47    69.48    -1.42
chmod             564.88   597.41     5.45      78.62    75.31    -4.40
stat              564.87   597.42     5.45      79.44    75.80    -4.80
read              564.86   597.43     5.45      79.98    76.51    -4.54
append            564.85   597.43     5.45      79.10    75.48    -4.80
rename            564.90   597.45     5.45      78.77    75.47    -4.37
delete-renamed    564.88   597.45     5.45      78.42    75.03    -4.52
mkdir             568.25   597.75     4.94      82.06    78.24    -4.88
rmdir             566.11   599.21     5.52      77.90    74.72    -4.26
cleanup           565.88   599.21     5.56      66.20    63.05    -5.00
-----------------------------------------------------------------
NOTE: Memory usage is per brick process, averaged across the servers and clients.
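The NOTE lines state that memory usage is per brick process, averaged across the servers and clients. One plausible way to sample the server side is to average the resident set size of the glusterfsd brick processes on each node; the actual harness may collect this differently:

    # Average RSS (in MB) across all glusterfsd brick processes on this host.
    import subprocess

    def avg_brick_rss_mb() -> float:
        # "ps -C glusterfsd -o rss=" prints one RSS value in KB per process.
        out = subprocess.run(["ps", "-C", "glusterfsd", "-o", "rss="],
                             capture_output=True, text=True, check=True)
        rss_kb = [int(v) for v in out.stdout.split()]
        return sum(rss_kb) / len(rss_kb) / 1024 if rss_kb else 0.0

    print(f"avg brick RSS: {avg_brick_rss_mb():.2f} MB")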
From niryadav at redhat.com Tue Oct 11 03:42:31 2022
From: niryadav at redhat.com (niryadav at redhat.com)
Date: Tue, 11 Oct 2022 03:42:31 +0000
Subject: [Gluster-devel] Updated invitation: Gluster Community Meeting @ Tue Oct 11, 2022 2:30pm - 3:30pm (IST) (gluster-devel@gluster.org)
Message-ID: <000000000000e0233a05eaba10c5@google.com>

This event has been updated (changed: description).

Gluster Community Meeting
When: Tuesday Oct 11, 2022, 2:30pm - 3:30pm, India Standard Time - Kolkata
Bridge: meet.google.com/cpu-eiue-hvk
Join by phone: (US) +1 574-400-8405, PIN: 291845177
Description: UPDATED THE GOOGLE MEET LINK - meet.google.com/cpu-eiue-hvk
Schedule: Every 2nd Tuesday at 14:30 IST / 09:00 UTC
Meeting notes: https://docs.google.com/document/d/1gnan3tNRsv09wGyADjhxnWFxyO4vA6VkkVzEF5kTIb0/edit
Minutes: https://hackmd.io/fB7S_jpZQ7K-d3ROFTUtFw?view
Organizer: nladha at redhat.com
Your attendance is optional.

From sacharya at redhat.com Tue Oct 11 09:36:26 2022
From: sacharya at redhat.com (Shwetha Acharya)
Date: Tue, 11 Oct 2022 15:06:26 +0530
Subject: [Gluster-devel] [Gluster-Maintainers] Release 11: Revisiting our proposed timeline and features
Message-ID:

It is time to evaluate the fulfillment of our committed features/improvements and the feasibility of the proposed deadlines as per the Release 11 tracker.

Currently our timeline is as follows:
Code Freeze: 31-Oct-2022
RC: 30-Nov-2022
GA: 10-JAN-2023

Please evaluate the following and reply to this thread if you want to convey anything important:
- Can we ensure that all the proposed requirements are fulfilled by the Code Freeze?
- Do we need to add any more changes to accommodate any shortcomings or improvements?
- Are we all good to go with the proposed timeline?

Regards,
Shwetha

From gluster-jenkins at redhat.com Wed Oct 12 01:16:07 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Wed, 12 Oct 2022 01:16:07 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 12/10/2022 Test Status: PASS (4.36%)
Message-ID: <75194680.171.1665537367454@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221010.a59eec1-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               15476        15881     2
ls-l                229378       229890     0
chmod                24180        24527     1
stat                 35121        35277     0
read                 29017        29402     1
append               13818        14019     1
rename                 962         1001     4
delete-renamed       22363        22717     1
mkdir                 3180         3264     2
rmdir                 2652         3575    34
cleanup               9525         9735     2
=================================================================

CPU Usage (%) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            128.85   132.33     2.63     322.89   330.68     2.36
ls-l                9.07     8.63    -5.10      20.47     1.40  -1362.14
chmod             107.04   103.44    -3.48     273.72   272.17    -0.57
stat               86.60    86.03    -0.66     212.06   217.88     2.67
read               62.01    61.31    -1.14     227.10   206.56    -9.94
append            132.05   129.26    -2.16     292.08   294.47     0.81
rename             40.95    40.72    -0.56      87.01    86.87    -0.16
delete-renamed    187.21   182.68    -2.48     332.34   332.20    -0.04
mkdir             235.22   231.26    -1.71     392.84   391.91    -0.24
rmdir             233.31   238.01     1.97     396.89   384.10    -3.33
cleanup            79.43    77.10    -3.02     208.93   206.41    -1.22
-----------------------------------------------------------------
NOTE: CPU usage is per brick process, averaged across the servers and clients.
Memory Usage (MB) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            564.94   594.05     4.90      69.71    71.19     2.08
ls-l              564.88   594.07     4.91      70.47    71.51     1.45
chmod             564.88   594.08     4.92      78.62    75.85    -3.65
stat              564.87   594.08     4.92      79.44    76.05    -4.46
read              564.86   594.08     4.92      79.98    76.50    -4.55
append            564.85   594.08     4.92      79.10    75.81    -4.34
rename            564.90   594.09     4.91      78.77    76.03    -3.60
delete-renamed    564.88   594.09     4.92      78.42    75.55    -3.80
mkdir             568.25   594.33     4.39      82.06    77.75    -5.54
rmdir             566.11   595.59     4.95      77.90    75.04    -3.81
cleanup           565.88   595.59     4.99      66.20    66.14    -0.09
-----------------------------------------------------------------
NOTE: Memory usage is per brick process, averaged across the servers and clients.

From gluster-jenkins at redhat.com Thu Oct 13 01:19:21 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Thu, 13 Oct 2022 01:19:21 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 13/10/2022 Test Status: PASS (4.18%)
Message-ID: <2032605960.174.1665623962490@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red Hat Enterprise Linux 8.4 (Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221010.a59eec1-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

=================================================================
FOPs              Baseline   DailyBuild   Baseline vs DailyBuild (%)
=================================================================
create               15476        15556     0
ls-l                229378       234354     2
chmod                24180        24429     1
stat                 35121        35520     1
read                 29017        29384     1
append               13818        13926     0
rename                 962          993     3
delete-renamed       22363        22725     1
mkdir                 3180         3261     2
rmdir                 2652         3555    34
cleanup               9525         9684     1
=================================================================

CPU Usage (%) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            128.85   124.80    -3.25     322.89   327.28     1.34
ls-l                9.07     8.93    -1.57      20.47     1.49  -1273.83
chmod             107.04   103.64    -3.28     273.72   271.69    -0.75
stat               86.60    86.36    -0.28     212.06   219.67     3.46
read               62.01    61.21    -1.31     227.10   211.01    -7.63
append            132.05   128.69    -2.61     292.08   293.59     0.51
rename             40.95    40.69    -0.64      87.01    86.21    -0.93
delete-renamed    187.21   183.14    -2.22     332.34   330.69    -0.50
mkdir             235.22   230.23    -2.17     392.84   396.06     0.81
rmdir             233.31   234.57     0.54     396.89   382.65    -3.72
cleanup            79.43    77.93    -1.92     208.93   203.54    -2.65
-----------------------------------------------------------------
NOTE: CPU usage is per brick process, averaged across the servers and clients.

Memory Usage (MB) by Servers and Clients
-----------------------------------------------------------------
FOP               Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
-----------------------------------------------------------------
create            564.94   594.61     4.99      69.71    68.04    -2.45
ls-l              564.88   594.62     5.00      70.47    68.72    -2.55
chmod             564.88   594.62     5.00      78.62    75.13    -4.65
stat              564.87   594.63     5.00      79.44    75.70    -4.94
read              564.86   594.63     5.01      79.98    76.40    -4.69
append            564.85   594.64     5.01      79.10    75.61    -4.62
rename            564.90   594.66     5.00      78.77    75.27    -4.65
delete-renamed    564.88   594.66     5.01      78.42    74.88    -4.73
mkdir             568.25   594.95     4.49      82.06    78.10    -5.07
rmdir             566.11   596.35     5.07      77.90    74.53    -4.52
cleanup           565.88   596.34     5.11      66.20    62.63    -5.70
-----------------------------------------------------------------
NOTE: Memory usage is per brick process, averaged across the servers and clients.
From gluster-jenkins at redhat.com  Fri Oct 14 01:26:17 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 14 Oct 2022 01:26:17 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 14/10/2022 Test Status: PASS (4.18%)
Message-ID: <1885431618.177.1665710777756@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221010.a59eec1-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           15476       15895         2
ls-l             229378      229590        0
chmod            24180       24421         0
stat             35121       35338         0
read             29017       29459         1
append           13818       13968         1
rename           962         998           3
delete-renamed   22363       22772         1
mkdir            3180        3269          2
rmdir            2652        3596          35
cleanup          9525        9712          1
===============================================================================
===============================================================================
CPU Usage (%) by Servers and Clients
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           128.85    131.66     2.13    322.89    332.00     2.74
ls-l             9.07      8.51      -6.58    20.47     1.48    -1283.11
chmod            107.04    103.69    -3.23    273.72    271.30    -0.89
stat             86.60     85.70     -1.05    212.06    217.92     2.69
read             62.01     61.11     -1.47    227.10    207.22    -9.59
append           132.05    128.43    -2.82    292.08    295.05     1.01
rename           40.95     40.53     -1.04    87.01     86.88     -0.15
delete-renamed   187.21    183.69    -1.92    332.34    331.96    -0.11
mkdir            235.22    231.30    -1.69    392.84    394.31     0.37
rmdir            233.31    239.80     2.71    396.89    385.75    -2.89
cleanup          79.43     76.18     -4.27    208.93    202.89    -2.98
===============================================================================
NOTE: The CPU usage per brick process is averaged across the servers and clients.
===============================================================================
Memory Usage by Servers and Clients (MB)
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           564.94    594.99     5.05    69.71    68.87    -1.22
ls-l             564.88    595.00     5.06    70.47    69.53    -1.35
chmod            564.88    595.00     5.06    78.62    75.36    -4.33
stat             564.87    595.00     5.06    79.44    75.61    -5.07
read             564.86    595.00     5.07    79.98    76.27    -4.86
append           564.85    595.00     5.07    79.10    75.63    -4.59
rename           564.90    595.00     5.06    78.77    75.67    -4.10
delete-renamed   564.88    595.00     5.06    78.42    75.30    -4.14
mkdir            568.25    595.33     4.55    82.06    78.35    -4.74
rmdir            566.11    596.82     5.15    77.90    74.52    -4.54
cleanup          565.88    596.81     5.18    66.20    62.71    -5.57
===============================================================================
NOTE: The memory usage per brick process is averaged across the servers and clients.

From gluster-jenkins at redhat.com  Fri Oct 14 19:48:07 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 14 Oct 2022 19:48:07 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 15/10/2022 Test Status: PASS (4.09%)
Message-ID: <137895761.179.1665776888001@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221010.a59eec1-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           15476       15889         2
ls-l             229378      235188        2
chmod            24180       24434         1
stat             35121       35487         1
read             29017       29265         0
append           13818       13935         0
rename           962         993           3
delete-renamed   22363       22678         1
mkdir            3180        3241          1
rmdir            2652        3563          34
cleanup          9525        9610          0
===============================================================================
===============================================================================
CPU Usage (%) by Servers and Clients
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           128.85    131.96     2.36    322.89    330.27     2.23
ls-l             9.07      8.99      -0.89    20.47     1.49    -1273.83
chmod            107.04    104.15    -2.77    273.72    272.66    -0.39
stat             86.60     86.28     -0.37    212.06    219.10     3.21
read             62.01     61.44     -0.93    227.10    208.75    -8.79
append           132.05    128.32    -2.91    292.08    294.48     0.81
rename           40.95     40.76     -0.47    87.01     86.44     -0.66
delete-renamed   187.21    183.19    -2.19    332.34    335.12     0.83
mkdir            235.22    229.06    -2.69    392.84    393.59     0.19
rmdir            233.31    236.83     1.49    396.89    386.10    -2.79
cleanup          79.43     72.98     -8.84    208.93    204.02    -2.41
===============================================================================
NOTE: The CPU usage per brick process is averaged across the servers and clients.
===============================================================================
Memory Usage by Servers and Clients (MB)
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           564.94    593.99     4.89    69.71    69.26    -0.65
ls-l             564.88    593.99     4.90    70.47    69.90    -0.82
chmod            564.88    593.99     4.90    78.62    75.11    -4.67
stat             564.87    593.99     4.90    79.44    75.52    -5.19
read             564.86    593.99     4.90    79.98    76.17    -5.00
append           564.85    594.00     4.91    79.10    75.09    -5.34
rename           564.90    594.00     4.90    78.77    75.26    -4.66
delete-renamed   564.88    594.00     4.90    78.42    74.93    -4.66
mkdir            568.25    594.37     4.39    82.06    77.92    -5.31
rmdir            566.11    595.77     4.98    77.90    74.26    -4.90
cleanup          565.88    595.85     5.03    66.20    62.61    -5.73
===============================================================================
NOTE: The memory usage per brick process is averaged across the servers and clients.

From gluster-jenkins at redhat.com  Fri Oct 14 21:03:03 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 14 Oct 2022 21:03:03 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Disperse] Performance report for Gluster Upstream - 15/10/2022 Test Status: PASS (11.09%)
Message-ID: <832063857.181.1665781386194@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221010.a59eec1-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Disperse
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           20121       20394         1
ls-l             137612      139596        1
chmod            32549       32549         0
stat             76100       75218         -1
read             19584       19618         0
append           19598       20243         3
rename           1005        1033          2
delete-renamed   24506       23903         -2
mkdir            2683        2686          0
rmdir            2447        5383          119
cleanup          21294       20947         -1
===============================================================================

From gluster-jenkins at redhat.com  Fri Oct 14 21:51:06 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 14 Oct 2022 21:51:06 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Replica-3] Performance report for Gluster Upstream - 15/10/2022 Test Status: PASS (4.00%)
Message-ID: <386915830.183.1665784266660@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221010.a59eec1-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs               Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
random-write       654         686           4
random-read        1802        1891          4
sequential-read    6186        6741          8
sequential-write   2451        2446          0
===============================================================================
From gluster-jenkins at redhat.com  Fri Oct 14 22:30:46 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 14 Oct 2022 22:30:46 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Disperse] Performance report for Gluster Upstream - 15/10/2022 Test Status: PASS (0.50%)
Message-ID: <1412233966.185.1665786646790@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221010.a59eec1-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Disperse
Volume Option: No volume options configured
===============================================================================
FOPs               Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
random-write       1067        1083          1
random-read        1334        1357          1
sequential-read    6631        6651          0
sequential-write   4855        4854          0
===============================================================================

From gluster-jenkins at redhat.com  Fri Oct 14 23:17:33 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 14 Oct 2022 23:17:33 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Replica-3 with Shard] Performance report for Gluster Upstream - 15/10/2022 Test Status: PASS (0.50%)
Message-ID: <2000505613.189.1665789453829@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221010.a59eec1-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Replica-3
Volume Option: Shard
===============================================================================
FOPs               Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
random-write       1746        1785          2
random-read        2779        2753          0
sequential-read    7249        7175          -1
sequential-write   2351        2391          1
===============================================================================

From gluster-jenkins at redhat.com  Mon Oct 17 01:19:33 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Mon, 17 Oct 2022 01:19:33 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 17/10/2022 Test Status: PASS (4.82%)
Message-ID: <2001691622.197.1665969573771@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221014.9f45d4c-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           15476       15803         2
ls-l             229378      235309        2
chmod            24180       24503         1
stat             35121       34996         0
read             29017       29629         2
append           13818       13958         1
rename           962         994           3
delete-renamed   22363       22796         1
mkdir            3180        3264          2
rmdir            2652        3589          35
cleanup          9525        9936          4
===============================================================================
===============================================================================
CPU Usage (%) by Servers and Clients
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           128.85    129.07     0.17    322.89    331.06     2.47
ls-l             9.07      25.87     64.94    20.47     1.80    -1037.22
chmod            107.04    99.47     -7.61    273.72    284.05     3.64
stat             86.60     63.60    -36.16    212.06    248.25    14.58
read             62.01     59.49     -4.24    227.10    251.63     9.75
append           132.05    117.02   -12.84    292.08    304.55     4.09
rename           40.95     40.81     -0.34    87.01     86.41     -0.69
delete-renamed   187.21    150.14   -24.69    332.34    355.95     6.63
mkdir            235.22    231.85    -1.45    392.84    396.60     0.95
rmdir            233.31    237.76     1.87    396.89    386.07    -2.80
cleanup          79.43     79.14     -0.37    208.93    215.14     2.89
===============================================================================
NOTE: The CPU usage per brick process is averaged across the servers and clients.
===============================================================================
Memory Usage by Servers and Clients (MB)
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           564.94    580.11     2.62    69.71    66.23    -5.25
ls-l             564.88    580.12     2.63    70.47    67.17    -4.91
chmod            564.88    580.12     2.63    78.62    73.84    -6.47
stat             564.87    580.12     2.63    79.44    74.55    -6.56
read             564.86    580.14     2.63    79.98    75.07    -6.54
append           564.85    580.15     2.64    79.10    74.14    -6.69
rename           564.90    580.15     2.63    78.77    73.96    -6.50
delete-renamed   564.88    580.15     2.63    78.42    73.74    -6.35
mkdir            568.25    581.23     2.23    82.06    76.86    -6.77
rmdir            566.11    582.85     2.87    77.90    73.12    -6.54
cleanup          565.88    582.87     2.91    66.20    61.59    -7.48
===============================================================================
NOTE: The memory usage per brick process is averaged across the servers and clients.
From amar at kadalu.io  Mon Oct 17 02:03:26 2022
From: amar at kadalu.io (Amar Tumballi)
Date: Mon, 17 Oct 2022 07:33:26 +0530
Subject: [Gluster-devel] [Gluster-Maintainers] Release 11: Revisiting our proposed timeline and features
In-Reply-To:
References:
Message-ID:

Here is my honest take on this one.

On Tue, Oct 11, 2022 at 3:06 PM Shwetha Acharya wrote:
> It is time to evaluate the fulfillment of our committed
> features/improvements and the feasibility of the proposed deadlines as per
> the Release 11 tracker.
>
> Currently our timeline is as follows:
>
> Code Freeze: 31-Oct-2022
> RC : 30-Nov-2022
> GA : 10-JAN-2023
>
> *Please evaluate the following and reply to this thread if you want to
> convey anything important:*
>
> - Can we ensure to fulfill all the proposed requirements by the Code
> Freeze?
> - Do we need to add any more changes to accommodate any shortcomings or
> improvements?
> - Are we all good to go with the proposed timeline?
>

We have already delayed the release by more than a year, and that is a significant delay for any project. If the changes we work on are not released frequently, the feedback loop for the project is delayed, and with it further improvements. So, regardless of any pending promised items, we should go ahead with the code freeze and release on these dates.

It is crucial that projects and companies dependent on the project can plan accordingly. A few others may already have planned their product releases around these dates. Let's keep the same dates, and try to achieve the tasks we have planned within them.

Regards,
Amar

> Regards,
> Shwetha
> _______________________________________________
> maintainers mailing list
> maintainers at gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers

--
https://kadalu.io
Container Storage made easy!

From sajmoham at redhat.com  Mon Oct 17 02:32:44 2022
From: sajmoham at redhat.com (sajmoham at redhat.com)
Date: Mon, 17 Oct 2022 02:32:44 +0000
Subject: [Gluster-devel] Gluster Code Metrics Weekly Report
Message-ID: <0000000000005cf04a05eb31ca32@google.com>

Gluster Code Metrics

Metrics     Values
Clang Scan  63
Coverity    16
Line Cov
Func Cov

Trend Graph
Check the latest run: Coverity Clang Code Coverage

From jahernan at redhat.com  Mon Oct 17 05:41:11 2022
From: jahernan at redhat.com (Xavi Hernandez)
Date: Mon, 17 Oct 2022 07:41:11 +0200
Subject: [Gluster-devel] [Gluster-Maintainers] Release 11: Revisiting our proposed timeline and features
In-Reply-To:
References:
Message-ID:

On Mon, Oct 17, 2022 at 4:03 AM Amar Tumballi wrote:
> Here is my honest take on this one.
>
> On Tue, Oct 11, 2022 at 3:06 PM Shwetha Acharya wrote:
>> It is time to evaluate the fulfillment of our committed
>> features/improvements and the feasibility of the proposed deadlines as per
>> the Release 11 tracker.
>>
>> Currently our timeline is as follows:
>>
>> Code Freeze: 31-Oct-2022
>> RC : 30-Nov-2022
>> GA : 10-JAN-2023
>>
>> *Please evaluate the following and reply to this thread if you want to
>> convey anything important:*
>>
>> - Can we ensure to fulfill all the proposed requirements by the Code
>> Freeze?
>> - Do we need to add any more changes to accommodate any shortcomings or
>> improvements?
>> - Are we all good to go with the proposed timeline?
>
> We have already delayed the release by more than a year, and that is a
> significant delay for any project. If the changes we work on are not
> released frequently, the feedback loop for the project is delayed, and
> with it further improvements. So, regardless of any pending promised
> items, we should go ahead with the code freeze and release on these dates.
>
> It is crucial that projects and companies dependent on the project can
> plan accordingly. A few others may already have planned their product
> releases around these dates. Let's keep the same dates, and try to
> achieve the tasks we have planned within them.

I agree. Pending changes will need to go into the next release. Merging them at the last minute is not safe for stability.

Xavi

From ykaul at redhat.com  Mon Oct 17 08:40:03 2022
From: ykaul at redhat.com (Yaniv Kaul)
Date: Mon, 17 Oct 2022 11:40:03 +0300
Subject: [Gluster-devel] [Gluster-Maintainers] Release 11: Revisiting our proposed timeline and features
In-Reply-To:
References:
Message-ID:

On Mon, Oct 17, 2022 at 8:41 AM Xavi Hernandez wrote:
> [...]
> I agree. Pending changes will need to go into the next release. Merging
> them at the last minute is not safe for stability.

Generally, +1.
- Some info on my in-flight PRs:

I have multiple independent patches for the flexible array member conversion of different variables that are pending:
https://github.com/gluster/glusterfs/pull/3873
https://github.com/gluster/glusterfs/pull/3872
https://github.com/gluster/glusterfs/pull/3868 (this one is particularly interesting, I hope it works!)
https://github.com/gluster/glusterfs/pull/3861
https://github.com/gluster/glusterfs/pull/3870 (already in review, perhaps it can get in soon?)
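For readers who haven't followed these PRs, here is a minimal sketch of what a flexible array member conversion typically looks like. The struct and function names are invented for illustration; this is not code from the PRs above:

    #include <stdlib.h>
    #include <string.h>

    /* Before: a trailing pointer means two allocations per object
     * and an extra pointer chase on every access. */
    struct msg_old {
        size_t len;
        char *payload;      /* separately allocated */
    };

    /* After: a C99 flexible array member lays the data out inline,
     * so one allocation (and one free) covers the whole object. */
    struct msg_new {
        size_t len;
        char payload[];
    };

    static struct msg_new *msg_new_create(const char *data, size_t len)
    {
        /* one allocation instead of two; better cache locality */
        struct msg_new *m = malloc(sizeof(*m) + len);
        if (!m)
            return NULL;
        m->len = len;
        memcpy(m->payload, data, len);
        return m;
    }

The payoff is fewer allocator round-trips and less fragmentation, which is presumably why the conversion is spread over several independent patches.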
I have this one for inode-related code, which got some attention recently:
https://github.com/gluster/glusterfs/pull/3226

I think this one is worthwhile looking at:
https://github.com/gluster/glusterfs/pull/3854

I wish we could get rid of old, unsupported versions:
https://github.com/gluster/glusterfs/pull/3544
(there's more to do, in different patches, but it's a start)

None of them is critical for release 11, though I'm unsure if I'll have the ability to complete them later.

- The lack of official EL9 support (incl. testing infrastructure) is regrettable, and I think it is worth fixing *before* release 11 - adding sanity testing on newer OS releases, which will use io_uring for example, is something we should definitely consider.

Lastly, I thought zstandard compression for the CDC xlator is interesting for 11 (https://github.com/gluster/glusterfs/pull/3841) - unsure if it's ready for inclusion, but the overall impact on stability should be low, considering this is not a fully supported xlator anyway (and then https://github.com/gluster/glusterfs/pull/3835 should / could be considered as well).

Last thought: if we are just time-based - sure, there's value in going forward and releasing it - there are hundreds (or more) of great patches already merged, so I think there's value here. If we wish to look at features and impactful changes for the users - I suggest we review https://github.com/gluster/glusterfs/issues/3023, look at what's missing and what's in flight and can get in, draft the release announcement, and see if there's enough content.

(I'm for the former; I just think the latter is a good, reasonable exercise to see what's in 11!)
Y.

> Xavi
> -------
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel

From jahernan at redhat.com  Mon Oct 17 09:53:58 2022
From: jahernan at redhat.com (Xavi Hernandez)
Date: Mon, 17 Oct 2022 11:53:58 +0200
Subject: [Gluster-devel] [Gluster-Maintainers] Release 11: Revisiting our proposed timeline and features
In-Reply-To:
References:
Message-ID:

On Mon, Oct 17, 2022 at 10:40 AM Yaniv Kaul wrote:
> [...]
> I have multiple independent patches for the flexible array member
> conversion of different variables that are pending:
> https://github.com/gluster/glusterfs/pull/3873
> https://github.com/gluster/glusterfs/pull/3872
> https://github.com/gluster/glusterfs/pull/3868 (this one is particularly
> interesting, I hope it works!)
> https://github.com/gluster/glusterfs/pull/3861
> https://github.com/gluster/glusterfs/pull/3870 (already in review,
> perhaps it can get in soon?)

I'm already looking at these and I expect they can be merged before the current code-freeze date.

> I have this one for inode-related code, which got some attention recently:
> https://github.com/gluster/glusterfs/pull/3226

I'll try to review this one before code-freeze, but it requires much more care. Any help will be appreciated.

> I think this one is worthwhile looking at:
> https://github.com/gluster/glusterfs/pull/3854

I'll try to take a look at this one also.

> I wish we could get rid of old, unsupported versions:
> https://github.com/gluster/glusterfs/pull/3544
> (there's more to do, in different patches, but it's a start)

This one is mostly ok, but I think we can't release a new version without an explicit check for unsupported versions, at least at the beginning, to avoid problems when users upgrade directly from 3.x to 11.x.
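To make the concern concrete, something along these lines is what such a guard could look like. All names here are invented for illustration; this is not the actual glusterd code, and the minimum-version constant is an assumption:

    #include <stdio.h>
    #include <stdbool.h>

    /* Assumed floor: the oldest on-disk op-version we still accept. */
    #define OP_VERSION_MIN_SUPPORTED 70000

    /* Refuse to start from state written by an unsupported release,
     * instead of failing in subtle ways later in the upgrade. */
    static bool upgrade_path_supported(int stored_op_version)
    {
        if (stored_op_version < OP_VERSION_MIN_SUPPORTED) {
            fprintf(stderr,
                    "stored op-version %d predates the supported floor %d; "
                    "please upgrade through an intermediate release\n",
                    stored_op_version, OP_VERSION_MIN_SUPPORTED);
            return false;
        }
        return true;
    }

The design point is simply that the check fails loudly at startup, so a 3.x-to-11.x jump is rejected before any state is touched.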
> None of them is critical for release 11, though I'm unsure if I'll have
> the ability to complete them later.
>
> - The lack of official EL9 support (incl. testing infrastructure) is
> regrettable, and I think it is worth fixing *before* release 11 - adding
> sanity testing on newer OS releases, which will use io_uring for example,
> is something we should definitely consider.
>
> Lastly, I thought zstandard compression for the CDC xlator is interesting
> for 11 (https://github.com/gluster/glusterfs/pull/3841) - unsure if it's
> ready for inclusion, but the overall impact on stability should be low,
> considering this is not a fully supported xlator anyway (and then
> https://github.com/gluster/glusterfs/pull/3835 should / could be
> considered as well).

I already started the review, but I'm not very familiar with cdc and the compression libraries, so I'll need some more time.

> Last thought: if we are just time-based - sure, there's value in going
> forward and releasing it - there are hundreds (or more) of great patches
> already merged, so I think there's value here.
> If we wish to look at features and impactful changes for the users - I
> suggest we review https://github.com/gluster/glusterfs/issues/3023, look
> at what's missing and what's in flight and can get in, draft the release
> announcement, and see if there's enough content.

At this point I don't think any of the remaining big features can be added, and given that release 11 has already been delayed quite a bit, I agree with Amar that we should provide a new version soon. I think it already contains very interesting changes that can improve performance and stability.

> (I'm for the former; I just think the latter is a good, reasonable
> exercise to see what's in 11!)
> Y.
>
>> Xavi
>> -------
>> Community Meeting Calendar:
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>
>> Gluster-devel mailing list
>> Gluster-devel at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel

From gluster-jenkins at redhat.com  Tue Oct 18 05:36:40 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Tue, 18 Oct 2022 05:36:40 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 18/10/2022 Test Status: PASS (5.09%)
Message-ID: <231296910.201.1666071400063@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221017.572c5e1-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           15476       15865         2
ls-l             229378      239488        4
chmod            24180       24467         1
stat             35121       35363         0
read             29017       29562         1
append           13818       13948         0
rename           962         1001          4
delete-renamed   22363       23083         3
mkdir            3180        3279          3
rmdir            2652        3592          35
cleanup          9525        9850          3
===============================================================================
===============================================================================
CPU Usage (%) by Servers and Clients
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           128.85    131.26     1.84    322.89    333.04     3.05
ls-l             9.07      25.63     64.61    20.47     1.81    -1030.94
chmod            107.04    93.79    -14.13    273.72    311.09    12.01
stat             86.60     58.75    -47.40    212.06    258.43    17.94
read             62.01     59.26     -4.64    227.10    263.40    13.78
append           132.05    112.67   -17.20    292.08    307.50     5.01
rename           40.95     40.61     -0.84    87.01     86.16     -0.99
delete-renamed   187.21    144.81   -29.28    332.34    377.31    11.92
mkdir            235.22    227.56    -3.37    392.84    395.33     0.63
rmdir            233.31    227.30    -2.64    396.89    392.17    -1.20
cleanup          79.43     75.61     -5.05    208.93    218.68     4.46
===============================================================================
NOTE: The CPU usage per brick process is averaged across the servers and clients.
===============================================================================
Memory Usage by Servers and Clients (MB)
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           564.94    569.61     0.82    69.71    68.99    -1.04
ls-l             564.88    569.49     0.81    70.47    69.42    -1.51
chmod            564.88    569.53     0.82    78.62    74.07    -6.14
stat             564.87    569.57     0.83    79.44    74.48    -6.66
read             564.86    569.59     0.83    79.98    75.02    -6.61
append           564.85    569.61     0.84    79.10    74.09    -6.76
rename           564.90    569.68     0.84    78.77    74.22    -6.13
delete-renamed   564.88    569.70     0.85    78.42    73.95    -6.04
mkdir            568.25    570.72     0.43    82.06    77.04    -6.52
rmdir            566.11    572.20     1.06    77.90    73.55    -5.91
cleanup          565.88    572.20     1.10    66.20    61.88    -6.98
===============================================================================
NOTE: The memory usage per brick process is averaged across the servers and clients.
From gluster-jenkins at redhat.com  Wed Oct 19 01:12:04 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Wed, 19 Oct 2022 01:12:04 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 19/10/2022 Test Status: PASS (5.18%)
Message-ID: <805863992.205.1666141924657@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221017.572c5e1-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           15476       15945         3
ls-l             229378      242922        5
chmod            24180       24391         0
stat             35121       35026         0
read             29017       29648         2
append           13818       14006         1
rename           962         1000          3
delete-renamed   22363       22564         0
mkdir            3180        3288          3
rmdir            2652        3578          34
cleanup          9525        10118         6
===============================================================================
===============================================================================
CPU Usage (%) by Servers and Clients
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           128.85    133.04     3.15    322.89    329.12     1.89
ls-l             9.07      25.48     64.40    20.47     1.88     -988.83
chmod            107.04    94.70    -13.03    273.72    308.00    11.13
stat             86.60     58.44    -48.19    212.06    257.01    17.49
read             62.01     59.49     -4.24    227.10    272.01    16.51
append           132.05    112.23   -17.66    292.08    307.43     4.99
rename           40.95     40.67     -0.69    87.01     85.63     -1.61
delete-renamed   187.21    143.01   -30.91    332.34    366.84     9.40
mkdir            235.22    229.68    -2.41    392.84    395.81     0.75
rmdir            233.31    227.64    -2.49    396.89    387.03    -2.55
cleanup          79.43     76.81     -3.41    208.93    220.77     5.36
===============================================================================
NOTE: The CPU usage per brick process is averaged across the servers and clients.
===============================================================================
Memory Usage by Servers and Clients (MB)
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           564.94    570.89     1.04    69.71    66.93    -4.15
ls-l             564.88    570.90     1.05    70.47    67.41    -4.54
chmod            564.88    570.91     1.06    78.62    73.64    -6.76
stat             564.87    570.93     1.06    79.44    74.23    -7.02
read             564.86    570.95     1.07    79.98    74.55    -7.28
append           564.85    570.98     1.07    79.10    73.63    -7.43
rename           564.90    571.02     1.07    78.77    73.76    -6.79
delete-renamed   564.88    571.03     1.08    78.42    73.44    -6.78
mkdir            568.25    571.85     0.63    82.06    75.86    -8.17
rmdir            566.11    573.38     1.27    77.90    73.10    -6.57
cleanup          565.88    573.38     1.31    66.20    64.31    -2.94
===============================================================================
NOTE: The memory usage per brick process is averaged across the servers and clients.

From gluster-jenkins at redhat.com  Thu Oct 20 08:48:02 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Thu, 20 Oct 2022 08:48:02 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 20/10/2022 Test Status: PASS (5.64%)
Message-ID: <1813041420.208.1666255682096@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221019.e40cd0d-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           15476       15980         3
ls-l             229378      239053        4
chmod            24180       24312         0
stat             35121       35207         0
read             29017       29889         3
append           13818       14004         1
rename           962         1006          4
delete-renamed   22363       23242         3
mkdir            3180        3299          3
rmdir            2652        3595          35
cleanup          9525        10127         6
===============================================================================
===============================================================================
CPU Usage (%) by Servers and Clients
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           128.85    129.68     0.64    322.89    329.68     2.06
ls-l             9.07      25.28     64.12    20.47     1.76    -1063.07
chmod            107.04    90.90    -17.76    273.72    306.90    10.81
stat             86.60     57.79    -49.85    212.06    256.60    17.36
read             62.01     58.83     -5.41    227.10    263.82    13.92
append           132.05    110.87   -19.10    292.08    306.72     4.77
rename           40.95     40.57     -0.94    87.01     85.93     -1.26
delete-renamed   187.21    140.54   -33.21    332.34    374.83    11.34
mkdir            235.22    225.91    -4.12    392.84    396.35     0.89
rmdir            233.31    224.71    -3.83    396.89    390.78    -1.56
cleanup          79.43     72.30     -9.86    208.93    215.50     3.05
===============================================================================
NOTE: The CPU usage per brick process is averaged across the servers and clients.
===============================================================================
Memory Usage by Servers and Clients (MB)
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           564.94    569.34     0.77    69.71    67.63    -3.08
ls-l             564.88    569.35     0.79    70.47    68.22    -3.30
chmod            564.88    569.39     0.79    78.62    72.28    -8.77
stat             564.87    569.42     0.80    79.44    72.77    -9.17
read             564.86    569.45     0.81    79.98    73.15    -9.34
append           564.85    569.45     0.81    79.10    72.87    -8.55
rename           564.90    569.46     0.80    78.77    73.02    -7.87
delete-renamed   564.88    569.48     0.81    78.42    72.73    -7.82
mkdir            568.25    570.57     0.41    82.06    76.24    -7.63
rmdir            566.11    571.90     1.01    77.90    72.31    -7.73
cleanup          565.88    571.89     1.05    66.20    63.74    -3.86
===============================================================================
NOTE: The memory usage per brick process is averaged across the servers and clients.

From gluster-jenkins at redhat.com  Fri Oct 21 01:20:21 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 21 Oct 2022 01:20:21 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 21/10/2022 Test Status: PASS (5.45%)
Message-ID: <1214048936.211.1666315226397@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221019.e40cd0d-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           15476       16003         3
ls-l             229378      239173        4
chmod            24180       24706         2
stat             35121       35671         1
read             29017       29751         2
append           13818       14117         2
rename           962         1003          4
delete-renamed   22363       21903         -2
mkdir            3180        3302          3
rmdir            2652        3609          36
cleanup          9525        10047         5
===============================================================================
===============================================================================
CPU Usage (%) by Servers and Clients
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           128.85    132.91     3.05    322.89    323.36     0.15
ls-l             9.07      25.38     64.26    20.47     1.81    -1030.94
chmod            107.04    92.70    -15.47    273.72    309.95    11.69
stat             86.60     58.21    -48.77    212.06    257.51    17.65
read             62.01     58.80     -5.46    227.10    270.20    15.95
append           132.05    112.34   -17.54    292.08    310.16     5.83
rename           40.95     40.60     -0.86    87.01     86.29     -0.83
delete-renamed   187.21    142.88   -31.03    332.34    374.27    11.20
mkdir            235.22    226.00    -4.08    392.84    396.36     0.89
rmdir            233.31    225.55    -3.44    396.89    388.38    -2.19
cleanup          79.43     73.64     -7.86    208.93    214.28     2.50
===============================================================================
NOTE: The CPU usage per brick process is averaged across the servers and clients.
===============================================================================
Memory Usage by Servers and Clients (MB)
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           564.94    564.27    -0.12    69.71    68.16    -2.27
ls-l             564.88    564.28    -0.11    70.47    68.80    -2.43
chmod            564.88    564.31    -0.10    78.62    73.65    -6.75
stat             564.87    564.32    -0.10    79.44    73.85    -7.57
read             564.86    564.34    -0.09    79.98    74.47    -7.40
append           564.85    564.38    -0.08    79.10    73.51    -7.60
rename           564.90    564.41    -0.09    78.77    73.42    -7.29
delete-renamed   564.88    564.41    -0.08    78.42    72.93    -7.53
mkdir            568.25    564.83    -0.61    82.06    76.67    -7.03
rmdir            566.11    566.47     0.06    77.90    72.97    -6.76
cleanup          565.88    566.45     0.10    66.20    61.48    -7.68
===============================================================================
NOTE: The memory usage per brick process is averaged across the servers and clients.

From gluster-jenkins at redhat.com  Fri Oct 21 19:47:10 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 21 Oct 2022 19:47:10 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 22/10/2022 Test Status: PASS (6.18%)
Message-ID: <451593753.213.1666381630270@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221020.9a1b8c3-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           15476       16158         4
ls-l             229378      234371        2
chmod            24180       24731         2
stat             35121       35725         1
read             29017       30048         3
append           13818       14107         2
rename           962         1010          4
delete-renamed   22363       23245         3
mkdir            3180        3303          3
rmdir            2652        3620          36
cleanup          9525        10344         8
===============================================================================
===============================================================================
CPU Usage (%) by Servers and Clients
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           128.85    132.15     2.50    322.89    333.14     3.08
ls-l             9.07      25.36     64.24    20.47     1.84    -1012.50
chmod            107.04    93.85    -14.05    273.72    307.38    10.95
stat             86.60     57.54    -50.50    212.06    259.52    18.29
read             62.01     57.95     -7.01    227.10    264.62    14.18
append           132.05    110.98   -18.99    292.08    308.41     5.29
rename           40.95     40.24     -1.76    87.01     86.61     -0.46
delete-renamed   187.21    140.02   -33.70    332.34    376.17    11.65
mkdir            235.22    223.73    -5.14    392.84    394.78     0.49
rmdir            233.31    223.70    -4.30    396.89    388.72    -2.10
cleanup          79.43     72.94     -8.90    208.93    214.07     2.40
===============================================================================
NOTE: The CPU usage per brick process is averaged across the servers and clients.
===============================================================================
Memory Usage by Servers and Clients (MB)
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           564.94    555.93    -1.62    69.71    67.11     -3.87
ls-l             564.88    555.95    -1.61    70.47    67.50     -4.40
chmod            564.88    555.98    -1.60    78.62    71.86     -9.41
stat             564.87    555.99    -1.60    79.44    72.01    -10.32
read             564.86    555.99    -1.60    79.98    72.39    -10.48
append           564.85    556.03    -1.59    79.10    71.90    -10.01
rename           564.90    556.12    -1.58    78.77    71.57    -10.06
delete-renamed   564.88    556.15    -1.57    78.42    70.98    -10.48
mkdir            568.25    556.84    -2.05    82.06    73.50    -11.65
rmdir            566.11    558.20    -1.42    77.90    70.70    -10.18
cleanup          565.88    558.22    -1.37    66.20    61.94     -6.88
===============================================================================
NOTE: The memory usage per brick process is averaged across the servers and clients.

From gluster-jenkins at redhat.com  Fri Oct 21 20:56:21 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 21 Oct 2022 20:56:21 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Disperse] Performance report for Gluster Upstream - 22/10/2022 Test Status: PASS (12.09%)
Message-ID: <1924254734.215.1666385781835@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221020.9a1b8c3-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Disperse
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           20121       20688         2
ls-l             137612      135954        -1
chmod            32549       33546         3
stat             76100       75973         0
read             19584       19948         1
append           19598       20600         5
rename           1005        1046          4
delete-renamed   24506       24728         0
mkdir            2683        2712          1
rmdir            2447        5446          122
cleanup          21294       20255         -4
===============================================================================

From gluster-jenkins at redhat.com  Fri Oct 21 21:45:04 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 21 Oct 2022 21:45:04 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Replica-3] Performance report for Gluster Upstream - 22/10/2022 Test Status: PASS (3.00%)
Message-ID: <423732052.217.1666388704866@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221020.9a1b8c3-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs               Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
random-write       654         696           6
random-read        1802        1923          6
sequential-read    6186        6179          0
sequential-write   2451        2439          0
===============================================================================

From gluster-jenkins at redhat.com  Fri Oct 21 22:24:49 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 21 Oct 2022 22:24:49 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Disperse] Performance report for Gluster Upstream - 22/10/2022 Test Status: PASS (0.75%)
Message-ID: <408075130.219.1666391089682@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221020.9a1b8c3-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Disperse
Volume Option: No volume options configured
===============================================================================
FOPs               Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
random-write       1067        1087          1
random-read        1334        1369          2
sequential-read    6631        6663          0
sequential-write   4855        4815          0
===============================================================================

From gluster-jenkins at redhat.com  Fri Oct 21 23:11:34 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 21 Oct 2022 23:11:34 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Replica-3 with Shard] Performance report for Gluster Upstream - 22/10/2022 Test Status: PASS (0.75%)
Message-ID: <763231218.222.1666393894419@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221020.9a1b8c3-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Replica-3
Volume Option: Shard
===============================================================================
FOPs               Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
random-write       1746        1804          3
random-read        2779        2772          0
sequential-read    7249        7256          0
sequential-write   2351        2354          0
===============================================================================
From sajmoham at redhat.com  Mon Oct 24 02:32:44 2022
From: sajmoham at redhat.com (sajmoham at redhat.com)
Date: Mon, 24 Oct 2022 02:32:44 +0000
Subject: [Gluster-devel] Gluster Code Metrics Weekly Report
Message-ID: <0000000000004716e005ebbe9b4c@google.com>

Gluster Code Metrics

Metrics     Values
Clang Scan  64
Coverity    16
Line Cov
Func Cov

Trend Graph
Check the latest run: Coverity Clang Code Coverage

From gluster-jenkins at redhat.com  Tue Oct 25 01:15:16 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Tue, 25 Oct 2022 01:15:16 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 25/10/2022 Test Status: PASS (5.82%)
Message-ID: <676669717.230.1666660516096@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1.
Current Gluster version: glusterfs-20221020.9a1b8c3-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured
===============================================================================
FOPs             Baseline    DailyBuild    Baseline vs DailyBuild
===============================================================================
create           15476       15868         2
ls-l             229378      231887        1
chmod            24180       24651         1
stat             35121       35115         0
read             29017       30112         3
append           13818       13971         1
rename           962         1006          4
delete-renamed   22363       23206         3
mkdir            3180        3308          4
rmdir            2652        3636          37
cleanup          9525        10295         8
===============================================================================
===============================================================================
CPU Usage (%) by Servers and Clients
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===============================================================================
create           128.85    131.14     1.75    322.89    330.40     2.27
ls-l             9.07      25.43     64.33    20.47     1.73    -1083.24
chmod            107.04    94.79    -12.92    273.72    306.38    10.66
stat             86.60     57.53    -50.53    212.06    254.85    16.79
read             62.01     59.08     -4.96    227.10    262.81    13.59
append           132.05    110.50   -19.50    292.08    304.15     3.97
rename           40.95     40.47     -1.19    87.01     85.66     -1.58
delete-renamed   187.21    143.43   -30.52    332.34    373.83    11.10
mkdir            235.22    225.58    -4.27    392.84    394.80     0.50
rmdir            233.31    229.35    -1.73    396.89    389.19    -1.98
cleanup          79.43     71.75    -10.70    208.93    214.62     2.65
===============================================================================
NOTE: The CPU usage per brick process is averaged across the servers and clients.
===============================================================================
Memory Usage by Servers and Clients (MB)
_______________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
Memory Usage by Servers and Clients (MB)
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           564.94       555.51          -1.70             69.71        66.95           -4.12
ls-l             564.88       555.51          -1.69             70.47        67.65           -4.17
chmod            564.88       555.51          -1.69             78.62        71.73           -9.61
stat             564.87       555.53          -1.68             79.44        72.31           -9.86
read             564.86       555.57          -1.67             79.98        72.80           -9.86
append           564.85       555.61          -1.66             79.10        72.18           -9.59
rename           564.90       555.72          -1.65             78.77        72.19           -9.11
delete-renamed   564.88       555.72          -1.65             78.42        71.65           -9.45
mkdir            568.25       556.45          -2.12             82.06        74.04          -10.83
rmdir            566.11       557.74          -1.50             77.90        71.19           -9.43
cleanup          565.88       557.74          -1.46             66.20        62.50           -5.92
===========================================================================================================
NOTE: The memory usage per brick process, averaged across the servers and clients.

From gluster-jenkins at redhat.com Wed Oct 26 01:20:16 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Wed, 26 Oct 2022 01:20:16 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 26/10/2022 Test Status: PASS (6.09%)
Message-ID: <1464252489.234.1666747216911@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221024.dfd193a-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

===========================================================================
FOPs                 Baseline     DailyBuild    Baseline vs DailyBuild (%)
===========================================================================
create               15476        15965         3
ls-l                 229378       238818        4
chmod                24180        24573         1
stat                 35121        35401         0
read                 29017        29896         3
append               13818        14038         1
rename               962          1017          5
delete-renamed       22363        23305         4
mkdir                3180         3295          3
rmdir                2652         3632          36
cleanup              9525         10206         7
===========================================================================

===========================================================================================================
CPU Usage (%) by Servers and Clients
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           128.85       130.64            1.37           322.89       327.67             1.46
ls-l               9.07        25.14           63.92            20.47         1.84         -1012.50
chmod            107.04        94.40          -13.39           273.72       308.70            11.33
stat              86.60        57.38          -50.92           212.06       257.95            17.79
read              62.01        57.66           -7.54           227.10       263.19            13.71
append           132.05       110.00          -20.05           292.08       307.00             4.86
rename            40.95        40.38           -1.41            87.01        85.94            -1.25
delete-renamed   187.21       141.40          -32.40           332.34       376.66            11.77
mkdir            235.22       225.59           -4.27           392.84       395.63             0.71
rmdir            233.31       227.09           -2.74           396.89       389.82            -1.81
cleanup           79.43        72.57           -9.45           208.93       215.59             3.09
===========================================================================================================
NOTE: The CPU usage per brick process, averaged across the servers and clients.
===========================================================================================================
Memory Usage by Servers and Clients (MB)
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           564.94       557.25          -1.38             69.71        67.90           -2.67
ls-l             564.88       557.19          -1.38             70.47        68.50           -2.88
chmod            564.88       557.25          -1.37             78.62        73.06           -7.61
stat             564.87       557.31          -1.36             79.44        73.40           -8.23
read             564.86       557.32          -1.35             79.98        73.84           -8.32
append           564.85       557.34          -1.35             79.10        73.08           -8.24
rename           564.90       557.35          -1.35             78.77        73.28           -7.49
delete-renamed   564.88       557.38          -1.35             78.42        72.89           -7.59
mkdir            568.25       557.95          -1.85             82.06        75.93           -8.07
rmdir            566.11       559.46          -1.19             77.90        72.30           -7.75
cleanup          565.88       559.47          -1.15             66.20        60.83           -8.83
===========================================================================================================
NOTE: The memory usage per brick process, averaged across the servers and clients.
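For readers who want to reproduce a run: the FOP names in these tables correspond to operation types of the smallfile benchmark named in the "Tool:" field. A sketch of how such a run might be driven is below; the thread count, file count, file size and mount point are placeholders, since the reports do not state the actual job parameters.

    import subprocess

    FOPS = ["create", "ls-l", "chmod", "stat", "read", "append", "rename",
            "delete-renamed", "mkdir", "rmdir", "cleanup"]

    for op in FOPS:
        subprocess.run(
            ["python3", "smallfile_cli.py",
             "--operation", op,
             "--threads", "8",           # placeholder
             "--files", "10000",         # placeholder
             "--file-size", "64",        # KB, placeholder
             "--top", "/mnt/perfvol"],   # client-side mount point, placeholder
            check=True,
        )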
From gluster-jenkins at redhat.com Thu Oct 27 01:25:13 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Thu, 27 Oct 2022 01:25:13 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 27/10/2022 Test Status: PASS (6.18%)
Message-ID: <1097343364.237.1666833914142@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221024.dfd193a-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

===========================================================================
FOPs                 Baseline     DailyBuild    Baseline vs DailyBuild (%)
===========================================================================
create               15476        16133         4
ls-l                 229378       236873        3
chmod                24180        24589         1
stat                 35121        35625         1
read                 29017        30111         3
append               13818        14115         2
rename               962          1006          4
delete-renamed       22363        23306         4
mkdir                3180         3296          3
rmdir                2652         3622          36
cleanup              9525         10224         7
===========================================================================

===========================================================================================================
CPU Usage (%) by Servers and Clients
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           128.85       132.99            3.11           322.89       328.25             1.63
ls-l               9.07        25.16           63.95            20.47         1.78         -1050.00
chmod            107.04        94.94          -12.74           273.72       307.84            11.08
stat              86.60        57.51          -50.58           212.06       258.03            17.82
read              62.01        58.78           -5.50           227.10       263.39            13.78
append           132.05       111.16          -18.79           292.08       309.82             5.73
rename            40.95        40.43           -1.29            87.01        86.25            -0.88
delete-renamed   187.21       142.58          -31.30           332.34       375.09            11.40
mkdir            235.22       224.72           -4.67           392.84       395.45             0.66
rmdir            233.31       225.19           -3.61           396.89       387.94            -2.31
cleanup           79.43        71.91          -10.46           208.93       215.13             2.88
===========================================================================================================
NOTE: The CPU usage per brick process, averaged across the servers and clients.

===========================================================================================================
Memory Usage by Servers and Clients (MB)
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           564.94       557.46          -1.34             69.71        67.33           -3.53
ls-l             564.88       557.40          -1.34             70.47        68.09           -3.50
chmod            564.88       557.40          -1.34             78.62        73.27           -7.30
stat             564.87       557.41          -1.34             79.44        73.59           -7.95
read             564.86       557.43          -1.33             79.98        74.08           -7.96
append           564.85       557.43          -1.33             79.10        73.49           -7.63
rename           564.90       557.53          -1.32             78.77        73.27           -7.51
delete-renamed   564.88       557.53          -1.32             78.42        73.03           -7.38
mkdir            568.25       558.22          -1.80             82.06        76.00           -7.97
rmdir            566.11       559.65          -1.15             77.90        72.06           -8.10
cleanup          565.88       559.65          -1.11             66.20        60.36           -9.68
===========================================================================================================
NOTE: The memory usage per brick process, averaged across the servers and clients.
From gluster-jenkins at redhat.com Fri Oct 28 01:20:10 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 28 Oct 2022 01:20:10 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 28/10/2022 Test Status: PASS (6.09%)
Message-ID: <938194640.240.1666920010566@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221024.dfd193a-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

===========================================================================
FOPs                 Baseline     DailyBuild    Baseline vs DailyBuild (%)
===========================================================================
create               15476        15948         3
ls-l                 229378       242103        5
chmod                24180        24658         1
stat                 35121        35486         1
read                 29017        30067         3
append               13818        13968         1
rename               962          1010          4
delete-renamed       22363        23159         3
mkdir                3180         3304          3
rmdir                2652         3614          36
cleanup              9525         10207         7
===========================================================================

===========================================================================================================
CPU Usage (%) by Servers and Clients
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           128.85       130.15            1.00           322.89       331.38             2.56
ls-l               9.07        25.05           63.79            20.47         1.71         -1097.08
chmod            107.04        93.71          -14.22           273.72       309.28            11.50
stat              86.60        57.32          -51.08           212.06       259.19            18.18
read              62.01        58.37           -6.24           227.10       264.10            14.01
append           132.05       109.92          -20.13           292.08       306.84             4.81
rename            40.95        40.23           -1.79            87.01        86.62            -0.45
delete-renamed   187.21       142.43          -31.44           332.34       368.24             9.75
mkdir            235.22       224.25           -4.89           392.84       396.03             0.81
rmdir            233.31       222.84           -4.70           396.89       388.68            -2.11
cleanup           79.43        73.55           -7.99           208.93       215.13             2.88
===========================================================================================================
NOTE: The CPU usage per brick process, averaged across the servers and clients.

===========================================================================================================
Memory Usage by Servers and Clients (MB)
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           564.94       557.07          -1.41             69.71        68.02           -2.48
ls-l             564.88       557.02          -1.41             70.47        68.69           -2.59
chmod            564.88       557.05          -1.41             78.62        73.25           -7.33
stat             564.87       557.09          -1.40             79.44        73.41           -8.21
read             564.86       557.13          -1.39             79.98        73.95           -8.15
append           564.85       557.13          -1.39             79.10        73.16           -8.12
rename           564.90       557.18          -1.39             78.77        73.22           -7.58
delete-renamed   564.88       557.21          -1.38             78.42        72.78           -7.75
mkdir            568.25       557.81          -1.87             82.06        75.88           -8.14
rmdir            566.11       559.21          -1.23             77.90        72.08           -8.07
cleanup          565.88       559.21          -1.19             66.20        60.20           -9.97
===========================================================================================================
NOTE: The memory usage per brick process, averaged across the servers and clients.
From gluster-jenkins at redhat.com Fri Oct 28 19:50:23 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 28 Oct 2022 19:50:23 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 29/10/2022 Test Status: PASS (5.82%)
Message-ID: <1749915603.242.1666986623855@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221024.dfd193a-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

===========================================================================
FOPs                 Baseline     DailyBuild    Baseline vs DailyBuild (%)
===========================================================================
create               15476        16145         4
ls-l                 229378       240727        4
chmod                24180        24616         1
stat                 35121        35080         0
read                 29017        30064         3
append               13818        13942         0
rename               962          1001          4
delete-renamed       22363        23136         3
mkdir                3180         3297          3
rmdir                2652         3616          36
cleanup              9525         10155         6
===========================================================================

===========================================================================================================
CPU Usage (%) by Servers and Clients
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           128.85       134.15            3.95           322.89       337.51             4.33
ls-l               9.07        25.19           63.99            20.47         1.77         -1056.50
chmod            107.04        93.89          -14.01           273.72       307.83            11.08
stat              86.60        57.43          -50.79           212.06       254.61            16.71
read              62.01        58.66           -5.71           227.10       269.10            15.61
append           132.05       111.53          -18.40           292.08       307.17             4.91
rename            40.95        40.34           -1.51            87.01        86.70            -0.36
delete-renamed   187.21       142.82          -31.08           332.34       375.68            11.54
mkdir            235.22       224.24           -4.90           392.84       396.00             0.80
rmdir            233.31       223.83           -4.24           396.89       389.64            -1.86
cleanup           79.43        73.23           -8.47           208.93       214.49             2.59
===========================================================================================================
NOTE: The CPU usage per brick process, averaged across the servers and clients.

===========================================================================================================
Memory Usage by Servers and Clients (MB)
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           564.94       556.91          -1.44             69.71        68.03           -2.47
ls-l             564.88       556.91          -1.43             70.47        68.61           -2.71
chmod            564.88       556.93          -1.43             78.62        73.56           -6.88
stat             564.87       556.98          -1.42             79.44        73.79           -7.66
read             564.86       556.99          -1.41             79.98        74.18           -7.82
append           564.85       556.99          -1.41             79.10        73.43           -7.72
rename           564.90       557.12          -1.40             78.77        73.90           -6.59
delete-renamed   564.88       557.16          -1.39             78.42        73.47           -6.74
mkdir            568.25       557.87          -1.86             82.06        76.28           -7.58
rmdir            566.11       559.12          -1.25             77.90        72.51           -7.43
cleanup          565.88       559.12          -1.21             66.20        60.77           -8.94
===========================================================================================================
NOTE: The memory usage per brick process, averaged across the servers and clients.
From gluster-jenkins at redhat.com Fri Oct 28 21:06:17 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 28 Oct 2022 21:06:17 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Disperse] Performance report for Gluster Upstream - 29/10/2022 Test Status: PASS (12.27%)
Message-ID: <261969876.244.1666991177231@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221024.dfd193a-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Disperse
Volume Option: No volume options configured

===========================================================================
FOPs                 Baseline     DailyBuild    Baseline vs DailyBuild (%)
===========================================================================
create               20121        20551         2
ls-l                 137612       138083        0
chmod                32549        33119         1
stat                 76100        76549         0
read                 19584        19767         0
append               19598        20509         4
rename               1005         1039          3
delete-renamed       24506        24508         0
mkdir                2683         2714          1
rmdir                2447         5450          122
cleanup              21294        21817         2
===========================================================================

From gluster-jenkins at redhat.com Fri Oct 28 21:54:04 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 28 Oct 2022 21:54:04 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Replica-3] Performance report for Gluster Upstream - 29/10/2022 Test Status: PASS (5.00%)
Message-ID: <694143210.246.1666994044619@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221024.dfd193a-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Replica-3
Volume Option: No volume options configured

===========================================================================
FOPs                 Baseline     DailyBuild    Baseline vs DailyBuild (%)
===========================================================================
random-write         654          697           6
random-read          1802         1919          6
sequential-read      6186         6685          8
sequential-write     2451         2446          0
===========================================================================
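The four largefile workloads map directly onto fio's standard rw modes (read, write, randread, randwrite). A sketch of how such a run might be launched is below; the block size, file size, job count and mount point are placeholders, since the reports do not include the fio job definition.

    import subprocess

    WORKLOADS = {
        "sequential-read":  "read",
        "sequential-write": "write",
        "random-read":      "randread",
        "random-write":     "randwrite",
    }

    for name, rw in WORKLOADS.items():
        subprocess.run(
            ["fio", f"--name={name}",
             f"--rw={rw}",
             "--bs=1M",                   # placeholder block size
             "--size=10g",                # placeholder file size per job
             "--numjobs=4",               # placeholder
             "--directory=/mnt/perfvol",  # placeholder mount point
             "--ioengine=libaio", "--direct=1"],
            check=True,
        )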
From gluster-jenkins at redhat.com Fri Oct 28 22:33:44 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 28 Oct 2022 22:33:44 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Disperse] Performance report for Gluster Upstream - 29/10/2022 Test Status: PASS (2.00%)
Message-ID: <742064113.248.1666996424099@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221024.dfd193a-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Disperse
Volume Option: No volume options configured

===========================================================================
FOPs                 Baseline     DailyBuild    Baseline vs DailyBuild (%)
===========================================================================
random-write         1067         1105          3
random-read          1334         1397          4
sequential-read      6631         6732          1
sequential-write     4855         4863          0
===========================================================================

From gluster-jenkins at redhat.com Fri Oct 28 23:20:40 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Fri, 28 Oct 2022 23:20:40 +0000 (UTC)
Subject: [Gluster-devel] [Largefile-Replica-3 with Shard] Performance report for Gluster Upstream - 29/10/2022 Test Status: PASS (0.75%)
Message-ID: <2059473814.251.1666999240967@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221024.dfd193a-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Largefile
Tool: fio
Volume type: Replica-3
Volume Option: Shard

===========================================================================
FOPs                 Baseline     DailyBuild    Baseline vs DailyBuild (%)
===========================================================================
random-write         1746         1807          3
random-read          2779         2768          0
sequential-read      7249         7217          0
sequential-write     2351         2373          0
===========================================================================
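The three volume flavours exercised in this batch (Replica-3, Disperse, and "Replica-3 with Shard") can be expressed as ordinary gluster CLI calls. The sketch below shows one way to set them up; host names, brick paths and the disperse geometry are illustrative only, as the reports do not describe the brick layout.

    import subprocess

    def sh(cmd):
        subprocess.run(cmd, check=True)

    rep_bricks = [f"server{i}:/bricks/rep/perfvol" for i in (1, 2, 3)]
    ec_bricks = [f"server{i}:/bricks/ec/perfvol" for i in range(1, 7)]

    # Replica-3: three-way synchronous replication
    sh(["gluster", "volume", "create", "perfvol", "replica", "3", *rep_bricks])

    # Disperse: erasure coding (4 data + 2 redundancy is an assumed geometry)
    sh(["gluster", "volume", "create", "perfvol-ec",
        "disperse", "6", "redundancy", "2", *ec_bricks])

    # "Replica-3 with Shard" is the replica volume with sharding enabled,
    # matching the "Volume Option: Shard" field in the reports above
    sh(["gluster", "volume", "set", "perfvol", "features.shard", "on"])
    sh(["gluster", "volume", "start", "perfvol"])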
From gluster-jenkins at redhat.com Mon Oct 31 01:19:28 2022
From: gluster-jenkins at redhat.com (Gluster-jenkins)
Date: Mon, 31 Oct 2022 01:19:28 +0000 (UTC)
Subject: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 31/10/2022 Test Status: PASS (6.09%)
Message-ID: <811821001.257.1667179168350@gluster-downstream-jenkins-csb-storage>

Test details:
RPM Location: Upstream
OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
Baseline Gluster version: glusterfs-10.1-1
Current Gluster version: glusterfs-20221024.dfd193a-0.0
Intermediate Gluster version: No intermediate baseline
Test type: Smallfile
Tool: smallfile
Volume type: Replica-3
Volume Option: No volume options configured

===========================================================================
FOPs                 Baseline     DailyBuild    Baseline vs DailyBuild (%)
===========================================================================
create               15476        15869         2
ls-l                 229378       241183        5
chmod                24180        24674         2
stat                 35121        35399         0
read                 29017        29788         2
append               13818        13978         1
rename               962          1008          4
delete-renamed       22363        23043         3
mkdir                3180         3308          4
rmdir                2652         3627          36
cleanup              9525         10333         8
===========================================================================

===========================================================================================================
CPU Usage (%) by Servers and Clients
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           128.85       129.02            0.13           322.89       321.99            -0.28
ls-l               9.07        25.38           64.26            20.47         1.77         -1056.50
chmod            107.04        91.79          -16.61           273.72       307.77            11.06
stat              86.60        57.90          -49.57           212.06       254.65            16.72
read              62.01        58.45           -6.09           227.10       259.02            12.32
append           132.05       111.27          -18.68           292.08       304.19             3.98
rename            40.95        40.60           -0.86            87.01        85.58            -1.67
delete-renamed   187.21       141.04          -32.74           332.34       374.03            11.15
mkdir            235.22       229.31           -2.58           392.84       396.38             0.89
rmdir            233.31       229.66           -1.59           396.89       388.61            -2.13
cleanup           79.43        74.02           -7.31           208.93       214.37             2.54
===========================================================================================================
NOTE: The CPU usage per brick process, averaged across the servers and clients.

===========================================================================================================
Memory Usage by Servers and Clients (MB)
___________________________________________________________________________________________________________
FOP              Base_Server  Current_Server  Base vs Current  Base_Client  Current_Client  Base vs Current
===========================================================================================================
create           564.94       556.47          -1.52             69.71        67.28           -3.61
ls-l             564.88       556.41          -1.52             70.47        68.20           -3.33
chmod            564.88       556.42          -1.52             78.62        73.21           -7.39
stat             564.87       556.42          -1.52             79.44        73.55           -8.01
read             564.86       556.43          -1.52             79.98        74.01           -8.07
append           564.85       556.44          -1.51             79.10        73.18           -8.09
rename           564.90       556.46          -1.52             78.77        72.86           -8.11
delete-renamed   564.88       556.47          -1.51             78.42        72.39           -8.33
mkdir            568.25       557.21          -1.98             82.06        75.65           -8.47
rmdir            566.11       558.78          -1.31             77.90        71.96           -8.25
cleanup          565.88       558.80          -1.27             66.20        60.09          -10.17
===========================================================================================================
NOTE: The memory usage per brick process, averaged across the servers and clients.
From sajmoham at redhat.com Mon Oct 31 02:32:43 2022
From: sajmoham at redhat.com (sajmoham at redhat.com)
Date: Mon, 31 Oct 2022 02:32:43 +0000
Subject: [Gluster-devel] Gluster Code Metrics Weekly Report
Message-ID: <000000000000154a5005ec4b6cf7@google.com>

Gluster Code Metrics

Metrics      Values
Clang Scan   #VALUE!
Coverity     16
Line Cov
Func Cov

Trend Graph
Check the latest run: Coverity | Clang | Code Coverage