From jenkins at build.gluster.org Mon Jul 1 01:45:02 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 1 Jul 2019 01:45:02 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <1050437470.35.1561945502710.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1722708 / bitrot: WORM: Segmentation Fault if bitrot stub do signature https://bugzilla.redhat.com/1722709 / bitrot: WORM: Segmentation Fault if bitrot stub do signature https://bugzilla.redhat.com/1719778 / core: build fails for every patch on release 5 https://bugzilla.redhat.com/1721842 / core: Spelling errors in 6.3 https://bugzilla.redhat.com/1723617 / distribute: nfs-ganesha gets empty stat (all zero) when glfs_mkdir return success https://bugzilla.redhat.com/1724618 / ganesha-nfs: ganesha : nfstest_posix from NFSTest https://bugzilla.redhat.com/1722390 / glusterd: "All subvolumes are down" when all bricks are online https://bugzilla.redhat.com/1722187 / glusterd: Glusterd Seg faults (sig 11) when RDMA used with MLNX_OFED https://bugzilla.redhat.com/1724024 / glusterd: use more secure mode for mkdir operations https://bugzilla.redhat.com/1718741 / glusterfind: GlusterFS having high CPU https://bugzilla.redhat.com/1716875 / gluster-smb: Inode Unref Assertion failed: inode->ref https://bugzilla.redhat.com/1716455 / gluster-smb: OS X error -50 when creating sub-folder on Samba share when using Gluster VFS https://bugzilla.redhat.com/1716440 / gluster-smb: SMBD thread panics when connected to from OS X machine https://bugzilla.redhat.com/1720733 / libglusterfsclient: glusterfs 4.1.7 client crash https://bugzilla.redhat.com/1717824 / locks: Fencing: Added the tcmu-runner ALUA feature support but after one of node is rebooted the glfs_file_lock() get stucked https://bugzilla.redhat.com/1718562 / locks: flock failure (regression) https://bugzilla.redhat.com/1724957 / project-infrastructure: Grant additional maintainers merge rights on release branches https://bugzilla.redhat.com/1719388 / project-infrastructure: infra: download.gluster.org /var/www/html/... is out of free space https://bugzilla.redhat.com/1721353 / project-infrastructure: Run 'line-coverage' regression runs on a latest fedora machine (say fedora30). https://bugzilla.redhat.com/1720453 / project-infrastructure: Unable to access review.gluster.org https://bugzilla.redhat.com/1721462 / quota: Quota limits not honored writes allowed past quota limit. https://bugzilla.redhat.com/1723781 / tests: Run 'known-issues' and 'bad-tests' in line-coverage test (nightly) https://bugzilla.redhat.com/1724624 / upcall: LINK does not invalidate metadata cache of parent directory [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 2793 bytes Desc: not available URL: From rabhat at redhat.com Tue Jul 2 15:22:25 2019 From: rabhat at redhat.com (FNU Raghavendra Manjunath) Date: Tue, 2 Jul 2019 11:22:25 -0400 Subject: [Gluster-devel] fallocate behavior in glusterfs Message-ID: Hi All, In glusterfs, there is an issue regarding the fallocate behavior. In short, if someone does fallocate from the mount point with some size that is greater than the available size in the backend filesystem where the file is present, then fallocate can fail with a subset of the required number of blocks allocated and then failing in the backend filesystem with ENOSPC error. 
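For illustration, the behavior can be reproduced from the mount point with a small test program along the following lines (a rough sketch only; the mount path and the 10GB size are placeholders and assume the backend filesystem has less free space than what is requested):

#define _FILE_OFFSET_BITS 64
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    /* path on the glusterfs mount and a size bigger than the free space
       on the backend filesystem (both are just placeholders here) */
    const char *path = argc > 1 ? argv[1] : "/mnt/glusterfs/testfile";
    off_t len = 10LL * 1024 * 1024 * 1024; /* 10GB */
    struct stat st;

    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* expected to fail with ENOSPC once the backend runs out of blocks,
       possibly after a subset of the blocks has already been allocated */
    if (fallocate(fd, 0, 0, len) < 0)
        perror("fallocate");

    /* on xfs st_blocks shows what was actually allocated before the
       failure; through glusterfs it is currently reported as 0 */
    if (fstat(fd, &st) == 0)
        printf("st_size=%lld st_blocks=%lld\n",
               (long long)st.st_size, (long long)st.st_blocks);

    close(fd);
    return 0;
}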
The behavior of fallocate in itself is similar to how it would have been on a disk filesystem (at least xfs, where it was checked): it allocates a subset of the required number of blocks and then fails with ENOSPC, and the file itself shows the number of blocks in stat to be whatever was allocated as part of fallocate. Please refer to [1] where the issue is explained.

Now, there is one small difference between the behavior of glusterfs and that of xfs. In xfs, after fallocate fails, doing 'stat' on the file shows the number of blocks that have been allocated. Whereas in glusterfs, the number of blocks is shown as zero, which makes tools like "du" show zero consumption. This difference in behavior comes from how libglusterfs handles sparse files etc. when calculating the number of blocks (mentioned in [1]).

At this point I can think of 3 things on how to handle this.

1) Except for how many blocks are shown in the stat output for the file from the mount point (on which fallocate was done), the remaining behavior of attempting to allocate the requested size and failing when the filesystem becomes full is similar to that of XFS.

Hence, what is required is to come up with a solution on how libglusterfs calculates blocks for sparse files etc. (without breaking any of the existing components and features). This makes the behavior similar to that of the backend filesystem. This might require its own time to fix the libglusterfs logic without impacting anything else.

OR

2) Once the fallocate fails in the backend filesystem, make the posix xlator in the brick truncate the file back to the size it had before the fallocate was attempted. A patch [2] has been sent for this. But there is an issue with this when parallel writes and fallocate operations happen on the same file: it can lead to data loss.

a) statpre is obtained ===> before fallocate is attempted, get the stat and hence the size of the file
b) a parallel Write fop on the same file that extends the file is successful
c) Fallocate fails
d) ftruncate truncates it to the size given by statpre (i.e. the previous stat and the size obtained in step a)

OR

3) Make posix check for the available disk size before doing fallocate. i.e. in fallocate, once posix gets the number of bytes to be allocated for the file from a particular offset, it checks whether that many bytes are available in the disk. If not, fail the fallocate fop with ENOSPC (without attempting it on the backend filesystem).

There is still a probability of a parallel write happening while this fallocate is in progress, so that by the time the fallocate system call is attempted on the disk, the available space might be less than what was calculated before fallocate. i.e. the following things can happen:

a) statfs ===> get the available space of the backend filesystem
b) a parallel write succeeds and extends the file
c) fallocate is attempted assuming there is sufficient space in the backend

While the above situation can arise, I think we are still fine, because fallocate is attempted from the offset received in the fop. So, irrespective of whether the write extended the file or not, the fallocate itself will be attempted only for the number of bytes from that offset which we found to be available by getting the statfs information.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1724754#c3
[2] https://review.gluster.org/#/c/glusterfs/+/22969/

Please provide feedback.

Regards,
Raghavendra
-------------- next part --------------
An HTML attachment was scrubbed...
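To make option 3 above more concrete, the space check could look roughly like the sketch below. This is a simplified, standalone illustration rather than actual posix xlator code: the function name and parameters are made up, and a real implementation would also need to account for blocks already allocated in the requested range.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/statvfs.h>

/* Sketch of option 3: refuse the allocation up front when statvfs()
   says the backend filesystem cannot hold it. */
static int
fallocate_with_space_check(const char *export_path, int fd,
                           off_t offset, off_t len)
{
    struct statvfs vfs;

    if (statvfs(export_path, &vfs) < 0)
        return -errno;

    /* space currently available to unprivileged users, in bytes */
    unsigned long long avail =
        (unsigned long long)vfs.f_bavail * vfs.f_frsize;

    if ((unsigned long long)len > avail)
        return -ENOSPC; /* fail without touching the backend */

    /* the race described above still exists: a parallel write can
       consume space between the statvfs() call and this point */
    if (fallocate(fd, 0, offset, len) < 0)
        return -errno;

    return 0;
}

Using f_bavail rather than f_bfree keeps the root-reserved blocks out of the calculation, which is usually what is wanted for a check done on behalf of client I/O.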
URL: From ravishankar at redhat.com Wed Jul 3 04:43:19 2019 From: ravishankar at redhat.com (Ravishankar N) Date: Wed, 3 Jul 2019 10:13:19 +0530 Subject: [Gluster-devel] fallocate behavior in glusterfs In-Reply-To: References: Message-ID: <1081f226-67c2-0d19-af99-c4d691b10484@redhat.com> On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote: > > Hi All, > > In glusterfs, there is an issue regarding the fallocate behavior. In > short, if someone does fallocate from the mount point with some size > that is greater than the available size in the backend filesystem > where the file is present, then fallocate can fail with a subset of > the required number of blocks allocated and then failing in the > backend filesystem with ENOSPC error. > > The behavior of fallocate in itself is simlar to how it would have > been on a disk filesystem (atleast xfs where it was checked). i.e. > allocates subset of the required number of blocks and then fail with > ENOSPC. And the file in itself would show the number of blocks in stat > to be whatever was allocated as part of fallocate. Please refer [1] > where the issue is explained. > > Now, there is one small difference between how the behavior is between > glusterfs and xfs. > In xfs after fallocate fails, doing 'stat' on the file shows the > number of blocks that have been allocated. Whereas in glusterfs, the > number of blocks is shown as zero which makes tools like "du" show > zero consumption. This difference in behavior in glusterfs is because > of libglusterfs on how it handles sparse files etc for calculating > number of blocks (mentioned in [1]) > > At this point I can think of 3 things on how to handle this. > > 1) Except for how many blocks are shown in the stat output for the > file from the mount point (on which fallocate was done), the remaining > behavior of attempting to allocate the requested size and failing when > the filesystem becomes full is similar to that of XFS. > > Hence, what is required is to come up with a solution on how > libglusterfs calculate blocks for sparse files etc (without breaking > any of the existing components and features). This makes the behavior > similar to that of backend filesystem. This might require its own time > to fix libglusterfs logic without impacting anything else. I think we should just revert the commit b1a5fa55695f497952264e35a9c8eb2bbf1ec4c3 (BZ 817343) and see if it really breaks anything (or check whatever it breaks is something that we can live with). XFS speculative preallocation is not permanent and the extra space is freed up eventually. It can be sped up via procfs tunable: http://xfs.org/index.php/XFS_FAQ#Q:_How_can_I_speed_up_or_avoid_delayed_removal_of_speculative_preallocation.3F. We could also tune the allocsize option to a low value like 4k so that glusterfs quota is not affected. FWIW, ENOSPC is not the only fallocate problem in gluster because of? 'iatt->ia_block' tweaking. It also breaks the --keep-size option (i.e. the FALLOC_FL_KEEP_SIZE flag in fallocate(2)) and reports incorrect du size. Regards, Ravi > > OR > > 2) Once the fallocate fails in the backend filesystem, make posix > xlator in the brick truncate the file to the previous size of the file > before attempting fallocate. A patch [2] has been sent for this. But > there is an issue with this when there are parallel writes and > fallocate operations happening on the same file. It can lead to a data > loss. 
>
> a) statpre is obtained ===> before fallocate is attempted, get the stat and hence the size of the file
> b) A parallel Write fop on the same file that extends the file is successful
> c) Fallocate fails
> d) ftruncate truncates it to the size given by statpre (i.e. the previous stat and the size obtained in step a)
>
> OR
>
> 3) Make posix check for available disk size before doing fallocate. i.e. in fallocate once posix gets the number of bytes to be allocated for the file from a particular offset, it checks whether so many bytes are available or not in the disk. If not, fail the fallocate fop with ENOSPC (without attempting it on the backend filesystem).
>
> There still is a probability of a parallel write happening while this fallocate is happening and by the time the fallocate system call is attempted on the disk, the available space might have been less than what was calculated before fallocate. i.e. the following things can happen
>
> a) statfs ===> get the available space of the backend filesystem
> b) a parallel write succeeds and extends the file
> c) fallocate is attempted assuming there is sufficient space in the backend
>
> While the above situation can arise, I think we are still fine. Because fallocate is attempted from the offset received in the fop. So, irrespective of whether write extended the file or not, the fallocate itself will be attempted for so many bytes from the offset which we found to be available by getting statfs information.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1724754#c3
> [2] https://review.gluster.org/#/c/glusterfs/+/22969/
>
> Please provide feedback.
>
> Regards,
> Raghavendra
>
> _______________________________________________
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rkothiya at redhat.com Wed Jul 3 05:00:58 2019
From: rkothiya at redhat.com (Rinku Kothiya)
Date: Wed, 3 Jul 2019 10:30:58 +0530
Subject: [Gluster-devel] Release 7 Branch Created
Message-ID:

Hi Team,

Release 7 branch has been created in upstream.

## Schedule

Currently, working backwards on the schedule, here's what we have:
- Announcement: Week of Aug 4th, 2019
- GA tagging: Aug-02-2019
- RC1: On demand before GA
- RC0: July-03-2019
- Late features cut-off: Week of June-24th, 2019
- Branching (feature cutoff date): June-17-2019

Regards
Rinku
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From amukherj at redhat.com Wed Jul 3 05:38:50 2019
From: amukherj at redhat.com (Atin Mukherjee)
Date: Wed, 3 Jul 2019 11:08:50 +0530
Subject: [Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4632
In-Reply-To: <890301571.38.1562092612047.JavaMail.jenkins@jenkins-el7.rht.gluster.org>
References: <1981025080.37.1562005037885.JavaMail.jenkins@jenkins-el7.rht.gluster.org> <890301571.38.1562092612047.JavaMail.jenkins@jenkins-el7.rht.gluster.org>
Message-ID:

Can we check these failures please?
2 test(s) failed ./tests/bugs/glusterd/bug-1699339.t ./tests/bugs/glusterd/bug-857330/normal.t ---------- Forwarded message --------- From: Date: Wed, Jul 3, 2019 at 12:08 AM Subject: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4632 To: See < https://build.gluster.org/job/regression-test-burn-in/4632/display/redirect?page=changes > Changes: [Amar Tumballi] Removing one top command from gluster v help [Amar Tumballi] glusterfs-fops: fix the modularity [Sheetal Pamecha] cli: Remove Wformat-truncation compiler warning [Nithya Balachandran] cluster/dht: Fixed a memleak in dht_rename_cbk ------------------------------------------ [...truncated 4.02 MB...] ./tests/bugs/ec/bug-1227869.t - 9 second ./tests/bugs/distribute/bug-1122443.t - 9 second ./tests/bugs/cli/bug-1022905.t - 9 second ./tests/bugs/changelog/bug-1321955.t - 9 second ./tests/bugs/changelog/bug-1208470.t - 9 second ./tests/bugs/bug-1258069.t - 9 second ./tests/bugs/bitrot/1209751-bitrot-scrub-tunable-reset.t - 9 second ./tests/bitrot/bug-1207627-bitrot-scrub-status.t - 9 second ./tests/basic/xlator-pass-through-sanity.t - 9 second ./tests/basic/md-cache/bug-1317785.t - 9 second ./tests/basic/glusterd/thin-arbiter-volume-probe.t - 9 second ./tests/basic/afr/stale-file-lookup.t - 9 second ./tests/basic/afr/root-squash-self-heal.t - 9 second ./tests/basic/afr/add-brick-self-heal.t - 9 second ./tests/features/readdir-ahead.t - 8 second ./tests/bugs/snapshot/bug-1260848.t - 8 second ./tests/bugs/snapshot/bug-1064768.t - 8 second ./tests/bugs/shard/shard-inode-refcount-test.t - 8 second ./tests/bugs/shard/bug-1260637.t - 8 second ./tests/bugs/replicate/bug-986905.t - 8 second ./tests/bugs/replicate/bug-1686568-send-truncate-on-arbiter-from-shd.t - 8 second ./tests/bugs/replicate/bug-1132102.t - 8 second ./tests/bugs/replicate/bug-1101647.t - 8 second ./tests/bugs/replicate/bug-1037501.t - 8 second ./tests/bugs/protocol/bug-1321578.t - 8 second ./tests/bugs/posix/bug-1175711.t - 8 second ./tests/bugs/nfs/bug-915280.t - 8 second ./tests/bugs/io-cache/bug-858242.t - 8 second ./tests/bugs/glusterfs/bug-872923.t - 8 second ./tests/bugs/glusterfs/bug-848251.t - 8 second ./tests/bugs/glusterd/bug-1696046.t - 8 second ./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t - 8 second ./tests/bugs/fuse/bug-983477.t - 8 second ./tests/bugs/distribute/bug-1088231.t - 8 second ./tests/bugs/distribute/bug-1086228.t - 8 second ./tests/bugs/cli/bug-1087487.t - 8 second ./tests/bugs/bug-1371806_2.t - 8 second ./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t - 8 second ./tests/bitrot/br-stub.t - 8 second ./tests/basic/glusterd/arbiter-volume-probe.t - 8 second ./tests/basic/gfapi/libgfapi-fini-hang.t - 8 second ./tests/basic/fop-sampling.t - 8 second ./tests/basic/fencing/fencing-crash-conistency.t - 8 second ./tests/basic/ec/statedump.t - 8 second ./tests/basic/distribute/file-create.t - 8 second ./tests/basic/ctime/ctime-noatime.t - 8 second ./tests/basic/changelog/changelog-rename.t - 8 second ./tests/basic/afr/tarissue.t - 8 second ./tests/basic/afr/ta-read.t - 8 second ./tests/basic/afr/granular-esh/add-brick.t - 8 second ./tests/basic/afr/afr-read-hash-mode.t - 8 second ./tests/bugs/upcall/bug-1369430.t - 7 second ./tests/bugs/shard/bug-1342298.t - 7 second ./tests/bugs/shard/bug-1259651.t - 7 second ./tests/bugs/replicate/bug-1626994-info-split-brain.t - 7 second ./tests/bugs/replicate/bug-1250170-fsync.t - 7 second ./tests/bugs/quota/bug-1243798.t - 7 second ./tests/bugs/quota/bug-1104692.t - 7 
second ./tests/bugs/nfs/bug-1116503.t - 7 second ./tests/bugs/md-cache/afr-stale-read.t - 7 second ./tests/bugs/glusterd/quorum-value-check.t - 7 second ./tests/bugs/glusterd/bug-948729/bug-948729-mode-script.t - 7 second ./tests/bugs/glusterd/bug-948729/bug-948729-force.t - 7 second ./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t - 7 second ./tests/bugs/distribute/bug-884597.t - 7 second ./tests/bugs/distribute/bug-1368012.t - 7 second ./tests/bugs/core/bug-1699025-brick-mux-detach-brick-fd-issue.t - 7 second ./tests/bugs/core/bug-1168803-snapd-option-validation-fix.t - 7 second ./tests/bugs/bug-1702299.t - 7 second ./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t - 7 second ./tests/basic/ec/ec-read-policy.t - 7 second ./tests/basic/afr/ta-write-on-bad-brick.t - 7 second ./tests/basic/afr/ta.t - 7 second ./tests/basic/afr/ta-shd.t - 7 second ./tests/basic/afr/gfid-heal.t - 7 second ./tests/basic/afr/arbiter-remove-brick.t - 7 second ./tests/gfid2path/gfid2path_nfs.t - 6 second ./tests/gfid2path/get-gfid-to-path.t - 6 second ./tests/bugs/snapshot/bug-1178079.t - 6 second ./tests/bugs/shard/bug-1272986.t - 6 second ./tests/bugs/shard/bug-1258334.t - 6 second ./tests/bugs/replicate/bug-767585-gfid.t - 6 second ./tests/bugs/replicate/bug-1365455.t - 6 second ./tests/bugs/readdir-ahead/bug-1670253-consistent-metadata.t - 6 second ./tests/bugs/quota/bug-1287996.t - 6 second ./tests/bugs/nfs/bug-847622.t - 6 second ./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t - 6 second ./tests/bugs/md-cache/setxattr-prepoststat.t - 6 second ./tests/bugs/md-cache/bug-1476324.t - 6 second ./tests/bugs/io-stats/bug-1598548.t - 6 second ./tests/bugs/glusterfs-server/bug-877992.t - 6 second ./tests/bugs/glusterfs-server/bug-873549.t - 6 second ./tests/bugs/glusterfs/bug-895235.t - 6 second ./tests/bugs/glusterd/bug-1091935-brick-order-check-from-cli-to-glusterd.t - 6 second ./tests/bugs/fuse/bug-1336818.t - 6 second ./tests/bugs/ec/bug-1179050.t - 6 second ./tests/bugs/ec/bug-1161621.t - 6 second ./tests/bugs/distribute/bug-912564.t - 6 second ./tests/bugs/core/bug-986429.t - 6 second ./tests/bugs/core/bug-1119582.t - 6 second ./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t - 6 second ./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t - 6 second ./tests/bitrot/bug-1221914.t - 6 second ./tests/basic/trace.t - 6 second ./tests/basic/playground/template-xlator-sanity.t - 6 second ./tests/basic/hardlink-limit.t - 6 second ./tests/basic/ec/nfs.t - 6 second ./tests/basic/ec/ec-anonymous-fd.t - 6 second ./tests/performance/quick-read.t - 5 second ./tests/gfid2path/gfid2path_fuse.t - 5 second ./tests/gfid2path/block-mount-access.t - 5 second ./tests/bugs/upcall/bug-upcall-stat.t - 5 second ./tests/bugs/upcall/bug-1422776.t - 5 second ./tests/bugs/upcall/bug-1394131.t - 5 second ./tests/bugs/trace/bug-797171.t - 5 second ./tests/bugs/shard/bug-1256580.t - 5 second ./tests/bugs/shard/bug-1250855.t - 5 second ./tests/bugs/rpc/bug-954057.t - 5 second ./tests/bugs/replicate/bug-976800.t - 5 second ./tests/bugs/replicate/bug-886998.t - 5 second ./tests/bugs/replicate/bug-1480525.t - 5 second ./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t - 5 second ./tests/bugs/replicate/bug-1325792.t - 5 second ./tests/bugs/read-only/bug-1134822-read-only-default-in-graph.t - 5 second ./tests/bugs/readdir-ahead/bug-1439640.t - 5 second ./tests/bugs/quick-read/bug-846240.t - 5 second ./tests/bugs/posix/bug-gfid-path.t - 5 second ./tests/bugs/posix/bug-765380.t - 5 
second ./tests/bugs/posix/bug-1619720.t - 5 second ./tests/bugs/nfs/zero-atime.t - 5 second ./tests/bugs/nfs/subdir-trailing-slash.t - 5 second ./tests/bugs/nfs/socket-as-fifo.t - 5 second ./tests/bugs/nfs/showmount-many-clients.t - 5 second ./tests/bugs/nfs/bug-877885.t - 5 second ./tests/bugs/nfs/bug-1210338.t - 5 second ./tests/bugs/nfs/bug-1166862.t - 5 second ./tests/bugs/nfs/bug-1161092-nfs-acls.t - 5 second ./tests/bugs/md-cache/bug-1211863_unlink.t - 5 second ./tests/bugs/glusterfs-server/bug-864222.t - 5 second ./tests/bugs/glusterfs/bug-893378.t - 5 second ./tests/bugs/glusterfs/bug-869724.t - 5 second ./tests/bugs/glusterfs/bug-856455.t - 5 second ./tests/bugs/glusterfs/bug-811493.t - 5 second ./tests/bugs/glusterd/bug-948729/bug-948729.t - 5 second ./tests/bugs/geo-replication/bug-1296496.t - 5 second ./tests/bugs/fuse/bug-1126048.t - 5 second ./tests/bugs/distribute/bug-907072.t - 5 second ./tests/bugs/core/io-stats-1322825.t - 5 second ./tests/bugs/core/bug-913544.t - 5 second ./tests/bugs/core/bug-908146.t - 5 second ./tests/bugs/core/bug-903336.t - 5 second ./tests/bugs/core/bug-834465.t - 5 second ./tests/bugs/core/bug-1421721-mpx-toggle.t - 5 second ./tests/bugs/core/949327.t - 5 second ./tests/bugs/cli/bug-977246.t - 5 second ./tests/bugs/cli/bug-1004218.t - 5 second ./tests/bugs/bug-1371806_1.t - 5 second ./tests/bugs/bug-1138841.t - 5 second ./tests/bugs/access-control/bug-1051896.t - 5 second ./tests/basic/quota-rename.t - 5 second ./tests/basic/fops-sanity.t - 5 second ./tests/basic/ec/ec-internal-xattrs.t - 5 second ./tests/basic/ec/dht-rename.t - 5 second ./tests/basic/distribute/non-root-unlink-stale-linkto.t - 5 second ./tests/basic/distribute/bug-1265677-use-readdirp.t - 5 second ./tests/basic/afr/heal-info.t - 5 second ./tests/line-coverage/meta-max-coverage.t - 4 second ./tests/bugs/unclassified/bug-991622.t - 4 second ./tests/bugs/unclassified/bug-1034085.t - 4 second ./tests/bugs/snapshot/bug-1111041.t - 4 second ./tests/bugs/replicate/bug-880898.t - 4 second ./tests/bugs/readdir-ahead/bug-1512437.t - 4 second ./tests/bugs/readdir-ahead/bug-1446516.t - 4 second ./tests/bugs/readdir-ahead/bug-1390050.t - 4 second ./tests/bugs/nl-cache/bug-1451588.t - 4 second ./tests/bugs/md-cache/bug-1632503.t - 4 second ./tests/bugs/glusterfs-server/bug-889996.t - 4 second ./tests/bugs/glusterfs-server/bug-861542.t - 4 second ./tests/bugs/glusterfs/bug-860297.t - 4 second ./tests/bugs/glusterfs/bug-1482528.t - 4 second ./tests/bugs/glusterd/bug-1085330-and-bug-916549.t - 4 second ./tests/bugs/core/log-bug-1362520.t - 4 second ./tests/bugs/core/bug-924075.t - 4 second ./tests/bugs/core/bug-1135514-allow-setxattr-with-null-value.t - 4 second ./tests/bugs/core/bug-1117951.t - 4 second ./tests/bugs/cli/bug-983317-volume-get.t - 4 second ./tests/bugs/cli/bug-969193.t - 4 second ./tests/bugs/cli/bug-961307.t - 4 second ./tests/bugs/access-control/bug-1387241.t - 4 second ./tests/basic/glusterd/check-cloudsync-ancestry.t - 4 second ./tests/basic/fencing/test-fence-option.t - 4 second ./tests/basic/ec/ec-fallocate.t - 4 second ./tests/basic/distribute/debug-xattrs.t - 4 second ./tests/line-coverage/some-features-in-libglusterfs.t - 3 second ./tests/bugs/shard/bug-1261773.t - 3 second ./tests/bugs/shard/bug-1245547.t - 3 second ./tests/bugs/replicate/bug-884328.t - 3 second ./tests/bugs/posix/disallow-gfid-volumeid-removexattr.t - 3 second ./tests/bugs/nfs/bug-970070.t - 3 second ./tests/bugs/nfs/bug-1302948.t - 3 second ./tests/bugs/logging/bug-823081.t - 3 second 
./tests/bugs/glusterfs/bug-892730.t - 3 second ./tests/bugs/glusterfs/bug-853690.t - 3 second ./tests/bugs/glusterfs/bug-844688.t - 3 second ./tests/bugs/fuse/bug-1283103.t - 3 second ./tests/bugs/distribute/bug-924265.t - 3 second ./tests/bugs/distribute/bug-1204140.t - 3 second ./tests/bugs/core/bug-845213.t - 3 second ./tests/bugs/core/bug-1111557.t - 3 second ./tests/bugs/cli/bug-921215.t - 3 second ./tests/bugs/cli/bug-867252.t - 3 second ./tests/bugs/cli/bug-764638.t - 3 second ./tests/bitrot/bug-internal-xattrs-check-1243391.t - 3 second ./tests/basic/md-cache/bug-1418249.t - 3 second ./tests/basic/distribute/lookup.t - 3 second ./tests/basic/afr/arbiter-cli.t - 3 second ./tests/bugs/cli/bug-949298.t - 2 second ./tests/bugs/cli/bug-1378842-volume-get-all.t - 2 second ./tests/bugs/cli/bug-1047378.t - 2 second ./tests/basic/peer-parsing.t - 2 second ./tests/basic/afr/ta-check-locks.t - 2 second ./tests/line-coverage/volfile-with-all-graph-syntax.t - 1 second ./tests/basic/posixonly.t - 1 second ./tests/basic/gfapi/sink.t - 1 second ./tests/bugs/replicate/ta-inode-refresh-read.t - 0 second ./tests/basic/netgroup_parsing.t - 0 second ./tests/basic/glusterfsd-args.t - 0 second ./tests/basic/exports_parsing.t - 0 second 2 test(s) failed ./tests/bugs/glusterd/bug-1699339.t ./tests/bugs/glusterd/bug-857330/normal.t 0 test(s) generated core 5 test(s) needed retry ./tests/bugs/glusterd/bug-1699339.t ./tests/bugs/glusterd/bug-857330/normal.t ./tests/bugs/glusterd/removing-multiple-bricks-in-single-remove-brick-command.t ./tests/bugs/protocol/bug-1433815-auth-allow.t ./tests/bugs/replicate/bug-1341650.t Result is 124 tar: Removing leading `/' from member names kernel.core_pattern = /%e-%p.core Build step 'Execute shell' marked build as failure _______________________________________________ maintainers mailing list maintainers at gluster.org https://lists.gluster.org/mailman/listinfo/maintainers -------------- next part -------------- An HTML attachment was scrubbed... URL: From amukherj at redhat.com Wed Jul 3 05:39:36 2019 From: amukherj at redhat.com (Atin Mukherjee) Date: Wed, 3 Jul 2019 11:09:36 +0530 Subject: [Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4631 In-Reply-To: <1981025080.37.1562005037885.JavaMail.jenkins@jenkins-el7.rht.gluster.org> References: <1981025080.37.1562005037885.JavaMail.jenkins@jenkins-el7.rht.gluster.org> Message-ID: Can we check the following failure? 1 test(s) failed ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t ---------- Forwarded message --------- From: Date: Mon, Jul 1, 2019 at 11:48 PM Subject: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4631 To: See < https://build.gluster.org/job/regression-test-burn-in/4631/display/redirect?page=changes > Changes: [Amar Tumballi] rfc.sh: Improve bug identification [Amar Tumballi] glusterd: fix clang scan defects [Amar Tumballi] core: use multiple servers while mounting a volume using ipv6 ------------------------------------------ [...truncated 3.99 MB...] 
./tests/bugs/replicate/bug-1561129-enospc.t - 9 second ./tests/bugs/replicate/bug-1221481-allow-fops-on-dir-split-brain.t - 9 second ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t - 9 second ./tests/bugs/protocol/bug-1321578.t - 9 second ./tests/bugs/posix/bug-1122028.t - 9 second ./tests/bugs/glusterfs/bug-861015-log.t - 9 second ./tests/bugs/glusterfs/bug-848251.t - 9 second ./tests/bugs/glusterd/bug-949930.t - 9 second ./tests/bugs/gfapi/bug-1032894.t - 9 second ./tests/bugs/fuse/bug-983477.t - 9 second ./tests/bugs/ec/bug-1227869.t - 9 second ./tests/bugs/distribute/bug-1088231.t - 9 second ./tests/bugs/cli/bug-1022905.t - 9 second ./tests/bugs/changelog/bug-1321955.t - 9 second ./tests/bugs/changelog/bug-1208470.t - 9 second ./tests/bugs/bug-1371806_2.t - 9 second ./tests/bugs/bitrot/1209751-bitrot-scrub-tunable-reset.t - 9 second ./tests/basic/quota-nfs.t - 9 second ./tests/basic/md-cache/bug-1317785.t - 9 second ./tests/basic/glusterd/thin-arbiter-volume-probe.t - 9 second ./tests/basic/fop-sampling.t - 9 second ./tests/basic/ctime/ctime-noatime.t - 9 second ./tests/basic/changelog/changelog-rename.t - 9 second ./tests/basic/afr/stale-file-lookup.t - 9 second ./tests/basic/afr/root-squash-self-heal.t - 9 second ./tests/basic/afr/afr-read-hash-mode.t - 9 second ./tests/basic/afr/add-brick-self-heal.t - 9 second ./tests/bugs/upcall/bug-1369430.t - 8 second ./tests/bugs/transport/bug-873367.t - 8 second ./tests/bugs/snapshot/bug-1260848.t - 8 second ./tests/bugs/snapshot/bug-1064768.t - 8 second ./tests/bugs/shard/shard-inode-refcount-test.t - 8 second ./tests/bugs/shard/bug-1258334.t - 8 second ./tests/bugs/replicate/bug-986905.t - 8 second ./tests/bugs/replicate/bug-1686568-send-truncate-on-arbiter-from-shd.t - 8 second ./tests/bugs/replicate/bug-1626994-info-split-brain.t - 8 second ./tests/bugs/replicate/bug-1132102.t - 8 second ./tests/bugs/replicate/bug-1037501.t - 8 second ./tests/bugs/quota/bug-1104692.t - 8 second ./tests/bugs/posix/bug-1175711.t - 8 second ./tests/bugs/md-cache/afr-stale-read.t - 8 second ./tests/bugs/glusterfs/bug-902610.t - 8 second ./tests/bugs/glusterfs/bug-872923.t - 8 second ./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t - 8 second ./tests/bugs/distribute/bug-1086228.t - 8 second ./tests/bugs/cli/bug-1087487.t - 8 second ./tests/bugs/bug-1258069.t - 8 second ./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t - 8 second ./tests/basic/xlator-pass-through-sanity.t - 8 second ./tests/basic/glusterd/arbiter-volume-probe.t - 8 second ./tests/basic/gfapi/libgfapi-fini-hang.t - 8 second ./tests/basic/fencing/fencing-crash-conistency.t - 8 second ./tests/basic/ec/statedump.t - 8 second ./tests/basic/distribute/file-create.t - 8 second ./tests/basic/afr/ta-write-on-bad-brick.t - 8 second ./tests/basic/afr/tarissue.t - 8 second ./tests/basic/afr/ta-read.t - 8 second ./tests/basic/afr/granular-esh/add-brick.t - 8 second ./tests/gfid2path/gfid2path_fuse.t - 7 second ./tests/bugs/shard/bug-1259651.t - 7 second ./tests/bugs/replicate/bug-767585-gfid.t - 7 second ./tests/bugs/replicate/bug-1250170-fsync.t - 7 second ./tests/bugs/replicate/bug-1101647.t - 7 second ./tests/bugs/quota/bug-1287996.t - 7 second ./tests/bugs/quota/bug-1243798.t - 7 second ./tests/bugs/nfs/bug-915280.t - 7 second ./tests/bugs/io-cache/bug-858242.t - 7 second ./tests/bugs/glusterd/quorum-value-check.t - 7 second ./tests/bugs/glusterd/bug-948729/bug-948729-force.t - 7 second ./tests/bugs/distribute/bug-884597.t - 7 second 
./tests/bugs/distribute/bug-1368012.t - 7 second ./tests/bugs/core/bug-1699025-brick-mux-detach-brick-fd-issue.t - 7 second ./tests/bugs/core/bug-1168803-snapd-option-validation-fix.t - 7 second ./tests/bugs/bug-1702299.t - 7 second ./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t - 7 second ./tests/bitrot/bug-1221914.t - 7 second ./tests/bitrot/br-stub.t - 7 second ./tests/basic/ec/ec-read-policy.t - 7 second ./tests/basic/ec/ec-anonymous-fd.t - 7 second ./tests/basic/afr/ta.t - 7 second ./tests/basic/afr/ta-shd.t - 7 second ./tests/basic/afr/gfid-heal.t - 7 second ./tests/gfid2path/get-gfid-to-path.t - 6 second ./tests/bugs/upcall/bug-upcall-stat.t - 6 second ./tests/bugs/upcall/bug-1422776.t - 6 second ./tests/bugs/shard/bug-1342298.t - 6 second ./tests/bugs/shard/bug-1272986.t - 6 second ./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t - 6 second ./tests/bugs/readdir-ahead/bug-1670253-consistent-metadata.t - 6 second ./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t - 6 second ./tests/bugs/nfs/bug-1116503.t - 6 second ./tests/bugs/md-cache/setxattr-prepoststat.t - 6 second ./tests/bugs/md-cache/bug-1476324.t - 6 second ./tests/bugs/io-stats/bug-1598548.t - 6 second ./tests/bugs/glusterfs-server/bug-877992.t - 6 second ./tests/bugs/glusterfs-server/bug-864222.t - 6 second ./tests/bugs/glusterfs/bug-895235.t - 6 second ./tests/bugs/glusterfs/bug-893378.t - 6 second ./tests/bugs/glusterfs/bug-869724.t - 6 second ./tests/bugs/glusterd/bug-948729/bug-948729.t - 6 second ./tests/bugs/glusterd/bug-948729/bug-948729-mode-script.t - 6 second ./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t - 6 second ./tests/bugs/glusterd/bug-1091935-brick-order-check-from-cli-to-glusterd.t - 6 second ./tests/bugs/fuse/bug-1126048.t - 6 second ./tests/bugs/ec/bug-1179050.t - 6 second ./tests/bugs/ec/bug-1161621.t - 6 second ./tests/bugs/distribute/bug-912564.t - 6 second ./tests/bugs/core/bug-986429.t - 6 second ./tests/bugs/core/bug-908146.t - 6 second ./tests/bugs/core/bug-1119582.t - 6 second ./tests/bugs/cli/bug-977246.t - 6 second ./tests/bugs/cli/bug-1004218.t - 6 second ./tests/bugs/bug-1371806_1.t - 6 second ./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t - 6 second ./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t - 6 second ./tests/basic/quota-rename.t - 6 second ./tests/basic/ec/nfs.t - 6 second ./tests/basic/afr/arbiter-remove-brick.t - 6 second ./tests/performance/quick-read.t - 5 second ./tests/line-coverage/meta-max-coverage.t - 5 second ./tests/gfid2path/gfid2path_nfs.t - 5 second ./tests/gfid2path/block-mount-access.t - 5 second ./tests/bugs/trace/bug-797171.t - 5 second ./tests/bugs/snapshot/bug-1178079.t - 5 second ./tests/bugs/snapshot/bug-1111041.t - 5 second ./tests/bugs/shard/bug-1256580.t - 5 second ./tests/bugs/shard/bug-1250855.t - 5 second ./tests/bugs/rpc/bug-954057.t - 5 second ./tests/bugs/replicate/bug-976800.t - 5 second ./tests/bugs/replicate/bug-886998.t - 5 second ./tests/bugs/replicate/bug-1480525.t - 5 second ./tests/bugs/replicate/bug-1365455.t - 5 second ./tests/bugs/replicate/bug-1325792.t - 5 second ./tests/bugs/read-only/bug-1134822-read-only-default-in-graph.t - 5 second ./tests/bugs/readdir-ahead/bug-1446516.t - 5 second ./tests/bugs/quick-read/bug-846240.t - 5 second ./tests/bugs/posix/bug-gfid-path.t - 5 second ./tests/bugs/posix/bug-765380.t - 5 second ./tests/bugs/posix/bug-1619720.t - 5 second ./tests/bugs/nfs/zero-atime.t - 5 second ./tests/bugs/nfs/subdir-trailing-slash.t - 5 second 
./tests/bugs/nfs/socket-as-fifo.t - 5 second ./tests/bugs/nfs/showmount-many-clients.t - 5 second ./tests/bugs/nfs/bug-877885.t - 5 second ./tests/bugs/nfs/bug-847622.t - 5 second ./tests/bugs/nfs/bug-1210338.t - 5 second ./tests/bugs/nfs/bug-1166862.t - 5 second ./tests/bugs/nfs/bug-1161092-nfs-acls.t - 5 second ./tests/bugs/md-cache/bug-1211863_unlink.t - 5 second ./tests/bugs/glusterfs-server/bug-873549.t - 5 second ./tests/bugs/glusterfs/bug-856455.t - 5 second ./tests/bugs/geo-replication/bug-1296496.t - 5 second ./tests/bugs/fuse/bug-1336818.t - 5 second ./tests/bugs/distribute/bug-907072.t - 5 second ./tests/bugs/core/io-stats-1322825.t - 5 second ./tests/bugs/core/bug-913544.t - 5 second ./tests/bugs/core/bug-834465.t - 5 second ./tests/bugs/core/949327.t - 5 second ./tests/bugs/cli/bug-983317-volume-get.t - 5 second ./tests/bugs/bug-1138841.t - 5 second ./tests/bugs/access-control/bug-1387241.t - 5 second ./tests/bugs/access-control/bug-1051896.t - 5 second ./tests/bitrot/bug-internal-xattrs-check-1243391.t - 5 second ./tests/basic/trace.t - 5 second ./tests/basic/playground/template-xlator-sanity.t - 5 second ./tests/basic/hardlink-limit.t - 5 second ./tests/basic/glusterd/check-cloudsync-ancestry.t - 5 second ./tests/basic/fops-sanity.t - 5 second ./tests/basic/fencing/test-fence-option.t - 5 second ./tests/basic/ec/ec-internal-xattrs.t - 5 second ./tests/basic/ec/ec-fallocate.t - 5 second ./tests/basic/ec/dht-rename.t - 5 second ./tests/basic/distribute/non-root-unlink-stale-linkto.t - 5 second ./tests/basic/distribute/bug-1265677-use-readdirp.t - 5 second ./tests/basic/afr/heal-info.t - 5 second ./tests/bugs/upcall/bug-1394131.t - 4 second ./tests/bugs/unclassified/bug-991622.t - 4 second ./tests/bugs/unclassified/bug-1034085.t - 4 second ./tests/bugs/shard/bug-1245547.t - 4 second ./tests/bugs/replicate/bug-884328.t - 4 second ./tests/bugs/replicate/bug-880898.t - 4 second ./tests/bugs/readdir-ahead/bug-1512437.t - 4 second ./tests/bugs/readdir-ahead/bug-1439640.t - 4 second ./tests/bugs/readdir-ahead/bug-1390050.t - 4 second ./tests/bugs/nl-cache/bug-1451588.t - 4 second ./tests/bugs/md-cache/bug-1632503.t - 4 second ./tests/bugs/glusterfs-server/bug-861542.t - 4 second ./tests/bugs/glusterfs/bug-811493.t - 4 second ./tests/bugs/glusterfs/bug-1482528.t - 4 second ./tests/bugs/glusterd/bug-1085330-and-bug-916549.t - 4 second ./tests/bugs/core/log-bug-1362520.t - 4 second ./tests/bugs/core/bug-924075.t - 4 second ./tests/bugs/core/bug-903336.t - 4 second ./tests/bugs/core/bug-845213.t - 4 second ./tests/bugs/core/bug-1421721-mpx-toggle.t - 4 second ./tests/bugs/core/bug-1117951.t - 4 second ./tests/bugs/cli/bug-969193.t - 4 second ./tests/bugs/cli/bug-961307.t - 4 second ./tests/basic/distribute/debug-xattrs.t - 4 second ./tests/line-coverage/some-features-in-libglusterfs.t - 3 second ./tests/bugs/posix/disallow-gfid-volumeid-removexattr.t - 3 second ./tests/bugs/nfs/bug-970070.t - 3 second ./tests/bugs/nfs/bug-1302948.t - 3 second ./tests/bugs/logging/bug-823081.t - 3 second ./tests/bugs/glusterfs-server/bug-889996.t - 3 second ./tests/bugs/glusterfs/bug-892730.t - 3 second ./tests/bugs/glusterfs/bug-860297.t - 3 second ./tests/bugs/glusterfs/bug-844688.t - 3 second ./tests/bugs/fuse/bug-1283103.t - 3 second ./tests/bugs/distribute/bug-924265.t - 3 second ./tests/bugs/distribute/bug-1204140.t - 3 second ./tests/bugs/core/bug-1135514-allow-setxattr-with-null-value.t - 3 second ./tests/bugs/cli/bug-949298.t - 3 second ./tests/bugs/cli/bug-921215.t - 3 second 
./tests/bugs/cli/bug-867252.t - 3 second ./tests/bugs/cli/bug-764638.t - 3 second ./tests/bugs/cli/bug-1378842-volume-get-all.t - 3 second ./tests/bugs/cli/bug-1047378.t - 3 second ./tests/basic/peer-parsing.t - 3 second ./tests/basic/md-cache/bug-1418249.t - 3 second ./tests/basic/distribute/lookup.t - 3 second ./tests/basic/afr/arbiter-cli.t - 3 second ./tests/line-coverage/volfile-with-all-graph-syntax.t - 2 second ./tests/bugs/shard/bug-1261773.t - 2 second ./tests/bugs/glusterfs/bug-853690.t - 2 second ./tests/bugs/core/bug-1111557.t - 2 second ./tests/basic/afr/ta-check-locks.t - 2 second ./tests/bugs/replicate/ta-inode-refresh-read.t - 1 second ./tests/basic/netgroup_parsing.t - 1 second ./tests/basic/gfapi/sink.t - 1 second ./tests/basic/posixonly.t - 0 second ./tests/basic/glusterfsd-args.t - 0 second ./tests/basic/exports_parsing.t - 0 second 1 test(s) failed ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t 0 test(s) generated core 1 test(s) needed retry ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t Result is 1 tar: Removing leading `/' from member names kernel.core_pattern = /%e-%p.core Build step 'Execute shell' marked build as failure _______________________________________________ maintainers mailing list maintainers at gluster.org https://lists.gluster.org/mailman/listinfo/maintainers -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkarampu at redhat.com Wed Jul 3 07:28:06 2019 From: pkarampu at redhat.com (Pranith Kumar Karampuri) Date: Wed, 3 Jul 2019 12:58:06 +0530 Subject: [Gluster-devel] fallocate behavior in glusterfs In-Reply-To: <1081f226-67c2-0d19-af99-c4d691b10484@redhat.com> References: <1081f226-67c2-0d19-af99-c4d691b10484@redhat.com> Message-ID: On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N wrote: > > On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote: > > > Hi All, > > In glusterfs, there is an issue regarding the fallocate behavior. In > short, if someone does fallocate from the mount point with some size that > is greater than the available size in the backend filesystem where the file > is present, then fallocate can fail with a subset of the required number of > blocks allocated and then failing in the backend filesystem with ENOSPC > error. > > The behavior of fallocate in itself is simlar to how it would have been on > a disk filesystem (atleast xfs where it was checked). i.e. allocates subset > of the required number of blocks and then fail with ENOSPC. And the file in > itself would show the number of blocks in stat to be whatever was allocated > as part of fallocate. Please refer [1] where the issue is explained. > > Now, there is one small difference between how the behavior is between > glusterfs and xfs. > In xfs after fallocate fails, doing 'stat' on the file shows the number of > blocks that have been allocated. Whereas in glusterfs, the number of blocks > is shown as zero which makes tools like "du" show zero consumption. This > difference in behavior in glusterfs is because of libglusterfs on how it > handles sparse files etc for calculating number of blocks (mentioned in [1]) > > At this point I can think of 3 things on how to handle this. > > 1) Except for how many blocks are shown in the stat output for the file > from the mount point (on which fallocate was done), the remaining behavior > of attempting to allocate the requested size and failing when the > filesystem becomes full is similar to that of XFS. 
> > Hence, what is required is to come up with a solution on how libglusterfs > calculate blocks for sparse files etc (without breaking any of the existing > components and features). This makes the behavior similar to that of > backend filesystem. This might require its own time to fix libglusterfs > logic without impacting anything else. > > I think we should just revert the commit > b1a5fa55695f497952264e35a9c8eb2bbf1ec4c3 (BZ 817343) and see if it really > breaks anything (or check whatever it breaks is something that we can live > with). XFS speculative preallocation is not permanent and the extra space > is freed up eventually. It can be sped up via procfs tunable: > http://xfs.org/index.php/XFS_FAQ#Q:_How_can_I_speed_up_or_avoid_delayed_removal_of_speculative_preallocation.3F. > We could also tune the allocsize option to a low value like 4k so that > glusterfs quota is not affected. > > FWIW, ENOSPC is not the only fallocate problem in gluster because of > 'iatt->ia_block' tweaking. It also breaks the --keep-size option (i.e. the > FALLOC_FL_KEEP_SIZE flag in fallocate(2)) and reports incorrect du size. > Regards, > Ravi > > > OR > > 2) Once the fallocate fails in the backend filesystem, make posix xlator > in the brick truncate the file to the previous size of the file before > attempting fallocate. A patch [2] has been sent for this. But there is an > issue with this when there are parallel writes and fallocate operations > happening on the same file. It can lead to a data loss. > > a) statpre is obtained ===> before fallocate is attempted, get the stat > hence the size of the file b) A parrallel Write fop on the same file that > extends the file is successful c) Fallocate fails d) ftruncate truncates it > to size given by statpre (i.e. the previous stat and the size obtained in > step a) > > OR > > 3) Make posix check for available disk size before doing fallocate. i.e. > in fallocate once posix gets the number of bytes to be allocated for the > file from a particular offset, it checks whether so many bytes are > available or not in the disk. If not, fail the fallocate fop with ENOSPC > (without attempting it on the backend filesystem). > > There still is a probability of a parallel write happening while this > fallocate is happening and by the time falllocate system call is attempted > on the disk, the available space might have been less than what was > calculated before fallocate. > i.e. following things can happen > > a) statfs ===> get the available space of the backend filesystem > b) a parallel write succeeds and extends the file > c) fallocate is attempted assuming there is sufficient space in the > backend > > While the above situation can arise, I think we are still fine. Because > fallocate is attempted from the offset received in the fop. So, > irrespective of whether write extended the file or not, the fallocate > itself will be attempted for so many bytes from the offset which we found > to be available by getting statfs information. > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1724754#c3 > [2] https://review.gluster.org/#/c/glusterfs/+/22969/ > > option 2) will affect performance if we have to serialize all the data operations on the file. option 3) can still lead to the same problem we are trying to solve in a different way. - thread-1: fallocate came with 1MB size, Statfs says there is 1MB space. 
- thread-2: Write on a different file is attempted with 128KB and succeeds - thread-1: fallocate fails on the file after partially allocating size because there doesn't exist 1MB anymore. So option-1 is what we need to explore and fix it so that the behavior is closer to other posix filesystems. Maybe start with what Ravi suggested? > Please provide feedback. > > Regards, > Raghavendra > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing listGluster-devel at gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-devel > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -- Pranith -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkhandel at redhat.com Wed Jul 3 14:33:33 2019 From: dkhandel at redhat.com (Deepshikha Khandelwal) Date: Wed, 3 Jul 2019 20:03:33 +0530 Subject: [Gluster-devel] Removing glupy from release 5.7 In-Reply-To: References: <20190612151142.GL8725@ndevos-x270> <20190613090825.GN8725@ndevos-x270> <20190613122837.GS8725@ndevos-x270> <61c99ac170cc004a7f90897ff9f47cf7facdbc12.camel@redhat.com> <20190620074335.GA12566@ndevos-x270> <20190620090508.GA13895@ndevos-x270> Message-ID: Misc, is EPEL got recently installed on the builders? Can you please resolve the 'Why EPEL on builders?'. EPEL+python3 on builders seems not a good option to have. On Thu, Jun 20, 2019 at 6:37 PM Michael Scherer wrote: > Le jeudi 20 juin 2019 ? 08:38 -0400, Kaleb Keithley a ?crit : > > On Thu, Jun 20, 2019 at 7:39 AM Michael Scherer > > wrote: > > > > > Le jeudi 20 juin 2019 ? 06:57 -0400, Kaleb Keithley a ?crit : > > > > AFAICT, working fine right up to when EPEL and python3 were > > > > installed > > > > on > > > > the centos builders. If it was my decision, I'd undo that > > > > change. > > > > > > The biggest problem is that mock do pull python3. > > > > > > > > > > That's mock on Fedora ? to run a build in a centos-i386 chroot. > > Fedora > > already has python3. I don't see how that can affect what's running > > in the > > mock chroot. > > I am not sure we are talking about the same thing, but mock, the rpm > package from EPEL 7, do pull python 3: > > $ cat /etc/redhat-release; rpm -q --requires mock |grep 'python(abi' > Red Hat Enterprise Linux Server release 7.6 (Maipo) > python(abi) = 3.6 > > So we do have python3 installed on the Centos 7 builders (and was after > a upgrade), and we are not going to remove it, because we use mock for > a lot of stuff. > > And again, if the configure script is detecting the wrong version of > python, the fix is not to remove the version of python for the > builders, the fix is to detect the right version of python, or at > least, permit to people to bypass the detection. > > > Is the build inside mock also installing EPEL and python3 somehow? > > Now? If so, why? 
> > No, I doubt but then, if we are using a chroot, the package installed > on the builders shouldn't matter, since that's a chroot. > > So I am kinda being lost. > > > And maybe the solution for centos regressions is to run those in > > mock, with a centos-x86_64 chroot. Without EPEL or python3. > > That would likely requires a big refactor of the setup, since we have > to get the data out of specific place, etc. We would also need to > reinstall the builders to set partitions in a different way, with a > bigger / and/or give more space for /var/lib/mock. > > I do not see that happening fast, and if my hypothesis of a issue in > configure is right, then fixing seems the faster way to avoid the > issue. > -- > Michael Scherer > Sysadmin, Community Infrastructure > > > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mscherer at redhat.com Wed Jul 3 14:46:11 2019 From: mscherer at redhat.com (Michael Scherer) Date: Wed, 03 Jul 2019 16:46:11 +0200 Subject: [Gluster-devel] Removing glupy from release 5.7 In-Reply-To: References: <20190612151142.GL8725@ndevos-x270> <20190613090825.GN8725@ndevos-x270> <20190613122837.GS8725@ndevos-x270> <61c99ac170cc004a7f90897ff9f47cf7facdbc12.camel@redhat.com> <20190620074335.GA12566@ndevos-x270> <20190620090508.GA13895@ndevos-x270> Message-ID: <5c0e83ae5e9c5360eadd505bbb33a5b94b61a99a.camel@redhat.com> Le mercredi 03 juillet 2019 ? 20:03 +0530, Deepshikha Khandelwal a ?crit : > Misc, is EPEL got recently installed on the builders? No, it has been there since september 2016. What got changed is that python3 wasn't installed before. > Can you please resolve the 'Why EPEL on builders?'. EPEL+python3 on > builders seems not a good option to have. Python 3 is pulled by 'mock', cf https://lists.gluster.org/pipermail/gluster-devel/2019-June/056347.html So sure, I can remove EPEL, but then it will remove mock. Or I can remove python3, and it will remove mock. But again, the problem is not with the set of installed packages on the builder, that's just showing there is a bug. The configure script do pick the latest python version: https://github.com/gluster/glusterfs/blob/master/configure.ac#L612 if there is a python3, it take that, if not, it fall back to python2. then, later: https://github.com/gluster/glusterfs/blob/master/configure.ac#L639 it verify the presence of what is required to build. So if there is a runtime version only of python3, it will detect python3, but not build anything, because the -devel subpackage is not h ere. There is 2 solutions: - fix that piece of code, so it doesn't just test the presence of python executable, but do that, and test the presence of headers before deciding if we need to build or not glupy. - use PYTHON env var to force python2, and document that it need to be done. > On Thu, Jun 20, 2019 at 6:37 PM Michael Scherer > wrote: > > > Le jeudi 20 juin 2019 ? 08:38 -0400, Kaleb Keithley a ?crit : > > > On Thu, Jun 20, 2019 at 7:39 AM Michael Scherer < > > > mscherer at redhat.com> > > > wrote: > > > > > > > Le jeudi 20 juin 2019 ? 
06:57 -0400, Kaleb Keithley a ?crit : > > > > > AFAICT, working fine right up to when EPEL and python3 were > > > > > installed > > > > > on > > > > > the centos builders. If it was my decision, I'd undo that > > > > > change. > > > > > > > > The biggest problem is that mock do pull python3. > > > > > > > > > > > > > > That's mock on Fedora ? to run a build in a centos-i386 chroot. > > > Fedora > > > already has python3. I don't see how that can affect what's > > > running > > > in the > > > mock chroot. > > > > I am not sure we are talking about the same thing, but mock, the > > rpm > > package from EPEL 7, do pull python 3: > > > > $ cat /etc/redhat-release; rpm -q --requires mock |grep > > 'python(abi' > > Red Hat Enterprise Linux Server release 7.6 (Maipo) > > python(abi) = 3.6 > > > > So we do have python3 installed on the Centos 7 builders (and was > > after > > a upgrade), and we are not going to remove it, because we use mock > > for > > a lot of stuff. > > > > And again, if the configure script is detecting the wrong version > > of > > python, the fix is not to remove the version of python for the > > builders, the fix is to detect the right version of python, or at > > least, permit to people to bypass the detection. > > > > > Is the build inside mock also installing EPEL and python3 > > > somehow? > > > Now? If so, why? > > > > No, I doubt but then, if we are using a chroot, the package > > installed > > on the builders shouldn't matter, since that's a chroot. > > > > So I am kinda being lost. > > > > > And maybe the solution for centos regressions is to run those in > > > mock, with a centos-x86_64 chroot. Without EPEL or python3. > > > > That would likely requires a big refactor of the setup, since we > > have > > to get the data out of specific place, etc. We would also need to > > reinstall the builders to set partitions in a different way, with a > > bigger / and/or give more space for /var/lib/mock. > > > > I do not see that happening fast, and if my hypothesis of a issue > > in > > configure is right, then fixing seems the faster way to avoid the > > issue. > > -- > > Michael Scherer > > Sysadmin, Community Infrastructure > > > > > > > > _______________________________________________ > > > > Community Meeting Calendar: > > > > APAC Schedule - > > Every 2nd and 4th Tuesday at 11:30 AM IST > > Bridge: https://bluejeans.com/836554017 > > > > NA/EMEA Schedule - > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > Bridge: https://bluejeans.com/486278655 > > > > Gluster-devel mailing list > > Gluster-devel at gluster.org > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > > -- Michael Scherer Sysadmin, Community Infrastructure -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From rabhat at redhat.com Wed Jul 3 17:28:52 2019 From: rabhat at redhat.com (FNU Raghavendra Manjunath) Date: Wed, 3 Jul 2019 13:28:52 -0400 Subject: [Gluster-devel] fallocate behavior in glusterfs In-Reply-To: References: <1081f226-67c2-0d19-af99-c4d691b10484@redhat.com> Message-ID: On Wed, Jul 3, 2019 at 3:28 AM Pranith Kumar Karampuri wrote: > > > On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N > wrote: > >> >> On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote: >> >> >> Hi All, >> >> In glusterfs, there is an issue regarding the fallocate behavior. 
In >> short, if someone does fallocate from the mount point with some size that >> is greater than the available size in the backend filesystem where the file >> is present, then fallocate can fail with a subset of the required number of >> blocks allocated and then failing in the backend filesystem with ENOSPC >> error. >> >> The behavior of fallocate in itself is simlar to how it would have been >> on a disk filesystem (atleast xfs where it was checked). i.e. allocates >> subset of the required number of blocks and then fail with ENOSPC. And the >> file in itself would show the number of blocks in stat to be whatever was >> allocated as part of fallocate. Please refer [1] where the issue is >> explained. >> >> Now, there is one small difference between how the behavior is between >> glusterfs and xfs. >> In xfs after fallocate fails, doing 'stat' on the file shows the number >> of blocks that have been allocated. Whereas in glusterfs, the number of >> blocks is shown as zero which makes tools like "du" show zero consumption. >> This difference in behavior in glusterfs is because of libglusterfs on how >> it handles sparse files etc for calculating number of blocks (mentioned in >> [1]) >> >> At this point I can think of 3 things on how to handle this. >> >> 1) Except for how many blocks are shown in the stat output for the file >> from the mount point (on which fallocate was done), the remaining behavior >> of attempting to allocate the requested size and failing when the >> filesystem becomes full is similar to that of XFS. >> >> Hence, what is required is to come up with a solution on how libglusterfs >> calculate blocks for sparse files etc (without breaking any of the existing >> components and features). This makes the behavior similar to that of >> backend filesystem. This might require its own time to fix libglusterfs >> logic without impacting anything else. >> >> I think we should just revert the commit >> b1a5fa55695f497952264e35a9c8eb2bbf1ec4c3 (BZ 817343) and see if it really >> breaks anything (or check whatever it breaks is something that we can live >> with). XFS speculative preallocation is not permanent and the extra space >> is freed up eventually. It can be sped up via procfs tunable: >> http://xfs.org/index.php/XFS_FAQ#Q:_How_can_I_speed_up_or_avoid_delayed_removal_of_speculative_preallocation.3F. >> We could also tune the allocsize option to a low value like 4k so that >> glusterfs quota is not affected. >> >> FWIW, ENOSPC is not the only fallocate problem in gluster because of >> 'iatt->ia_block' tweaking. It also breaks the --keep-size option (i.e. the >> FALLOC_FL_KEEP_SIZE flag in fallocate(2)) and reports incorrect du size. >> > Regards, >> Ravi >> >> >> OR >> >> 2) Once the fallocate fails in the backend filesystem, make posix xlator >> in the brick truncate the file to the previous size of the file before >> attempting fallocate. A patch [2] has been sent for this. But there is an >> issue with this when there are parallel writes and fallocate operations >> happening on the same file. It can lead to a data loss. >> >> a) statpre is obtained ===> before fallocate is attempted, get the stat >> hence the size of the file b) A parrallel Write fop on the same file that >> extends the file is successful c) Fallocate fails d) ftruncate truncates it >> to size given by statpre (i.e. the previous stat and the size obtained in >> step a) >> >> OR >> >> 3) Make posix check for available disk size before doing fallocate. i.e. 
>> in fallocate once posix gets the number of bytes to be allocated for the >> file from a particular offset, it checks whether so many bytes are >> available or not in the disk. If not, fail the fallocate fop with ENOSPC >> (without attempting it on the backend filesystem). >> >> There still is a probability of a parallel write happening while this >> fallocate is happening and by the time falllocate system call is attempted >> on the disk, the available space might have been less than what was >> calculated before fallocate. >> i.e. following things can happen >> >> a) statfs ===> get the available space of the backend filesystem >> b) a parallel write succeeds and extends the file >> c) fallocate is attempted assuming there is sufficient space in the >> backend >> >> While the above situation can arise, I think we are still fine. Because >> fallocate is attempted from the offset received in the fop. So, >> irrespective of whether write extended the file or not, the fallocate >> itself will be attempted for so many bytes from the offset which we found >> to be available by getting statfs information. >> >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1724754#c3 >> [2] https://review.gluster.org/#/c/glusterfs/+/22969/ >> >> > option 2) will affect performance if we have to serialize all the data > operations on the file. > option 3) can still lead to the same problem we are trying to solve in a > different way. > - thread-1: fallocate came with 1MB size, Statfs says there is > 1MB space. > - thread-2: Write on a different file is attempted with 128KB and > succeeds > - thread-1: fallocate fails on the file after partially > allocating size because there doesn't exist 1MB anymore. > > Here I have a doubt. Even if a 128K write on the file succeeds, IIUC fallocate will try to reserve 1MB of space relative to the offset that was received as part of the fallocate call which was found to be available. So, despite write succeeding, the region fallocate aimed at was 1MB of space from a particular offset. As long as that is available, can posix still go ahead and perform the fallocate operation? Regards, Raghavendra > So option-1 is what we need to explore and fix it so that the behavior is > closer to other posix filesystems. Maybe start with what Ravi suggested? > > >> Please provide feedback. >> >> Regards, >> Raghavendra >> >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing listGluster-devel at gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-devel >> >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> > > -- > Pranith > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pkarampu at redhat.com Thu Jul 4 05:13:00 2019 From: pkarampu at redhat.com (Pranith Kumar Karampuri) Date: Thu, 4 Jul 2019 10:43:00 +0530 Subject: [Gluster-devel] fallocate behavior in glusterfs In-Reply-To: References: <1081f226-67c2-0d19-af99-c4d691b10484@redhat.com> Message-ID: On Wed, Jul 3, 2019 at 10:59 PM FNU Raghavendra Manjunath wrote: > > > On Wed, Jul 3, 2019 at 3:28 AM Pranith Kumar Karampuri < > pkarampu at redhat.com> wrote: > >> >> >> On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N >> wrote: >> >>> >>> On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote: >>> >>> >>> Hi All, >>> >>> In glusterfs, there is an issue regarding the fallocate behavior. In >>> short, if someone does fallocate from the mount point with some size that >>> is greater than the available size in the backend filesystem where the file >>> is present, then fallocate can fail with a subset of the required number of >>> blocks allocated and then failing in the backend filesystem with ENOSPC >>> error. >>> >>> The behavior of fallocate in itself is simlar to how it would have been >>> on a disk filesystem (atleast xfs where it was checked). i.e. allocates >>> subset of the required number of blocks and then fail with ENOSPC. And the >>> file in itself would show the number of blocks in stat to be whatever was >>> allocated as part of fallocate. Please refer [1] where the issue is >>> explained. >>> >>> Now, there is one small difference between how the behavior is between >>> glusterfs and xfs. >>> In xfs after fallocate fails, doing 'stat' on the file shows the number >>> of blocks that have been allocated. Whereas in glusterfs, the number of >>> blocks is shown as zero which makes tools like "du" show zero consumption. >>> This difference in behavior in glusterfs is because of libglusterfs on how >>> it handles sparse files etc for calculating number of blocks (mentioned in >>> [1]) >>> >>> At this point I can think of 3 things on how to handle this. >>> >>> 1) Except for how many blocks are shown in the stat output for the file >>> from the mount point (on which fallocate was done), the remaining behavior >>> of attempting to allocate the requested size and failing when the >>> filesystem becomes full is similar to that of XFS. >>> >>> Hence, what is required is to come up with a solution on how >>> libglusterfs calculate blocks for sparse files etc (without breaking any of >>> the existing components and features). This makes the behavior similar to >>> that of backend filesystem. This might require its own time to fix >>> libglusterfs logic without impacting anything else. >>> >>> I think we should just revert the commit >>> b1a5fa55695f497952264e35a9c8eb2bbf1ec4c3 (BZ 817343) and see if it really >>> breaks anything (or check whatever it breaks is something that we can live >>> with). XFS speculative preallocation is not permanent and the extra space >>> is freed up eventually. It can be sped up via procfs tunable: >>> http://xfs.org/index.php/XFS_FAQ#Q:_How_can_I_speed_up_or_avoid_delayed_removal_of_speculative_preallocation.3F. >>> We could also tune the allocsize option to a low value like 4k so that >>> glusterfs quota is not affected. >>> >>> FWIW, ENOSPC is not the only fallocate problem in gluster because of >>> 'iatt->ia_block' tweaking. It also breaks the --keep-size option (i.e. the >>> FALLOC_FL_KEEP_SIZE flag in fallocate(2)) and reports incorrect du size. 
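To make the st_blocks difference concrete, the behaviour described above can be checked with a small standalone program along the lines of the sketch below (not part of the original mails); the file path and the 2 GiB size are placeholders, and the ENOSPC outcome assumes the target filesystem has less free space than that. Run it once against a file on the brick's XFS filesystem and once against the same file through the glusterfs mount: XFS reports the partially allocated blocks, while the gluster mount reports 0 blocks, which is why "du" shows zero consumption.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        const char *path = (argc > 1) ? argv[1] : "/mnt/glustervol/testfile"; /* placeholder path */
        off_t len = 2LL << 30;   /* 2 GiB, assumed to be larger than the free space */
        struct stat st;
        int fd;

        fd = open(path, O_CREAT | O_RDWR, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* mode 0: allocate blocks and extend i_size, like fallocate(1) does */
        if (fallocate(fd, 0, 0, len) < 0)
                fprintf(stderr, "fallocate: %s\n", strerror(errno)); /* ENOSPC expected */

        /* XFS brick: st_blocks reflects the partial allocation;
         * glusterfs mount: st_blocks is reported as 0, so "du" shows 0 */
        if (fstat(fd, &st) == 0)
                printf("%s: st_size=%lld st_blocks=%lld\n", path,
                       (long long)st.st_size, (long long)st.st_blocks);

        close(fd);
        return 0;
}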
>>> >> Regards, >>> Ravi >>> >>> >>> OR >>> >>> 2) Once the fallocate fails in the backend filesystem, make posix xlator >>> in the brick truncate the file to the previous size of the file before >>> attempting fallocate. A patch [2] has been sent for this. But there is an >>> issue with this when there are parallel writes and fallocate operations >>> happening on the same file. It can lead to a data loss. >>> >>> a) statpre is obtained ===> before fallocate is attempted, get the stat >>> hence the size of the file b) A parrallel Write fop on the same file that >>> extends the file is successful c) Fallocate fails d) ftruncate truncates it >>> to size given by statpre (i.e. the previous stat and the size obtained in >>> step a) >>> >>> OR >>> >>> 3) Make posix check for available disk size before doing fallocate. i.e. >>> in fallocate once posix gets the number of bytes to be allocated for the >>> file from a particular offset, it checks whether so many bytes are >>> available or not in the disk. If not, fail the fallocate fop with ENOSPC >>> (without attempting it on the backend filesystem). >>> >>> There still is a probability of a parallel write happening while this >>> fallocate is happening and by the time falllocate system call is attempted >>> on the disk, the available space might have been less than what was >>> calculated before fallocate. >>> i.e. following things can happen >>> >>> a) statfs ===> get the available space of the backend filesystem >>> b) a parallel write succeeds and extends the file >>> c) fallocate is attempted assuming there is sufficient space in the >>> backend >>> >>> While the above situation can arise, I think we are still fine. Because >>> fallocate is attempted from the offset received in the fop. So, >>> irrespective of whether write extended the file or not, the fallocate >>> itself will be attempted for so many bytes from the offset which we found >>> to be available by getting statfs information. >>> >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1724754#c3 >>> [2] https://review.gluster.org/#/c/glusterfs/+/22969/ >>> >>> >> option 2) will affect performance if we have to serialize all the data >> operations on the file. >> option 3) can still lead to the same problem we are trying to solve in a >> different way. >> - thread-1: fallocate came with 1MB size, Statfs says there is >> 1MB space. >> - thread-2: Write on a different file is attempted with 128KB >> and succeeds >> - thread-1: fallocate fails on the file after partially >> allocating size because there doesn't exist 1MB anymore. >> >> > Here I have a doubt. Even if a 128K write on the file succeeds, IIUC > fallocate will try to reserve 1MB of space relative to the offset that was > received as part of the fallocate call which was found to be available. > So, despite write succeeding, the region fallocate aimed at was 1MB of > space from a particular offset. As long as that is available, can posix > still go ahead and perform the fallocate operation? > It can go ahead and perform the operation. Just that in the case I mentioned it will lead to partial success because the size fallocate wants to reserve is not available. > > Regards, > Raghavendra > > > > >> So option-1 is what we need to explore and fix it so that the behavior is >> closer to other posix filesystems. Maybe start with what Ravi suggested? >> >> >>> Please provide feedback. 
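As a concrete reading of option 3) and of the race being debated here, a rough sketch of the check could look like the following (illustrative only; this is not the code in patch [1] or [2], and the function name and brick path handling are assumptions): do a statvfs() on the brick, fail the fop with ENOSPC when the requested length exceeds the available bytes, and only then attempt the real fallocate(). The window between the statvfs() and the fallocate() is exactly where a parallel write can still consume space, which is why the partial-allocation-then-ENOSPC case is narrowed but not eliminated.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sys/statvfs.h>

/* Illustrative sketch of option 3): check free space before fallocate.
 * brick_path and the calling convention are assumptions, not posix xlator code. */
int fallocate_with_space_check(const char *brick_path, int fd,
                               off_t offset, off_t len)
{
        struct statvfs vfs;
        unsigned long long avail;

        if (statvfs(brick_path, &vfs) < 0)
                return -errno;

        /* bytes available to unprivileged callers on the brick filesystem */
        avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;

        if ((unsigned long long)len > avail)
                return -ENOSPC;   /* fail the fop without touching the backend */

        /* race window: a parallel write can consume space between the
         * statvfs() above and the fallocate() below, so a partial
         * allocation followed by ENOSPC is still possible */
        if (fallocate(fd, 0, offset, len) < 0)
                return -errno;

        return 0;
}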
>>> >>> Regards, >>> Raghavendra >>> >>> _______________________________________________ >>> >>> Community Meeting Calendar: >>> >>> APAC Schedule - >>> Every 2nd and 4th Tuesday at 11:30 AM IST >>> Bridge: https://bluejeans.com/836554017 >>> >>> NA/EMEA Schedule - >>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>> Bridge: https://bluejeans.com/486278655 >>> >>> Gluster-devel mailing listGluster-devel at gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-devel >>> >>> _______________________________________________ >>> >>> Community Meeting Calendar: >>> >>> APAC Schedule - >>> Every 2nd and 4th Tuesday at 11:30 AM IST >>> Bridge: https://bluejeans.com/836554017 >>> >>> NA/EMEA Schedule - >>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>> Bridge: https://bluejeans.com/486278655 >>> >>> Gluster-devel mailing list >>> Gluster-devel at gluster.org >>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>> >>> >> >> -- >> Pranith >> > -- Pranith -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndevos at redhat.com Thu Jul 4 14:20:56 2019 From: ndevos at redhat.com (Niels de Vos) Date: Thu, 4 Jul 2019 16:20:56 +0200 Subject: [Gluster-devel] Removing glupy from release 5.7 In-Reply-To: <5c0e83ae5e9c5360eadd505bbb33a5b94b61a99a.camel@redhat.com> References: <20190620074335.GA12566@ndevos-x270> <20190620090508.GA13895@ndevos-x270> <5c0e83ae5e9c5360eadd505bbb33a5b94b61a99a.camel@redhat.com> Message-ID: <20190704142056.GG2674@ndevos-x270.lan.nixpanic.net> On Wed, Jul 03, 2019 at 04:46:11PM +0200, Michael Scherer wrote: > Le mercredi 03 juillet 2019 ? 20:03 +0530, Deepshikha Khandelwal a > ?crit : > > Misc, is EPEL got recently installed on the builders? > > No, it has been there since september 2016. What got changed is that > python3 wasn't installed before. > > > Can you please resolve the 'Why EPEL on builders?'. EPEL+python3 on > > builders seems not a good option to have. > > > Python 3 is pulled by 'mock', cf > https://lists.gluster.org/pipermail/gluster-devel/2019-June/056347.html > > So sure, I can remove EPEL, but then it will remove mock. Or I can > remove python3, and it will remove mock. > > But again, the problem is not with the set of installed packages on the > builder, that's just showing there is a bug. > > The configure script do pick the latest python version: > https://github.com/gluster/glusterfs/blob/master/configure.ac#L612 > > if there is a python3, it take that, if not, it fall back to python2. > > then, later: > https://github.com/gluster/glusterfs/blob/master/configure.ac#L639 > > it verify the presence of what is required to build. > > So if there is a runtime version only of python3, it will detect > python3, but not build anything, because the -devel subpackage is not h > ere. > > There is 2 solutions: > - fix that piece of code, so it doesn't just test the presence of > python executable, but do that, and test the presence of headers before > deciding if we need to build or not glupy. > > - use PYTHON env var to force python2, and document that it need to be > done. What about option 3: - install python3-devel in addition to python3 Niels > > > > On Thu, Jun 20, 2019 at 6:37 PM Michael Scherer > > wrote: > > > > > Le jeudi 20 juin 2019 ? 08:38 -0400, Kaleb Keithley a ?crit : > > > > On Thu, Jun 20, 2019 at 7:39 AM Michael Scherer < > > > > mscherer at redhat.com> > > > > wrote: > > > > > > > > > Le jeudi 20 juin 2019 ? 
06:57 -0400, Kaleb Keithley a ?crit : > > > > > > AFAICT, working fine right up to when EPEL and python3 were > > > > > > installed > > > > > > on > > > > > > the centos builders. If it was my decision, I'd undo that > > > > > > change. > > > > > > > > > > The biggest problem is that mock do pull python3. > > > > > > > > > > > > > > > > > > That's mock on Fedora ? to run a build in a centos-i386 chroot. > > > > Fedora > > > > already has python3. I don't see how that can affect what's > > > > running > > > > in the > > > > mock chroot. > > > > > > I am not sure we are talking about the same thing, but mock, the > > > rpm > > > package from EPEL 7, do pull python 3: > > > > > > $ cat /etc/redhat-release; rpm -q --requires mock |grep > > > 'python(abi' > > > Red Hat Enterprise Linux Server release 7.6 (Maipo) > > > python(abi) = 3.6 > > > > > > So we do have python3 installed on the Centos 7 builders (and was > > > after > > > a upgrade), and we are not going to remove it, because we use mock > > > for > > > a lot of stuff. > > > > > > And again, if the configure script is detecting the wrong version > > > of > > > python, the fix is not to remove the version of python for the > > > builders, the fix is to detect the right version of python, or at > > > least, permit to people to bypass the detection. > > > > > > > Is the build inside mock also installing EPEL and python3 > > > > somehow? > > > > Now? If so, why? > > > > > > No, I doubt but then, if we are using a chroot, the package > > > installed > > > on the builders shouldn't matter, since that's a chroot. > > > > > > So I am kinda being lost. > > > > > > > And maybe the solution for centos regressions is to run those in > > > > mock, with a centos-x86_64 chroot. Without EPEL or python3. > > > > > > That would likely requires a big refactor of the setup, since we > > > have > > > to get the data out of specific place, etc. We would also need to > > > reinstall the builders to set partitions in a different way, with a > > > bigger / and/or give more space for /var/lib/mock. > > > > > > I do not see that happening fast, and if my hypothesis of a issue > > > in > > > configure is right, then fixing seems the faster way to avoid the > > > issue. 
> > > -- > > > Michael Scherer > > > Sysadmin, Community Infrastructure > > > > > > > > > > > > _______________________________________________ > > > > > > Community Meeting Calendar: > > > > > > APAC Schedule - > > > Every 2nd and 4th Tuesday at 11:30 AM IST > > > Bridge: https://bluejeans.com/836554017 > > > > > > NA/EMEA Schedule - > > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > > Bridge: https://bluejeans.com/486278655 > > > > > > Gluster-devel mailing list > > > Gluster-devel at gluster.org > > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > > > > > -- > Michael Scherer > Sysadmin, Community Infrastructure > > > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > From mscherer at redhat.com Thu Jul 4 15:03:53 2019 From: mscherer at redhat.com (Michael Scherer) Date: Thu, 04 Jul 2019 17:03:53 +0200 Subject: [Gluster-devel] Removing glupy from release 5.7 In-Reply-To: <20190704142056.GG2674@ndevos-x270.lan.nixpanic.net> References: <20190620074335.GA12566@ndevos-x270> <20190620090508.GA13895@ndevos-x270> <5c0e83ae5e9c5360eadd505bbb33a5b94b61a99a.camel@redhat.com> <20190704142056.GG2674@ndevos-x270.lan.nixpanic.net> Message-ID: Le jeudi 04 juillet 2019 ? 16:20 +0200, Niels de Vos a ?crit : > On Wed, Jul 03, 2019 at 04:46:11PM +0200, Michael Scherer wrote: > > Le mercredi 03 juillet 2019 ? 20:03 +0530, Deepshikha Khandelwal a > > ?crit : > > > Misc, is EPEL got recently installed on the builders? > > > > No, it has been there since september 2016. What got changed is > > that > > python3 wasn't installed before. > > > > > Can you please resolve the 'Why EPEL on builders?'. EPEL+python3 > > > on > > > builders seems not a good option to have. > > > > > > Python 3 is pulled by 'mock', cf > > https://lists.gluster.org/pipermail/gluster-devel/2019-June/056347.html > > > > So sure, I can remove EPEL, but then it will remove mock. Or I can > > remove python3, and it will remove mock. > > > > But again, the problem is not with the set of installed packages on > > the > > builder, that's just showing there is a bug. > > > > The configure script do pick the latest python version: > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L612 > > > > if there is a python3, it take that, if not, it fall back to > > python2. > > > > then, later: > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L639 > > > > it verify the presence of what is required to build. > > > > So if there is a runtime version only of python3, it will detect > > python3, but not build anything, because the -devel subpackage is > > not h > > ere. > > > > There is 2 solutions: > > - fix that piece of code, so it doesn't just test the presence of > > python executable, but do that, and test the presence of headers > > before > > deciding if we need to build or not glupy. > > > > - use PYTHON env var to force python2, and document that it need to > > be > > done. 
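Both of the quoted workarounds can be made concrete. Bypassing detection means running configure with the PYTHON variable pointed at python2, as mentioned above. The header check boils down to verifying that the Python C API headers are actually present rather than just the interpreter; the tiny translation unit below (illustrative, not taken from configure.ac; the compile command in the comment is an assumption) only builds when the -devel package is installed, which is exactly the condition the current detection misses on builders that gained python3 through mock.

/* Illustrative check: glupy needs the Python C API headers, not just the
 * python3 binary.  On a builder with the python3 runtime but without
 * python3-devel this fails at the #include, which is the corner case
 * discussed in this thread.  A configure-level equivalent would be
 * something like AC_CHECK_HEADERS([Python.h]) with the right CPPFLAGS,
 * or a pkg-config check, rather than only looking for the interpreter.
 *
 * Hypothetical compile check:
 *     cc -c $(python3-config --cflags) python_header_check.c
 */
#include <Python.h>   /* shipped by python3-devel / python2-devel only */

int python_headers_present(void)
{
        return PY_MAJOR_VERSION;   /* pulled in via Python.h (patchlevel.h) */
}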
> > What about option 3:
> >
> > - install python3-devel in addition to python3

That's an option, but I think that's a disservice for the users, since that's fixing our CI to no longer trigger a corner case, which doesn't mean the corner case no longer exists, just that we do not trigger it.

-- 
Michael Scherer
Sysadmin, Community Infrastructure
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: This is a digitally signed message part
URL: 

From mscherer at redhat.com Thu Jul 4 16:06:05 2019
From: mscherer at redhat.com (Michael Scherer)
Date: Thu, 04 Jul 2019 18:06:05 +0200
Subject: [Gluster-devel] Migration of the builders to Fedora 30
Message-ID: <1af917bd44ba629a663014eaee0a24208b422aca.camel@redhat.com>

Hi,

I have upgraded some of the builders to F30 for testing (because F28 is EOL and people have asked for newer versions of things), and I was a bit surprised by the results of running the jobs on them.

So we have 10 jobs that run on those builders.

5 jobs run without trouble:
- python-lint
- clang-scan
- clang-format
- 32-bit-build-smoke
- bugs-summary

1 is disabled, tsan.
I didn't try to run it. > > 4 fails: > - python-compliance > OK to run, but skip voting, so we can eventually (soonish) fix this. > - fedora-smoke > Ideally we should soon fix it. Effort is ON. We have a bug for this: https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5 > - gluster-csi-containers > - glusterd2-containers > > OK to drop for now. > The job python-compliance fail like this: > https://build.gluster.org/job/python-compliance/5813/ > > The fedora-smoke job, who is building on newer fedora (so newer gcc), > is failling too: > https://build.gluster.org/job/fedora-smoke/6753/console > > Gluster-csi-containers is having trouble to run > https://build.gluster.org/job/gluster-csi-containers/304/console > https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5 > but before, it did fail with "out of space": > https://build.gluster.org/job/gluster-csi-containers/303/console > > and it also fail (well, should fail) with this: > 16:51:07 make: *** No targets specified and no makefile found. Stop. > > which is indeed not present in the git repo, so this seems like the job is > unmaintained. > > > The last one to fail is glusterd2-containers: > > https://build.gluster.org/job/glusterd2-containers/323/console > > This one is fun, because it fail, but appear as ok on jenkins. It fail > because of some ansible issue, due to newer Fedora. > > So, since we need to switch, here is what I would recommend: > - switch the working job to F30 > - wait 2 weeks, and switch fedora-smoke and python-compliance to F30. This > will force someone to fix the problem. > - drop the non fixed containers jobs, unless someone fix them, in 1 month. > Looks like a good plan. > > -- > Michael Scherer > Sysadmin, Community Infrastructure > > > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -- Amar Tumballi (amarts) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndevos at redhat.com Fri Jul 5 13:17:09 2019 From: ndevos at redhat.com (Niels de Vos) Date: Fri, 5 Jul 2019 15:17:09 +0200 Subject: [Gluster-devel] Removing glupy from release 5.7 In-Reply-To: References: <20190620090508.GA13895@ndevos-x270> <5c0e83ae5e9c5360eadd505bbb33a5b94b61a99a.camel@redhat.com> <20190704142056.GG2674@ndevos-x270.lan.nixpanic.net> Message-ID: <20190705131709.GA5625@ndevos-x270> On Thu, Jul 04, 2019 at 05:03:53PM +0200, Michael Scherer wrote: > Le jeudi 04 juillet 2019 ? 16:20 +0200, Niels de Vos a ?crit : > > On Wed, Jul 03, 2019 at 04:46:11PM +0200, Michael Scherer wrote: > > > Le mercredi 03 juillet 2019 ? 20:03 +0530, Deepshikha Khandelwal a > > > ?crit : > > > > Misc, is EPEL got recently installed on the builders? > > > > > > No, it has been there since september 2016. What got changed is > > > that > > > python3 wasn't installed before. > > > > > > > Can you please resolve the 'Why EPEL on builders?'. EPEL+python3 > > > > on > > > > builders seems not a good option to have. > > > > > > > > > Python 3 is pulled by 'mock', cf > > > > https://lists.gluster.org/pipermail/gluster-devel/2019-June/056347.html > > > > > > So sure, I can remove EPEL, but then it will remove mock. Or I can > > > remove python3, and it will remove mock. 
> > > > > > But again, the problem is not with the set of installed packages on > > > the > > > builder, that's just showing there is a bug. > > > > > > The configure script do pick the latest python version: > > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L612 > > > > > > if there is a python3, it take that, if not, it fall back to > > > python2. > > > > > > then, later: > > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L639 > > > > > > it verify the presence of what is required to build. > > > > > > So if there is a runtime version only of python3, it will detect > > > python3, but not build anything, because the -devel subpackage is > > > not h > > > ere. > > > > > > There is 2 solutions: > > > - fix that piece of code, so it doesn't just test the presence of > > > python executable, but do that, and test the presence of headers > > > before > > > deciding if we need to build or not glupy. > > > > > > - use PYTHON env var to force python2, and document that it need to > > > be > > > done. > > > > What about option 3: > > > > - install python3-devel in addition to python3 > > That's a option, but I think that's a disservice for the users, since > that's fixing our CI to no longer trigger a corner case, which doesn't > mean the corner case no longer exist, just that we do not trigger it. This is only interesting for building releases/packages, I think. Normal build environments have -devel packages installed for the components that are used during the build process. The weird python2-devel and python3 (without -devel) is definitely a corner case, but not something people would normally have. And if so, we expect -devel for the python version that is used, so developers would hopefully just install that on their build system. Niels From jenkins at build.gluster.org Mon Jul 8 01:45:09 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 8 Jul 2019 01:45:09 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <2121251061.53.1562550310185.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] 
https://bugzilla.redhat.com/1727430 / arbiter: CPU Spike casue files unavailable https://bugzilla.redhat.com/1722708 / bitrot: WORM: Segmentation Fault if bitrot stub do signature https://bugzilla.redhat.com/1722709 / bitrot: WORM: Segmentation Fault if bitrot stub do signature https://bugzilla.redhat.com/1719778 / core: build fails for every patch on release 5 https://bugzilla.redhat.com/1726935 / core: (glusterfs-6.4) - GlusterFS 6.4 tracker https://bugzilla.redhat.com/1721842 / core: Spelling errors in 6.3 https://bugzilla.redhat.com/1723617 / distribute: nfs-ganesha gets empty stat (all zero) when glfs_mkdir return success https://bugzilla.redhat.com/1726175 / fuse: CentOs 6 GlusterFS client creates files with time 01/01/1970 https://bugzilla.redhat.com/1726038 / ganesha-nfs: ganesha : nfstest_lock from NFSTest failed on v3 https://bugzilla.redhat.com/1724618 / ganesha-nfs: ganesha : nfstest_posix from NFSTest failed https://bugzilla.redhat.com/1722390 / glusterd: "All subvolumes are down" when all bricks are online https://bugzilla.redhat.com/1726905 / glusterd: get-state does not show correct brick status https://bugzilla.redhat.com/1722187 / glusterd: Glusterd Seg faults (sig 11) when RDMA used with MLNX_OFED https://bugzilla.redhat.com/1718741 / glusterfind: GlusterFS having high CPU https://bugzilla.redhat.com/1720733 / libglusterfsclient: glusterfs 4.1.7 client crash https://bugzilla.redhat.com/1726205 / md-cache: Windows client fails to copy large file to GlusterFS volume share with fruit and streams_xattr VFS modules via Samba https://bugzilla.redhat.com/1724957 / project-infrastructure: Grant additional maintainers merge rights on release branches https://bugzilla.redhat.com/1719388 / project-infrastructure: infra: download.gluster.org /var/www/html/... is out of free space https://bugzilla.redhat.com/1721353 / project-infrastructure: Run 'line-coverage' regression runs on a latest fedora machine (say fedora30). https://bugzilla.redhat.com/1720453 / project-infrastructure: Unable to access review.gluster.org https://bugzilla.redhat.com/1721462 / quota: Quota limits not honored writes allowed past quota limit. https://bugzilla.redhat.com/1723781 / tests: Run 'known-issues' and 'bad-tests' in line-coverage test (nightly) https://bugzilla.redhat.com/1724624 / upcall: LINK does not invalidate metadata cache of parent directory [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 2775 bytes Desc: not available URL: From amarts at gmail.com Mon Jul 8 06:12:30 2019 From: amarts at gmail.com (Amar Tumballi) Date: Mon, 8 Jul 2019 11:42:30 +0530 Subject: [Gluster-devel] GlusterFS 8+: Roadmap ahead - Open for discussion Message-ID: Hello everyone, This email is long, and I request each one of you to participate and give comments. We want your collaboration in this big step. TL;DR; We are at an interesting time in Gluster project?s development roadmap. In the last year, we have taken some hard decisions to not focus on features and focus all our energies to stabilize the project, and if you notice as a result of that, we did really well with many regards. With most of the stabilization work getting into the glusterfs-7 branch, we feel the time is good for discussing the future. Now, it is the time for us to start addressing the most common concerns of the project, Performance and related improvements. 
While many of our users and customers have faced problems with not so great performance, please note that there is no one silver bullet which will solve all performance problems in one step, especially with a distributed storage solution like GlusterFS. Over the years, we have noticed that there are a lot of factors which contribute to the performance issues in Gluster, and it is not ?easy? to tell which one of the ?known? issue caused the particular problem. Sometimes, even to debug where is the bottleneck, we face the challenge of lack of instrumentation in many parts of the codebase. Hence, one of the major activities we want to pick as immediate roadmap is, work on this area. Instead of discussing on the email thread, and losing context soon, I prefer, this time, we can take our discussion to hackmd with comments. Would like each of you to participate and let us know what are your priorities, what you need, how you can help etc. Link to hackmd URL here: https://hackmd.io/JtfYZr49QeGaNIlTvQNsaA After the meeting, I will share the updates as a blog, and once its final, will update the ML with an email. Along with this, from the Gluster project, in the last couple of years, we have noticed increased interest in 2 major use cases. First is using Gluster in container use cases, and the second is using it as a storage for VMs, especially with oVirt project, and also as hyperconverged storage in some cases. We see more stability and performance improvements should help our usecases with VMs. For container storage, Gluster?s official solution involved ?Heketi? project as the frontend to handle k8s APIs and provide storage from Gluster. We did try to come up with a new age management solution with GD2 , but haven?t got enough contributions on it to take it to completion. There were a couple of different approaches attempted too, gluster-subvol and piragua . But neither of them have seen major contributions. From the activity in github and other places, we see that there is still a major need for a proper solution. We are happy to discuss on this too. Please suggest your ideas. -------- Another topic while we are at Roadmap is, the discussion on github vs gerrit. There are some opinions in the group, saying that, we are not getting not many new developers because our project is hosted on gerrit, and most of the developer community is on github. We surely want your opinion on this. Lets use Doc: https://docs.google.com/document/d/16a-EyPRySPlJR3ioRgZRNohq7lM-2EmavulfDxlid_M/edit?usp=sharing for discussing on this. -------- This email is to kick start a discussion focused on our roadmap, discuss the priorities, look into what we can quickly do, and what we can achieve long term. We can have discussions about this in our community meeting, so we can cover most of the time-zones. If we need more time to finalize on things, then we can schedule a few more slots based on people?s preference. Maintainers, please send your preferences for the components you maintain as part of this discussion too. Again, we are planning to use collaborative tool hackmd ( https://hackmd.io/JtfYZr49QeGaNIlTvQNsaA) to capture the notes, and will publish it in a blog form once the meetings conclude. The actionable tasks will move to github issues from there. Looking for your active participation. Regards, Amar (@tumballi) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aspandey at redhat.com Mon Jul 8 06:52:54 2019 From: aspandey at redhat.com (Ashish Pandey) Date: Mon, 8 Jul 2019 02:52:54 -0400 (EDT) Subject: [Gluster-devel] Gluster Community Meeting (APAC friendly hours) Message-ID: <1917478196.26602423.1562568774768.JavaMail.zimbra@redhat.com> The following is a new meeting request: Subject: Gluster Community Meeting (APAC friendly hours) Organizer: "Ashish Pandey" Location: https://bluejeans.com/836554017 Time: Tuesday, July 9, 2019, 11:30:00 AM - 12:30:00 PM GMT +05:30 Chennai, Kolkata, Mumbai, New Delhi Invitees: gluster-users at gluster.org; gluster-devel at gluster.org; aspandey at redhat.com *~*~*~*~*~*~*~*~*~* Bridge: https://bluejeans.com/836554017 Minutes meeting: https://hackmd.io/Keo9lk_yRMK24QTEo7qr7g Previous Meeting notes: https://github.com/gluster/community/meetings Flash talk: Amar would like to talk about glusterfs 8.0 and its roadmap. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 3058 bytes Desc: not available URL: From hgowtham at redhat.com Mon Jul 8 09:07:34 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Mon, 8 Jul 2019 14:37:34 +0530 Subject: [Gluster-devel] Removing glupy from release 5.7 In-Reply-To: <20190705131709.GA5625@ndevos-x270> References: <20190620090508.GA13895@ndevos-x270> <5c0e83ae5e9c5360eadd505bbb33a5b94b61a99a.camel@redhat.com> <20190704142056.GG2674@ndevos-x270.lan.nixpanic.net> <20190705131709.GA5625@ndevos-x270> Message-ID: I have a few concerns about adding the python3 devel package and continuing the build. In the effort to make Gluster python3 compatible, https://github.com/gluster/glusterfs/issues/411 I think we have decided to skip working on Glupy to make it python3 compatible. (Correct me if i'm wrong.) As Glupy was decided to be deprecated. Though i don't see any mail thread regarding the same. I don't see any patches merged to make Glupy python3 compatible, as well. In such a case, I think its better to make changes to the configure.ac of release 5 to work with python2 alone. This way, Glupy will not be affected as well. And machines with python3 will also work because of the presence of python2. And no change will be needed on the infra side as well. We are a bit too late with the 5 series releases. If we are fine with this approach, I will send out a mail informing this, work on the patch and push it. On Fri, Jul 5, 2019 at 6:48 PM Niels de Vos wrote: > > On Thu, Jul 04, 2019 at 05:03:53PM +0200, Michael Scherer wrote: > > Le jeudi 04 juillet 2019 ? 16:20 +0200, Niels de Vos a ?crit : > > > On Wed, Jul 03, 2019 at 04:46:11PM +0200, Michael Scherer wrote: > > > > Le mercredi 03 juillet 2019 ? 20:03 +0530, Deepshikha Khandelwal a > > > > ?crit : > > > > > Misc, is EPEL got recently installed on the builders? > > > > > > > > No, it has been there since september 2016. What got changed is > > > > that > > > > python3 wasn't installed before. > > > > > > > > > Can you please resolve the 'Why EPEL on builders?'. EPEL+python3 > > > > > on > > > > > builders seems not a good option to have. > > > > > > > > > > > > Python 3 is pulled by 'mock', cf > > > > > > https://lists.gluster.org/pipermail/gluster-devel/2019-June/056347.html > > > > > > > > So sure, I can remove EPEL, but then it will remove mock. Or I can > > > > remove python3, and it will remove mock. 
> > > > > > > > But again, the problem is not with the set of installed packages on > > > > the > > > > builder, that's just showing there is a bug. > > > > > > > > The configure script do pick the latest python version: > > > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L612 > > > > > > > > if there is a python3, it take that, if not, it fall back to > > > > python2. > > > > > > > > then, later: > > > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L639 > > > > > > > > it verify the presence of what is required to build. > > > > > > > > So if there is a runtime version only of python3, it will detect > > > > python3, but not build anything, because the -devel subpackage is > > > > not h > > > > ere. > > > > > > > > There is 2 solutions: > > > > - fix that piece of code, so it doesn't just test the presence of > > > > python executable, but do that, and test the presence of headers > > > > before > > > > deciding if we need to build or not glupy. > > > > > > > > - use PYTHON env var to force python2, and document that it need to > > > > be > > > > done. > > > > > > What about option 3: > > > > > > - install python3-devel in addition to python3 > > > > That's a option, but I think that's a disservice for the users, since > > that's fixing our CI to no longer trigger a corner case, which doesn't > > mean the corner case no longer exist, just that we do not trigger it. > > This is only interesting for building releases/packages, I think. Normal > build environments have -devel packages installed for the components > that are used during the build process. The weird python2-devel and > python3 (without -devel) is definitely a corner case, but not something > people would normally have. And if so, we expect -devel for the python > version that is used, so developers would hopefully just install that on > their build system. > > Niels > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > -- Regards, Hari Gowtham. From ndevos at redhat.com Mon Jul 8 09:37:47 2019 From: ndevos at redhat.com (Niels de Vos) Date: Mon, 8 Jul 2019 11:37:47 +0200 Subject: [Gluster-devel] Removing glupy from release 5.7 In-Reply-To: References: <5c0e83ae5e9c5360eadd505bbb33a5b94b61a99a.camel@redhat.com> <20190704142056.GG2674@ndevos-x270.lan.nixpanic.net> <20190705131709.GA5625@ndevos-x270> Message-ID: <20190708093747.GB5625@ndevos-x270> On Mon, Jul 08, 2019 at 02:37:34PM +0530, Hari Gowtham wrote: > I have a few concerns about adding the python3 devel package and > continuing the build. > In the effort to make Gluster python3 compatible, > https://github.com/gluster/glusterfs/issues/411 > I think we have decided to skip working on Glupy to make it python3 compatible. > (Correct me if i'm wrong.) As Glupy was decided to be deprecated. > Though i don't see any mail thread regarding the same. > I don't see any patches merged to make Glupy python3 compatible, as well. > > In such a case, I think its better to make changes to the configure.ac > of release 5 to work with python2 alone. > This way, Glupy will not be affected as well. And machines with > python3 will also work because of the presence of python2. 
> And no change will be needed on the infra side as well. Building when only python3 is available should still keep working as well. Recent Fedora versions do not have python2 (by default?) anymore, and that may be true for other distributions too. configure.ac for release-5 and release-4.1 should probably prefer python2 before python3. Niels > We are a bit too late with the 5 series releases. If we are fine with > this approach, > I will send out a mail informing this, work on the patch and push it. > > > On Fri, Jul 5, 2019 at 6:48 PM Niels de Vos wrote: > > > > On Thu, Jul 04, 2019 at 05:03:53PM +0200, Michael Scherer wrote: > > > Le jeudi 04 juillet 2019 ? 16:20 +0200, Niels de Vos a ?crit : > > > > On Wed, Jul 03, 2019 at 04:46:11PM +0200, Michael Scherer wrote: > > > > > Le mercredi 03 juillet 2019 ? 20:03 +0530, Deepshikha Khandelwal a > > > > > ?crit : > > > > > > Misc, is EPEL got recently installed on the builders? > > > > > > > > > > No, it has been there since september 2016. What got changed is > > > > > that > > > > > python3 wasn't installed before. > > > > > > > > > > > Can you please resolve the 'Why EPEL on builders?'. EPEL+python3 > > > > > > on > > > > > > builders seems not a good option to have. > > > > > > > > > > > > > > > Python 3 is pulled by 'mock', cf > > > > > > > > https://lists.gluster.org/pipermail/gluster-devel/2019-June/056347.html > > > > > > > > > > So sure, I can remove EPEL, but then it will remove mock. Or I can > > > > > remove python3, and it will remove mock. > > > > > > > > > > But again, the problem is not with the set of installed packages on > > > > > the > > > > > builder, that's just showing there is a bug. > > > > > > > > > > The configure script do pick the latest python version: > > > > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L612 > > > > > > > > > > if there is a python3, it take that, if not, it fall back to > > > > > python2. > > > > > > > > > > then, later: > > > > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L639 > > > > > > > > > > it verify the presence of what is required to build. > > > > > > > > > > So if there is a runtime version only of python3, it will detect > > > > > python3, but not build anything, because the -devel subpackage is > > > > > not h > > > > > ere. > > > > > > > > > > There is 2 solutions: > > > > > - fix that piece of code, so it doesn't just test the presence of > > > > > python executable, but do that, and test the presence of headers > > > > > before > > > > > deciding if we need to build or not glupy. > > > > > > > > > > - use PYTHON env var to force python2, and document that it need to > > > > > be > > > > > done. > > > > > > > > What about option 3: > > > > > > > > - install python3-devel in addition to python3 > > > > > > That's a option, but I think that's a disservice for the users, since > > > that's fixing our CI to no longer trigger a corner case, which doesn't > > > mean the corner case no longer exist, just that we do not trigger it. > > > > This is only interesting for building releases/packages, I think. Normal > > build environments have -devel packages installed for the components > > that are used during the build process. The weird python2-devel and > > python3 (without -devel) is definitely a corner case, but not something > > people would normally have. And if so, we expect -devel for the python > > version that is used, so developers would hopefully just install that on > > their build system. 
> > > > Niels > > _______________________________________________ > > > > Community Meeting Calendar: > > > > APAC Schedule - > > Every 2nd and 4th Tuesday at 11:30 AM IST > > Bridge: https://bluejeans.com/836554017 > > > > NA/EMEA Schedule - > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > Bridge: https://bluejeans.com/486278655 > > > > Gluster-devel mailing list > > Gluster-devel at gluster.org > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > > > -- > Regards, > Hari Gowtham. From hgowtham at redhat.com Mon Jul 8 09:57:13 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Mon, 8 Jul 2019 15:27:13 +0530 Subject: [Gluster-devel] Removing glupy from release 5.7 In-Reply-To: <20190708093747.GB5625@ndevos-x270> References: <5c0e83ae5e9c5360eadd505bbb33a5b94b61a99a.camel@redhat.com> <20190704142056.GG2674@ndevos-x270.lan.nixpanic.net> <20190705131709.GA5625@ndevos-x270> <20190708093747.GB5625@ndevos-x270> Message-ID: I'll make the changes for prioritizing python2 over 3 for release 5 and 4. Let me know if we need to make any other changes. Thanks! On Mon, Jul 8, 2019 at 3:07 PM Niels de Vos wrote: > > On Mon, Jul 08, 2019 at 02:37:34PM +0530, Hari Gowtham wrote: > > I have a few concerns about adding the python3 devel package and > > continuing the build. > > In the effort to make Gluster python3 compatible, > > https://github.com/gluster/glusterfs/issues/411 > > I think we have decided to skip working on Glupy to make it python3 compatible. > > (Correct me if i'm wrong.) As Glupy was decided to be deprecated. > > Though i don't see any mail thread regarding the same. > > I don't see any patches merged to make Glupy python3 compatible, as well. > > > > In such a case, I think its better to make changes to the configure.ac > > of release 5 to work with python2 alone. > > This way, Glupy will not be affected as well. And machines with > > python3 will also work because of the presence of python2. > > And no change will be needed on the infra side as well. > > Building when only python3 is available should still keep working as > well. Recent Fedora versions do not have python2 (by default?) anymore, > and that may be true for other distributions too. > > configure.ac for release-5 and release-4.1 should probably prefer > python2 before python3. > > Niels > > > > We are a bit too late with the 5 series releases. If we are fine with > > this approach, > > I will send out a mail informing this, work on the patch and push it. > > > > > > On Fri, Jul 5, 2019 at 6:48 PM Niels de Vos wrote: > > > > > > On Thu, Jul 04, 2019 at 05:03:53PM +0200, Michael Scherer wrote: > > > > Le jeudi 04 juillet 2019 ? 16:20 +0200, Niels de Vos a ?crit : > > > > > On Wed, Jul 03, 2019 at 04:46:11PM +0200, Michael Scherer wrote: > > > > > > Le mercredi 03 juillet 2019 ? 20:03 +0530, Deepshikha Khandelwal a > > > > > > ?crit : > > > > > > > Misc, is EPEL got recently installed on the builders? > > > > > > > > > > > > No, it has been there since september 2016. What got changed is > > > > > > that > > > > > > python3 wasn't installed before. > > > > > > > > > > > > > Can you please resolve the 'Why EPEL on builders?'. EPEL+python3 > > > > > > > on > > > > > > > builders seems not a good option to have. > > > > > > > > > > > > > > > > > > Python 3 is pulled by 'mock', cf > > > > > > > > > > https://lists.gluster.org/pipermail/gluster-devel/2019-June/056347.html > > > > > > > > > > > > So sure, I can remove EPEL, but then it will remove mock. Or I can > > > > > > remove python3, and it will remove mock. 
> > > > > > > > > > > > But again, the problem is not with the set of installed packages on > > > > > > the > > > > > > builder, that's just showing there is a bug. > > > > > > > > > > > > The configure script do pick the latest python version: > > > > > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L612 > > > > > > > > > > > > if there is a python3, it take that, if not, it fall back to > > > > > > python2. > > > > > > > > > > > > then, later: > > > > > > https://github.com/gluster/glusterfs/blob/master/configure.ac#L639 > > > > > > > > > > > > it verify the presence of what is required to build. > > > > > > > > > > > > So if there is a runtime version only of python3, it will detect > > > > > > python3, but not build anything, because the -devel subpackage is > > > > > > not h > > > > > > ere. > > > > > > > > > > > > There is 2 solutions: > > > > > > - fix that piece of code, so it doesn't just test the presence of > > > > > > python executable, but do that, and test the presence of headers > > > > > > before > > > > > > deciding if we need to build or not glupy. > > > > > > > > > > > > - use PYTHON env var to force python2, and document that it need to > > > > > > be > > > > > > done. > > > > > > > > > > What about option 3: > > > > > > > > > > - install python3-devel in addition to python3 > > > > > > > > That's a option, but I think that's a disservice for the users, since > > > > that's fixing our CI to no longer trigger a corner case, which doesn't > > > > mean the corner case no longer exist, just that we do not trigger it. > > > > > > This is only interesting for building releases/packages, I think. Normal > > > build environments have -devel packages installed for the components > > > that are used during the build process. The weird python2-devel and > > > python3 (without -devel) is definitely a corner case, but not something > > > people would normally have. And if so, we expect -devel for the python > > > version that is used, so developers would hopefully just install that on > > > their build system. > > > > > > Niels > > > _______________________________________________ > > > > > > Community Meeting Calendar: > > > > > > APAC Schedule - > > > Every 2nd and 4th Tuesday at 11:30 AM IST > > > Bridge: https://bluejeans.com/836554017 > > > > > > NA/EMEA Schedule - > > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > > Bridge: https://bluejeans.com/486278655 > > > > > > Gluster-devel mailing list > > > Gluster-devel at gluster.org > > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > > > > > > > -- > > Regards, > > Hari Gowtham. -- Regards, Hari Gowtham. From rabhat at redhat.com Mon Jul 8 16:00:24 2019 From: rabhat at redhat.com (FNU Raghavendra Manjunath) Date: Mon, 8 Jul 2019 12:00:24 -0400 Subject: [Gluster-devel] fallocate behavior in glusterfs In-Reply-To: References: <1081f226-67c2-0d19-af99-c4d691b10484@redhat.com> Message-ID: I have sent a rfc patch [1] for review. https://review.gluster.org/#/c/glusterfs/+/23011/ On Thu, Jul 4, 2019 at 1:13 AM Pranith Kumar Karampuri wrote: > > > On Wed, Jul 3, 2019 at 10:59 PM FNU Raghavendra Manjunath < > rabhat at redhat.com> wrote: > >> >> >> On Wed, Jul 3, 2019 at 3:28 AM Pranith Kumar Karampuri < >> pkarampu at redhat.com> wrote: >> >>> >>> >>> On Wed, Jul 3, 2019 at 10:14 AM Ravishankar N >>> wrote: >>> >>>> >>>> On 02/07/19 8:52 PM, FNU Raghavendra Manjunath wrote: >>>> >>>> >>>> Hi All, >>>> >>>> In glusterfs, there is an issue regarding the fallocate behavior. 
In >>>> short, if someone does fallocate from the mount point with some size that >>>> is greater than the available size in the backend filesystem where the file >>>> is present, then fallocate can fail with a subset of the required number of >>>> blocks allocated and then failing in the backend filesystem with ENOSPC >>>> error. >>>> >>>> The behavior of fallocate in itself is simlar to how it would have been >>>> on a disk filesystem (atleast xfs where it was checked). i.e. allocates >>>> subset of the required number of blocks and then fail with ENOSPC. And the >>>> file in itself would show the number of blocks in stat to be whatever was >>>> allocated as part of fallocate. Please refer [1] where the issue is >>>> explained. >>>> >>>> Now, there is one small difference between how the behavior is between >>>> glusterfs and xfs. >>>> In xfs after fallocate fails, doing 'stat' on the file shows the number >>>> of blocks that have been allocated. Whereas in glusterfs, the number of >>>> blocks is shown as zero which makes tools like "du" show zero consumption. >>>> This difference in behavior in glusterfs is because of libglusterfs on how >>>> it handles sparse files etc for calculating number of blocks (mentioned in >>>> [1]) >>>> >>>> At this point I can think of 3 things on how to handle this. >>>> >>>> 1) Except for how many blocks are shown in the stat output for the file >>>> from the mount point (on which fallocate was done), the remaining behavior >>>> of attempting to allocate the requested size and failing when the >>>> filesystem becomes full is similar to that of XFS. >>>> >>>> Hence, what is required is to come up with a solution on how >>>> libglusterfs calculate blocks for sparse files etc (without breaking any of >>>> the existing components and features). This makes the behavior similar to >>>> that of backend filesystem. This might require its own time to fix >>>> libglusterfs logic without impacting anything else. >>>> >>>> I think we should just revert the commit >>>> b1a5fa55695f497952264e35a9c8eb2bbf1ec4c3 (BZ 817343) and see if it really >>>> breaks anything (or check whatever it breaks is something that we can live >>>> with). XFS speculative preallocation is not permanent and the extra space >>>> is freed up eventually. It can be sped up via procfs tunable: >>>> http://xfs.org/index.php/XFS_FAQ#Q:_How_can_I_speed_up_or_avoid_delayed_removal_of_speculative_preallocation.3F. >>>> We could also tune the allocsize option to a low value like 4k so that >>>> glusterfs quota is not affected. >>>> >>>> FWIW, ENOSPC is not the only fallocate problem in gluster because of >>>> 'iatt->ia_block' tweaking. It also breaks the --keep-size option (i.e. the >>>> FALLOC_FL_KEEP_SIZE flag in fallocate(2)) and reports incorrect du size. >>>> >>> Regards, >>>> Ravi >>>> >>>> >>>> OR >>>> >>>> 2) Once the fallocate fails in the backend filesystem, make posix >>>> xlator in the brick truncate the file to the previous size of the file >>>> before attempting fallocate. A patch [2] has been sent for this. But there >>>> is an issue with this when there are parallel writes and fallocate >>>> operations happening on the same file. It can lead to a data loss. >>>> >>>> a) statpre is obtained ===> before fallocate is attempted, get the stat >>>> hence the size of the file b) A parrallel Write fop on the same file that >>>> extends the file is successful c) Fallocate fails d) ftruncate truncates it >>>> to size given by statpre (i.e. 
the previous stat and the size obtained in >>>> step a) >>>> >>>> OR >>>> >>>> 3) Make posix check for available disk size before doing fallocate. >>>> i.e. in fallocate once posix gets the number of bytes to be allocated for >>>> the file from a particular offset, it checks whether so many bytes are >>>> available or not in the disk. If not, fail the fallocate fop with ENOSPC >>>> (without attempting it on the backend filesystem). >>>> >>>> There still is a probability of a parallel write happening while this >>>> fallocate is happening and by the time falllocate system call is attempted >>>> on the disk, the available space might have been less than what was >>>> calculated before fallocate. >>>> i.e. following things can happen >>>> >>>> a) statfs ===> get the available space of the backend filesystem >>>> b) a parallel write succeeds and extends the file >>>> c) fallocate is attempted assuming there is sufficient space in the >>>> backend >>>> >>>> While the above situation can arise, I think we are still fine. Because >>>> fallocate is attempted from the offset received in the fop. So, >>>> irrespective of whether write extended the file or not, the fallocate >>>> itself will be attempted for so many bytes from the offset which we found >>>> to be available by getting statfs information. >>>> >>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1724754#c3 >>>> [2] https://review.gluster.org/#/c/glusterfs/+/22969/ >>>> >>>> >>> option 2) will affect performance if we have to serialize all the data >>> operations on the file. >>> option 3) can still lead to the same problem we are trying to solve in a >>> different way. >>> - thread-1: fallocate came with 1MB size, Statfs says there is >>> 1MB space. >>> - thread-2: Write on a different file is attempted with 128KB >>> and succeeds >>> - thread-1: fallocate fails on the file after partially >>> allocating size because there doesn't exist 1MB anymore. >>> >>> >> Here I have a doubt. Even if a 128K write on the file succeeds, IIUC >> fallocate will try to reserve 1MB of space relative to the offset that was >> received as part of the fallocate call which was found to be available. >> So, despite write succeeding, the region fallocate aimed at was 1MB of >> space from a particular offset. As long as that is available, can posix >> still go ahead and perform the fallocate operation? >> > > It can go ahead and perform the operation. Just that in the case I > mentioned it will lead to partial success because the size fallocate wants > to reserve is not available. > > >> >> Regards, >> Raghavendra >> >> >> >> >>> So option-1 is what we need to explore and fix it so that the behavior >>> is closer to other posix filesystems. Maybe start with what Ravi suggested? >>> >>> >>>> Please provide feedback. 
>>>> >>>> Regards, >>>> Raghavendra >>>> >>>> _______________________________________________ >>>> >>>> Community Meeting Calendar: >>>> >>>> APAC Schedule - >>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>> Bridge: https://bluejeans.com/836554017 >>>> >>>> NA/EMEA Schedule - >>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>> Bridge: https://bluejeans.com/486278655 >>>> >>>> Gluster-devel mailing listGluster-devel at gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-devel >>>> >>>> _______________________________________________ >>>> >>>> Community Meeting Calendar: >>>> >>>> APAC Schedule - >>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>> Bridge: https://bluejeans.com/836554017 >>>> >>>> NA/EMEA Schedule - >>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>> Bridge: https://bluejeans.com/486278655 >>>> >>>> Gluster-devel mailing list >>>> Gluster-devel at gluster.org >>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>> >>>> >>> >>> -- >>> Pranith >>> >> > > -- > Pranith > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirr at nexedi.com Mon Jul 8 17:03:30 2019 From: kirr at nexedi.com (Kirill Smelkov) Date: Mon, 08 Jul 2019 17:03:30 +0000 Subject: [Gluster-devel] [PATCH, RESEND2] fuse: require /dev/fuse reads to have enough buffer capacity (take 2) In-Reply-To: <20190623072619.31037-1-kirr@nexedi.com> Message-ID: <20190708170314.27982-1-kirr@nexedi.com> [ This retries commit d4b13963f217 which was reverted in 766741fcaa1f. In this version we require only `sizeof(fuse_in_header) + sizeof(fuse_write_in)` instead of 4K for FUSE request header room, because, contrary to libfuse and kernel client behaviour, GlusterFS actually provides only so much room for request header. ] A FUSE filesystem server queues /dev/fuse sys_read calls to get filesystem requests to handle. It does not know in advance what would be that request as it can be anything that client issues - LOOKUP, READ, WRITE, ... Many requests are short and retrieve data from the filesystem. However WRITE and NOTIFY_REPLY write data into filesystem. Before getting into operation phase, FUSE filesystem server and kernel client negotiate what should be the maximum write size the client will ever issue. After negotiation the contract in between server/client is that the filesystem server then should queue /dev/fuse sys_read calls with enough buffer capacity to receive any client request - WRITE in particular, while FUSE client should not, in particular, send WRITE requests with > negotiated max_write payload. FUSE client in kernel and libfuse historically reserve 4K for request header. However an existing filesystem server - GlusterFS - was found which reserves only 80 bytes for header room (= `sizeof(fuse_in_header) + sizeof(fuse_write_in)`). https://lore.kernel.org/linux-fsdevel/20190611202738.GA22556 at deco.navytux.spb.ru/ https://github.com/gluster/glusterfs/blob/v3.8.15-0-gd174f021a/xlators/mount/fuse/src/fuse-bridge.c#L4894 Since `sizeof(fuse_in_header) + sizeof(fuse_write_in)` == `sizeof(fuse_in_header) + sizeof(fuse_read_in)` == `sizeof(fuse_in_header) + sizeof(fuse_notify_retrieve_in)` is the absolute minimum any sane filesystem should be using for header room, the contract is that filesystem server should queue sys_reads with `sizeof(fuse_in_header) + sizeof(fuse_write_in)` + max_write buffer. 
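As a rough sketch of what that contract looks like from the server side (illustrative only, not part of this patch), the read loop on /dev/fuse would size its buffer from the fixed header room plus the max_write value negotiated at FUSE_INIT time:

    #include <linux/fuse.h>   /* struct fuse_in_header, struct fuse_write_in */
    #include <stdlib.h>
    #include <unistd.h>

    /* Allocate a request buffer big enough for any request the kernel may
     * queue, including a full max_write WRITE or NOTIFY_REPLY payload. */
    static ssize_t
    read_fuse_request(int fuse_fd, size_t max_write, char **bufp)
    {
        size_t bufsize = sizeof(struct fuse_in_header) +
                         sizeof(struct fuse_write_in) + max_write;
        char *buf = malloc(bufsize);

        if (buf == NULL)
            return -1;

        /* With the change below, a read() with a smaller buffer gets EINVAL
         * back, instead of the request silently failing with EIO. */
        *bufp = buf;
        return read(fuse_fd, buf, bufsize);
    }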
If the filesystem server does not follow this contract, what can happen is that fuse_dev_do_read will see that request size is > buffer size, and then it will return EIO to client who issued the request but won't indicate in any way that there is a problem to filesystem server. This can be hard to diagnose because for some requests, e.g. for NOTIFY_REPLY which mimics WRITE, there is no client thread that is waiting for request completion and that EIO goes nowhere, while on filesystem server side things look like the kernel is not replying back after successful NOTIFY_RETRIEVE request made by the server. We can make the problem easy to diagnose if we indicate via error return to filesystem server when it is violating the contract. This should not practically cause problems because if a filesystem server is using shorter buffer, writes to it were already very likely to cause EIO, and if the filesystem is read-only it should be too following FUSE_MIN_READ_BUFFER minimum buffer size. Please see [1] for context where the problem of stuck filesystem was hit for real (because kernel client was incorrectly sending more than max_write data with NOTIFY_REPLY; see also previous patch), how the situation was traced and for more involving patch that did not make it into the tree. [1] https://marc.info/?l=linux-fsdevel&m=155057023600853&w=2 Signed-off-by: Kirill Smelkov Tested-by: Sander Eikelenboom Cc: Han-Wen Nienhuys Cc: Jakob Unterwurzacher --- fs/fuse/dev.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c index ea8237513dfa..b2b2344eadcf 100644 --- a/fs/fuse/dev.c +++ b/fs/fuse/dev.c @@ -1317,6 +1317,26 @@ static ssize_t fuse_dev_do_read(struct fuse_dev *fud, struct file *file, unsigned reqsize; unsigned int hash; + /* + * Require sane minimum read buffer - that has capacity for fixed part + * of any request header + negotiated max_write room for data. If the + * requirement is not satisfied return EINVAL to the filesystem server + * to indicate that it is not following FUSE server/client contract. + * Don't dequeue / abort any request. + * + * Historically libfuse reserves 4K for fixed header room, but e.g. + * GlusterFS reserves only 80 bytes + * + * = `sizeof(fuse_in_header) + sizeof(fuse_write_in)` + * + * which is the absolute minimum any sane filesystem should be using + * for header room. + */ + if (nbytes < max_t(size_t, FUSE_MIN_READ_BUFFER, + sizeof(struct fuse_in_header) + sizeof(struct fuse_write_in) + + fc->max_write)) + return -EINVAL; + restart: spin_lock(&fiq->waitq.lock); err = -EAGAIN; -- 2.20.1 From hgowtham at redhat.com Tue Jul 9 08:59:10 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Tue, 9 Jul 2019 14:29:10 +0530 Subject: [Gluster-devel] Release 6.3: Expected tagging on July 15th Message-ID: Hi, Expected tagging date for release-6.3 is on July, 15th, 2019. Please ensure required patches are backported and also are passing regressions and are appropriately reviewed for easy merging and tagging on the date. -- Regards, Hari Gowtham. From hgowtham at redhat.com Tue Jul 9 09:02:46 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Tue, 9 Jul 2019 14:32:46 +0530 Subject: [Gluster-devel] Release 4.1.10: Expected tagging on July 15th Message-ID: Hi, Expected tagging date for release-4.1.10 is on July, 15th 2019. NOTE: This is the last release for 4 series. Branch 4 will be EOLed after this. 
So if there are any critical patches please ensure they are backported and also are passing regressions and are appropriately reviewed for easy merging and tagging on the date. -- Regards, Hari Gowtham. From atumball at redhat.com Tue Jul 9 12:03:06 2019 From: atumball at redhat.com (Amar Tumballi Suryanarayan) Date: Tue, 9 Jul 2019 17:33:06 +0530 Subject: [Gluster-devel] Migration of the builders to Fedora 30 In-Reply-To: References: <1af917bd44ba629a663014eaee0a24208b422aca.camel@redhat.com> Message-ID: On Thu, Jul 4, 2019 at 9:55 PM Amar Tumballi Suryanarayan < atumball at redhat.com> wrote: > > > On Thu, Jul 4, 2019 at 9:37 PM Michael Scherer > wrote: > >> Hi, >> >> I have upgraded for testing some of the builder to F30 (because F28 is >> EOL and people did request newer version of stuff), and I was a bit >> surprised to see the result of the test of the jobs. >> >> So we have 10 jobs that run on those builders. >> >> 5 jobs run without trouble: >> - python-lint >> - clang-scan >> - clang-format >> - 32-bit-build-smoke >> - bugs-summary >> >> 1 is disabled, tsan. I didn't try to run it. >> >> 4 fails: >> - python-compliance >> > > OK to run, but skip voting, so we can eventually (soonish) fix this. > > >> - fedora-smoke >> > > Ideally we should soon fix it. Effort is ON. We have a bug for this: > https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5 > Can we re-run this on latest master? I think we are ready for fedora-smoke on fedora30 on latest master. > >> - gluster-csi-containers >> - glusterd2-containers >> >> > OK to drop for now. > > >> The job python-compliance fail like this: >> https://build.gluster.org/job/python-compliance/5813/ >> >> The fedora-smoke job, who is building on newer fedora (so newer gcc), >> is failling too: >> https://build.gluster.org/job/fedora-smos some new vol option that ought >> to be set?ke/6753/console >> >> >> Gluster-csi-containers is having trouble to run >> https://build.gluster.org/job/gluster-csi-containers/304/console >> https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5 >> but before, it did fail with "out of space": >> https://build.gluster.org/job/gluster-csi-containers/303/console >> >> and it also fail (well, should fail) with this: >> 16:51:07 make: *** No targets specified and no makefile found. Stop. >> >> which is indeed not present in the git repo, so this seems like the job >> is unmaintained. >> >> >> The last one to fail is glusterd2-containers: >> >> https://build.gluster.org/job/glusterd2-containers/323/console >> >> This one is fun, because it fail, but appear as ok on jenkins. It fail >> because of some ansible issue, due to newer Fedora. >> >> So, since we need to switch, here is what I would recommend: >> - switch the working job to F30 >> - wait 2 weeks, and switch fedora-smoke and python-compliance to F30. >> This will force someone to fix the problem. >> - drop the non fixed containers jobs, unless someone fix them, in 1 month. >> > > Looks like a good plan. 
> > >> >> -- >> Michael Scherer >> Sysadmin, Community Infrastructure >> >> >> >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> > > -- > Amar Tumballi (amarts) > -- Amar Tumballi (amarts) -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspandey at redhat.com Tue Jul 9 12:55:01 2019 From: aspandey at redhat.com (Ashish Pandey) Date: Tue, 9 Jul 2019 08:55:01 -0400 (EDT) Subject: [Gluster-devel] Gluster Community Meeting : 2019-07-09 In-Reply-To: <781545948.26833790.1562676847855.JavaMail.zimbra@redhat.com> Message-ID: <184783736.26834446.1562676901798.JavaMail.zimbra@redhat.com> Hi All, Today, we had Gluster Community Meeting and the minutes of meeting can be found on following link - https://github.com/gluster/community/blob/master/meetings/2019-07-09-Community_meeting.md --- Ashish -------------- next part -------------- An HTML attachment was scrubbed... URL: From atumball at redhat.com Tue Jul 9 15:30:58 2019 From: atumball at redhat.com (Amar Tumballi Suryanarayan) Date: Tue, 9 Jul 2019 21:00:58 +0530 Subject: [Gluster-devel] [Announcement] Gluster Community Update Message-ID: Hello Gluster community, Today marks a new day in the 26-year history of Red Hat. IBM has finalized its acquisition of Red Hat , which will operate as a distinct unit within IBM moving forward. What does this mean for Red Hat?s contributions to the Gluster project? In short, nothing. Red Hat always has and will continue to be a champion for open source and projects like Gluster. IBM is committed to Red Hat?s independence and role in open source software communities so that we can continue this work without interruption or changes. Our mission, governance, and objectives remain the same. We will continue to execute the existing project roadmap. Red Hat associates will continue to contribute to the upstream in the same ways they have been. And, as always, we will continue to help upstream projects be successful and contribute to welcoming new members and maintaining the project. We will do this together, with the community, as we always have. If you have questions or would like to learn more about today?s news, I encourage you to review the list of materials below. Red Hat CTO Chris Wright will host an online Q&A session in the coming days where you can ask questions you may have about what the acquisition means for Red Hat and our involvement in open source communities. Details will be announced on the Red Hat blog . - Press release - Chris Wright blog - Red Hat and IBM: Accelerating the adoption of open source - FAQ on Red Hat Community Blog Amar Tumballi, Maintainer, Lead, Gluster Community. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dkhandel at redhat.com Wed Jul 10 07:06:01 2019 From: dkhandel at redhat.com (Deepshikha Khandelwal) Date: Wed, 10 Jul 2019 12:36:01 +0530 Subject: [Gluster-devel] [Gluster-infra] Migration of the builders to Fedora 30 In-Reply-To: References: <1af917bd44ba629a663014eaee0a24208b422aca.camel@redhat.com> Message-ID: On Tue, Jul 9, 2019 at 5:34 PM Amar Tumballi Suryanarayan < atumball at redhat.com> wrote: > > > On Thu, Jul 4, 2019 at 9:55 PM Amar Tumballi Suryanarayan < > atumball at redhat.com> wrote: > >> >> >> On Thu, Jul 4, 2019 at 9:37 PM Michael Scherer >> wrote: >> >>> Hi, >>> >>> I have upgraded for testing some of the builder to F30 (because F28 is >>> EOL and people did request newer version of stuff), and I was a bit >>> surprised to see the result of the test of the jobs. >>> >>> So we have 10 jobs that run on those builders. >>> >>> 5 jobs run without trouble: >>> - python-lint >>> - clang-scan >>> - clang-format >>> - 32-bit-build-smoke >>> - bugs-summary >>> >>> 1 is disabled, tsan. I didn't try to run it. >>> >>> 4 fails: >>> - python-compliance >>> >> >> OK to run, but skip voting, so we can eventually (soonish) fix this. >> >> >>> - fedora-smoke >>> >> >> Ideally we should soon fix it. Effort is ON. We have a bug for this: >> https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5 >> > > Can we re-run this on latest master? I think we are ready for > fedora-smoke on fedora30 on latest master. > I have triggered a test run https://build.gluster.org/job/fedora-smoke/6817/console to fedora-smoke job on fedora30 builders. It is running on latest master. > > >> >>> - gluster-csi-containers >>> - glusterd2-containers >>> >>> >> OK to drop for now. >> > I have disabled these two container jobs: - gluster-csi-containers - glusterd2-containers >> >>> The job python-compliance fail like this: >>> https://build.gluster.org/job/python-compliance/5813/ >>> >>> The fedora-smoke job, who is building on newer fedora (so newer gcc), >>> is failling too: >>> https://build.gluster.org/job/fedora-smos some new vol option that ought >>> to be set?ke/6753/console >>> >>> >>> Gluster-csi-containers is having trouble to run >>> https://build.gluster.org/job/gluster-csi-containers/304/console >>> https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5 >>> but before, it did fail with "out of space": >>> https://build.gluster.org/job/gluster-csi-containers/303/console >>> >>> and it also fail (well, should fail) with this: >>> 16:51:07 make: *** No targets specified and no makefile found. Stop. >>> >>> which is indeed not present in the git repo, so this seems like the job >>> is unmaintained. >>> >>> >>> The last one to fail is glusterd2-containers: >>> >>> https://build.gluster.org/job/glusterd2-containers/323/console >>> >>> This one is fun, because it fail, but appear as ok on jenkins. It fail >>> because of some ansible issue, due to newer Fedora. >>> >>> So, since we need to switch, here is what I would recommend: >>> - switch the working job to F30 >>> - wait 2 weeks, and switch fedora-smoke and python-compliance to F30. >>> This will force someone to fix the problem. >>> - drop the non fixed containers jobs, unless someone fix them, in 1 >>> month. >>> >> >> Looks like a good plan. 
>> >> >>> >>> -- >>> Michael Scherer >>> Sysadmin, Community Infrastructure >>> >>> >>> >>> _______________________________________________ >>> >>> Community Meeting Calendar: >>> >>> APAC Schedule - >>> Every 2nd and 4th Tuesday at 11:30 AM IST >>> Bridge: https://bluejeans.com/836554017 >>> >>> NA/EMEA Schedule - >>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>> Bridge: https://bluejeans.com/486278655 >>> >>> Gluster-devel mailing list >>> Gluster-devel at gluster.org >>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>> >>> >> >> -- >> Amar Tumballi (amarts) >> > > > -- > Amar Tumballi (amarts) > _______________________________________________ > Gluster-infra mailing list > Gluster-infra at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-infra -------------- next part -------------- An HTML attachment was scrubbed... URL:
From spisla80 at gmail.com Wed Jul 10 10:10:41 2019 From: spisla80 at gmail.com (David Spisla) Date: Wed, 10 Jul 2019 12:10:41 +0200 Subject: [Gluster-devel] Re-Compile glusterd1 and add it to the stack Message-ID:
Hello Gluster Devels, I add a custom volume option to glusterd-volume-set.c . I could build my own RPMs but I don't want this, I only want to add new compiled glusterd to the stack. I tried it out to copy glusterd.so to /usr/lib64/glusterfs/x.x/xlator/mgmt . After this glusterd is running normally and I can create volumes but in the vol files my new option is not there and if I want to start the volume it failed. It seems to be that I need to add some other files to the stack. Any idea? Regards David Spisla -------------- next part -------------- An HTML attachment was scrubbed... URL:
From hgowtham at redhat.com Thu Jul 11 07:32:48 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Thu, 11 Jul 2019 13:02:48 +0530 Subject: [Gluster-devel] Release 5.7 or 5.8 Message-ID:
Hi, We came across a build issue with release 5.7. It was related to the Python version. A fix for it has been posted [ https://review.gluster.org/#/c/glusterfs/+/23028 ] Once we take this fix in, we need to go ahead with tagging and release it. Though we have tagged 5.7, we weren't able to package 5.7 because of this issue. Now the question is whether to create 5.7.1 or go with 5.8, as recreating a tag isn't an option. My take is to create 5.8 and mark 5.7 obsolete. The reasons are as below: *) We have moved on to using 5.x. Going back to 5.x.y will be confusing. *) 5.8 is also due, as we got delayed a lot by this issue. If you have any other opinion, please let us know so we can decide and go ahead with the best option. -- Regards, Hari Gowtham.
From hgowtham at redhat.com Thu Jul 11 09:09:54 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Thu, 11 Jul 2019 14:39:54 +0530 Subject: [Gluster-devel] Announcing Gluster release 6.3 Message-ID:
Hi, The Gluster community is pleased to announce the release of Gluster 6.3 (packages available at [1]). Release notes for the release can be found at [2]. Major changes, features and limitations addressed in this release: None Thanks, Gluster community [1] Packages for 6.3: https://download.gluster.org/pub/gluster/glusterfs/6/6.3/ [2] Release notes for 6.3: https://docs.gluster.org/en/latest/release-notes/6.3/ -- Regards, Hari Gowtham.
From pasik at iki.fi Thu Jul 11 11:57:39 2019 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Thu, 11 Jul 2019 14:57:39 +0300 Subject: [Gluster-devel] Release 6.3: Expected tagging on July 15th In-Reply-To: References: Message-ID: <20190711115739.GG26890@reaktio.net> On Tue, Jul 09, 2019 at 02:29:10PM +0530, Hari Gowtham wrote: > Hi, > > Expected tagging date for release-6.3 is on July, 15th, 2019. > Hmm.. wasn't release-6.3 already tagged one month ago, on Jun 11, based on: https://github.com/gluster/glusterfs/releases I think CentOS gluster repos also already have rpms for 6.3. > Please ensure required patches are backported and also are passing > regressions and are appropriately reviewed for easy merging and tagging > on the date. > .. so the next release would be 6.4 ? Thanks, -- Pasi From pasik at iki.fi Thu Jul 11 12:04:50 2019 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Thu, 11 Jul 2019 15:04:50 +0300 Subject: [Gluster-devel] Release 4.1.10: Expected tagging on July 15th In-Reply-To: References: Message-ID: <20190711120450.GH26890@reaktio.net> On Tue, Jul 09, 2019 at 02:32:46PM +0530, Hari Gowtham wrote: > Hi, > > Expected tagging date for release-4.1.10 is on July, 15th 2019. > > NOTE: This is the last release for 4 series. > > Branch 4 will be EOLed after this. So if there are any critical patches > please ensure they are backported and also are passing > regressions and are appropriately reviewed for easy merging and tagging > on the date. > Glusterfs 4.1.9 did close this issue: "gfapi: do not block epoll thread for upcall notifications": https://bugzilla.redhat.com/show_bug.cgi?id=1694563 But more patches are needed to properly fix the issue, so it'd really nice to have these patches backported to 4.1.10 aswell: "gfapi: fix incorrect initialization of upcall syncop arguments": https://bugzilla.redhat.com/show_bug.cgi?id=1718316 "Upcall: Avoid sending upcalls for invalid Inode": https://bugzilla.redhat.com/show_bug.cgi?id=1718338 This gfapi/upcall issue gets easily triggered with nfs-ganesha, and causes "complete IO hang", as can be seem here: "Complete IO hang on CentOS 7.5": https://github.com/nfs-ganesha/nfs-ganesha/issues/335 Thanks, -- Pasi > -- > Regards, > Hari Gowtham. From vbellur at redhat.com Thu Jul 11 18:34:57 2019 From: vbellur at redhat.com (Vijay Bellur) Date: Thu, 11 Jul 2019 11:34:57 -0700 Subject: [Gluster-devel] Re-Compile glusterd1 and add it to the stack In-Reply-To: References: Message-ID: Hi David, If the option is related to a particular translator, you would need to add that option in the options table of the translator and add code in glusterd-volgen.c to generate that option in the volfiles. Would it be possible to share the code diff that you are trying out? Regards, Vijay On Wed, Jul 10, 2019 at 3:11 AM David Spisla wrote: > Hello Gluster Devels, > > I add a custom volume option to glusterd-volume-set.c . I could build my > own RPMs but I don't want this, I only want to add new compiled glusterd to > the stack. I tried it out to copy glusterd.so to > /usr/lib64/glusterfs/x.x/xlator/mgmt . After this glusterd is running > normally and I can create volumes but in the vol files my new option is not > there and if I want to start the volume it failed. > > It seems to be that I need to add some other files to the stack. Any idea? 
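For illustration, the translator-side change mentioned above usually has roughly the shape below. This is only a sketch: the option name, private structure and header path are placeholders, not the actual option from David's patch, and allocation of the private structure is elided.

    #include <glusterfs/xlator.h>   /* assuming the installed glusterfs-devel headers */

    typedef struct {
        gf_boolean_t my_feature;    /* placeholder for the real private struct */
    } my_priv_t;

    /* Entry in the xlator's options table. */
    struct volume_options options[] = {
        {
            .key = {"my-feature"},
            .type = GF_OPTION_TYPE_BOOL,
            .default_value = "off",
            .description = "Example toggle exposed through 'gluster volume set'.",
        },
        {.key = {NULL}},
    };

    int32_t
    init(xlator_t *this)
    {
        my_priv_t *priv = this->private;

        /* Without GF_OPTION_INIT the value from the volfile is never read. */
        GF_OPTION_INIT("my-feature", priv->my_feature, bool, out);
        return 0;
    out:
        return -1;
    }

On top of this, glusterd has to know about the key (the glusterd-volume-set.c / glusterd-volgen.c side discussed in this thread) so that 'gluster volume set' writes it into the generated volfiles.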
> > Regards > David Spisla > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hgowtham at redhat.com Thu Jul 11 18:54:45 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Fri, 12 Jul 2019 00:24:45 +0530 Subject: [Gluster-devel] Release 6.3: Expected tagging on July 15th In-Reply-To: <20190711115739.GG26890@reaktio.net> References: <20190711115739.GG26890@reaktio.net> Message-ID: Sorry about the typo. It's 6.4. Thanks for correcting it. On Thu, 11 Jul, 2019, 5:34 PM Pasi K?rkk?inen, wrote: > On Tue, Jul 09, 2019 at 02:29:10PM +0530, Hari Gowtham wrote: > > Hi, > > > > Expected tagging date for release-6.3 is on July, 15th, 2019. > > > > Hmm.. wasn't release-6.3 already tagged one month ago, on Jun 11, based on: > https://github.com/gluster/glusterfs/releases > > I think CentOS gluster repos also already have rpms for 6.3. > > > > Please ensure required patches are backported and also are passing > > regressions and are appropriately reviewed for easy merging and tagging > > on the date. > > > > .. so the next release would be 6.4 ? > > > Thanks, > > -- Pasi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndevos at redhat.com Fri Jul 12 15:43:03 2019 From: ndevos at redhat.com (Niels de Vos) Date: Fri, 12 Jul 2019 17:43:03 +0200 Subject: [Gluster-devel] Release 5.7 or 5.8 In-Reply-To: References: Message-ID: <20190712154303.GH3700@ndevos-x270> On Thu, Jul 11, 2019 at 01:02:48PM +0530, Hari Gowtham wrote: > Hi, > > We came across an build issue with release 5.7. It was related the > python version. > A fix for it ha been posted [ https://review.gluster.org/#/c/glusterfs/+/23028 ] > Once we take this fix in we need to go ahead with tagging and release it. > Though we have tagged 5.7, we weren't able to package 5.7 because of this issue. > > Now the question is, to create 5.7.1 or go with 5.8 as recreating a > tag isn't an option. > My take is to create 5.8 and mark 5.7 obsolete. And the reasons are as below: > *) We have moved on to using 5.x. Going back to 5.x.y will be confusing. > *) 5.8 is also due as we got delayed a lot in this issue. > > If we have any other opinion, please let us know so we can decide and > go ahead with the best option. I would go with 5.7.1. However if 5.8 would be tagged around the same time, then only do 5.8. Niels From manu at netbsd.org Sat Jul 13 01:57:03 2019 From: manu at netbsd.org (Emmanuel Dreyfus) Date: Sat, 13 Jul 2019 03:57:03 +0200 Subject: [Gluster-devel] directory filehandles Message-ID: <1oamuwo.12b2396p3cs55M%manu@netbsd.org> Hello I have trouble figuring the whole story about how to cope with FUSE directory filehandles in the NetBSD implementation. libfuse makes a special use of filehandles exposed to filesystem for OPENDIR, READDIR, FSYNCDIR, and RELEASEDIR. For that four operations, the fh is a pointer to a struct fuse_dh, in which the fh field is exposed to the filesystem. All other filesystem operations pass the fh as is from kernel to filesystem back and forth. 
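In other words, the libfuse convention being described is roughly the following (a simplified sketch of the idea, not libfuse's actual code; error handling elided):

    #include <stdint.h>
    #include <stdlib.h>

    /* Wrapper libfuse keeps for directory handles; only the inner 'fh'
     * field belongs to the filesystem implementation. */
    struct fuse_dh {
        uint64_t fh;      /* what the filesystem stored in opendir() */
        /* ... libfuse-internal readdir state would live here ... */
    };

    /* OPENDIR: the kernel is handed a pointer to the wrapper. */
    static uint64_t
    wrap_dir_handle(uint64_t fs_fh)
    {
        struct fuse_dh *dh = calloc(1, sizeof(*dh));

        dh->fh = fs_fh;
        return (uint64_t)(uintptr_t)dh;
    }

    /* READDIR, FSYNCDIR, RELEASEDIR: libfuse unwraps before calling the
     * filesystem; every other operation gets the stored value verbatim. */
    static uint64_t
    unwrap_dir_handle(uint64_t kernel_fh)
    {
        struct fuse_dh *dh = (struct fuse_dh *)(uintptr_t)kernel_fh;

        return dh->fh;
    }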
That means that a fh obtained by OPENDIR should never be passed to operations others than (READDIR, FSYNCDIR and RELEASEDIR). For instance, when porting ltfs to NetBSD, I experienced that passing a fh obtained from OPENDIR to SETATTR would crash. glusterfs implementation differs from libfuse because it seems the filesystem is always passed as is: there is nothing like libfuse struct fuse_dh. It will therefore happily accept fh obtained by OPENDIR for any operation, something that I do not expect to happen in libfuse based filesystems. My real concern is SETLK on directory. Here glusterfs really wants a fh or it will report an error. The NetBSD implementation passes the fh it got from OPENDIR, but I expect a libfuse based filesystem to crash in such a situation. For now I did not find any libfuse-based filesystem that implements locking, so I could not test that. Could someone clarify this? What are the FUSE operations that should be sent to filesystem on that kind of program? int fd; /* NetBSD calls FUSE LOOKUP and OPENDIR */ if ((fd = open("/gfs/tmp", O_RDONLY, 0)) == -1) err(1, "open failed"); /* NetBSD calls FUSE SETLKW */ if (flock(fd, LOCK_EX) == -1) err(1, "flock failed"); -- Emmanuel Dreyfus http://hcpnet.free.fr/pubz manu at netbsd.org From amukherj at redhat.com Sun Jul 14 05:24:39 2019 From: amukherj at redhat.com (Atin Mukherjee) Date: Sun, 14 Jul 2019 10:54:39 +0530 Subject: [Gluster-devel] Rebase your patches to avoid fedora-smoke failure Message-ID: With https://review.gluster.org/23033 being now merged, we should be unblocked on the fedora-smoke failure. Request all of the patch owners to rebase your respective patches to get unblocked. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amukherj at redhat.com Sun Jul 14 14:15:53 2019 From: amukherj at redhat.com (Atin Mukherjee) Date: Sun, 14 Jul 2019 19:45:53 +0530 Subject: [Gluster-devel] Release 5.7 or 5.8 In-Reply-To: <20190712154303.GH3700@ndevos-x270> References: <20190712154303.GH3700@ndevos-x270> Message-ID: On Fri, 12 Jul 2019 at 21:14, Niels de Vos wrote: > On Thu, Jul 11, 2019 at 01:02:48PM +0530, Hari Gowtham wrote: > > Hi, > > > > We came across an build issue with release 5.7. It was related the > > python version. > > A fix for it ha been posted [ > https://review.gluster.org/#/c/glusterfs/+/23028 ] > > Once we take this fix in we need to go ahead with tagging and release it. > > Though we have tagged 5.7, we weren't able to package 5.7 because of > this issue. > > > > Now the question is, to create 5.7.1 or go with 5.8 as recreating a > > tag isn't an option. > > My take is to create 5.8 and mark 5.7 obsolete. And the reasons are as > below: > > *) We have moved on to using 5.x. Going back to 5.x.y will be confusing. > > *) 5.8 is also due as we got delayed a lot in this issue. > > > > If we have any other opinion, please let us know so we can decide and > > go ahead with the best option. > > I would go with 5.7.1. However if 5.8 would be tagged around the same > time, then only do 5.8. Since 5.8 is nearing, lets do 5.8 instead of 5.7.1? 
> > Niels > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > -- - Atin (atinm) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jenkins at build.gluster.org Mon Jul 15 01:45:02 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 15 Jul 2019 01:45:02 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <523618633.64.1563155102990.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1727430 / arbiter: CPU Spike casue files unavailable https://bugzilla.redhat.com/1722708 / bitrot: WORM: Segmentation Fault if bitrot stub do signature https://bugzilla.redhat.com/1722709 / bitrot: WORM: Segmentation Fault if bitrot stub do signature https://bugzilla.redhat.com/1726935 / core: (glusterfs-6.4) - GlusterFS 6.4 tracker https://bugzilla.redhat.com/1729052 / core: glusterfs-fuse client mount point process stuck du to inode->lock and table->lock deadlock https://bugzilla.redhat.com/1729085 / disperse: [EC] shd crashed while heal failed due to out of memory error. https://bugzilla.redhat.com/1723617 / distribute: nfs-ganesha gets empty stat (all zero) when glfs_mkdir return success https://bugzilla.redhat.com/1726175 / fuse: CentOs 6 GlusterFS client creates files with time 01/01/1970 https://bugzilla.redhat.com/1726038 / ganesha-nfs: ganesha : nfstest_lock from NFSTest failed on v3 https://bugzilla.redhat.com/1724618 / ganesha-nfs: ganesha : nfstest_posix from NFSTest failed https://bugzilla.redhat.com/1722390 / glusterd: "All subvolumes are down" when all bricks are online https://bugzilla.redhat.com/1722187 / glusterd: Glusterd Seg faults (sig 11) when RDMA used with MLNX_OFED https://bugzilla.redhat.com/1728183 / gluster-smb: SMBD thread panics on file operations from Windows, OS X and Windows when using vfs_glusterfs https://bugzilla.redhat.com/1726205 / md-cache: Windows client fails to copy large file to GlusterFS volume share with fruit and streams_xattr VFS modules via Samba https://bugzilla.redhat.com/1727727 / project-infrastructure: Build+Packaging Automation https://bugzilla.redhat.com/1724957 / project-infrastructure: Grant additional maintainers merge rights on release branches https://bugzilla.redhat.com/1728120 / project-infrastructure: Not able to access core https://build.gluster.org/job/regression-test-with-multiplex/1402/consoleFull https://bugzilla.redhat.com/1721353 / project-infrastructure: Run 'line-coverage' regression runs on a latest fedora machine (say fedora30). https://bugzilla.redhat.com/1721462 / quota: Quota limits not honored writes allowed past quota limit. https://bugzilla.redhat.com/1723781 / tests: Run 'known-issues' and 'bad-tests' in line-coverage test (nightly) https://bugzilla.redhat.com/1724624 / upcall: LINK does not invalidate metadata cache of parent directory [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... 
Name: build.log Type: application/octet-stream Size: 2782 bytes Desc: not available URL: From jthottan at redhat.com Mon Jul 15 11:57:35 2019 From: jthottan at redhat.com (Jiffin Tony Thottan) Date: Mon, 15 Jul 2019 17:27:35 +0530 Subject: [Gluster-devel] Requesting reviews [Re: Release 7 Branch Created] In-Reply-To: <891942035.493157.1563191603936.JavaMail.zimbra@redhat.com> References: <891942035.493157.1563191603936.JavaMail.zimbra@redhat.com> Message-ID: <39a54689-c8b8-1393-bd1c-03e4a21f464b@redhat.com> Hi, The "Add Ganesha HA bits back to glusterfs code repo"[1] is targeted for glusterfs-7. Requesting maintainers to review below two patches [1] https://review.gluster.org/#/q/topic:ref-663+(status:open+OR+status:merged) Regards, Jiffin On 15/07/19 5:23 PM, Jiffin Thottan wrote: > > ----- Original Message ----- > From: "Rinku Kothiya" > To: maintainers at gluster.org, gluster-devel at gluster.org, "Shyam Ranganathan" > Sent: Wednesday, July 3, 2019 10:30:58 AM > Subject: [Gluster-devel] Release 7 Branch Created > > Hi Team, > > Release 7 branch has been created in upstream. > > ## Schedule > > Curretnly the plan working backwards on the schedule, here's what we have: > - Announcement: Week of Aug 4th, 2019 > - GA tagging: Aug-02-2019 > - RC1: On demand before GA > - RC0: July-03-2019 > - Late features cut-off: Week of June-24th, 2018 > - Branching (feature cutoff date): June-17-2018 > > Regards > Rinku > > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > From spisla80 at gmail.com Mon Jul 15 14:00:48 2019 From: spisla80 at gmail.com (David Spisla) Date: Mon, 15 Jul 2019 16:00:48 +0200 Subject: [Gluster-devel] Re-Compile glusterd1 and add it to the stack In-Reply-To: References: Message-ID: Hello Vijay, there is a patch file attached. You can see the code there. I oriented myself here: https://review.gluster.org/#/c/glusterfs/+/18633/ As you can see there is no additional code in glusterd-volgen.c . Both glusterd-volgen.c and glusterd-volume.set.c will be compiled into glusterd.so . Its still the problem, that my new option is not available if I only re-compile glusterd.so . Compiling and using the whole RPMs is working It is not possible to re-compile glusterd.so ? Regards David Spisla Am Do., 11. Juli 2019 um 20:35 Uhr schrieb Vijay Bellur : > Hi David, > > If the option is related to a particular translator, you would need to add > that option in the options table of the translator and add code in > glusterd-volgen.c to generate that option in the volfiles. > > Would it be possible to share the code diff that you are trying out? > > Regards, > Vijay > > On Wed, Jul 10, 2019 at 3:11 AM David Spisla wrote: > >> Hello Gluster Devels, >> >> I add a custom volume option to glusterd-volume-set.c . I could build my >> own RPMs but I don't want this, I only want to add new compiled glusterd to >> the stack. I tried it out to copy glusterd.so to >> /usr/lib64/glusterfs/x.x/xlator/mgmt . After this glusterd is running >> normally and I can create volumes but in the vol files my new option is not >> there and if I want to start the volume it failed. >> >> It seems to be that I need to add some other files to the stack. Any idea? 
>> >> Regards >> David Spisla >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: diff-option.patch Type: application/octet-stream Size: 3116 bytes Desc: not available URL: From amukherj at redhat.com Mon Jul 15 14:14:50 2019 From: amukherj at redhat.com (Atin Mukherjee) Date: Mon, 15 Jul 2019 19:44:50 +0530 Subject: [Gluster-devel] Requesting reviews [Re: Release 7 Branch Created] In-Reply-To: <39a54689-c8b8-1393-bd1c-03e4a21f464b@redhat.com> References: <891942035.493157.1563191603936.JavaMail.zimbra@redhat.com> <39a54689-c8b8-1393-bd1c-03e4a21f464b@redhat.com> Message-ID: Please ensure : 1. commit message has the explanation on the motive behind this change. 2. I always feel more confident if a patch has passed regression to kick start the review. Can you please ensure that verified flag is put up? On Mon, Jul 15, 2019 at 5:27 PM Jiffin Tony Thottan wrote: > Hi, > > The "Add Ganesha HA bits back to glusterfs code repo"[1] is targeted for > glusterfs-7. Requesting maintainers to review below two patches > > [1] > https://review.gluster.org/#/q/topic:ref-663+(status:open+OR+status:merged) > > Regards, > > Jiffin > > On 15/07/19 5:23 PM, Jiffin Thottan wrote: > > > > ----- Original Message ----- > > From: "Rinku Kothiya" > > To: maintainers at gluster.org, gluster-devel at gluster.org, "Shyam > Ranganathan" > > Sent: Wednesday, July 3, 2019 10:30:58 AM > > Subject: [Gluster-devel] Release 7 Branch Created > > > > Hi Team, > > > > Release 7 branch has been created in upstream. > > > > ## Schedule > > > > Curretnly the plan working backwards on the schedule, here's what we > have: > > - Announcement: Week of Aug 4th, 2019 > > - GA tagging: Aug-02-2019 > > - RC1: On demand before GA > > - RC0: July-03-2019 > > - Late features cut-off: Week of June-24th, 2018 > > - Branching (feature cutoff date): June-17-2018 > > > > Regards > > Rinku > > > > _______________________________________________ > > > > Community Meeting Calendar: > > > > APAC Schedule - > > Every 2nd and 4th Tuesday at 11:30 AM IST > > Bridge: https://bluejeans.com/836554017 > > > > NA/EMEA Schedule - > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > Bridge: https://bluejeans.com/486278655 > > > > Gluster-devel mailing list > > Gluster-devel at gluster.org > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amukherj at redhat.com Mon Jul 15 15:20:10 2019 From: amukherj at redhat.com (Atin Mukherjee) Date: Mon, 15 Jul 2019 20:50:10 +0530 Subject: [Gluster-devel] Re-Compile glusterd1 and add it to the stack In-Reply-To: References: Message-ID: David - I don't see a GF_OPTION_INIT in init () of read-only.c . How is that working even when you're compiling the entire source? On Mon, Jul 15, 2019 at 7:40 PM David Spisla wrote: > Hello Vijay, > there is a patch file attached. You can see the code there. I oriented > myself here: > https://review.gluster.org/#/c/glusterfs/+/18633/ > > As you can see there is no additional code in glusterd-volgen.c . Both > glusterd-volgen.c and glusterd-volume.set.c will be compiled into > glusterd.so . > Its still the problem, that my new option is not available if I only > re-compile glusterd.so . Compiling and using the whole RPMs is working > > It is not possible to re-compile glusterd.so ? > > Regards > David Spisla > > Am Do., 11. Juli 2019 um 20:35 Uhr schrieb Vijay Bellur < > vbellur at redhat.com>: > >> Hi David, >> >> If the option is related to a particular translator, you would need to >> add that option in the options table of the translator and add code in >> glusterd-volgen.c to generate that option in the volfiles. >> >> Would it be possible to share the code diff that you are trying out? >> >> Regards, >> Vijay >> >> On Wed, Jul 10, 2019 at 3:11 AM David Spisla wrote: >> >>> Hello Gluster Devels, >>> >>> I add a custom volume option to glusterd-volume-set.c . I could build my >>> own RPMs but I don't want this, I only want to add new compiled glusterd to >>> the stack. I tried it out to copy glusterd.so to >>> /usr/lib64/glusterfs/x.x/xlator/mgmt . After this glusterd is running >>> normally and I can create volumes but in the vol files my new option is not >>> there and if I want to start the volume it failed. >>> >>> It seems to be that I need to add some other files to the stack. Any >>> idea? >>> >>> Regards >>> David Spisla >>> _______________________________________________ >>> >>> Community Meeting Calendar: >>> >>> APAC Schedule - >>> Every 2nd and 4th Tuesday at 11:30 AM IST >>> Bridge: https://bluejeans.com/836554017 >>> >>> NA/EMEA Schedule - >>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>> Bridge: https://bluejeans.com/486278655 >>> >>> Gluster-devel mailing list >>> Gluster-devel at gluster.org >>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>> >>> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spisla80 at gmail.com Mon Jul 15 15:28:32 2019 From: spisla80 at gmail.com (David Spisla) Date: Mon, 15 Jul 2019 17:28:32 +0200 Subject: [Gluster-devel] Re-Compile glusterd1 and add it to the stack In-Reply-To: References: Message-ID: You are rigth. When creating the patch file, I did a mistake. Attached should be the complete one Regards David Am Mo., 15. Juli 2019 um 17:20 Uhr schrieb Atin Mukherjee < amukherj at redhat.com>: > David - I don't see a GF_OPTION_INIT in init () of read-only.c . How is > that working even when you're compiling the entire source? > > On Mon, Jul 15, 2019 at 7:40 PM David Spisla wrote: > >> Hello Vijay, >> there is a patch file attached. You can see the code there. I oriented >> myself here: >> https://review.gluster.org/#/c/glusterfs/+/18633/ >> >> As you can see there is no additional code in glusterd-volgen.c . Both >> glusterd-volgen.c and glusterd-volume.set.c will be compiled into >> glusterd.so . >> Its still the problem, that my new option is not available if I only >> re-compile glusterd.so . Compiling and using the whole RPMs is working >> >> It is not possible to re-compile glusterd.so ? >> >> Regards >> David Spisla >> >> Am Do., 11. Juli 2019 um 20:35 Uhr schrieb Vijay Bellur < >> vbellur at redhat.com>: >> >>> Hi David, >>> >>> If the option is related to a particular translator, you would need to >>> add that option in the options table of the translator and add code in >>> glusterd-volgen.c to generate that option in the volfiles. >>> >>> Would it be possible to share the code diff that you are trying out? >>> >>> Regards, >>> Vijay >>> >>> On Wed, Jul 10, 2019 at 3:11 AM David Spisla wrote: >>> >>>> Hello Gluster Devels, >>>> >>>> I add a custom volume option to glusterd-volume-set.c . I could build >>>> my own RPMs but I don't want this, I only want to add new compiled glusterd >>>> to the stack. I tried it out to copy glusterd.so to >>>> /usr/lib64/glusterfs/x.x/xlator/mgmt . After this glusterd is running >>>> normally and I can create volumes but in the vol files my new option is not >>>> there and if I want to start the volume it failed. >>>> >>>> It seems to be that I need to add some other files to the stack. Any >>>> idea? 
>>>> >>>> Regards >>>> David Spisla >>>> _______________________________________________ >>>> >>>> Community Meeting Calendar: >>>> >>>> APAC Schedule - >>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>> Bridge: https://bluejeans.com/836554017 >>>> >>>> NA/EMEA Schedule - >>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>> Bridge: https://bluejeans.com/486278655 >>>> >>>> Gluster-devel mailing list >>>> Gluster-devel at gluster.org >>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>> >>>> _______________________________________________ >>> >>> Community Meeting Calendar: >>> >>> APAC Schedule - >>> Every 2nd and 4th Tuesday at 11:30 AM IST >>> Bridge: https://bluejeans.com/836554017 >>> >>> NA/EMEA Schedule - >>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>> Bridge: https://bluejeans.com/486278655 >>> >>> Gluster-devel mailing list >>> Gluster-devel at gluster.org >>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>> >>> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: diff-option.patch Type: application/octet-stream Size: 3424 bytes Desc: not available URL: From amarts at gmail.com Tue Jul 16 04:18:58 2019 From: amarts at gmail.com (Amar Tumballi) Date: Tue, 16 Jul 2019 09:48:58 +0530 Subject: [Gluster-devel] directory filehandles In-Reply-To: <1oamuwo.12b2396p3cs55M%manu@netbsd.org> References: <1oamuwo.12b2396p3cs55M%manu@netbsd.org> Message-ID: On Sat, Jul 13, 2019 at 7:33 AM Emmanuel Dreyfus wrote: > Hello > > I have trouble figuring the whole story about how to cope with FUSE > directory filehandles in the NetBSD implementation. > > libfuse makes a special use of filehandles exposed to filesystem for > OPENDIR, READDIR, FSYNCDIR, and RELEASEDIR. For that four operations, > the fh is a pointer to a struct fuse_dh, in which the fh field is > exposed to the filesystem. All other filesystem operations pass the fh > as is from kernel to filesystem back and forth. > > That means that a fh obtained by OPENDIR should never be passed to > operations others than (READDIR, FSYNCDIR and RELEASEDIR). For instance, > when porting ltfs to NetBSD, I experienced that passing a fh obtained > from OPENDIR to SETATTR would crash. > > glusterfs implementation differs from libfuse because it seems the > filesystem is always passed as is: there is nothing like libfuse struct > fuse_dh. It will therefore happily accept fh obtained by OPENDIR for any > operation, something that I do not expect to happen in libfuse based > filesystems. > > It would be great to add these comments as part of https://github.com/gluster/glusterfs/issues/153. My take is to start working in the direction of rebasing gluster code to use libfuse in future than to maintain our own changes. Would that help if we move in that direction? > My real concern is SETLK on directory. Here glusterfs really wants a fh > or it will report an error. The NetBSD implementation passes the fh it > got from OPENDIR, but I expect a libfuse based filesystem to crash in > such a situation. 
For now I did not find any libfuse-based filesystem > that implements locking, so I could not test that. > > Could someone clarify this? What are the FUSE operations that should be > sent to filesystem on that kind of program? > > int fd; > > /* NetBSD calls FUSE LOOKUP and OPENDIR */ > if ((fd = open("/gfs/tmp", O_RDONLY, 0)) == -1) > err(1, "open failed"); > > /* NetBSD calls FUSE SETLKW */ > if (flock(fd, LOCK_EX) == -1) > err(1, "flock failed"); > > Csaba, Raghavendra, Any suggestions here? -Amar > > > -- > Emmanuel Dreyfus > http://hcpnet.free.fr/pubz > manu at netbsd.org > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skoduri at redhat.com Tue Jul 16 14:37:18 2019 From: skoduri at redhat.com (Soumya Koduri) Date: Tue, 16 Jul 2019 20:07:18 +0530 Subject: [Gluster-devel] Release 4.1.10: Expected tagging on July 15th In-Reply-To: <20190711120450.GH26890@reaktio.net> References: <20190711120450.GH26890@reaktio.net> Message-ID: <7d653fd1-4212-d482-01fd-c593ffe6384f@redhat.com> On 7/11/19 5:34 PM, Pasi K?rkk?inen wrote: > On Tue, Jul 09, 2019 at 02:32:46PM +0530, Hari Gowtham wrote: >> Hi, >> >> Expected tagging date for release-4.1.10 is on July, 15th 2019. >> >> NOTE: This is the last release for 4 series. >> >> Branch 4 will be EOLed after this. So if there are any critical patches >> please ensure they are backported and also are passing >> regressions and are appropriately reviewed for easy merging and tagging >> on the date. >> > > Glusterfs 4.1.9 did close this issue: > > "gfapi: do not block epoll thread for upcall notifications": > https://bugzilla.redhat.com/show_bug.cgi?id=1694563 > > > But more patches are needed to properly fix the issue, so it'd really nice to have these patches backported to 4.1.10 aswell: > > "gfapi: fix incorrect initialization of upcall syncop arguments": > https://bugzilla.redhat.com/show_bug.cgi?id=1718316 > > "Upcall: Avoid sending upcalls for invalid Inode": > https://bugzilla.redhat.com/show_bug.cgi?id=1718338 > > > This gfapi/upcall issue gets easily triggered with nfs-ganesha, and causes "complete IO hang", as can be seem here: > > "Complete IO hang on CentOS 7.5": > https://github.com/nfs-ganesha/nfs-ganesha/issues/335 Thanks Pasi. @Hari, The smoke tests have passed for these patches now. Kindly merge them. Thanks, Soumya > > > Thanks, > > -- Pasi > > >> -- >> Regards, >> Hari Gowtham. 
> > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > From hgowtham at redhat.com Tue Jul 16 20:54:29 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Wed, 17 Jul 2019 02:24:29 +0530 Subject: [Gluster-devel] Release 4.1.10: Expected tagging on July 15th In-Reply-To: <7d653fd1-4212-d482-01fd-c593ffe6384f@redhat.com> References: <20190711120450.GH26890@reaktio.net> <7d653fd1-4212-d482-01fd-c593ffe6384f@redhat.com> Message-ID: Thanks Soumya. Have merged them. On Tue, Jul 16, 2019 at 8:07 PM Soumya Koduri wrote: > > > > On 7/11/19 5:34 PM, Pasi K?rkk?inen wrote: > > On Tue, Jul 09, 2019 at 02:32:46PM +0530, Hari Gowtham wrote: > >> Hi, > >> > >> Expected tagging date for release-4.1.10 is on July, 15th 2019. > >> > >> NOTE: This is the last release for 4 series. > >> > >> Branch 4 will be EOLed after this. So if there are any critical patches > >> please ensure they are backported and also are passing > >> regressions and are appropriately reviewed for easy merging and tagging > >> on the date. > >> > > > > Glusterfs 4.1.9 did close this issue: > > > > "gfapi: do not block epoll thread for upcall notifications": > > https://bugzilla.redhat.com/show_bug.cgi?id=1694563 > > > > > > But more patches are needed to properly fix the issue, so it'd really nice to have these patches backported to 4.1.10 aswell: > > > > "gfapi: fix incorrect initialization of upcall syncop arguments": > > https://bugzilla.redhat.com/show_bug.cgi?id=1718316 > > > > "Upcall: Avoid sending upcalls for invalid Inode": > > https://bugzilla.redhat.com/show_bug.cgi?id=1718338 > > > > > > This gfapi/upcall issue gets easily triggered with nfs-ganesha, and causes "complete IO hang", as can be seem here: > > > > "Complete IO hang on CentOS 7.5": > > https://github.com/nfs-ganesha/nfs-ganesha/issues/335 > > Thanks Pasi. > > @Hari, > > The smoke tests have passed for these patches now. Kindly merge them. > > Thanks, > Soumya > > > > > > > Thanks, > > > > -- Pasi > > > > > >> -- > >> Regards, > >> Hari Gowtham. > > > > _______________________________________________ > > > > Community Meeting Calendar: > > > > APAC Schedule - > > Every 2nd and 4th Tuesday at 11:30 AM IST > > Bridge: https://bluejeans.com/836554017 > > > > NA/EMEA Schedule - > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > Bridge: https://bluejeans.com/486278655 > > > > Gluster-devel mailing list > > Gluster-devel at gluster.org > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -- Regards, Hari Gowtham. From mscherer at redhat.com Thu Jul 18 08:44:05 2019 From: mscherer at redhat.com (Michael Scherer) Date: Thu, 18 Jul 2019 10:44:05 +0200 Subject: [Gluster-devel] [Fwd: [Gluster-infra] Regarding the recent arrival of emails on list] References: <6285ca09acc6fc87d133527ec9ffd6bdde600124.camel@redhat.com> Message-ID: <2f5d32e742e8a1aa595f125331d2c4149a1cd968.camel@redhat.com> (work better without a error in the address) -- Michael Scherer Sysadmin, Community Infrastructure -------------- next part -------------- An embedded message was scrubbed... 
From: Michael Scherer Subject: [Gluster-infra] Regarding the recent arrival of emails on list Date: Thu, 18 Jul 2019 10:38:43 +0200 Size: 6448 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL:
From hgowtham at redhat.com Fri Jul 19 10:48:40 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Fri, 19 Jul 2019 16:18:40 +0530 Subject: [Gluster-devel] [Gluster-users] "du" and "df -hT" commands output mismatch In-Reply-To: <287791D2-4775-4CB1-9AA1-9C160839C2D2@cmcc.it> References: <287791D2-4775-4CB1-9AA1-9C160839C2D2@cmcc.it> Message-ID:
Hi Mauro, The fsck script is the fastest way to resolve the issue. The other way would be to disable quota and, once the crawl for the disable is done, enable it and set the limits again. In this way, the crawl happens twice and hence it's slow.
On Fri, Jul 19, 2019 at 3:27 PM Mauro Tridici wrote: > > Dear All, > > I'm experiencing again a problem with the Gluster file system quota. > The "df -hT /tier2/CSP/sp1" command output is different from the "du -ms" output for the same folder. > > [root at s01 manual]# df -hT /tier2/CSP/sp1 > Filesystem Type Size Used Avail Use% Mounted on > s01-stg:tier2 fuse.glusterfs 25T 22T 3.5T 87% /tier2 > > [root at s01 sp1]# du -ms /tier2/CSP/sp1 > 14TB /tier2/CSP/sp1 > > In the past, I successfully used the quota_fsck_new-6.py script to detect the SIZE_MISMATCH occurrences and fix them. > Unfortunately, the number of sub-directories and files saved in /tier2/CSP/sp1 has grown so much that the list of SIZE_MISMATCH entries is now very long. > > Is there a faster way to correct the mismatching outputs? > Could you please help me to solve, if it is possible, this issue? > > Thank you in advance, > Mauro > > > > > > > > _______________________________________________ > Gluster-users mailing list > Gluster-users at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-users -- Regards, Hari Gowtham.
From sason922 at gmail.com Sun Jul 21 22:19:27 2019 From: sason922 at gmail.com (Barak Sason) Date: Mon, 22 Jul 2019 01:19:27 +0300 Subject: [Gluster-devel] Assistance setting up Gluster Message-ID:
Hello everyone, My name is Barak and I'll soon be joining the Gluster development team as a part of Red Hat. As preparation for my upcoming employment I've been trying to get Gluster up and running on my system, but came across some technical difficulties. I'd appreciate any assistance you may provide. I have 2 VMs on my PC - Ubuntu 18, which I used for previous development, and RHEL 8, of which I installed a fresh copy just days ago. The copy of the Gluster code I'm working with is a clone of the master repository. On Ubuntu the installation completed, but running the command 'sudo glusterd' does nothing. Debugging with gdb shows that the program terminates very early due to an error. At glusterfsd.c:2878 (main method) there is a call to the 'daemonize' method. At glusterfsd.c:2568 a call to sys_read fails with errno 17. I'm unsure why this happens and I was unable to solve this. I've tried to run 'sudo glusterd -N' in order to deactivate daemonization, but this also fails at glusterfsd.c:2712 ('glusterfs_process_volfp' method). I was unable to solve this issue too. On RHEL, running ./configure results in an error regarding 'rpcgen'. Running ./configure --without-libtirp was unhelpful and results in the same error.
As of right now I'm unable to proceed so I ask for your assistance. Thank you all very much. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jenkins at build.gluster.org Mon Jul 22 01:45:03 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 22 Jul 2019 01:45:03 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <1001436311.15.1563759903764.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1731041 / build: GlusterFS fails on RHEL-8 during build. https://bugzilla.redhat.com/1730433 / build: Gluster release 6 build errors on ppc64le https://bugzilla.redhat.com/1723617 / distribute: nfs-ganesha gets empty stat (all zero) when glfs_mkdir return success https://bugzilla.redhat.com/1726175 / fuse: CentOs 6 GlusterFS client creates files with time 01/01/1970 https://bugzilla.redhat.com/1730948 / fuse: [Glusterfs4.1.9] memory leak in fuse mount process. https://bugzilla.redhat.com/1726038 / ganesha-nfs: ganesha : nfstest_lock from NFSTest failed on v3 https://bugzilla.redhat.com/1724618 / ganesha-nfs: ganesha : nfstest_posix from NFSTest failed https://bugzilla.redhat.com/1730565 / geo-replication: Geo-replication does not sync default ACL https://bugzilla.redhat.com/1728183 / gluster-smb: SMBD thread panics on file operations from Windows, OS X and Windows when using vfs_glusterfs https://bugzilla.redhat.com/1726205 / md-cache: Windows client fails to copy large file to GlusterFS volume share with fruit and streams_xattr VFS modules via Samba https://bugzilla.redhat.com/1730962 / project-infrastructure: My emails to gluster-users are not hitting the list https://bugzilla.redhat.com/1731067 / project-infrastructure: Need nightly build for release 7 branch https://bugzilla.redhat.com/1724624 / upcall: LINK does not invalidate metadata cache of parent directory [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 1813 bytes Desc: not available URL: From ykaul at redhat.com Mon Jul 22 05:09:44 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Mon, 22 Jul 2019 08:09:44 +0300 Subject: [Gluster-devel] Assistance setting up Gluster In-Reply-To: References: Message-ID: On Mon, Jul 22, 2019 at 1:20 AM Barak Sason wrote: > Hello everyone, > > My name is Barak and I'll soon be joining the Gluster development team as > a part of Red Hat. > Hello and welcome to the Gluster community. > > As a preparation for my upcoming employment I've been trying to get > Gluster up and running on my system, but came across some technical > difficulties. > I'll appreciate any assistance you may provide. > > I have 2 VMs on my PC - Ubuntu 18, which I used for previous development > and RHEL 8 which I installed a fresh copy just days ago. > 2 VMs is really minimal. You should use more. > The copy of Gluster code I'm working with is a clone of the master > repository. > > On Ubuntu installation completed, but running the command 'sudo glusterd' > does nothing. Debugging with gdb shows that the program terminates very > early due to an error. > At glusterfsd.c:2878 (main method) there is a call to 'daemonize' method. > at glusterfsd.c:2568 a call to sys_read fails with errno 17. > I'm unsure why this happens and I was unable to solve this. 
> I've tried to run 'sudo glusterd -N' in order to deactivate > deamonization, but this also fails at glusterfsd.c:2712 > ('glusterfs_process_volfp' method). I was unable to solve this issue too. > > On RHEL, running ./configure results in an error regarding 'rpcgen'. > Running ./configure --without-libtirp was unhelpful and results in the > same error. > I'd separate the two issues to two different email threads, as they may or may not be related. Please provide logs for each. Why are you running glusterd manually, btw? You may want to take a look at https://github.com/mykaul/vg - which is a simple way to set up Gluster on CentOS 7 VMs for testing. I have not tried it for some time - let me know how it works for you. Y. > > As of right now I'm unable to proceed so I ask for your assistance. > > Thank you all very much. > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hgowtham at redhat.com Mon Jul 22 07:16:06 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Mon, 22 Jul 2019 12:46:06 +0530 Subject: [Gluster-devel] [Gluster-users] "du" and "df -hT" commands output mismatch In-Reply-To: <176E4EC2-DB3E-4C33-8EB4-8D09DCB11599@cmcc.it> References: <287791D2-4775-4CB1-9AA1-9C160839C2D2@cmcc.it> <176E4EC2-DB3E-4C33-8EB4-8D09DCB11599@cmcc.it> Message-ID: Hi, Yes the above mentioned steps are right. The way to find if the crawl is still happening is to grep for quota_crawl in the processes that are still running. # ps aux | grep quota_crawl As long as this process is alive, the crawl is happening. Note: crawl does take a lot of time as well. And it happens twice. On Fri, Jul 19, 2019 at 5:42 PM Mauro Tridici wrote: > > Hi Hari, > > thank you very much for the fast answer. > I think that the we will try to solve the issue disabling and enabling quota. > So, if I understand I have to do the following actions: > > - save on my notes the current quota limits; > - disable quota using "gluster volume quota /tier2 disable? command; > - wait a while for the crawl (question: how can I understand that crawl is terminated!? how logn should I wait?); > - enable quota using "gluster volume quota /tier2 enable?; > - set again the previous quota limits. > > Is this correct? > > Many thanks for your support, > Mauro > > > > On 19 Jul 2019, at 12:48, Hari Gowtham wrote: > > Hi Mauro, > > The fsck script is the fastest way to resolve the issue. > The other way would be to disable quota and once the crawl for disable > is done, we have to enable and set the limits again. > In this way, the crawl happens twice and hence its slow. > > On Fri, Jul 19, 2019 at 3:27 PM Mauro Tridici wrote: > > > Dear All, > > I?m experiencing again a problem with gluster file system quota. > The ?df -hT /tier2/CSP/sp1? command output is different from the ?du -ms? command executed against the same folder. 
> > [root at s01 manual]# df -hT /tier2/CSP/sp1 > Filesystem Type Size Used Avail Use% Mounted on > s01-stg:tier2 fuse.glusterfs 25T 22T 3.5T 87% /tier2 > > [root at s01 sp1]# du -ms /tier2/CSP/sp1 > 14TB /tier2/CSP/sp1 > > In the past, I used successfully the quota_fsck_new-6.py script in order to detect the SIZE_MISMATCH occurrences and fix them. > Unfortunately, the number of sub-directories and files saved in /tier2/CSP/sp1 grew so much and the list of SIZE_MISMATCH entries is very long. > > Is there a faster way to correct the mismatching outputs? > Could you please help me to solve, if it is possible, this issue? > > Thank you in advance, > Mauro > > > > > > > > _______________________________________________ > Gluster-users mailing list > Gluster-users at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-users > > > > > -- > Regards, > Hari Gowtham. > > > -- Regards, Hari Gowtham.
From hgowtham at redhat.com Mon Jul 22 08:28:31 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Mon, 22 Jul 2019 13:58:31 +0530 Subject: [Gluster-devel] [Gluster-users] "du" and "df -hT" commands output mismatch In-Reply-To: <05385746-58F0-4FFE-BE49-3CFED6C919A2@cmcc.it> References: <287791D2-4775-4CB1-9AA1-9C160839C2D2@cmcc.it> <176E4EC2-DB3E-4C33-8EB4-8D09DCB11599@cmcc.it> <05385746-58F0-4FFE-BE49-3CFED6C919A2@cmcc.it> Message-ID: As of now we don't have a way to solve it once and for all. There may be a number of ways an accounting mismatch can happen. To solve each one, we need to identify how it happened (the I/Os that went through, their order and their timing); with this we need to understand what change is necessary and implement it. This has to be done every time we come across an issue that can cause an accounting mismatch. Most of the changes might affect performance, which is a downside. And we don't currently have a way to collect the necessary information. We don't have enough bandwidth to take this up right now. If anyone from the community is interested, they can contribute to it; we are here to help them out. On Mon, Jul 22, 2019 at 1:12 PM Mauro Tridici wrote: > > Hi Hari, > > I hope that the crawl will run at most for a couple of days. > Do you know if there is a way to solve the issue definitively? > > GlusterFS version is 3.12.14. > You can find below some additional info. > > Volume Name: tier2 > Type: Distributed-Disperse > Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c > Status: Started > Snapshot Count: 0 > Number of Bricks: 12 x (4 + 2) = 72 > Transport-type: tcp > > Many thanks, > Mauro > > On 22 Jul 2019, at 09:16, Hari Gowtham wrote: > > Hi, > Yes the above mentioned steps are right. > The way to find if the crawl is still happening is to grep for > quota_crawl in the processes that are still running. > # ps aux | grep quota_crawl > As long as this process is alive, the crawl is happening. > > Note: crawl does take a lot of time as well. And it happens twice. > > On Fri, Jul 19, 2019 at 5:42 PM Mauro Tridici wrote: > > > Hi Hari, > > thank you very much for the fast answer. > I think that we will try to solve the issue by disabling and enabling quota. > So, if I understand I have to do the following actions: > > - save on my notes the current quota limits; > - disable quota using the "gluster volume quota /tier2 disable" command; > - wait a while for the crawl (question: how can I understand that the crawl is terminated? how long should I wait?); > - enable quota using "gluster volume quota /tier2 enable"; > - set again the previous quota limits. > > Is this correct?
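To make the sequence above concrete, here is a minimal shell sketch of the disable/re-enable cycle discussed in this thread. It is only an illustration, not a tested procedure: the volume name "tier2" is taken from the volume info quoted in the reply, the quota CLI takes the volume name rather than the mount path, and the backup file name and the sample limit at the end are assumptions.

# 1. Record the current limits so they can be re-applied later.
gluster volume quota tier2 list > quota-limits-backup.txt

# 2. Disable quota; this kicks off the first crawl.
gluster volume quota tier2 disable

# 3. Wait until the crawl finishes (Hari's check from earlier in the thread);
#    repeat until no quota_crawl process remains.
ps aux | grep quota_crawl

# 4. Re-enable quota; a second crawl runs.
gluster volume quota tier2 enable

# 5. Re-apply each saved limit, for example:
gluster volume quota tier2 limit-usage /CSP/sp1 10TB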
> > Many thanks for your support, > Mauro > > > > On 19 Jul 2019, at 12:48, Hari Gowtham wrote: > > Hi Mauro, > > The fsck script is the fastest way to resolve the issue. > The other way would be to disable quota and once the crawl for disable > is done, we have to enable and set the limits again. > In this way, the crawl happens twice and hence its slow. > > On Fri, Jul 19, 2019 at 3:27 PM Mauro Tridici wrote: > > > Dear All, > > I?m experiencing again a problem with gluster file system quota. > The ?df -hT /tier2/CSP/sp1? command output is different from the ?du -ms? command executed against the same folder. > > [root at s01 manual]# df -hT /tier2/CSP/sp1 > Filesystem Type Size Used Avail Use% Mounted on > s01-stg:tier2 fuse.glusterfs 25T 22T 3.5T 87% /tier2 > > [root at s01 sp1]# du -ms /tier2/CSP/sp1 > 14TB /tier2/CSP/sp1 > > In the past, I used successfully the quota_fsck_new-6.py script in order to detect the SIZE_MISMATCH occurrences and fix them. > Unfortunately, the number of sub-directories and files saved in /tier2/CSP/sp1 grew so much and the list of SIZE_MISMATCH entries is very long. > > Is there a faster way to correct the mismatching outputs? > Could you please help me to solve, if it is possible, this issue? > > Thank you in advance, > Mauro > > > > > > > > _______________________________________________ > Gluster-users mailing list > Gluster-users at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-users > > > > > -- > Regards, > Hari Gowtham. > > > > > > -- > Regards, > Hari Gowtham. > > > -- Regards, Hari Gowtham. From sason922 at gmail.com Mon Jul 22 09:09:18 2019 From: sason922 at gmail.com (Barak Sason) Date: Mon, 22 Jul 2019 12:09:18 +0300 Subject: [Gluster-devel] Assistance setting up Gluster In-Reply-To: References: Message-ID: Greeting Yaniv, Thank you very much for your response. As you suggested, I'm installing additional VM (CentOs) on which I'll try to use the repo you suggested in order to get Gluster up and running. I'll update on progress in this matter later today, as it'll take a bit of time to get the VM ready. In addition, I'll post the RHEL problem in a separate thread, as you requested. In the meantime, let's focus on the Ubuntu problem. I'm attaching the log file from Ubuntu, corresponding to running 'sudo glusterd' command (attachment - glusterd.log). Regarding you question about running manually - I've followed the instructions specified in the INSTALL.txt file which comes with the repo and specifies the following steps for installation: 1- ./autogen.sh 2- ./configure 3- make install Please let me know if this somehow incorrect. I kindly thank you for your time and effort, Barak On Mon, Jul 22, 2019 at 8:10 AM Yaniv Kaul wrote: > > > On Mon, Jul 22, 2019 at 1:20 AM Barak Sason wrote: > >> Hello everyone, >> >> My name is Barak and I'll soon be joining the Gluster development team as >> a part of Red Hat. >> > > Hello and welcome to the Gluster community. > >> >> As a preparation for my upcoming employment I've been trying to get >> Gluster up and running on my system, but came across some technical >> difficulties. >> I'll appreciate any assistance you may provide. >> >> I have 2 VMs on my PC - Ubuntu 18, which I used for previous development >> and RHEL 8 which I installed a fresh copy just days ago. >> > > 2 VMs is really minimal. You should use more. > >> The copy of Gluster code I'm working with is a clone of the master >> repository. 
>> >> On Ubuntu installation completed, but running the command 'sudo glusterd' >> does nothing. Debugging with gdb shows that the program terminates very >> early due to an error. >> At glusterfsd.c:2878 (main method) there is a call to 'daemonize' method. >> at glusterfsd.c:2568 a call to sys_read fails with errno 17. >> I'm unsure why this happens and I was unable to solve this. >> I've tried to run 'sudo glusterd -N' in order to deactivate >> deamonization, but this also fails at glusterfsd.c:2712 >> ('glusterfs_process_volfp' method). I was unable to solve this issue too. >> >> On RHEL, running ./configure results in an error regarding 'rpcgen'. >> Running ./configure --without-libtirp was unhelpful and results in the >> same error. >> > > I'd separate the two issues to two different email threads, as they may or > may not be related. > Please provide logs for each. > Why are you running glusterd manually, btw? > > You may want to take a look at https://github.com/mykaul/vg - which is a > simple way to set up Gluster on CentOS 7 VMs for testing. I have not tried > it for some time - let me know how it works for you. > Y. > >> >> As of right now I'm unable to proceed so I ask for your assistance. >> >> Thank you all very much. >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: glusterd.log Type: application/octet-stream Size: 2055 bytes Desc: not available URL: From hgowtham at redhat.com Tue Jul 23 04:30:20 2019 From: hgowtham at redhat.com (hgowtham at redhat.com) Date: Tue, 23 Jul 2019 04:30:20 +0000 Subject: [Gluster-devel] Invitation: nvitation: Gluster Community Meeting (APAC friendly hours... @ Tue Jul 23, 2019 10am - 11am (IST) (gluster-devel@gluster.org) Message-ID: <00000000000080eb6d058e51a6b3@google.com> You have been invited to the following event. Title: nvitation: Gluster Community Meeting (APAC friendly hours) @ Tue July 23, 2019 11:30am - 12:30pm (IST) Hi all, This is the biweekly Gluster community meeting that is hosted to collaborate and make the community better. Please do join the discussion. Bridge: https://bluejeans.com/836554017 Minutes meeting: https://hackmd.io/zoI6PYIrTbWUpchGPZ_pBg?both Previous Meeting notes: https://github.com/gluster/community Regards, Hari. When: Tue Jul 23, 2019 10am ? 11am India Standard Time - Kolkata Calendar: gluster-devel at gluster.org Who: * hgowtham at redhat.com - organizer * gluster-users at gluster.org * gluster-devel at gluster.org Event details: https://www.google.com/calendar/event?action=VIEW&eid=M2RuOGlmMmgyMHJqdmE4MzMzb20wcHRpa3IgZ2x1c3Rlci1kZXZlbEBnbHVzdGVyLm9yZw&tok=MTkjaGdvd3RoYW1AcmVkaGF0LmNvbWJlZjcwNjZjM2M0YmU2NzA1ZDViNmU3NDVlYTVlODA1OGYyODk0OTY&ctz=Asia%2FKolkata&hl=en&es=0 Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account gluster-devel at gluster.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. 
Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1930 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 1971 bytes Desc: not available URL: From hgowtham at redhat.com Tue Jul 23 04:33:03 2019 From: hgowtham at redhat.com (hgowtham at redhat.com) Date: Tue, 23 Jul 2019 04:33:03 +0000 Subject: [Gluster-devel] Updated invitation: Invitation: Gluster Community Meeting (APAC friendly hour... @ Tue Jul 23, 2019 11:30am - 12:25pm (IST) (gluster-devel@gluster.org) Message-ID: <0000000000003866b9058e51b0f8@google.com> This event has been changed. Title: Invitation: Gluster Community Meeting (APAC friendly hours) @ Tue July 23, 2019 11:30am - 12:30pm (IST) (changed) Hi all, This is the biweekly Gluster community meeting that is hosted to collaborate and make the community better. Please do join the discussion. Bridge: https://bluejeans.com/836554017 Minutes meeting: https://hackmd.io/zoI6PYIrTbWUpchGPZ_pBg?both Previous Meeting notes: https://github.com/gluster/community Regards, Hari. When: Tue Jul 23, 2019 11:30am ? 12:25pm India Standard Time - Kolkata (changed) Calendar: gluster-devel at gluster.org Who: * hgowtham at redhat.com - organizer * gluster-users at gluster.org * gluster-devel at gluster.org * jae.park at thryv.com Event details: https://www.google.com/calendar/event?action=VIEW&eid=M2RuOGlmMmgyMHJqdmE4MzMzb20wcHRpa3IgZ2x1c3Rlci1kZXZlbEBnbHVzdGVyLm9yZw&tok=MTkjaGdvd3RoYW1AcmVkaGF0LmNvbWJlZjcwNjZjM2M0YmU2NzA1ZDViNmU3NDVlYTVlODA1OGYyODk0OTY&ctz=Asia%2FKolkata&hl=en&es=0 Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account gluster-devel at gluster.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2076 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: invite.ics Type: application/ics Size: 2119 bytes Desc: not available URL: From hgowtham at redhat.com Tue Jul 23 06:54:03 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Tue, 23 Jul 2019 12:24:03 +0530 Subject: [Gluster-devel] Minutes of Gluster Community Meeting (APAC) 23rd July 2019 Message-ID: Hi, The minutes of the meeting are as follows: Recording of this meeting- - https://bluejeans.com/s/s1Zma ------- ### Attendance Name (#gluster-dev alias) - company * Ashish Pandey (_apandey) - Redhat * Rishubh Jain (risjain) - Redhat * Susant Palai (spalai) - Redhat * hari gowtham (hgowtham) - Red Hat * Sheetal Pamecha (spamecha) - Red Hat * Shwetha Acharya (sacharya) - Red Hat * Amar Tumballi (amarts/@tumballi) * Sunny Kumar (sunnyk) - Red Hat * Rinku Kothya (rinku) - Red Hat * Hamza * Khalid * Sanju Rakonde (srakonde) - RedHat * Deepshikha (dkhandel) - Red Hat * Pranith Kumar (pranithk) - Red Hat * Sunil Kumar (skumar) - Red Hat * Kotresh HR (kotreshhr) - RedHat * David Spisla - Gluster User * Rafi KC - Red Hat * Prasanna Kumar Kalever (pkalever) - RedHat * Karthik Subrahmanya (ksubrahm) - Red Hat * Arjun Sharma - Red Hat * Kaustav Majumder - Red Hat ### User stories Felix, is trying to setup a production server. asked for ideas to set it up. storing large media and research files using replica/disperse vol. a users issue was addressed with a patch. duplicate entry in volfile. [bug which talks about issue](https://bugzilla.redhat.com/1730953) ### Community * Project metrics: | Metrics | Value | | ------------------------- | -------- | |[Coverity](https://scan.coverity.com/projects/gluster-glusterfs) | 71 | |[Clang Scan](https://build.gluster.org/job/clang-scan/lastBuild/) | 59 | |[Test coverage](https://build.gluster.org/job/line-coverage/lastCompletedBuild/Line_20Coverage_20Report/)| 69.7 (13-07-2019) | |New Bugs in last 14 days
[master](https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=POST&f1=creation_ts&o1=greaterthan&product=GlusterFS&query_format=advanced&v1=-14d&version=mainline)
[7.x](https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=POST&f1=creation_ts&list_id=10353290&o1=greaterthan&product=GlusterFS&query_format=advanced&v1=-14d&version=7)
[ 6.x](https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=POST&f1=creation_ts&o1=greaterthan&product=GlusterFS&query_format=advanced&v1=-14d&version=6)
[ 5.x](https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=POST&f1=creation_ts&o1=greaterthan&product=GlusterFS&query_format=advanced&v1=-14d&version=5) |
12
2
5
0 | |[Gluster User Queries in last 14 days](https://lists.gluster.org/pipermail/gluster-users/2019-April/thread.html) | 17 | |[Total Bugs](https://bugzilla.redhat.com/report.cgi?x_axis_field=bug_status&y_axis_field=component&z_axis_field=&no_redirect=1&query_format=report-table&short_desc_type=allwordssubstr&short_desc=&bug_status=__open__&longdesc_type=allwordssubstr&longdesc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=allwords&keywords=&deadlinefrom=&deadlineto=&bug_id=&bug_id_type=anyexact&votes=&votes_type=greaterthaneq&emailtype1=substring&email1=&emailtype2=substring&email2=&emailtype3=substring&email3=&chfieldvalue=&chfieldfrom=&chfieldto=Now&j_top=AND&f1=noop&o1=noop&v1=&format=table&action=wrap&product=GlusterFS) | 335 | |[Total Github issues](https://github.com/gluster/glusterfs/issues) | 380 | * Any release updates? * 4.1.10, 5.8 and 6.4's packaging is nearly done. will release it in a day or two. * 4.1.10 will be EOLed * Release 7, merged some patches and some are pending due to centos regression failing. * Blocker issues across the project? * we have fixed the python issue with 5.7 and are working on 5.8 * Notable thread form mailing list * webhook for geo rep by Aravinda ### Conferences / Meetups * [Developers' Conference - August 2-3, 2019](https://devconf.info/in) - Important dates: CFP Closed Schedule Announcement: https://devconfin19.sched.com/ Event Open for Registration : https://devconfin19.eventbrite.com Last Date of Registration: 31st July, 2019 Event dates: Aug 2nd, 3rd Venue: Christ University - Bengaluru, India Talks related to gluster: Ashish: Thin Arbiter volume Aravinda: Rethinking Gluster Management using k8s Ravi: Strategies for Replication in Distributed systems Mugdha: selenium for automation in RHHI ### GlusterFS - v7.0 and beyond * Proposal - https://docs.google.com/presentation/d/1rtn38S4YBe77KK5IjczWmoAR-ZSO-i3tNHg9pAH8Wt8/edit?usp=sharing * Proposed Plan: - GlusterFS-7.0 (July 1st) - Stability, Automation - Only - GlusterFS-8.0 (Nov 1st) - - Plan for Fedora 31/RHEL8.2 - GlusterFS-9.0 (March 1st, 2020) Reflink, io_uring, and similar improvements. ### Developer focus * Any design specs to discuss? nothing ### Component status * Arbiter - nothing * AFR - metadata split brain has been fixed. gfid split brain is being worked on. * DHT - nothing * EC - data corruption worked by Xavi and Pranith. * FUSE - Nithya sent a few patches to invalidate inode. mail to discuss this * POSIX - nothing * DOC - Man page bugs are open need to be looked into. glusterfs-8 doc has to be looked into(RDMA has to be removed) Gluster v status has RDMA which has to be looked into as well . tier has to be removed from man page. * Geo Replication - The mount broker blocker issue is fixed. The [patch](https://review.gluster.org/#/c/glusterfs/+/23089/) is merged. Needs backport to release branches. * libglusterfs - nothing * Management Daemon : glusterd1 - nothing new. glusterd_volinfo_find() optimization * Snapshot - nothing * NFS - * thin-arbiter - performance improvements. ### Flash Talk Gluster * Typical 5 min talk about Gluster with up to 5 more minutes for questions * For this meeting lets talk about Roadmap suggestions. 
- https://hackmd.io/JtfYZr49QeGaNIlTvQNsaA ### Recent Blog posts / Document updates * https://medium.com/@sheetal.pamecha08/https-medium-com-sheetal-pamecha08-one-good-way-to-start-contributing-to-open-source-static-analysers-16543eeeb138 * https://pkalever.wordpress.com/2019/07/02/expanding-gluster-block-volume/ * https://medium.com/@tumballi/glusters-management-in-k8s-13020a561962 * https://aravindavk.in/blog/gluster-and-k8s-portmap/ ### Gluster Friday Five * Every friday we release this, which basically covers highlight of week in gluster.Also you can find more videos in youtube link. https://www.youtube.com/channel/UCfilWh0JA5NfCjbqq1vsBVA ### Host * Who will host next meeting? - Host will need to send out the agenda 24hr - 12hrs in advance to mailing list, and also make sure to send the meeting minutes. - Host will need to reach out to one user at least who can talk about their usecase, their experience, and their needs. - Host needs to send meeting minutes as PR to http://github.com/gluster/community Sunny will host the next meeting. ### Notetaker * Who will take notes from the next meeting? ### RoundTable [Amar] Road map has to be worked on for the next 6 months to be sent by Maintainers. ### Action Items on host * Check-in Minutes of meeting for this meeting -- Regards, Hari Gowtham. From jthottan at redhat.com Tue Jul 23 07:53:17 2019 From: jthottan at redhat.com (Jiffin Tony Thottan) Date: Tue, 23 Jul 2019 13:23:17 +0530 Subject: [Gluster-devel] Requesting reviews [Re: Release 7 Branch Created] In-Reply-To: References: <891942035.493157.1563191603936.JavaMail.zimbra@redhat.com> <39a54689-c8b8-1393-bd1c-03e4a21f464b@redhat.com> Message-ID: <176f9bba-703e-ea74-06ad-dd825d8f4b42@redhat.com> It have passed all the regression? and tested the packages on 3 node set up. -- Jiffin On 15/07/19 7:44 PM, Atin Mukherjee wrote: > Please ensure : > 1. commit message has the explanation on the motive behind this change. > 2. I always feel more confident if a patch has passed regression to > kick start the review. Can you please ensure that verified flag is put up? > > On Mon, Jul 15, 2019 at 5:27 PM Jiffin Tony Thottan > > wrote: > > Hi, > > The "Add Ganesha HA bits back to glusterfs code repo"[1] is > targeted for > glusterfs-7. Requesting maintainers to review below two patches > > [1] > https://review.gluster.org/#/q/topic:ref-663+(status:open+OR+status:merged) > > Regards, > > Jiffin > > On 15/07/19 5:23 PM, Jiffin Thottan wrote: > > > > ----- Original Message ----- > > From: "Rinku Kothiya" > > > To: maintainers at gluster.org , > gluster-devel at gluster.org , > "Shyam Ranganathan" > > > Sent: Wednesday, July 3, 2019 10:30:58 AM > > Subject: [Gluster-devel] Release 7 Branch Created > > > > Hi Team, > > > > Release 7 branch has been created in upstream. 
> > > > ## Schedule > > > > Curretnly the plan working backwards on the schedule, here's > what we have: > > - Announcement: Week of Aug 4th, 2019 > > - GA tagging: Aug-02-2019 > > - RC1: On demand before GA > > - RC0: July-03-2019 > > - Late features cut-off: Week of June-24th, 2018 > > - Branching (feature cutoff date): June-17-2018 > > > > Regards > > Rinku > > > > _______________________________________________ > > > > Community Meeting Calendar: > > > > APAC Schedule - > > Every 2nd and 4th Tuesday at 11:30 AM IST > > Bridge: https://bluejeans.com/836554017 > > > > NA/EMEA Schedule - > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > Bridge: https://bluejeans.com/486278655 > > > > Gluster-devel mailing list > > Gluster-devel at gluster.org > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amukherj at redhat.com Tue Jul 23 09:18:57 2019 From: amukherj at redhat.com (Atin Mukherjee) Date: Tue, 23 Jul 2019 14:48:57 +0530 Subject: [Gluster-devel] Assistance setting up Gluster In-Reply-To: References: Message-ID: Sanju - can you please help Barak? >From a quick glance of the log it seems that this wasn?t a clean setup. Barak - can you please have an empty /var/lib/glusterd/ and start over again? Also make sure that there?s no glusterd process already running. On Mon, 22 Jul 2019 at 14:40, Barak Sason wrote: > Greeting Yaniv, > > Thank you very much for your response. > > As you suggested, I'm installing additional VM (CentOs) on which I'll try > to use the repo you suggested in order to get Gluster up and running. I'll > update on progress in this matter later today, as it'll take a bit of time > to get the VM ready. > > In addition, I'll post the RHEL problem in a separate thread, as you > requested. > > In the meantime, let's focus on the Ubuntu problem. > I'm attaching the log file from Ubuntu, corresponding to running 'sudo > glusterd' command (attachment - glusterd.log). > Regarding you question about running manually - I've followed the > instructions specified in the INSTALL.txt file which comes with the repo > and specifies the following steps for installation: > 1- ./autogen.sh > 2- ./configure > 3- make install > Please let me know if this somehow incorrect. > > I kindly thank you for your time and effort, > > Barak > > On Mon, Jul 22, 2019 at 8:10 AM Yaniv Kaul wrote: > >> >> >> On Mon, Jul 22, 2019 at 1:20 AM Barak Sason wrote: >> >>> Hello everyone, >>> >>> My name is Barak and I'll soon be joining the Gluster development team >>> as a part of Red Hat. >>> >> >> Hello and welcome to the Gluster community. >> >>> >>> As a preparation for my upcoming employment I've been trying to get >>> Gluster up and running on my system, but came across some technical >>> difficulties. >>> I'll appreciate any assistance you may provide. >>> >>> I have 2 VMs on my PC - Ubuntu 18, which I used for previous >>> development and RHEL 8 which I installed a fresh copy just days ago. >>> >> >> 2 VMs is really minimal. You should use more. >> >>> The copy of Gluster code I'm working with is a clone of the master >>> repository. >>> >>> On Ubuntu installation completed, but running the command 'sudo >>> glusterd' does nothing. Debugging with gdb shows that the program >>> terminates very early due to an error. >>> At glusterfsd.c:2878 (main method) there is a call to 'daemonize' >>> method. at glusterfsd.c:2568 a call to sys_read fails with errno 17. 
>>> I'm unsure why this happens and I was unable to solve this. >>> I've tried to run 'sudo glusterd -N' in order to deactivate >>> deamonization, but this also fails at glusterfsd.c:2712 >>> ('glusterfs_process_volfp' method). I was unable to solve this issue too. >>> >>> On RHEL, running ./configure results in an error regarding 'rpcgen'. >>> Running ./configure --without-libtirp was unhelpful and results in the >>> same error. >>> >> >> I'd separate the two issues to two different email threads, as they may >> or may not be related. >> Please provide logs for each. >> Why are you running glusterd manually, btw? >> >> You may want to take a look at https://github.com/mykaul/vg - which is a >> simple way to set up Gluster on CentOS 7 VMs for testing. I have not tried >> it for some time - let me know how it works for you. >> Y. >> >>> >>> As of right now I'm unable to proceed so I ask for your assistance. >>> >>> Thank you all very much. >>> _______________________________________________ >>> >>> Community Meeting Calendar: >>> >>> APAC Schedule - >>> Every 2nd and 4th Tuesday at 11:30 AM IST >>> Bridge: https://bluejeans.com/836554017 >>> >>> NA/EMEA Schedule - >>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>> Bridge: https://bluejeans.com/486278655 >>> >>> Gluster-devel mailing list >>> Gluster-devel at gluster.org >>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>> >>> _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -- - Atin (atinm) -------------- next part -------------- An HTML attachment was scrubbed... URL: From sason922 at gmail.com Tue Jul 23 10:06:52 2019 From: sason922 at gmail.com (Barak Sason) Date: Tue, 23 Jul 2019 13:06:52 +0300 Subject: [Gluster-devel] Assistance setting up Gluster In-Reply-To: References: Message-ID: Hello Sanju, I greatly appreciate your assistance. The problem has been solved already - There was indeed a process running in the background. I do have another problem with setting up Gluster on RHEL 8, but as suggested before I'll post it in another thread. Again, Thank you very much for your help, Barak On Tue, Jul 23, 2019 at 12:19 PM Atin Mukherjee wrote: > Sanju - can you please help Barak? > > From a quick glance of the log it seems that this wasn?t a clean setup. > > Barak - can you please have an empty /var/lib/glusterd/ and start over > again? Also make sure that there?s no glusterd process already running. > > On Mon, 22 Jul 2019 at 14:40, Barak Sason wrote: > >> Greeting Yaniv, >> >> Thank you very much for your response. >> >> As you suggested, I'm installing additional VM (CentOs) on which I'll try >> to use the repo you suggested in order to get Gluster up and running. I'll >> update on progress in this matter later today, as it'll take a bit of time >> to get the VM ready. >> >> In addition, I'll post the RHEL problem in a separate thread, as you >> requested. >> >> In the meantime, let's focus on the Ubuntu problem. >> I'm attaching the log file from Ubuntu, corresponding to running 'sudo >> glusterd' command (attachment - glusterd.log). 
>> Regarding you question about running manually - I've followed the >> instructions specified in the INSTALL.txt file which comes with the repo >> and specifies the following steps for installation: >> 1- ./autogen.sh >> 2- ./configure >> 3- make install >> Please let me know if this somehow incorrect. >> >> I kindly thank you for your time and effort, >> >> Barak >> >> On Mon, Jul 22, 2019 at 8:10 AM Yaniv Kaul wrote: >> >>> >>> >>> On Mon, Jul 22, 2019 at 1:20 AM Barak Sason wrote: >>> >>>> Hello everyone, >>>> >>>> My name is Barak and I'll soon be joining the Gluster development team >>>> as a part of Red Hat. >>>> >>> >>> Hello and welcome to the Gluster community. >>> >>>> >>>> As a preparation for my upcoming employment I've been trying to get >>>> Gluster up and running on my system, but came across some technical >>>> difficulties. >>>> I'll appreciate any assistance you may provide. >>>> >>>> I have 2 VMs on my PC - Ubuntu 18, which I used for previous >>>> development and RHEL 8 which I installed a fresh copy just days ago. >>>> >>> >>> 2 VMs is really minimal. You should use more. >>> >>>> The copy of Gluster code I'm working with is a clone of the master >>>> repository. >>>> >>>> On Ubuntu installation completed, but running the command 'sudo >>>> glusterd' does nothing. Debugging with gdb shows that the program >>>> terminates very early due to an error. >>>> At glusterfsd.c:2878 (main method) there is a call to 'daemonize' >>>> method. at glusterfsd.c:2568 a call to sys_read fails with errno 17. >>>> I'm unsure why this happens and I was unable to solve this. >>>> I've tried to run 'sudo glusterd -N' in order to deactivate >>>> deamonization, but this also fails at glusterfsd.c:2712 >>>> ('glusterfs_process_volfp' method). I was unable to solve this issue too. >>>> >>>> On RHEL, running ./configure results in an error regarding 'rpcgen'. >>>> Running ./configure --without-libtirp was unhelpful and results in the >>>> same error. >>>> >>> >>> I'd separate the two issues to two different email threads, as they may >>> or may not be related. >>> Please provide logs for each. >>> Why are you running glusterd manually, btw? >>> >>> You may want to take a look at https://github.com/mykaul/vg - which is >>> a simple way to set up Gluster on CentOS 7 VMs for testing. I have not >>> tried it for some time - let me know how it works for you. >>> Y. >>> >>>> >>>> As of right now I'm unable to proceed so I ask for your assistance. >>>> >>>> Thank you all very much. >>>> _______________________________________________ >>>> >>>> Community Meeting Calendar: >>>> >>>> APAC Schedule - >>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>> Bridge: https://bluejeans.com/836554017 >>>> >>>> NA/EMEA Schedule - >>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>> Bridge: https://bluejeans.com/486278655 >>>> >>>> Gluster-devel mailing list >>>> Gluster-devel at gluster.org >>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>> >>>> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> -- > - Atin (atinm) > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From srakonde at redhat.com Tue Jul 23 10:14:31 2019 From: srakonde at redhat.com (Sanju Rakonde) Date: Tue, 23 Jul 2019 15:44:31 +0530 Subject: [Gluster-devel] Assistance setting up Gluster In-Reply-To: References: Message-ID: Hello Barak, It's great that you could resolve the issues. I was searching about how to resolve "rpcgen" issue, usually ./configure --without-libtirp works. I will try to help you with your other issues. On Tue, Jul 23, 2019 at 3:37 PM Barak Sason wrote: > Hello Sanju, > > I greatly appreciate your assistance. > > The problem has been solved already - There was indeed a process running > in the background. > I do have another problem with setting up Gluster on RHEL 8, but as > suggested before I'll post it in another thread. > > Again, Thank you very much for your help, > > Barak > > On Tue, Jul 23, 2019 at 12:19 PM Atin Mukherjee > wrote: > >> Sanju - can you please help Barak? >> >> From a quick glance of the log it seems that this wasn?t a clean setup. >> >> Barak - can you please have an empty /var/lib/glusterd/ and start over >> again? Also make sure that there?s no glusterd process already running. >> >> On Mon, 22 Jul 2019 at 14:40, Barak Sason wrote: >> >>> Greeting Yaniv, >>> >>> Thank you very much for your response. >>> >>> As you suggested, I'm installing additional VM (CentOs) on which I'll >>> try to use the repo you suggested in order to get Gluster up and running. >>> I'll update on progress in this matter later today, as it'll take a bit of >>> time to get the VM ready. >>> >>> In addition, I'll post the RHEL problem in a separate thread, as you >>> requested. >>> >>> In the meantime, let's focus on the Ubuntu problem. >>> I'm attaching the log file from Ubuntu, corresponding to running 'sudo >>> glusterd' command (attachment - glusterd.log). >>> Regarding you question about running manually - I've followed the >>> instructions specified in the INSTALL.txt file which comes with the repo >>> and specifies the following steps for installation: >>> 1- ./autogen.sh >>> 2- ./configure >>> 3- make install >>> Please let me know if this somehow incorrect. >>> >>> I kindly thank you for your time and effort, >>> >>> Barak >>> >>> On Mon, Jul 22, 2019 at 8:10 AM Yaniv Kaul wrote: >>> >>>> >>>> >>>> On Mon, Jul 22, 2019 at 1:20 AM Barak Sason wrote: >>>> >>>>> Hello everyone, >>>>> >>>>> My name is Barak and I'll soon be joining the Gluster development team >>>>> as a part of Red Hat. >>>>> >>>> >>>> Hello and welcome to the Gluster community. >>>> >>>>> >>>>> As a preparation for my upcoming employment I've been trying to get >>>>> Gluster up and running on my system, but came across some technical >>>>> difficulties. >>>>> I'll appreciate any assistance you may provide. >>>>> >>>>> I have 2 VMs on my PC - Ubuntu 18, which I used for previous >>>>> development and RHEL 8 which I installed a fresh copy just days ago. >>>>> >>>> >>>> 2 VMs is really minimal. You should use more. >>>> >>>>> The copy of Gluster code I'm working with is a clone of the master >>>>> repository. >>>>> >>>>> On Ubuntu installation completed, but running the command 'sudo >>>>> glusterd' does nothing. Debugging with gdb shows that the program >>>>> terminates very early due to an error. >>>>> At glusterfsd.c:2878 (main method) there is a call to 'daemonize' >>>>> method. at glusterfsd.c:2568 a call to sys_read fails with errno 17. >>>>> I'm unsure why this happens and I was unable to solve this. 
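As the rest of this thread shows, the failure quoted here came down to a stale glusterd process (and possibly a non-empty /var/lib/glusterd, as Atin suspected). A minimal clean-up along the lines of Atin's suggestion might look like the following sketch; it assumes a source install run as root, and the paths are the usual defaults rather than anything confirmed on this particular machine.

pgrep -a glusterd                    # is a stale glusterd still running?
pkill glusterd                       # stop it (use systemctl instead if it runs as a service)
rm -rf /var/lib/glusterd/*           # start over with an empty state directory
glusterd                             # start again, or 'glusterd -N' to stay in the foreground
tail -n 50 /var/log/glusterfs/glusterd.log   # confirm it came up cleanly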
>>>>> I've tried to run 'sudo glusterd -N' in order to deactivate >>>>> deamonization, but this also fails at glusterfsd.c:2712 >>>>> ('glusterfs_process_volfp' method). I was unable to solve this issue too. >>>>> >>>>> On RHEL, running ./configure results in an error regarding 'rpcgen'. >>>>> Running ./configure --without-libtirp was unhelpful and results in >>>>> the same error. >>>>> >>>> >>>> I'd separate the two issues to two different email threads, as they may >>>> or may not be related. >>>> Please provide logs for each. >>>> Why are you running glusterd manually, btw? >>>> >>>> You may want to take a look at https://github.com/mykaul/vg - which is >>>> a simple way to set up Gluster on CentOS 7 VMs for testing. I have not >>>> tried it for some time - let me know how it works for you. >>>> Y. >>>> >>>>> >>>>> As of right now I'm unable to proceed so I ask for your assistance. >>>>> >>>>> Thank you all very much. >>>>> _______________________________________________ >>>>> >>>>> Community Meeting Calendar: >>>>> >>>>> APAC Schedule - >>>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>>> Bridge: https://bluejeans.com/836554017 >>>>> >>>>> NA/EMEA Schedule - >>>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>>> Bridge: https://bluejeans.com/486278655 >>>>> >>>>> Gluster-devel mailing list >>>>> Gluster-devel at gluster.org >>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>>> >>>>> _______________________________________________ >>> >>> Community Meeting Calendar: >>> >>> APAC Schedule - >>> Every 2nd and 4th Tuesday at 11:30 AM IST >>> Bridge: https://bluejeans.com/836554017 >>> >>> NA/EMEA Schedule - >>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>> Bridge: https://bluejeans.com/486278655 >>> >>> Gluster-devel mailing list >>> Gluster-devel at gluster.org >>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>> >>> -- >> - Atin (atinm) >> > -- Thanks, Sanju -------------- next part -------------- An HTML attachment was scrubbed... URL: From sason922 at gmail.com Tue Jul 23 11:08:38 2019 From: sason922 at gmail.com (Barak Sason) Date: Tue, 23 Jul 2019 14:08:38 +0300 Subject: [Gluster-devel] Gluster on RHEL 8 Message-ID: Greeting all, I've made a fresh installation of RHEL 8 on a VM and have been trying to set up Gluster on that system. Running ./autogen.sh completes OK, but running ./config results in an error regarding missing 'rpcgen'. 'libtirpc-devel package is installed. Running ./configure --without-libtirp results in the same error. I'm attaching reverent terminal output. I'm currently out of ideas. I appreciate any help you may offer, Barak -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- [root at localhost ~]# dnf install rpcgen Updating Subscription Management repositories. Last metadata expiration check: 1:50:52 ago on Mon 22 Jul 2019 04:21:58 PM IDT. No match for argument: rpcgen Error: Unable to find a match [root at localhost ~]# yum install rpcgen Updating Subscription Management repositories. Last metadata expiration check: 1:51:00 ago on Mon 22 Jul 2019 04:21:58 PM IDT. No match for argument: rpcgen Error: Unable to find a match [root at localhost ~]# -------------- next part -------------- [root at localhost src]# ./configure --without-libtirp configure: WARNING: unrecognized options: --without-libtirp checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... 
/usr/bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking whether make supports nested variables... yes checking how to create a pax tar archive... gnutar checking whether make supports nested variables... (cached) yes checking build system type... x86_64-pc-linux-gnu checking host system type... x86_64-pc-linux-gnu checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking whether gcc understands -c and -o together... yes checking whether make supports the include directive... yes (GNU style) checking dependency style of gcc... gcc3 checking how to print strings... printf checking for a sed that does not truncate output... /usr/bin/sed checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for fgrep... /usr/bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop checking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for ar... ar checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking for sysroot... no checking for a working dd... /usr/bin/dd checking how to truncate binary pipes... /usr/bin/dd bs=4096 count=1 checking for mt... no checking if : is a manifest tool... no checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... no checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... 
yes checking whether to build static libraries... no checking for rpcgen... no configure: error: `rpcgen` not found, glusterfs needs `rpcgen` exiting.. [root at localhost src]# From sason922 at gmail.com Tue Jul 23 11:11:09 2019 From: sason922 at gmail.com (Barak Sason) Date: Tue, 23 Jul 2019 14:11:09 +0300 Subject: [Gluster-devel] Assistance setting up Gluster In-Reply-To: References: Message-ID: Hello again Sanju, Thank you very much for your input. Unfortunately, running './configure --without-libtirp' does not help and the problem still persists. I've opened a new thread on the matter under the name "Gluster on RHEL 8" for organization purposes (and also attached some output files) and would appreciate if you could offer further assistance. Thank you very much, Barak On Tue, Jul 23, 2019 at 1:14 PM Sanju Rakonde wrote: > Hello Barak, > > It's great that you could resolve the issues. I was searching about how to > resolve "rpcgen" issue, usually ./configure --without-libtirp works. I > will try to help you with your other issues. > > On Tue, Jul 23, 2019 at 3:37 PM Barak Sason wrote: > >> Hello Sanju, >> >> I greatly appreciate your assistance. >> >> The problem has been solved already - There was indeed a process running >> in the background. >> I do have another problem with setting up Gluster on RHEL 8, but as >> suggested before I'll post it in another thread. >> >> Again, Thank you very much for your help, >> >> Barak >> >> On Tue, Jul 23, 2019 at 12:19 PM Atin Mukherjee >> wrote: >> >>> Sanju - can you please help Barak? >>> >>> From a quick glance of the log it seems that this wasn?t a clean setup. >>> >>> Barak - can you please have an empty /var/lib/glusterd/ and start over >>> again? Also make sure that there?s no glusterd process already running. >>> >>> On Mon, 22 Jul 2019 at 14:40, Barak Sason wrote: >>> >>>> Greeting Yaniv, >>>> >>>> Thank you very much for your response. >>>> >>>> As you suggested, I'm installing additional VM (CentOs) on which I'll >>>> try to use the repo you suggested in order to get Gluster up and running. >>>> I'll update on progress in this matter later today, as it'll take a bit of >>>> time to get the VM ready. >>>> >>>> In addition, I'll post the RHEL problem in a separate thread, as you >>>> requested. >>>> >>>> In the meantime, let's focus on the Ubuntu problem. >>>> I'm attaching the log file from Ubuntu, corresponding to running 'sudo >>>> glusterd' command (attachment - glusterd.log). >>>> Regarding you question about running manually - I've followed the >>>> instructions specified in the INSTALL.txt file which comes with the repo >>>> and specifies the following steps for installation: >>>> 1- ./autogen.sh >>>> 2- ./configure >>>> 3- make install >>>> Please let me know if this somehow incorrect. >>>> >>>> I kindly thank you for your time and effort, >>>> >>>> Barak >>>> >>>> On Mon, Jul 22, 2019 at 8:10 AM Yaniv Kaul wrote: >>>> >>>>> >>>>> >>>>> On Mon, Jul 22, 2019 at 1:20 AM Barak Sason >>>>> wrote: >>>>> >>>>>> Hello everyone, >>>>>> >>>>>> My name is Barak and I'll soon be joining the Gluster development >>>>>> team as a part of Red Hat. >>>>>> >>>>> >>>>> Hello and welcome to the Gluster community. >>>>> >>>>>> >>>>>> As a preparation for my upcoming employment I've been trying to get >>>>>> Gluster up and running on my system, but came across some technical >>>>>> difficulties. >>>>>> I'll appreciate any assistance you may provide. 
>>>>>> >>>>>> I have 2 VMs on my PC - Ubuntu 18, which I used for previous >>>>>> development and RHEL 8 which I installed a fresh copy just days ago. >>>>>> >>>>> >>>>> 2 VMs is really minimal. You should use more. >>>>> >>>>>> The copy of Gluster code I'm working with is a clone of the master >>>>>> repository. >>>>>> >>>>>> On Ubuntu installation completed, but running the command 'sudo >>>>>> glusterd' does nothing. Debugging with gdb shows that the program >>>>>> terminates very early due to an error. >>>>>> At glusterfsd.c:2878 (main method) there is a call to 'daemonize' >>>>>> method. at glusterfsd.c:2568 a call to sys_read fails with errno 17. >>>>>> I'm unsure why this happens and I was unable to solve this. >>>>>> I've tried to run 'sudo glusterd -N' in order to deactivate >>>>>> deamonization, but this also fails at glusterfsd.c:2712 >>>>>> ('glusterfs_process_volfp' method). I was unable to solve this issue too. >>>>>> >>>>>> On RHEL, running ./configure results in an error regarding 'rpcgen'. >>>>>> Running ./configure --without-libtirp was unhelpful and results in >>>>>> the same error. >>>>>> >>>>> >>>>> I'd separate the two issues to two different email threads, as they >>>>> may or may not be related. >>>>> Please provide logs for each. >>>>> Why are you running glusterd manually, btw? >>>>> >>>>> You may want to take a look at https://github.com/mykaul/vg - which >>>>> is a simple way to set up Gluster on CentOS 7 VMs for testing. I have not >>>>> tried it for some time - let me know how it works for you. >>>>> Y. >>>>> >>>>>> >>>>>> As of right now I'm unable to proceed so I ask for your assistance. >>>>>> >>>>>> Thank you all very much. >>>>>> _______________________________________________ >>>>>> >>>>>> Community Meeting Calendar: >>>>>> >>>>>> APAC Schedule - >>>>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>>>> Bridge: https://bluejeans.com/836554017 >>>>>> >>>>>> NA/EMEA Schedule - >>>>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>>>> Bridge: https://bluejeans.com/486278655 >>>>>> >>>>>> Gluster-devel mailing list >>>>>> Gluster-devel at gluster.org >>>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>>>> >>>>>> _______________________________________________ >>>> >>>> Community Meeting Calendar: >>>> >>>> APAC Schedule - >>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>> Bridge: https://bluejeans.com/836554017 >>>> >>>> NA/EMEA Schedule - >>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>> Bridge: https://bluejeans.com/486278655 >>>> >>>> Gluster-devel mailing list >>>> Gluster-devel at gluster.org >>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>> >>>> -- >>> - Atin (atinm) >>> >> > > -- > Thanks, > Sanju > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkalever at redhat.com Tue Jul 23 11:14:25 2019 From: pkalever at redhat.com (Prasanna Kalever) Date: Tue, 23 Jul 2019 16:44:25 +0530 Subject: [Gluster-devel] Gluster on RHEL 8 In-Reply-To: References: Message-ID: On Tue, Jul 23, 2019 at 4:41 PM Barak Sason wrote: > > Greeting all, > > I've made a fresh installation of RHEL 8 on a VM and have been trying to set up Gluster on that system. > > Running ./autogen.sh completes OK, but running ./config results in an error regarding missing 'rpcgen'. > 'libtirpc-devel package is installed. > Running ./configure --without-libtirp results in the same error. 
I see: [root at localhost src]# ./configure --without-libtirp configure: WARNING: unrecognized options: --without-libtirp should it be '--without-libtirpc' instead of '--without-libtirp' ? BRs, -- Prasanna > I'm attaching reverent terminal output. > I'm currently out of ideas. > > I appreciate any help you may offer, > > Barak > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > From sason922 at gmail.com Tue Jul 23 11:21:07 2019 From: sason922 at gmail.com (Barak Sason) Date: Tue, 23 Jul 2019 14:21:07 +0300 Subject: [Gluster-devel] Gluster on RHEL 8 In-Reply-To: References: Message-ID: Hello Prasanna, Thank you for the quick response. I've made a mistake and attached the wrong output file (I've noticed the spelling error as you've pointed out, and copied output to a new file, but attached the old by mistake). Attached now is the relevant file. I apologize for the inconvenience, Barak On Tue, Jul 23, 2019 at 2:14 PM Prasanna Kalever wrote: > On Tue, Jul 23, 2019 at 4:41 PM Barak Sason wrote: > > > > Greeting all, > > > > I've made a fresh installation of RHEL 8 on a VM and have been trying to > set up Gluster on that system. > > > > Running ./autogen.sh completes OK, but running ./config results in an > error regarding missing 'rpcgen'. > > 'libtirpc-devel package is installed. > > Running ./configure --without-libtirp results in the same error. > > I see: > [root at localhost src]# ./configure --without-libtirp > configure: WARNING: unrecognized options: --without-libtirp > > should it be '--without-libtirpc' instead of '--without-libtirp' ? > > BRs, > -- > Prasanna > > > > > > > I'm attaching reverent terminal output. > > I'm currently out of ideas. > > > > I appreciate any help you may offer, > > > > Barak > > _______________________________________________ > > > > Community Meeting Calendar: > > > > APAC Schedule - > > Every 2nd and 4th Tuesday at 11:30 AM IST > > Bridge: https://bluejeans.com/836554017 > > > > NA/EMEA Schedule - > > Every 1st and 3rd Tuesday at 01:00 PM EDT > > Bridge: https://bluejeans.com/486278655 > > > > Gluster-devel mailing list > > Gluster-devel at gluster.org > > https://lists.gluster.org/mailman/listinfo/gluster-devel > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- [root at localhost src]# ./configure --without-libtirpc checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /usr/bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking whether make supports nested variables... yes checking how to create a pax tar archive... gnutar checking whether make supports nested variables... (cached) yes checking build system type... x86_64-pc-linux-gnu checking host system type... x86_64-pc-linux-gnu checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... 
yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking whether gcc understands -c and -o together... yes checking whether make supports the include directive... yes (GNU style) checking dependency style of gcc... gcc3 checking how to print strings... printf checking for a sed that does not truncate output... /usr/bin/sed checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for fgrep... /usr/bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop checking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for ar... ar checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking for sysroot... no checking for a working dd... /usr/bin/dd checking how to truncate binary pipes... /usr/bin/dd bs=4096 count=1 checking for mt... no checking if : is a manifest tool... no checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... no checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no checking for rpcgen... no configure: error: `rpcgen` not found, glusterfs needs `rpcgen` exiting.. [root at localhost src]# From kkeithle at redhat.com Tue Jul 23 11:25:11 2019 From: kkeithle at redhat.com (Kaleb Keithley) Date: Tue, 23 Jul 2019 07:25:11 -0400 Subject: [Gluster-devel] Gluster on RHEL 8 In-Reply-To: References: Message-ID: you must not use --without-libtirpc on RHEL8. rpcgen is in the rpcgen package on RHEL8 and Fedora 29+ -- Kaleb On Tue, Jul 23, 2019 at 7:11 AM Barak Sason wrote: > Greeting all, > > I've made a fresh installation of RHEL 8 on a VM and have been trying to > set up Gluster on that system. 
> > Running ./autogen.sh completes OK, but running ./config results in an > error regarding missing 'rpcgen'. > 'libtirpc-devel package is installed. > Running ./configure --without-libtirp results in the same error. > I'm attaching reverent terminal output. > I'm currently out of ideas. > > I appreciate any help you may offer, > > Barak > _______________________________________________ > > Community Meeting Calendar: > > APAC Schedule - > Every 2nd and 4th Tuesday at 11:30 AM IST > Bridge: https://bluejeans.com/836554017 > > NA/EMEA Schedule - > Every 1st and 3rd Tuesday at 01:00 PM EDT > Bridge: https://bluejeans.com/486278655 > > Gluster-devel mailing list > Gluster-devel at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sason922 at gmail.com Tue Jul 23 11:30:27 2019 From: sason922 at gmail.com (Barak Sason) Date: Tue, 23 Jul 2019 14:30:27 +0300 Subject: [Gluster-devel] Gluster on RHEL 8 In-Reply-To: References: Message-ID: Greeting Kaleb, Thank you for the clarification. Though, I did try to install rpcgen package, but without success (libtirpc-devel package is installed). I'm attaching relevant terminal output. What am I missing? Barak On Tue, Jul 23, 2019 at 2:25 PM Kaleb Keithley wrote: > > you must not use --without-libtirpc on RHEL8. > > rpcgen is in the rpcgen package on RHEL8 and Fedora 29+ > > -- > > Kaleb > > > On Tue, Jul 23, 2019 at 7:11 AM Barak Sason wrote: > >> Greeting all, >> >> I've made a fresh installation of RHEL 8 on a VM and have been trying to >> set up Gluster on that system. >> >> Running ./autogen.sh completes OK, but running ./config results in an >> error regarding missing 'rpcgen'. >> 'libtirpc-devel package is installed. >> Running ./configure --without-libtirp results in the same error. >> I'm attaching reverent terminal output. >> I'm currently out of ideas. >> >> I appreciate any help you may offer, >> >> Barak >> _______________________________________________ >> >> Community Meeting Calendar: >> >> APAC Schedule - >> Every 2nd and 4th Tuesday at 11:30 AM IST >> Bridge: https://bluejeans.com/836554017 >> >> NA/EMEA Schedule - >> Every 1st and 3rd Tuesday at 01:00 PM EDT >> Bridge: https://bluejeans.com/486278655 >> >> Gluster-devel mailing list >> Gluster-devel at gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-devel >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- [root at localhost ~]# dnf install rpcgen Updating Subscription Management repositories. Last metadata expiration check: 1:50:52 ago on Mon 22 Jul 2019 04:21:58 PM IDT. No match for argument: rpcgen Error: Unable to find a match [root at localhost ~]# yum install rpcgen Updating Subscription Management repositories. Last metadata expiration check: 1:51:00 ago on Mon 22 Jul 2019 04:21:58 PM IDT. No match for argument: rpcgen Error: Unable to find a match [root at localhost ~]# From kkeithle at redhat.com Tue Jul 23 11:40:45 2019 From: kkeithle at redhat.com (Kaleb Keithley) Date: Tue, 23 Jul 2019 07:40:45 -0400 Subject: [Gluster-devel] Gluster on RHEL 8 In-Reply-To: References: Message-ID: rpcgen is in the CRB (codeready-builder) repo. 
================================================================================ Package Arch Version Repository Size ================================================================================ Installing: rpcgen x86_64 1.3.1-4.el8 codeready-builder-for-rhel-8-x86_64-rpms 52 k Transaction Summary ================================================================================ Make sure that the codeready-builder-for-rhel-8-x86_64-rpms repo is enabled (enabled = 1) in /etc/yum.repos.d/redhat.repo or or run dnf with `... --enable codeready-builder-for-rhel-8-x86_64-rpms ...` -- Kaleb On Tue, Jul 23, 2019 at 7:31 AM Barak Sason wrote: > Greeting Kaleb, > > Thank you for the clarification. > > Though, I did try to install rpcgen package, but without success (libtirpc-devel > package is installed). > I'm attaching relevant terminal output. > > What am I missing? > > Barak > > On Tue, Jul 23, 2019 at 2:25 PM Kaleb Keithley > wrote: > >> >> you must not use --without-libtirpc on RHEL8. >> >> rpcgen is in the rpcgen package on RHEL8 and Fedora 29+ >> >> -- >> >> Kaleb >> >> >> On Tue, Jul 23, 2019 at 7:11 AM Barak Sason wrote: >> >>> Greeting all, >>> >>> I've made a fresh installation of RHEL 8 on a VM and have been trying to >>> set up Gluster on that system. >>> >>> Running ./autogen.sh completes OK, but running ./config results in an >>> error regarding missing 'rpcgen'. >>> 'libtirpc-devel package is installed. >>> Running ./configure --without-libtirp results in the same error. >>> I'm attaching reverent terminal output. >>> I'm currently out of ideas. >>> >>> I appreciate any help you may offer, >>> >>> Barak >>> _______________________________________________ >>> >>> Community Meeting Calendar: >>> >>> APAC Schedule - >>> Every 2nd and 4th Tuesday at 11:30 AM IST >>> Bridge: https://bluejeans.com/836554017 >>> >>> NA/EMEA Schedule - >>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>> Bridge: https://bluejeans.com/486278655 >>> >>> Gluster-devel mailing list >>> Gluster-devel at gluster.org >>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sason922 at gmail.com Tue Jul 23 12:15:28 2019 From: sason922 at gmail.com (Barak Sason) Date: Tue, 23 Jul 2019 15:15:28 +0300 Subject: [Gluster-devel] Gluster on RHEL 8 In-Reply-To: References: Message-ID: Hello again Kaleb, Thank you for the clarification - at least now I know where the rpcgen is. Unfortunately, the problem still persists. I'm attaching the repo file and terminal output. Might you have any idea what I'm doing wrong? Thank you very much for your assistance, Barak On Tue, Jul 23, 2019 at 2:40 PM Kaleb Keithley wrote: > rpcgen is in the CRB (codeready-builder) repo. > > > ================================================================================ > Package Arch Version Repository > Size > > ================================================================================ > Installing: > rpcgen x86_64 1.3.1-4.el8 codeready-builder-for-rhel-8-x86_64-rpms > 52 k > > Transaction Summary > > ================================================================================ > > Make sure that the codeready-builder-for-rhel-8-x86_64-rpms repo is > enabled (enabled = 1) in /etc/yum.repos.d/redhat.repo or or run dnf with > `... --enable codeready-builder-for-rhel-8-x86_64-rpms ...` > > -- > > Kaleb > > > On Tue, Jul 23, 2019 at 7:31 AM Barak Sason wrote: > >> Greeting Kaleb, >> >> Thank you for the clarification. 
>> >> Though, I did try to install rpcgen package, but without success (libtirpc-devel >> package is installed). >> I'm attaching relevant terminal output. >> >> What am I missing? >> >> Barak >> >> On Tue, Jul 23, 2019 at 2:25 PM Kaleb Keithley >> wrote: >> >>> >>> you must not use --without-libtirpc on RHEL8. >>> >>> rpcgen is in the rpcgen package on RHEL8 and Fedora 29+ >>> >>> -- >>> >>> Kaleb >>> >>> >>> On Tue, Jul 23, 2019 at 7:11 AM Barak Sason wrote: >>> >>>> Greeting all, >>>> >>>> I've made a fresh installation of RHEL 8 on a VM and have been trying >>>> to set up Gluster on that system. >>>> >>>> Running ./autogen.sh completes OK, but running ./config results in an >>>> error regarding missing 'rpcgen'. >>>> 'libtirpc-devel package is installed. >>>> Running ./configure --without-libtirp results in the same error. >>>> I'm attaching reverent terminal output. >>>> I'm currently out of ideas. >>>> >>>> I appreciate any help you may offer, >>>> >>>> Barak >>>> _______________________________________________ >>>> >>>> Community Meeting Calendar: >>>> >>>> APAC Schedule - >>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>> Bridge: https://bluejeans.com/836554017 >>>> >>>> NA/EMEA Schedule - >>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>> Bridge: https://bluejeans.com/486278655 >>>> >>>> Gluster-devel mailing list >>>> Gluster-devel at gluster.org >>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- [root at localhost src]# subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms Repository 'codeready-builder-for-rhel-8-x86_64-rpms' is enabled for this system. [root at localhost src]# dnf upgrade Updating Subscription Management repositories. Last metadata expiration check: 0:05:06 ago on Tue 23 Jul 2019 03:06:19 PM IDT. Dependencies resolved. Nothing to do. Complete! [root at localhost src]# ./configure checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /usr/bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking whether make supports nested variables... yes checking how to create a pax tar archive... gnutar checking whether make supports nested variables... (cached) yes checking build system type... x86_64-pc-linux-gnu checking host system type... x86_64-pc-linux-gnu checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking whether gcc understands -c and -o together... yes checking whether make supports the include directive... yes (GNU style) checking dependency style of gcc... gcc3 checking how to print strings... printf checking for a sed that does not truncate output... /usr/bin/sed checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for fgrep... /usr/bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... 
/usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 1572864 checking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop checking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for ar... ar checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking for sysroot... no checking for a working dd... /usr/bin/dd checking how to truncate binary pipes... /usr/bin/dd bs=4096 count=1 checking for mt... no checking if : is a manifest tool... no checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... no checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no checking for rpcgen... no configure: error: `rpcgen` not found, glusterfs needs `rpcgen` exiting.. [root at localhost src]# -------------- next part -------------- A non-text attachment was scrubbed... Name: redhat.repo Type: application/octet-stream Size: 34662 bytes Desc: not available URL: From hgowtham at redhat.com Tue Jul 23 12:23:14 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Tue, 23 Jul 2019 17:53:14 +0530 Subject: [Gluster-devel] Announcing Gluster release 6.4 Message-ID: Hi, The Gluster community is pleased to announce the release of Gluster 6.4 (packages available at [1]). Release notes for the release can be found at [2]. Major changes, features and limitations addressed in this release: None Thanks, Gluster community [1] Packages for 6.4: https://download.gluster.org/pub/gluster/glusterfs/6/6.4/ [2] Release notes for 6.4: https://docs.gluster.org/en/latest/release-notes/6.4/ -- Regards, Hari Gowtham. From hgowtham at redhat.com Tue Jul 23 12:25:11 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Tue, 23 Jul 2019 17:55:11 +0530 Subject: [Gluster-devel] Announcing Gluster release 4.1.10 Message-ID: Hi, The Gluster community is pleased to announce the release of Gluster 4.1.10 (packages available at [1]). Release notes for the release can be found at [2]. 
Major changes, features and limitations addressed in this release: None NOTE: This is the last release for 4.1 series. Thanks, Gluster community [1] Packages for 4.1.10: https://download.gluster.org/pub/gluster/glusterfs/4/4.1.10/ [2] Release notes for 4.1.10: https://docs.gluster.org/en/latest/release-notes/4.1.10/ -- Regards, Hari Gowtham. From kkeithle at redhat.com Tue Jul 23 12:55:43 2019 From: kkeithle at redhat.com (Kaleb Keithley) Date: Tue, 23 Jul 2019 08:55:43 -0400 Subject: [Gluster-devel] Gluster on RHEL 8 In-Reply-To: References: Message-ID: You still have to install rpcgen before building. The build process isn't going to do that for you. On Tue, Jul 23, 2019 at 8:16 AM Barak Sason wrote: > Hello again Kaleb, > > Thank you for the clarification - at least now I know where the rpcgen is. > Unfortunately, the problem still persists. > > I'm attaching the repo file and terminal output. > Might you have any idea what I'm doing wrong? > > Thank you very much for your assistance, > > Barak > > On Tue, Jul 23, 2019 at 2:40 PM Kaleb Keithley > wrote: > >> rpcgen is in the CRB (codeready-builder) repo. >> >> >> ================================================================================ >> Package Arch Version Repository >> Size >> >> ================================================================================ >> Installing: >> rpcgen x86_64 1.3.1-4.el8 codeready-builder-for-rhel-8-x86_64-rpms >> 52 k >> >> Transaction Summary >> >> ================================================================================ >> >> Make sure that the codeready-builder-for-rhel-8-x86_64-rpms repo is >> enabled (enabled = 1) in /etc/yum.repos.d/redhat.repo or or run dnf with >> `... --enable codeready-builder-for-rhel-8-x86_64-rpms ...` >> >> -- >> >> Kaleb >> >> >> On Tue, Jul 23, 2019 at 7:31 AM Barak Sason wrote: >> >>> Greeting Kaleb, >>> >>> Thank you for the clarification. >>> >>> Though, I did try to install rpcgen package, but without success (libtirpc-devel >>> package is installed). >>> I'm attaching relevant terminal output. >>> >>> What am I missing? >>> >>> Barak >>> >>> On Tue, Jul 23, 2019 at 2:25 PM Kaleb Keithley >>> wrote: >>> >>>> >>>> you must not use --without-libtirpc on RHEL8. >>>> >>>> rpcgen is in the rpcgen package on RHEL8 and Fedora 29+ >>>> >>>> -- >>>> >>>> Kaleb >>>> >>>> >>>> On Tue, Jul 23, 2019 at 7:11 AM Barak Sason wrote: >>>> >>>>> Greeting all, >>>>> >>>>> I've made a fresh installation of RHEL 8 on a VM and have been trying >>>>> to set up Gluster on that system. >>>>> >>>>> Running ./autogen.sh completes OK, but running ./config results in an >>>>> error regarding missing 'rpcgen'. >>>>> 'libtirpc-devel package is installed. >>>>> Running ./configure --without-libtirp results in the same error. >>>>> I'm attaching reverent terminal output. >>>>> I'm currently out of ideas. >>>>> >>>>> I appreciate any help you may offer, >>>>> >>>>> Barak >>>>> _______________________________________________ >>>>> >>>>> Community Meeting Calendar: >>>>> >>>>> APAC Schedule - >>>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>>> Bridge: https://bluejeans.com/836554017 >>>>> >>>>> NA/EMEA Schedule - >>>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>>> Bridge: https://bluejeans.com/486278655 >>>>> >>>>> Gluster-devel mailing list >>>>> Gluster-devel at gluster.org >>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
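(For reference, the complete fix that emerges from this thread, on a subscribed RHEL 8 machine, is roughly the following sequence; the repo id is the one shown earlier in the thread and may differ depending on your subscription.)

# enable the CodeReady Builder repo that carries rpcgen
subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
# install rpcgen itself - configure will not pull it in for you
dnf install rpcgen
# then rebuild from the glusterfs source tree
./autogen.sh
./configure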
URL: From sason922 at gmail.com Tue Jul 23 13:21:42 2019 From: sason922 at gmail.com (Barak Sason) Date: Tue, 23 Jul 2019 16:21:42 +0300 Subject: [Gluster-devel] Gluster on RHEL 8 In-Reply-To: References: Message-ID: Thank you very much! Everything is working now. I've gathered several technical difficulties that I've encountered during setup. Maybe in the future, once I have a better understanding of Gluster, I could update the documentation and help future users :) Cheers, Barak On Tue, Jul 23, 2019 at 3:55 PM Kaleb Keithley wrote: > You still have to install rpcgen before building. The build process isn't > going to do that for you. > > > On Tue, Jul 23, 2019 at 8:16 AM Barak Sason wrote: > >> Hello again Kaleb, >> >> Thank you for the clarification - at least now I know where the rpcgen is. >> Unfortunately, the problem still persists. >> >> I'm attaching the repo file and terminal output. >> Might you have any idea what I'm doing wrong? >> >> Thank you very much for your assistance, >> >> Barak >> >> On Tue, Jul 23, 2019 at 2:40 PM Kaleb Keithley >> wrote: >> >>> rpcgen is in the CRB (codeready-builder) repo. >>> >>> >>> ================================================================================ >>> Package Arch Version Repository >>> Size >>> >>> ================================================================================ >>> Installing: >>> rpcgen x86_64 1.3.1-4.el8 >>> codeready-builder-for-rhel-8-x86_64-rpms 52 k >>> >>> Transaction Summary >>> >>> ================================================================================ >>> >>> Make sure that the codeready-builder-for-rhel-8-x86_64-rpms repo is >>> enabled (enabled = 1) in /etc/yum.repos.d/redhat.repo or or run dnf with >>> `... --enable codeready-builder-for-rhel-8-x86_64-rpms ...` >>> >>> -- >>> >>> Kaleb >>> >>> >>> On Tue, Jul 23, 2019 at 7:31 AM Barak Sason wrote: >>> >>>> Greeting Kaleb, >>>> >>>> Thank you for the clarification. >>>> >>>> Though, I did try to install rpcgen package, but without success (libtirpc-devel >>>> package is installed). >>>> I'm attaching relevant terminal output. >>>> >>>> What am I missing? >>>> >>>> Barak >>>> >>>> On Tue, Jul 23, 2019 at 2:25 PM Kaleb Keithley >>>> wrote: >>>> >>>>> >>>>> you must not use --without-libtirpc on RHEL8. >>>>> >>>>> rpcgen is in the rpcgen package on RHEL8 and Fedora 29+ >>>>> >>>>> -- >>>>> >>>>> Kaleb >>>>> >>>>> >>>>> On Tue, Jul 23, 2019 at 7:11 AM Barak Sason >>>>> wrote: >>>>> >>>>>> Greeting all, >>>>>> >>>>>> I've made a fresh installation of RHEL 8 on a VM and have been trying >>>>>> to set up Gluster on that system. >>>>>> >>>>>> Running ./autogen.sh completes OK, but running ./config results in an >>>>>> error regarding missing 'rpcgen'. >>>>>> 'libtirpc-devel package is installed. >>>>>> Running ./configure --without-libtirp results in the same error. >>>>>> I'm attaching reverent terminal output. >>>>>> I'm currently out of ideas. 
>>>>>> >>>>>> I appreciate any help you may offer, >>>>>> >>>>>> Barak >>>>>> _______________________________________________ >>>>>> >>>>>> Community Meeting Calendar: >>>>>> >>>>>> APAC Schedule - >>>>>> Every 2nd and 4th Tuesday at 11:30 AM IST >>>>>> Bridge: https://bluejeans.com/836554017 >>>>>> >>>>>> NA/EMEA Schedule - >>>>>> Every 1st and 3rd Tuesday at 01:00 PM EDT >>>>>> Bridge: https://bluejeans.com/486278655 >>>>>> >>>>>> Gluster-devel mailing list >>>>>> Gluster-devel at gluster.org >>>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirr at nexedi.com Wed Jul 24 09:46:02 2019 From: kirr at nexedi.com (Kirill Smelkov) Date: Wed, 24 Jul 2019 09:46:02 +0000 Subject: [Gluster-devel] [PATCH, RESEND3] fuse: require /dev/fuse reads to have enough buffer capacity (take 2) Message-ID: <20190724094556.GA19383@deco.navytux.spb.ru> Miklos, I have been sending this patch for ~1.5 months without any feedback from you[1,2,3]. The patch was tested by Sander Eikelenboom (original GlusterFS problem reporter)[4], and you said that it would be ok to retry for the next cycle[5]. I was hoping for this patch to be picked up for 5.3 and queued to Linus's tree, but despite several resends from me (the same patch; just reminders) nothing is happening. v5.3-rc1 came out last Sunday, which, in my understanding, denotes the close of the 5.3 merge window. What is going on? Could you please pick up the patch and handle it? Thanks beforehand (again), Kirill [1] https://lore.kernel.org/linux-fsdevel/20190612141220.GA25389 at deco.navytux.spb.ru/ [2] https://lore.kernel.org/linux-fsdevel/20190623072619.31037-1-kirr at nexedi.com/ [3] https://lore.kernel.org/linux-fsdevel/20190708170314.27982-1-kirr at nexedi.com/ [4] https://lore.kernel.org/linux-fsdevel/f79ff13f-701b-89d8-149c-e53bb880bb77 at eikelenboom.it/ [5] https://lore.kernel.org/linux-fsdevel/CAOssrKfj-MDujX0_t_fgobL_KwpuG2fxFmT=4nURuJA=sUvYYg at mail.gmail.com/ ---- 8< ---- [ This retries commit d4b13963f217 which was reverted in 766741fcaa1f. In this version we require only `sizeof(fuse_in_header) + sizeof(fuse_write_in)` instead of 4K for FUSE request header room, because, contrary to libfuse and kernel client behaviour, GlusterFS actually provides only so much room for request header. ] A FUSE filesystem server queues /dev/fuse sys_read calls to get filesystem requests to handle. It does not know in advance what would be that request as it can be anything that client issues - LOOKUP, READ, WRITE, ... Many requests are short and retrieve data from the filesystem. However WRITE and NOTIFY_REPLY write data into filesystem. Before getting into operation phase, FUSE filesystem server and kernel client negotiate what should be the maximum write size the client will ever issue. After negotiation the contract in between server/client is that the filesystem server then should queue /dev/fuse sys_read calls with enough buffer capacity to receive any client request - WRITE in particular, while FUSE client should not, in particular, send WRITE requests with > negotiated max_write payload. FUSE client in kernel and libfuse historically reserve 4K for request header. However an existing filesystem server - GlusterFS - was found which reserves only 80 bytes for header room (= `sizeof(fuse_in_header) + sizeof(fuse_write_in)`). 
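(Illustration only, not part of the patch: under the contract described above, a userspace filesystem server would size its /dev/fuse read buffer roughly as in the sketch below; the function and variable names are invented for the example.)

#include <linux/fuse.h>
#include <stdlib.h>
#include <unistd.h>

/* Rough sketch of the server-side buffer sizing the contract implies. */
static ssize_t read_one_request(int fuse_fd, size_t max_write, char **out_buf)
{
        /* fixed header room + the max_write payload negotiated at FUSE_INIT */
        size_t bufsize = sizeof(struct fuse_in_header) +
                         sizeof(struct fuse_write_in) + max_write;
        if (bufsize < FUSE_MIN_READ_BUFFER)
                bufsize = FUSE_MIN_READ_BUFFER;

        char *buf = malloc(bufsize);
        if (!buf)
                return -1;

        /* with the check added by the patch further below, a read with a
         * shorter buffer gets -EINVAL instead of requests silently failing
         * with -EIO */
        ssize_t n = read(fuse_fd, buf, bufsize);
        *out_buf = buf;
        return n;
}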
https://lore.kernel.org/linux-fsdevel/20190611202738.GA22556 at deco.navytux.spb.ru/ https://github.com/gluster/glusterfs/blob/v3.8.15-0-gd174f021a/xlators/mount/fuse/src/fuse-bridge.c#L4894 Since `sizeof(fuse_in_header) + sizeof(fuse_write_in)` == `sizeof(fuse_in_header) + sizeof(fuse_read_in)` == `sizeof(fuse_in_header) + sizeof(fuse_notify_retrieve_in)` is the absolute minimum any sane filesystem should be using for header room, the contract is that filesystem server should queue sys_reads with `sizeof(fuse_in_header) + sizeof(fuse_write_in)` + max_write buffer. If the filesystem server does not follow this contract, what can happen is that fuse_dev_do_read will see that request size is > buffer size, and then it will return EIO to client who issued the request but won't indicate in any way that there is a problem to filesystem server. This can be hard to diagnose because for some requests, e.g. for NOTIFY_REPLY which mimics WRITE, there is no client thread that is waiting for request completion and that EIO goes nowhere, while on filesystem server side things look like the kernel is not replying back after successful NOTIFY_RETRIEVE request made by the server. We can make the problem easy to diagnose if we indicate via error return to filesystem server when it is violating the contract. This should not practically cause problems because if a filesystem server is using shorter buffer, writes to it were already very likely to cause EIO, and if the filesystem is read-only it should be too following FUSE_MIN_READ_BUFFER minimum buffer size. Please see [1] for context where the problem of stuck filesystem was hit for real (because kernel client was incorrectly sending more than max_write data with NOTIFY_REPLY; see also previous patch), how the situation was traced and for more involving patch that did not make it into the tree. [1] https://marc.info/?l=linux-fsdevel&m=155057023600853&w=2 Signed-off-by: Kirill Smelkov Tested-by: Sander Eikelenboom Cc: Han-Wen Nienhuys Cc: Jakob Unterwurzacher --- fs/fuse/dev.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c index ea8237513dfa..b2b2344eadcf 100644 --- a/fs/fuse/dev.c +++ b/fs/fuse/dev.c @@ -1317,6 +1317,26 @@ static ssize_t fuse_dev_do_read(struct fuse_dev *fud, struct file *file, unsigned reqsize; unsigned int hash; + /* + * Require sane minimum read buffer - that has capacity for fixed part + * of any request header + negotiated max_write room for data. If the + * requirement is not satisfied return EINVAL to the filesystem server + * to indicate that it is not following FUSE server/client contract. + * Don't dequeue / abort any request. + * + * Historically libfuse reserves 4K for fixed header room, but e.g. + * GlusterFS reserves only 80 bytes + * + * = `sizeof(fuse_in_header) + sizeof(fuse_write_in)` + * + * which is the absolute minimum any sane filesystem should be using + * for header room. + */ + if (nbytes < max_t(size_t, FUSE_MIN_READ_BUFFER, + sizeof(struct fuse_in_header) + sizeof(struct fuse_write_in) + + fc->max_write)) + return -EINVAL; + restart: spin_lock(&fiq->waitq.lock); err = -EAGAIN; -- 2.20.1 From hgowtham at redhat.com Wed Jul 24 13:35:47 2019 From: hgowtham at redhat.com (Hari Gowtham) Date: Wed, 24 Jul 2019 19:05:47 +0530 Subject: [Gluster-devel] Announcing Gluster release 5.8 Message-ID: Hi, The Gluster community is pleased to announce the release of Gluster 5.8 (packages available at [1]). Release notes for the release can be found at [2]. 
Major changes, features and limitations addressed in this release: https://bugzilla.redhat.com/1728988 was an issue blocking the build. NOTE: The 5.7 was dead on release. Thanks, Gluster community [1] Packages for 5.8: https://download.gluster.org/pub/gluster/glusterfs/5/5.8/ [2] Release notes for 5.8: https://docs.gluster.org/en/latest/release-notes/5.8/ -- Regards, Hari Gowtham. From jenkins at build.gluster.org Mon Jul 29 01:45:03 2019 From: jenkins at build.gluster.org (jenkins at build.gluster.org) Date: Mon, 29 Jul 2019 01:45:03 +0000 (UTC) Subject: [Gluster-devel] Weekly Untriaged Bugs Message-ID: <1887925855.40.1564364703438.JavaMail.jenkins@jenkins-el7.rht.gluster.org> [...truncated 7 lines...] https://bugzilla.redhat.com/1733667 / bitrot: glusterfs brick process core https://bugzilla.redhat.com/1731041 / build: GlusterFS fails on RHEL-8 during build. https://bugzilla.redhat.com/1730433 / build: Gluster release 6 build errors on ppc64le https://bugzilla.redhat.com/1726175 / fuse: CentOs 6 GlusterFS client creates files with time 01/01/1970 https://bugzilla.redhat.com/1730948 / fuse: [Glusterfs4.1.9] memory leak in fuse mount process. https://bugzilla.redhat.com/1726038 / ganesha-nfs: ganesha : nfstest_lock from NFSTest failed on v3 https://bugzilla.redhat.com/1730565 / geo-replication: Geo-replication does not sync default ACL https://bugzilla.redhat.com/1728183 / gluster-smb: SMBD thread panics on file operations from Windows, OS X and Linux when using vfs_glusterfs https://bugzilla.redhat.com/1726205 / md-cache: Windows client fails to copy large file to GlusterFS volume share with fruit and streams_xattr VFS modules via Samba https://bugzilla.redhat.com/1730962 / project-infrastructure: My emails to gluster-users are not hitting the list https://bugzilla.redhat.com/1731067 / project-infrastructure: Need nightly build for release 7 branch [...truncated 2 lines...] -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 1565 bytes Desc: not available URL: From mscherer at redhat.com Mon Jul 29 13:06:59 2019 From: mscherer at redhat.com (Michael Scherer) Date: Mon, 29 Jul 2019 15:06:59 +0200 Subject: [Gluster-devel] Jenkins security reboot for plugins Message-ID: <65c9a9b676cbcf4c36d19135543b1dc070a571f9.camel@redhat.com> Hi, a jenkins plugin upgrade is planned. There is a setting "download and reboot once there is no jobs running", so I hope that with any luck, this will work (first time I used the option, so I hope this will not create issue with Gerrit). Also, after a java upgrade on the underlying platform, a dozen or so of jobs seems to have disappeared. Jenkins agent seems to break when a minor version of the rpm is installed (like the main jenkins server...), the issue should now be mitigated as well (ansible to send a signal, and let jenkins reconnect to the builder). -- Michael Scherer Sysadmin, Community Infrastructure -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From mscherer at redhat.com Wed Jul 31 17:31:12 2019 From: mscherer at redhat.com (Michael Scherer) Date: Wed, 31 Jul 2019 19:31:12 +0200 Subject: [Gluster-devel] Ongoing issue with docs.gluster.org Message-ID: <26e3d4c1e1ceeeecb0fb2fcaeff12687eae66251.camel@redhat.com> Hi, people (and nagios) have reported issue with the docs.gluster.org domain. 
I got texted last night, but the problem solved itself while I was sleeping. The problem now seems to be back, and I restarted the proxy since it seems that readthedocs changed the IP of their load balancer (and it was cached by nginx, so it was slow to propagate). See https://twitter.com/readthedocs/status/1156337277640908801 for the initial report. This is kinda out of the control of the gluster infra team, but do not hesitate to send support, love or money to the volunteers of RTD. We are monitoring the issue. -- Michael Scherer Sysadmin, Community Infrastructure -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: