From sankarshan.mukhopadhyay at gmail.com Mon Mar 11 04:55:43 2019 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Mon, 11 Mar 2019 10:25:43 +0530 Subject: [automated-testing] OPEN reviews on Gerrit for glusto-tests - what does the future hold? Message-ID: I am looking at and this is a reasonably long list going back to 30Jan2018 Are these all being actively worked upon? What is keeping them from being merged? From sankarshan.mukhopadhyay at gmail.com Wed Mar 13 02:51:17 2019 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Wed, 13 Mar 2019 08:21:17 +0530 Subject: [automated-testing] OPEN reviews on Gerrit for glusto-tests - what does the future hold? In-Reply-To: References: Message-ID: Circling back on this. On Mon, Mar 11, 2019 at 10:25 AM Sankarshan Mukhopadhyay wrote: > > I am looking at > and > this is a reasonably long list going back to 30Jan2018 > > Are these all being actively worked upon? What is keeping them from > being merged? From sankarshan.mukhopadhyay at gmail.com Wed Mar 13 02:52:57 2019 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Wed, 13 Mar 2019 08:22:57 +0530 Subject: [automated-testing] What is the current state of the Glusto test framework in upstream? Message-ID: What I am essentially looking to understand is whether there are regular Glusto runs and whether the tests receive refreshes. However, if there is no available Glusto service running upstream - that is a whole new conversation. From ykaul at redhat.com Wed Mar 13 09:33:21 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 13 Mar 2019 11:33:21 +0200 Subject: [automated-testing] What is the current state of the Glusto test framework in upstream? 
In-Reply-To: References: Message-ID: 
On Wed, Mar 13, 2019, 3:53 AM Sankarshan Mukhopadhyay < sankarshan.mukhopadhyay at gmail.com> wrote:

> What I am essentially looking to understand is whether there are
> regular Glusto runs and whether the tests receive refreshes. However,
> if there is no available Glusto service running upstream - that is a
> whole new conversation.
>

I'm* still trying to get it running properly on my simple Vagrant+Ansible
setup[1]. Right now I'm installing Gluster + Glusto + creating bricks, pool
and a volume in ~3m on my laptop.

Once I do get it fully working, we'll get to make it work faster, clean it
up and see how we can get code coverage.

Unless there's an alternative to the whole framework that I'm not aware of?
Surely for most of the positive paths, we can (and perhaps should) use the
Gluster Ansible modules.
Y.

[1] https://github.com/mykaul/vg
* with an intern's help.

_______________________________________________
> automated-testing mailing list
> automated-testing at gluster.org
> https://lists.gluster.org/mailman/listinfo/automated-testing
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sankarshan.mukhopadhyay at gmail.com Wed Mar 13 10:07:44 2019
From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay)
Date: Wed, 13 Mar 2019 15:37:44 +0530
Subject: [automated-testing] What is the current state of the Glusto test framework in upstream?
In-Reply-To: References: Message-ID: 
On Wed, Mar 13, 2019 at 3:03 PM Yaniv Kaul wrote:
> On Wed, Mar 13, 2019, 3:53 AM Sankarshan Mukhopadhyay wrote:
>>
>> What I am essentially looking to understand is whether there are
>> regular Glusto runs and whether the tests receive refreshes. However,
>> if there is no available Glusto service running upstream - that is a
>> whole new conversation.
>
>
> I'm* still trying to get it running properly on my simple Vagrant+Ansible setup[1].
> Right now I'm installing Gluster + Glusto + creating bricks, pool and a volume in ~3m on my latop. > This is good. I think my original question was to the maintainer(s) of Glusto along with the individuals involved in the automated testing part of Gluster to understand the challenges in deploying this for the project. > Once I do get it fully working, we'll get to make it work faster, clean it up and and see how can we get code coverage. > > Unless there's an alternative to the whole framework that I'm not aware of? I haven't read anything to this effect on any list. > Surely for most of the positive paths, we can (and perhaps should) use the the Gluster Ansible modules. > Y. > > [1] https://github.com/mykaul/vg > * with an intern's help. From jholloway at redhat.com Wed Mar 13 14:14:12 2019 From: jholloway at redhat.com (Jonathan Holloway) Date: Wed, 13 Mar 2019 09:14:12 -0500 Subject: [automated-testing] What is the current state of the Glusto test framework in upstream? In-Reply-To: References: Message-ID: On Wed, Mar 13, 2019 at 5:08 AM Sankarshan Mukhopadhyay < sankarshan.mukhopadhyay at gmail.com> wrote: > On Wed, Mar 13, 2019 at 3:03 PM Yaniv Kaul wrote: > > On Wed, Mar 13, 2019, 3:53 AM Sankarshan Mukhopadhyay < > sankarshan.mukhopadhyay at gmail.com> wrote: > >> > >> What I am essentially looking to understand is whether there are > >> regular Glusto runs and whether the tests receive refreshes. However, > >> if there is no available Glusto service running upstream - that is a > >> whole new conversation. > > > > > > I'm* still trying to get it running properly on my simple > Vagrant+Ansible setup[1]. > > Right now I'm installing Gluster + Glusto + creating bricks, pool and a > volume in ~3m on my latop. > > > > This is good. I think my original question was to the maintainer(s) of > Glusto along with the individuals involved in the automated testing > part of Gluster to understand the challenges in deploying this for the > project. 
> > Once I do get it fully working, we'll get to make it work faster, clean
> it up and see how we can get code coverage.
> >
> > Unless there's an alternative to the whole framework that I'm not aware
> of?
>
> I haven't read anything to this effect on any list.
>

This is cool. I haven't had a chance to give it a run on my laptop, but it
looked good.
Are you running into issues with Glusto, glusterlibs, and/or Glusto-tests?
I was using the glusto-tests container to run tests locally and for BVT in
the lab. I was running against lab VMs, so looking forward to giving the
vagrant piece a go.
By upstream service are we talking about the Jenkins in the CentOS
environment, etc?
@Vijay Bhaskar Reddy Avuthu @Akarsha Rai any insight?

Cheers,
Jonathan

> Surely for most of the positive paths, we can (and perhaps should) use
> the Gluster Ansible modules.
> > Y.
> >
> > [1] https://github.com/mykaul/vg
> > * with an intern's help.
> _______________________________________________
> automated-testing mailing list
> automated-testing at gluster.org
> https://lists.gluster.org/mailman/listinfo/automated-testing
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vavuthu at redhat.com Wed Mar 13 15:38:31 2019
From: vavuthu at redhat.com (Vijay Bhaskar Reddy Avuthu)
Date: Wed, 13 Mar 2019 21:08:31 +0530
Subject: [automated-testing] OPEN reviews on Gerrit for glusto-tests - what does the future hold?
In-Reply-To: References: Message-ID: 
There are only 3 PRs pending review; the rest all need rework.

Regards,
Vijay A

On Wed, Mar 13, 2019 at 8:21 AM Sankarshan Mukhopadhyay < sankarshan.mukhopadhyay at gmail.com> wrote:

> Circling back on this.
>
> On Mon, Mar 11, 2019 at 10:25 AM Sankarshan Mukhopadhyay
> wrote:
> >
> > I am looking at
> > and
> > this is a reasonably long list going back to 30Jan2018
> >
> > Are these all being actively worked upon? What is keeping them from
> > being merged?
> _______________________________________________
> automated-testing mailing list
> automated-testing at gluster.org
> https://lists.gluster.org/mailman/listinfo/automated-testing
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kiyer at redhat.com Fri Mar 15 08:09:52 2019
From: kiyer at redhat.com (Kshithij Iyer)
Date: Fri, 15 Mar 2019 13:39:52 +0530
Subject: [automated-testing] Patches getting stuck in CentOS-CI
Message-ID: 
Hi,
Since yesterday I have been observing that CentOS-CI is unable to run
patches and gets stuck just after all the installation is done. I have one
such patch stuck now [1]. Can someone please fix this?
[1] https://ci.centos.org/job/gluster_glusto-patch-check/1232/console

Thanks,

Kshithij Iyer

ASSOCIATE QUALITY ENGINEER

Red Hat India

kiyer at redhat.com IM: kiyer

TRIED. TESTED. TRUSTED.
@redhatjobs redhatjobs
@redhatjobs

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From saraut at redhat.com Fri Mar 15 09:13:04 2019
From: saraut at redhat.com (Sayalee Raut)
Date: Fri, 15 Mar 2019 14:43:04 +0530
Subject: [automated-testing] Patches getting stuck in CentOS-CI
In-Reply-To: References: Message-ID: 
Hello,

I too had faced the same issue a while ago, and had to request that the
job be aborted.
The job number was 1188.

Thanks & Regards,

sayalee raut

ASSOCIATE QUALITY ENGINEER

Red Hat India

saraut at redhat.com

TRIED. TESTED. TRUSTED.
@redhatjobs redhatjobs
@redhatjobs

On Fri, Mar 15, 2019 at 1:40 PM Kshithij Iyer wrote:

> Hi,
> Since yesterday I have been observing that CentOS-CI is unable to run
> patches and gets stuck just after all the installation is done. I have one
> such patch stuck now [1]. Can someone please fix this?
> [1] https://ci.centos.org/job/gluster_glusto-patch-check/1232/console
>
> Thanks,
>
> Kshithij Iyer
>
> ASSOCIATE QUALITY ENGINEER
>
> Red Hat India
>
> kiyer at redhat.com IM: kiyer
>
> TRIED. TESTED. TRUSTED.
> @redhatjobs redhatjobs
> @redhatjobs
>
>
>
> _______________________________________________
> automated-testing mailing list
> automated-testing at gluster.org
> https://lists.gluster.org/mailman/listinfo/automated-testing
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kiyer at redhat.com Fri Mar 15 09:22:27 2019
From: kiyer at redhat.com (Kshithij Iyer)
Date: Fri, 15 Mar 2019 14:52:27 +0530
Subject: [automated-testing] Patches getting stuck in CentOS-CI
In-Reply-To: References: Message-ID: 
I don't think aborting the job will help. The patch isn't running at all.
Something seems to be broken.

Thanks,

Kshithij Iyer

ASSOCIATE QUALITY ENGINEER

Red Hat India

kiyer at redhat.com IM: kiyer

TRIED. TESTED. TRUSTED.
@redhatjobs redhatjobs
@redhatjobs

On Fri, Mar 15, 2019 at 2:43 PM Sayalee Raut wrote:

> Hello,
>
> I too had faced the same issue a while ago, and had to request that the
> job be aborted.
> The job number was 1188.
>
> Thanks & Regards,
>
> sayalee raut
>
> ASSOCIATE QUALITY ENGINEER
>
> Red Hat India
>
> saraut at redhat.com
>
> TRIED. TESTED. TRUSTED.
> @redhatjobs redhatjobs
> @redhatjobs
>
>
>
> On Fri, Mar 15, 2019 at 1:40 PM Kshithij Iyer wrote:
>
>> Hi,
>> Since yesterday I have been observing that CentOS-CI is unable to run
>> patches and gets stuck just after all the installation is done. I have one
>> such patch stuck now [1]. Can someone please fix this?
>> [1] https://ci.centos.org/job/gluster_glusto-patch-check/1232/console
>>
>> Thanks,
>>
>> Kshithij Iyer
>>
>> ASSOCIATE QUALITY ENGINEER
>>
>> Red Hat India
>>
>> kiyer at redhat.com IM: kiyer
>>
>> TRIED. TESTED. TRUSTED.
>> @redhatjobs redhatjobs
>> @redhatjobs
>>
>>
>>
>> _______________________________________________
>> automated-testing mailing list
>> automated-testing at gluster.org
>> https://lists.gluster.org/mailman/listinfo/automated-testing
>>
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: From ykaul at redhat.com Thu Mar 28 13:46:40 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 28 Mar 2019 15:46:40 +0200 Subject: [automated-testing] Glusto-tests code quality Message-ID: 1. There are several typos in the messages. Would be nice to fix (I'll send a patch to those I stumble upon) 2. Do we have some code convention? Flake8, pep8, pylint? 3. Is there some CI, to ensure changes do not break tests? TIA, Y. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Thu Mar 28 14:00:05 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 28 Mar 2019 16:00:05 +0200 Subject: [automated-testing] Use of Linux commands in Glusto Message-ID: Why do we have: cd %s; ls .glusterfs/ Instead of: ls %s/.glusterfs/ Or better yet: ls -Abf --color=never %s/.glusterfs/ (we don't want sorting, we do want dot-files (hidden), we don't want coloring) TIA, Y. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Thu Mar 28 14:00:57 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 28 Mar 2019 16:00:57 +0200 Subject: [automated-testing] Tear down - shouldn't it unmount client? Message-ID: Teardown (at least where I'm looking at, test_vvt.py) is cleaning up the volume. Shouldn't it also unmount the client? TIA, Y. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jholloway at redhat.com Thu Mar 28 17:21:15 2019 From: jholloway at redhat.com (Jonathan Holloway) Date: Thu, 28 Mar 2019 12:21:15 -0500 Subject: [automated-testing] Tear down - shouldn't it unmount client? In-Reply-To: References: Message-ID: Only where a mount is exec'd in setUp. In some cases, tests are grouped by Class with the volume created in setUp without a mount. Any tests requiring a mount handle the mount and subsequent umount before tearDown gets run. 
e.g.,
test_volume_create_start_stop_start() is only testing the volume and
doesn't require the mount, whereas...
test_file_dir_create_ops_on_volume() is creating ops on the mounted volume
and does its own mount/umount.

This file could be broken into a volume-only class and a mounted-volume
class to handle the mount/umount in tearDown, or even allow the super
GlusterBaseClass.tearDownClass() method to do it automatically.

On another note, this test_vvt.py test can probably be eliminated with the
code covered in another volume test suite (or suites) and the volume
verification test step in BVT run using pytest markers against the
@pytest.mark.bvt_vvt decorator as I'd originally intended.
The idea there was to create a BVT test from a sample of existing
testcases written in the full test suites--eliminating duplication of code.

Cheers,
Jonathan

On Thu, Mar 28, 2019 at 9:02 AM Yaniv Kaul wrote:

> Teardown (at least where I'm looking at, test_vvt.py) is cleaning up the
> volume.
> Shouldn't it also unmount the client?
>
> TIA,
> Y.
> _______________________________________________
> automated-testing mailing list
> automated-testing at gluster.org
> https://lists.gluster.org/mailman/listinfo/automated-testing
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sankarshan.mukhopadhyay at gmail.com Thu Mar 28 18:52:46 2019
From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay)
Date: Fri, 29 Mar 2019 00:22:46 +0530
Subject: [automated-testing] Circling back on upgrade testing
Message-ID: 
Not so long back there was a conversation around the topic of
automating the upgrade testing paths. While I cannot find the thread
in the archives (and thus believe it was a private thread), I wanted
to know whether any progress has been made towards having that in place.
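[Editor's note: for the upgrade-path question above, one concrete check such automation could make is that cluster.op-version does not regress across an upgrade. A minimal stdlib-only sketch; the helper names are hypothetical, and the sample output below merely mimics the shape of `gluster volume get all cluster.op-version`, it was not captured from a real run.]

```python
# Hypothetical sketch of an op-version regression check for an automated
# upgrade path. parse_op_version() and the sample CLI output are
# illustrative only, not taken from glusto-tests.
def parse_op_version(cli_output):
    """Extract the integer op-version from gluster CLI output."""
    for line in cli_output.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0] == "cluster.op-version":
            return int(fields[1])
    raise ValueError("cluster.op-version not found")

def op_version_ok(before, after):
    """An upgrade must never lower the cluster op-version."""
    return parse_op_version(after) >= parse_op_version(before)

# Illustrative before/after captures (made-up values).
pre = "Option                   Value\n------                   -----\ncluster.op-version       31302"
post = "Option                   Value\n------                   -----\ncluster.op-version       40000"
print(op_version_ok(pre, post))  # True
```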
From ykaul at redhat.com Thu Mar 28 18:55:34 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 28 Mar 2019 20:55:34 +0200 Subject: [automated-testing] Tear down - shouldn't it unmount client? In-Reply-To: References: Message-ID: On Thu, Mar 28, 2019 at 7:21 PM Jonathan Holloway wrote: > Only where a mount is exec'd in setUp. In some cases, tests are grouped by > Class with the volume created in setUp without a mount. Any tests requiring > a mount handle the mount and subsequent umount before tearDown gets run. > > e.g., > test_volume_create_start_stop_start() is only testing the volume and > doesn't require the mount, whereas... > test_file_dir_create_ops_on_volume() is creating ops on the mounted volume > and does it's own mount/umount. > (It's also taking 100% CPU during execution, need to find out why...) > > This file could be broken into a volume only class and a mounted volume > class to handle the mount/umount in tearDown, or even allow the super > GlusterBaseClass.tearDownClass() method do it automatically. > Ok, so since for some reason test_volume_sanity() is failing for me[2], it doesn't unmount. Unmount before making the check, so it'll clean well, even if it fails seem to help[3]. > > On another note, this test_vvt.py test can probably be eliminated with the > code covered in another volume test suite (or suites) and the volume > verification test step in BVT run using pytest markers against > @pytest.mark.bvt_vvt decorator as I'd originally intended. > The idea there was to create a BVT test from a sample of existing > testcases written in the full test suites--eliminating duplication of code. > This is what is running today (I think) in upstream[1], so if it needs to / can be, that'd be great, but has to be coordinated. Y. [1] https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh [2] Donno what it means: E AssertionError: Lists are not equal. 
E Before creating file: ['00\nchangelogs\nindices\nlandfill\nunlink\n', '00\nchangelogs\nindices\nlandfill\nunlink\n', '00\nchangelogs\nindices\nlandfill\nunlink\n', '00\nchangelogs\nindices\nlandfill\nunlink\n'] E After deleting file: ['00\nchangelogs\nindices\nlandfill\nunlink\n', '00\nchangelogs\nindices\nlandfill\nunlink\n', '00\n25\nchangelogs\nindices\nlandfill\nunlink\n', '00\n25\n2d\nchangelogs\nindices\nlandfill\nunlink\n'] [3] https://review.gluster.org/#/c/glusto-tests/+/22440/ > > Cheers, > Jonathan > > On Thu, Mar 28, 2019 at 9:02 AM Yaniv Kaul wrote: > >> Teardown (at least where I'm looking at, test_vvt.py) is cleaning up the >> volume. >> Shouldn't it also unmount the client? >> >> TIA, >> Y. >> _______________________________________________ >> automated-testing mailing list >> automated-testing at gluster.org >> https://lists.gluster.org/mailman/listinfo/automated-testing >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Thu Mar 28 18:56:23 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 28 Mar 2019 20:56:23 +0200 Subject: [automated-testing] Circling back on upgrade testing In-Reply-To: References: Message-ID: Specifically op version update, doesn't seem to be covered also in the .t tests (at least that was my understanding from looking at the code coverage). Y. On Thu, Mar 28, 2019 at 8:53 PM Sankarshan Mukhopadhyay < sankarshan.mukhopadhyay at gmail.com> wrote: > Not so long back there was a conversation around the topic of > automating the upgrade testing paths. While I cannot find the thread > in the archives and thus I believe it was a private thread, I wanted > to know about any progress being made along the lines of being able to > now have that in place. 
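[Editor's note: the [2] failure above compares each brick's raw `ls` output as one string, so the leftover gfid-prefix directories ('25', '2d') that caused it are easy to miss. A small stdlib-only sketch, with a hypothetical helper that is not part of glusto-tests, of how the per-brick difference can be surfaced:]

```python
# Diff two per-brick .glusterfs listings entry-by-entry instead of
# comparing whole output strings. Illustrative helper only; the listing
# data is taken from the assertion output quoted in the thread.
def diff_brick_listings(before, after):
    """Return, per brick, the entries present after but not before."""
    return [sorted(set(b.split()) - set(a.split()))
            for a, b in zip(before, after)]

before = ['00\nchangelogs\nindices\nlandfill\nunlink\n'] * 4
after = ['00\nchangelogs\nindices\nlandfill\nunlink\n',
         '00\nchangelogs\nindices\nlandfill\nunlink\n',
         '00\n25\nchangelogs\nindices\nlandfill\nunlink\n',
         '00\n25\n2d\nchangelogs\nindices\nlandfill\nunlink\n']
print(diff_brick_listings(before, after))  # [[], [], ['25'], ['25', '2d']]
```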
> _______________________________________________ > automated-testing mailing list > automated-testing at gluster.org > https://lists.gluster.org/mailman/listinfo/automated-testing > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vavuthu at redhat.com Fri Mar 29 05:37:23 2019 From: vavuthu at redhat.com (Vijay Bhaskar Reddy Avuthu) Date: Fri, 29 Mar 2019 11:07:23 +0530 Subject: [automated-testing] Glusto-tests code quality In-Reply-To: References: Message-ID: Thanks Yaniv for pointing out the typos. We follow flake8 and pylint. As soon as patch is submitted, Gluster Build System will run both flake8 and pylint. If it fails Build System will give -1 Before merging the patch, we run the test case in upstream through "/run tests" Regards, Vijay A On Thu, Mar 28, 2019 at 7:24 PM Yaniv Kaul wrote: > 1. There are several typos in the messages. Would be nice to fix (I'll > send a patch to those I stumble upon) > 2. Do we have some code convention? Flake8, pep8, pylint? > 3. Is there some CI, to ensure changes do not break tests? > > TIA, > Y. > _______________________________________________ > automated-testing mailing list > automated-testing at gluster.org > https://lists.gluster.org/mailman/listinfo/automated-testing > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Mar 29 05:46:31 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 29 Mar 2019 08:46:31 +0300 Subject: [automated-testing] Glusto-tests code quality In-Reply-To: References: Message-ID: On Fri, Mar 29, 2019 at 8:38 AM Vijay Bhaskar Reddy Avuthu < vavuthu at redhat.com> wrote: > Thanks Yaniv for pointing out the typos. We follow flake8 and pylint. > As soon as patch is submitted, Gluster Build System will run both flake8 > and pylint. If it fails Build System will give -1 > Before merging the patch, we run the test case in upstream through "/run > tests" > Thanks. 
Is there a simple way to apply the same flake8 and pylint rules on your code, before submitting a patch? I see there's a .pylintrc in the repository, any instructions for flake8? TIA, Y. > > Regards, > Vijay A > > On Thu, Mar 28, 2019 at 7:24 PM Yaniv Kaul wrote: > >> 1. There are several typos in the messages. Would be nice to fix (I'll >> send a patch to those I stumble upon) >> 2. Do we have some code convention? Flake8, pep8, pylint? >> 3. Is there some CI, to ensure changes do not break tests? >> >> TIA, >> Y. >> _______________________________________________ >> automated-testing mailing list >> automated-testing at gluster.org >> https://lists.gluster.org/mailman/listinfo/automated-testing >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vavuthu at redhat.com Fri Mar 29 06:24:22 2019 From: vavuthu at redhat.com (Vijay Bhaskar Reddy) Date: Fri, 29 Mar 2019 11:54:22 +0530 Subject: [automated-testing] Tear down - shouldn't it unmount client? In-Reply-To: References: Message-ID: On 03/29/2019 12:25 AM, Yaniv Kaul wrote: > > > On Thu, Mar 28, 2019 at 7:21 PM Jonathan Holloway > > wrote: > > Only where a mount is exec'd in setUp. In some cases, tests are > grouped by Class with the volume created in setUp without a mount. > Any tests requiring a mount handle the mount and subsequent umount > before tearDown gets run. > > e.g., > test_volume_create_start_stop_start() is only testing the volume > and doesn't require the mount, whereas... > test_file_dir_create_ops_on_volume() is creating ops on the > mounted volume and does it's own mount/umount. > > > (It's also taking 100% CPU during execution, need to find out why...) > > > This file could be broken into a volume only class and a mounted > volume class to handle the mount/umount in tearDown, or even allow > the super GlusterBaseClass.tearDownClass() method do it automatically. 
> >
>
> Ok, so since for some reason test_volume_sanity() is failing for
> me[2], it doesn't unmount.
> Unmount before making the check, so it'll clean well, even if it fails
> seem to help[3].
>
>
> On another note, this test_vvt.py test can probably be eliminated
> with the code covered in another volume test suite (or suites) and
> the volume verification test step in BVT run using pytest markers
> against @pytest.mark.bvt_vvt decorator as I'd originally intended.
> The idea there was to create a BVT test from a sample of existing
> testcases written in the full test suites--eliminating duplication
> of code.
>
>
> This is what is running today (I think) in upstream[1], so if it needs
> to / can be, that'd be great, but has to be coordinated.
> Y.
>
> [1]
> https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh
> [2] Donno what it means:
> E       AssertionError: Lists are not equal.
> E       Before creating file:
> ['00\nchangelogs\nindices\nlandfill\nunlink\n',
> '00\nchangelogs\nindices\nlandfill\nunlink\n',
> '00\nchangelogs\nindices\nlandfill\nunlink\n',
> '00\nchangelogs\nindices\nlandfill\nunlink\n']
> E       After deleting file:
> ['00\nchangelogs\nindices\nlandfill\nunlink\n',
> '00\nchangelogs\nindices\nlandfill\nunlink\n',
> '00\n25\nchangelogs\nindices\nlandfill\nunlink\n',
> '00\n25\n2d\nchangelogs\nindices\nlandfill\nunlink\n']

I remember this test case was created as part of a closed gap, and the bug
was later turned to WONTFIX. I think we need to skip or remove the test
case. Since the test case asserts out before the unmount, it leaves the
mount point as it is.

> [3] https://review.gluster.org/#/c/glusto-tests/+/22440/
>
> Cheers,
> Jonathan
>
> On Thu, Mar 28, 2019 at 9:02 AM Yaniv Kaul
> wrote:
>
> Teardown (at least where I'm looking at, test_vvt.py) is
> cleaning up the volume.
> Shouldn't it also unmount the client?
>
> TIA,
> Y.
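[Editor's note: one way to skip the case without touching every runner invocation is a skip marker. A sketch with hypothetical class and test names; glusto-tests cases are unittest.TestCase-style, so the stdlib decorator applies, and pytest honours unittest skips as well.]

```python
# Sketch of deprecating a known-WONTFIX case with a skip decorator.
# Class and test names are hypothetical stand-ins, not real glusto-tests
# cases; @pytest.mark.skip would behave similarly under pytest.
import io
import unittest

class TestVolumeSanity(unittest.TestCase):
    @unittest.skip("Deprecated: underlying bug closed as WONTFIX")
    def test_volume_sanity(self):
        self.fail("must never run")

    def test_volume_create(self):
        self.assertTrue(True)  # stand-in for a real, still-valid case

# Run the class in-process to show the skip is honoured.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestVolumeSanity)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.testsRun, len(result.skipped))  # 2 1
```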
> _______________________________________________ > automated-testing mailing list > automated-testing at gluster.org > > https://lists.gluster.org/mailman/listinfo/automated-testing > > > > _______________________________________________ > automated-testing mailing list > automated-testing at gluster.org > https://lists.gluster.org/mailman/listinfo/automated-testing -------------- next part -------------- An HTML attachment was scrubbed... URL: From vavuthu at redhat.com Fri Mar 29 06:25:00 2019 From: vavuthu at redhat.com (Vijay Bhaskar Reddy Avuthu) Date: Fri, 29 Mar 2019 11:55:00 +0530 Subject: [automated-testing] Use of Linux commands in Glusto In-Reply-To: References: Message-ID: Agreed. On Thu, Mar 28, 2019 at 7:31 PM Yaniv Kaul wrote: > Why do we have: > cd %s; ls .glusterfs/ > > Instead of: > ls %s/.glusterfs/ > > Or better yet: > ls -Abf --color=never %s/.glusterfs/ > > (we don't want sorting, we do want dot-files (hidden), we don't want > coloring) > > TIA, > Y. > _______________________________________________ > automated-testing mailing list > automated-testing at gluster.org > https://lists.gluster.org/mailman/listinfo/automated-testing > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vavuthu at redhat.com Fri Mar 29 06:36:29 2019 From: vavuthu at redhat.com (Vijay Bhaskar Reddy Avuthu) Date: Fri, 29 Mar 2019 12:06:29 +0530 Subject: [automated-testing] Glusto-tests code quality In-Reply-To: References: Message-ID: We can check using below command before submitting patch #flake8 or #flake8 Regards, Vijay A On Fri, Mar 29, 2019 at 11:17 AM Yaniv Kaul wrote: > On Fri, Mar 29, 2019 at 8:38 AM Vijay Bhaskar Reddy Avuthu < > vavuthu at redhat.com> wrote: > >> Thanks Yaniv for pointing out the typos. We follow flake8 and pylint. >> As soon as patch is submitted, Gluster Build System will run both flake8 >> and pylint. 
If it fails Build System will give -1 >> Before merging the patch, we run the test case in upstream through "/run >> tests" >> > > Thanks. > Is there a simple way to apply the same flake8 and pylint rules on your > code, before submitting a patch? > I see there's a .pylintrc in the repository, any instructions for flake8? > TIA, > Y. > >> >> Regards, >> Vijay A >> >> On Thu, Mar 28, 2019 at 7:24 PM Yaniv Kaul wrote: >> >>> 1. There are several typos in the messages. Would be nice to fix (I'll >>> send a patch to those I stumble upon) >>> 2. Do we have some code convention? Flake8, pep8, pylint? >>> 3. Is there some CI, to ensure changes do not break tests? >>> >>> TIA, >>> Y. >>> _______________________________________________ >>> automated-testing mailing list >>> automated-testing at gluster.org >>> https://lists.gluster.org/mailman/listinfo/automated-testing >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Mar 29 06:52:40 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 29 Mar 2019 09:52:40 +0300 Subject: [automated-testing] Use of Linux commands in Glusto In-Reply-To: References: Message-ID: On Fri, Mar 29, 2019 at 9:25 AM Vijay Bhaskar Reddy Avuthu < vavuthu at redhat.com> wrote: > Agreed. > More worrying is the use of 'dd' without oflag=direct, which means data can be cached on the client before fully written. If we end up trying to read it from somewhere else, it may not be there, yet. Y. > > On Thu, Mar 28, 2019 at 7:31 PM Yaniv Kaul wrote: > >> Why do we have: >> cd %s; ls .glusterfs/ >> >> Instead of: >> ls %s/.glusterfs/ >> >> Or better yet: >> ls -Abf --color=never %s/.glusterfs/ >> >> (we don't want sorting, we do want dot-files (hidden), we don't want >> coloring) >> >> TIA, >> Y. 
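[Editor's note: the difference between the quoted `ls` invocations is easy to reproduce in a throwaway directory. A sketch with GNU coreutils assumed; the -f from the suggested -Abf (which disables sorting) is left out here so the output stays deterministic.]

```python
# Plain `ls` hides dot-entries, while `ls -Ab --color=never` lists them
# (without the . and .. noise) and needs no prior `cd`. The directory
# layout below is made up for the demo.
import pathlib
import subprocess
import tempfile

glusterfs = pathlib.Path(tempfile.mkdtemp()) / ".glusterfs"  # fake brick dir
glusterfs.mkdir()
(glusterfs / "00").touch()        # gfid-prefix directory stand-in
(glusterfs / ".hidden").touch()   # dot-entry that plain `ls` omits

plain = subprocess.run(["ls", str(glusterfs)],
                       capture_output=True, text=True, check=True).stdout.split()
full = subprocess.run(["ls", "-Ab", "--color=never", str(glusterfs)],
                      capture_output=True, text=True, check=True).stdout.split()
print(plain, sorted(full))  # ['00'] ['.hidden', '00']
```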
>> _______________________________________________ >> automated-testing mailing list >> automated-testing at gluster.org >> https://lists.gluster.org/mailman/listinfo/automated-testing >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Mar 29 06:53:38 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 29 Mar 2019 09:53:38 +0300 Subject: [automated-testing] Tear down - shouldn't it unmount client? In-Reply-To: References: Message-ID: On Fri, Mar 29, 2019 at 9:24 AM Vijay Bhaskar Reddy wrote: > > > On 03/29/2019 12:25 AM, Yaniv Kaul wrote: > > > > On Thu, Mar 28, 2019 at 7:21 PM Jonathan Holloway > wrote: > >> Only where a mount is exec'd in setUp. In some cases, tests are grouped >> by Class with the volume created in setUp without a mount. Any tests >> requiring a mount handle the mount and subsequent umount before tearDown >> gets run. >> >> e.g., >> test_volume_create_start_stop_start() is only testing the volume and >> doesn't require the mount, whereas... >> test_file_dir_create_ops_on_volume() is creating ops on the mounted >> volume and does it's own mount/umount. >> > > (It's also taking 100% CPU during execution, need to find out why...) > >> >> This file could be broken into a volume only class and a mounted volume >> class to handle the mount/umount in tearDown, or even allow the super >> GlusterBaseClass.tearDownClass() method do it automatically. >> > > Ok, so since for some reason test_volume_sanity() is failing for me[2], it > doesn't unmount. > Unmount before making the check, so it'll clean well, even if it fails > seem to help[3]. > >> >> On another note, this test_vvt.py test can probably be eliminated with >> the code covered in another volume test suite (or suites) and the volume >> verification test step in BVT run using pytest markers against >> @pytest.mark.bvt_vvt decorator as I'd originally intended. 
>> The idea there was to create a BVT test from a sample of existing >> testcases written in the full test suites--eliminating duplication of code. >> > > This is what is running today (I think) in upstream[1], so if it needs to > / can be, that'd be great, but has to be coordinated. > Y. > > [1] > https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh > [2] Donno what it means: > E AssertionError: Lists are not equal. > E Before creating file: > ['00\nchangelogs\nindices\nlandfill\nunlink\n', > '00\nchangelogs\nindices\nlandfill\nunlink\n', > '00\nchangelogs\nindices\nlandfill\nunlink\n', > '00\nchangelogs\nindices\nlandfill\nunlink\n'] > E After deleting file: > ['00\nchangelogs\nindices\nlandfill\nunlink\n', > '00\nchangelogs\nindices\nlandfill\nunlink\n', > '00\n25\nchangelogs\nindices\nlandfill\nunlink\n', > '00\n25\n2d\nchangelogs\nindices\nlandfill\nunlink\n'] > > > I remember this test cases was created as part of Closed Gap and later bug > turned to WONTFIX. I think we need to skip or remove the test cases. Since > test cases is asserting out before unmount, it > leaves the mount point as it is. > The latter part I've fixed. the former one, do we need to simply depracate this test? (which makes me wonder who's running those tests at all, if they are broken...) Y. > > > [3] https://review.gluster.org/#/c/glusto-tests/+/22440/ > >> >> Cheers, >> Jonathan >> >> On Thu, Mar 28, 2019 at 9:02 AM Yaniv Kaul wrote: >> >>> Teardown (at least where I'm looking at, test_vvt.py) is cleaning up the >>> volume. >>> Shouldn't it also unmount the client? >>> >>> TIA, >>> Y. 
>>> _______________________________________________ >>> automated-testing mailing list >>> automated-testing at gluster.org >>> https://lists.gluster.org/mailman/listinfo/automated-testing >>> >> > > _______________________________________________ > automated-testing mailing listautomated-testing at gluster.orghttps://lists.gluster.org/mailman/listinfo/automated-testing > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sankarshan.mukhopadhyay at gmail.com Fri Mar 29 07:06:01 2019 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Fri, 29 Mar 2019 12:36:01 +0530 Subject: [automated-testing] Tear down - shouldn't it unmount client? In-Reply-To: References: Message-ID: On Fri, Mar 29, 2019 at 12:24 PM Yaniv Kaul wrote: [snip] > The latter part I've fixed. the former one, do we need to simply depracate this test? > (which makes me wonder who's running those tests at all, if they are broken...) Pretty much the same thing as I wanted to discover at From vavuthu at redhat.com Fri Mar 29 07:14:07 2019 From: vavuthu at redhat.com (Vijay Bhaskar Reddy Avuthu) Date: Fri, 29 Mar 2019 12:44:07 +0530 Subject: [automated-testing] Tear down - shouldn't it unmount client? In-Reply-To: References: Message-ID: Yes, We need to deprecate this test. We are explicitly saying not to run this test by using option "-k not " in our runs. If everyone agrees, Akarsha will submit patch to skip the test case using markers. Regards, Vijay A Regards, Vijay A On Fri, Mar 29, 2019 at 12:24 PM Yaniv Kaul wrote: > > > On Fri, Mar 29, 2019 at 9:24 AM Vijay Bhaskar Reddy > wrote: > >> >> >> On 03/29/2019 12:25 AM, Yaniv Kaul wrote: >> >> >> >> On Thu, Mar 28, 2019 at 7:21 PM Jonathan Holloway >> wrote: >> >>> Only where a mount is exec'd in setUp. In some cases, tests are grouped >>> by Class with the volume created in setUp without a mount. 
Any tests >>> requiring a mount handle the mount and subsequent umount before tearDown >>> gets run. >>> >>> e.g., >>> test_volume_create_start_stop_start() is only testing the volume and >>> doesn't require the mount, whereas... >>> test_file_dir_create_ops_on_volume() is creating ops on the mounted >>> volume and does it's own mount/umount. >>> >> >> (It's also taking 100% CPU during execution, need to find out why...) >> >>> >>> This file could be broken into a volume only class and a mounted volume >>> class to handle the mount/umount in tearDown, or even allow the super >>> GlusterBaseClass.tearDownClass() method do it automatically. >>> >> >> Ok, so since for some reason test_volume_sanity() is failing for me[2], >> it doesn't unmount. >> Unmount before making the check, so it'll clean well, even if it fails >> seem to help[3]. >> >>> >>> On another note, this test_vvt.py test can probably be eliminated with >>> the code covered in another volume test suite (or suites) and the volume >>> verification test step in BVT run using pytest markers against >>> @pytest.mark.bvt_vvt decorator as I'd originally intended. >>> The idea there was to create a BVT test from a sample of existing >>> testcases written in the full test suites--eliminating duplication of code. >>> >> >> This is what is running today (I think) in upstream[1], so if it needs to >> / can be, that'd be great, but has to be coordinated. >> Y. >> >> [1] >> https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh >> [2] Donno what it means: >> E AssertionError: Lists are not equal. 
>> E Before creating file: >> ['00\nchangelogs\nindices\nlandfill\nunlink\n', >> '00\nchangelogs\nindices\nlandfill\nunlink\n', >> '00\nchangelogs\nindices\nlandfill\nunlink\n', >> '00\nchangelogs\nindices\nlandfill\nunlink\n'] >> E After deleting file: >> ['00\nchangelogs\nindices\nlandfill\nunlink\n', >> '00\nchangelogs\nindices\nlandfill\nunlink\n', >> '00\n25\nchangelogs\nindices\nlandfill\nunlink\n', >> '00\n25\n2d\nchangelogs\nindices\nlandfill\nunlink\n'] >> >> >> I remember this test cases was created as part of Closed Gap and later >> bug turned to WONTFIX. I think we need to skip or remove the test cases. >> Since test cases is asserting out before unmount, it >> leaves the mount point as it is. >> > > The latter part I've fixed. the former one, do we need to simply depracate > this test? > (which makes me wonder who's running those tests at all, if they are > broken...) > Y. > >> >> >> [3] https://review.gluster.org/#/c/glusto-tests/+/22440/ >> >>> >>> Cheers, >>> Jonathan >>> >>> On Thu, Mar 28, 2019 at 9:02 AM Yaniv Kaul wrote: >>> >>>> Teardown (at least where I'm looking at, test_vvt.py) is cleaning up >>>> the volume. >>>> Shouldn't it also unmount the client? >>>> >>>> TIA, >>>> Y. >>>> _______________________________________________ >>>> automated-testing mailing list >>>> automated-testing at gluster.org >>>> https://lists.gluster.org/mailman/listinfo/automated-testing >>>> >>> >> >> _______________________________________________ >> automated-testing mailing listautomated-testing at gluster.orghttps://lists.gluster.org/mailman/listinfo/automated-testing >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Mar 29 07:18:06 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 29 Mar 2019 10:18:06 +0300 Subject: [automated-testing] Tear down - shouldn't it unmount client? 
In-Reply-To: References: Message-ID: On Fri, Mar 29, 2019 at 10:14 AM Vijay Bhaskar Reddy Avuthu < vavuthu at redhat.com> wrote: > Yes, We need to deprecate this test. We are explicitly saying not to run > this test by using option "-k not " in our runs. > So why wasn't it contributed to upstream? > > If everyone agrees, Akarsha will submit patch to skip the test case using > markers. > No, please remove it. There's no point in confusing more people about it. Deepshikha - I'd appreciate if you can please ensure it's also removed from upstream[1]. Y. [1] https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh > Regards, > Vijay A > > > Regards, > Vijay A > > On Fri, Mar 29, 2019 at 12:24 PM Yaniv Kaul wrote: > >> >> >> On Fri, Mar 29, 2019 at 9:24 AM Vijay Bhaskar Reddy >> wrote: >> >>> >>> >>> On 03/29/2019 12:25 AM, Yaniv Kaul wrote: >>> >>> >>> >>> On Thu, Mar 28, 2019 at 7:21 PM Jonathan Holloway >>> wrote: >>> >>>> Only where a mount is exec'd in setUp. In some cases, tests are grouped >>>> by Class with the volume created in setUp without a mount. Any tests >>>> requiring a mount handle the mount and subsequent umount before tearDown >>>> gets run. >>>> >>>> e.g., >>>> test_volume_create_start_stop_start() is only testing the volume and >>>> doesn't require the mount, whereas... >>>> test_file_dir_create_ops_on_volume() is creating ops on the mounted >>>> volume and does it's own mount/umount. >>>> >>> >>> (It's also taking 100% CPU during execution, need to find out why...) >>> >>>> >>>> This file could be broken into a volume only class and a mounted volume >>>> class to handle the mount/umount in tearDown, or even allow the super >>>> GlusterBaseClass.tearDownClass() method do it automatically. >>>> >>> >>> Ok, so since for some reason test_volume_sanity() is failing for me[2], >>> it doesn't unmount. 
>>> Unmount before making the check, so it'll clean well, even if it fails >>> seem to help[3]. >>> >>>> >>>> On another note, this test_vvt.py test can probably be eliminated with >>>> the code covered in another volume test suite (or suites) and the volume >>>> verification test step in BVT run using pytest markers against >>>> @pytest.mark.bvt_vvt decorator as I'd originally intended. >>>> The idea there was to create a BVT test from a sample of existing >>>> testcases written in the full test suites--eliminating duplication of code. >>>> >>> >>> This is what is running today (I think) in upstream[1], so if it needs >>> to / can be, that'd be great, but has to be coordinated. >>> Y. >>> >>> [1] >>> https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh >>> [2] Donno what it means: >>> E AssertionError: Lists are not equal. >>> E Before creating file: >>> ['00\nchangelogs\nindices\nlandfill\nunlink\n', >>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>> '00\nchangelogs\nindices\nlandfill\nunlink\n'] >>> E After deleting file: >>> ['00\nchangelogs\nindices\nlandfill\nunlink\n', >>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>> '00\n25\nchangelogs\nindices\nlandfill\nunlink\n', >>> '00\n25\n2d\nchangelogs\nindices\nlandfill\nunlink\n'] >>> >>> >>> I remember this test cases was created as part of Closed Gap and later >>> bug turned to WONTFIX. I think we need to skip or remove the test cases. >>> Since test cases is asserting out before unmount, it >>> leaves the mount point as it is. >>> >> >> The latter part I've fixed. the former one, do we need to simply >> depracate this test? >> (which makes me wonder who's running those tests at all, if they are >> broken...) >> Y. 
>> >>> >>> >>> [3] https://review.gluster.org/#/c/glusto-tests/+/22440/ >>> >>>> >>>> Cheers, >>>> Jonathan >>>> >>>> On Thu, Mar 28, 2019 at 9:02 AM Yaniv Kaul wrote: >>>> >>>>> Teardown (at least where I'm looking at, test_vvt.py) is cleaning up >>>>> the volume. >>>>> Shouldn't it also unmount the client? >>>>> >>>>> TIA, >>>>> Y. >>>>> _______________________________________________ >>>>> automated-testing mailing list >>>>> automated-testing at gluster.org >>>>> https://lists.gluster.org/mailman/listinfo/automated-testing >>>>> >>>> >>> >>> _______________________________________________ >>> automated-testing mailing listautomated-testing at gluster.orghttps://lists.gluster.org/mailman/listinfo/automated-testing >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkhandel at redhat.com Fri Mar 29 07:43:57 2019 From: dkhandel at redhat.com (Deepshikha Khandelwal) Date: Fri, 29 Mar 2019 13:13:57 +0530 Subject: [automated-testing] Tear down - shouldn't it unmount client? In-Reply-To: References: Message-ID: Ack. I'll update it. On Fri, Mar 29, 2019 at 12:48 PM Yaniv Kaul wrote: > > > On Fri, Mar 29, 2019 at 10:14 AM Vijay Bhaskar Reddy Avuthu < > vavuthu at redhat.com> wrote: > >> Yes, We need to deprecate this test. We are explicitly saying not to run >> this test by using option "-k not " in our runs. >> > > So why wasn't it contributed to upstream? > >> >> If everyone agrees, Akarsha will submit patch to skip the test case using >> markers. >> > > No, please remove it. There's no point in confusing more people about it. > Deepshikha - I'd appreciate if you can please ensure it's also removed > from upstream[1]. > > Y. 
> [1] > https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh > > >> Regards, >> Vijay A >> >> >> Regards, >> Vijay A >> >> On Fri, Mar 29, 2019 at 12:24 PM Yaniv Kaul wrote: >> >>> >>> >>> On Fri, Mar 29, 2019 at 9:24 AM Vijay Bhaskar Reddy >>> wrote: >>> >>>> >>>> >>>> On 03/29/2019 12:25 AM, Yaniv Kaul wrote: >>>> >>>> >>>> >>>> On Thu, Mar 28, 2019 at 7:21 PM Jonathan Holloway >>>> wrote: >>>> >>>>> Only where a mount is exec'd in setUp. In some cases, tests are >>>>> grouped by Class with the volume created in setUp without a mount. Any >>>>> tests requiring a mount handle the mount and subsequent umount before >>>>> tearDown gets run. >>>>> >>>>> e.g., >>>>> test_volume_create_start_stop_start() is only testing the volume and >>>>> doesn't require the mount, whereas... >>>>> test_file_dir_create_ops_on_volume() is creating ops on the mounted >>>>> volume and does it's own mount/umount. >>>>> >>>> >>>> (It's also taking 100% CPU during execution, need to find out why...) >>>> >>>>> >>>>> This file could be broken into a volume only class and a mounted >>>>> volume class to handle the mount/umount in tearDown, or even allow the >>>>> super GlusterBaseClass.tearDownClass() method do it automatically. >>>>> >>>> >>>> Ok, so since for some reason test_volume_sanity() is failing for me[2], >>>> it doesn't unmount. >>>> Unmount before making the check, so it'll clean well, even if it fails >>>> seem to help[3]. >>>> >>>>> >>>>> On another note, this test_vvt.py test can probably be eliminated with >>>>> the code covered in another volume test suite (or suites) and the volume >>>>> verification test step in BVT run using pytest markers against >>>>> @pytest.mark.bvt_vvt decorator as I'd originally intended. >>>>> The idea there was to create a BVT test from a sample of existing >>>>> testcases written in the full test suites--eliminating duplication of code. 
>>>>> >>>> >>>> This is what is running today (I think) in upstream[1], so if it needs >>>> to / can be, that'd be great, but has to be coordinated. >>>> Y. >>>> >>>> [1] >>>> https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh >>>> [2] Donno what it means: >>>> E AssertionError: Lists are not equal. >>>> E Before creating file: >>>> ['00\nchangelogs\nindices\nlandfill\nunlink\n', >>>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>>> '00\nchangelogs\nindices\nlandfill\nunlink\n'] >>>> E After deleting file: >>>> ['00\nchangelogs\nindices\nlandfill\nunlink\n', >>>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>>> '00\n25\nchangelogs\nindices\nlandfill\nunlink\n', >>>> '00\n25\n2d\nchangelogs\nindices\nlandfill\nunlink\n'] >>>> >>>> >>>> I remember this test cases was created as part of Closed Gap and later >>>> bug turned to WONTFIX. I think we need to skip or remove the test cases. >>>> Since test cases is asserting out before unmount, it >>>> leaves the mount point as it is. >>>> >>> >>> The latter part I've fixed. the former one, do we need to simply >>> depracate this test? >>> (which makes me wonder who's running those tests at all, if they are >>> broken...) >>> Y. >>> >>>> >>>> >>>> [3] https://review.gluster.org/#/c/glusto-tests/+/22440/ >>>> >>>>> >>>>> Cheers, >>>>> Jonathan >>>>> >>>>> On Thu, Mar 28, 2019 at 9:02 AM Yaniv Kaul wrote: >>>>> >>>>>> Teardown (at least where I'm looking at, test_vvt.py) is cleaning up >>>>>> the volume. >>>>>> Shouldn't it also unmount the client? >>>>>> >>>>>> TIA, >>>>>> Y. 
>>>>>> _______________________________________________ >>>>>> automated-testing mailing list >>>>>> automated-testing at gluster.org >>>>>> https://lists.gluster.org/mailman/listinfo/automated-testing >>>>>> >>>>> >> >>>> _______________________________________________ >>>> automated-testing mailing list automated-testing at gluster.org https://lists.gluster.org/mailman/listinfo/automated-testing >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Mar 29 07:57:37 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 29 Mar 2019 10:57:37 +0300 Subject: [automated-testing] Reverse-engineering Glusto tests requirements Message-ID: Instead of me failing, looking at the logs, and reverse engineering what requirements needed to be on the nodes, can we have some documentation for it? Examples include: - smbclient (on the server) - Samba server up and running - FUSE client etc. TIA, Y. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkhandel at redhat.com Fri Mar 29 09:04:10 2019 From: dkhandel at redhat.com (Deepshikha Khandelwal) Date: Fri, 29 Mar 2019 14:34:10 +0530 Subject: [automated-testing] Tear down - shouldn't it unmount client? In-Reply-To: References: Message-ID: Yaniv- FYI, you are looking at the deprecated folder[1]. We have moved all the centos ci related jobs to its own repo[2]. Removed functional test_vvt from script. [1] https://github.com/gluster/glusterfs-patch-acceptance-tests/tree/master/centos-ci [2] https://github.com/gluster/centosci On Fri, Mar 29, 2019 at 1:13 PM Deepshikha Khandelwal wrote: > Ack. I'll update it. > > On Fri, Mar 29, 2019 at 12:48 PM Yaniv Kaul wrote: > >> >> >> On Fri, Mar 29, 2019 at 10:14 AM Vijay Bhaskar Reddy Avuthu < >> vavuthu at redhat.com> wrote: >> >>> Yes, We need to deprecate this test. We are explicitly saying not to run >>> this test by using option "-k not " in our runs.
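The two approaches being weighed in this exchange -- an external `-k "not ..."` filter on every run versus marking the test itself -- can be sketched with stdlib unittest skips, which pytest also honours. `TestVVT` below is a hypothetical stand-in, not the real test_vvt.py:

```python
import io
import unittest

# Today's runs deselect externally, e.g.:  pytest -k "not test_vvt" ...
# Marking the test itself (an unittest skip here; a pytest marker works the
# same way) records the decision in the code, so no runner can forget to
# pass the filter.
@unittest.skip("deprecated: associated bug closed as WONTFIX")
class TestVVT(unittest.TestCase):
    def test_volume_sanity(self):
        self.fail("never reached once the class is skipped")

# Run quietly and confirm the case is reported as skipped, not failed.
result = unittest.TextTestRunner(stream=io.StringIO()).run(
    unittest.TestLoader().loadTestsFromTestCase(TestVVT))
print(len(result.skipped), len(result.failures))
```

That said, for a case whose bug is WONTFIX, outright removal (as argued below) avoids the confusion a permanently-skipped test leaves behind.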
>>> >> >> So why wasn't it contributed to upstream? >> >>> >>> If everyone agrees, Akarsha will submit patch to skip the test case >>> using markers. >>> >> >> No, please remove it. There's no point in confusing more people about it. >> Deepshikha - I'd appreciate if you can please ensure it's also removed >> from upstream[1]. >> >> Y. >> [1] >> https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh >> >> >>> Regards, >>> Vijay A >>> >>> >>> Regards, >>> Vijay A >>> >>> On Fri, Mar 29, 2019 at 12:24 PM Yaniv Kaul wrote: >>> >>>> >>>> >>>> On Fri, Mar 29, 2019 at 9:24 AM Vijay Bhaskar Reddy >>>> wrote: >>>> >>>>> >>>>> >>>>> On 03/29/2019 12:25 AM, Yaniv Kaul wrote: >>>>> >>>>> >>>>> >>>>> On Thu, Mar 28, 2019 at 7:21 PM Jonathan Holloway < >>>>> jholloway at redhat.com> wrote: >>>>> >>>>>> Only where a mount is exec'd in setUp. In some cases, tests are >>>>>> grouped by Class with the volume created in setUp without a mount. Any >>>>>> tests requiring a mount handle the mount and subsequent umount before >>>>>> tearDown gets run. >>>>>> >>>>>> e.g., >>>>>> test_volume_create_start_stop_start() is only testing the volume and >>>>>> doesn't require the mount, whereas... >>>>>> test_file_dir_create_ops_on_volume() is creating ops on the mounted >>>>>> volume and does it's own mount/umount. >>>>>> >>>>> >>>>> (It's also taking 100% CPU during execution, need to find out why...) >>>>> >>>>>> >>>>>> This file could be broken into a volume only class and a mounted >>>>>> volume class to handle the mount/umount in tearDown, or even allow the >>>>>> super GlusterBaseClass.tearDownClass() method do it automatically. >>>>>> >>>>> >>>>> Ok, so since for some reason test_volume_sanity() is failing for >>>>> me[2], it doesn't unmount. >>>>> Unmount before making the check, so it'll clean well, even if it fails >>>>> seem to help[3]. 
>>>>> >>>>>> >>>>>> On another note, this test_vvt.py test can probably be eliminated >>>>>> with the code covered in another volume test suite (or suites) and the >>>>>> volume verification test step in BVT run using pytest markers against >>>>>> @pytest.mark.bvt_vvt decorator as I'd originally intended. >>>>>> The idea there was to create a BVT test from a sample of existing >>>>>> testcases written in the full test suites--eliminating duplication of code. >>>>>> >>>>> >>>>> This is what is running today (I think) in upstream[1], so if it needs >>>>> to / can be, that'd be great, but has to be coordinated. >>>>> Y. >>>>> >>>>> [1] >>>>> https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh >>>>> [2] Donno what it means: >>>>> E AssertionError: Lists are not equal. >>>>> E Before creating file: >>>>> ['00\nchangelogs\nindices\nlandfill\nunlink\n', >>>>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>>>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>>>> '00\nchangelogs\nindices\nlandfill\nunlink\n'] >>>>> E After deleting file: >>>>> ['00\nchangelogs\nindices\nlandfill\nunlink\n', >>>>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>>>> '00\n25\nchangelogs\nindices\nlandfill\nunlink\n', >>>>> '00\n25\n2d\nchangelogs\nindices\nlandfill\nunlink\n'] >>>>> >>>>> >>>>> I remember this test cases was created as part of Closed Gap and later >>>>> bug turned to WONTFIX. I think we need to skip or remove the test cases. >>>>> Since test cases is asserting out before unmount, it >>>>> leaves the mount point as it is. >>>>> >>>> >>>> The latter part I've fixed. the former one, do we need to simply >>>> depracate this test? >>>> (which makes me wonder who's running those tests at all, if they are >>>> broken...) >>>> Y. 
>>>> >>>>> >>>>> >>>>> [3] https://review.gluster.org/#/c/glusto-tests/+/22440/ >>>>> >>>>>> >>>>>> Cheers, >>>>>> Jonathan >>>>>> >>>>>> On Thu, Mar 28, 2019 at 9:02 AM Yaniv Kaul wrote: >>>>>> >>>>>>> Teardown (at least where I'm looking at, test_vvt.py) is cleaning up >>>>>>> the volume. >>>>>>> Shouldn't it also unmount the client? >>>>>>> >>>>>>> TIA, >>>>>>> Y. >>>>>>> _______________________________________________ >>>>>>> automated-testing mailing list >>>>>>> automated-testing at gluster.org >>>>>>> https://lists.gluster.org/mailman/listinfo/automated-testing >>>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> automated-testing mailing listautomated-testing at gluster.orghttps://lists.gluster.org/mailman/listinfo/automated-testing >>>>> >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Mar 29 09:47:48 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 29 Mar 2019 12:47:48 +0300 Subject: [automated-testing] Tear down - shouldn't it unmount client? In-Reply-To: References: Message-ID: On Fri, Mar 29, 2019 at 12:04 PM Deepshikha Khandelwal wrote: > Yaniv- FYI, you are looking at the deprecated folder[1]. We have moved all > the centos ci related jobs to its own repo[2]. > Thanks. I'll take a look. Why do we keep a deprecated folder? Y. > Removed functional test_vvt from script. > > [1] > https://github.com/gluster/glusterfs-patch-acceptance-tests/tree/master/centos-ci > [2] https://github.com/gluster/centosci > > On Fri, Mar 29, 2019 at 1:13 PM Deepshikha Khandelwal > wrote: > >> Ack. I'll update it. >> >> On Fri, Mar 29, 2019 at 12:48 PM Yaniv Kaul wrote: >> >>> >>> >>> On Fri, Mar 29, 2019 at 10:14 AM Vijay Bhaskar Reddy Avuthu < >>> vavuthu at redhat.com> wrote: >>> >>>> Yes, We need to deprecate this test. We are explicitly saying not to >>>> run this test by using option "-k not " in our runs. >>>> >>> >>> So why wasn't it contributed to upstream? 
>>> >>>> >>>> If everyone agrees, Akarsha will submit patch to skip the test case >>>> using markers. >>>> >>> >>> No, please remove it. There's no point in confusing more people about it. >>> Deepshikha - I'd appreciate if you can please ensure it's also removed >>> from upstream[1]. >>> >>> Y. >>> [1] >>> https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh >>> >>> >>>> Regards, >>>> Vijay A >>>> >>>> >>>> Regards, >>>> Vijay A >>>> >>>> On Fri, Mar 29, 2019 at 12:24 PM Yaniv Kaul wrote: >>>> >>>>> >>>>> >>>>> On Fri, Mar 29, 2019 at 9:24 AM Vijay Bhaskar Reddy < >>>>> vavuthu at redhat.com> wrote: >>>>> >>>>>> >>>>>> >>>>>> On 03/29/2019 12:25 AM, Yaniv Kaul wrote: >>>>>> >>>>>> >>>>>> >>>>>> On Thu, Mar 28, 2019 at 7:21 PM Jonathan Holloway < >>>>>> jholloway at redhat.com> wrote: >>>>>> >>>>>>> Only where a mount is exec'd in setUp. In some cases, tests are >>>>>>> grouped by Class with the volume created in setUp without a mount. Any >>>>>>> tests requiring a mount handle the mount and subsequent umount before >>>>>>> tearDown gets run. >>>>>>> >>>>>>> e.g., >>>>>>> test_volume_create_start_stop_start() is only testing the volume and >>>>>>> doesn't require the mount, whereas... >>>>>>> test_file_dir_create_ops_on_volume() is creating ops on the mounted >>>>>>> volume and does it's own mount/umount. >>>>>>> >>>>>> >>>>>> (It's also taking 100% CPU during execution, need to find out why...) >>>>>> >>>>>>> >>>>>>> This file could be broken into a volume only class and a mounted >>>>>>> volume class to handle the mount/umount in tearDown, or even allow the >>>>>>> super GlusterBaseClass.tearDownClass() method do it automatically. >>>>>>> >>>>>> >>>>>> Ok, so since for some reason test_volume_sanity() is failing for >>>>>> me[2], it doesn't unmount. >>>>>> Unmount before making the check, so it'll clean well, even if it >>>>>> fails seem to help[3]. 
>>>>>> >>>>>>> >>>>>>> On another note, this test_vvt.py test can probably be eliminated >>>>>>> with the code covered in another volume test suite (or suites) and the >>>>>>> volume verification test step in BVT run using pytest markers against >>>>>>> @pytest.mark.bvt_vvt decorator as I'd originally intended. >>>>>>> The idea there was to create a BVT test from a sample of existing >>>>>>> testcases written in the full test suites--eliminating duplication of code. >>>>>>> >>>>>> >>>>>> This is what is running today (I think) in upstream[1], so if it >>>>>> needs to / can be, that'd be great, but has to be coordinated. >>>>>> Y. >>>>>> >>>>>> [1] >>>>>> https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/b9e7dbc57bc96c8a538593f7a5ff0f03fc38e335/centos-ci/scripts/run-glusto.sh >>>>>> [2] Donno what it means: >>>>>> E AssertionError: Lists are not equal. >>>>>> E Before creating file: >>>>>> ['00\nchangelogs\nindices\nlandfill\nunlink\n', >>>>>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>>>>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>>>>> '00\nchangelogs\nindices\nlandfill\nunlink\n'] >>>>>> E After deleting file: >>>>>> ['00\nchangelogs\nindices\nlandfill\nunlink\n', >>>>>> '00\nchangelogs\nindices\nlandfill\nunlink\n', >>>>>> '00\n25\nchangelogs\nindices\nlandfill\nunlink\n', >>>>>> '00\n25\n2d\nchangelogs\nindices\nlandfill\nunlink\n'] >>>>>> >>>>>> >>>>>> I remember this test cases was created as part of Closed Gap and >>>>>> later bug turned to WONTFIX. I think we need to skip or remove the test >>>>>> cases. Since test cases is asserting out before unmount, it >>>>>> leaves the mount point as it is. >>>>>> >>>>> >>>>> The latter part I've fixed. the former one, do we need to simply >>>>> depracate this test? >>>>> (which makes me wonder who's running those tests at all, if they are >>>>> broken...) >>>>> Y. 
>>>>> >>>>>> >>>>>> >>>>>> [3] https://review.gluster.org/#/c/glusto-tests/+/22440/ >>>>>> >>>>>>> >>>>>>> Cheers, >>>>>>> Jonathan >>>>>>> >>>>>>> On Thu, Mar 28, 2019 at 9:02 AM Yaniv Kaul wrote: >>>>>>> >>>>>>>> Teardown (at least where I'm looking at, test_vvt.py) is cleaning >>>>>>>> up the volume. >>>>>>>> Shouldn't it also unmount the client? >>>>>>>> >>>>>>>> TIA, >>>>>>>> Y. >>>>>>>> _______________________________________________ >>>>>>>> automated-testing mailing list >>>>>>>> automated-testing at gluster.org >>>>>>>> https://lists.gluster.org/mailman/listinfo/automated-testing >>>>>>>> >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> automated-testing mailing listautomated-testing at gluster.orghttps://lists.gluster.org/mailman/listinfo/automated-testing >>>>>> >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Sun Mar 31 12:59:50 2019 From: ykaul at redhat.com (Yaniv Kaul) Date: Sun, 31 Mar 2019 15:59:50 +0300 Subject: [automated-testing] What is the current state of the Glusto test framework in upstream? In-Reply-To: References: Message-ID: On Wed, Mar 13, 2019 at 4:14 PM Jonathan Holloway wrote: > > > On Wed, Mar 13, 2019 at 5:08 AM Sankarshan Mukhopadhyay < > sankarshan.mukhopadhyay at gmail.com> wrote: > >> On Wed, Mar 13, 2019 at 3:03 PM Yaniv Kaul wrote: >> > On Wed, Mar 13, 2019, 3:53 AM Sankarshan Mukhopadhyay < >> sankarshan.mukhopadhyay at gmail.com> wrote: >> >> >> >> What I am essentially looking to understand is whether there are >> >> regular Glusto runs and whether the tests receive refreshes. However, >> >> if there is no available Glusto service running upstream - that is a >> >> whole new conversation. >> > >> > >> > I'm* still trying to get it running properly on my simple >> Vagrant+Ansible setup[1]. >> > Right now I'm installing Gluster + Glusto + creating bricks, pool and a >> volume in ~3m on my latop. >> > >> >> This is good. 
I think my original question was to the maintainer(s) of >> Glusto along with the individuals involved in the automated testing >> part of Gluster to understand the challenges in deploying this for the >> project. >> >> > Once I do get it fully working, we'll get to make it work faster, clean >> it up, and see how we can get code coverage. >> > >> > Unless there's an alternative to the whole framework that I'm not aware >> of? >> >> I haven't read anything to this effect on any list. >> >> > This is cool. I haven't had a chance to give it a run on my laptop, but it > looked good. > Are you running into issues with Glusto, glusterlibs, and/or Glusto-tests? > All of the above. - The client consumes at times 100% CPU, not sure why. - There are missing deps which I'm reverse engineering from Gluster CI (which by itself has some strange deps - why do we need python-docx ?) - I'm failing with the cvt test, with test_shrinking_volume_when_io_in_progress with the error: AssertionError: IO failed on some of the clients I had hoped it could give me a bit more hint: - which clients? (I happen to have one, so that's easy) - What IO workload? - What error? - I hope there's a mode that does NOT perform cleanup/teardown, so it's easier to look at the issue at hand. - From glustomain.log, I can see: 2019-03-31 12:56:00,627 INFO (validate_io_procs) Validating IO on 192.168.250.10:/mnt/testvol_distributed-replicated_cifs 2019-03-31 12:56:00,627 INFO (_log_results) RETCODE ( root at 192.168.250.10): 1 2019-03-31 12:56:00,628 INFO (_log_results) STDOUT ( root at 192.168.250.10)... 
Starting File/Dir Ops: 12:55:27:PM:Mar_31_2019 Unable to create dir '/mnt/testvol_distributed-replicated_cifs/user6' : Invalid argument Unable to create dir '/mnt/testvol_distributed-replicated_cifs/user6/dir0' : Invalid argument Unable to create dir '/mnt/testvol_distributed-replicated_cifs/user6/dir0/dir0' : Invalid argument Unable to create dir '/mnt/testvol_distributed-replicated_cifs/user6/dir0/dir1' : Invalid argument Unable to create dir '/mnt/testvol_distributed-replicated_cifs/user6/dir1' : Invalid argument Unable to create dir '/mnt/testvol_distributed-replicated_cifs/user6/dir1/dir0' : Invalid argument I'm right now assuming something's wrong on my setup. Unclear what, yet. > I was using the glusto-tests container to run tests locally and for BVT in > the lab. > I was running against lab VMs, so looking forward to giving the vagrant > piece a go. > > By upstream service are we talking about the Jenkins in the CentOS > environment, etc? > Yes. Y. @Vijay Bhaskar Reddy Avuthu @Akarsha Rai > any insight? > > Cheers, > Jonathan > > > Surely for most of the positive paths, we can (and perhaps should) use >> the Gluster Ansible modules. >> > Y. >> > >> > [1] https://github.com/mykaul/vg >> > * with an intern's help. _______________________________________________ >> automated-testing mailing list >> automated-testing at gluster.org >> https://lists.gluster.org/mailman/listinfo/automated-testing >> > _______________________________________________ > automated-testing mailing list > automated-testing at gluster.org > https://lists.gluster.org/mailman/listinfo/automated-testing > -------------- next part -------------- An HTML attachment was scrubbed... URL:
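On the "which clients / what IO workload / what error" complaint above: an IO validation helper can carry that context into the assertion message itself. A sketch under assumed names -- the real glusto-tests validate_io_procs takes remote process handles rather than these hypothetical result records:

```python
# Each record is a hypothetical summary of one client's IO run. The point
# is the assertion message: name the client, the workload, the return code,
# and the stderr tail, rather than a bare "IO failed on some of the clients".
def validate_io_results(results):
    failed = [r for r in results if r["retcode"] != 0]
    details = "; ".join(
        "%s ran %r -> rc=%d, stderr=%r" % (
            r["client"], r["cmd"], r["retcode"], r["stderr"].strip())
        for r in failed)
    assert not failed, "IO failed on %d client(s): %s" % (len(failed), details)

results = [
    {"client": "192.168.250.10", "cmd": "file_dir_ops create_deep_dirs",
     "retcode": 1,
     "stderr": "Unable to create dir '/mnt/testvol/user6' : Invalid argument"},
    {"client": "192.168.250.11", "cmd": "file_dir_ops create_deep_dirs",
     "retcode": 0, "stderr": ""},
]

try:
    validate_io_results(results)
    message = ""
except AssertionError as exc:
    message = str(exc)
print(message)
```

With a message like this, the failing client and its error land directly in the pytest report instead of having to be fished out of glustomain.log.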