[Gluster-devel] Two consistent regression failures in release-3.6 HEAD
Justin Clift
justin at gluster.org
Wed Feb 18 11:36:26 UTC 2015
On 18 Feb 2015, at 08:23, Avra Sengupta <asengupt at redhat.com> wrote:
> Hi,
>
> I had a look at the test case and the logs. A mount command is failing in the test case, where we try to mount a snapshot at /mnt/glusterfs/2.
>
> [2015-02-17 19:28:24.291801] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-glusterfs: Started running glusterfs version 3.6.3beta1 (args: glusterfs -s slave30.cloud.gluster.org --volfile-id=/snaps/patchy_single_gluster_volume_is_accessible_by_multiple_clients_offline_snapshot_is_a_long_name/patchy /mnt/glusterfs/2)
> [2015-02-17 19:28:24.292848] E [fuse-bridge.c:5334:init] 0-fuse: mountpoint /mnt/glusterfs/2 does not exist
> [2015-02-17 19:28:24.292871] E [xlator.c:425:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
>
> The mount fails because it is unable to find /mnt/glusterfs/2, whereas this directory is created as part of the basic include.rc. The only reason I can think of is that the same machine is being used for other runs, or some other activity independent of the test case is removing /mnt/glusterfs/2 while this test is running.
> Is there any way we can confirm this?
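Not sure off the top of my head, but one way might be to watch the
mount point directory on the slave while a run is going and log
anything that deletes it. Very rough sketch, assuming inotify-tools
is installed on the VM (the log path is just an example):

    # Watch /mnt/glusterfs for anything deleting or moving away the "2"
    # mount point, and keep the events somewhere we can check after the
    # regression run finishes.
    inotifywait -m -e delete -e moved_from /mnt/glusterfs \
        >> /var/log/mnt-glusterfs-watch.log 2>&1 &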
Interesting theory. The slave30 VM had run a few partial regression
tests before this one, but this was the first time it had run the
full regression test.
slave31 is a new VM, and this was the first time it had run a
regression test or Jenkins job of any sort.
Do you want to log into slave31 and take a look? The test failed on
both VMs, and they're both using our standard Jenkins login details.
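Longer term it might also be worth making the test a bit more
defensive about the mount point, so a missing directory gets
recreated (or at least reported clearly) rather than failing inside
the FUSE client. Rough sketch of what the mount step could look
like, reusing the host and volfile-id from the log above:

    # Recreate the mount point if something removed it mid-run
    # (include.rc normally creates it once at the start), then retry
    # the snapshot mount from the log above.
    mkdir -p /mnt/glusterfs/2
    glusterfs -s slave30.cloud.gluster.org \
        --volfile-id=/snaps/patchy_single_gluster_volume_is_accessible_by_multiple_clients_offline_snapshot_is_a_long_name/patchy \
        /mnt/glusterfs/2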
+ Justin
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift