[Gluster-devel] Spurious failure in ./tests/bugs/bug-948686.t [14, 15, 16]
Krishnan Parthasarathi
kparthas at redhat.com
Wed May 28 11:43:00 UTC 2014
I am looking into this issue. I will update this email thread
once I have the root cause.
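If anyone else wants to chase it in the meantime, the test can be run in
isolation from a glusterfs source tree. A minimal sketch, assuming the
standard regression harness and Perl's prove(1) are available (run as
root, since the tests create, mount and tear down volumes):

    # from the root of the glusterfs source tree
    prove -vf ./tests/bugs/bug-948686.t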
thanks,
Krish
----- Original Message -----
> hi kp,
> Could you look into this?
>
> Patch ==> http://review.gluster.com/7889/1
> Author ==> Avra Sengupta asengupt at redhat.com
> Build triggered by ==> amarts
> Build-url ==>
> http://build.gluster.org/job/regression/4586/consoleFull
> Download-log-at ==>
> http://build.gluster.org:443/logs/regression/glusterfs-logs-20140527:14:51:09.tgz
> Test written by ==> Author: Krishnan Parthasarathi
> <kparthas at redhat.com>
>
> ./tests/bugs/bug-948686.t [14, 15, 16]
> #!/bin/bash
>
> . $(dirname $0)/../include.rc
> . $(dirname $0)/../volume.rc
> . $(dirname $0)/../cluster.rc
>
> function check_peers {
>     $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
> }
> cleanup;
> #setup cluster and test volume
> 1 TEST launch_cluster 3; # start 3-node virtual cluster
> 2 TEST $CLI_1 peer probe $H2; # peer probe server 2 from server 1 cli
> 3 TEST $CLI_1 peer probe $H3; # peer probe server 3 from server 1 cli
>
> 4 EXPECT_WITHIN $PROBE_TIMEOUT 2 check_peers;
>
> 5 TEST $CLI_1 volume create $V0 replica 2 $H1:$B1/$V0 $H1:$B1/${V0}_1 $H2:$B2/$V0 $H3:$B3/$V0
> 6 TEST $CLI_1 volume start $V0
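> # FUSE-mount the volume via node 1 at $M0; the files below are created through this mount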
> 7 TEST glusterfs --volfile-server=$H1 --volfile-id=$V0 $M0
>
> #kill a node
> 8 TEST kill_node 3
>
> #modify volume config to see change in volume-sync
> 9 TEST $CLI_1 volume set $V0 write-behind off
> #add some files to the volume to see effect of volume-heal cmd
> 10 TEST touch $M0/{1..100};
> 11 TEST $CLI_1 volume stop $V0;
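> # bring glusterd on node 3 back up; it must pick up the option change and the volume stop that happened while it was down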
> 12 TEST $glusterd_3;
> 13 EXPECT_WITHIN $PROBE_TIMEOUT 2 check_peers;
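> # steps 14-16 (marked ***) are the ones that failed spuriously in this run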
> ***14 TEST $CLI_3 volume start $V0;
> ***15 TEST $CLI_2 volume stop $V0;
> ***16 TEST $CLI_2 volume delete $V0;
>
> cleanup;
>
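> # single-node scenario: create and start a volume, kill the daemons uncleanly, then check that a restarted glusterd restores volume status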
> 17 TEST glusterd;
> 18 TEST $CLI volume create $V0 $H0:$B0/$V0
> 19 TEST $CLI volume start $V0
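> # simulate an unclean shutdown of the management and brick daemons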
> pkill glusterd;
> pkill glusterfsd;
> 20 TEST glusterd
> 21 TEST $CLI volume status $V0
>
> cleanup;
>
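One thing worth keeping in mind while debugging: EXPECT_WITHIN re-runs its
check until the expected value shows up or the timeout expires, while plain
TEST steps like 14-16 run exactly once, so a step that races with the
rejoining node can fail spuriously. A minimal standalone sketch of the
polling idea (a hypothetical re-implementation for illustration, not the
actual include.rc code):

    expect_within() {
        # expect_within <timeout-secs> <expected-output> <command> [args...]
        local timeout=$1 expected=$2; shift 2
        local deadline=$(( $(date +%s) + timeout ))
        while [ "$(date +%s)" -lt "$deadline" ]; do
            [ "$("$@")" = "$expected" ] && return 0   # matched in time
            sleep 1                                   # poll again shortly
        done
        return 1   # timed out without seeing the expected value
    }

    # usage: expect_within "$PROBE_TIMEOUT" 2 check_peers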