[Gluster-devel] netbsd regression update : cdc.t

Emmanuel Dreyfus manu at netbsd.org
Mon May 4 08:33:52 UTC 2015


On Mon, May 04, 2015 at 09:20:45AM +0530, Atin Mukherjee wrote:
> I see the following log from the brick process:
> 
> [2015-05-04 03:43:50.309769] E [socket.c:823:__socket_server_bind]
> 4-tcp.patchy-server: binding to  failed: Address already in use

This happens before the failing test 52 (volume stop), on test 51, which is
the volume reset network.compression operation.

At that time the volume is already started, with the brick process running.
volume reset network.compression causes the brick process to be started
again. But since the previous brick process was not terminated, it still
holds the port, and the new process fails to start.

As a result we have a volume started with its only brick not running.
It seems volume stop waits for the missing brick to come online, and
that is why we fail.

The patch below is enough to work around the problem: stop the
volume first before doing volume reset network.compression.

Questions: 
1) is it expected that volume reset network.compression restarts 
   the bricks?
2) shall we consider it a bug that volume stop waits for bricks that 
   are down? I think we should.
3) how does it pass on Linux?

diff --git a/tests/basic/cdc.t b/tests/basic/cdc.t
index 6a80b92..8653a77 100755
--- a/tests/basic/cdc.t
+++ b/tests/basic/cdc.t
@@ -132,15 +132,15 @@ TEST ! test -e /tmp/cdcdump.gz
 TEST rm -f /tmp/cdc* $M0/cdc*
 EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $M0
 
+## Stop the volume
+TEST $CLI volume stop $V0;
+EXPECT 'Stopped' volinfo_field $V0 'Status';
+
 ## Reset the network.compression options
 TEST $CLI volume reset $V0 network.compression.debug
 TEST $CLI volume reset $V0 network.compression.min-size
 TEST $CLI volume reset $V0 network.compression
 
-## Stop the volume
-TEST $CLI volume stop $V0;
-EXPECT 'Stopped' volinfo_field $V0 'Status';
-
 ## Delete the volume
 TEST $CLI volume delete $V0;
 ! $CLI volume info $V0;

-- 
Emmanuel Dreyfus
manu at netbsd.org
