[Gluster-users] add-brick and remove-brick on a nearly full volume

張為超 j1899j1899 at gmail.com
Tue Mar 4 09:31:52 UTC 2014


Hi all,

I have three peers (peer-A, peer-B and peer-C). I tried to use add-brick and
remove-brick to replace one of them.
(version: glusterfs 3.4)

What I did:

   1. Created a distributed volume with two 10-GB bricks (peer-A:/brick and
   peer-B:/brick; they are actually 9.7 GB each after ext4 formatting).
   2. Mounted it and wrote 16 1-GB files into it (command: seq 16 | xargs -i
   dd if=/dev/zero of=/mnt/file-{} bs=1G count=1).
   3. Added peer-C:/brick (also 10 GB) to the volume.
   4. Executed remove-brick peer-A:/brick start.
   5. Checked the remove-brick status and waited until all hosts showed "completed".
   6. Executed remove-brick peer-A:/brick commit (the rough commands are
   sketched just below this list).
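
For reference, the commands were roughly the following. The volume name
"testvol" and the mount point /mnt are assumptions for this sketch; the
syntax is the glusterfs 3.4 CLI as far as I know:

    # create and start a 2-brick distribute volume (assumed name: testvol)
    gluster volume create testvol peer-A:/brick peer-B:/brick
    gluster volume start testvol

    # mount it and write 16 x 1-GB files
    mount -t glusterfs peer-A:/testvol /mnt
    seq 16 | xargs -i dd if=/dev/zero of=/mnt/file-{} bs=1G count=1

    # add the new brick, then drain and remove the old one
    gluster volume add-brick testvol peer-C:/brick
    gluster volume remove-brick testvol peer-A:/brick start
    gluster volume remove-brick testvol peer-A:/brick status   # wait for "completed"
    gluster volume remove-brick testvol peer-A:/brick commit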

After step 6, I lost 2 files in the volume.


Here is what each brick contained after step 2 and after step 5:

After step 2:

peer-A:/brick:

-rw-r--r--    2 root     root     1073741824 Mar  4 17:05 file-1
-rw-r--r--    2 root     root     1073741824 Mar  4 17:07 file-12
-rw-r--r--    2 root     root     1073741824 Mar  4 17:07 file-14
-rw-r--r--    2 root     root     1073741824 Mar  4 17:07 file-15
-rw-r--r--    2 root     root     1073741824 Mar  4 17:08 file-16
-rw-r--r--    2 root     root     1073741824 Mar  4 17:05 file-3
-rw-r--r--    2 root     root     1073741824 Mar  4 17:06 file-6


peer-B:/brick:
-rw-r--r--    2 root     root     1073741824 Mar  4 17:06 file-10
-rw-r--r--    2 root     root     1073741824 Mar  4 17:07 file-11
-rw-r--r--    2 root     root     1073741824 Mar  4 17:07 file-13
---------T    2 root     root             0 Mar  4 17:07 file-15
---------T    2 root     root             0 Mar  4 17:07 file-16
-rw-r--r--    2 root     root     1073741824 Mar  4 17:05 file-2
-rw-r--r--    2 root     root     1073741824 Mar  4 17:05 file-4
-rw-r--r--    2 root     root     1073741824 Mar  4 17:05 file-5
-rw-r--r--    2 root     root     1073741824 Mar  4 17:06 file-7
-rw-r--r--    2 root     root     1073741824 Mar  4 17:06 file-8
-rw-r--r--    2 root     root     1073741824 Mar  4 17:06 file-9
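
(If I understand correctly, the "---------T" entries are DHT link files
that point to the brick holding the real data. They can be inspected with
something like the command below; the path is just an example:)

    # dump the extended attributes of a suspected DHT link file
    getfattr -d -m . -e hex /brick/file-15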

After step 5:

peer-A:/brick:
-rw-r--r--    2 root     root     1073741824 Mar  4 17:07 file-15
-rw-r--r--    2 root     root     1073741824 Mar  4 17:08 file-16

peer-B:/brick:
-rw-r--r--    2 root     root     1073741824 Mar  4 17:06 file-10
-rw-r--r--    2 root     root     1073741824 Mar  4 17:07 file-11
-rw-r--r--    2 root     root     1073741824 Mar  4 17:07 file-13
---------T    2 root     root     1073741824 Mar  4 17:17 file-15
---------T    2 root     root     1073741824 Mar  4 17:17 file-16
-rw-r--r--    2 root     root     1073741824 Mar  4 17:05 file-2
-rw-r--r--    2 root     root     1073741824 Mar  4 17:05 file-4
-rw-r--r--    2 root     root     1073741824 Mar  4 17:05 file-5
-rw-r--r--    2 root     root     1073741824 Mar  4 17:06 file-7
-rw-r--r--    2 root     root     1073741824 Mar  4 17:06 file-8
-rw-r--r--    2 root     root     1073741824 Mar  4 17:06 file-9

peer-C:/brick:
-rw-r--r--    2 root     root     1073741824 Mar  4 17:05 file-1
-rw-r--r--    2 root     root     1073741824 Mar  4 17:07 file-12
-rw-r--r--    2 root     root     1073741824 Mar  4 17:07 file-14
-rw-r--r--    2 root     root     1073741824 Mar  4 17:05 file-3
-rw-r--r--    2 root     root     1073741824 Mar  4 17:06 file-6


After step 6, file-15 and file-16 were missing from the volume.
Does anyone know why file-15 and file-16 were not moved to peer-C?
If it is because peer-B is full, why does the status show "completed"?

Node        Rebalanced-files       size     scanned    failures     skipped      status   run-time in secs
---------   ----------------   --------   ---------   ---------   ---------   ---------   ----------------
localhost                  5      5.0GB          21           0               completed             126.00
localhost                  5      5.0GB          21           0               completed             126.00
localhost                  5      5.0GB          21           0               completed             126.00
localhost                  5      5.0GB          21           0               completed             126.00
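
The status output above came from (roughly) this command; "testvol" is
again just the assumed volume name:

    gluster volume remove-brick testvol peer-A:/brick status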


--
Best regards,
Johnny
j1899j1899 at gmail.com