[Gluster-users] forcing a brick to unload storage to replace disks?
Harry Mangalam
harry.mangalam@uci.edu
Thu Dec 8 19:54:51 UTC 2011
Hi All,
More time for gluster: after much travel through the twisty dark tunnels
of OFED, IB, card firmware upgrades, OS compatibility, etc., I now
have a distributed rdma volume over 5 bricks (2 on one server), and it
seems to be working well. I would now like to force-unload one brick
to emulate a disk upgrade process.
Here's my vol info:
---------------------------
Thu Dec 08 11:44:05 [0.08 0.05 0.01] root@pbs3:~
522 $ gluster volume info
Volume Name: glrdma
Type: Distribute
Status: Started
Number of Bricks: 5
Transport-type: rdma
Bricks:
Brick1: pbs1:/data2
Brick2: pbs2:/data2
Brick3: pbs3:/data2
Brick4: pbs3:/data
Brick5: pbs4:/data
---------------------------
From the Admin doc, I can do a 'replace-brick' operation, but that
seems to require an unused target brick: when I point it at a brick
that is already part of the volume, gluster complains:
---------------------------
Thu Dec 08 11:52:12 [0.00 0.01 0.00] root@pbs3:~
524 $ gluster volume replace-brick glrdma pbs2:/data2 pbs4:/data start
Brick: pbs4:/data already in use
---------------------------
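(Presumably replace-brick wants a genuinely unused target path. A
sketch of the documented start/status/commit sequence, using a
hypothetical fresh directory pbs4:/data3 that is not yet part of the
volume:)
---------------------------
# migrate pbs2:/data2 onto a new, unused brick (pbs4:/data3 is hypothetical)
gluster volume replace-brick glrdma pbs2:/data2 pbs4:/data3 start
# poll the data migration
gluster volume replace-brick glrdma pbs2:/data2 pbs4:/data3 status
# once migration completes, make the swap permanent
gluster volume replace-brick glrdma pbs2:/data2 pbs4:/data3 commit
---------------------------
That replaces a brick with a spare disk, though, which is not quite
what I'm after: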
Is there a process whereby I can clear a brick by forcing the files to
migrate to the other bricks?
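The closest thing I can find in the docs is the remove-brick
start/status/commit sequence, which newer releases (3.3+, I believe)
describe as migrating a brick's data onto the remaining bricks before
dropping it. An untested sketch:
---------------------------
# ask gluster to drain the brick, migrating its files to the other bricks
gluster volume remove-brick glrdma pbs2:/data2 start
# watch the migration progress
gluster volume remove-brick glrdma pbs2:/data2 status
# once status reports completion, remove the (now empty) brick from the volume
gluster volume remove-brick glrdma pbs2:/data2 commit
---------------------------
Is that the intended process, or is there another way?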
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
--
This signature has been OCCUPIED!