[Gluster-users] Hey All, Any way to remove the last brick or change it? Added erroneously.

TomK tk at mdevsys.com
Tue May 3 03:46:13 UTC 2016


Thanks very much!

Did that, then recovered the volume ID as well using the snippet below (found via Google):

vol=mdsglusterv01
brick=/mnt/p01-d01/glusterv01
# Re-apply the volume ID from glusterd's info file as the
# trusted.glusterfs.volume-id xattr on the brick directory:
setfattr -n trusted.glusterfs.volume-id \
   -v 0x$(grep volume-id /var/lib/glusterd/vols/$vol/info \
   | cut -d= -f2 | sed 's/-//g') $brick
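
To double-check, the xattr can be read back and compared against the value in the info file (same $brick as above):

# Print the trusted.glusterfs.volume-id xattr in hex; it should match
# the volume ID from /var/lib/glusterd/vols/$vol/info
getfattr -n trusted.glusterfs.volume-id -e hex $brick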

Another question I had: if I add a 2TB LUN brick to Gluster, with the 
LUN existing on the same physical node where I also have KVM running, will 
the KVM guests write directly to the LUN at FC speeds, or will the writes 
go through glusterfs and the transport I selected below?  Is Gluster 
intelligent enough to recognize that a particular brick lives on the 
same physical node the writes are coming from and allow direct writes to 
the brick via whatever medium it's mounted on (NFS, FC, FCoE, iSCSI, etc.)?

Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors

[root at mdskvm-p01 ~]# gluster volume info

Volume Name: mdsglusterv01
Type: Distribute
Volume ID: 84df37b3-3c68-4cc0-9383-3ff539a4d785
Status: Started
Number of Bricks: 1
Transport-type: tcp,rdma
Bricks:
Brick1: mdskvm-p01:/mnt/p01-d01/glusterv01
Options Reconfigured:
config.transport: tcp,rdma
performance.readdir-ahead: on
[root at mdskvm-p01 ~]#
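
For reference, this is roughly how I'd expose the volume to the KVM guests 
on that same node via the native FUSE client (the /mnt/vmstore mount point 
is just an example), which is why I'm wondering whether writes still go 
through the glusterfs stack even when the brick is local:

# Mount the volume with the Gluster native (FUSE) client; guest images
# would live under this mount rather than on the brick path directly
mount -t glusterfs mdskvm-p01:/mdsglusterv01 /mnt/vmstore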

Cheers,
Tom K.
------------------------------------------------------------------------------------- 

Living on earth is expensive, but it includes a free trip around the sun.

On 5/2/2016 10:17 PM, Chen Chen wrote:
> Hi Tom,
>
> You may try:
> gluster volume set volname config.transport tcp
>
> ref:
> http://www.gluster.org/community/documentation/index.php/RDMA_Transport#Changing_Transport_of_Volume 
>
>
> Best regards,
> Chen
>
> On 5/3/2016 9:55 AM, TomK wrote:
>> Hey All,
>>
>> New here and first time posting.  I've made a typo in the configuration
>> and entered:
>>
>> gluster volume create mdsglusterv01 transport rdma
>> mdskvm-p01:/mnt/p01-d01/glusterv01/
>>
>> but it couldn't start since the rdma transport isn't available:
>>
>> [root at mdskvm-p01 glusterfs]# ls -altri
>> /usr/lib64/glusterfs/3.7.11/rpc-transport/*
>> 135169384 -rwxr-xr-x. 1 root root 99648 Apr 18 08:21
>> /usr/lib64/glusterfs/3.7.11/rpc-transport/socket.so
>> [root at mdskvm-p01 glusterfs]#
>>
>> So how do I reconfigure gluster to remove the rdma option and redefine
>> this brick?
>>
>> [root at mdskvm-p01 glusterfs]# gluster volume info
>>
>> Volume Name: mdsglusterv01
>> Type: Distribute
>> Volume ID: 84df37b3-3c68-4cc0-9383-3ff539a4d785
>> Status: Stopped
>> Number of Bricks: 1
>> Transport-type: rdma
>> Bricks:
>> Brick1: mdskvm-p01:/mnt/p01-d01/glusterv01
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> [root at mdskvm-p01 glusterfs]#
>>
>> I figure I can add a dummy block device, but I wanted to find out if
>> there's any way to change the above or redefine it before I do that.
>>
>> Cheers,
>> Tom K.
>> ------------------------------------------------------------------------------------- 
>>
>>
>> Living on earth is expensive, but it includes a free trip around the sun.
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
