[Gluster-users] Problem adding brick (replica)
Saša Friedrich
sasa.friedrich at bitlab.si
Wed Dec 18 20:32:43 UTC 2013
Here is the line that shows up in that log file (which I wasn't aware
of - thanks, Anirban):
[2013-12-18 20:31:21.005913] : volume add-brick iso_domain replica 2
gluster2.data:/glusterfs/iso_domain : FAILED :
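
The cmd_history log only records that the command FAILED; the actual error is
usually written to the glusterd log on the node where the CLI runs. As a
minimal sketch (volume, brick and log paths taken from this thread), one could
watch that log while re-issuing the command:

    # follow glusterd's log in one terminal ...
    tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

    # ... and re-run the failing command in another
    gluster volume add-brick iso_domain replica 2 gluster2.data:/glusterfs/iso_domain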
On 18. 12. 2013 at 21:27, Anirban Ghoshal wrote:
> Ok, I am not associated with, or part of, the glusterFS development
> team in any way; in fact, I only started using glusterfs in the past
> 3-4 months or so. But I have often observed that useful info can be
> found at <log file dir>/.cmd_history.log, which in your case is
>
> /var/log/glusterfs/.cmd_history.log
>
>
>
> On Wednesday, 18 December 2013 8:08 PM, Saša Friedrich
> <sasa.friedrich at bitlab.si> wrote:
> Hi!
>
> I have some trouble adding a brick to an existing gluster volume.
>
> When I try (in the CLI):
>
> gluster> volume add-brick data_domain replica 3
> gluster2.data:/glusterfs/data_domain
>
> I get:
>
> volume add-brick: failed:
>
> I probed the peer successfully; peer status returns:
>
> Hostname: gluster3.data
> Uuid: e694f552-636a-4cf3-a04f-997ec87a880c
> State: Peer in Cluster (Connected)
>
> Hostname: gluster2.data
> Port: 24007
> Uuid: 36922d4c-55f2-4cc6-85b9-a9541e5619a2
> State: Peer in Cluster (Connected)
>
> Existing volume info:
>
> Volume Name: data_domain
> Type: Replicate
> Volume ID: ae096e7d-cf0c-46ed-863a-9ecc3e8ce288
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gluster1.data:/glusterfs/data_domain
> Brick2: gluster3.data:/glusterfs/data_domain
> Options Reconfigured:
> storage.owner-gid: 36
> storage.owner-uid: 36
> server.allow-insecure: on
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
>
>
> The only thing I found in the logs is:
>
> (/var/log/glusterfs/cli.log)
> [2013-12-18 12:09:17.281310] W [cli-rl.c:106:cli_rl_process_line]
> 0-glusterfs: failed to process line
> [2013-12-18 12:10:07.650267] I
> [cli-rpc-ops.c:332:gf_cli_list_friends_cbk] 0-cli: Received resp
> to list: 0
>
> (/var/log/glusterfs/etc-glusterfs-glusterd.vol.log)
> [2013-12-18 12:12:38.887911] I
> [glusterd-brick-ops.c:370:__glusterd_handle_add_brick]
> 0-management: Received add brick req
> [2013-12-18 12:12:38.888064] I
> [glusterd-brick-ops.c:417:__glusterd_handle_add_brick]
> 0-management: replica-count is 3
> [2013-12-18 12:12:38.888124] I
> [glusterd-brick-ops.c:256:gd_addbr_validate_replica_count]
> 0-management: Changing the replica count of volume data_domain
> from 2 to 3
>
>
> I'm running some VMs on this volume, so I'd really like to avoid
> restarting the glusterd service.
> The OS is FC19, kernel 3.11.10-200.fc19.x86_64, glusterfs.x86_64 3.4.1-1.fc19.
>
>
> Thanks for the help!
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
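
As a follow-up sketch on the add-brick failure quoted above: before retrying,
one might check on gluster2.data that the new brick directory exists and is
not already marked as part of a volume (the getfattr check is a common gluster
diagnostic, not something reported in this thread, so treat it as an assumption
about the cause). A successful replica increase also does not require
restarting glusterd; the new brick is populated by self-heal:

    # on gluster2.data (brick path from the original question)
    ls -ld /glusterfs/data_domain
    getfattr -d -m . -e hex /glusterfs/data_domain   # look for an existing trusted.glusterfs.volume-id

    # once add-brick succeeds, trigger a full self-heal to copy data onto the new brick
    gluster volume heal data_domain full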