[Gluster-users] Gluster 3.5 problems with libgfapi/qemu

Ivano Talamo Ivano.Talamo at roma1.infn.it
Thu Jun 12 08:59:23 UTC 2014


Hi Jae,

Starting the VM works fine, but only in the old way, i.e. with the path
to the fuse-mounted volume.
When I try to start it with libgfapi, the virsh start command waits
forever (it is blocked on a futex).
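
For the record, the difference is just the disk definition in the domain
XML. A minimal sketch, assuming a fuse mount at /mnt/vol1 (the mount
path is an example; the volume and image names are the real ones):

Old way (file on the fuse mount):

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/vol1/vms/disks/cmsrm-ui01.raw2'/>
      <target dev='vda' bus='virtio'/>
    </disk>

libgfapi (network disk, the one that hangs here):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='gluster' name='vol1/vms/disks/cmsrm-ui01.raw2'>
        <host name='cmsrm-service02' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>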

Maybe I should leave libgfapi for the future.

Thanks,
Ivano

On 6/12/14 7:11 AM, Jae Park wrote:
> Ivano,
>
> Did you try to start the VM on the replicated volume? Does it work?
> I remember a VM on a replicated volume failing to start (from virsh) in
> 3.5.0 due to similar errors, but after installing 3.5.1beta2 it starts
> up successfully, even though qemu-img still shows the same error, which
> I can now ignore.
>
> Beta2 download:
> http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.1beta2/
>
> Jae
>
> On 6/11/14 10:55 AM, "Ivano Talamo" <Ivano.Talamo at roma1.infn.it> wrote:
>
>> Hello,
>> I recently updated 2 servers (Scientific Linux 6) with a replicated volume
>> from gluster 3.4 to 3.5.0-2.
>> The volume was previously used to host qemu/kvm VM images accessed via a
>> fuse-mounted mount-point.
>> Now I would like to use libgfapi, but I'm seeing this error:
>>
>> [root@cmsrm-service02 ~]# qemu-img info gluster://cmsrm-service02/vol1/vms/disks/cmsrm-ui01.raw2
>> [2014-06-11 17:47:22.084842] E [afr-common.c:3959:afr_notify]
>> 0-vol1-replicate-0: All subvolumes are down. Going offline until atleast
>> one of them comes back up.
>> image: gluster://cmsrm-service03/vol1/vms/disks/cmsrm-ui01.raw2
>> file format: raw
>> virtual size: 20G (21474836480 bytes)
>> disk size: 4.7G
>> [2014-06-11 17:47:22.318034] E [afr-common.c:3959:afr_notify]
>> 0-vol1-replicate-0: All subvolumes are down. Going offline until atleast
>> one of them comes back up.
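>>
>> (For reference, the URL form qemu accepts is
>> gluster[+transport]://host[:port]/volname/image; with no transport or
>> port given it defaults to tcp and glusterd's port 24007.)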
>>
>> The error message does not appear if I access the file via the
>> mount-point.
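>>
>> For comparison, the same check through the fuse mount would be
>> something like this (the /mnt/vol1 mount point is an example, not the
>> actual path in use here):
>>
>>   mount -t glusterfs cmsrm-service02:/vol1 /mnt/vol1
>>   qemu-img info /mnt/vol1/vms/disks/cmsrm-ui01.raw2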
>>
>> The volume seems fine:
>> [root@cmsrm-service02 ~]# gluster volume info
>>
>> Volume Name: vol1
>> Type: Replicate
>> Volume ID: 35de92de-d6b3-4784-9ccb-65518e014a49
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: cmsrm-service02:/brick/vol1
>> Brick2: cmsrm-service03:/brick/vol1
>> Options Reconfigured:
>> server.allow-insecure: on
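>>
>> (Side note: for libgfapi clients, server.allow-insecure on the volume
>> is usually paired with the matching glusterd option; this sketch of
>> /etc/glusterfs/glusterd.vol is an assumption, not a dump of our file:
>>
>>   volume management
>>       ...
>>       option rpc-auth-allow-insecure on
>>   end-volume
>>
>> and glusterd needs a restart afterwards.)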
>> [root@cmsrm-service02 ~]# gluster volume status
>> Status of volume: vol1
>> Gluster process                                Port   Online  Pid
>> ------------------------------------------------------------------
>> Brick cmsrm-service02:/brick/vol1              49152  Y       16904
>> Brick cmsrm-service03:/brick/vol1              49152  Y       12868
>> NFS Server on localhost                        2049   Y       4263
>> Self-heal Daemon on localhost                  N/A    Y       4283
>> NFS Server on 141.108.36.8                     2049   Y       13679
>> Self-heal Daemon on 141.108.36.8               N/A    Y       13691
>>
>> Task Status of Volume vol1
>> ------------------------------------------------------------------
>> There are no active volume tasks
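>>
>> Since both bricks show Online here while the client claims all
>> subvolumes are down, one quick sanity check from the hypervisor is
>> whether the brick and glusterd ports are reachable (ports taken from
>> the status output above):
>>
>>   nc -z cmsrm-service02 49152 && echo brick on service02 ok
>>   nc -z cmsrm-service03 49152 && echo brick on service03 ok
>>   nc -z cmsrm-service02 24007 && echo glusterd ok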
>>
>>
>>
>> Thank you,
>> Ivano
>>
>>
>

