[Gluster-users] Problem migration 3.7.6 to 3.13.2
Daniele Frulla
daniele.frulla at gmail.com
Fri Feb 23 14:36:23 UTC 2018
Thanks for the reply. I found these values:
datastore_temp.serda1.glusterfs-p1-b1.vol: option shared-brick-count 2
datastore_temp.serda1.glusterfs-p2-b2.vol: option shared-brick-count 2
datastore_temp.serda2.glusterfs-p1-b1.vol: option shared-brick-count 0
datastore_temp.serda2.glusterfs-p2-b2.vol: option shared-brick-count 0
Do I need to change the values on the serda2 node?
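(For anyone finding this thread later: the workaround in [2], as I understand it, is to force shared-brick-count back to 1 when each brick sits on its own partition. A rough sketch below; the filter directory path and the use of 3.13.2 in it are assumptions, check your install.)

```shell
# Inspect the current values, as suggested above:
grep -r "option shared-brick-count" /var/lib/glusterd/vols/datastore_temp/

# Sketch of the fix: a filter script that glusterd runs on each regenerated
# volfile, rewriting any shared-brick-count value to 1.
# The filter directory below is an assumed path for this version.
FILTER_DIR=/usr/lib/glusterfs/3.13.2/filter
mkdir -p "$FILTER_DIR"
cat > "$FILTER_DIR/fix-shared-brick-count.sh" <<'EOF'
#!/bin/bash
# Normalize shared-brick-count to 1 in the volfile passed as $1
sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g' "$1"
EOF
chmod +x "$FILTER_DIR/fix-shared-brick-count.sh"

# Then trigger a volfile regeneration (e.g. by setting any volume option)
# so the filter is applied, and re-check df -h on the clients.
```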
2018-02-23 13:07 GMT+01:00 Nithya Balachandran <nbalacha at redhat.com>:
> Hi Daniele,
>
> Do you mean that the df -h output is incorrect for the volume post the
> upgrade?
>
> If yes and the bricks are on separate partitions, you might be running
> into [1]. Can you search for the string "option shared-brick-count" in
> the files in /var/lib/glusterd/vols/<volumename> and let us know the
> value? The workaround to get this working on the cluster is available in
> [2].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1517260#c19
>
>
> Regards,
> Nithya
>
> On 23 February 2018 at 15:12, Daniele Frulla <daniele.frulla at gmail.com>
> wrote:
>
>> I migrated to a new version of Gluster, but when I run
>>
>> df -h
>>
>> the reported space is less than the total.
>>
>> Configuration:
>> 2 peers
>>
>> Brick serda2:/glusterfs/p2/b2       49152  0    Y  1560
>> Brick serda1:/glusterfs/p2/b2       49152  0    Y  1462
>> Brick serda1:/glusterfs/p1/b1       49153  0    Y  1476
>> Brick serda2:/glusterfs/p1/b1       49153  0    Y  1566
>> Self-heal Daemon on localhost       N/A    N/A  Y  1469
>> Self-heal Daemon on serda1          N/A    N/A  Y  1286
>>
>> Thanks
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>