[Gluster-users] Getting glusterfs to expand volume size to brick size

Artem Russakovskii archon810 at gmail.com
Tue Apr 17 04:27:03 UTC 2018


pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3:    option shared-brick-count 3

dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3:    option shared-brick-count 3

dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3:    option shared-brick-count 3
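
So all three vol files report shared-brick-count 3, which lines up with the
volume showing exactly 1/3 of a brick's capacity. If this really is [1], I'm
guessing the workaround would be to force the count back to 1 in the
generated vol files and restart glusterd; a rough, untested sketch:

# back up the volume's config first
cp -a /var/lib/glusterd/vols/dev_apkmirror_data /root/dev_apkmirror_data.bak
# reset shared-brick-count to 1 in every generated vol file
sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/' \
    /var/lib/glusterd/vols/dev_apkmirror_data/*.vol
# restart glusterd so the corrected graph is served to clients
systemctl restart glusterd

Happy to try that if you can confirm it's safe.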


Sincerely,
Artem

--
Founder, Android Police <http://www.androidpolice.com>, APK Mirror
<http://www.apkmirror.com/>, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii
<https://plus.google.com/+ArtemRussakovskii> | @ArtemR
<http://twitter.com/ArtemR>

On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran <nbalacha at redhat.com>
wrote:

> Hi Artem,
>
> Was the volume size correct before the bricks were expanded?
>
> This sounds like [1], but that should have been fixed in 4.0.0. Can you let
> us know the values of shared-brick-count in the files under
> /var/lib/glusterd/vols/dev_apkmirror_data/?
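>
> Something like this should show them all at once:
>
> grep 'shared-brick-count' /var/lib/glusterd/vols/dev_apkmirror_data/*.vol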
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541880
>
> On 17 April 2018 at 05:17, Artem Russakovskii <archon810 at gmail.com> wrote:
>
>> Hi Nithya,
>>
>> I'm on Gluster 4.0.1.
>>
>> I don't think the bricks were smaller before; if they were, they started at
>> 20GB (Linode's minimum), after which I extended them to 25GB, resized with
>> resize2fs as instructed, and rebooted many times since. Either way, gluster
>> refuses to see the full disk size.
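>>
>> For reference, the resize sequence I followed was roughly this (per
>> Linode's docs, from memory):
>>
>> # grow the ext4 filesystem online to fill the enlarged block device
>> resize2fs /dev/sdd
>> # confirm the mount reflects the new size
>> df -h /mnt/pylon_block1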
>>
>> Here's the status detail output:
>>
>> gluster volume status dev_apkmirror_data detail
>> Status of volume: dev_apkmirror_data
>> ------------------------------------------------------------------------------
>> Brick                : Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
>> TCP Port             : 49152
>> RDMA Port            : 0
>> Online               : Y
>> Pid                  : 1263
>> File System          : ext4
>> Device               : /dev/sdd
>> Mount Options        : rw,relatime,data=ordered
>> Inode Size           : 256
>> Disk Space Free      : 23.0GB
>> Total Disk Space     : 24.5GB
>> Inode Count          : 1638400
>> Free Inodes          : 1625429
>> ------------------------------------------------------------------------------
>> Brick                : Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
>> TCP Port             : 49153
>> RDMA Port            : 0
>> Online               : Y
>> Pid                  : 1288
>> File System          : ext4
>> Device               : /dev/sdc
>> Mount Options        : rw,relatime,data=ordered
>> Inode Size           : 256
>> Disk Space Free      : 24.0GB
>> Total Disk Space     : 25.5GB
>> Inode Count          : 1703936
>> Free Inodes          : 1690965
>> ------------------------------------------------------------------------------
>> Brick                : Brick pylon:/mnt/pylon_block3/dev_apkmirror_data
>> TCP Port             : 49154
>> RDMA Port            : 0
>> Online               : Y
>> Pid                  : 1313
>> File System          : ext4
>> Device               : /dev/sde
>> Mount Options        : rw,relatime,data=ordered
>> Inode Size           : 256
>> Disk Space Free      : 23.0GB
>> Total Disk Space     : 24.5GB
>> Inode Count          : 1638400
>> Free Inodes          : 1625433
>>
>>
>>
>> What's interesting here is that the reported gluster volume size is exactly
>> 1/3 of a single brick's size (8357M * 3 = 25071M), even though each block
>> device is separate and each brick has its own ~25GB of storage.
>>
>> The fstab is as follows:
>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block1 /mnt/pylon_block1 ext4 defaults 0 2
>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2 ext4 defaults 0 2
>> /dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4 defaults 0 2
>>
>> localhost:/dev_apkmirror_data    /mnt/dev_apkmirror_data1   glusterfs defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>> localhost:/dev_apkmirror_data    /mnt/dev_apkmirror_data2   glusterfs defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>> localhost:/dev_apkmirror_data    /mnt/dev_apkmirror_data3   glusterfs defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
>> localhost:/dev_apkmirror_data    /mnt/dev_apkmirror_data_ganesha   nfs4 defaults,_netdev,bg,intr,soft,timeo=5,retrans=5,actimeo=10,retry=5 0 0
>>
>> The last entry is for an NFS-Ganesha test, in case it matters (which, btw,
>> fails miserably with all kinds of broken-pipe stability issues).
>>
>> Note: this is a test server, so all 3 bricks are attached and mounted on
>> the same server.
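>>
>> Since all three bricks live on the same box, I did wonder whether gluster
>> might somehow be conflating the three mounts. If the filesystem IDs of the
>> brick mounts would help rule that out, I can pull them:
>>
>> # print each brick mount's filesystem ID in hex, plus the mount path
>> stat -f -c '%i %n' /mnt/pylon_block1 /mnt/pylon_block2 /mnt/pylon_block3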
>>
>>
>> Sincerely,
>> Artem
>>
>>
>> On Sun, Apr 15, 2018 at 10:56 PM, Nithya Balachandran <
>> nbalacha at redhat.com> wrote:
>>
>>> What version of Gluster are you running? Were the bricks smaller earlier?
>>>
>>> Regards,
>>> Nithya
>>>
>>> On 15 April 2018 at 00:09, Artem Russakovskii <archon810 at gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I have a 3-brick replicate volume, but for some reason I can't get it
>>>> to expand to the size of the bricks. The bricks are 25GB, but even after
>>>> multiple gluster restarts and remounts, the volume is only about 8GB.
>>>>
>>>> I believed I could always extend the bricks (we're using Linode block
>>>> storage, which allows extending block devices after they're created) and
>>>> that gluster would see the newly available space and expand to use it.
>>>>
>>>> Multiple Google searches later, I'm still nowhere. Any ideas?
>>>>
>>>> df | ack "block|data"
>>>> Filesystem                     1M-blocks   Used  Available  Use%  Mounted on
>>>> /dev/sdd                          25071M  1491M     22284M    7%  /mnt/pylon_block1
>>>> /dev/sdc                          26079M  1491M     23241M    7%  /mnt/pylon_block2
>>>> /dev/sde                          25071M  1491M     22315M    7%  /mnt/pylon_block3
>>>> localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data1
>>>> localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data2
>>>> localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data3
>>>>
>>>>
>>>>
>>>> gluster volume info
>>>>
>>>> Volume Name: dev_apkmirror_data
>>>> Type: Replicate
>>>> Volume ID: cd5621ee-7fab-401b-b720-08863717ed56
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 1 x 3 = 3
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: pylon:/mnt/pylon_block1/dev_apkmirror_data
>>>> Brick2: pylon:/mnt/pylon_block2/dev_apkmirror_data
>>>> Brick3: pylon:/mnt/pylon_block3/dev_apkmirror_data
>>>> Options Reconfigured:
>>>> disperse.eager-lock: off
>>>> cluster.lookup-unhashed: auto
>>>> cluster.read-hash-mode: 0
>>>> performance.strict-o-direct: on
>>>> cluster.shd-max-threads: 12
>>>> performance.nl-cache-timeout: 600
>>>> performance.nl-cache: on
>>>> cluster.quorum-count: 1
>>>> cluster.quorum-type: fixed
>>>> network.ping-timeout: 5
>>>> network.remote-dio: enable
>>>> performance.rda-cache-limit: 256MB
>>>> performance.parallel-readdir: on
>>>> network.inode-lru-limit: 500000
>>>> performance.md-cache-timeout: 600
>>>> performance.cache-invalidation: on
>>>> performance.stat-prefetch: on
>>>> features.cache-invalidation-timeout: 600
>>>> features.cache-invalidation: on
>>>> performance.io-thread-count: 32
>>>> server.event-threads: 4
>>>> client.event-threads: 4
>>>> performance.read-ahead: off
>>>> cluster.lookup-optimize: on
>>>> performance.client-io-threads: on
>>>> performance.cache-size: 1GB
>>>> transport.address-family: inet
>>>> performance.readdir-ahead: on
>>>> nfs.disable: on
>>>> cluster.readdir-optimize: on
>>>>
>>>>
>>>> Thank you.
>>>>
>>>> Sincerely,
>>>> Artem
>>>>
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>>
>>
>