[Gluster-users] Getting glusterfs to expand volume size to brick size

Nithya Balachandran nbalacha at redhat.com
Mon Apr 16 05:56:08 UTC 2018


What version of Gluster are you running? Were the bricks smaller earlier?
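
For reference, the version and what each brick is reporting can be checked with:

gluster --version
gluster volume status dev_apkmirror_data detail

(The "detail" output includes the total and free disk space each brick advertises,
which makes it easier to compare against the df numbers on the client mounts.)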

Regards,
Nithya

On 15 April 2018 at 00:09, Artem Russakovskii <archon810 at gmail.com> wrote:

> Hi,
>
> I have a 3-brick replicate volume, but for some reason I can't get it to
> expand to the size of the bricks. The bricks are 25GB, but even after
> multiple gluster restarts and remounts, the volume is only about 8GB.
>
> I believed I could always extend the bricks (we're using Linode block
> storage, which allows extending block devices after they're created), and
> gluster would see the newly available space and extend to use it.
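>
> For context: after extending a block device, the filesystem on it also has to
> be grown before the extra space shows up anywhere. A minimal sketch, assuming
> XFS bricks (resize2fs on the device would be the ext4 equivalent):
>
> xfs_growfs /mnt/pylon_block1   # repeat for each brick mount point
>
> Judging by the df output below, the brick filesystems already report the full
> ~25GB, so that step looks fine; only the gluster volume lags behind.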
>
> Multiple Google searches later, I'm still nowhere. Any ideas?
>
> df | ack "block|data"
> Filesystem                      1M-blocks   Used  Available  Use%  Mounted on
> /dev/sdd                           25071M  1491M     22284M    7%  /mnt/pylon_block1
> /dev/sdc                           26079M  1491M     23241M    7%  /mnt/pylon_block2
> /dev/sde                           25071M  1491M     22315M    7%  /mnt/pylon_block3
> localhost:/dev_apkmirror_data       8357M   581M      7428M    8%  /mnt/dev_apkmirror_data1
> localhost:/dev_apkmirror_data       8357M   581M      7428M    8%  /mnt/dev_apkmirror_data2
> localhost:/dev_apkmirror_data       8357M   581M      7428M    8%  /mnt/dev_apkmirror_data3
>
>
>
> gluster volume info
>
> Volume Name: dev_apkmirror_data
> Type: Replicate
> Volume ID: cd5621ee-7fab-401b-b720-08863717ed56
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: pylon:/mnt/pylon_block1/dev_apkmirror_data
> Brick2: pylon:/mnt/pylon_block2/dev_apkmirror_data
> Brick3: pylon:/mnt/pylon_block3/dev_apkmirror_data
> Options Reconfigured:
> disperse.eager-lock: off
> cluster.lookup-unhashed: auto
> cluster.read-hash-mode: 0
> performance.strict-o-direct: on
> cluster.shd-max-threads: 12
> performance.nl-cache-timeout: 600
> performance.nl-cache: on
> cluster.quorum-count: 1
> cluster.quorum-type: fixed
> network.ping-timeout: 5
> network.remote-dio: enable
> performance.rda-cache-limit: 256MB
> performance.parallel-readdir: on
> network.inode-lru-limit: 500000
> performance.md-cache-timeout: 600
> performance.cache-invalidation: on
> performance.stat-prefetch: on
> features.cache-invalidation-timeout: 600
> features.cache-invalidation: on
> performance.io-thread-count: 32
> server.event-threads: 4
> client.event-threads: 4
> performance.read-ahead: off
> cluster.lookup-optimize: on
> performance.client-io-threads: on
> performance.cache-size: 1GB
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> cluster.readdir-optimize: on
>
>
> Thank you.
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
> <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii
> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
> <http://twitter.com/ArtemR>
>

