[Gluster-users] Disk size and virtual size drive me crazy!
Gilberto Ferreira
gilberto.nunes32 at gmail.com
Fri Nov 29 19:58:04 UTC 2024
Is there any caveat to doing so?
Any risk?
On Fri, Nov 29, 2024 at 16:47, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
> No! I didn't! I wasn't aware of this option.
> I will try.
> Thanks
>
>
>
>
>
>
> On Fri, Nov 29, 2024 at 16:43, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>
>> Have you figured it out?
>>
>> Have you tried setting storage.reserve to 0?
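>>
>> Something like this should do it (a sketch; VMS is your volume name, and
>> storage.reserve is a percentage of each brick's size):
>>
>> gluster volume get VMS storage.reserve    # show the current value
>> gluster volume set VMS storage.reserve 0  # disable the reserved headroom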
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Thu, Nov 21, 2024 at 0:39, Gilberto Ferreira
>> <gilberto.nunes32 at gmail.com> wrote:
>>
>> 11.1
>> ---
>> Gilberto Nunes Ferreira
>> +55 (47) 99676-7530
>> Proxmox VE
>> VinChin Backup & Restore
>>
>> On Wed, Nov 20, 2024 at 19:28, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>>
>> What's your gluster version?
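>> (gluster --version on any of the nodes will print it)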
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Monday, November 11, 2024 at 20:57:50 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
>>
>>
>> Hi there.
>>
>> I can't understand why I am getting these different values:
>>
>> proxmox01:/vms/images# df
>> Filesystem Size Used Avail Use% Mounted on
>> udev 252G 0 252G 0% /dev
>> tmpfs 51G 9,4M 51G 1% /run
>> /dev/sda4 433G 20G 413G 5% /
>> tmpfs 252G 63M 252G 1% /dev/shm
>> tmpfs 5,0M 0 5,0M 0% /run/lock
>> efivarfs 496K 335K 157K 69% /sys/firmware/efi/efivars
>> /dev/sda2 1,8G 204M 1,5G 12% /boot
>> /dev/sda1 1,9G 12M 1,9G 1% /boot/efi
>> /dev/sdb 932G 728G 204G 79% /disco1TB-0
>> /dev/sdc 932G 718G 214G 78% /disco1TB-1
>> /dev/sde 932G 720G 212G 78% /disco1TB-2
>> /dev/sdd 1,9T 1,5T 387G 80% /disco2TB-0
>> tmpfs 51G 4,0K 51G 1% /run/user/0
>> gluster1:VMS 4,6T 3,6T 970G 80% /vms
>> /dev/fuse 128M 36K 128M 1% /etc/pve
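>>
>> The mount size itself adds up: each brick is mirrored across the two
>> servers (replica 2), so usable space is one server's disks,
>> 1,9T + 3 x 932G = roughly 4,6T. It's the qcow2 sizes below that I
>> can't explain: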
>> proxmox01:/vms/images# cd 103
>> proxmox01:/vms/images/103# ls
>> vm-103-disk-0.qcow2 vm-103-disk-1.qcow2
>> proxmox01:/vms/images/103# ls -lh
>> total 21T
>>
>> -rw-r----- 1 root root 101G nov 11 15:53 vm-103-disk-0.qcow2
>> -rw-r----- 1 root root 210G nov 11 15:45 vm-103-disk-1.qcow2
>> proxmox01:/vms/images/103# qemu-img info vm-103-disk-0.qcow2
>> image: vm-103-disk-0.qcow2
>> file format: qcow2
>>
>> virtual size: 100 GiB (107374182400 bytes)
>> disk size: 3.78 TiB
>> cluster_size: 65536
>> Format specific information:
>> compat: 1.1
>> compression type: zlib
>> lazy refcounts: false
>> refcount bits: 16
>> corrupt: false
>> extended l2: false
>> Child node '/file':
>> filename: vm-103-disk-0.qcow2
>> protocol type: file
>>
>> file length: 100 GiB (107390828544 bytes)
>> disk size: 3.78 TiB
>> proxmox01:/vms/images/103# qemu-img info vm-103-disk-1.qcow2
>> image: vm-103-disk-1.qcow2
>> file format: qcow2
>>
>> virtual size: 2 TiB (2199023255552 bytes)
>> disk size: 16.3 TiB
>> cluster_size: 65536
>> Format specific information:
>> compat: 1.1
>> compression type: zlib
>> lazy refcounts: false
>> refcount bits: 16
>> corrupt: false
>> extended l2: false
>> Child node '/file':
>> filename: vm-103-disk-1.qcow2
>> protocol type: file
>>
>> file length: 210 GiB (225117732864 bytes)
>> disk size: 16.3 TiB
>> proxmox01:/vms/images/103#
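>>
>> As far as I know, qemu-img takes "disk size" from the file's allocated
>> block count (st_blocks), so this is what I would compare on the mount
>> (a sketch, using the same paths as above):
>>
>> stat -c '%n: %s bytes, %b blocks of %B bytes' vm-103-disk-0.qcow2
>> du -h --apparent-size vm-103-disk-0.qcow2   # logical file length
>> du -h vm-103-disk-0.qcow2                   # allocated size, what qemu-img reports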
>>
>> Here is the vol info.
>>
>> proxmox01:/vms/images/103# gluster vol info
>>
>> Volume Name: VMS
>> Type: Distributed-Replicate
>> Volume ID: a98f7944-4308-499f-994e-9029f3be56c0
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 4 x 2 = 8
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/disco2TB-0/vms
>> Brick2: gluster2:/disco2TB-0/vms
>> Brick3: gluster1:/disco1TB-0/vms
>> Brick4: gluster2:/disco1TB-0/vms
>> Brick5: gluster1:/disco1TB-1/vms
>> Brick6: gluster2:/disco1TB-1/vms
>> Brick7: gluster1:/disco1TB-2/vms
>> Brick8: gluster2:/disco1TB-2/vms
>> Options Reconfigured:
>> cluster.lookup-optimize: off
>> server.keepalive-count: 5
>> server.keepalive-interval: 2
>> server.keepalive-time: 10
>> server.tcp-user-timeout: 20
>> server.event-threads: 4
>> client.event-threads: 4
>> cluster.choose-local: off
>> cluster.shd-wait-qlength: 10000
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.server-quorum-type: none
>> cluster.quorum-type: fixed
>> network.remote-dio: disable
>> performance.client-io-threads: on
>> performance.strict-o-direct: on
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> performance.flush-behind: off
>> performance.write-behind: off
>> cluster.data-self-heal-algorithm: full
>> cluster.favorite-child-policy: mtime
>> network.ping-timeout: 20
>> cluster.quorum-count: 1
>> cluster.quorum-reads: false
>> cluster.self-heal-daemon: enable
>> cluster.heal-timeout: 5
>> user.cifs: off
>> features.shard: on
>> cluster.granular-entry-heal: enable
>> storage.fips-mode-rchecksum: on
>> transport.address-family: inet
>> nfs.disable: on
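>>
>> Since features.shard is on, each image is stored as <gfid>.N pieces under
>> the bricks' .shard directories, so the allocation reported for the file
>> should be the total of those pieces. A sketch to check that (the <gfid>
>> placeholder is the value getfattr prints):
>>
>> getfattr -n glusterfs.gfid.string /vms/images/103/vm-103-disk-0.qcow2
>> du -ch /disco*/vms/.shard/<gfid>.* | tail -1   # run on each gluster node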
>> ---
>>
>>
>> Gilberto Nunes Ferreira
>>
>>
>>
>>