[Gluster-users] Disk size and virtual size drive me crazy!

Strahil Nikolov hunter86_bg at yahoo.com
Tue Dec 3 16:49:08 UTC 2024


Can you run fstrim and blkdiscard (the latter is destructive and works only on block devices) inside the VM?
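For context, a minimal sketch of what that looks like inside a Linux guest. It assumes the virtual disks are attached with discard/unmap enabled (e.g. the Discard option on the disk in Proxmox), and the device name below is only a placeholder:

fstrim -av              # trim all mounted filesystems that support discard
blkdiscard /dev/vdb     # destructive: discards the entire block device, only for a disk whose data is no longer needed

Note that the space is given back on the image file only if every layer down to it honours the discards.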

Best Regards,
Strahil Nikolov 
 
On Mon, Dec 2, 2024 at 16:12, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

qemu-img info 100/vm-100-disk-0.qcow2
image: 100/vm-100-disk-0.qcow2
file format: qcow2
virtual size: 120 GiB (128849018880 bytes)
disk size: 916 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: 100/vm-100-disk-0.qcow2
    protocol type: file
    file length: 120 GiB (128868941824 bytes)
    disk size: 916 GiB
proxmox01:/vms/images# 
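One way to see whether the file really occupies that much space, or whether the "disk size" reported through the Gluster mount is just inflated block accounting, is to compare the apparent size with the allocated size. A minimal sketch, run from /vms/images with the path taken from the output above:

du -h --apparent-size 100/vm-100-disk-0.qcow2   # logical file length
du -h 100/vm-100-disk-0.qcow2                   # space the filesystem claims is allocated

Running the same two commands against the corresponding brick paths on the Gluster nodes would show whether the inflation comes from the bricks themselves or only from the FUSE mount.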

On Mon, Dec 2, 2024 at 11:07, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

Just to let you know: the option made no difference.
Any further tips would be appreciated.
Cheers

On Fri, Nov 29, 2024 at 16:58, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

Is there any caveat to doing so?
Any risk?
On Fri, Nov 29, 2024 at 16:47, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

No, I didn't! I wasn't aware of this option. I will try it. Thanks!

On Fri, Nov 29, 2024 at 16:43, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

Have you figured it out?
Have you tried setting storage.reserve to 0?
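For reference, a minimal sketch of how that could be checked and changed, assuming the volume name VMS from this thread (storage.reserve defaults to 1, i.e. 1% of each brick is kept back as reserved space):

gluster volume get VMS storage.reserve      # show the current value
gluster volume set VMS storage.reserve 0    # disable the reserve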

Best Regards,
Strahil Nikolov 
 
On Thu, Nov 21, 2024 at 0:39, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
11.1
---
Gilberto Nunes Ferreira 
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore 
On Wed, Nov 20, 2024 at 19:28, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

What's your Gluster version?
Best Regards,
Strahil Nikolov

On Monday, November 11, 2024 at 20:57:50 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
Hi there.

I can't understand why I am getting these different values:

proxmox01:/vms/images# df
Filesystem      Size  Used Avail Use% Mounted on
udev            252G     0  252G   0% /dev
tmpfs            51G  9,4M   51G   1% /run
/dev/sda4       433G   20G  413G   5% /
tmpfs           252G   63M  252G   1% /dev/shm
tmpfs           5,0M     0  5,0M   0% /run/lock
efivarfs        496K  335K  157K  69% /sys/firmware/efi/efivars
/dev/sda2       1,8G  204M  1,5G  12% /boot
/dev/sda1       1,9G   12M  1,9G   1% /boot/efi
/dev/sdb        932G  728G  204G  79% /disco1TB-0
/dev/sdc        932G  718G  214G  78% /disco1TB-1
/dev/sde        932G  720G  212G  78% /disco1TB-2
/dev/sdd        1,9T  1,5T  387G  80% /disco2TB-0
tmpfs            51G  4,0K   51G   1% /run/user/0
gluster1:VMS    4,6T  3,6T  970G  80% /vms
/dev/fuse       128M   36K  128M   1% /etc/pve
proxmox01:/vms/images# cd 103
proxmox01:/vms/images/103# ls
vm-103-disk-0.qcow2  vm-103-disk-1.qcow2
proxmox01:/vms/images/103# ls -lh
total 21T
-rw-r----- 1 root root 101G nov 11 15:53 vm-103-disk-0.qcow2
-rw-r----- 1 root root 210G nov 11 15:45 vm-103-disk-1.qcow2
proxmox01:/vms/images/103# qemu-img info vm-103-disk-0.qcow2 
image: vm-103-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 3.78 TiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: vm-103-disk-0.qcow2
    protocol type: file
    file length: 100 GiB (107390828544 bytes)
    disk size: 3.78 TiB
proxmox01:/vms/images/103# qemu-img info vm-103-disk-1.qcow2 
image: vm-103-disk-1.qcow2
file format: qcow2
virtual size: 2 TiB (2199023255552 bytes)
disk size: 16.3 TiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: vm-103-disk-1.qcow2
    protocol type: file
    file length: 210 GiB (225117732864 bytes)
    disk size: 16.3 TiB
proxmox01:/vms/images/103#
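The "disk size" that qemu-img prints is derived from the allocated block count (st_blocks) that the filesystem reports for the file, so a quick way to confirm where the huge number comes from is to look at it directly. A minimal sketch, run in the same directory:

stat -c '%n: size=%s bytes, blocks=%b x %B bytes' vm-103-disk-1.qcow2

If blocks times block size is wildly larger than the file length, the Gluster FUSE mount is reporting inflated allocation for the file rather than the image actually consuming that much space.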

 Here is the vol info.
proxmox01:/vms/images/103# gluster vol info
 
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: a98f7944-4308-499f-994e-9029f3be56c0
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
Brick6: gluster2:/disco1TB-1/vms
Brick7: gluster1:/disco1TB-2/vms
Brick8: gluster2:/disco1TB-2/vms
Options Reconfigured:
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.server-quorum-type: none
cluster.quorum-type: fixed
network.remote-dio: disable
performance.client-io-threads: on
performance.strict-o-direct: on
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.flush-behind: off
performance.write-behind: off
cluster.data-self-heal-algorithm: full
cluster.favorite-child-policy: mtime
network.ping-timeout: 20
cluster.quorum-count: 1
cluster.quorum-reads: false
cluster.self-heal-daemon: enable
cluster.heal-timeout: 5
user.cifs: off
features.shard: on
cluster.granular-entry-heal: enable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
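Since features.shard is on, a large image does not live as one file on the bricks: the base file holds only the first shard and the rest are stored under the hidden .shard directory at the root of each brick. A rough sketch of how to see where the space is actually accounted on the Gluster nodes; the brick paths come from the vol info above, but the exact layout under the bricks is an assumption:

du -sh /disco*/vms/images/103/vm-103-disk-1.qcow2   # base file on each brick
du -sh /disco*/vms/.shard                           # shard fragments for all images on that brick

If those numbers add up to something sane while the FUSE mount still reports terabytes for a single file, the inflation is coming from how the shard sizes are aggregated on the mount rather than from real disk usage.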

Gilberto Nunes Ferreira