[Bugs] [Bug 1203739] Self-heal of sparse image files on 3-way replica "unsparsifies" the image

bugzilla at redhat.com bugzilla at redhat.com
Tue Mar 31 18:47:54 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1203739

Matt R <mriedel at umaryland.edu> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|needinfo?(mriedel at umaryland.edu)|



--- Comment #4 from Matt R <mriedel at umaryland.edu> ---
Hi Ravishankar,

Here's more detailed information.

For points 3 & 5:
I removed the brick and reduced the replica count to 2. Then, after rebooting,
I reformatted the brick's filesystem and re-added the brick, bringing the
replica count back up to 3.

Below is a detailed list of the steps I performed to reproduce this issue. Let
me know if you need any more info.

Thanks,
Matt

Before:
Server1:
[~]{575}# df -h /gluster/ovirt/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-ovirtlv
                      500G   25G  475G   5% /gluster/ovirt

[~]{577}# du -h -s /gluster/ovirt/
25G    /gluster/ovirt/

Server2:
[~]{484}# df -h /gluster/ovirt/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-ovirtlv
                      500G   25G  475G   5% /gluster/ovirt

[~]{486}# du -hs /gluster/ovirt/
25G    /gluster/ovirt/


Server3:
[~]# df -h /gluster/ovirt/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-ovirtlv
                      500G   25G  475G   5% /gluster/ovirt

[~]{402}# du -hs /gluster/ovirt/
25G    /gluster/ovirt/
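
Side note: a quick way to confirm the images really are sparse is to compare
allocated size against apparent size; with GNU du, something along these
lines:

[~]# du -hs /gluster/ovirt/                  # allocated blocks
[~]# du -hs --apparent-size /gluster/ovirt/  # logical file size

For sparse images the apparent size should come out well above the allocated
size.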


On oVirt:
Put Server1 into maintenance mode

Then:
Remove brick:
gluster> volume remove-brick ovirt replica 2 server1.umaryland.edu:/gluster/ovirt/brick force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
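
If it helps, the replica count can be double-checked before the reboot with
something like:

[~]# gluster volume info ovirt | grep -E 'Type|Number of Bricks'

This should now report "Number of Bricks: 1 x 2 = 2".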

Reboot Server1

Before re-adding the brick:
[~]{588}# gluster volume status ovirt
Status of volume: ovirt
Gluster process                                   Port   Online  Pid
------------------------------------------------------------------------------
Brick server3.umaryland.edu:/gluster/ovirt/brick  49154  Y       3187
Brick server2.umaryland.edu:/gluster/ovirt/brick  49154  Y       8894
NFS Server on localhost                           2049   Y       3200
Self-heal Daemon on localhost                     N/A    Y       3207
NFS Server on server2.umaryland.edu               2049   Y       3399
Self-heal Daemon on server2.umaryland.edu         N/A    Y       3413
NFS Server on server3.umaryland.edu               2049   Y       27589
Self-heal Daemon on server3.umaryland.edu         N/A    Y       27599

Task Status of Volume ovirt
------------------------------------------------------------------------------
There are no active volume tasks

Server1:
[~]{592}# mkfs.xfs -f -i size=512 /dev/rootvg/ovirtlv 
meta-data=/dev/rootvg/ovirtlv    isize=512    agcount=16, agsize=8191984 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=131071744, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=64000, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
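
Implied but not shown above: after the mkfs, the logical volume has to be
mounted back on /gluster/ovirt and the brick directory recreated before the
add-brick is run; presumably something along these lines:

[~]# mount /dev/rootvg/ovirtlv /gluster/ovirt
[~]# mkdir -p /gluster/ovirt/brick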

Add back brick:
gluster> volume add-brick ovirt replica 3 server1.umaryland.edu:/gluster/ovirt/brick force
volume add-brick: success
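
From here the heal can be triggered and watched with the usual commands,
e.g.:

[~]# gluster volume heal ovirt full
[~]# gluster volume heal ovirt info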

After adding the brick:
gluster> volume status ovirt
Status of volume: ovirt
Gluster process                                   Port   Online  Pid
------------------------------------------------------------------------------
Brick server3.umaryland.edu:/gluster/ovirt/brick  49154  Y       3187
Brick server2.umaryland.edu:/gluster/ovirt/brick  49154  Y       8894
Brick server1.umaryland.edu:/gluster/ovirt/brick  49155  Y       10213
NFS Server on localhost                           2049   Y       8804
Self-heal Daemon on localhost                     N/A    Y       8814
NFS Server on server1                             2049   Y       10226
Self-heal Daemon on server1                       N/A    Y       10234
NFS Server on server3.umaryland.edu               2049   Y       31960
Self-heal Daemon on server3.umaryland.edu         N/A    Y       31971

Task Status of Volume ovirt
------------------------------------------------------------------------------
There are no active volume tasks


After heal starts:
Server1:
[~]# df -h /gluster/ovirt/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-ovirtlv
                      500G  7.9G  492G   2% /gluster/ovirt
[~]{614}# du -hs /gluster/ovirt/
7.9G    /gluster/ovirt/
(This will continue to grow until it reaches 25G, which is the actual amount
of disk space used)

Server2 (oVirt SPM):
[~]# df -h /gluster/ovirt/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-ovirtlv
                      500G   25G  475G   6% /gluster/ovirt
[~]{510}# du -hs /gluster/ovirt/
25G    /gluster/ovirt/

(This will stay the same)

Server3:
[~]# df -h /gluster/ovirt/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-ovirtlv
                      500G   44G  457G   9% /gluster/ovirt
[~]{426}# du -hs /gluster/ovirt/
44G    /gluster/ovirt/

(This will continue to grow to the fully allocated size of the images rather
than their sparse size)
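
An easy way to watch this as it happens is to rerun du periodically, e.g.:

[~]# watch -n 60 'du -hs /gluster/ovirt/'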


And for one particular disk image directory:
Server1:
[./e545dbec-c16c-4b25-8d8e-6bcae0f925d1]{645}# du -m .
5898    .
(Staying the same, oddly ~1GB smaller than the original)

Server2:
[./e545dbec-c16c-4b25-8d8e-6bcae0f925d1]{539}# du -m .
6874    .
(Staying the same)

Server3:
[./e545dbec-c16c-4b25-8d8e-6bcae0f925d1]{454}# du -m .
23382    .
(Growing)
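
To pin this down per file, comparing apparent size against allocated blocks
works too; for any one of the images (path illustrative):

[~]# stat -c '%n: %s bytes apparent, %b blocks allocated' ./IMAGE_FILE

On the copy that loses its sparseness the allocated blocks climb toward the
apparent size.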

-- 
You are receiving this mail because:
You are on the CC list for the bug.