[Gluster-users] File Size and Brick Size

Ravishankar N ravishankar at redhat.com
Wed Sep 28 01:28:46 UTC 2016


On 09/28/2016 12:16 AM, ML Wong wrote:
> Hello Ravishankar,
> Thanks for introducing the sharding feature to me.
> It does seem to resolve the problem I was encountering earlier. But I 
> have one question: should we expect the checksum of the file to be 
> different if I copy it from directory A to a shard-enabled volume?

No, the checksums must match. Perhaps Krutika, who works on sharding 
(CC'ed), can help you figure out why that isn't the case here.
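If it helps with debugging, one rough way to check where the mismatch
creeps in is to reassemble the shards directly from the bricks and
compare checksums. This is only a sketch, not a supported tool: it
assumes you run it as root on the brick host, that the bricks sit under
/data/brick*, and that the base file (block 0) landed on brick1; adjust
the paths for your layout.

    FILE=oVirt-Live-4.0.4.iso

    # The file's GFID (via gluster's virtual xattr) names its shards
    GFID=$(getfattr -n glusterfs.gfid.string --only-values "/mnt/$FILE")

    # Block 0 is the base file itself; blocks 1..N live in the bricks'
    # hidden .shard directories as <GFID>.<n>
    {
      cat "/data/brick1/$FILE"
      n=1
      while shard=$(ls /data/brick*/.shard/"$GFID.$n" 2>/dev/null | head -n1)
            [ -n "$shard" ]
      do
        cat "$shard"
        n=$((n + 1))
      done
    } | sha1sum        # should match sha1sum of the source file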
-Ravi
>
> [xxxxx at ip-172-31-1-72 ~]$ sudo sha1sum /var/tmp/oVirt-Live-4.0.4.iso
> ea8472f6408163fa9a315d878c651a519fc3f438  /var/tmp/oVirt-Live-4.0.4.iso
> [xxxxx at ip-172-31-1-72 ~]$ sudo rsync -avH /var/tmp/oVirt-Live-4.0.4.iso /mnt/
> sending incremental file list
> oVirt-Live-4.0.4.iso
>
> sent 1373802342 bytes  received 31 bytes  30871963.44 bytes/sec
> total size is 1373634560  speedup is 1.00
> [xxxxx at ip-172-31-1-72 ~]$ sudo sha1sum /mnt/oVirt-Live-4.0.4.iso
> 14e9064857b40face90c91750d79c4d8665b9cab  /mnt/oVirt-Live-4.0.4.iso
>
> On Mon, Sep 26, 2016 at 6:42 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
>     On 09/27/2016 05:15 AM, ML Wong wrote:
>>     Has anyone on the list tried copying a file that is bigger than
>>     the individual brick/replica size?
>>     Test Scenario:
>>     Distributed-replicated volume, 2GB total, 2x2 = 4 bricks
>>     (two replica pairs); each brick is 1GB.
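>>
>>     For reference, a layout like this can be created with commands
>>     along these lines (hostnames and brick paths are illustrative):
>>
>>         gluster volume create sharevol1 replica 2 \
>>             node1:/data/brick1 node2:/data/brick1 \
>>             node1:/data/brick2 node2:/data/brick2
>>         gluster volume start sharevol1
>>
>>     With "replica 2", each consecutive pair of bricks forms one
>>     replica set, giving two 1GB replica pairs distributed into a
>>     2GB volume.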
>>
>>     When I try to copy a file to this volume, over both FUSE and NFS
>>     mounts, I get an I/O error:
>>     Filesystem                  Size  Used Avail Use% Mounted on
>>     /dev/mapper/vg0-brick1     1017M   33M  985M   4% /data/brick1
>>     /dev/mapper/vg0-brick2     1017M  109M  909M  11% /data/brick2
>>     lbre-cloud-dev1:/sharevol1  2.0G  141M  1.9G   7% /sharevol1
>>
>>     [xxxxxx at cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
>>     1.3G    /var/tmp/ovirt-live-el7-3.6.2.iso
>>
>>     [melvinw at lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso /sharevol1/
>>     cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
>>     Input/output error
>>     cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
>>     Input/output error
>>     cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
>>     Input/output error
>
>     Does the mount log give you more information? If it were a disk-full
>     issue, the error you would get would be ENOSPC, not EIO. This looks
>     like something else.
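>
>     FUSE client logs usually sit under /var/log/glusterfs/, named
>     after the mount point (e.g. sharevol1.log for a mount at
>     /sharevol1), though the exact path varies by version and
>     packaging. Something like this pulls the most recent error lines:
>
>         grep ' E ' /var/log/glusterfs/sharevol1.log | tail -n 20
>
>     Gluster marks error-level messages with " E " in the log line, so
>     any EIO returned by a translator should show up there.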
>>
>>     I know we have experts on this mailing list, and I assume this is
>>     a common situation that many Gluster users have encountered. What
>>     worries me is: what if you have a big VM file sitting on top of a
>>     Gluster volume ...?
>>
>     It is recommended to use sharding
>     (http://blog.gluster.org/2015/12/introducing-shard-translator/)
>     for VM workloads to alleviate these kinds of issues.
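>
>     For example (the shard-block-size of 64MB is the value commonly
>     recommended for VM images; the default differs across releases):
>
>         gluster volume set sharevol1 features.shard on
>         gluster volume set sharevol1 features.shard-block-size 64MB
>
>     Note that only files created after sharding is enabled get
>     sharded; files already on the volume are left whole.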
>     -Ravi
>
>>     Any insights will be much appreciated.
>>
>>
>>
>>     _______________________________________________
>>     Gluster-users mailing list
>>     Gluster-users at gluster.org
>>     http://www.gluster.org/mailman/listinfo/gluster-users
>

