[Gluster-users] No performance difference using libgfapi?
Humble Devassy Chirammal
humble.devassy at gmail.com
Fri Apr 4 07:05:20 UTC 2014
Hi David,
Regarding hdparm:
'hdparm' is meant to be used against SATA/IDE devices.
--snip--
hdparm - get/set SATA/IDE device parameters

hdparm provides a command line interface to various kernel interfaces
supported by the Linux SATA/PATA/SAS "libata" subsystem and the older
IDE driver subsystem. Many newer (2008 and later) USB drive enclosures
now also support "SAT" (SCSI-ATA Command Translation) and therefore may
also work with hdparm. E.g. recent WD "Passport" models and recent
NexStar-3 enclosures. Some options may work correctly only with the
latest kernels.
--/snip--
Here in your guest, it is a 'virtio' disk (/dev/vd{a,b,c..}) that sits on
the 'virtio' bus, and virtio-blk is not an ATA device, so this looks like
an incorrect way of using 'hdparm'.
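Since hdparm's timing modes rely on ATA ioctls, a rough read-throughput check inside the guest can be done with plain dd and direct I/O instead. A minimal sketch only (file path and sizes are illustrative; O_DIRECT needs filesystem support, so it may fail on tmpfs):

```shell
# Write a 64 MB scratch file, then time a direct-I/O read of it.
# iflag=direct bypasses the guest page cache so the read actually
# hits the virtual disk instead of memory.
f=/var/tmp/readtest.img
dd if=/dev/zero of="$f" bs=1M count=64 conv=fsync 2>/dev/null
dd if="$f" of=/dev/null bs=1M iflag=direct
rm -f "$f"
```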
Also, most virtualization software now lets you use "virtio-scsi" (the
disk shown inside the guest will be sd{a,b,..}), where most of the feature
set is respected from the SCSI protocol's point of view. You may want to
look into that as well.
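For reference, a virtio-scsi disk backed by a gluster:// URL can be wired up on the QEMU command line roughly like this (a sketch only; HOST, VOLNAME, and the image path are placeholders, not taken from your setup):

```shell
# Attach a GlusterFS-backed image through a virtio-scsi controller;
# inside the guest it appears as /dev/sda rather than /dev/vda.
qemu-system-x86_64 \
    -m 2048 -smp 2 \
    -drive file=gluster://HOST/VOLNAME/path/to/image.img,if=none,id=drive0,cache=none \
    -device virtio-scsi-pci,id=scsi0 \
    -device scsi-hd,drive=drive0
```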
--Humble
On Thu, Apr 3, 2014 at 3:35 PM, Dave Christianson <
davidchristianson3 at gmail.com> wrote:
> Good Morning,
>
> In my earlier experience invoking a VM using qemu/libgfapi, I reported
> that it was noticeably faster than the same VM invoked from libvirt using a
> FUSE mount; however, this was erroneous, as the qemu/libgfapi-invoked image
> had been created with twice the RAM and CPUs...
>
> So, invoking the image using both methods with consistent specs of 2 GB
> RAM and 2 CPUs, I attempted to check drive performance using the following
> commands:
>
> (For regular FUSE mount I have the gluster volume mounted at
> /var/lib/libvirt/images.)
>
> (For libgfapi I specify -drive file=gluster://gfs-00/gfsvol/tester1/img.)
>
> Using libvirt/FUSE mount:
> [root at tester1 ~]# hdparm -Tt /dev/vda1
> /dev/vda1:
> Timing cached reads: 11346 MB in 2.00 seconds = 5681.63 MB/sec
> Timing buffered disk reads: 36 MB in 3.05 seconds = 11.80 MB/sec
> [root at tester1 ~]# dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f
> /tmp/output
> 10240+0 records in
> 10240+0 records out
> 41943040 bytes (42MB) copied, 0.0646241 s, 649 MB/sec
>
> Using qemu/libgfapi:
> [root at tester1 ~]# hdparm -Tt /dev/vda1
> /dev/vda1:
> Timing cached reads: 11998 MB in 2.00 seconds = 6008.57 MB/sec
> Timing buffered disk reads: 36 MB in 3.03 seconds = 11.89 MB/sec
> [root at tester1 ~]# dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f
> /tmp/output
> 10240+0 records in
> 10240+0 records out
> 41943040 bytes (42MB) copied, 0.0621412 s, 675 MB/sec
>
> Should I be seeing a bigger difference, or am I doing something wrong?
>
> I'm also curious whether people have found that the performance difference
> is greater as the size of the gluster cluster scales up.
>
> Thanks,
> David
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>