[Gluster-users] Gluster for Vmware

Michael DePaulo mikedep333 at gmail.com
Sun Aug 31 04:17:24 UTC 2014


Hi Peter,

On Aug 26, 2014 2:18 AM, "Peter Auyeung" <pauyeung at shopzilla.com> wrote:
>
> You can use ctdb for nfs failover
>
> But I do want to know if we can use native glusterfs client for vmware
>
> Peter

I did some research, and it looks like the native GlusterFS client is not
available for ESXi. The primary reason is probably that FUSE is not
available on ESXi.

If you do not need all the features of vSphere, you might try VMware
Workstation running in server mode on top of Linux. I'm sure that will work
with GlusterFS.
http://blogs.vmware.com/workstation/2012/02/vmware-workstation-8-as-an-alternative-to-vmware-server.html

-Mike

> On Aug 25, 2014, at 9:28 PM, "Chandrahasa S" <chandrahasa.s at tcs.com> wrote:
>
>> I am thinking of using a Gluster volume to create a datastore in VMware.
>>
>> I can use NFS, but I won't get HA and load balancing like I would with
>> the GlusterFS client.
>>
>> Chandra
>>
>>
>>
>> From:        "John G. Heim" <jheim at math.wisc.edu>
>> To:        Chandrahasa S <chandrahasa.s at tcs.com>, gluster-users at gluster.org
>> Date:        08/25/2014 06:49 PM
>> Subject:        Re: [Gluster-users] Gluster for Vmware
>> ________________________________
>>
>>
>>
>> Do you mean you want to mount a gluster volume on a virtual machine? You
>> can do that the same way you'd do it on a real machine. You can probably
>> even create a brick on a virtual machine, but I don't see much point in
>> that.
>>
>> But we regularly mount our gluster volume on virtual machines. We use
>> Debian, so it's as simple as this:
>>
>> 1. service glusterfs-server start
>> 2. mount -t glusterfs localhost:/volumename /mountpoint
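
A side note on step 2: to make that mount persist across reboots, an
/etc/fstab entry along these lines should work (a sketch; substitute your
own volume name and mount point for localhost:/volumename and /mountpoint):

```
localhost:/volumename  /mountpoint  glusterfs  defaults,_netdev  0  0
```

The _netdev option tells the init system this is a network filesystem, so
it is mounted only after networking is up at boot.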
>>
>>
>>
>> On 08/25/2014 12:06 AM, Chandrahasa S wrote:
>> Dear All,
>>
>> Is there any way to use a GlusterFS volume in a VMware environment?
>>
>>
>> Chandra.
>>
>>
>>
>> From:        Ben Turner <bturner at redhat.com>
>> To:        Juan José Pavlik Salles <jjpavlik at gmail.com>
>> Cc:        gluster-users at gluster.org
>> Date:        08/22/2014 08:57 PM
>> Subject:        Re: [Gluster-users] Gluster 3.5.2 gluster, how does cache work?
>> Sent by:        gluster-users-bounces at gluster.org
>> ________________________________
>>
>>
>>
>> ----- Original Message -----
>> > From: "Juan José Pavlik Salles" <jjpavlik at gmail.com>
>> > To: gluster-users at gluster.org
>> > Sent: Thursday, August 21, 2014 4:07:28 PM
>> > Subject: [Gluster-users] Gluster 3.5.2 gluster, how does cache work?
>> >
>> > Hi guys, I've been reading a bit about caching in gluster volumes, but I
>> > still don't get a few things. I set up a gluster replica 2 volume like
>> > this:
>> >
>> > [root at gluster-test-1 ~]# gluster vol info vol_rep
>> > Volume Name: vol_rep
>> > Type: Replicate
>> > Volume ID: b77db06d-2686-46c7-951f-e43bde21d8ec
>> > Status: Started
>> > Number of Bricks: 1 x 2 = 2
>> > Transport-type: tcp
>> > Bricks:
>> > Brick1: gluster-test-1:/ladrillos/l1/l
>> > Brick2: gluster-test-2:/ladrillos/l1/l
>> > Options Reconfigured:
>> > performance.cache-min-file-size: 90MB
>> > performance.cache-max-file-size: 256MB
>> > performance.cache-refresh-timeout: 60
>> > performance.cache-size: 256MB
>> > [root at gluster-test-1 ~]#
>> >
>> > Then I mounted the volume with the gluster client on another machine. I
>> > created an 80 MB file called 80, and here is the read test:
>> >
>> > [root at gluster-client-1 gluster_vol]# dd if=/mnt/gluster_vol/80 of=/dev/null bs=1M
>> > 80+0 records in
>> > 80+0 records out
>> > 83886080 bytes (84 MB) copied, 1,34145 s, 62,5 MB/s
>> > [root at gluster-client-1 gluster_vol]# dd if=/mnt/gluster_vol/80 of=/dev/null bs=1M
>> > 80+0 records in
>> > 80+0 records out
>> > 83886080 bytes (84 MB) copied, 0,0246918 s, 3,4 GB/s
>> > [root at gluster-client-1 gluster_vol]# dd if=/mnt/gluster_vol/80 of=/dev/null bs=1M
>> > 80+0 records in
>> > 80+0 records out
>> > 83886080 bytes (84 MB) copied, 0,0195678 s, 4,3 GB/s
>> > [root at gluster-client-1 gluster_vol]#
>>
>> You are seeing the effect of client-side kernel caching. If you want to
>> see the actual throughput for reads, run:
>>
>> sync; echo 3 > /proc/sys/vm/drop_caches; dd blah
>>
>> Kernel caching happens on both the client and server side; when I want
>> to see uncached performance, I drop caches on both clients and servers:
>>
>> run_drop_cache()
>> {
>>   for host in $MASTERNODE $NODE $CLIENT
>>   do
>>       echo "Dropping cache on $host"
>>       ssh -i /root/.ssh/my_id root@${host} sync
>>       ssh -i /root/.ssh/my_id root@${host} "echo 3 > /proc/sys/vm/drop_caches"
>>   done
>> }
>>
>> HTH!
>>
>> -b
>>
>> > Cache is working flawlessly (even though 80 MB is smaller than the
>> > min-file-size value, but I don't care about that right now). What I
>> > don't get is where the cache is stored. Is it stored on the client
>> > side or on the server side? According to the documentation, the
>> > io-cache translator can be loaded on both sides (client and server),
>> > so how can I know where it is being loaded? Judging by the speed it
>> > looks like it is being stored locally, but I'd like to be sure.
>> >
>> > Thanks!
>> >
>> > --
>> > Pavlik Salles Juan José
>> > Blog - http://viviendolared.blogspot.com
>> >
>> > _______________________________________________
>> > Gluster-users mailing list
>> > Gluster-users at gluster.org
>> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>
>

