[Gluster-users] Help: gluster-block

Prasanna Kalever pkalever at redhat.com
Wed Apr 3 15:41:23 UTC 2019


On Tue, Apr 2, 2019 at 1:34 AM Karim Roumani <karim.roumani at tekreach.com>
wrote:

> Actually we have a question.
>
> We did two tests as follows.
>
> Test 1 - iSCSI target on the glusterFS server
> Test 2 - iSCSI target on a separate server with gluster client
>
> Test 2 performed a read speed of <1GB/second while Test 1 about
> 300MB/second
>
> Any reason you see to why this may be the case?
>

For the Test 1 case:

1. The ops between
* the iSCSI initiator <-> iSCSI target, and
* tcmu-runner <-> the gluster server

are all using the same NIC resource.

2. Also, the node might be facing high resource usage (high CPU and/or low
memory), since everything is running on the same node.

You can also check the gluster volume profile info to narrow down some of these.
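
For example, something along these lines (where <volname> is just a
placeholder for your volume name):

    gluster volume profile <volname> start
    # re-run the read test, then:
    gluster volume profile <volname> info
    gluster volume profile <volname> stop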

Thanks!
--
Prasanna


>>
> On Mon, Apr 1, 2019 at 1:00 PM Karim Roumani <karim.roumani at tekreach.com>
> wrote:
>
>> Thank you Prasanna for your quick response, very much appreciated. We will
>> review and get back to you.
>>>>
>> On Mon, Mar 25, 2019 at 9:00 AM Prasanna Kalever <pkalever at redhat.com>
>> wrote:
>>
>>> [ adding +gluster-users for archive purpose ]
>>>
>>> On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin <jeffrey.chin at tekreach.com>
>>> wrote:
>>> >
>>> > Hello Mr. Kalever,
>>>
>>> Hello Jeffrey,
>>>
>>> >
>>> > I am currently working on a project to utilize GlusterFS for VMware
>>> VMs. In our research, we found that utilizing block devices with GlusterFS
>>> would be the best approach for our use case (correct me if I am wrong). I
>>> saw the gluster utility that you are a contributor for called gluster-block
>>> (https://github.com/gluster/gluster-block), and I had a question about
>>> the configuration. From what I understand, gluster-block only works on the
>>> servers that are serving the gluster volume. Would it be possible to run
>>> the gluster-block utility on a client machine that has a gluster volume
>>> mounted to it?
>>>
>>> Yes, that is right! At the moment gluster-block is coupled with
>>> glusterd for simplicity.
>>> But we have made some changes here [1] to provide a way to specify a
>>> server address (volfile-server) that is outside the gluster-blockd
>>> node, please take a look.
>>>
>>> Although it is not a complete solution, it should at least help for
>>> some use cases. Feel free to raise an issue [2] with the details about
>>> your use case, or submit a PR yourself :-)
>>> We never picked this up, as we never had a use case needing separation
>>> of gluster-blockd and glusterd.
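>>>
>>> For reference, basic gluster-block usage on one of the gluster nodes
>>> looks roughly like the sketch below (the volume name, block name, host
>>> IPs and size are just placeholders, not taken from your setup):
>>>
>>>     # create a 3-way highly-available block device on an existing volume
>>>     gluster-block create blockvol/sample-block ha 3 192.168.1.11,192.168.1.12,192.168.1.13 1GiB
>>>
>>>     # list and inspect the block devices hosted on that volume
>>>     gluster-block list blockvol
>>>     gluster-block info blockvol/sample-block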
>>>
>>> >
>>> > I also have another question: how do I make the iSCSI targets persist
>>> if all of the gluster nodes were rebooted? It seems like once all of the
>>> nodes reboot, I am unable to reconnect to the iSCSI targets created by the
>>> gluster-block utility.
>>>
>>> Do you mean rebooting the iSCSI initiator, or the gluster-block/gluster
>>> target/server nodes?
>>>
>>> 1. For the initiator to automatically reconnect to the block devices
>>> after a reboot, we need to make the below change in /etc/iscsi/iscsid.conf
>>> (see the sketch after point 2):
>>> node.startup = automatic
>>>
>>> 2. If instead you mean the case where all the gluster nodes go down: on
>>> the initiator all the available HA paths will be down, but if we still
>>> want the I/O to be queued on the initiator until one of the paths
>>> (gluster nodes) is available again, then in the gluster-block specific
>>> section of multipath.conf you need to replace 'no_path_retry 120' with
>>> 'no_path_retry queue'.
>>> Note: refer to the README for the current multipath.conf setting
>>> recommendations.
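>>>
>>> A rough sketch of both changes (the target IQN and portal IP are
>>> placeholders, and the multipath.conf device section below only shows
>>> the changed option; take the remaining settings from the README):
>>>
>>>     # 1. /etc/iscsi/iscsid.conf -- applies to targets discovered after
>>>     #    the change:
>>>     node.startup = automatic
>>>
>>>     # already-discovered targets can be updated in place:
>>>     iscsiadm -m node -T <target-iqn> -p <portal-ip> \
>>>              --op update -n node.startup -v automatic
>>>
>>>     # 2. /etc/multipath.conf -- gluster-block (LIO-ORG) device section:
>>>     devices {
>>>         device {
>>>             vendor "LIO-ORG"
>>>             # was: no_path_retry 120
>>>             no_path_retry queue
>>>             # remaining options as recommended in the gluster-block README
>>>         }
>>>     }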
>>>
>>> [1] https://github.com/gluster/gluster-block/pull/161
>>> [2] https://github.com/gluster/gluster-block/issues/new
>>>
>>> BRs,
>>> --
>>> Prasanna
>>>
>>
>>
>> --
>>
>> Thank you,
>>
>> *Karim Roumani*
>> Director of Technology Solutions
>>
>> TekReach Solutions / Albatross Cloud
>> 714-916-5677
>> Karim.Roumani at tekreach.com
>> Albatross.cloud <https://albatross.cloud/> - One Stop Cloud Solutions
>> Portalfronthosting.com <http://portalfronthosting.com/> - Complete
>> SharePoint Solutions
>>
>
>
> --
>
> Thank you,
>
> *Karim Roumani*
> Director of Technology Solutions
>
> TekReach Solutions / Albatross Cloud
> 714-916-5677
> Karim.Roumani at tekreach.com
> Albatross.cloud <https://albatross.cloud/> - One Stop Cloud Solutions
> Portalfronthosting.com <http://portalfronthosting.com/> - Complete
> SharePoint Solutions
>