[Gluster-users] Help: gluster-block

Karim Roumani karim.roumani at tekreach.com
Mon Apr 1 20:03:54 UTC 2019


Actually we have a question.

We did two tests as follows.

Test 1 - iSCSI target on the glusterFS server
Test 2 - iSCSI target on a separate server with gluster client

Test 2 achieved a read speed of <1 GB/second, while Test 1 reached only about 300 MB/second.

Any reason you see to why this may be the case?

On Mon, Apr 1, 2019 at 1:00 PM Karim Roumani <karim.roumani at tekreach.com>
wrote:

> Thank you, Prasanna, for your quick response. It is very much appreciated;
> we will review and get back to you.
>
> On Mon, Mar 25, 2019 at 9:00 AM Prasanna Kalever <pkalever at redhat.com>
> wrote:
>
>> [ adding +gluster-users for archive purpose ]
>>
>> On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin <jeffrey.chin at tekreach.com>
>> wrote:
>> >
>> > Hello Mr. Kalever,
>>
>> Hello Jeffrey,
>>
>> >
>> > I am currently working on a project to use GlusterFS for VMware
>> VMs. In our research, we found that using block devices with GlusterFS
>> would be the best approach for our use case (correct me if I am wrong). I
>> came across gluster-block (https://github.com/gluster/gluster-block), a
>> utility you contribute to, and I had a question about its configuration.
>> From what I understand, gluster-block only works on the servers that are
>> serving the gluster volume. Would it be possible to run the gluster-block
>> utility on a client machine that has a gluster volume mounted on it?
>>
>> Yes, that is right! At the moment gluster-block is coupled with
>> glusterd for simplicity.
>> However, we have made some changes [1] that provide a way to specify a
>> server address (volfile-server) outside the gluster-blockd node; please
>> take a look.
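For context, a typical gluster-block invocation (run on one of the gluster nodes) looks roughly like the sketch below. The volume name, block name, host IPs, and size are placeholders, and the exact option set may differ between releases, so check `gluster-block help` on your version:

```
# Create a 1 GiB block device 'sample-block' on gluster volume 'block-test',
# exported from two nodes for multipath HA (hypothetical hosts/names):
gluster-block create block-test/sample-block ha 2 192.168.1.11,192.168.1.12 1GiB

# Inspect what was created:
gluster-block list block-test
gluster-block info block-test/sample-block
```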
>>
>> Although it is not a complete solution, it should at least help with
>> some use cases. Feel free to raise an issue [2] with the details of
>> your use case, or submit a PR yourself :-)
>> We never picked this up, as we never had a use case that needed
>> gluster-blockd and glusterd to be separated.
>>
>> >
>> > I also have another question: how do I make the iSCSI targets persist
>> if all of the gluster nodes were rebooted? It seems like once all of the
>> nodes reboot, I am unable to reconnect to the iSCSI targets created by the
>> gluster-block utility.
>>
>> Do you mean rebooting the iSCSI initiator, or the gluster-block/gluster
>> target/server nodes?
>>
>> 1. For the initiator to automatically reconnect to block devices after
>> a reboot, make the following change in /etc/iscsi/iscsid.conf:
>> node.startup = automatic
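Concretely, that setting plus the usual initiator-side steps look something like this (the portal address is a made-up placeholder; substitute one of your gluster nodes):

```
# /etc/iscsi/iscsid.conf -- log in to known targets automatically at boot
node.startup = automatic

# On the initiator, discover and log in to the targets, and make sure the
# iSCSI services come up at boot:
#   iscsiadm -m discovery -t sendtargets -p 192.168.1.11
#   iscsiadm -m node -l
#   systemctl enable --now iscsid
```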
>>
>> 2. If you mean all the gluster nodes going down: in that case all the
>> available HA paths on the initiator will be down, but if you still want
>> I/O to be queued on the initiator until one of the paths (gluster
>> nodes) becomes available again, then in the gluster-block specific
>> section of multipath.conf replace 'no_path_retry 120' with
>> 'no_path_retry queue'.
>> Note: refer README for current multipath.conf setting recommendations.
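For illustration, the gluster-block (LIO) device section of /etc/multipath.conf would then look roughly like the sketch below. The values here follow the general shape of the README's recommendations but are not authoritative; verify every setting against the README for your release:

```
# /etc/multipath.conf -- gluster-block specific section (sketch only;
# confirm values against the gluster-block README for your version)
devices {
        device {
                vendor "LIO-ORG"
                path_grouping_policy "failover"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                no_path_retry queue   # queue I/O while all paths are down
        }
}
```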
>>
>> [1] https://github.com/gluster/gluster-block/pull/161
>> [2] https://github.com/gluster/gluster-block/issues/new
>>
>> BRs,
>> --
>> Prasanna
>>


-- 

Thank you,

*Karim Roumani*
Director of Technology Solutions

TekReach Solutions / Albatross Cloud
714-916-5677
Karim.Roumani at tekreach.com
Albatross.cloud <https://albatross.cloud/> - One Stop Cloud Solutions
Portalfronthosting.com <http://portalfronthosting.com/> - Complete
SharePoint Solutions
