[GEDI] [PATCH-for-9.1 v2 2/3] migration: Remove RDMA protocol handling

Zheng Chuan zhengchuan at huawei.com
Thu May 9 08:58:34 UTC 2024


Hi Peter, Lei, Jinpu,

On 2024/5/8 0:28, Peter Xu wrote:
> On Tue, May 07, 2024 at 01:50:43AM +0000, Gonglei (Arei) wrote:
>> Hello,
>>
>>> -----Original Message-----
>>> From: Peter Xu [mailto:peterx at redhat.com]
>>> Sent: Monday, May 6, 2024 11:18 PM
>>> To: Gonglei (Arei) <arei.gonglei at huawei.com>
>>> Cc: Daniel P. Berrangé <berrange at redhat.com>; Markus Armbruster
>>> <armbru at redhat.com>; Michael Galaxy <mgalaxy at akamai.com>; Yu Zhang
>>> <yu.zhang at ionos.com>; Zhijian Li (Fujitsu) <lizhijian at fujitsu.com>; Jinpu Wang
>>> <jinpu.wang at ionos.com>; Elmar Gerdes <elmar.gerdes at ionos.com>;
>>> qemu-devel at nongnu.org; Yuval Shaia <yuval.shaia.ml at gmail.com>; Kevin Wolf
>>> <kwolf at redhat.com>; Prasanna Kumar Kalever
>>> <prasanna.kalever at redhat.com>; Cornelia Huck <cohuck at redhat.com>;
>>> Michael Roth <michael.roth at amd.com>; Prasanna Kumar Kalever
>>> <prasanna4324 at gmail.com>; integration at gluster.org; Paolo Bonzini
>>> <pbonzini at redhat.com>; qemu-block at nongnu.org; devel at lists.libvirt.org;
>>> Hanna Reitz <hreitz at redhat.com>; Michael S. Tsirkin <mst at redhat.com>;
>>> Thomas Huth <thuth at redhat.com>; Eric Blake <eblake at redhat.com>; Song
>>> Gao <gaosong at loongson.cn>; Marc-André Lureau
>>> <marcandre.lureau at redhat.com>; Alex Bennée <alex.bennee at linaro.org>;
>>> Wainer dos Santos Moschetta <wainersm at redhat.com>; Beraldo Leal
>>> <bleal at redhat.com>; Pannengyuan <pannengyuan at huawei.com>;
>>> Xiexiangyou <xiexiangyou at huawei.com>
>>> Subject: Re: [PATCH-for-9.1 v2 2/3] migration: Remove RDMA protocol handling
>>>
>>> On Mon, May 06, 2024 at 02:06:28AM +0000, Gonglei (Arei) wrote:
>>>> Hi, Peter
>>>
>>> Hey, Lei,
>>>
>>> Happy to see you around again after years.
>>>
>> Haha, me too.
>>
>>>> RDMA features high bandwidth, low latency (in non-blocking lossless
>>>> network), and direct remote memory access by bypassing the CPU (As you
>>>> know, CPU resources are expensive for cloud vendors, which is one of
>>>> the reasons why we introduced offload cards.), which TCP does not have.
>>>
>>> It's another cost to use offload cards, v.s. preparing more cpu resources?
>>>
>> Software and hardware offload converged architecture is the way to go for all cloud vendors 
>> (Including comprehensive benefits in terms of performance, cost, security, and innovation speed), 
>> it's not just a matter of adding the resource of a DPU card.
>>
>>>> In some scenarios where fast live migration is needed (extremely short
>>>> interruption duration and migration duration) is very useful. To this
>>>> end, we have also developed RDMA support for multifd.
>>>
>>> Will any of you upstream that work?  I'm curious how intrusive would it be
>>> when adding it to multifd, if it can keep only 5 exported functions like what
>>> rdma.h does right now it'll be pretty nice.  We also want to make sure it works
>>> with arbitrary sized loads and buffers, e.g. vfio is considering to add IO loads to
>>> multifd channels too.
>>>
>>
>> In fact, we sent the patchset to the community in 2021. Pls see:
>> https://lore.kernel.org/all/20210203185906.GT2950@work-vm/T/
> 

Yes, I sent the patchset adding multifd support for RDMA migration, taking over the work from my colleague, and I'm sorry for not following up on it at the time for various reasons.
I also strongly agree with Lei that the RDMA protocol has some particular advantages over TCP in certain scenarios, and we do indeed use it in our product.

> I wasn't aware of that for sure in the past..
> 
> Multifd has changed quite a bit in the last 9.0 release, that may not apply
> anymore.  One thing to mention is please look at Dan's comment on possible
> use of rsocket.h:
> 
> https://lore.kernel.org/all/ZjJm6rcqS5EhoKgK@redhat.com/
> 
> And Jinpu did help provide an initial test result over the library:
> 
> https://lore.kernel.org/qemu-devel/CAMGffEk8wiKNQmoUYxcaTHGtiEm2dwoCF_W7T0vMcD-i30tUkA@mail.gmail.com/
> 
> It looks like we have a chance to apply that in QEMU.
> 
>>
>>
>>> One thing to note that the question here is not about a pure performance
>>> comparison between rdma and nics only.  It's about help us make a decision
>>> on whether to drop rdma, iow, even if rdma performs well, the community still
>>> has the right to drop it if nobody can actively work and maintain it.
>>> It's just that if nics can perform as good it's more a reason to drop, unless
>>> companies can help to provide good support and work together.
>>>
>>
>> We are happy to provide the necessary review and maintenance work for RDMA
>> if the community needs it.
>>
>> CC'ing Chuan Zheng.
> 
> I'm not sure whether you and Jinpu's team would like to work together and
> provide a final solution for rdma over multifd.  It could be much simpler
> than the original 2021 proposal if the rsocket API will work out.
> 
> Thanks,
> 
That's good news, the socket abstraction for RDMA!
When I developed the series above, the biggest pain point was that RDMA migration had no QIOChannel abstraction, so I had to introduce a 'fake channel' for it, which made the implementation awkward.
So, as far as I can tell, we can proceed as follows:
i. first, evaluate whether rsocket is good enough to satisfy our fundamental QIOChannel abstraction;
ii. if it works, see whether it gives us the opportunity to hide the details of the RDMA protocol behind rsocket, removing most of the code in rdma.c along with some hacks in the migration main path;
iii. implement the advanced features, such as multifd and multi-URI support, for RDMA migration.

Since I am not familiar with rsocket, I will need some time to look at it and do a quick verification of RDMA migration based on rsocket.
But yes, I am willing to get involved in this refactoring work and to see if we can make this migration feature even better. :)


-- 
Regards.
Chuan
