[GEDI] [PATCH-for-9.1 v2 2/3] migration: Remove RDMA protocol handling
Gonglei (Arei)
arei.gonglei at huawei.com
Tue May 7 01:50:43 UTC 2024
Hello,
> -----Original Message-----
> From: Peter Xu [mailto:peterx at redhat.com]
> Sent: Monday, May 6, 2024 11:18 PM
> To: Gonglei (Arei) <arei.gonglei at huawei.com>
> Cc: Daniel P. Berrangé <berrange at redhat.com>; Markus Armbruster
> <armbru at redhat.com>; Michael Galaxy <mgalaxy at akamai.com>; Yu Zhang
> <yu.zhang at ionos.com>; Zhijian Li (Fujitsu) <lizhijian at fujitsu.com>; Jinpu Wang
> <jinpu.wang at ionos.com>; Elmar Gerdes <elmar.gerdes at ionos.com>;
> qemu-devel at nongnu.org; Yuval Shaia <yuval.shaia.ml at gmail.com>; Kevin Wolf
> <kwolf at redhat.com>; Prasanna Kumar Kalever
> <prasanna.kalever at redhat.com>; Cornelia Huck <cohuck at redhat.com>;
> Michael Roth <michael.roth at amd.com>; Prasanna Kumar Kalever
> <prasanna4324 at gmail.com>; integration at gluster.org; Paolo Bonzini
> <pbonzini at redhat.com>; qemu-block at nongnu.org; devel at lists.libvirt.org;
> Hanna Reitz <hreitz at redhat.com>; Michael S. Tsirkin <mst at redhat.com>;
> Thomas Huth <thuth at redhat.com>; Eric Blake <eblake at redhat.com>; Song
> Gao <gaosong at loongson.cn>; Marc-André Lureau
> <marcandre.lureau at redhat.com>; Alex Bennée <alex.bennee at linaro.org>;
> Wainer dos Santos Moschetta <wainersm at redhat.com>; Beraldo Leal
> <bleal at redhat.com>; Pannengyuan <pannengyuan at huawei.com>;
> Xiexiangyou <xiexiangyou at huawei.com>
> Subject: Re: [PATCH-for-9.1 v2 2/3] migration: Remove RDMA protocol handling
>
> On Mon, May 06, 2024 at 02:06:28AM +0000, Gonglei (Arei) wrote:
> > Hi, Peter
>
> Hey, Lei,
>
> Happy to see you around again after years.
>
Haha, me too.
> > RDMA features high bandwidth, low latency (in a non-blocking lossless
> > network), and direct remote memory access that bypasses the CPU (as you
> > know, CPU resources are expensive for cloud vendors, which is one of
> > the reasons we introduced offload cards), which TCP does not have.
>
> Isn't using offload cards another cost, vs. provisioning more CPU resources?
>
A converged software-and-hardware offload architecture is the way to go for all
cloud vendors (considering the overall benefits in performance, cost, security,
and speed of innovation); it's not just a matter of adding a DPU card as an
extra resource.
> > RDMA is very useful in scenarios where fast live migration is needed
> > (extremely short interruption and migration durations). To this
> > end, we have also developed RDMA support for multifd.
>
> Will any of you upstream that work? I'm curious how intrusive it would be
> to add it to multifd; if it can keep to only the five exported functions that
> rdma.h has right now, that would be pretty nice. We also want to make sure it
> works with arbitrarily sized loads and buffers; e.g., VFIO is considering
> adding I/O loads to multifd channels too.
>
In fact, we sent the patchset to the community in 2021. Please see:
https://lore.kernel.org/all/20210203185906.GT2950@work-vm/T/
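To give a feel for the kind of narrow surface being discussed, here is a
minimal sketch of a transport ops table behind which a multifd RDMA backend
could hide. This is purely illustrative: the struct and function names below
are invented for this email, not QEMU's actual migration/rdma.h API, and the
stub backend just loops data through a buffer so the interface shape can be
exercised without RDMA hardware.

```c
/* Hypothetical sketch: MultiFDRdmaOps and the stub_* functions are invented
 * names, not QEMU's real API. The idea is a ~5-function surface, similar in
 * spirit to what rdma.h exports today. */
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

typedef struct MultiFDRdmaOps {
    int     (*connect)(const char *host, int port);
    ssize_t (*send)(int channel, const void *buf, size_t len);
    ssize_t (*recv)(int channel, void *buf, size_t len);
    int     (*flush)(int channel);
    void    (*cleanup)(int channel);
} MultiFDRdmaOps;

/* Stub backend standing in for real RDMA verbs: data is copied through a
 * static buffer instead of being posted to a queue pair. */
static char   loopback[4096];
static size_t loopback_len;

static int stub_connect(const char *host, int port)
{
    (void)host; (void)port;
    return 0; /* pretend the connection succeeded */
}

static ssize_t stub_send(int channel, const void *buf, size_t len)
{
    (void)channel;
    if (len > sizeof(loopback)) {
        return -1;
    }
    memcpy(loopback, buf, len);
    loopback_len = len;
    return (ssize_t)len;
}

static ssize_t stub_recv(int channel, void *buf, size_t len)
{
    (void)channel;
    size_t n = len < loopback_len ? len : loopback_len;
    memcpy(buf, loopback, n);
    return (ssize_t)n;
}

static int stub_flush(int channel)   { (void)channel; return 0; }
static void stub_cleanup(int channel) { (void)channel; }

static const MultiFDRdmaOps rdma_stub_ops = {
    stub_connect, stub_send, stub_recv, stub_flush, stub_cleanup,
};
```

The point of the ops-table shape is that multifd's generic code would only ever
call through these few pointers, so swapping RDMA for TCP (or removing it)
stays contained.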
> One thing to note is that the question here is not a pure performance
> comparison between RDMA and NICs. It's about helping us decide whether to
> drop RDMA; in other words, even if RDMA performs well, the community still
> has the right to drop it if nobody can actively work on and maintain it.
> If NICs can perform as well, that is just more reason to drop it, unless
> companies can help provide good support and work together.
>
We are happy to provide the necessary review and maintenance work for RDMA
if the community needs it.
CC'ing Chuan Zheng.
Regards,
-Gonglei