[Gluster-users] rdma transport uses scif0 device (iWARP) on a server with Xeon Phi
Fedele Stabile
fedele.stabile at fis.unical.it
Mon Jan 16 17:48:23 UTC 2017
Hello all,
in the end I found the cause of my problems:
if a Xeon Phi (MIC) card is installed in the Gluster server, the server
ends up with both a scif0 device (the virtual adapter that bridges to the
MIC) and a qib0 device (you can see the output of ibv_devinfo below).
By default Gluster picks scif0, which has no RDMA support.
So the question is: can we choose the device in some configuration file?
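From a quick look at the rdma transport source there seems to be a
transport.rdma.device-name option that can be set in the brick volfile,
so something like the following might force qib0. Treat it as an untested
sketch: the option name is my reading of the source, and the volume and
subvolume names here are made up for illustration.

    volume testvol-server
        type protocol/server
        option transport-type rdma
        # assumed option name, taken from the rdma transport source; untested
        option transport.rdma.device-name qib0
        # hypothetical subvolume name, replace with your brick stack
        subvolumes testvol-io-threads
    end-volume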
Thank you in advance,
Fedele
# ibv_devinfo
hca_id: scif0
        transport:                      iWARP (1)
        fw_ver:                         0.0.1
        node_guid:                      4c79:baff:fe66:0781
        sys_image_guid:                 4c79:baff:fe66:0781
        vendor_id:                      0x8086
        vendor_part_id:                 0
        hw_ver:                         0x1
        phys_port_cnt:                  1
                port:   1
                        state:          PORT_ACTIVE (4)
                        max_mtu:        4096 (5)
                        active_mtu:     4096 (5)
                        sm_lid:         1
                        port_lid:       1000
                        port_lmc:       0x00
                        link_layer:     Ethernet

hca_id: qib0
        transport:                      InfiniBand (0)
        fw_ver:                         0.0.0
        node_guid:                      0011:7500:006f:7446
        sys_image_guid:                 0011:7500:006f:7446
        vendor_id:                      0x1175
        vendor_part_id:                 29474
        hw_ver:                         0x2
        board_id:                       InfiniPath_QLE7340
        phys_port_cnt:                  1
                port:   1
                        state:          PORT_ACTIVE (4)
                        max_mtu:        4096 (5)
                        active_mtu:     2048 (4)
                        sm_lid:         1
                        port_lid:       34
                        port_lmc:       0x00
                        link_layer:     InfiniBand
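As a side note for anyone hitting the same problem: a quick way to confirm
which adapter actually completes RDMA traffic is the ibv_rc_pingpong
utility from libibverbs. The device name is the one from this box; the
hostname is a placeholder.

    # on the server node
    ibv_rc_pingpong -d qib0
    # on another node (replace server1 with the real server hostname)
    ibv_rc_pingpong -d qib0 server1

If the pingpong works on qib0 but not on scif0, that matches what I see
with Gluster.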