networking - Unable to connect testpmd to OVS+DPDK

Tags: networking openvswitch dpdk

Summary

I am trying to use testpmd as a sink for traffic arriving from a physical NIC, via OVS with DPDK.

When I run testpmd, it fails. The error message is very terse, so I cannot tell what went wrong.

How do I connect testpmd to a virtual port in OVS using DPDK?

Steps

I am mostly following these Mellanox instructions.

# step 5 - "Specify initial Open vSwitch (OVS) database to use"
export PATH=$PATH:/usr/local/share/openvswitch/scripts
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach

# step 6 - "Configure OVS to support DPDK ports"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

# step 7 - "Start OVS-DPDK service"
ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start # starts ovs-vswitchd only (not ovsdb-server), pointing it at the existing DB socket

# step 8 - "Configure the poll mode driver (PMD) to work with 2G hugepages and NUMA node0"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048" # 2048 MB = 2GB per NUMA socket


# step 9 - "Set core mask to enable several PMDs"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xFF0 # cores 4-11, 4 per NUMA node

# core masks are one-hot per core; the LSB is core 0
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x8 # core 3
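
The mask values above can be sanity-checked with shell arithmetic: bit N of the mask corresponds to core N, so cores 4-11 yield 0xFF0 and core 3 alone yields 0x8. A minimal sketch:

```shell
# Build a CPU mask from a list of core IDs: bit N is set iff core N is selected.
mask=0
for core in 4 5 6 7 8 9 10 11; do
  mask=$(( mask | (1 << core) ))
done
printf 'pmd-cpu-mask=0x%X\n' "$mask"           # cores 4-11 -> 0xFF0
printf 'dpdk-lcore-mask=0x%X\n' $(( 1 << 3 ))  # core 3 -> 0x8
```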

# step 10 - there is no step 10 in the doc linked above

# step 11 - Create an OVS bridge
BRIDGE="br0"
ovs-vsctl add-br $BRIDGE -- set bridge $BRIDGE datapath_type=netdev

Then, for the OVS elements, I tried to follow these steps:

# add physical NICs to bridge, must be named dpdk(\d+)
sudo ovs-vsctl add-port $BRIDGE dpdk0 \
   -- set Interface dpdk0 type=dpdk \
   options:dpdk-devargs=0000:5e:00.0 ofport_request=1
sudo ovs-vsctl add-port $BRIDGE dpdk1 \
   -- set Interface dpdk1 type=dpdk \
   options:dpdk-devargs=0000:5e:00.1 ofport_request=2

# add a virtual port to connect to testpmd/VM
# Not sure if I want dpdkvhostuser or dpdkvhostuserclient
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser0 \
   -- \
   set Interface dpdkvhostuser0 \
   type=dpdkvhostuser \
   options:n_rxq=2,pmd-rxq-affinity="0:4,1:6"  \
   ofport_request=3

sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser1 \
   -- \
   set Interface dpdkvhostuser1 \
   type=dpdkvhostuser \
   options:n_rxq=2,pmd-rxq-affinity="0:8,1:10"  \
   ofport_request=4

# add flows to join interfaces (based on ofport_request numbers)
sudo ovs-ofctl add-flow $BRIDGE in_port=1,action=output:3
sudo ovs-ofctl add-flow $BRIDGE in_port=3,action=output:1
sudo ovs-ofctl add-flow $BRIDGE in_port=2,action=output:4
sudo ovs-ofctl add-flow $BRIDGE in_port=4,action=output:2
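
To confirm the wiring took effect, the flow table and the actual ofport assignments can be inspected (a sketch; assumes the bridge and ports created in the steps above):

```shell
# Inspect the installed flows and the name->ofport mapping on the bridge.
sudo ovs-ofctl dump-flows br0                        # should list the four in_port/output pairs
sudo ovs-vsctl --columns=name,ofport list Interface  # verify ofport_request was honored
```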

Then I run testpmd:

sudo -E $DPDK_DIR/x86_64-native-linuxapp-gcc/app/testpmd  \
   --vdev virtio_user0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0 \
   --vdev virtio_user1,path=/usr/local/var/run/openvswitch/dpdkvhostuser1 \
   -c 0x00fff000  \
   -n 1 \
   --socket-mem=2048,2048  \
   --file-prefix=testpmd \
   --log-level=9 \
   --no-pci \
   -- \
   --port-numa-config=0,0,1,0 \
   --ring-numa-config=0,1,0,1,1,0 \
   --numa  \
   --socket-num=0 \
   --txd=512 \
   --rxd=512 \
   --mbcache=512 \
   --rxq=1 \
   --txq=1 \
   --nb-cores=4 \
   -i \
   --rss-udp \
   --auto-start

The output is:

...
EAL: lcore 18 is ready (tid=456c700;cpuset=[18])
EAL: lcore 21 is ready (tid=2d69700;cpuset=[21])
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=327680, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=327680, size=2176, socket=1
Configuring Port 0 (socket 0)
Fail to configure port 0
EAL: Error - exiting with code: 1
  Cause: Start ports failed

The bottom of /usr/local/var/log/openvswitch/ovs-vswitchd.log reads:

2018-11-30T02:45:49.115Z|00026|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been added on numa node 0
2018-11-30T02:45:49.115Z|00027|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00028|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2018-11-30T02:45:49.115Z|00029|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/usr/local/var/run/openvswitch/dpdkvhostuser0'changed to 'enabled'
2018-11-30T02:45:49.115Z|00030|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00031|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2018-11-30T02:45:49.278Z|00032|dpdk|ERR|VHOST_CONFIG: recvmsg failed
2018-11-30T02:45:49.279Z|00033|dpdk|INFO|VHOST_CONFIG: vhost peer closed
2018-11-30T02:45:49.280Z|00034|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been removed

What is causing this failure?

Should I be using dpdkvhostuserclient instead of dpdkvhostuser?

What else I've tried

  • Looking in /var/log/messages for more information - but it is just a copy of stdout and stderr.
  • Rebooting
  • Searching the OVS documentation, but it doesn't mention "logs"

I have also tried varying my testpmd command arguments (documentation here):

  • Removing --no-pci. The result was:

        Configuring Port 0 (socket 0)
        Port 0: 24:8A:07:9E:94:94
        Configuring Port 1 (socket 0)
        Port 1: 24:8A:07:9E:94:95
        Configuring Port 2 (socket 0)
        Fail to configure port 2
        EAL: Error - exiting with code: 1
          Cause: Start ports failed

    Those MAC addresses belong to the physical NICs I have already connected to OVS.
  • Removing --auto-start: same result.
  • --nb-cores=1: same result.
  • Removing the second --vdev: "Warning! Cannot handle an odd number of ports with the current port topology. Configuration must be changed to have an even number of ports, or relaunch application with --port-topology=chained". When I add --port-topology=chained, I end up with the original error.

Other information

  • DPDK 17.11.4
  • OVS 2.10.1
  • NIC: Mellanox ConnectX-5
  • OS: CentOS 7.5
  • My NIC is on NUMA node 0
  • When I run ip addr, I see an interface named br0 with the same MAC address as my physical NIC (p3p1, when it is bound to the kernel)

When I run sudo ovs-vsctl show, I see:

d3e721eb-6aeb-44c0-9fa8-5fcf023008c5
    Bridge "br0"
        Port "dpdkvhostuser1"
            Interface "dpdkvhostuser1"
                type: dpdkvhostuser
                options: {n_rxq="2,pmd-rxq-affinity=0:8,1:10"}
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.1"}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.0"}
        Port "br0"
            Interface "br0"
                type: internal
        Port "dpdkvhostuser0"
            Interface "dpdkvhostuser0"
                type: dpdkvhostuser
                options: {n_rxq="2,pmd-rxq-affinity=0:4,1:6"}
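
Note that in the show output above, n_rxq appears to have swallowed the affinity string into a single option value. In the OVS-DPDK documentation these are two separate per-Interface keys: n_rxq under options: and pmd-rxq-affinity under other_config:. A sketch of setting them separately:

```shell
# Set the queue count and rxq->PMD affinity as separate keys on the Interface,
# rather than as one combined options:n_rxq string.
sudo ovs-vsctl set Interface dpdkvhostuser0 options:n_rxq=2
sudo ovs-vsctl set Interface dpdkvhostuser0 other_config:pmd-rxq-affinity="0:4,1:6"
```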

Edit: added the contents of /usr/local/var/log/openvswitch/ovs-vswitchd.log

Best answer

  1. Indeed, we need dpdkvhostuser, not the client variant.

  2. The queue count in OVS options:n_rxq=2 does not match the queue count in testpmd --txq=1.
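
Following point 2, one way to make the two sides agree (a sketch, not verified on this hardware; only the changed arguments are shown, the elided ones are as in the question) is to request two queues per virtio_user vdev and per port in testpmd, matching n_rxq=2 on the OVS side:

```shell
# Sketch: match testpmd's queue count to OVS's n_rxq=2.
# queues=2 on each vdev and --rxq/--txq=2 on the app side (other args as before).
sudo -E $DPDK_DIR/x86_64-native-linuxapp-gcc/app/testpmd \
   --vdev virtio_user0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0,queues=2 \
   --vdev virtio_user1,path=/usr/local/var/run/openvswitch/dpdkvhostuser1,queues=2 \
   ... \
   -- --rxq=2 --txq=2 ...
```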

Regarding networking - unable to connect testpmd to OVS+DPDK, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53530589/
