This document covers the software architecture, the development environment, and optimization techniques. For programming examples and how to compile and run them, see the DPDK Sample Applications User Guide. For basic information on compiling and running applications, see the DPDK Getting Started Guide.
2022-06-23 10:20:21 4.48MB dpdk
lwip_dpdk: a DPDK-accelerated user-space lwip protocol stack, built on lwip-2.1.2 and dpdk-stable-17.11.9. What is different? DPDK and lwip themselves are unmodified; a new dpdkif device is added under contrib-2.1.0/ports/unix/port/netif/, so the lwip stack can receive and transmit packets through DPDK drivers. This makes lwip a "truly user-space" stack. A socket application is also provided in /ports/unix/socket_client (client) and /ports/unix/socket_server (server); you can follow its logic to write your own applications. Currently the DPDK worker thread is pinned to logical core 1, while other threads such as "tcpipthread" are not pinned to any particular core, so make sure you have at least 2 CPUs.
2022-06-10 18:19:47 4.68MB HTML
A Chinese-language overview of DPDK's internals and optimizations; a useful reference for DPDK development.
2022-05-13 10:27:35 8.03MB dpdk
F-Stack. Introduction: With the rapid development of network interface cards, the poor performance of in-kernel packet processing in Linux has become the bottleneck of modern network systems, while the ever-growing demands of the Internet call for a higher-performance packet-processing solution. Kernel-bypass techniques have therefore attracted increasing attention; DPDK, netmap, and PF_RING are comparable technologies in this space. The core idea of kernel bypass is that Linux handles only the control path, while the entire data path is processed in user space. Kernel bypass thus avoids the performance penalties of in-kernel packet copies, thread scheduling, system calls, and interrupts, and it opens the door to further optimizations. Among these technologies, DPDK is widely used because of its more thorough isolation from kernel scheduling and its active community support. F-Stack is an open-source high-performance network framework based on DPDK, with the following features: ultra-high network performance with the NIC at full load: 10 million concurrent connections, 5 million RPS, 1 million CPS; a ported FreeBSD 11.01 user-space stack, which provides
2022-05-04 00:05:40 54.25MB C
An overview of DPDK (dpdk_engineer_manual).
2022-02-17 14:00:40 281.63MB dpdk
DPDK installation package.
2022-01-10 19:06:09 10.26MB dpdk
dpdk-cmdline source code.
2022-01-06 09:04:06 11.2MB dpdk-cmdline dpdk cmdline c
Detailed walkthrough of the DPDK installation steps.
2021-12-30 14:01:14 31KB dpdk
Corresponds to the dpdk-20.11.3 release.
2021-12-30 13:03:17 12.16MB dpdk l3fwd dpdk-l3fwd 源码
2. DPDK Release 18.08
2.1. New Features
- Added support for the Hyper-V netvsc PMD. The new netvsc poll mode driver provides native support for networking on Hyper-V. See the netvsc poll mode driver NIC guide for more details on this new driver.
- Added Flow API support for the CXGBE PMD, to offload flows to Chelsio T5/T6 NICs. Support added for: wildcard (LE-TCAM) and exact (HASH) match filters; match items: physical ingress port, IPv4, IPv6, TCP and UDP; action items: queue, drop, count, and physical egress port redirect.
- Added ixgbe preferred Rx/Tx parameters. Rather than providing explicit Rx and Tx parameters such as queue and burst sizes, applications can request that the EAL instead use preferred values supplied by the PMD, falling back to defaults within the EAL if the PMD does not provide any. The ixgbe PMD now provides such tuned values.
- Added descriptor status check support for fm10k. The rte_eth_rx_descriptor_status and rte_eth_tx_descriptor_status APIs are now supported by fm10k.
- Updated the enic driver: added a low-cycle-count Tx handler for no-offload Tx; added a low-cycle-count Rx handler for non-scattered Rx; minor performance improvements to the scattered Rx handler; added handlers to add/delete VxLAN port numbers; added a devarg to specify the ingress VLAN rewrite mode.
- Updated the mlx5 driver: added port representors support; added Flow API support for e-switch rules, including ACTION_PORT_ID, ACTION_DROP, ACTION_OF_POP_VLAN, ACTION_OF_PUSH_VLAN, ACTION_OF_SET_VLAN_VID, ACTION_OF_SET_VLAN_PCP and ITEM_PORT_ID; added support for 32-bit compilation.
- Added TSO support for the mlx4 driver, from MLNX_OFED_4.4 and above.
- SoftNIC PMD rework. The SoftNIC PMD infrastructure has been restructured to use the Packet Framework, which makes it more flexible, modular, and easier to extend with new functionality in the future.
- Updated the AESNI MB PMD with additional support for 3DES with 8-, 16- and 24-byte keys.
- Added a new compression PMD using Intel's QuickAssist (QAT) device family, for compression and decompression operations in hardware. See the Intel(R) QuickAssist (QAT) compression driver guide for details on this new driver.
- Updated the ISA-L PMD: added support for chained mbufs (input and output).
2021-12-29 17:08:20 9.99MB DPDK