vmxnet3: fix ethtool ring buffer size setting

Noticed that vmxnet3's get_ringparam function was returning the summation of all
ring buffers on a NIC, rather than just the size of any one ring.  This causes
problems when a vmxnet3 instance has multiple queues, as ethtool, when setting
ring parameters, first gets the current ring parameters to set the existing
values in the set_ringparam command.  The result is that unless both rx and tx
ring sizes are set in a single operation, whichever ring is not set will
silently have its ring count multiplied by the number of queues on the NIC until
it reaches a driver-defined maximum value.
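
As a rough illustration of the failure mode (assuming a hypothetical 4-queue NIC
with 512-entry rings; eth0 is a placeholder device name):

  # Before the fix, this reports rx_pending/tx_pending as 2048
  # (512 * 4 queues) rather than 512.
  ethtool -g eth0
  # Only rx is specified here, so ethtool writes the inflated tx value (2048)
  # back via set_ringparam; repeating this keeps growing the tx ring until it
  # hits the driver-defined maximum.
  ethtool -G eth0 rx 512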

Fix it by not multiplying the rx and tx ring sizes by the number of queues in
the system, as every other driver does.  Tested by myself successfully.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Shreyas Bhatewara <sbhatewara@vmware.com>
CC: "VMware, Inc." <pv-drivers@vmware.com>
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

@@ -448,10 +448,8 @@ vmxnet3_get_ringparam(struct net_device *netdev,
 	param->rx_mini_max_pending = 0;
 	param->rx_jumbo_max_pending = 0;
-	param->rx_pending = adapter->rx_queue[0].rx_ring[0].size *
-			    adapter->num_rx_queues;
-	param->tx_pending = adapter->tx_queue[0].tx_ring.size *
-			    adapter->num_tx_queues;
+	param->rx_pending = adapter->rx_queue[0].rx_ring[0].size;
+	param->tx_pending = adapter->tx_queue[0].tx_ring.size;
 	param->rx_mini_pending = 0;
 	param->rx_jumbo_pending = 0;
 }