The verbose flag was added in commit 81c7288b17 ("sched: cls: enable
verbose logging") to avoid suppressing logging of error messages that
occur "when the rule is not to be exclusively executed by the hardware".
However, such error messages are currently suppressed when setup of flow
action fails. Take the verbose flag into account to avoid suppressing
error messages. This is done by using the extack pointer initialized by
tc_cls_common_offload_init(), which performs the necessary checks.
Before:
# tc filter add dev dummy0 ingress pref 1 proto ip flower dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
# tc filter add dev dummy0 ingress pref 2 proto ip flower verbose dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
After:
# tc filter add dev dummy0 ingress pref 1 proto ip flower dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
# tc filter add dev dummy0 ingress pref 2 proto ip flower verbose dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
Warning: cls_flower: Failed to setup flow action.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tony Nguyen says:
====================
100GbE Intel Wired LAN Driver Updates 2022-04-07
Alexander Lobakin says:
This hunts down several places around packet templates/dummies for
switch rules which are either repetitive, fragile or just not
really readable.
Adding new packet templates, and reviewing such changes, are both
common needs, so try to simplify both with the help of a pair of
macros and aliases.
ice_find_dummy_packet() became very complex at this point with tons
of nested if-elses. It clearly showed this approach does not scale,
so convert its logic to a simple mask-match lookup over a static
const array.
bloat-o-meter is happy about that (built w/ LLVM 13):
add/remove: 0/1 grow/shrink: 1/1 up/down: 2/-1058 (-1056)
Function old new delta
ice_fill_adv_dummy_packet 289 291 +2
ice_adv_add_update_vsi_list 201 - -201
ice_add_adv_rule 2950 2093 -857
Total: Before=414512, After=413456, chg -0.25%
add/remove: 53/52 grow/shrink: 0/0 up/down: 4660/-3988 (672)
RO Data old new delta
ice_dummy_pkt_profiles - 672 +672
Total: Before=37895, After=38567, chg +1.77%
Diffstat also looks nice, and adding new packet templates now takes
less lines.
We'll probably come out with dynamic template crafting in a while,
but for now let's improve what we have currently.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Potin Lai says:
====================
mdio: aspeed: Add Clause 45 support for Aspeed MDIO
This patch series adds Clause 45 support to the Aspeed MDIO driver and
separates the C22 and C45 implementations into different functions.
LINK: [v1] https://lore.kernel.org/all/20220329161949.19762-1-potin.lai@quantatw.com/
LINK: [v2] https://lore.kernel.org/all/20220406170055.28516-1-potin.lai@quantatw.com/
Changes v2 --> v3:
- sort local variable sequence in reverse Christmas tree format.
Changes v1 --> v2:
- add C45 to probe_capabilities
- break one patch into 3 small patches
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add Clause 45 support to the Aspeed MDIO driver.
Signed-off-by: Potin Lai <potin.lai@quantatw.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add the following helper functions and move the implementation out of
aspeed_mdio_read() and aspeed_mdio_write(); a rough dispatch sketch
follows the list below.
c22:
- aspeed_mdio_read_c22()
- aspeed_mdio_write_c22()
c45:
- aspeed_mdio_read_c45()
- aspeed_mdio_write_c45()
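A rough sketch of how the top-level mii_bus accessors can dispatch to
these helpers, assuming the MII_ADDR_C45 encoding of C45 accesses in
regnum that the MDIO core used at the time of this series:

  static int aspeed_mdio_read(struct mii_bus *bus, int addr, int regnum)
  {
          /* C45 accesses are flagged in regnum by the MDIO core */
          if (regnum & MII_ADDR_C45)
                  return aspeed_mdio_read_c45(bus, addr, regnum);

          return aspeed_mdio_read_c22(bus, addr, regnum);
  }

  static int aspeed_mdio_write(struct mii_bus *bus, int addr, int regnum,
                               u16 val)
  {
          if (regnum & MII_ADDR_C45)
                  return aspeed_mdio_write_c45(bus, addr, regnum, val);

          return aspeed_mdio_write_c22(bus, addr, regnum, val);
  }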
Signed-off-by: Potin Lai <potin.lai@quantatw.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add aspeed_mdio_op() and aspeed_mdio_get_data() for register access.
aspeed_mdio_op() handles an operation: it writes the command to the
control register, then waits until the operation has finished (bit 31
is cleared).
aspeed_mdio_get_data() fetches the result of the operation from the
data register.
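A minimal sketch of this split, assuming a hypothetical register
layout (the offsets, the busy bit and the timeout below are
illustrative, not the actual Aspeed register map):

  #include <linux/bits.h>
  #include <linux/io.h>
  #include <linux/iopoll.h>

  #define MDIO_CTRL_SKETCH        0x0     /* assumed control register */
  #define MDIO_DATA_SKETCH        0x4     /* assumed data register */
  #define MDIO_CTRL_FIRE_SKETCH   BIT(31) /* set while op is in progress */

  struct aspeed_mdio_sketch {
          void __iomem *base;
  };

  static int aspeed_mdio_op_sketch(struct aspeed_mdio_sketch *ctx, u32 cmd)
  {
          u32 ctrl;

          /* Kick off the operation... */
          writel(cmd | MDIO_CTRL_FIRE_SKETCH,
                 ctx->base + MDIO_CTRL_SKETCH);

          /* ...then wait for the controller to clear bit 31. */
          return readl_poll_timeout(ctx->base + MDIO_CTRL_SKETCH, ctrl,
                                    !(ctrl & MDIO_CTRL_FIRE_SKETCH),
                                    20, 2000);
  }

  static u16 aspeed_mdio_get_data_sketch(struct aspeed_mdio_sketch *ctx)
  {
          /* The result of the last operation sits in the low 16 bits. */
          return readl(ctx->base + MDIO_DATA_SKETCH) & 0xffff;
  }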
Signed-off-by: Potin Lai <potin.lai@quantatw.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver for ATM Ambassador devices spews build warnings on
microblaze. The virt_to_bus() calls discard the volatile keyword.
The right thing to do would be to migrate this driver to a modern
DMA API but it seems unlikely anyone is actually using it.
There have been no fixes or functional changes here since
the git era began.
In fact it sounds like the FW loading was broken from 2008
'til 2012 - see commit fcdc90b025 ("atm: forever loop loading
ambassador firmware").
Let's remove this driver; there isn't much changing in the APIs,
and if users come forward we can apologize and revert.
Link: https://lore.kernel.org/all/20220321144013.440d7fc0@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan says:
====================
bnxt: Support XDP multi buffer
This series adds XDP multi buffer support, allowing MTU to go beyond
the page size limit.
v4: Rebase with latest net-next
v3: Simplify page mode buffer size calculation
Check to make sure XDP program supports multipage packets
v2: Fix uninitialized variable warnings in patch 1 and 10.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Allow aggregation buffers to be in place in the receive path and
allow XDP programs to be attached when using an MTU larger than 4K.
v3: Add a check to make sure the XDP program supports multipage packets.
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds the following features:
- Support for XDP_TX and XDP_DROP action when using xdp_buff
with frags
- Support for freeing all frags attached to an xdp_buff (see the
sketch after this list)
- Cleanup of TX ring buffers after transmits complete
- Slight change in definition of bnxt_sw_tx_bd since nr_frags
and RX producer may both need to be used
- Clear out skb_shared_info at the end of the buffer
v2: Fix uninitialized variable warning in bnxt_xdp_buff_frags_free().
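A minimal sketch of what freeing all frags attached to a multi-buffer
xdp_buff can look like, using the generic XDP multi-buffer helpers
(the function name and the page_pool recycling path are assumptions
based on this description, not necessarily the driver's exact code):

  static void bnxt_xdp_frags_free_sketch(struct page_pool *pool,
                                         struct xdp_buff *xdp)
  {
          struct skb_shared_info *shinfo;
          int i;

          if (!xdp_buff_has_frags(xdp))
                  return;

          /* The frags live in the shared info placed in the head
           * buffer's tailroom; release each page to the page pool.
           */
          shinfo = xdp_get_shared_info_from_buff(xdp);
          for (i = 0; i < shinfo->nr_frags; i++)
                  page_pool_recycle_direct(pool,
                                  skb_frag_page(&shinfo->frags[i]));

          shinfo->nr_frags = 0;
  }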
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since we have an xdp_buff with frags there needs to be a way to
convert that into a valid sk_buff in the event that XDP_PASS is
the resulting operation. This adds a new rx_skb_func when the
netdev has an MTU that prevents the packets from sitting in a
single page.
This also makes sure that GRO/LRO stay disabled even when using
the aggregation ring for large buffers.
v3: Use BNXT_PAGE_MODE_BUF_SIZE for build_skb
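A sketch of the general conversion pattern (build_skb() over the head
buffer, then transplanting the frags via xdp_update_skb_shared_info());
the buffer size and truesize accounting below are simplifying
assumptions, not the driver's exact values:

  static struct sk_buff *xdp_mb_to_skb_sketch(struct xdp_buff *xdp)
  {
          struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
          unsigned int nr_frags = sinfo->nr_frags;
          unsigned int frags_len = 0;
          struct sk_buff *skb;
          int i;

          /* Read frag metadata before build_skb() re-initializes the
           * shared info living in the head buffer's tailroom.
           */
          for (i = 0; i < nr_frags; i++)
                  frags_len += skb_frag_size(&sinfo->frags[i]);

          /* PAGE_SIZE stands in for the driver's page-mode buffer size */
          skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
          if (!skb)
                  return NULL;

          skb_reserve(skb, xdp->data - xdp->data_hard_start);
          skb_put(skb, xdp->data_end - xdp->data);

          if (nr_frags)
                  xdp_update_skb_shared_info(skb, nr_frags, frags_len,
                                  nr_frags * PAGE_SIZE,
                                  xdp_buff_is_frag_pfmemalloc(xdp));
          return skb;
  }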
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If we are using aggregation rings with XDP enabled, allocate page
buffers for the aggregation rings from the page_pool.
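A small sketch of the allocation choice described above (the helper
name and the fallback path are illustrative assumptions):

  static struct page *bnxt_alloc_agg_page_sketch(struct page_pool *pool,
                                                 gfp_t gfp)
  {
          /* With XDP enabled the aggregation ring draws from the page
           * pool; otherwise fall back to the regular page allocator.
           */
          if (pool)
                  return page_pool_dev_alloc_pages(pool);

          return alloc_page(gfp);
  }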
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Modify ring header data split and jumbo parameters to account
for the fact that the design for XDP multibuffer puts close to
the first 4k of data in a page, with the remaining portions of
the packet going in the aggregation ring.
v3: Simplified code around initial buffer size calculation
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Set the pfmemalloc flag in the xdp_buff so that it can be
copied to the skb if needed for an XDP_PASS action.
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds a new function that will read pages from the
aggregation ring and create an xdp_buff with frags based on
the entries in the aggregation ring.
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Clarify that this is reading buffers from the aggregation ring.
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rather than operating on an sk_buff, add frags from the aggregation
ring into the frags of an skb_shared_info. This will allow the
caller to use either an sk_buff or xdp_buff.
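A minimal sketch of appending one aggregation-ring page into a shared
info that may belong either to an sk_buff (skb_shinfo()) or to an
xdp_buff (xdp_get_shared_info_from_buff()); the helper name is
illustrative:

  static void agg_page_to_shinfo_sketch(struct skb_shared_info *shinfo,
                                        struct page *page,
                                        unsigned int off, unsigned int len)
  {
          skb_frag_t *frag = &shinfo->frags[shinfo->nr_frags];

          /* Fill the next free frag slot and account for it. */
          __skb_frag_set_page(frag, page);
          skb_frag_off_set(frag, off);
          skb_frag_size_set(frag, len);
          shinfo->nr_frags++;
  }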
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This will be used to determine if bnxt_rx_xdp should be called
rather than calling it every time.
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move initialization of xdp_buff outside of bnxt_rx_xdp to prepare
for allowing bnxt_rx_xdp to operate on multibuffer xdp_buffs.
v2: Fix uninitialized variable warnings in bnxt_xdp.c.
v3: Add new define BNXT_PAGE_MODE_BUF_SIZE
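A sketch of what initializing the xdp_buff in the rx path (outside the
XDP run function) looks like with the generic helpers; the ring/offset
parameters are illustrative:

  static void bnxt_xdp_buff_init_sketch(struct xdp_rxq_info *rxq,
                                        u8 *data_ptr, unsigned int offset,
                                        unsigned int len,
                                        struct xdp_buff *xdp)
  {
          /* frame_sz covers headroom + data + shared-info tailroom */
          xdp_init_buff(xdp, PAGE_SIZE, rxq);
          xdp_prepare_buff(xdp, data_ptr - offset, offset, len, false);
  }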
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski says:
====================
tls: rx: random refactoring part 1
TLS Rx refactoring. Part 1 of 3. A couple of features to follow.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of tls_device poking into internals of the message,
return 1 from tls_device_decrypted() if the device handled
the decryption.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use early return and a jump label to remove two indentation levels.
No functional changes.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
We inform the applications that data is available when
the record is received. Decryption happens inline inside
recvmsg or splice call. Generating another wakeup inside
the decryption handler seems pointless as someone must
be actively reading the socket if we are executing this
code.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The TLS 1.3 padding-length logic searches for the content_type from
the end of the text. IMHO the code is easier to parse if we calculate
an offset and decrement it, rather than trying to maintain a positive
offset from the end of the record called "back".
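For reference, the padding scheme being walked is RFC 8446's: the real
content type byte is followed by zero padding up to the end of the
plaintext. A standalone sketch of the offset-and-decrement approach:

  static int tls13_strip_padding_sketch(const u8 *plaintext, int len,
                                        u8 *type)
  {
          int offset = len - 1;

          /* Walk backwards over the zero padding... */
          while (offset >= 0 && plaintext[offset] == 0)
                  offset--;

          /* ...a record of nothing but padding is malformed. */
          if (offset < 0)
                  return -EBADMSG;

          *type = plaintext[offset];
          return offset;          /* length of the inner plaintext */
  }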
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
TLS 1.3 has to strip padding, and it starts out 16 bytes
from the end of the record. Make it clear this is because
of the auth tag.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
We set the record type in tls_read_size(), can as well init
the tlm->decrypted field there.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Similar justification to previous change, the information
about decryption status belongs in the skb.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Original TLS implementation was handling one record at a time.
It stashed the type of the record inside tls context (per socket
structure) for convenience. When async crypto support was added
[1] the author had to use skb->cb to store the type per-message.
The use of skb->cb overlaps with strparser, however, so a hybrid
approach was taken where type is stored in context while parsing
(since we parse a message at a time) but once parsed it's copied
to skb->cb.
Recently a workaround for sockmaps [2] exposed the previously
private struct _strp_msg and started a trend of adding user
fields directly in strparser's header. This is cleaner than
storing information about an skb in the context.
This change is not strictly necessary, but IMHO the ownership
of the context field is confusing. Information naturally
belongs to the skb.
[1] commit 94524d8fc9 ("net/tls: Add support for async decryption of tls records")
[2] commit b2c4618162 ("bpf, sockmap: sk_skb data_end access incorrect when src_reg = dst_reg")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pointless else branch after goto makes the code harder to refactor
down the line.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
'recv_end:' checks num_async and decrypted, and is then followed
by the 'end' label. Since we know that decrypted and num_async
are 0 at the start we can jump to 'end'.
Move the init of decrypted and num_async to let the compiler
catch if I'm wrong.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is useful for debugging purposes.
Notice that the packet descriptor is in "private" guest memory, so
that Hyper-V cannot tamper with it.
While at it, remove two unnecessary u64 casts.
Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
The define for_each_pci_dev(d) is:
while ((d = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, d)) != NULL)
Thus, the list iterator 'd' is always non-NULL so it doesn't need to
be checked. So just remove the unnecessary NULL check. Also remove the
unnecessary initializer because the list iterator is always initialized.
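A usage sketch of the pattern (do_something_with() is a placeholder);
starting the iterator from NULL simply tells pci_get_device() to begin
the scan at the head of the PCI device list:

  struct pci_dev *pdev = NULL;

  for_each_pci_dev(pdev) {
          /* pdev is assigned by pci_get_device() in the loop condition
           * itself, so it can never be NULL here; a NULL check in the
           * body is dead code.
           */
          do_something_with(pdev);
  }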
Signed-off-by: Xiaomeng Tong <xiam0nd.tong@gmail.com>
Link: https://lore.kernel.org/r/20220406015921.29267-1-xiam0nd.tong@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Even if an IOMMU might be present for some PCI segment in the system,
that doesn't necessarily mean it provides translation for the device
we care about. It appears that what we care about here is specifically
whether DMA mapping ops involve any IOMMU overhead or not, so check for
translation actually being active for our device.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Edward Cree <ecree.xilinx@gmail.com>
Link: https://lore.kernel.org/r/7350f957944ecfce6cce90f422e3992a1f428775.1649166055.git.robin.murphy@arm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
As noted in the original commit 685343fc3b ("net: add
name_assign_type netdev attribute")
... when the kernel has given the interface a name using global
device enumeration based on order of discovery (ethX, wlanY, etc)
... are labelled NET_NAME_ENUM.
That describes this case, so set the default for the devices here to
NET_NAME_ENUM. Current popular network setup tools like systemd use
this only to warn if you're setting static settings on interfaces that
might change, so this is expected to only lead to better user
information, not to interfaces being renamed, etc.
Signed-off-by: Ian Wienand <iwienand@redhat.com>
Link: https://lore.kernel.org/r/20220406093635.1601506-1-iwienand@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
The congestion status of a TCP flow may be updated since there
is congestion between the TCP sender and receiver. It makes sense to
add a tracepoint to the function that sets the congestion status, so
that the duration of each CC state can be summed and the performance
of the network and of the congestion algorithm can be evaluated. The
background of this patch is below.
Link: https://github.com/iovisor/bcc/pull/3899
Signed-off-by: Ping Gan <jacky_gam_2001@163.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20220406010956.19656-1-jacky_gam_2001@163.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Increment rx_otherhost_dropped counter when packet dropped due to
mismatched dest MAC addr.
An example of when this drop can occur is when manually crafting raw
packets that will be consumed by a user space application via a tap
device. For testing purposes, local traffic was generated using trafgen
for the client and netcat to start a server.
Tested: Created 2 netns, sent 1 packet using trafgen from 1 to the other
with "{eth(daddr=$INCORRECT_MAC...}", verified that iproute2 showed the
counter was incremented. (Also had to modify iproute2 to show the stat,
additional patch for that coming next.)
Signed-off-by: Jeffrey Ji <jeffreyji@google.com>
Reviewed-by: Brian Vazquez <brianvv@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20220406172600.1141083-1-jeffreyjilinux@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski says:
====================
net: create a net/core/ internal header
We are adding stuff to netdevice.h which really should be
local to net/core/. Create a net/core/dev.h header and use it.
Minor cleanups precede.
====================
Link: https://lore.kernel.org/r/20220406213754.731066-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
There's a number of functions and static variables used
under net/core/ but not from the outside. We currently
dump most of them into netdevice.h. That's bad for many reasons:
- netdevice.h is very cluttered, hard to figure out
what the APIs are;
- netdevice.h is very long;
- we have to touch netdevice.h more which causes expensive
incremental builds.
Create a header under net/core/ and move some declarations.
The new header is also a bit of a catch-all but that's
fine; if we create more specific headers people will
likely over-think where their declarations fit best,
and end up putting them in netdevice.h again.
More work should be done on splitting netdevice.h into more
targeted headers, but that'd be more time consuming, so take
small steps.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
We have a bunch of functions which are only used under
net/core/ yet they get exported. Remove the exports.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
The following patch will hide that typedef. There seems to be
no strong reason for hyperv to use it, so let's not.
Acked-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Merge tag 'random-5.18-rc2-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Pull random number generator fixes from Jason Donenfeld:
- Another fixup to the fast_init/crng_init split, this time in how much
entropy is being credited, from Jan Varho.
- As discussed, we now opportunistically call try_to_generate_entropy()
in /dev/urandom reads, as a replacement for the reverted commit. I
opted to not do the more invasive wait_for_random_bytes() change at
least for now, preferring to do something smaller and more obvious
for the time being, but maybe that can be revisited as things evolve
later.
- Userspace can use FUSE or userfaultfd or simply move a process to
idle priority in order to make a read from the random device never
complete, which breaks forward secrecy, fixed by overwriting
sensitive bytes early on in the function.
- Jann Horn noticed that /dev/urandom reads were only checking for
pending signals if need_resched() was true, a bug going back to the
genesis commit, now fixed by always checking for signal_pending() and
calling cond_resched(). This explains various noticeable signal
delivery delays I've seen in programs over the years that do long
reads from /dev/urandom.
- In order to be more like other devices (e.g. /dev/zero) and to
mitigate the impact of fixing the above bug, which has been around
forever (users have never really needed to check the return value of
read() for medium-sized reads and so perhaps many didn't), we now
move signal checking to the bottom part of the loop, and do so every
PAGE_SIZE-bytes.
* tag 'random-5.18-rc2-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
random: check for signals every PAGE_SIZE chunk of /dev/[u]random
random: check for signal_pending() outside of need_resched() check
random: do not allow user to keep crng key around on stack
random: opportunistically initialize on /dev/urandom reads
random: do not split fast init input in add_hwgenerator_randomness()
Merge tag 'ata-5.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata
Pull ata fixes from Damien Le Moal:
- Fix a compilation warning due to an uninitialized variable in
ata_sff_lost_interrupt(), from me.
- Fix invalid internal command tag handling in the sata_dwc_460ex
driver, from Christian.
- Disable READ LOG DMA EXT with Samsung 840 EVO SSDs as this command
causes the drives to hang, from Christian.
- Change the config option CONFIG_SATA_LPM_POLICY back to its original
name CONFIG_SATA_LPM_MOBILE_POLICY to avoid potential problems with
users losing their configuration (as discussed during the merge
window), from Mario.
* tag 'ata-5.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata:
ata: ahci: Rename CONFIG_SATA_LPM_POLICY configuration item back
ata: libata-core: Disable READ LOG DMA EXT for Samsung 840 EVOs
ata: sata_dwc_460ex: Fix crash due to OOB write
ata: libata-sff: Fix compilation warning in ata_sff_lost_interrupt()
Trade text size for rodata size and replace tons of nested if-elses
with const mask-match-based structs. Almost the entire
ice_find_dummy_packet() now becomes just one plain while-increment
loop. The order in ice_dummy_pkt_profiles[] should be the same as the
previous if-else order, with masks becoming less and less strict
through the array to follow the original code flow.
Apart from removing 80 LoC of 4-level if-elses, it brings a solid
text size optimization:
add/remove: 0/1 grow/shrink: 1/1 up/down: 2/-1058 (-1056)
Function old new delta
ice_fill_adv_dummy_packet 289 291 +2
ice_adv_add_update_vsi_list 201 - -201
ice_add_adv_rule 2950 2093 -857
Total: Before=414512, After=413456, chg -0.25%
add/remove: 53/52 grow/shrink: 0/0 up/down: 4660/-3988 (672)
RO Data old new delta
ice_dummy_pkt_profiles - 672 +672
Total: Before=37895, After=38567, chg +1.77%
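A generic sketch of the mask-match lookup pattern described above (all
names and fields are illustrative, not the driver's):

  struct dummy_pkt_profile_sketch {
          u32             match;    /* attribute bits this template needs */
          const u8        *pkt;     /* template packet bytes */
          u16             pkt_len;
  };

  static const u8 dummy_default_pkt_sketch[42] = { /* MAC/IPv4/UDP */ };

  static const struct dummy_pkt_profile_sketch pkt_profiles_sketch[] = {
          /* most specific profiles first, ... */
          /* ...and a catch-all terminator (match == 0) last: */
          { 0, dummy_default_pkt_sketch, sizeof(dummy_default_pkt_sketch) },
  };

  static const struct dummy_pkt_profile_sketch *
  find_dummy_packet_sketch(u32 flags)
  {
          const struct dummy_pkt_profile_sketch *p = pkt_profiles_sketch;

          /* Plain while-increment loop replacing the nested if-else
           * chain: stop at the first profile whose required bits are
           * all present in the requested flags.
           */
          while ((p->match & flags) != p->match)
                  p++;

          return p;
  }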
Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Tested-by: Marcin Szycik <marcin.szycik@linux.intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Declarations of dummy/template packet headers and offsets can be
minified to improve readability and simplify adding new templates.
Move all the repetitive constructions into two macros and let them
do the name and type expansions.
Linewrap removal is yet another positive side effect.
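A sketch of the kind of declaration macro meant here (the macro name
and the naming scheme are assumptions, to show the idea of expanding
both the type and the canonical identifier):

  #define DECLARE_DUMMY_PKT_SKETCH(name)                          \
          static const u8 dummy_##name##_packet[]

  /* Each new template then becomes a single short declaration: */
  DECLARE_DUMMY_PKT_SKETCH(ipv4_tcp) = {
          0x00, 0x00, 0x00, 0x00,   /* dest MAC, filled at rule time */
          0x00, 0x00,
          /* ...remaining header bytes elided... */
  };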
Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Tested-by: Marcin Szycik <marcin.szycik@linux.intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
ice_find_dummy_packet() contains a lot of boilerplate code and
plenty of room for copy-paste mistakes.
Instead of passing 3 separate pointers back and forth to get packet
template (dummy) params, directly return a structure containing
them. Then, use a macro to compose compound literals and avoid code
duplication on the return path.
Now, a dummy packet type/name is needed only once to return the full
correct pkt/pkt_len/offsets triple, and those returns are all one-liners.
dummy_ipv4_gtpu_ipv4_packet_offsets is just moved around and renamed
(as well as dummy_ipv6_gtp_packet_offsets) with no functional changes.
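A sketch of the struct-return plus compound-literal-macro idea (all
names are illustrative):

  struct pkt_offsets_sketch;

  struct dummy_pkt_params_sketch {
          const u8        *pkt;
          u16             pkt_len;
          const struct pkt_offsets_sketch *offsets;
  };

  /* One-liner return paths: the macro builds the compound literal. */
  #define DUMMY_PKT_SKETCH(name)                                  \
          ((struct dummy_pkt_params_sketch) {                     \
                  .pkt     = dummy_##name##_packet,               \
                  .pkt_len = sizeof(dummy_##name##_packet),       \
                  .offsets = dummy_##name##_packet_offsets,       \
          })

  /* e.g. inside the lookup function:
   *
   *      if (tcp)
   *              return DUMMY_PKT_SKETCH(ipv4_tcp);
   */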
Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Tested-by: Marcin Szycik <marcin.szycik@linux.intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>