Jason Gunthorpe
a6f844da39
Merge tag 'v5.18' into rdma.git for-next
...
The following patches have dependencies.
Resolve the merge conflict in
drivers/net/ethernet/mellanox/mlx5/core/main.c by keeping the new names
for the fs functions following linux-next:
https://lore.kernel.org/r/20220519113529.226bc3e2@canb.auug.org.au/
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-05-24 12:40:28 -03:00
Mustafa Ismail
81091d7696
RDMA/irdma: Add SW mechanism to generate completions on error
...
The HW flush after the QP is moved to the error state is not reliable.
This can lead to an application hang waiting on completions for
outstanding WRs. Implement a SW mechanism to generate completions for any
outstanding WRs after the QP is modified to error.
This is accomplished by starting a delayed worker after the QP is modified
to error and the HW flush is performed. The worker will generate
completions that will be returned to the application when it polls the
CQ. This mechanism only applies to Kernel applications.
Link: https://lore.kernel.org/r/20220425181624.1617-1-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-05-11 15:58:40 -03:00
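As a rough illustration of the delayed-worker approach described in the entry above, the sketch below schedules a flush worker once a QP has been modified to error; the struct, field and function names are hypothetical and only the workqueue API calls are real kernel interfaces.
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>
/* Hypothetical QP state for the sketch; the real irdma structures differ. */
struct sketch_qp {
	struct delayed_work flush_work;	/* runs after the HW flush is issued */
};
static void sketch_flush_worker(struct work_struct *work)
{
	struct sketch_qp *qp = container_of(to_delayed_work(work),
					    struct sketch_qp, flush_work);
	/* Walk the outstanding WRs and post SW flush completions to the CQ
	 * so a kernel consumer polling the CQ does not hang.
	 */
	(void)qp;
}
static void sketch_qp_init(struct sketch_qp *qp)
{
	INIT_DELAYED_WORK(&qp->flush_work, sketch_flush_worker);
}
static void sketch_modify_qp_to_err(struct sketch_qp *qp)
{
	/* ... issue the HW flush first ... */
	schedule_delayed_work(&qp->flush_work, msecs_to_jiffies(50));
}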
Tatyana Nikolova
7b8943b821
RDMA/irdma: Flush iWARP QP if modified to ERR from RTR state
...
When connection establishment fails in iWARP mode, an app can drain the
QPs and hang because flush isn't issued when the QP is modified from RTR
state to error. Issue a flush in this case using the function
irdma_cm_disconn().
Update irdma_cm_disconn() to issue a flush when cm_id is NULL, which is the
case when the QP is in RTR state and there is an error in the connection
establishment.
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20220425181703.1634-2-shiraz.saleem@intel.com
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-05-02 11:10:33 -03:00
Jason Gunthorpe
e945c653c8
RDMA: Split kernel-only global device caps from uverbs device caps
...
Split out flags from ib_device::device_cap_flags that are only used
internally to the kernel into kernel_cap_flags that is not part of the
uapi. This limits the device_cap_flags to being the same bitmap that will
be copied to userspace.
This cleanly splits out the uverbs flags from the kernel flags to avoid
confusion in the flags bitmap.
Add short comments describing what each of the kernel flags is connected
to. Remove unused kernel flags.
Link: https://lore.kernel.org/r/0-v2-22c19e565eef+139a-kern_caps_jgg@nvidia.com
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-04-06 15:02:13 -03:00
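After this split, kernel-internal capabilities are tested against the new IBK_* bitmap rather than device_cap_flags; a minimal illustration follows (the helper name is made up, the fields and flag come from the split described above).
#include <rdma/ib_verbs.h>
/* Kernel-only capabilities now live in ib_device_attr::kernel_cap_flags,
 * while device_cap_flags keeps only the bits copied to userspace.
 */
static bool sketch_has_local_dma_lkey(struct ib_device *dev)
{
	return dev->attrs.kernel_cap_flags & IBK_LOCAL_DMA_LKEY;
}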
Mustafa Ismail
51cad28724
RDMA/irdma: Add support for address handle re-use
...
Address handles (AHs) are a limited HW resource and some user applications
may create large numbers of identical AHs. Avoid running out of AHs by
reusing existing identical ones.
Link: https://lore.kernel.org/r/20220228183650.290-1-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-03-15 16:22:55 -03:00
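A common way to implement such reuse is a refcounted find-or-create lookup keyed on the AH attributes. The standalone sketch below uses made-up names and a plain linked list purely to show the pattern, not the driver's actual data structures or locking.
#include <stdlib.h>
#include <string.h>
/* Hypothetical AH cache key: destination GID plus a couple of attributes. */
struct sketch_ah_key { unsigned char dgid[16]; unsigned int dlid, sl; };
struct sketch_ah {
	struct sketch_ah_key key;
	int refcnt;
	struct sketch_ah *next;
};
static struct sketch_ah *ah_cache;
/* Return an existing AH with identical attributes, or create a new one. */
struct sketch_ah *sketch_ah_get(const struct sketch_ah_key *key)
{
	struct sketch_ah *ah;
	for (ah = ah_cache; ah; ah = ah->next) {
		if (!memcmp(&ah->key, key, sizeof(*key))) {
			ah->refcnt++;	/* reuse, no new HW AH consumed */
			return ah;
		}
	}
	ah = calloc(1, sizeof(*ah));
	if (!ah)
		return NULL;
	ah->key = *key;
	ah->refcnt = 1;			/* a HW AH would be allocated here */
	ah->next = ah_cache;
	ah_cache = ah;
	return ah;
}
void sketch_ah_put(struct sketch_ah *ah)
{
	if (--ah->refcnt == 0) {
		/* unlink from ah_cache and free the HW AH here */
	}
}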
Mustafa Ismail
17850f2b0b
RDMA/irdma: Remove incorrect masking of PD
...
The PD id is masked with 0x7fff, while the PD id can be 18 bits for GEN2
HW. Remove the masking as it should not be needed and can cause an
incorrect PD id to be used.
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20220225163211.127-4-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-02-28 12:07:40 -04:00
Zhu Yanjun
884194ef26
RDMA/irdma: Move union irdma_sockaddr to header file
...
The union irdma_sockaddr is used frequently. So move it to the header
file.
Link: https://lore.kernel.org/r/20220223024252.3873736-4-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-02-23 20:38:56 -04:00
Zhu Yanjun
8627da62cc
RDMA/irdma: Remove the unnecessary variable saddr
...
Previously the variable saddr was used to check the network type. Now the
variable net_type does the same work, so saddr can be removed.
Link: https://lore.kernel.org/r/20220223024252.3873736-3-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-02-23 20:38:56 -04:00
Zhu Yanjun
80005c43d4
RDMA/irdma: Use net_type to check network type
...
Use the member variable net_type to check the network type.
Link: https://lore.kernel.org/r/20220223024252.3873736-2-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-02-23 20:38:56 -04:00
Shiraz Saleem
2322d17abf
RDMA/irdma: Remove excess error variables
...
As irdma_status_code is replaced with an int, there is no need for two
variables to hold error codes.
Remove the excess variable in functions where this occurs. Also, remove
any redundant initializations which are no longer needed.
Link: https://lore.kernel.org/r/20220217151851.1518-4-shiraz.saleem@intel.com
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-02-23 15:24:19 -04:00
Shiraz Saleem
45225a93cc
RDMA/irdma: Propagate error codes
...
All functions now return linux error codes. Propagate the return from
these functions as opposed to converting them to generic values.
Link: https://lore.kernel.org/r/20220217151851.1518-3-shiraz.saleem@intel.com
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-02-23 15:24:18 -04:00
Shiraz Saleem
2c4b14ea95
RDMA/irdma: Remove enum irdma_status_code
...
Replace use of the custom irdma_status_code with Linux error codes.
Remove enum irdma_status_code and the header in which it is defined.
Link: https://lore.kernel.org/r/20220217151851.1518-2-shiraz.saleem@intel.com
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-02-23 15:24:18 -04:00
Mustafa Ismail
8348305532
RDMA/irdma: Refactor DCB bits in prep for DSCP support
...
Rename dcb flag to dcb_vlan_mode in irdma_device struct. Add a new helper
function, irdma_set_qos_info, to set the VSI QoS information passed by the
PCI driver.
Link: https://lore.kernel.org/r/20220202191921.1638-3-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-02-08 12:54:47 -04:00
Zhu Yanjun
69e609ba96
RDMA/irdma: Make the source udp port vary
...
Get the source UDP port number for a QP based on the grh.flow_label or
lqpn/rqpn. This provides a better spread of traffic across NIC RX queues.
Link: https://lore.kernel.org/r/20220106180359.2915060-4-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-01-07 19:34:55 -04:00
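The core provides rdma_get_udp_sport() for this: fold entropy from the 20-bit flow label, or from the local/remote QPNs when no flow label is set, into the RoCEv2 UDP source port range. The standalone sketch below mirrors that folding; the constants follow the usual 0xC000-0xFFFF source port convention and the QPN hash is a simple stand-in, not the kernel's.
#include <stdint.h>
/* Fold a 20-bit flow label into the 14 low bits of the UDP source port,
 * keeping the result in the 0xC000-0xFFFF range.
 */
static uint16_t sketch_flow_label_to_sport(uint32_t fl)
{
	uint32_t lo = fl & 0x03fff;
	uint32_t hi = (fl & 0xfc000) >> 14;
	return (uint16_t)(0xc000 | (lo ^ hi));
}
static uint16_t sketch_udp_sport(uint32_t fl, uint32_t lqpn, uint32_t rqpn)
{
	if (!fl)
		fl = (lqpn ^ rqpn) & 0xfffff;	/* stand-in hash of the QPNs */
	return sketch_flow_label_to_sport(fl);
}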
Jason Gunthorpe
4922f09209
Merge tag 'v5.16-rc5' into rdma.git for-next
...
Required due to dependencies in following patches.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-12-14 20:18:48 -04:00
Tatyana Nikolova
10467ce09f
RDMA/irdma: Don't arm the CQ more than two times if no CE for this CQ
...
Completion events (CEs) are lost if the application is allowed to arm the
CQ more than two times when no new CE for this CQ has been generated by
the HW.
Check if arming has been done for the CQ and, if not, arm the CQ for any
event; otherwise, promote to arming the CQ for any event only when the
last arm event was solicited.
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20211201231509.1930-2-shiraz.saleem@intel.com
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-12-07 13:53:01 -04:00
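A standalone sketch of the gating described in the entry above, with hypothetical state fields: arm the CQ when it is not already armed, and once armed only allow a promotion from a solicited-only arm to an any-event arm, so repeated arm requests cannot cause completion events to be lost.
#include <stdbool.h>
enum sketch_arm_state { SKETCH_CQ_UNARMED, SKETCH_CQ_ARM_SOLICITED, SKETCH_CQ_ARM_ANY };
struct sketch_cq { enum sketch_arm_state armed; };
/* Returns true when an arm doorbell should actually be rung. */
static bool sketch_should_arm(struct sketch_cq *cq, bool solicited_only)
{
	if (cq->armed == SKETCH_CQ_UNARMED) {
		cq->armed = solicited_only ? SKETCH_CQ_ARM_SOLICITED
					   : SKETCH_CQ_ARM_ANY;
		return true;
	}
	/* Already armed: only promote a solicited-only arm to any-event;
	 * all other repeated arm requests are dropped.
	 */
	if (cq->armed == SKETCH_CQ_ARM_SOLICITED && !solicited_only) {
		cq->armed = SKETCH_CQ_ARM_ANY;
		return true;
	}
	return false;
}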
Kamal Heib
fc9d19e18a
RDMA/irdma: Use helper function to set GUIDs
...
Use the addrconf_addr_eui48() helper function to set the GUIDs for both
RoCE and iWARP modes. Also make sure the GUIDs are valid EUI-64
identifiers.
Link: https://lore.kernel.org/r/20211107212227.44610-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-11-16 13:54:24 -04:00
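addrconf_addr_eui48() (include/net/addrconf.h) builds the 8-byte EUI-64 identifier from the 6-byte MAC by inserting 0xFF/0xFE in the middle and flipping the universal/local bit. A standalone sketch of that mapping, which is what the port GUID ends up containing:
#include <stdint.h>
#include <string.h>
/* Derive an EUI-64 GUID from a 48-bit MAC address (what
 * addrconf_addr_eui48() does for the port GUID).
 */
static void sketch_mac_to_eui64(uint8_t eui[8], const uint8_t mac[6])
{
	memcpy(eui, mac, 3);
	eui[3] = 0xff;
	eui[4] = 0xfe;
	memcpy(eui + 5, mac + 3, 3);
	eui[0] ^= 0x02;		/* flip the universal/local bit */
}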
Jason Gunthorpe
a2a2a69d14
Merge tag 'v5.15' into rdma.git for-next
...
Pull in the accepted for-rc patches as the next merge needs a newer base.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-11-01 14:49:20 -03:00
Zhu Yanjun
9ed8110c9b
RDMA/irdma: optimize rx path by removing unnecessary copy
...
In the function irdma_post_recv, the function irdma_copy_sg_list is not
needed since struct irdma_sge and struct ib_sge have similar member
variables, so struct irdma_sge can be replaced entirely with struct
ib_sge.
This increases the rx performance of irdma.
Link: https://lore.kernel.org/r/20211030104226.253346-1-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-11-01 14:39:33 -03:00
Zhu Yanjun
50604757e7
RDMA/irdma: Remove the unused variable local_qp
...
Since the member variable local_qp is not used, remove it.
Link: https://lore.kernel.org/r/20211027175457.201822-1-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-28 08:58:26 -03:00
Zhu Yanjun
86479f8a3f
RDMA/irdma: Remove the unused spin lock in struct irdma_qp_uk
...
The spin lock in struct irdma_qp_uk is not used. So remove it.
Link: https://lore.kernel.org/r/20211021230612.153812-1-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-25 14:36:58 -03:00
Jakub Kicinski
fd92213e9a
RDMA: Constify netdev->dev_addr accesses
...
netdev->dev_addr will become const soon, make sure drivers propagate the
qualifier.
Link: https://lore.kernel.org/r/20211019182604.1441387-4-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Acked-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-25 14:33:09 -03:00
Mustafa Ismail
cc07b73ef1
RDMA/irdma: Set VLAN in UD work completion correctly
...
Currently, VLAN is reported in the UD work completion even when the VLAN
id is zero, i.e. the no-VLAN case.
Report VLAN in the UD work completion only when the VLAN id is non-zero.
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20211019151654.1943-1-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-19 20:22:01 -03:00
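The fix amounts to setting the VLAN fields of the work completion only when a real (non-zero) VLAN id was received. A minimal sketch against the ib_verbs work-completion fields (how the vlan_id is extracted from the receive descriptor is driver specific and omitted):
#include <rdma/ib_verbs.h>
/* Sketch: report VLAN in a UD work completion only for a non-zero id. */
static void sketch_fill_ud_wc_vlan(struct ib_wc *wc, u16 vlan_id)
{
	if (vlan_id) {
		wc->vlan_id = vlan_id;
		wc->wc_flags |= IB_WC_WITH_VLAN;
	}
}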
Aharon Landau
13f30b0fa0
RDMA/counter: Add a descriptor in struct rdma_hw_stats
...
Add a counter statistic descriptor structure in rdma_hw_stats. In addition
to the counter name, more meta-information will be added. This code
extension is needed for optional-counter support in the following patches.
Link: https://lore.kernel.org/r/20211008122439.166063-4-markzhang@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-10-12 12:48:04 -03:00
Sindhu Devale
9f7fa37a6b
RDMA/irdma: Report correct WC error when there are MW bind errors
...
Report the correct WC error when MW bind error related asynchronous events
are generated by HW.
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20210916191222.824-5-shiraz.saleem@intel.com
Signed-off-by: Sindhu Devale <sindhu.devale@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-09-20 14:13:23 -03:00
Sindhu Devale
d3bdcd5963
RDMA/irdma: Report correct WC error when transport retry counter is exceeded
...
When the retry counter is exceeded because the remote QP didn't send any
Ack or Nack, an asynchronous event (AE) for too many retries is generated.
Add code to handle the AE and set the correct IB WC error code,
IB_WC_RETRY_EXC_ERR.
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20210916191222.824-4-shiraz.saleem@intel.com
Signed-off-by: Sindhu Devale <sindhu.devale@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-09-20 14:13:23 -03:00
Sindhu Devale
f4475f2494
RDMA/irdma: Validate number of CQ entries on create CQ
...
Add a lower bound check for CQ entries at creation time.
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20210916191222.824-3-shiraz.saleem@intel.com
Signed-off-by: Sindhu Devale <sindhu.devale@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-09-20 14:13:23 -03:00
Sindhu Devale
5b1e985f76
RDMA/irdma: Skip CQP ring during a reset
...
Due to duplicate reset flags, CQP commands are processed during reset.
This leads to CQP failures such as the one below:
irdma0: [Delete Local MAC Entry Cmd Error][op_code=49] status=-27 waiting=1 completion_err=0 maj=0x0 min=0x0
Remove the redundant flag and set the correct reset flag so the CQP is
paused during reset.
Fixes: 8498a30e1b ("RDMA/irdma: Register auxiliary driver and implement private channel OPs")
Link: https://lore.kernel.org/r/20210916191222.824-2-shiraz.saleem@intel.com
Reported-by: LiLiang <liali@redhat.com>
Signed-off-by: Sindhu Devale <sindhu.devale@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-09-20 14:13:22 -03:00
Jason Gunthorpe
6a217437f9
Merge branch 'sg_nents' into rdma.git for-next
...
From Maor Gottlieb
====================
Fix the use of nents and orig_nents in the sg table append helpers. The
nents should be used by the DMA layer to store the number of DMA mapped
sges, while orig_nents is the number of CPU sges.
Since the sg append logic doesn't always create an SGL with exactly
orig_nents entries, store a total_nents as well to allow the table to be
properly freed, and reorganize the freeing logic to be shared across all
the use cases.
====================
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* 'sg_nents':
RDMA: Use the sg_table directly and remove the opencoded version from umem
lib/scatterlist: Fix wrong update of orig_nents
lib/scatterlist: Provide a dedicated function to support table append
2021-08-30 09:49:59 -03:00
Maor Gottlieb
79fbd3e124
RDMA: Use the sg_table directly and remove the opencoded version from umem
...
This allows using the normal sg_table APIs and makes all the code
cleaner. Remove sgt, nents and nmapd from ib_umem.
Link: https://lore.kernel.org/r/20210824142531.3877007-4-maorg@nvidia.com
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-08-24 19:52:40 -03:00
Leon Romanovsky
514aee660d
RDMA: Globally allocate and release QP memory
...
Convert the QP object to follow the IB/core general allocation scheme.
That change allows us to make sure that restrack properly krefs the
memory.
Link: https://lore.kernel.org/r/48e767124758aeecc433360ddd85eaa6325b34d9.1627040189.git.leonro@nvidia.com
Reviewed-by: Gal Pressman <galpress@amazon.com> #efa
Tested-by: Gal Pressman <galpress@amazon.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> #rdma and core
Tested-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Tested-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-08-03 13:44:27 -03:00
Zhu Yanjun
dc6afef7e1
RDMA/irdma: Change returned type of irdma_setup_virt_qp to void
...
Since the return value of the function irdma_setup_virt_qp is always 0,
remove the return value check and change the return type to void.
Link: https://lore.kernel.org/r/20210714031130.1511109-4-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-07-15 15:14:11 -03:00
Shiraz Saleem
46308965ae
RDMA/irdma: Check contents of user-space irdma_mem_reg_req object
...
The contents of the user-space req object are used for array indexing in
irdma_handle_q_mem without checking for valid values.
Guard against bad input on each of these req object pages by limiting them
to the number of pages that make up the region.
Link: https://lore.kernel.org/r/20210625162329.1654-2-tatyana.e.nikolova@intel.com
Reported-by: coverity-bot <keescook+coverity-bot@chromium.org>
Addresses-Coverity-ID: 1505160 ("TAINTED_SCALAR")
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-25 14:08:30 -03:00
Kamal Heib
feda49a1a5
RDMA/irdma: Use the queried port attributes
...
Instead of hard coding the gid_table_len value, use the value from the
ib_query_port() attributes.
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20210620201503.67055-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Acked-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-22 21:00:53 -03:00
Shiraz Saleem
c4eb44ffd9
RDMA/irdma: Check return value from ib_umem_find_best_pgsz
...
iwmr->page_size stores the return from ib_umem_find_best_pgsz and may be
zero when used in ib_umem_num_dma_blocks, thus causing a divide by zero
error.
Fix this by erroring out of irdma_reg_user_mr when 0 is returned from
ib_umem_find_best_pgsz.
Link: https://lore.kernel.org/r/20210622175232.439-3-tatyana.e.nikolova@intel.com
Reported-by: coverity-bot <keescook+coverity-bot@chromium.org>
Addresses-Coverity-ID: 1505149 ("Integer handling issues")
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-22 15:25:47 -03:00
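A hedged sketch of the guard described in the entry above: fail the MR registration path when ib_umem_find_best_pgsz() returns 0 so that ib_umem_num_dma_blocks() is never called with a zero page size. The helper name and error code are illustrative, not the driver's exact code.
#include <rdma/ib_umem.h>
/* Sketch: pgsz_bitmap would come from the device's supported page sizes. */
static int sketch_pick_mr_page_size(struct ib_umem *umem,
				    unsigned long pgsz_bitmap,
				    unsigned long virt,
				    unsigned long *page_size)
{
	*page_size = ib_umem_find_best_pgsz(umem, pgsz_bitmap, virt);
	if (!*page_size)
		return -EOPNOTSUPP;	/* no usable page size for this umem */
	/* Safe now: ib_umem_num_dma_blocks(umem, *page_size) cannot divide by 0. */
	return 0;
}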
Jason Gunthorpe
4b5f4d3fb4
RDMA: Split the alloc_hw_stats() ops to port and device variants
...
This is being used to implement both the port and device global stats,
which is causing some confusion in the drivers. For instance EFA and i40iw
both seem to be misusing the device stats.
Split it into two ops so drivers that don't support one or the other can
leave the op NULL'd, making the calling code a little simpler to
understand.
Link: https://lore.kernel.org/r/1955c154197b2a159adc2dc97266ddc74afe420c.1623427137.git.leonro@nvidia.com
Tested-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-16 20:58:29 -03:00
Shiraz Saleem
2db7b2eac7
RDMA/irdma: Store PBL info address a pointer type
...
The level1 PBL info address is stored as a u64. This requires casting
through a uintptr_t before it is used as a pointer type, which leads to
warnings such as this when the uintptr_t cast is missing:
drivers/infiniband/hw/irdma/hw.c: In function 'irdma_destroy_virt_aeq':
drivers/infiniband/hw/irdma/hw.c:579:23: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
579 | dma_addr_t *pg_arr = (dma_addr_t *)aeq->palloc.level1.addr;
This can be fixed using an intermediate uintptr_t, but it is better to
fix the structure irdma_pble_info to store the address as a u64 * and the
VA it is assigned in irdma_chunk as a void *. This greatly reduces the
casting on this address.
Fixes: 44d9e52977 ("RDMA/irdma: Implement device initialization definitions")
Link: https://lore.kernel.org/r/20210609234924.938-1-shiraz.saleem@intel.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-10 09:39:27 -03:00
Kamal Heib
61c7d826b8
RDMA/irdma: Fix return error sign from irdma_modify_qp
...
There is a typo in the sign of the error code returned from
irdma_modify_qp() when the attr_mask is not supported - fix it.
Fixes: b48c24c2d7 ("RDMA/irdma: Implement device supported verb APIs")
Link: https://lore.kernel.org/r/20210607221543.254144-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-07 20:21:10 -03:00
Colin Ian King
1b01a42c9c
RDMA/irdma: remove extraneous indentation on a statement
...
A single statement is indented one level too deeply; clean up the code by
removing the extraneous tab.
Link: https://lore.kernel.org/r/20210605130400.25987-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-07 20:19:31 -03:00
Mustafa Ismail
b48c24c2d7
RDMA/irdma: Implement device supported verb APIs
...
Implement device supported verb APIs. The supported APIs
vary based on the underlying transport the ibdev is
registered as (i.e. iWARP or RoCEv2).
Link: https://lore.kernel.org/r/20210602205138.889-10-shiraz.saleem@intel.com
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-06-02 19:55:18 -03:00