Merge tag 'wireless-next-2022-05-03' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next

Kalle Valo says:

====================
wireless-next patches for v5.19

First set of patches for v5.19, and this is a big one. We have two new
drivers, a change in the mac80211 STA API affecting most drivers, and
ath11k gaining support for WCN6750. And, as usual, lots of fixes and
cleanups all over.

Major changes:

new drivers
 - wfx: Silicon Labs devices
 - plfxlc: pureLiFi X, XL, XC devices

mac80211
 - host-based BSS color collision detection
 - prepare sta handling for IEEE 802.11be Multi-Link Operation (MLO) support

rtw88
 - support TP-Link T2E devices

rtw89
 - support firmware crash simulation
 - preparation for 8852ce hardware support

ath11k
 - Wake-on-WLAN support for QCA6390 and WCN6855
 - device recovery (firmware restart) support for QCA6390 and WCN6855
 - support setting Specific Absorption Rate (SAR) for WCN6855
 - read country code from SMBIOS for WCN6855/QCA6390
 - support for WCN6750

wcn36xx
 - support for transmit rate reporting to user space

* tag 'wireless-next-2022-05-03' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next: (228 commits)
  rtw89: 8852c: rfk: add DPK
  rtw89: 8852c: rfk: add IQK
  rtw89: 8852c: rfk: add RX DCK
  rtw89: 8852c: rfk: add RCK
  rtw89: 8852c: rfk: add TSSI
  rtw89: 8852c: rfk: add LCK
  rtw89: 8852c: rfk: add DACK
  rtw89: 8852c: rfk: add RFK tables
  plfxlc: fix le16_to_cpu warning for beacon_interval
  rtw88: remove a copy of the NAPI_POLL_WEIGHT define
  carl9170: tx: fix an incorrect use of list iterator
  wil6210: use NAPI_POLL_WEIGHT for napi budget
  ath10k: remove a copy of the NAPI_POLL_WEIGHT define
  ath11k: Add support for WCN6750 device
  ath11k: Datapath changes to support WCN6750
  ath11k: HAL changes to support WCN6750
  ath11k: Add QMI changes for WCN6750
  ath11k: Fetch device information via QMI for WCN6750
  ath11k: Add register access logic for WCN6750
  ath11k: Add HW params for WCN6750
  ...
====================

Link: https://lore.kernel.org/r/20220503153622.C1671C385A4@smtp.kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit f43f0cd2d9
Jakub Kicinski, 2022-05-03 17:27:49 -07:00
298 changed files with 43518 additions and 4138 deletions
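
The "change in the mac80211 STA API" called out in the pull request above is
the move of per-link station fields (supported rates, HT/VHT capabilities,
bandwidth, RX NSS, tx power) from struct ieee80211_sta into its new deflink
member, in preparation for MLO; that is why so many of the driver hunks below
are mechanical sta->X to sta->deflink.X renames. A minimal sketch of the
driver-side pattern, for illustration only (the exampledrv_* names are
hypothetical and not from any in-tree driver):

    #include <net/mac80211.h>

    /* was: sta->supp_rates[band] before the deflink split */
    static u32 exampledrv_sta_legacy_rates(struct ieee80211_sta *sta,
                                           enum nl80211_band band)
    {
            return sta->deflink.supp_rates[band];
    }

    /* was: sta->ht_cap and sta->bandwidth */
    static bool exampledrv_sta_is_40mhz_ht(struct ieee80211_sta *sta)
    {
            return sta->deflink.ht_cap.ht_supported &&
                   sta->deflink.bandwidth >= IEEE80211_STA_RX_BW_40;
    }

Until an interface actually runs multiple IEEE 802.11be links, deflink simply
carries what used to be the top-level fields, so the conversions are
one-to-one.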


@ -20,120 +20,17 @@ properties:
enum:
- qcom,ipq8074-wifi
- qcom,ipq6018-wifi
- qcom,wcn6750-wifi
reg:
maxItems: 1
interrupts:
items:
- description: misc-pulse1 interrupt events
- description: misc-latch interrupt events
- description: sw exception interrupt events
- description: watchdog interrupt events
- description: interrupt event for ring CE0
- description: interrupt event for ring CE1
- description: interrupt event for ring CE2
- description: interrupt event for ring CE3
- description: interrupt event for ring CE4
- description: interrupt event for ring CE5
- description: interrupt event for ring CE6
- description: interrupt event for ring CE7
- description: interrupt event for ring CE8
- description: interrupt event for ring CE9
- description: interrupt event for ring CE10
- description: interrupt event for ring CE11
- description: interrupt event for ring host2wbm-desc-feed
- description: interrupt event for ring host2reo-re-injection
- description: interrupt event for ring host2reo-command
- description: interrupt event for ring host2rxdma-monitor-ring3
- description: interrupt event for ring host2rxdma-monitor-ring2
- description: interrupt event for ring host2rxdma-monitor-ring1
- description: interrupt event for ring reo2ost-exception
- description: interrupt event for ring wbm2host-rx-release
- description: interrupt event for ring reo2host-status
- description: interrupt event for ring reo2host-destination-ring4
- description: interrupt event for ring reo2host-destination-ring3
- description: interrupt event for ring reo2host-destination-ring2
- description: interrupt event for ring reo2host-destination-ring1
- description: interrupt event for ring rxdma2host-monitor-destination-mac3
- description: interrupt event for ring rxdma2host-monitor-destination-mac2
- description: interrupt event for ring rxdma2host-monitor-destination-mac1
- description: interrupt event for ring ppdu-end-interrupts-mac3
- description: interrupt event for ring ppdu-end-interrupts-mac2
- description: interrupt event for ring ppdu-end-interrupts-mac1
- description: interrupt event for ring rxdma2host-monitor-status-ring-mac3
- description: interrupt event for ring rxdma2host-monitor-status-ring-mac2
- description: interrupt event for ring rxdma2host-monitor-status-ring-mac1
- description: interrupt event for ring host2rxdma-host-buf-ring-mac3
- description: interrupt event for ring host2rxdma-host-buf-ring-mac2
- description: interrupt event for ring host2rxdma-host-buf-ring-mac1
- description: interrupt event for ring rxdma2host-destination-ring-mac3
- description: interrupt event for ring rxdma2host-destination-ring-mac2
- description: interrupt event for ring rxdma2host-destination-ring-mac1
- description: interrupt event for ring host2tcl-input-ring4
- description: interrupt event for ring host2tcl-input-ring3
- description: interrupt event for ring host2tcl-input-ring2
- description: interrupt event for ring host2tcl-input-ring1
- description: interrupt event for ring wbm2host-tx-completions-ring3
- description: interrupt event for ring wbm2host-tx-completions-ring2
- description: interrupt event for ring wbm2host-tx-completions-ring1
- description: interrupt event for ring tcl2host-status-ring
minItems: 32
maxItems: 52
interrupt-names:
items:
- const: misc-pulse1
- const: misc-latch
- const: sw-exception
- const: watchdog
- const: ce0
- const: ce1
- const: ce2
- const: ce3
- const: ce4
- const: ce5
- const: ce6
- const: ce7
- const: ce8
- const: ce9
- const: ce10
- const: ce11
- const: host2wbm-desc-feed
- const: host2reo-re-injection
- const: host2reo-command
- const: host2rxdma-monitor-ring3
- const: host2rxdma-monitor-ring2
- const: host2rxdma-monitor-ring1
- const: reo2ost-exception
- const: wbm2host-rx-release
- const: reo2host-status
- const: reo2host-destination-ring4
- const: reo2host-destination-ring3
- const: reo2host-destination-ring2
- const: reo2host-destination-ring1
- const: rxdma2host-monitor-destination-mac3
- const: rxdma2host-monitor-destination-mac2
- const: rxdma2host-monitor-destination-mac1
- const: ppdu-end-interrupts-mac3
- const: ppdu-end-interrupts-mac2
- const: ppdu-end-interrupts-mac1
- const: rxdma2host-monitor-status-ring-mac3
- const: rxdma2host-monitor-status-ring-mac2
- const: rxdma2host-monitor-status-ring-mac1
- const: host2rxdma-host-buf-ring-mac3
- const: host2rxdma-host-buf-ring-mac2
- const: host2rxdma-host-buf-ring-mac1
- const: rxdma2host-destination-ring-mac3
- const: rxdma2host-destination-ring-mac2
- const: rxdma2host-destination-ring-mac1
- const: host2tcl-input-ring4
- const: host2tcl-input-ring3
- const: host2tcl-input-ring2
- const: host2tcl-input-ring1
- const: wbm2host-tx-completions-ring3
- const: wbm2host-tx-completions-ring2
- const: wbm2host-tx-completions-ring1
- const: tcl2host-status-ring
maxItems: 52
qcom,rproc:
$ref: /schemas/types.yaml#/definitions/phandle
@ -151,20 +48,205 @@ properties:
board-2.bin for designs with colliding bus and device specific ids
memory-region:
maxItems: 1
minItems: 1
maxItems: 2
description:
phandle to a node describing reserved memory (System RAM memory)
used by ath11k firmware (see bindings/reserved-memory/reserved-memory.txt)
iommus:
minItems: 1
maxItems: 2
wifi-firmware:
type: object
description: |
WCN6750 wifi node can contain one optional firmware subnode.
Firmware subnode is needed when the platform does not have Trustzone.
required:
- iommus
required:
- compatible
- reg
- interrupts
- interrupt-names
- qcom,rproc
additionalProperties: false
allOf:
- if:
properties:
compatible:
contains:
enum:
- qcom,ipq8074-wifi
- qcom,ipq6018-wifi
then:
properties:
interrupts:
items:
- description: misc-pulse1 interrupt events
- description: misc-latch interrupt events
- description: sw exception interrupt events
- description: watchdog interrupt events
- description: interrupt event for ring CE0
- description: interrupt event for ring CE1
- description: interrupt event for ring CE2
- description: interrupt event for ring CE3
- description: interrupt event for ring CE4
- description: interrupt event for ring CE5
- description: interrupt event for ring CE6
- description: interrupt event for ring CE7
- description: interrupt event for ring CE8
- description: interrupt event for ring CE9
- description: interrupt event for ring CE10
- description: interrupt event for ring CE11
- description: interrupt event for ring host2wbm-desc-feed
- description: interrupt event for ring host2reo-re-injection
- description: interrupt event for ring host2reo-command
- description: interrupt event for ring host2rxdma-monitor-ring3
- description: interrupt event for ring host2rxdma-monitor-ring2
- description: interrupt event for ring host2rxdma-monitor-ring1
- description: interrupt event for ring reo2ost-exception
- description: interrupt event for ring wbm2host-rx-release
- description: interrupt event for ring reo2host-status
- description: interrupt event for ring reo2host-destination-ring4
- description: interrupt event for ring reo2host-destination-ring3
- description: interrupt event for ring reo2host-destination-ring2
- description: interrupt event for ring reo2host-destination-ring1
- description: interrupt event for ring rxdma2host-monitor-destination-mac3
- description: interrupt event for ring rxdma2host-monitor-destination-mac2
- description: interrupt event for ring rxdma2host-monitor-destination-mac1
- description: interrupt event for ring ppdu-end-interrupts-mac3
- description: interrupt event for ring ppdu-end-interrupts-mac2
- description: interrupt event for ring ppdu-end-interrupts-mac1
- description: interrupt event for ring rxdma2host-monitor-status-ring-mac3
- description: interrupt event for ring rxdma2host-monitor-status-ring-mac2
- description: interrupt event for ring rxdma2host-monitor-status-ring-mac1
- description: interrupt event for ring host2rxdma-host-buf-ring-mac3
- description: interrupt event for ring host2rxdma-host-buf-ring-mac2
- description: interrupt event for ring host2rxdma-host-buf-ring-mac1
- description: interrupt event for ring rxdma2host-destination-ring-mac3
- description: interrupt event for ring rxdma2host-destination-ring-mac2
- description: interrupt event for ring rxdma2host-destination-ring-mac1
- description: interrupt event for ring host2tcl-input-ring4
- description: interrupt event for ring host2tcl-input-ring3
- description: interrupt event for ring host2tcl-input-ring2
- description: interrupt event for ring host2tcl-input-ring1
- description: interrupt event for ring wbm2host-tx-completions-ring3
- description: interrupt event for ring wbm2host-tx-completions-ring2
- description: interrupt event for ring wbm2host-tx-completions-ring1
- description: interrupt event for ring tcl2host-status-ring
interrupt-names:
items:
- const: misc-pulse1
- const: misc-latch
- const: sw-exception
- const: watchdog
- const: ce0
- const: ce1
- const: ce2
- const: ce3
- const: ce4
- const: ce5
- const: ce6
- const: ce7
- const: ce8
- const: ce9
- const: ce10
- const: ce11
- const: host2wbm-desc-feed
- const: host2reo-re-injection
- const: host2reo-command
- const: host2rxdma-monitor-ring3
- const: host2rxdma-monitor-ring2
- const: host2rxdma-monitor-ring1
- const: reo2ost-exception
- const: wbm2host-rx-release
- const: reo2host-status
- const: reo2host-destination-ring4
- const: reo2host-destination-ring3
- const: reo2host-destination-ring2
- const: reo2host-destination-ring1
- const: rxdma2host-monitor-destination-mac3
- const: rxdma2host-monitor-destination-mac2
- const: rxdma2host-monitor-destination-mac1
- const: ppdu-end-interrupts-mac3
- const: ppdu-end-interrupts-mac2
- const: ppdu-end-interrupts-mac1
- const: rxdma2host-monitor-status-ring-mac3
- const: rxdma2host-monitor-status-ring-mac2
- const: rxdma2host-monitor-status-ring-mac1
- const: host2rxdma-host-buf-ring-mac3
- const: host2rxdma-host-buf-ring-mac2
- const: host2rxdma-host-buf-ring-mac1
- const: rxdma2host-destination-ring-mac3
- const: rxdma2host-destination-ring-mac2
- const: rxdma2host-destination-ring-mac1
- const: host2tcl-input-ring4
- const: host2tcl-input-ring3
- const: host2tcl-input-ring2
- const: host2tcl-input-ring1
- const: wbm2host-tx-completions-ring3
- const: wbm2host-tx-completions-ring2
- const: wbm2host-tx-completions-ring1
- const: tcl2host-status-ring
- if:
properties:
compatible:
contains:
enum:
- qcom,ipq8074-wifi
- qcom,ipq6018-wifi
then:
required:
- interrupt-names
- if:
properties:
compatible:
contains:
enum:
- qcom,wcn6750-wifi
then:
properties:
interrupts:
items:
- description: interrupt event for ring CE1
- description: interrupt event for ring CE2
- description: interrupt event for ring CE3
- description: interrupt event for ring CE4
- description: interrupt event for ring CE5
- description: interrupt event for ring CE6
- description: interrupt event for ring CE7
- description: interrupt event for ring CE8
- description: interrupt event for ring CE9
- description: interrupt event for ring CE10
- description: interrupt event for ring DP1
- description: interrupt event for ring DP2
- description: interrupt event for ring DP3
- description: interrupt event for ring DP4
- description: interrupt event for ring DP5
- description: interrupt event for ring DP6
- description: interrupt event for ring DP7
- description: interrupt event for ring DP8
- description: interrupt event for ring DP9
- description: interrupt event for ring DP10
- description: interrupt event for ring DP11
- description: interrupt event for ring DP12
- description: interrupt event for ring DP13
- description: interrupt event for ring DP14
- description: interrupt event for ring DP15
- description: interrupt event for ring DP16
- description: interrupt event for ring DP17
- description: interrupt event for ring DP18
- description: interrupt event for ring DP19
- description: interrupt event for ring DP20
- description: interrupt event for ring DP21
- description: interrupt event for ring DP22
examples:
- |
@ -309,3 +391,64 @@ examples:
};
};
};
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
reserved-memory {
#address-cells = <2>;
#size-cells = <2>;
wlan_ce_mem: memory@4cd000 {
no-map;
reg = <0x0 0x004cd000 0x0 0x1000>;
};
wlan_fw_mem: memory@80c00000 {
no-map;
reg = <0x0 0x80c00000 0x0 0xc00000>;
};
};
wifi: wifi@17a10040 {
compatible = "qcom,wcn6750-wifi";
reg = <0x17a10040 0x0>;
iommus = <&apps_smmu 0x1c00 0x1>;
interrupts = <GIC_SPI 768 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 769 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 770 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 771 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 772 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 773 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 774 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 775 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 776 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 777 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 778 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 779 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 780 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 781 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 782 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 783 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 784 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 785 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 786 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 787 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 788 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 789 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 790 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 791 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 792 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 793 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 794 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 795 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 796 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 797 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 798 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 799 IRQ_TYPE_EDGE_RISING>;
qcom,rproc = <&remoteproc_wpss>;
memory-region = <&wlan_fw_mem>, <&wlan_ce_mem>;
wifi-firmware {
iommus = <&apps_smmu 0x1c02 0x1>;
};
};


@ -3,7 +3,7 @@
%YAML 1.2
---
$id: http://devicetree.org/schemas/staging/net/wireless/silabs,wfx.yaml#
$id: http://devicetree.org/schemas/net/wireless/silabs,wfx.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Silicon Labs WFxxx devicetree bindings


@ -15989,6 +15989,12 @@ T: git git://linuxtv.org/media_tree.git
F: Documentation/admin-guide/media/pulse8-cec.rst
F: drivers/media/cec/usb/pulse8/
PURELIFI PLFXLC DRIVER
M: Srinivasan Raju <srini.raju@purelifi.com>
L: linux-wireless@vger.kernel.org
S: Supported
F: drivers/net/wireless/purelifi/plfxlc/
PVRUSB2 VIDEO4LINUX DRIVER
M: Mike Isely <isely@pobox.com>
L: pvrusb2@isely.net (subscribers-only)
@ -17997,8 +18003,8 @@ F: drivers/platform/x86/touchscreen_dmi.c
SILICON LABS WIRELESS DRIVERS (for WFxxx series)
M: Jérôme Pouiller <jerome.pouiller@silabs.com>
S: Supported
F: Documentation/devicetree/bindings/staging/net/wireless/silabs,wfx.yaml
F: drivers/staging/wfx/
F: Documentation/devicetree/bindings/net/wireless/silabs,wfx.yaml
F: drivers/net/wireless/silabs/wfx/
SILICON MOTION SM712 FRAME BUFFER DRIVER
M: Sudip Mukherjee <sudipm.mukherjee@gmail.com>


@ -28,9 +28,11 @@ source "drivers/net/wireless/intersil/Kconfig"
source "drivers/net/wireless/marvell/Kconfig"
source "drivers/net/wireless/mediatek/Kconfig"
source "drivers/net/wireless/microchip/Kconfig"
source "drivers/net/wireless/purelifi/Kconfig"
source "drivers/net/wireless/ralink/Kconfig"
source "drivers/net/wireless/realtek/Kconfig"
source "drivers/net/wireless/rsi/Kconfig"
source "drivers/net/wireless/silabs/Kconfig"
source "drivers/net/wireless/st/Kconfig"
source "drivers/net/wireless/ti/Kconfig"
source "drivers/net/wireless/zydas/Kconfig"


@ -13,9 +13,11 @@ obj-$(CONFIG_WLAN_VENDOR_INTERSIL) += intersil/
obj-$(CONFIG_WLAN_VENDOR_MARVELL) += marvell/
obj-$(CONFIG_WLAN_VENDOR_MEDIATEK) += mediatek/
obj-$(CONFIG_WLAN_VENDOR_MICROCHIP) += microchip/
obj-$(CONFIG_WLAN_VENDOR_PURELIFI) += purelifi/
obj-$(CONFIG_WLAN_VENDOR_RALINK) += ralink/
obj-$(CONFIG_WLAN_VENDOR_REALTEK) += realtek/
obj-$(CONFIG_WLAN_VENDOR_RSI) += rsi/
obj-$(CONFIG_WLAN_VENDOR_SILABS) += silabs/
obj-$(CONFIG_WLAN_VENDOR_ST) += st/
obj-$(CONFIG_WLAN_VENDOR_TI) += ti/
obj-$(CONFIG_WLAN_VENDOR_ZYDAS) += zydas/


@ -1160,7 +1160,7 @@ static int ar5523_get_wlan_mode(struct ar5523 *ar,
ar5523_info(ar, "STA not found!\n");
return WLAN_MODE_11b;
}
sta_rate_set = sta->supp_rates[ar->hw->conf.chandef.chan->band];
sta_rate_set = sta->deflink.supp_rates[ar->hw->conf.chandef.chan->band];
for (bit = 0; bit < band->n_bitrates; bit++) {
if (sta_rate_set & 1) {
@ -1198,7 +1198,7 @@ static void ar5523_create_rateset(struct ar5523 *ar,
ar5523_info(ar, "STA not found. Cannot set rates\n");
sta_rate_set = bss_conf->basic_rates;
} else
sta_rate_set = sta->supp_rates[ar->hw->conf.chandef.chan->band];
sta_rate_set = sta->deflink.supp_rates[ar->hw->conf.chandef.chan->band];
ar5523_dbg(ar, "sta rate_set = %08x\n", sta_rate_set);


@ -728,20 +728,17 @@ static int ath10k_ahb_probe(struct platform_device *pdev)
struct ath10k *ar;
struct ath10k_ahb *ar_ahb;
struct ath10k_pci *ar_pci;
const struct of_device_id *of_id;
enum ath10k_hw_rev hw_rev;
size_t size;
int ret;
struct ath10k_bus_params bus_params = {};
of_id = of_match_device(ath10k_ahb_of_match, &pdev->dev);
if (!of_id) {
dev_err(&pdev->dev, "failed to find matching device tree id\n");
hw_rev = (enum ath10k_hw_rev)of_device_get_match_data(&pdev->dev);
if (!hw_rev) {
dev_err(&pdev->dev, "OF data missing\n");
return -EINVAL;
}
hw_rev = (enum ath10k_hw_rev)of_id->data;
size = sizeof(*ar_pci) + sizeof(*ar_ahb);
ar = ath10k_core_create(size, &pdev->dev, ATH10K_BUS_AHB,
hw_rev, &ath10k_ahb_hif_ops);


@ -94,6 +94,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = true,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA988X_HW_2_0_VERSION,
@ -131,6 +132,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = true,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA9887_HW_1_0_VERSION,
@ -169,6 +171,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA6174_HW_3_2_VERSION,
@ -202,6 +205,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.bmi_large_size_download = true,
.supports_peer_stats_info = true,
.dynamic_sar_support = true,
.hw_restart_disconnect = false,
},
{
.id = QCA6174_HW_2_1_VERSION,
@ -239,6 +243,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA6174_HW_2_1_VERSION,
@ -276,6 +281,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA6174_HW_3_0_VERSION,
@ -313,6 +319,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA6174_HW_3_2_VERSION,
@ -354,6 +361,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.tx_stats_over_pktlog = false,
.supports_peer_stats_info = true,
.dynamic_sar_support = true,
.hw_restart_disconnect = false,
},
{
.id = QCA99X0_HW_2_0_DEV_VERSION,
@ -397,6 +405,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA9984_HW_1_0_DEV_VERSION,
@ -447,6 +456,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA9888_HW_2_0_DEV_VERSION,
@ -494,6 +504,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA9377_HW_1_0_DEV_VERSION,
@ -531,6 +542,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA9377_HW_1_1_DEV_VERSION,
@ -570,6 +582,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA9377_HW_1_1_DEV_VERSION,
@ -600,6 +613,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.uart_pin_workaround = true,
.credit_size_workaround = true,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = QCA4019_HW_1_0_DEV_VERSION,
@ -644,6 +658,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = false,
.hw_restart_disconnect = false,
},
{
.id = WCN3990_HW_1_0_DEV_VERSION,
@ -674,6 +689,7 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.credit_size_workaround = false,
.tx_stats_over_pktlog = false,
.dynamic_sar_support = true,
.hw_restart_disconnect = true,
},
};
@ -2442,6 +2458,7 @@ EXPORT_SYMBOL(ath10k_core_napi_sync_disable);
static void ath10k_core_restart(struct work_struct *work)
{
struct ath10k *ar = container_of(work, struct ath10k, restart_work);
struct ath10k_vif *arvif;
int ret;
set_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags);
@ -2480,6 +2497,14 @@ static void ath10k_core_restart(struct work_struct *work)
ar->state = ATH10K_STATE_RESTARTING;
ath10k_halt(ar);
ath10k_scan_finish(ar);
if (ar->hw_params.hw_restart_disconnect) {
list_for_each_entry(arvif, &ar->arvifs, list) {
if (arvif->is_up &&
arvif->vdev_type == WMI_VDEV_TYPE_STA)
ieee80211_hw_restart_disconnect(arvif->vif);
}
}
ieee80211_restart_hw(ar->hw);
break;
case ATH10K_STATE_OFF:


@ -59,9 +59,6 @@
#define ATH10K_KEEPALIVE_MAX_IDLE 3895
#define ATH10K_KEEPALIVE_MAX_UNRESPONSIVE 3900
/* NAPI poll budget */
#define ATH10K_NAPI_BUDGET 64
/* SMBIOS type containing Board Data File Name Extension */
#define ATH10K_SMBIOS_BDF_EXT_TYPE 0xF8


@ -633,6 +633,8 @@ struct ath10k_hw_params {
bool supports_peer_stats_info;
bool dynamic_sar_support;
bool hw_restart_disconnect;
};
struct htt_resp;


@ -2251,7 +2251,7 @@ static void ath10k_peer_assoc_h_rates(struct ath10k *ar,
band = def.chan->band;
sband = ar->hw->wiphy->bands[band];
ratemask = sta->supp_rates[band];
ratemask = sta->deflink.supp_rates[band];
ratemask &= arvif->bitrate_mask.control[band].legacy;
rates = sband->bitrates;
@ -2296,7 +2296,7 @@ static void ath10k_peer_assoc_h_ht(struct ath10k *ar,
struct ieee80211_sta *sta,
struct wmi_peer_assoc_complete_arg *arg)
{
const struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
const struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
struct ath10k_vif *arvif = (void *)vif->drv_priv;
struct cfg80211_chan_def def;
enum nl80211_band band;
@ -2335,7 +2335,7 @@ static void ath10k_peer_assoc_h_ht(struct ath10k *ar,
if (ht_cap->cap & IEEE80211_HT_CAP_LDPC_CODING)
arg->peer_flags |= ar->wmi.peer_flags->ldbc;
if (sta->bandwidth >= IEEE80211_STA_RX_BW_40) {
if (sta->deflink.bandwidth >= IEEE80211_STA_RX_BW_40) {
arg->peer_flags |= ar->wmi.peer_flags->bw40;
arg->peer_rate_caps |= WMI_RC_CW40_FLAG;
}
@ -2388,7 +2388,8 @@ static void ath10k_peer_assoc_h_ht(struct ath10k *ar,
arg->peer_ht_rates.rates[i] = i;
} else {
arg->peer_ht_rates.num_rates = n;
arg->peer_num_spatial_streams = min(sta->rx_nss, max_nss);
arg->peer_num_spatial_streams = min(sta->deflink.rx_nss,
max_nss);
}
ath10k_dbg(ar, ATH10K_DBG_MAC, "mac ht peer %pM mcs cnt %d nss %d\n",
@ -2545,7 +2546,7 @@ static void ath10k_peer_assoc_h_vht(struct ath10k *ar,
struct ieee80211_sta *sta,
struct wmi_peer_assoc_complete_arg *arg)
{
const struct ieee80211_sta_vht_cap *vht_cap = &sta->vht_cap;
const struct ieee80211_sta_vht_cap *vht_cap = &sta->deflink.vht_cap;
struct ath10k_vif *arvif = (void *)vif->drv_priv;
struct ath10k_hw_params *hw = &ar->hw_params;
struct cfg80211_chan_def def;
@ -2587,10 +2588,10 @@ static void ath10k_peer_assoc_h_vht(struct ath10k *ar,
(1U << (IEEE80211_HT_MAX_AMPDU_FACTOR +
ampdu_factor)) - 1);
if (sta->bandwidth == IEEE80211_STA_RX_BW_80)
if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_80)
arg->peer_flags |= ar->wmi.peer_flags->bw80;
if (sta->bandwidth == IEEE80211_STA_RX_BW_160)
if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_160)
arg->peer_flags |= ar->wmi.peer_flags->bw160;
/* Calculate peer NSS capability from VHT capabilities if STA
@ -2604,7 +2605,7 @@ static void ath10k_peer_assoc_h_vht(struct ath10k *ar,
vht_mcs_mask[i])
max_nss = i + 1;
}
arg->peer_num_spatial_streams = min(sta->rx_nss, max_nss);
arg->peer_num_spatial_streams = min(sta->deflink.rx_nss, max_nss);
arg->peer_vht_rates.rx_max_rate =
__le16_to_cpu(vht_cap->vht_mcs.rx_highest);
arg->peer_vht_rates.rx_mcs_set =
@ -2684,15 +2685,15 @@ static void ath10k_peer_assoc_h_qos(struct ath10k *ar,
static bool ath10k_mac_sta_has_ofdm_only(struct ieee80211_sta *sta)
{
return sta->supp_rates[NL80211_BAND_2GHZ] >>
return sta->deflink.supp_rates[NL80211_BAND_2GHZ] >>
ATH10K_MAC_FIRST_OFDM_RATE_IDX;
}
static enum wmi_phy_mode ath10k_mac_get_phymode_vht(struct ath10k *ar,
struct ieee80211_sta *sta)
{
if (sta->bandwidth == IEEE80211_STA_RX_BW_160) {
switch (sta->vht_cap.cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK) {
if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_160) {
switch (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK) {
case IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ:
return MODE_11AC_VHT160;
case IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ:
@ -2703,13 +2704,13 @@ static enum wmi_phy_mode ath10k_mac_get_phymode_vht(struct ath10k *ar,
}
}
if (sta->bandwidth == IEEE80211_STA_RX_BW_80)
if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_80)
return MODE_11AC_VHT80;
if (sta->bandwidth == IEEE80211_STA_RX_BW_40)
if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_40)
return MODE_11AC_VHT40;
if (sta->bandwidth == IEEE80211_STA_RX_BW_20)
if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_20)
return MODE_11AC_VHT20;
return MODE_UNKNOWN;
@ -2736,15 +2737,15 @@ static void ath10k_peer_assoc_h_phymode(struct ath10k *ar,
switch (band) {
case NL80211_BAND_2GHZ:
if (sta->vht_cap.vht_supported &&
if (sta->deflink.vht_cap.vht_supported &&
!ath10k_peer_assoc_h_vht_masked(vht_mcs_mask)) {
if (sta->bandwidth == IEEE80211_STA_RX_BW_40)
if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_40)
phymode = MODE_11AC_VHT40;
else
phymode = MODE_11AC_VHT20;
} else if (sta->ht_cap.ht_supported &&
} else if (sta->deflink.ht_cap.ht_supported &&
!ath10k_peer_assoc_h_ht_masked(ht_mcs_mask)) {
if (sta->bandwidth == IEEE80211_STA_RX_BW_40)
if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_40)
phymode = MODE_11NG_HT40;
else
phymode = MODE_11NG_HT20;
@ -2759,12 +2760,12 @@ static void ath10k_peer_assoc_h_phymode(struct ath10k *ar,
/*
* Check VHT first.
*/
if (sta->vht_cap.vht_supported &&
if (sta->deflink.vht_cap.vht_supported &&
!ath10k_peer_assoc_h_vht_masked(vht_mcs_mask)) {
phymode = ath10k_mac_get_phymode_vht(ar, sta);
} else if (sta->ht_cap.ht_supported &&
} else if (sta->deflink.ht_cap.ht_supported &&
!ath10k_peer_assoc_h_ht_masked(ht_mcs_mask)) {
if (sta->bandwidth >= IEEE80211_STA_RX_BW_40)
if (sta->deflink.bandwidth >= IEEE80211_STA_RX_BW_40)
phymode = MODE_11NA_HT40;
else
phymode = MODE_11NA_HT20;
@ -3079,8 +3080,8 @@ static void ath10k_bss_assoc(struct ieee80211_hw *hw,
/* ap_sta must be accessed only within rcu section which must be left
* before calling ath10k_setup_peer_smps() which might sleep.
*/
ht_cap = ap_sta->ht_cap;
vht_cap = ap_sta->vht_cap;
ht_cap = ap_sta->deflink.ht_cap;
vht_cap = ap_sta->deflink.vht_cap;
ret = ath10k_peer_assoc_prepare(ar, vif, ap_sta, &peer_arg);
if (ret) {
@ -3278,7 +3279,7 @@ static int ath10k_station_assoc(struct ath10k *ar,
*/
if (!reassoc) {
ret = ath10k_setup_peer_smps(ar, arvif, sta->addr,
&sta->ht_cap);
&sta->deflink.ht_cap);
if (ret) {
ath10k_warn(ar, "failed to setup peer SMPS for vdev %d: %d\n",
arvif->vdev_id, ret);
@ -4118,11 +4119,10 @@ void ath10k_offchan_tx_work(struct work_struct *work)
peer = ath10k_peer_find(ar, vdev_id, peer_addr);
spin_unlock_bh(&ar->data_lock);
if (peer)
if (peer) {
ath10k_warn(ar, "peer %pM on vdev %d already present\n",
peer_addr, vdev_id);
if (!peer) {
} else {
ret = ath10k_peer_create(ar, NULL, NULL, vdev_id,
peer_addr,
WMI_PEER_TYPE_DEFAULT);
@ -5339,13 +5339,29 @@ err:
static void ath10k_stop(struct ieee80211_hw *hw)
{
struct ath10k *ar = hw->priv;
u32 opt;
ath10k_drain_tx(ar);
mutex_lock(&ar->conf_mutex);
if (ar->state != ATH10K_STATE_OFF) {
if (!ar->hw_rfkill_on)
ath10k_halt(ar);
if (!ar->hw_rfkill_on) {
/* If the current driver state is RESTARTING but not yet
* fully RESTARTED because of incoming suspend event,
* then ath10k_halt() is already called via
* ath10k_core_restart() and should not be called here.
*/
if (ar->state != ATH10K_STATE_RESTARTING) {
ath10k_halt(ar);
} else {
/* Suspending here, because when in RESTARTING
* state, ath10k_core_stop() skips
* ath10k_wait_for_suspend().
*/
opt = WMI_PDEV_SUSPEND_AND_DISABLE_INTR;
ath10k_wait_for_suspend(ar, opt);
}
}
ar->state = ATH10K_STATE_OFF;
}
mutex_unlock(&ar->conf_mutex);
@ -6787,10 +6803,10 @@ static int ath10k_sta_set_txpwr(struct ieee80211_hw *hw,
int ret = 0;
s16 txpwr;
if (sta->txpwr.type == NL80211_TX_POWER_AUTOMATIC) {
if (sta->deflink.txpwr.type == NL80211_TX_POWER_AUTOMATIC) {
txpwr = 0;
} else {
txpwr = sta->txpwr.power;
txpwr = sta->deflink.txpwr.power;
if (!txpwr)
return -EINVAL;
}
@ -6910,26 +6926,26 @@ static int ath10k_mac_validate_rate_mask(struct ath10k *ar,
struct ieee80211_sta *sta,
u32 rate_ctrl_flag, u8 nss)
{
if (nss > sta->rx_nss) {
if (nss > sta->deflink.rx_nss) {
ath10k_warn(ar, "Invalid nss field, configured %u limit %u\n",
nss, sta->rx_nss);
nss, sta->deflink.rx_nss);
return -EINVAL;
}
if (ATH10K_HW_PREAMBLE(rate_ctrl_flag) == WMI_RATE_PREAMBLE_VHT) {
if (!sta->vht_cap.vht_supported) {
if (!sta->deflink.vht_cap.vht_supported) {
ath10k_warn(ar, "Invalid VHT rate for sta %pM\n",
sta->addr);
return -EINVAL;
}
} else if (ATH10K_HW_PREAMBLE(rate_ctrl_flag) == WMI_RATE_PREAMBLE_HT) {
if (!sta->ht_cap.ht_supported || sta->vht_cap.vht_supported) {
if (!sta->deflink.ht_cap.ht_supported || sta->deflink.vht_cap.vht_supported) {
ath10k_warn(ar, "Invalid HT rate for sta %pM\n",
sta->addr);
return -EINVAL;
}
} else {
if (sta->ht_cap.ht_supported || sta->vht_cap.vht_supported)
if (sta->deflink.ht_cap.ht_supported || sta->deflink.vht_cap.vht_supported)
return -EINVAL;
}
@ -8272,7 +8288,7 @@ static bool ath10k_mac_set_vht_bitrate_mask_fixup(struct ath10k *ar,
u8 rate = arvif->vht_pfr;
/* skip non vht and multiple rate peers */
if (!sta->vht_cap.vht_supported || arvif->vht_num_rates != 1)
if (!sta->deflink.vht_cap.vht_supported || arvif->vht_num_rates != 1)
return false;
err = ath10k_wmi_peer_set_param(ar, arvif->vdev_id, sta->addr,
@ -8313,7 +8329,7 @@ static void ath10k_mac_clr_bitrate_mask_iter(void *data,
int err;
/* clear vht peers only */
if (arsta->arvif != arvif || !sta->vht_cap.vht_supported)
if (arsta->arvif != arvif || !sta->deflink.vht_cap.vht_supported)
return;
err = ath10k_wmi_peer_set_param(ar, arvif->vdev_id, sta->addr,
@ -8457,13 +8473,14 @@ static void ath10k_sta_rc_update(struct ieee80211_hw *hw,
ath10k_dbg(ar, ATH10K_DBG_STA,
"mac sta rc update for %pM changed %08x bw %d nss %d smps %d\n",
sta->addr, changed, sta->bandwidth, sta->rx_nss,
sta->addr, changed, sta->deflink.bandwidth,
sta->deflink.rx_nss,
sta->smps_mode);
if (changed & IEEE80211_RC_BW_CHANGED) {
bw = WMI_PEER_CHWIDTH_20MHZ;
switch (sta->bandwidth) {
switch (sta->deflink.bandwidth) {
case IEEE80211_STA_RX_BW_20:
bw = WMI_PEER_CHWIDTH_20MHZ;
break;
@ -8478,7 +8495,7 @@ static void ath10k_sta_rc_update(struct ieee80211_hw *hw,
break;
default:
ath10k_warn(ar, "Invalid bandwidth %d in rc update for %pM\n",
sta->bandwidth, sta->addr);
sta->deflink.bandwidth, sta->addr);
bw = WMI_PEER_CHWIDTH_20MHZ;
break;
}
@ -8487,7 +8504,7 @@ static void ath10k_sta_rc_update(struct ieee80211_hw *hw,
}
if (changed & IEEE80211_RC_NSS_CHANGED)
arsta->nss = sta->rx_nss;
arsta->nss = sta->deflink.rx_nss;
if (changed & IEEE80211_RC_SMPS_CHANGED) {
smps = WMI_PEER_SMPS_PS_NONE;


@ -3216,7 +3216,7 @@ static void ath10k_pci_free_irq(struct ath10k *ar)
void ath10k_pci_init_napi(struct ath10k *ar)
{
netif_napi_add(&ar->napi_dev, &ar->napi, ath10k_pci_napi_poll,
ATH10K_NAPI_BUDGET);
NAPI_POLL_WEIGHT);
}
static int ath10k_pci_init_irq(struct ath10k *ar)


@ -2532,7 +2532,7 @@ static int ath10k_sdio_probe(struct sdio_func *func,
}
netif_napi_add(&ar->napi_dev, &ar->napi, ath10k_sdio_napi_poll,
ATH10K_NAPI_BUDGET);
NAPI_POLL_WEIGHT);
ath10k_dbg(ar, ATH10K_DBG_BOOT,
"sdio new func %d vendor 0x%x device 0x%x block 0x%x/0x%x\n",


@ -1243,7 +1243,7 @@ static int ath10k_snoc_napi_poll(struct napi_struct *ctx, int budget)
static void ath10k_snoc_init_napi(struct ath10k *ar)
{
netif_napi_add(&ar->napi_dev, &ar->napi, ath10k_snoc_napi_poll,
ATH10K_NAPI_BUDGET);
NAPI_POLL_WEIGHT);
}
static int ath10k_snoc_request_irq(struct ath10k *ar)


@ -345,6 +345,12 @@ static void ath10k_usb_rx_complete(struct ath10k *ar, struct sk_buff *skb)
ep->ep_ops.ep_rx_complete(ar, skb);
/* The RX complete handler now owns the skb... */
if (test_bit(ATH10K_FLAG_CORE_REGISTERED, &ar->dev_flags)) {
local_bh_disable();
napi_schedule(&ar->napi);
local_bh_enable();
}
return;
out_free_skb:
@ -387,6 +393,7 @@ static int ath10k_usb_hif_start(struct ath10k *ar)
int i;
struct ath10k_usb *ar_usb = ath10k_usb_priv(ar);
ath10k_core_napi_enable(ar);
ath10k_usb_start_recv_pipes(ar);
/* set the TX resource avail threshold for each TX pipe */
@ -462,6 +469,7 @@ err:
static void ath10k_usb_hif_stop(struct ath10k *ar)
{
ath10k_usb_flush_all(ar);
ath10k_core_napi_sync_disable(ar);
}
static u16 ath10k_usb_hif_get_free_queue_number(struct ath10k *ar, u8 pipe_id)
@ -966,6 +974,20 @@ err:
return ret;
}
static int ath10k_usb_napi_poll(struct napi_struct *ctx, int budget)
{
struct ath10k *ar = container_of(ctx, struct ath10k, napi);
int done;
done = ath10k_htt_rx_hl_indication(ar, budget);
ath10k_dbg(ar, ATH10K_DBG_USB, "napi poll: done: %d, budget:%d\n", done, budget);
if (done < budget)
napi_complete_done(ctx, done);
return done;
}
/* ath10k usb driver registered functions */
static int ath10k_usb_probe(struct usb_interface *interface,
const struct usb_device_id *id)
@ -992,6 +1014,9 @@ static int ath10k_usb_probe(struct usb_interface *interface,
return -ENOMEM;
}
netif_napi_add(&ar->napi_dev, &ar->napi, ath10k_usb_napi_poll,
NAPI_POLL_WEIGHT);
usb_get_dev(dev);
vendor_id = le16_to_cpu(dev->descriptor.idVendor);
product_id = le16_to_cpu(dev->descriptor.idProduct);
@ -1013,6 +1038,7 @@ static int ath10k_usb_probe(struct usb_interface *interface,
bus_params.dev_type = ATH10K_DEV_TYPE_HL;
/* TODO: don't know yet how to get chip_id with USB */
bus_params.chip_id = 0;
bus_params.hl_msdu_ids = true;
ret = ath10k_core_register(ar, &bus_params);
if (ret) {
ath10k_warn(ar, "failed to register driver core: %d\n", ret);
@ -1044,6 +1070,7 @@ static void ath10k_usb_remove(struct usb_interface *interface)
return;
ath10k_core_unregister(ar_usb->ar);
netif_napi_del(&ar_usb->ar->napi);
ath10k_usb_destroy(ar_usb->ar);
usb_put_dev(interface_to_usbdev(interface));
ath10k_core_destroy(ar_usb->ar);


@ -17,13 +17,14 @@ ath11k-y += core.o \
peer.o \
dbring.o \
hw.o \
wow.o
pcic.o
ath11k-$(CONFIG_ATH11K_DEBUGFS) += debugfs.o debugfs_htt_stats.o debugfs_sta.o
ath11k-$(CONFIG_NL80211_TESTMODE) += testmode.o
ath11k-$(CONFIG_ATH11K_TRACING) += trace.o
ath11k-$(CONFIG_THERMAL) += thermal.o
ath11k-$(CONFIG_ATH11K_SPECTRAL) += spectral.o
ath11k-$(CONFIG_PM) += wow.o
obj-$(CONFIG_ATH11K_AHB) += ath11k_ahb.o
ath11k_ahb-y += ahb.o


@ -1,6 +1,7 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/module.h>
@ -12,6 +13,7 @@
#include "debug.h"
#include "hif.h"
#include <linux/remoteproc.h>
#include "pcic.h"
static const struct of_device_id ath11k_ahb_of_match[] = {
/* TODO: Should we change the compatible string to something similar
@ -23,18 +25,14 @@ static const struct of_device_id ath11k_ahb_of_match[] = {
{ .compatible = "qcom,ipq6018-wifi",
.data = (void *)ATH11K_HW_IPQ6018_HW10,
},
{ .compatible = "qcom,wcn6750-wifi",
.data = (void *)ATH11K_HW_WCN6750_HW10,
},
{ }
};
MODULE_DEVICE_TABLE(of, ath11k_ahb_of_match);
static const struct ath11k_bus_params ath11k_ahb_bus_params = {
.mhi_support = false,
.m3_fw_support = false,
.fixed_bdf_addr = true,
.fixed_mem_region = true,
};
#define ATH11K_IRQ_CE0_OFFSET 4
static const char *irq_name[ATH11K_IRQ_NUM_MAX] = {
@ -134,6 +132,16 @@ enum ext_irq_num {
tcl2host_status_ring,
};
static int
ath11k_ahb_get_msi_irq_wcn6750(struct ath11k_base *ab, unsigned int vector)
{
return ab->pci.msi.irqs[vector];
}
static const struct ath11k_pci_ops ath11k_ahb_pci_ops_wcn6750 = {
.get_msi_irq = ath11k_ahb_get_msi_irq_wcn6750,
};
static inline u32 ath11k_ahb_read32(struct ath11k_base *ab, u32 offset)
{
return ioread32(ab->mem + offset);
@ -401,6 +409,9 @@ static void ath11k_ahb_free_irq(struct ath11k_base *ab)
int irq_idx;
int i;
if (ab->hw_params.hybrid_bus_type)
return ath11k_pcic_free_irq(ab);
for (i = 0; i < ab->hw_params.ce_count; i++) {
if (ath11k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
continue;
@ -555,6 +566,9 @@ static int ath11k_ahb_config_irq(struct ath11k_base *ab)
int irq, irq_idx, i;
int ret;
if (ab->hw_params.hybrid_bus_type)
return ath11k_pcic_config_irq(ab);
/* Configure CE irqs */
for (i = 0; i < ab->hw_params.ce_count; i++) {
struct ath11k_ce_pipe *ce_pipe = &ab->ce.ce_pipe[i];
@ -624,7 +638,7 @@ static int ath11k_ahb_map_service_to_pipe(struct ath11k_base *ab, u16 service_id
return 0;
}
static const struct ath11k_hif_ops ath11k_ahb_hif_ops = {
static const struct ath11k_hif_ops ath11k_ahb_hif_ops_ipq8074 = {
.start = ath11k_ahb_start,
.stop = ath11k_ahb_stop,
.read32 = ath11k_ahb_read32,
@ -636,6 +650,20 @@ static const struct ath11k_hif_ops ath11k_ahb_hif_ops = {
.power_up = ath11k_ahb_power_up,
};
static const struct ath11k_hif_ops ath11k_ahb_hif_ops_wcn6750 = {
.start = ath11k_pcic_start,
.stop = ath11k_pcic_stop,
.read32 = ath11k_pcic_read32,
.write32 = ath11k_pcic_write32,
.irq_enable = ath11k_pcic_ext_irq_enable,
.irq_disable = ath11k_pcic_ext_irq_disable,
.get_msi_address = ath11k_pcic_get_msi_address,
.get_user_msi_vector = ath11k_pcic_get_user_msi_assignment,
.map_service_to_pipe = ath11k_pcic_map_service_to_pipe,
.power_down = ath11k_ahb_power_down,
.power_up = ath11k_ahb_power_up,
};
static int ath11k_core_get_rproc(struct ath11k_base *ab)
{
struct ath11k_ahb *ab_ahb = ath11k_ahb_priv(ab);
@ -658,12 +686,84 @@ static int ath11k_core_get_rproc(struct ath11k_base *ab)
return 0;
}
static int ath11k_ahb_setup_msi_resources(struct ath11k_base *ab)
{
struct platform_device *pdev = ab->pdev;
phys_addr_t msi_addr_pa;
dma_addr_t msi_addr_iova;
struct resource *res;
int int_prop;
int ret;
int i;
ret = ath11k_pcic_init_msi_config(ab);
if (ret) {
ath11k_err(ab, "failed to init msi config: %d\n", ret);
return ret;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
ath11k_err(ab, "failed to fetch msi_addr\n");
return -ENOENT;
}
msi_addr_pa = res->start;
msi_addr_iova = dma_map_resource(ab->dev, msi_addr_pa, PAGE_SIZE,
DMA_FROM_DEVICE, 0);
if (dma_mapping_error(ab->dev, msi_addr_iova))
return -ENOMEM;
ab->pci.msi.addr_lo = lower_32_bits(msi_addr_iova);
ab->pci.msi.addr_hi = upper_32_bits(msi_addr_iova);
ret = of_property_read_u32_index(ab->dev->of_node, "interrupts", 1, &int_prop);
if (ret)
return ret;
ab->pci.msi.ep_base_data = int_prop + 32;
for (i = 0; i < ab->pci.msi.config->total_vectors; i++) {
res = platform_get_resource(pdev, IORESOURCE_IRQ, i);
if (!res)
return -ENODEV;
ab->pci.msi.irqs[i] = res->start;
}
set_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags);
return 0;
}
static int ath11k_ahb_setup_resources(struct ath11k_base *ab)
{
struct platform_device *pdev = ab->pdev;
struct resource *mem_res;
void __iomem *mem;
if (ab->hw_params.hybrid_bus_type)
return ath11k_ahb_setup_msi_resources(ab);
mem = devm_platform_get_and_ioremap_resource(pdev, 0, &mem_res);
if (IS_ERR(mem)) {
dev_err(&pdev->dev, "ioremap error\n");
return PTR_ERR(mem);
}
ab->mem = mem;
ab->mem_len = resource_size(mem_res);
return 0;
}
static int ath11k_ahb_probe(struct platform_device *pdev)
{
struct ath11k_base *ab;
const struct of_device_id *of_id;
struct resource *mem_res;
void __iomem *mem;
const struct ath11k_hif_ops *hif_ops;
const struct ath11k_pci_ops *pci_ops;
enum ath11k_hw_rev hw_rev;
int ret;
of_id = of_match_device(ath11k_ahb_of_match, &pdev->dev);
@ -672,10 +772,21 @@ static int ath11k_ahb_probe(struct platform_device *pdev)
return -EINVAL;
}
mem = devm_platform_get_and_ioremap_resource(pdev, 0, &mem_res);
if (IS_ERR(mem)) {
dev_err(&pdev->dev, "ioremap error\n");
return PTR_ERR(mem);
hw_rev = (enum ath11k_hw_rev)of_id->data;
switch (hw_rev) {
case ATH11K_HW_IPQ8074:
case ATH11K_HW_IPQ6018_HW10:
hif_ops = &ath11k_ahb_hif_ops_ipq8074;
pci_ops = NULL;
break;
case ATH11K_HW_WCN6750_HW10:
hif_ops = &ath11k_ahb_hif_ops_wcn6750;
pci_ops = &ath11k_ahb_pci_ops_wcn6750;
break;
default:
dev_err(&pdev->dev, "unsupported device type %d\n", hw_rev);
return -EOPNOTSUPP;
}
ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
@ -685,20 +796,22 @@ static int ath11k_ahb_probe(struct platform_device *pdev)
}
ab = ath11k_core_alloc(&pdev->dev, sizeof(struct ath11k_ahb),
ATH11K_BUS_AHB,
&ath11k_ahb_bus_params);
ATH11K_BUS_AHB);
if (!ab) {
dev_err(&pdev->dev, "failed to allocate ath11k base\n");
return -ENOMEM;
}
ab->hif.ops = &ath11k_ahb_hif_ops;
ab->hif.ops = hif_ops;
ab->pci.ops = pci_ops;
ab->pdev = pdev;
ab->hw_rev = (enum ath11k_hw_rev)of_id->data;
ab->mem = mem;
ab->mem_len = resource_size(mem_res);
ab->hw_rev = hw_rev;
platform_set_drvdata(pdev, ab);
ret = ath11k_ahb_setup_resources(ab);
if (ret)
goto err_core_free;
ret = ath11k_core_pre_init(ab);
if (ret)
goto err_core_free;


@ -1,6 +1,7 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2021, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include "dp_rx.h"
@ -918,9 +919,6 @@ int ath11k_ce_init_pipes(struct ath11k_base *ab)
int i;
int ret;
ath11k_ce_get_shadow_config(ab, &ab->qmi.ce_cfg.shadow_reg_v2,
&ab->qmi.ce_cfg.shadow_reg_v2_len);
for (i = 0; i < ab->hw_params.ce_count; i++) {
pipe = &ab->ce.ce_pipe[i];


@ -1,7 +1,7 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2021 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/module.h>
@ -9,6 +9,7 @@
#include <linux/remoteproc.h>
#include <linux/firmware.h>
#include <linux/of.h>
#include "core.h"
#include "dp_tx.h"
#include "dp_rx.h"
@ -95,11 +96,20 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.hal_params = &ath11k_hw_hal_params_ipq8074,
.supports_dynamic_smps_6ghz = false,
.alloc_cacheable_memory = true,
.wakeup_mhi = false,
.supports_rssi_stats = false,
.fw_wmi_diag_event = false,
.current_cc_support = false,
.dbr_debug_support = true,
.global_reset = false,
.bios_sar_capa = NULL,
.m3_fw_support = false,
.fixed_bdf_addr = true,
.fixed_mem_region = true,
.static_window_map = false,
.hybrid_bus_type = false,
.dp_window_idx = 0,
.ce_window_idx = 0,
.fixed_fw_mem = false,
},
{
.hw_rev = ATH11K_HW_IPQ6018_HW10,
@ -161,11 +171,20 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.hal_params = &ath11k_hw_hal_params_ipq8074,
.supports_dynamic_smps_6ghz = false,
.alloc_cacheable_memory = true,
.wakeup_mhi = false,
.supports_rssi_stats = false,
.fw_wmi_diag_event = false,
.current_cc_support = false,
.dbr_debug_support = true,
.global_reset = false,
.bios_sar_capa = NULL,
.m3_fw_support = false,
.fixed_bdf_addr = true,
.fixed_mem_region = true,
.static_window_map = false,
.hybrid_bus_type = false,
.dp_window_idx = 0,
.ce_window_idx = 0,
.fixed_fw_mem = false,
},
{
.name = "qca6390 hw2.0",
@ -219,18 +238,27 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.num_peers = 512,
.supports_suspend = true,
.hal_desc_sz = sizeof(struct hal_rx_desc_ipq8074),
.supports_regdb = true,
.supports_regdb = false,
.fix_l1ss = true,
.credit_flow = true,
.max_tx_ring = DP_TCL_NUM_RING_MAX_QCA6390,
.hal_params = &ath11k_hw_hal_params_qca6390,
.supports_dynamic_smps_6ghz = false,
.alloc_cacheable_memory = false,
.wakeup_mhi = true,
.supports_rssi_stats = true,
.fw_wmi_diag_event = true,
.current_cc_support = true,
.dbr_debug_support = false,
.global_reset = true,
.bios_sar_capa = NULL,
.m3_fw_support = true,
.fixed_bdf_addr = false,
.fixed_mem_region = false,
.static_window_map = false,
.hybrid_bus_type = false,
.dp_window_idx = 0,
.ce_window_idx = 0,
.fixed_fw_mem = false,
},
{
.name = "qcn9074 hw1.0",
@ -291,11 +319,20 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.hal_params = &ath11k_hw_hal_params_ipq8074,
.supports_dynamic_smps_6ghz = true,
.alloc_cacheable_memory = true,
.wakeup_mhi = false,
.supports_rssi_stats = false,
.fw_wmi_diag_event = false,
.current_cc_support = false,
.dbr_debug_support = true,
.global_reset = false,
.bios_sar_capa = NULL,
.m3_fw_support = true,
.fixed_bdf_addr = false,
.fixed_mem_region = false,
.static_window_map = true,
.hybrid_bus_type = false,
.dp_window_idx = 3,
.ce_window_idx = 2,
.fixed_fw_mem = false,
},
{
.name = "wcn6855 hw2.0",
@ -356,11 +393,20 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.hal_params = &ath11k_hw_hal_params_qca6390,
.supports_dynamic_smps_6ghz = false,
.alloc_cacheable_memory = false,
.wakeup_mhi = true,
.supports_rssi_stats = true,
.fw_wmi_diag_event = true,
.current_cc_support = true,
.dbr_debug_support = false,
.global_reset = true,
.bios_sar_capa = &ath11k_hw_sar_capa_wcn6855,
.m3_fw_support = true,
.fixed_bdf_addr = false,
.fixed_mem_region = false,
.static_window_map = false,
.hybrid_bus_type = false,
.dp_window_idx = 0,
.ce_window_idx = 0,
.fixed_fw_mem = false,
},
{
.name = "wcn6855 hw2.1",
@ -420,25 +466,119 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.hal_params = &ath11k_hw_hal_params_qca6390,
.supports_dynamic_smps_6ghz = false,
.alloc_cacheable_memory = false,
.wakeup_mhi = true,
.supports_rssi_stats = true,
.fw_wmi_diag_event = true,
.current_cc_support = true,
.dbr_debug_support = false,
.global_reset = true,
.bios_sar_capa = &ath11k_hw_sar_capa_wcn6855,
.m3_fw_support = true,
.fixed_bdf_addr = false,
.fixed_mem_region = false,
.static_window_map = false,
.hybrid_bus_type = false,
.dp_window_idx = 0,
.ce_window_idx = 0,
.fixed_fw_mem = false,
},
{
.name = "wcn6750 hw1.0",
.hw_rev = ATH11K_HW_WCN6750_HW10,
.fw = {
.dir = "WCN6750/hw1.0",
.board_size = 256 * 1024,
.cal_offset = 128 * 1024,
},
.max_radios = 1,
.bdf_addr = 0x4B0C0000,
.hw_ops = &wcn6750_ops,
.ring_mask = &ath11k_hw_ring_mask_qca6390,
.internal_sleep_clock = false,
.regs = &wcn6750_regs,
.qmi_service_ins_id = ATH11K_QMI_WLFW_SERVICE_INS_ID_V01_WCN6750,
.host_ce_config = ath11k_host_ce_config_qca6390,
.ce_count = 9,
.target_ce_config = ath11k_target_ce_config_wlan_qca6390,
.target_ce_count = 9,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_qca6390,
.svc_to_ce_map_len = 14,
.rfkill_pin = 0,
.rfkill_cfg = 0,
.rfkill_on_level = 0,
.single_pdev_only = true,
.rxdma1_enable = false,
.num_rxmda_per_pdev = 1,
.rx_mac_buf_ring = true,
.vdev_start_delay = true,
.htt_peer_map_v2 = false,
.spectral = {
.fft_sz = 0,
.fft_pad_sz = 0,
.summary_pad_sz = 0,
.fft_hdr_len = 0,
.max_fft_bins = 0,
},
.interface_modes = BIT(NL80211_IFTYPE_STATION) |
BIT(NL80211_IFTYPE_AP),
.supports_monitor = false,
.supports_shadow_regs = true,
.idle_ps = true,
.supports_sta_ps = true,
.cold_boot_calib = false,
.fw_mem_mode = 0,
.num_vdevs = 16 + 1,
.num_peers = 512,
.supports_suspend = false,
.hal_desc_sz = sizeof(struct hal_rx_desc_qcn9074),
.supports_regdb = true,
.fix_l1ss = false,
.credit_flow = true,
.max_tx_ring = DP_TCL_NUM_RING_MAX_QCA6390,
.hal_params = &ath11k_hw_hal_params_qca6390,
.supports_dynamic_smps_6ghz = false,
.alloc_cacheable_memory = false,
.supports_rssi_stats = true,
.fw_wmi_diag_event = false,
.current_cc_support = true,
.dbr_debug_support = false,
.global_reset = false,
.bios_sar_capa = NULL,
.m3_fw_support = false,
.fixed_bdf_addr = false,
.fixed_mem_region = false,
.static_window_map = true,
.hybrid_bus_type = true,
.dp_window_idx = 1,
.ce_window_idx = 2,
.fixed_fw_mem = true,
},
};
static inline struct ath11k_pdev *ath11k_core_get_single_pdev(struct ath11k_base *ab)
{
WARN_ON(!ab->hw_params.single_pdev_only);
return &ab->pdevs[0];
}
int ath11k_core_suspend(struct ath11k_base *ab)
{
int ret;
struct ath11k_pdev *pdev;
struct ath11k *ar;
if (!ab->hw_params.supports_suspend)
return -EOPNOTSUPP;
/* TODO: there can frames in queues so for now add delay as a hack.
* Need to implement to handle and remove this delay.
/* so far single_pdev_only chips have supports_suspend as true
* and only the first pdev is valid.
*/
msleep(500);
pdev = ath11k_core_get_single_pdev(ab);
ar = pdev->ar;
if (!ar || ar->state != ATH11K_STATE_OFF)
return 0;
ret = ath11k_dp_rx_pktlog_stop(ab, true);
if (ret) {
@ -447,6 +587,12 @@ int ath11k_core_suspend(struct ath11k_base *ab)
return ret;
}
ret = ath11k_mac_wait_tx_complete(ar);
if (ret) {
ath11k_warn(ab, "failed to wait tx complete: %d\n", ret);
return ret;
}
ret = ath11k_wow_enable(ab);
if (ret) {
ath11k_warn(ab, "failed to enable wow during suspend: %d\n", ret);
@ -479,10 +625,20 @@ EXPORT_SYMBOL(ath11k_core_suspend);
int ath11k_core_resume(struct ath11k_base *ab)
{
int ret;
struct ath11k_pdev *pdev;
struct ath11k *ar;
if (!ab->hw_params.supports_suspend)
return -EOPNOTSUPP;
/* so far single_pdev_only chips have supports_suspend as true
* and only the first pdev is valid.
*/
pdev = ath11k_core_get_single_pdev(ab);
ar = pdev->ar;
if (!ar || ar->state != ATH11K_STATE_OFF)
return 0;
ret = ath11k_hif_resume(ab);
if (ret) {
ath11k_warn(ab, "failed to resume hif during resume: %d\n", ret);
@ -509,6 +665,97 @@ int ath11k_core_resume(struct ath11k_base *ab)
}
EXPORT_SYMBOL(ath11k_core_resume);
static void ath11k_core_check_cc_code_bdfext(const struct dmi_header *hdr, void *data)
{
struct ath11k_base *ab = data;
const char *magic = ATH11K_SMBIOS_BDF_EXT_MAGIC;
struct ath11k_smbios_bdf *smbios = (struct ath11k_smbios_bdf *)hdr;
ssize_t copied;
size_t len;
int i;
if (ab->qmi.target.bdf_ext[0] != '\0')
return;
if (hdr->type != ATH11K_SMBIOS_BDF_EXT_TYPE)
return;
if (hdr->length != ATH11K_SMBIOS_BDF_EXT_LENGTH) {
ath11k_dbg(ab, ATH11K_DBG_BOOT,
"wrong smbios bdf ext type length (%d).\n",
hdr->length);
return;
}
spin_lock_bh(&ab->base_lock);
switch (smbios->country_code_flag) {
case ATH11K_SMBIOS_CC_ISO:
ab->new_alpha2[0] = (smbios->cc_code >> 8) & 0xff;
ab->new_alpha2[1] = smbios->cc_code & 0xff;
ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot smbios cc_code %c%c\n",
ab->new_alpha2[0], ab->new_alpha2[1]);
break;
case ATH11K_SMBIOS_CC_WW:
ab->new_alpha2[0] = '0';
ab->new_alpha2[1] = '0';
ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot smbios worldwide regdomain\n");
break;
default:
ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot ignore smbios country code setting %d\n",
smbios->country_code_flag);
break;
}
spin_unlock_bh(&ab->base_lock);
if (!smbios->bdf_enabled) {
ath11k_dbg(ab, ATH11K_DBG_BOOT, "bdf variant name not found.\n");
return;
}
/* Only one string exists (per spec) */
if (memcmp(smbios->bdf_ext, magic, strlen(magic)) != 0) {
ath11k_dbg(ab, ATH11K_DBG_BOOT,
"bdf variant magic does not match.\n");
return;
}
len = min_t(size_t,
strlen(smbios->bdf_ext), sizeof(ab->qmi.target.bdf_ext));
for (i = 0; i < len; i++) {
if (!isascii(smbios->bdf_ext[i]) || !isprint(smbios->bdf_ext[i])) {
ath11k_dbg(ab, ATH11K_DBG_BOOT,
"bdf variant name contains non ascii chars.\n");
return;
}
}
/* Copy extension name without magic prefix */
copied = strscpy(ab->qmi.target.bdf_ext, smbios->bdf_ext + strlen(magic),
sizeof(ab->qmi.target.bdf_ext));
if (copied < 0) {
ath11k_dbg(ab, ATH11K_DBG_BOOT,
"bdf variant string is longer than the buffer can accommodate\n");
return;
}
ath11k_dbg(ab, ATH11K_DBG_BOOT,
"found and validated bdf variant smbios_type 0x%x bdf %s\n",
ATH11K_SMBIOS_BDF_EXT_TYPE, ab->qmi.target.bdf_ext);
}
int ath11k_core_check_smbios(struct ath11k_base *ab)
{
ab->qmi.target.bdf_ext[0] = '\0';
dmi_walk(ath11k_core_check_cc_code_bdfext, ab);
if (ab->qmi.target.bdf_ext[0] == '\0')
return -ENODATA;
return 0;
}
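As an illustration of what the DMI walk above expects, a hypothetical SMBIOS type 0xF8 entry could decode as follows. All values are invented for the example; only the field layout comes from struct ath11k_smbios_bdf, added to core.h further down in this diff:
/* Illustrative SMBIOS payload as seen by ath11k_core_check_cc_code_bdfext():
 *
 *   hdr.type          = 0xf8    (ATH11K_SMBIOS_BDF_EXT_TYPE)
 *   hdr.length        = 0x09    (ATH11K_SMBIOS_BDF_EXT_LENGTH)
 *   country_code_flag = 1       (ATH11K_SMBIOS_CC_ISO)
 *   cc_code           = 0x5553  ("US": 'U' = 0x55, 'S' = 0x53)
 *   bdf_enabled       = 1
 *   bdf_ext           = "BDF_example"  (magic "BDF_" plus the variant name)
 *
 * With such an entry the function above sets ab->new_alpha2 to "US" and
 * ab->qmi.target.bdf_ext to "example".
 */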
int ath11k_core_check_dt(struct ath11k_base *ab)
{
size_t max_len = sizeof(ab->qmi.target.bdf_ext);
@ -532,13 +779,13 @@ int ath11k_core_check_dt(struct ath11k_base *ab)
return 0;
}
static int ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
size_t name_len)
static int __ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
size_t name_len, bool with_variant)
{
/* strlen(',variant=') + strlen(ab->qmi.target.bdf_ext) */
char variant[9 + ATH11K_QMI_BDF_EXT_STR_LENGTH] = { 0 };
if (ab->qmi.target.bdf_ext[0] != '\0')
if (with_variant && ab->qmi.target.bdf_ext[0] != '\0')
scnprintf(variant, sizeof(variant), ",variant=%s",
ab->qmi.target.bdf_ext);
@ -568,6 +815,18 @@ static int ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
return 0;
}
static int ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
size_t name_len)
{
return __ath11k_core_create_board_name(ab, name, name_len, true);
}
static int ath11k_core_create_fallback_board_name(struct ath11k_base *ab, char *name,
size_t name_len)
{
return __ath11k_core_create_board_name(ab, name, name_len, false);
}
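The only difference between the two wrappers is whether the ",variant=" suffix derived from the SMBIOS or DT BDF extension is appended; ath11k_core_fetch_bdf() below tries the variant-qualified name first and falls back to the plain one. Schematically, with an invented variant string and the device-specific fields of the base name elided since they are not part of this hunk:
/* Illustrative only:
 *   ath11k_core_create_board_name()          -> "bus=...,...,variant=example"
 *   ath11k_core_create_fallback_board_name() -> "bus=...,..."
 */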
const struct firmware *ath11k_core_firmware_request(struct ath11k_base *ab,
const char *file)
{
@ -602,7 +861,9 @@ static int ath11k_core_parse_bd_ie_board(struct ath11k_base *ab,
struct ath11k_board_data *bd,
const void *buf, size_t buf_len,
const char *boardname,
int bd_ie_type)
int ie_id,
int name_id,
int data_id)
{
const struct ath11k_fw_ie *hdr;
bool name_match_found;
@ -612,7 +873,7 @@ static int ath11k_core_parse_bd_ie_board(struct ath11k_base *ab,
name_match_found = false;
/* go through ATH11K_BD_IE_BOARD_ elements */
/* go through ATH11K_BD_IE_BOARD_/ATH11K_BD_IE_REGDB_ elements */
while (buf_len > sizeof(struct ath11k_fw_ie)) {
hdr = buf;
board_ie_id = le32_to_cpu(hdr->id);
@ -623,48 +884,50 @@ static int ath11k_core_parse_bd_ie_board(struct ath11k_base *ab,
buf += sizeof(*hdr);
if (buf_len < ALIGN(board_ie_len, 4)) {
ath11k_err(ab, "invalid ATH11K_BD_IE_BOARD length: %zu < %zu\n",
ath11k_err(ab, "invalid %s length: %zu < %zu\n",
ath11k_bd_ie_type_str(ie_id),
buf_len, ALIGN(board_ie_len, 4));
ret = -EINVAL;
goto out;
}
switch (board_ie_id) {
case ATH11K_BD_IE_BOARD_NAME:
if (board_ie_id == name_id) {
ath11k_dbg_dump(ab, ATH11K_DBG_BOOT, "board name", "",
board_ie_data, board_ie_len);
if (board_ie_len != strlen(boardname))
break;
goto next;
ret = memcmp(board_ie_data, boardname, strlen(boardname));
if (ret)
break;
goto next;
name_match_found = true;
ath11k_dbg(ab, ATH11K_DBG_BOOT,
"boot found match for name '%s'",
"boot found match %s for name '%s'",
ath11k_bd_ie_type_str(ie_id),
boardname);
break;
case ATH11K_BD_IE_BOARD_DATA:
} else if (board_ie_id == data_id) {
if (!name_match_found)
/* no match found */
break;
goto next;
ath11k_dbg(ab, ATH11K_DBG_BOOT,
"boot found board data for '%s'", boardname);
"boot found %s for '%s'",
ath11k_bd_ie_type_str(ie_id),
boardname);
bd->data = board_ie_data;
bd->len = board_ie_len;
ret = 0;
goto out;
default:
ath11k_warn(ab, "unknown ATH11K_BD_IE_BOARD found: %d\n",
} else {
ath11k_warn(ab, "unknown %s id found: %d\n",
ath11k_bd_ie_type_str(ie_id),
board_ie_id);
break;
}
next:
/* jump over the padding */
board_ie_len = ALIGN(board_ie_len, 4);
@ -681,7 +944,10 @@ out:
static int ath11k_core_fetch_board_data_api_n(struct ath11k_base *ab,
struct ath11k_board_data *bd,
const char *boardname)
const char *boardname,
int ie_id_match,
int name_id,
int data_id)
{
size_t len, magic_len;
const u8 *data;
@ -746,22 +1012,23 @@ static int ath11k_core_fetch_board_data_api_n(struct ath11k_base *ab,
goto err;
}
switch (ie_id) {
case ATH11K_BD_IE_BOARD:
if (ie_id == ie_id_match) {
ret = ath11k_core_parse_bd_ie_board(ab, bd, data,
ie_len,
boardname,
ATH11K_BD_IE_BOARD);
ie_id_match,
name_id,
data_id);
if (ret == -ENOENT)
/* no match found, continue */
break;
goto next;
else if (ret)
/* there was an error, bail out */
goto err;
/* either found or error, so stop searching */
goto out;
}
next:
/* jump over the padding */
ie_len = ALIGN(ie_len, 4);
@ -771,8 +1038,9 @@ static int ath11k_core_fetch_board_data_api_n(struct ath11k_base *ab,
out:
if (!bd->data || !bd->len) {
ath11k_err(ab,
"failed to fetch board data for %s from %s\n",
ath11k_dbg(ab, ATH11K_DBG_BOOT,
"failed to fetch %s for %s from %s\n",
ath11k_bd_ie_type_str(ie_id_match),
boardname, filepath);
ret = -ENODATA;
goto err;
@ -803,24 +1071,52 @@ int ath11k_core_fetch_board_data_api_1(struct ath11k_base *ab,
#define BOARD_NAME_SIZE 200
int ath11k_core_fetch_bdf(struct ath11k_base *ab, struct ath11k_board_data *bd)
{
char boardname[BOARD_NAME_SIZE];
char boardname[BOARD_NAME_SIZE], fallback_boardname[BOARD_NAME_SIZE];
char *filename, filepath[100];
int ret;
ret = ath11k_core_create_board_name(ab, boardname, BOARD_NAME_SIZE);
filename = ATH11K_BOARD_API2_FILE;
ret = ath11k_core_create_board_name(ab, boardname, sizeof(boardname));
if (ret) {
ath11k_err(ab, "failed to create board name: %d", ret);
return ret;
}
ab->bd_api = 2;
ret = ath11k_core_fetch_board_data_api_n(ab, bd, boardname);
ret = ath11k_core_fetch_board_data_api_n(ab, bd, boardname,
ATH11K_BD_IE_BOARD,
ATH11K_BD_IE_BOARD_NAME,
ATH11K_BD_IE_BOARD_DATA);
if (!ret)
goto success;
ret = ath11k_core_create_fallback_board_name(ab, fallback_boardname,
sizeof(fallback_boardname));
if (ret) {
ath11k_err(ab, "failed to create fallback board name: %d", ret);
return ret;
}
ret = ath11k_core_fetch_board_data_api_n(ab, bd, fallback_boardname,
ATH11K_BD_IE_BOARD,
ATH11K_BD_IE_BOARD_NAME,
ATH11K_BD_IE_BOARD_DATA);
if (!ret)
goto success;
ab->bd_api = 1;
ret = ath11k_core_fetch_board_data_api_1(ab, bd, ATH11K_DEFAULT_BOARD_FILE);
if (ret) {
ath11k_err(ab, "failed to fetch board-2.bin or board.bin from %s\n",
ath11k_core_create_firmware_path(ab, filename,
filepath, sizeof(filepath));
ath11k_err(ab, "failed to fetch board data for %s from %s\n",
boardname, filepath);
if (memcmp(boardname, fallback_boardname, strlen(boardname)))
ath11k_err(ab, "failed to fetch board data for %s from %s\n",
fallback_boardname, filepath);
ath11k_err(ab, "failed to fetch board.bin from %s\n",
ab->hw_params.fw.dir);
return ret;
}
@ -832,13 +1128,32 @@ success:
int ath11k_core_fetch_regdb(struct ath11k_base *ab, struct ath11k_board_data *bd)
{
char boardname[BOARD_NAME_SIZE];
int ret;
ret = ath11k_core_create_board_name(ab, boardname, BOARD_NAME_SIZE);
if (ret) {
ath11k_dbg(ab, ATH11K_DBG_BOOT,
"failed to create board name for regdb: %d", ret);
goto exit;
}
ret = ath11k_core_fetch_board_data_api_n(ab, bd, boardname,
ATH11K_BD_IE_REGDB,
ATH11K_BD_IE_REGDB_NAME,
ATH11K_BD_IE_REGDB_DATA);
if (!ret)
goto exit;
ret = ath11k_core_fetch_board_data_api_1(ab, bd, ATH11K_REGDB_FILE_NAME);
if (ret)
ath11k_dbg(ab, ATH11K_DBG_BOOT, "failed to fetch %s from %s\n",
ATH11K_REGDB_FILE_NAME, ab->hw_params.fw.dir);
exit:
if (!ret)
ath11k_dbg(ab, ATH11K_DBG_BOOT, "fetched regdb\n");
return ret;
}
@ -952,21 +1267,14 @@ static void ath11k_core_pdev_destroy(struct ath11k_base *ab)
ath11k_debugfs_pdev_destroy(ab);
}
static int ath11k_core_start(struct ath11k_base *ab,
enum ath11k_firmware_mode mode)
static int ath11k_core_start(struct ath11k_base *ab)
{
int ret;
ret = ath11k_qmi_firmware_start(ab, mode);
if (ret) {
ath11k_err(ab, "failed to attach wmi: %d\n", ret);
return ret;
}
ret = ath11k_wmi_attach(ab);
if (ret) {
ath11k_err(ab, "failed to attach wmi: %d\n", ret);
goto err_firmware_stop;
return ret;
}
ret = ath11k_htc_init(ab);
@ -1041,7 +1349,7 @@ static int ath11k_core_start(struct ath11k_base *ab,
}
/* put hardware to DBS mode */
if (ab->hw_params.single_pdev_only) {
if (ab->hw_params.single_pdev_only && ab->hw_params.num_rxmda_per_pdev > 1) {
ret = ath11k_wmi_set_hw_mode(ab, WMI_HOST_HW_MODE_DBS);
if (ret) {
ath11k_err(ab, "failed to send dbs mode: %d\n", ret);
@ -1066,8 +1374,23 @@ err_hif_stop:
ath11k_hif_stop(ab);
err_wmi_detach:
ath11k_wmi_detach(ab);
err_firmware_stop:
ath11k_qmi_firmware_stop(ab);
return ret;
}
static int ath11k_core_start_firmware(struct ath11k_base *ab,
enum ath11k_firmware_mode mode)
{
int ret;
ath11k_ce_get_shadow_config(ab, &ab->qmi.ce_cfg.shadow_reg_v2,
&ab->qmi.ce_cfg.shadow_reg_v2_len);
ret = ath11k_qmi_firmware_start(ab, mode);
if (ret) {
ath11k_err(ab, "failed to send firmware start: %d\n", ret);
return ret;
}
return ret;
}
@ -1097,16 +1420,22 @@ int ath11k_core_qmi_firmware_ready(struct ath11k_base *ab)
{
int ret;
ret = ath11k_core_start_firmware(ab, ATH11K_FIRMWARE_MODE_NORMAL);
if (ret) {
ath11k_err(ab, "failed to start firmware: %d\n", ret);
return ret;
}
ret = ath11k_ce_init_pipes(ab);
if (ret) {
ath11k_err(ab, "failed to initialize CE: %d\n", ret);
return ret;
goto err_firmware_stop;
}
ret = ath11k_dp_alloc(ab);
if (ret) {
ath11k_err(ab, "failed to init DP: %d\n", ret);
return ret;
goto err_firmware_stop;
}
switch (ath11k_crypto_mode) {
@ -1127,7 +1456,7 @@ int ath11k_core_qmi_firmware_ready(struct ath11k_base *ab)
set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
mutex_lock(&ab->core_lock);
ret = ath11k_core_start(ab, ATH11K_FIRMWARE_MODE_NORMAL);
ret = ath11k_core_start(ab);
if (ret) {
ath11k_err(ab, "failed to start core: %d\n", ret);
goto err_dp_free;
@ -1156,6 +1485,9 @@ err_core_stop:
err_dp_free:
ath11k_dp_free(ab);
mutex_unlock(&ab->core_lock);
err_firmware_stop:
ath11k_qmi_firmware_stop(ab);
return ret;
}
@ -1261,6 +1593,7 @@ static void ath11k_update_11d(struct work_struct *work)
pdev = &ab->pdevs[i];
ar = pdev->ar;
memcpy(&ar->alpha2, &set_current_param.alpha2, 2);
ret = ath11k_wmi_send_set_current_country_cmd(ar, &set_current_param);
if (ret)
ath11k_warn(ar->ab,
@ -1269,12 +1602,11 @@ static void ath11k_update_11d(struct work_struct *work)
}
}
static void ath11k_core_restart(struct work_struct *work)
static void ath11k_core_pre_reconfigure_recovery(struct ath11k_base *ab)
{
struct ath11k_base *ab = container_of(work, struct ath11k_base, restart_work);
struct ath11k *ar;
struct ath11k_pdev *pdev;
int i, ret = 0;
int i;
spin_lock_bh(&ab->base_lock);
ab->stats.fw_crash_counter++;
@ -1288,6 +1620,7 @@ static void ath11k_core_restart(struct work_struct *work)
ieee80211_stop_queues(ar->hw);
ath11k_mac_drain_tx(ar);
complete(&ar->completed_11d_scan);
complete(&ar->scan.started);
complete(&ar->scan.completed);
complete(&ar->peer_assoc_done);
@ -1307,12 +1640,13 @@ static void ath11k_core_restart(struct work_struct *work)
wake_up(&ab->wmi_ab.tx_credits_wq);
wake_up(&ab->peer_mapping_wq);
}
ret = ath11k_core_reconfigure_on_crash(ab);
if (ret) {
ath11k_err(ab, "failed to reconfigure driver on crash recovery\n");
return;
}
static void ath11k_core_post_reconfigure_recovery(struct ath11k_base *ab)
{
struct ath11k *ar;
struct ath11k_pdev *pdev;
int i;
for (i = 0; i < ab->num_radios; i++) {
pdev = &ab->pdevs[i];
@ -1348,6 +1682,98 @@ static void ath11k_core_restart(struct work_struct *work)
complete(&ab->driver_recovery);
}
static void ath11k_core_restart(struct work_struct *work)
{
struct ath11k_base *ab = container_of(work, struct ath11k_base, restart_work);
int ret;
if (!ab->is_reset)
ath11k_core_pre_reconfigure_recovery(ab);
ret = ath11k_core_reconfigure_on_crash(ab);
if (ret) {
ath11k_err(ab, "failed to reconfigure driver on crash recovery\n");
return;
}
if (ab->is_reset)
complete_all(&ab->reconfigure_complete);
if (!ab->is_reset)
ath11k_core_post_reconfigure_recovery(ab);
}
static void ath11k_core_reset(struct work_struct *work)
{
struct ath11k_base *ab = container_of(work, struct ath11k_base, reset_work);
int reset_count, fail_cont_count;
long time_left;
if (!(test_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags))) {
ath11k_warn(ab, "ignore reset dev flags 0x%lx\n", ab->dev_flags);
return;
}
/* Sometimes a recovery fails and then every subsequent recovery attempt
* fails as well; bail out here to avoid retrying forever when recovery
* cannot succeed.
*/
fail_cont_count = atomic_read(&ab->fail_cont_count);
if (fail_cont_count >= ATH11K_RESET_MAX_FAIL_COUNT_FINAL)
return;
if (fail_cont_count >= ATH11K_RESET_MAX_FAIL_COUNT_FIRST &&
time_before(jiffies, ab->reset_fail_timeout))
return;
reset_count = atomic_inc_return(&ab->reset_count);
if (reset_count > 1) {
/* Sometimes another reset worker is scheduled before the previous one
* has completed, and the second worker would then tear down the state of
* the first; the check below avoids that.
*/
ath11k_warn(ab, "already resetting count %d\n", reset_count);
reinit_completion(&ab->reset_complete);
time_left = wait_for_completion_timeout(&ab->reset_complete,
ATH11K_RESET_TIMEOUT_HZ);
if (time_left) {
ath11k_dbg(ab, ATH11K_DBG_BOOT, "to skip reset\n");
atomic_dec(&ab->reset_count);
return;
}
ab->reset_fail_timeout = jiffies + ATH11K_RESET_FAIL_TIMEOUT_HZ;
/* Record the number of consecutive recovery failures */
atomic_inc(&ab->fail_cont_count);
}
ath11k_dbg(ab, ATH11K_DBG_BOOT, "reset starting\n");
ab->is_reset = true;
atomic_set(&ab->recovery_count, 0);
reinit_completion(&ab->recovery_start);
atomic_set(&ab->recovery_start_count, 0);
ath11k_core_pre_reconfigure_recovery(ab);
reinit_completion(&ab->reconfigure_complete);
ath11k_core_post_reconfigure_recovery(ab);
ath11k_dbg(ab, ATH11K_DBG_BOOT, "waiting recovery start...\n");
time_left = wait_for_completion_timeout(&ab->recovery_start,
ATH11K_RECOVER_START_TIMEOUT_HZ);
ath11k_hif_power_down(ab);
ath11k_qmi_free_resource(ab);
ath11k_hif_power_up(ab);
ath11k_dbg(ab, ATH11K_DBG_BOOT, "reset started\n");
}
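Together with the constants introduced in core.h further down (ATH11K_RESET_TIMEOUT_HZ and ATH11K_RESET_FAIL_TIMEOUT_HZ of 20 * HZ each, ATH11K_RESET_MAX_FAIL_COUNT_FIRST = 3, ATH11K_RESET_MAX_FAIL_COUNT_FINAL = 5), the checks at the top of the function throttle repeated resets. A sketch of the resulting behaviour, inferred from the code above rather than quoted from the changelog:
/* - a reset worker that finds another reset in flight waits up to
 *   ATH11K_RESET_TIMEOUT_HZ for it; if that wait times out the attempt is
 *   counted as a failure (fail_cont_count) and reset_fail_timeout is armed
 *   ATH11K_RESET_FAIL_TIMEOUT_HZ into the future;
 * - after 3 consecutive failures further resets are skipped until
 *   reset_fail_timeout has expired;
 * - after 5 consecutive failures resets are skipped altogether, until the
 *   counter is presumably cleared by a successful recovery elsewhere in
 *   the driver (not visible in these hunks).
 */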
static int ath11k_init_hw_params(struct ath11k_base *ab)
{
const struct ath11k_hw_params *hw_params = NULL;
@ -1417,6 +1843,7 @@ EXPORT_SYMBOL(ath11k_core_deinit);
void ath11k_core_free(struct ath11k_base *ab)
{
destroy_workqueue(ab->workqueue_aux);
destroy_workqueue(ab->workqueue);
kfree(ab);
@ -1424,8 +1851,7 @@ void ath11k_core_free(struct ath11k_base *ab)
EXPORT_SYMBOL(ath11k_core_free);
struct ath11k_base *ath11k_core_alloc(struct device *dev, size_t priv_size,
enum ath11k_bus bus,
const struct ath11k_bus_params *bus_params)
enum ath11k_bus bus)
{
struct ath11k_base *ab;
@ -1439,9 +1865,17 @@ struct ath11k_base *ath11k_core_alloc(struct device *dev, size_t priv_size,
if (!ab->workqueue)
goto err_sc_free;
ab->workqueue_aux = create_singlethread_workqueue("ath11k_aux_wq");
if (!ab->workqueue_aux)
goto err_free_wq;
mutex_init(&ab->core_lock);
mutex_init(&ab->tbl_mtx_lock);
spin_lock_init(&ab->base_lock);
mutex_init(&ab->vdev_id_11d_lock);
init_completion(&ab->reset_complete);
init_completion(&ab->reconfigure_complete);
init_completion(&ab->recovery_start);
INIT_LIST_HEAD(&ab->peers);
init_waitqueue_head(&ab->peer_mapping_wq);
@ -1450,16 +1884,18 @@ struct ath11k_base *ath11k_core_alloc(struct device *dev, size_t priv_size,
INIT_WORK(&ab->restart_work, ath11k_core_restart);
INIT_WORK(&ab->update_11d_work, ath11k_update_11d);
INIT_WORK(&ab->rfkill_work, ath11k_rfkill_work);
INIT_WORK(&ab->reset_work, ath11k_core_reset);
timer_setup(&ab->rx_replenish_retry, ath11k_ce_rx_replenish_retry, 0);
init_completion(&ab->htc_suspend);
init_completion(&ab->wow.wakeup_completed);
ab->dev = dev;
ab->bus_params = *bus_params;
ab->hif.bus = bus;
return ab;
err_free_wq:
destroy_workqueue(ab->workqueue);
err_sc_free:
kfree(ab);
return NULL;


@ -1,6 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH11K_CORE_H
@ -10,6 +11,9 @@
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/bitfield.h>
#include <linux/dmi.h>
#include <linux/ctype.h>
#include <linux/rhashtable.h>
#include "qmi.h"
#include "htc.h"
#include "wmi.h"
@ -23,6 +27,7 @@
#include "thermal.h"
#include "dbring.h"
#include "spectral.h"
#include "wow.h"
#define SM(_v, _f) (((_v) << _f##_LSB) & _f##_MASK)
@ -36,9 +41,26 @@
#define ATH11K_INVALID_HW_MAC_ID 0xFF
#define ATH11K_CONNECTION_LOSS_HZ (3 * HZ)
/* SMBIOS type containing Board Data File Name Extension */
#define ATH11K_SMBIOS_BDF_EXT_TYPE 0xF8
/* SMBIOS type structure length (excluding strings-set) */
#define ATH11K_SMBIOS_BDF_EXT_LENGTH 0x9
/* The magic used by QCA spec */
#define ATH11K_SMBIOS_BDF_EXT_MAGIC "BDF_"
extern unsigned int ath11k_frame_mode;
#define ATH11K_SCAN_TIMEOUT_HZ (20 * HZ)
#define ATH11K_MON_TIMER_INTERVAL 10
#define ATH11K_RESET_TIMEOUT_HZ (20 * HZ)
#define ATH11K_RESET_MAX_FAIL_COUNT_FIRST 3
#define ATH11K_RESET_MAX_FAIL_COUNT_FINAL 5
#define ATH11K_RESET_FAIL_TIMEOUT_HZ (20 * HZ)
#define ATH11K_RECONFIGURE_TIMEOUT_HZ (10 * HZ)
#define ATH11K_RECOVER_START_TIMEOUT_HZ (20 * HZ)
enum ath11k_supported_bw {
ATH11K_BW_20 = 0,
@ -118,6 +140,7 @@ enum ath11k_hw_rev {
ATH11K_HW_QCN9074_HW10,
ATH11K_HW_WCN6855_HW20,
ATH11K_HW_WCN6855_HW21,
ATH11K_HW_WCN6750_HW10,
};
enum ath11k_firmware_mode {
@ -147,6 +170,39 @@ struct ath11k_ext_irq_grp {
struct net_device napi_ndev;
};
enum ath11k_smbios_cc_type {
/* disable country code setting from SMBIOS */
ATH11K_SMBIOS_CC_DISABLE = 0,
/* set country code by ANSI country name, based on ISO3166-1 alpha2 */
ATH11K_SMBIOS_CC_ISO = 1,
/* worldwide regdomain */
ATH11K_SMBIOS_CC_WW = 2,
};
struct ath11k_smbios_bdf {
struct dmi_header hdr;
u8 features_disabled;
/* enum ath11k_smbios_cc_type */
u8 country_code_flag;
/* To set a specific country, first set country_code_flag to
* ATH11K_SMBIOS_CC_ISO and then store the ISO 3166-1 alpha-2 code here;
* for example the United States ("US") is 0x5553 ('U' = 0x55,
* 'S' = 0x53) and Indonesia ("ID") is 0x4944 ('I' = 0x49, 'D' = 0x44).
* If country_code_flag = ATH11K_SMBIOS_CC_WW, the worldwide regulatory
* setting is used instead.
*/
u16 cc_code;
u8 bdf_enabled;
u8 bdf_ext[];
} __packed;
#define HEHANDLE_CAP_PHYINFO_SIZE 3
#define HECAP_PHYINFO_SIZE 9
#define HECAP_MACINFO_SIZE 5
@ -189,6 +245,12 @@ enum ath11k_scan_state {
ATH11K_SCAN_ABORTING,
};
enum ath11k_11d_state {
ATH11K_11D_IDLE,
ATH11K_11D_PREPARING,
ATH11K_11D_RUNNING,
};
enum ath11k_dev_flags {
ATH11K_CAC_RUNNING,
ATH11K_FLAG_CORE_REGISTERED,
@ -204,6 +266,8 @@ enum ath11k_dev_flags {
ATH11K_FLAG_CE_IRQ_ENABLED,
ATH11K_FLAG_EXT_IRQ_ENABLED,
ATH11K_FLAG_FIXED_MEM_RGN,
ATH11K_FLAG_DEVICE_INIT_DONE,
ATH11K_FLAG_MULTI_MSI_VECTORS,
};
enum ath11k_monitor_flags {
@ -212,6 +276,30 @@ enum ath11k_monitor_flags {
ATH11K_FLAG_MONITOR_VDEV_CREATED,
};
#define ATH11K_IPV6_UC_TYPE 0
#define ATH11K_IPV6_AC_TYPE 1
#define ATH11K_IPV6_MAX_COUNT 16
#define ATH11K_IPV4_MAX_COUNT 2
struct ath11k_arp_ns_offload {
u8 ipv4_addr[ATH11K_IPV4_MAX_COUNT][4];
u32 ipv4_count;
u32 ipv6_count;
u8 ipv6_addr[ATH11K_IPV6_MAX_COUNT][16];
u8 self_ipv6_addr[ATH11K_IPV6_MAX_COUNT][16];
u8 ipv6_type[ATH11K_IPV6_MAX_COUNT];
bool ipv6_valid[ATH11K_IPV6_MAX_COUNT];
u8 mac_addr[ETH_ALEN];
};
struct ath11k_rekey_data {
u8 kck[NL80211_KCK_LEN];
u8 kek[NL80211_KCK_LEN];
u64 replay_ctr;
bool enable_offload;
};
struct ath11k_vif {
u32 vdev_id;
enum wmi_vdev_type vdev_type;
@ -263,6 +351,9 @@ struct ath11k_vif {
bool bcca_zero_sent;
bool do_not_send_tmpl;
struct ieee80211_chanctx_conf chanctx;
struct ath11k_arp_ns_offload arp_ns_offload;
struct ath11k_rekey_data rekey_data;
#ifdef CONFIG_ATH11K_DEBUGFS
struct dentry *debugfs_twt;
#endif /* CONFIG_ATH11K_DEBUGFS */
@ -590,6 +681,9 @@ struct ath11k {
struct work_struct wmi_mgmt_tx_work;
struct sk_buff_head wmi_mgmt_tx_queue;
struct ath11k_wow wow;
struct completion target_suspend;
bool target_suspend_ack;
struct ath11k_per_peer_tx_stats peer_tx_stats;
struct list_head ppdu_stats_info;
u32 ppdu_stat_list_depth;
@ -607,12 +701,13 @@ struct ath11k {
bool dfs_block_radar_events;
struct ath11k_thermal thermal;
u32 vdev_id_11d_scan;
struct completion finish_11d_scan;
struct completion finish_11d_ch_list;
bool pending_11d;
struct completion completed_11d_scan;
enum ath11k_11d_state state_11d;
bool regdom_set_by_user;
int hw_rate_code;
u8 twt_enabled;
bool nlo_enabled;
u8 alpha2[REG_ALPHA2_LEN + 1];
};
struct ath11k_band_cap {
@ -654,12 +749,12 @@ struct ath11k_board_data {
size_t len;
};
struct ath11k_bus_params {
bool mhi_support;
bool m3_fw_support;
bool fixed_bdf_addr;
bool fixed_mem_region;
bool static_window_map;
struct ath11k_pci_ops {
int (*wakeup)(struct ath11k_base *ab);
void (*release)(struct ath11k_base *ab);
int (*get_msi_irq)(struct ath11k_base *ab, unsigned int vector);
void (*window_write32)(struct ath11k_base *ab, u32 offset, u32 value);
u32 (*window_read32)(struct ath11k_base *ab, u32 offset);
};
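The booleans of the removed ath11k_bus_params move into ath11k_hw_params (see the hw.h hunk later in this diff), while struct ath11k_pci_ops gives the bus glue a set of callbacks for wakeup/release around register accesses, MSI IRQ lookup and windowed register I/O, reachable through ab->pci.ops. A hypothetical sketch of how a bus layer might wire this up; none of the example names below exist in the driver:
/* static const struct ath11k_pci_ops ath11k_example_pci_ops = {
 *	.wakeup         = ath11k_example_wakeup,
 *	.release        = ath11k_example_release,
 *	.get_msi_irq    = ath11k_example_get_msi_irq,
 *	.window_write32 = ath11k_example_window_write32,
 *	.window_read32  = ath11k_example_window_read32,
 * };
 *
 * ab->pci.ops = &ath11k_example_pci_ops;
 */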
/* IPQ8074 HW channel counters frequency value in hertz */
@ -703,6 +798,19 @@ struct ath11k_soc_dp_stats {
struct ath11k_dp_ring_bp_stats bp_stats;
};
struct ath11k_msi_user {
char *name;
int num_vectors;
u32 base_vector;
};
struct ath11k_msi_config {
int total_vectors;
int total_users;
struct ath11k_msi_user *users;
u16 hw_rev;
};
/* Master structure to hold the hw data which may be used in core module */
struct ath11k_base {
enum ath11k_hw_rev hw_rev;
@ -747,6 +855,18 @@ struct ath11k_base {
struct ath11k_pdev __rcu *pdevs_active[MAX_RADIOS];
struct ath11k_hal_reg_capabilities_ext hal_reg_cap[MAX_RADIOS];
unsigned long long free_vdev_map;
/* To synchronize rhash tbl write operation */
struct mutex tbl_mtx_lock;
/* The rhashtable containing struct ath11k_peer keyed by mac addr */
struct rhashtable *rhead_peer_addr;
struct rhashtable_params rhash_peer_addr_param;
/* The rhashtable containing struct ath11k_peer keyed by id */
struct rhashtable *rhead_peer_id;
struct rhashtable_params rhash_peer_id_param;
struct list_head peers;
wait_queue_head_t peer_mapping_wq;
u8 mac_addr[ETH_ALEN];
@ -760,7 +880,6 @@ struct ath11k_base {
int bd_api;
struct ath11k_hw_params hw_params;
struct ath11k_bus_params bus_params;
const struct firmware *cal_file;
@ -788,6 +907,18 @@ struct ath11k_base {
struct work_struct restart_work;
struct work_struct update_11d_work;
u8 new_alpha2[3];
struct workqueue_struct *workqueue_aux;
struct work_struct reset_work;
atomic_t reset_count;
atomic_t recovery_count;
atomic_t recovery_start_count;
bool is_reset;
struct completion reset_complete;
struct completion reconfigure_complete;
struct completion recovery_start;
/* count of consecutive recovery failures */
atomic_t fail_cont_count;
unsigned long reset_fail_timeout;
struct {
/* protected by data_lock */
u32 fw_crash_counter;
@ -815,6 +946,18 @@ struct ath11k_base {
u32 subsystem_device;
} id;
struct {
struct {
const struct ath11k_msi_config *config;
u32 ep_base_data;
u32 irqs[32];
u32 addr_lo;
u32 addr_hi;
} msi;
const struct ath11k_pci_ops *ops;
} pci;
/* must be last */
u8 drv_priv[] __aligned(sizeof(void *));
};
@ -985,8 +1128,7 @@ int ath11k_core_pre_init(struct ath11k_base *ab);
int ath11k_core_init(struct ath11k_base *ath11k);
void ath11k_core_deinit(struct ath11k_base *ath11k);
struct ath11k_base *ath11k_core_alloc(struct device *dev, size_t priv_size,
enum ath11k_bus bus,
const struct ath11k_bus_params *bus_params);
enum ath11k_bus bus);
void ath11k_core_free(struct ath11k_base *ath11k);
int ath11k_core_fetch_bdf(struct ath11k_base *ath11k,
struct ath11k_board_data *bd);
@ -996,7 +1138,7 @@ int ath11k_core_fetch_board_data_api_1(struct ath11k_base *ab,
const char *name);
void ath11k_core_free_bdf(struct ath11k_base *ab, struct ath11k_board_data *bd);
int ath11k_core_check_dt(struct ath11k_base *ath11k);
int ath11k_core_check_smbios(struct ath11k_base *ab);
void ath11k_core_halt(struct ath11k *ar);
int ath11k_core_resume(struct ath11k_base *ab);
int ath11k_core_suspend(struct ath11k_base *ab);


@ -596,6 +596,10 @@ static ssize_t ath11k_write_simulate_fw_crash(struct file *file,
ret = ath11k_wmi_force_fw_hang_cmd(ar,
ATH11K_WMI_FW_HANG_ASSERT_TYPE,
ATH11K_WMI_FW_HANG_DELAY);
} else if (!strcmp(buf, "hw-restart")) {
ath11k_info(ab, "user requested hw restart\n");
queue_work(ab->workqueue_aux, &ab->reset_work);
ret = 0;
} else {
ret = -EINVAL;
goto exit;
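With this hook in place the global reset path can be exercised from user space through the existing simulate_fw_crash debugfs file, for example (the exact debugfs path is device dependent and shown here only as an illustration):
  echo hw-restart > /sys/kernel/debug/ath11k/<device>/simulate_fw_crash
This queues reset_work on workqueue_aux, the same thing the MHI status callback does on an MHI_CB_EE_RDDM firmware crash notification in the mhi.c change further down.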


@ -1,6 +1,7 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2022, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/dma-mapping.h>
#include "hal_tx.h"
@ -1082,10 +1083,10 @@ static void ath11k_hal_srng_update_hp_tp_addr(struct ath11k_base *ab,
srng = &hal->srng_list[ring_id];
if (srng_config->ring_dir == HAL_SRNG_DIR_DST)
srng->u.dst_ring.tp_addr = (u32 *)(HAL_SHADOW_REG(shadow_cfg_idx) +
srng->u.dst_ring.tp_addr = (u32 *)(HAL_SHADOW_REG(ab, shadow_cfg_idx) +
(unsigned long)ab->mem);
else
srng->u.src_ring.hp_addr = (u32 *)(HAL_SHADOW_REG(shadow_cfg_idx) +
srng->u.src_ring.hp_addr = (u32 *)(HAL_SHADOW_REG(ab, shadow_cfg_idx) +
(unsigned long)ab->mem);
}
@ -1120,7 +1121,7 @@ int ath11k_hal_srng_update_shadow_config(struct ath11k_base *ab,
ath11k_dbg(ab, ATH11K_DBG_HAL,
"target_reg %x, shadow reg 0x%x shadow_idx 0x%x, ring_type %d, ring num %d",
target_reg,
HAL_SHADOW_REG(shadow_cfg_idx),
HAL_SHADOW_REG(ab, shadow_cfg_idx),
shadow_cfg_idx,
ring_type, ring_num);
@ -1193,12 +1194,12 @@ static int ath11k_hal_srng_create_config(struct ath11k_base *ab)
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_TCL_RING_HP(ab);
s = &hal->srng_config[HAL_REO_REINJECT];
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_SW2REO_RING_BASE_LSB;
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_SW2REO_RING_HP;
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_SW2REO_RING_BASE_LSB(ab);
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_SW2REO_RING_HP(ab);
s = &hal->srng_config[HAL_REO_CMD];
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_CMD_RING_BASE_LSB;
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_CMD_HP;
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_CMD_RING_BASE_LSB(ab);
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_CMD_HP(ab);
s = &hal->srng_config[HAL_REO_STATUS];
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_STATUS_RING_BASE_LSB(ab);


@ -1,6 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2022, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH11K_HAL_H
@ -31,12 +32,12 @@ struct ath11k_base;
#define HAL_DSCP_TID_TBL_SIZE 24
/* calculate the register address from bar0 of shadow register x */
#define HAL_SHADOW_BASE_ADDR 0x000008fc
#define HAL_SHADOW_BASE_ADDR(ab) ab->hw_params.regs->hal_shadow_base_addr
#define HAL_SHADOW_NUM_REGS 36
#define HAL_HP_OFFSET_IN_REG_START 1
#define HAL_OFFSET_FROM_HP_TO_TP 4
#define HAL_SHADOW_REG(x) (HAL_SHADOW_BASE_ADDR + (4 * (x)))
#define HAL_SHADOW_REG(ab, x) (HAL_SHADOW_BASE_ADDR(ab) + (4 * (x)))
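Because the shadow register base is now taken from hw_params, the same shadow slot maps to a different bar0 offset on each chip. A worked example using the hal_shadow_base_addr values added in hw.c later in this diff (the slot number is picked arbitrarily):
/* HAL_SHADOW_REG(ab, 4):
 *   QCA6390/WCN6855: 0x000008fc + 4 * 4 = 0x0000090c
 *   WCN6750:         0x00000504 + 4 * 4 = 0x00000514
 */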
/* WCSS Relative address */
#define HAL_SEQ_WCSS_UMAC_OFFSET 0x00a00000
@ -180,16 +181,18 @@ struct ath11k_base;
#define HAL_REO_TCL_RING_HP(ab) ab->hw_params.regs->hal_reo_tcl_ring_hp
/* REO CMD R0 address */
#define HAL_REO_CMD_RING_BASE_LSB 0x00000194
#define HAL_REO_CMD_RING_BASE_LSB(ab) \
ab->hw_params.regs->hal_reo_cmd_ring_base_lsb
/* REO CMD R2 address */
#define HAL_REO_CMD_HP 0x00003020
#define HAL_REO_CMD_HP(ab) ab->hw_params.regs->hal_reo_cmd_ring_hp
/* SW2REO R0 address */
#define HAL_SW2REO_RING_BASE_LSB 0x000001ec
#define HAL_SW2REO_RING_BASE_LSB(ab) \
ab->hw_params.regs->hal_sw2reo_ring_base_lsb
/* SW2REO R2 address */
#define HAL_SW2REO_RING_HP 0x00003028
#define HAL_SW2REO_RING_HP(ab) ab->hw_params.regs->hal_sw2reo_ring_hp
/* CE ring R0 address */
#define HAL_CE_DST_RING_BASE_LSB 0x00000000


@ -272,6 +272,11 @@ void ath11k_htc_tx_completion_handler(struct ath11k_base *ab,
ep_tx_complete(htc->ab, skb);
}
static void ath11k_htc_wakeup_from_suspend(struct ath11k_base *ab)
{
ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot wakeup from suspend is received\n");
}
void ath11k_htc_rx_completion_handler(struct ath11k_base *ab,
struct sk_buff *skb)
{
@ -376,6 +381,7 @@ void ath11k_htc_rx_completion_handler(struct ath11k_base *ab,
ath11k_htc_suspend_complete(ab, false);
break;
case ATH11K_HTC_MSG_WAKEUP_FROM_SUSPEND_ID:
ath11k_htc_wakeup_from_suspend(ab);
break;
default:
ath11k_warn(ab, "ignoring unsolicited htc ep0 event %ld\n",


@ -1,6 +1,7 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2020 The Linux Foundation. All rights reserved.
* Copyright (c) 2022, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/types.h>
@ -1014,6 +1015,45 @@ const struct ath11k_hw_ops wcn6855_ops = {
.rx_desc_mpdu_start_addr2 = ath11k_hw_wcn6855_rx_desc_mpdu_start_addr2,
};
const struct ath11k_hw_ops wcn6750_ops = {
.get_hw_mac_from_pdev_id = ath11k_hw_ipq8074_mac_from_pdev_id,
.wmi_init_config = ath11k_init_wmi_config_qca6390,
.mac_id_to_pdev_id = ath11k_hw_mac_id_to_pdev_id_qca6390,
.mac_id_to_srng_id = ath11k_hw_mac_id_to_srng_id_qca6390,
.tx_mesh_enable = ath11k_hw_qcn9074_tx_mesh_enable,
.rx_desc_get_first_msdu = ath11k_hw_qcn9074_rx_desc_get_first_msdu,
.rx_desc_get_last_msdu = ath11k_hw_qcn9074_rx_desc_get_last_msdu,
.rx_desc_get_l3_pad_bytes = ath11k_hw_qcn9074_rx_desc_get_l3_pad_bytes,
.rx_desc_get_hdr_status = ath11k_hw_qcn9074_rx_desc_get_hdr_status,
.rx_desc_encrypt_valid = ath11k_hw_qcn9074_rx_desc_encrypt_valid,
.rx_desc_get_encrypt_type = ath11k_hw_qcn9074_rx_desc_get_encrypt_type,
.rx_desc_get_decap_type = ath11k_hw_qcn9074_rx_desc_get_decap_type,
.rx_desc_get_mesh_ctl = ath11k_hw_qcn9074_rx_desc_get_mesh_ctl,
.rx_desc_get_ldpc_support = ath11k_hw_qcn9074_rx_desc_get_ldpc_support,
.rx_desc_get_mpdu_seq_ctl_vld = ath11k_hw_qcn9074_rx_desc_get_mpdu_seq_ctl_vld,
.rx_desc_get_mpdu_fc_valid = ath11k_hw_qcn9074_rx_desc_get_mpdu_fc_valid,
.rx_desc_get_mpdu_start_seq_no = ath11k_hw_qcn9074_rx_desc_get_mpdu_start_seq_no,
.rx_desc_get_msdu_len = ath11k_hw_qcn9074_rx_desc_get_msdu_len,
.rx_desc_get_msdu_sgi = ath11k_hw_qcn9074_rx_desc_get_msdu_sgi,
.rx_desc_get_msdu_rate_mcs = ath11k_hw_qcn9074_rx_desc_get_msdu_rate_mcs,
.rx_desc_get_msdu_rx_bw = ath11k_hw_qcn9074_rx_desc_get_msdu_rx_bw,
.rx_desc_get_msdu_freq = ath11k_hw_qcn9074_rx_desc_get_msdu_freq,
.rx_desc_get_msdu_pkt_type = ath11k_hw_qcn9074_rx_desc_get_msdu_pkt_type,
.rx_desc_get_msdu_nss = ath11k_hw_qcn9074_rx_desc_get_msdu_nss,
.rx_desc_get_mpdu_tid = ath11k_hw_qcn9074_rx_desc_get_mpdu_tid,
.rx_desc_get_mpdu_peer_id = ath11k_hw_qcn9074_rx_desc_get_mpdu_peer_id,
.rx_desc_copy_attn_end_tlv = ath11k_hw_qcn9074_rx_desc_copy_attn_end,
.rx_desc_get_mpdu_start_tag = ath11k_hw_qcn9074_rx_desc_get_mpdu_start_tag,
.rx_desc_get_mpdu_ppdu_id = ath11k_hw_qcn9074_rx_desc_get_mpdu_ppdu_id,
.rx_desc_set_msdu_len = ath11k_hw_qcn9074_rx_desc_set_msdu_len,
.rx_desc_get_attention = ath11k_hw_qcn9074_rx_desc_get_attention,
.rx_desc_get_msdu_payload = ath11k_hw_qcn9074_rx_desc_get_msdu_payload,
.reo_setup = ath11k_hw_wcn6855_reo_setup,
.mpdu_info_get_peerid = ath11k_hw_ipq8074_mpdu_info_get_peerid,
.rx_desc_mac_addr2_valid = ath11k_hw_ipq9074_rx_desc_mac_addr2_valid,
.rx_desc_mpdu_start_addr2 = ath11k_hw_ipq9074_rx_desc_mpdu_start_addr2,
};
#define ATH11K_TX_RING_MASK_0 0x1
#define ATH11K_TX_RING_MASK_1 0x2
#define ATH11K_TX_RING_MASK_2 0x4
@ -1908,10 +1948,18 @@ const struct ath11k_hw_regs ipq8074_regs = {
.hal_reo_tcl_ring_base_lsb = 0x000003fc,
.hal_reo_tcl_ring_hp = 0x00003058,
/* REO CMD ring address */
.hal_reo_cmd_ring_base_lsb = 0x00000194,
.hal_reo_cmd_ring_hp = 0x00003020,
/* REO status address */
.hal_reo_status_ring_base_lsb = 0x00000504,
.hal_reo_status_hp = 0x00003070,
/* SW2REO ring address */
.hal_sw2reo_ring_base_lsb = 0x000001ec,
.hal_sw2reo_ring_hp = 0x00003028,
/* WCSS relative address */
.hal_seq_wcss_umac_ce0_src_reg = 0x00a00000,
.hal_seq_wcss_umac_ce0_dst_reg = 0x00a01000,
@ -1932,6 +1980,9 @@ const struct ath11k_hw_regs ipq8074_regs = {
/* PCIe base address */
.pcie_qserdes_sysclk_en_sel = 0x0,
.pcie_pcs_osc_dtct_config_base = 0x0,
/* Shadow register area */
.hal_shadow_base_addr = 0x0,
};
const struct ath11k_hw_regs qca6390_regs = {
@ -1979,10 +2030,18 @@ const struct ath11k_hw_regs qca6390_regs = {
.hal_reo_tcl_ring_base_lsb = 0x000003a4,
.hal_reo_tcl_ring_hp = 0x00003050,
/* REO CMD ring address */
.hal_reo_cmd_ring_base_lsb = 0x00000194,
.hal_reo_cmd_ring_hp = 0x00003020,
/* REO status address */
.hal_reo_status_ring_base_lsb = 0x000004ac,
.hal_reo_status_hp = 0x00003068,
/* SW2REO ring address */
.hal_sw2reo_ring_base_lsb = 0x000001ec,
.hal_sw2reo_ring_hp = 0x00003028,
/* WCSS relative address */
.hal_seq_wcss_umac_ce0_src_reg = 0x00a00000,
.hal_seq_wcss_umac_ce0_dst_reg = 0x00a01000,
@ -2003,6 +2062,9 @@ const struct ath11k_hw_regs qca6390_regs = {
/* PCIe base address */
.pcie_qserdes_sysclk_en_sel = 0x01e0c0ac,
.pcie_pcs_osc_dtct_config_base = 0x01e0c628,
/* Shadow register area */
.hal_shadow_base_addr = 0x000008fc,
};
const struct ath11k_hw_regs qcn9074_regs = {
@ -2050,10 +2112,18 @@ const struct ath11k_hw_regs qcn9074_regs = {
.hal_reo_tcl_ring_base_lsb = 0x000003fc,
.hal_reo_tcl_ring_hp = 0x00003058,
/* REO CMD ring address */
.hal_reo_cmd_ring_base_lsb = 0x00000194,
.hal_reo_cmd_ring_hp = 0x00003020,
/* REO status address */
.hal_reo_status_ring_base_lsb = 0x00000504,
.hal_reo_status_hp = 0x00003070,
/* SW2REO ring address */
.hal_sw2reo_ring_base_lsb = 0x000001ec,
.hal_sw2reo_ring_hp = 0x00003028,
/* WCSS relative address */
.hal_seq_wcss_umac_ce0_src_reg = 0x01b80000,
.hal_seq_wcss_umac_ce0_dst_reg = 0x01b81000,
@ -2074,6 +2144,9 @@ const struct ath11k_hw_regs qcn9074_regs = {
/* PCIe base address */
.pcie_qserdes_sysclk_en_sel = 0x01e0e0a8,
.pcie_pcs_osc_dtct_config_base = 0x01e0f45c,
/* Shadow register area */
.hal_shadow_base_addr = 0x0,
};
const struct ath11k_hw_regs wcn6855_regs = {
@ -2121,10 +2194,18 @@ const struct ath11k_hw_regs wcn6855_regs = {
.hal_reo_tcl_ring_base_lsb = 0x00000454,
.hal_reo_tcl_ring_hp = 0x00003060,
/* REO CMD ring address */
.hal_reo_cmd_ring_base_lsb = 0x00000194,
.hal_reo_cmd_ring_hp = 0x00003020,
/* REO status address */
.hal_reo_status_ring_base_lsb = 0x0000055c,
.hal_reo_status_hp = 0x00003078,
/* SW2REO ring address */
.hal_sw2reo_ring_base_lsb = 0x000001ec,
.hal_sw2reo_ring_hp = 0x00003028,
/* WCSS relative address */
.hal_seq_wcss_umac_ce0_src_reg = 0x1b80000,
.hal_seq_wcss_umac_ce0_dst_reg = 0x1b81000,
@ -2145,6 +2226,91 @@ const struct ath11k_hw_regs wcn6855_regs = {
/* PCIe base address */
.pcie_qserdes_sysclk_en_sel = 0x01e0c0ac,
.pcie_pcs_osc_dtct_config_base = 0x01e0c628,
/* Shadow register area */
.hal_shadow_base_addr = 0x000008fc,
};
const struct ath11k_hw_regs wcn6750_regs = {
/* SW2TCL(x) R0 ring configuration address */
.hal_tcl1_ring_base_lsb = 0x00000694,
.hal_tcl1_ring_base_msb = 0x00000698,
.hal_tcl1_ring_id = 0x0000069c,
.hal_tcl1_ring_misc = 0x000006a4,
.hal_tcl1_ring_tp_addr_lsb = 0x000006b0,
.hal_tcl1_ring_tp_addr_msb = 0x000006b4,
.hal_tcl1_ring_consumer_int_setup_ix0 = 0x000006c4,
.hal_tcl1_ring_consumer_int_setup_ix1 = 0x000006c8,
.hal_tcl1_ring_msi1_base_lsb = 0x000006dc,
.hal_tcl1_ring_msi1_base_msb = 0x000006e0,
.hal_tcl1_ring_msi1_data = 0x000006e4,
.hal_tcl2_ring_base_lsb = 0x000006ec,
.hal_tcl_ring_base_lsb = 0x0000079c,
/* TCL STATUS ring address */
.hal_tcl_status_ring_base_lsb = 0x000008a4,
/* REO2SW(x) R0 ring configuration address */
.hal_reo1_ring_base_lsb = 0x000001ec,
.hal_reo1_ring_base_msb = 0x000001f0,
.hal_reo1_ring_id = 0x000001f4,
.hal_reo1_ring_misc = 0x000001fc,
.hal_reo1_ring_hp_addr_lsb = 0x00000200,
.hal_reo1_ring_hp_addr_msb = 0x00000204,
.hal_reo1_ring_producer_int_setup = 0x00000210,
.hal_reo1_ring_msi1_base_lsb = 0x00000234,
.hal_reo1_ring_msi1_base_msb = 0x00000238,
.hal_reo1_ring_msi1_data = 0x0000023c,
.hal_reo2_ring_base_lsb = 0x00000244,
.hal_reo1_aging_thresh_ix_0 = 0x00000564,
.hal_reo1_aging_thresh_ix_1 = 0x00000568,
.hal_reo1_aging_thresh_ix_2 = 0x0000056c,
.hal_reo1_aging_thresh_ix_3 = 0x00000570,
/* REO2SW(x) R2 ring pointers (head/tail) address */
.hal_reo1_ring_hp = 0x00003028,
.hal_reo1_ring_tp = 0x0000302c,
.hal_reo2_ring_hp = 0x00003030,
/* REO2TCL R0 ring configuration address */
.hal_reo_tcl_ring_base_lsb = 0x000003fc,
.hal_reo_tcl_ring_hp = 0x00003058,
/* REO CMD ring address */
.hal_reo_cmd_ring_base_lsb = 0x000000e4,
.hal_reo_cmd_ring_hp = 0x00003010,
/* REO status address */
.hal_reo_status_ring_base_lsb = 0x00000504,
.hal_reo_status_hp = 0x00003070,
/* SW2REO ring address */
.hal_sw2reo_ring_base_lsb = 0x0000013c,
.hal_sw2reo_ring_hp = 0x00003018,
/* WCSS relative address */
.hal_seq_wcss_umac_ce0_src_reg = 0x01b80000,
.hal_seq_wcss_umac_ce0_dst_reg = 0x01b81000,
.hal_seq_wcss_umac_ce1_src_reg = 0x01b82000,
.hal_seq_wcss_umac_ce1_dst_reg = 0x01b83000,
/* WBM Idle address */
.hal_wbm_idle_link_ring_base_lsb = 0x00000874,
.hal_wbm_idle_link_ring_misc = 0x00000884,
/* SW2WBM release address */
.hal_wbm_release_ring_base_lsb = 0x000001ec,
/* WBM2SW release address */
.hal_wbm0_release_ring_base_lsb = 0x00000924,
.hal_wbm1_release_ring_base_lsb = 0x0000097c,
/* PCIe base address */
.pcie_qserdes_sysclk_en_sel = 0x0,
.pcie_pcs_osc_dtct_config_base = 0x0,
/* Shadow register area */
.hal_shadow_base_addr = 0x00000504,
};
const struct ath11k_hw_hal_params ath11k_hw_hal_params_ipq8074 = {
@ -2154,3 +2320,23 @@ const struct ath11k_hw_hal_params ath11k_hw_hal_params_ipq8074 = {
const struct ath11k_hw_hal_params ath11k_hw_hal_params_qca6390 = {
.rx_buf_rbm = HAL_RX_BUF_RBM_SW1_BM,
};
static const struct cfg80211_sar_freq_ranges ath11k_hw_sar_freq_ranges_wcn6855[] = {
{.start_freq = 2402, .end_freq = 2482 }, /* 2G ch1~ch13 */
{.start_freq = 5150, .end_freq = 5250 }, /* 5G UNII-1 ch32~ch48 */
{.start_freq = 5250, .end_freq = 5725 }, /* 5G UNII-2 ch50~ch144 */
{.start_freq = 5725, .end_freq = 5810 }, /* 5G UNII-3 ch149~ch161 */
{.start_freq = 5815, .end_freq = 5895 }, /* 5G UNII-4 ch163~ch177 */
{.start_freq = 5925, .end_freq = 6165 }, /* 6G UNII-5 Ch1, Ch2 ~ Ch41 */
{.start_freq = 6165, .end_freq = 6425 }, /* 6G UNII-5 ch45~ch93 */
{.start_freq = 6425, .end_freq = 6525 }, /* 6G UNII-6 ch97~ch113 */
{.start_freq = 6525, .end_freq = 6705 }, /* 6G UNII-7 ch117~ch149 */
{.start_freq = 6705, .end_freq = 6875 }, /* 6G UNII-7 ch153~ch185 */
{.start_freq = 6875, .end_freq = 7125 }, /* 6G UNII-8 ch189~ch233 */
};
const struct cfg80211_sar_capa ath11k_hw_sar_capa_wcn6855 = {
.type = NL80211_SAR_TYPE_POWER,
.num_freq_ranges = (ARRAY_SIZE(ath11k_hw_sar_freq_ranges_wcn6855)),
.freq_ranges = ath11k_hw_sar_freq_ranges_wcn6855,
};


@ -1,6 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH11K_HW_H
@ -189,11 +190,20 @@ struct ath11k_hw_params {
const struct ath11k_hw_hal_params *hal_params;
bool supports_dynamic_smps_6ghz;
bool alloc_cacheable_memory;
bool wakeup_mhi;
bool supports_rssi_stats;
bool fw_wmi_diag_event;
bool current_cc_support;
bool dbr_debug_support;
bool global_reset;
const struct cfg80211_sar_capa *bios_sar_capa;
bool m3_fw_support;
bool fixed_bdf_addr;
bool fixed_mem_region;
bool static_window_map;
bool hybrid_bus_type;
u8 dp_window_idx;
u8 ce_window_idx;
bool fixed_fw_mem;
};
struct ath11k_hw_ops {
@ -243,6 +253,7 @@ extern const struct ath11k_hw_ops ipq6018_ops;
extern const struct ath11k_hw_ops qca6390_ops;
extern const struct ath11k_hw_ops qcn9074_ops;
extern const struct ath11k_hw_ops wcn6855_ops;
extern const struct ath11k_hw_ops wcn6750_ops;
extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_ipq8074;
extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_qca6390;
@ -290,10 +301,16 @@ enum ath11k_bd_ie_board_type {
ATH11K_BD_IE_BOARD_DATA = 1,
};
enum ath11k_bd_ie_regdb_type {
ATH11K_BD_IE_REGDB_NAME = 0,
ATH11K_BD_IE_REGDB_DATA = 1,
};
enum ath11k_bd_ie_type {
/* contains sub IEs of enum ath11k_bd_ie_board_type */
ATH11K_BD_IE_BOARD = 0,
ATH11K_BD_IE_BOARD_EXT = 1,
/* contains sub IEs of enum ath11k_bd_ie_regdb_type */
ATH11K_BD_IE_REGDB = 1,
};
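The reworked parser in core.c earlier in this diff handles ATH11K_BD_IE_BOARD and ATH11K_BD_IE_REGDB identically: a top-level IE whose payload is a NAME sub-IE followed by a DATA sub-IE. Schematically (an illustration of the container format, not a byte-accurate dump):
/* board-2.bin (API 2) layout:
 *
 *   ATH11K_BOARD_MAGIC signature
 *   ATH11K_BD_IE_BOARD
 *       ATH11K_BD_IE_BOARD_NAME  -> board name string to match
 *       ATH11K_BD_IE_BOARD_DATA  -> board data blob
 *   ATH11K_BD_IE_REGDB
 *       ATH11K_BD_IE_REGDB_NAME  -> board name string to match
 *       ATH11K_BD_IE_REGDB_DATA  -> regulatory database blob
 *
 * Each IE is a struct ath11k_fw_ie header followed by its payload, padded
 * to a 4-byte boundary, which is what the ALIGN(ie_len, 4) handling in the
 * parser accounts for.
 */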
struct ath11k_hw_regs {
@ -339,6 +356,12 @@ struct ath11k_hw_regs {
u32 hal_reo_status_ring_base_lsb;
u32 hal_reo_status_hp;
u32 hal_reo_cmd_ring_base_lsb;
u32 hal_reo_cmd_ring_hp;
u32 hal_sw2reo_ring_base_lsb;
u32 hal_sw2reo_ring_hp;
u32 hal_seq_wcss_umac_ce0_src_reg;
u32 hal_seq_wcss_umac_ce0_dst_reg;
u32 hal_seq_wcss_umac_ce1_src_reg;
@ -354,11 +377,27 @@ struct ath11k_hw_regs {
u32 pcie_qserdes_sysclk_en_sel;
u32 pcie_pcs_osc_dtct_config_base;
u32 hal_shadow_base_addr;
};
extern const struct ath11k_hw_regs ipq8074_regs;
extern const struct ath11k_hw_regs qca6390_regs;
extern const struct ath11k_hw_regs qcn9074_regs;
extern const struct ath11k_hw_regs wcn6855_regs;
extern const struct ath11k_hw_regs wcn6750_regs;
static inline const char *ath11k_bd_ie_type_str(enum ath11k_bd_ie_type type)
{
switch (type) {
case ATH11K_BD_IE_BOARD:
return "board data";
case ATH11K_BD_IE_REGDB:
return "regdb data";
}
return "unknown";
}
extern const struct cfg80211_sar_capa ath11k_hw_sar_capa_wcn6855;
#endif

[diff omitted: file too large to display]

@ -130,7 +130,7 @@ extern const struct htt_rx_ring_tlv_filter ath11k_mac_mon_status_filter_default;
#define ATH11K_SCAN_11D_INTERVAL 600000
#define ATH11K_11D_INVALID_VDEV_ID 0xFFFF
void ath11k_mac_11d_scan_start(struct ath11k *ar, u32 vdev_id, bool wait);
void ath11k_mac_11d_scan_start(struct ath11k *ar, u32 vdev_id);
void ath11k_mac_11d_scan_stop(struct ath11k *ar);
void ath11k_mac_11d_scan_stop_all(struct ath11k_base *ab);
@ -172,4 +172,5 @@ enum hal_encrypt_type ath11k_dp_tx_get_encrypt_type(u32 cipher);
void ath11k_mac_handle_beacon(struct ath11k *ar, struct sk_buff *skb);
void ath11k_mac_handle_beacon_miss(struct ath11k *ar, u32 vdev_id);
void ath11k_mac_bcn_tx_event(struct ath11k_vif *arvif);
int ath11k_mac_wait_tx_complete(struct ath11k *ar);
#endif


@ -1,5 +1,8 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/* Copyright (c) 2020 The Linux Foundation. All rights reserved. */
/*
* Copyright (c) 2020 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/msi.h>
#include <linux/pci.h>
@ -11,6 +14,7 @@
#include "debug.h"
#include "mhi.h"
#include "pci.h"
#include "pcic.h"
#define MHI_TIMEOUT_DEFAULT_MS 90000
#define RDDM_DUMP_SIZE 0x420000
@ -205,7 +209,7 @@ void ath11k_mhi_set_mhictrl_reset(struct ath11k_base *ab)
{
u32 val;
val = ath11k_pci_read32(ab, MHISTATUS);
val = ath11k_pcic_read32(ab, MHISTATUS);
ath11k_dbg(ab, ATH11K_DBG_PCI, "MHISTATUS 0x%x\n", val);
@ -213,29 +217,29 @@ void ath11k_mhi_set_mhictrl_reset(struct ath11k_base *ab)
* has SYSERR bit set and thus need to set MHICTRL_RESET
* to clear SYSERR.
*/
ath11k_pci_write32(ab, MHICTRL, MHICTRL_RESET_MASK);
ath11k_pcic_write32(ab, MHICTRL, MHICTRL_RESET_MASK);
mdelay(10);
}
static void ath11k_mhi_reset_txvecdb(struct ath11k_base *ab)
{
ath11k_pci_write32(ab, PCIE_TXVECDB, 0);
ath11k_pcic_write32(ab, PCIE_TXVECDB, 0);
}
static void ath11k_mhi_reset_txvecstatus(struct ath11k_base *ab)
{
ath11k_pci_write32(ab, PCIE_TXVECSTATUS, 0);
ath11k_pcic_write32(ab, PCIE_TXVECSTATUS, 0);
}
static void ath11k_mhi_reset_rxvecdb(struct ath11k_base *ab)
{
ath11k_pci_write32(ab, PCIE_RXVECDB, 0);
ath11k_pcic_write32(ab, PCIE_RXVECDB, 0);
}
static void ath11k_mhi_reset_rxvecstatus(struct ath11k_base *ab)
{
ath11k_pci_write32(ab, PCIE_RXVECSTATUS, 0);
ath11k_pcic_write32(ab, PCIE_RXVECSTATUS, 0);
}
void ath11k_mhi_clear_vector(struct ath11k_base *ab)
@ -254,9 +258,8 @@ static int ath11k_mhi_get_msi(struct ath11k_pci *ab_pci)
int *irq;
unsigned int msi_data;
ret = ath11k_pci_get_user_msi_assignment(ab_pci,
"MHI", &num_vectors,
&user_base_data, &base_vector);
ret = ath11k_pcic_get_user_msi_assignment(ab, "MHI", &num_vectors,
&user_base_data, &base_vector);
if (ret)
return ret;
@ -270,11 +273,10 @@ static int ath11k_mhi_get_msi(struct ath11k_pci *ab_pci)
for (i = 0; i < num_vectors; i++) {
msi_data = base_vector;
if (test_bit(ATH11K_PCI_FLAG_MULTI_MSI_VECTORS, &ab_pci->flags))
if (test_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags))
msi_data += i;
irq[i] = ath11k_pci_get_msi_irq(ab->dev,
msi_data);
irq[i] = ath11k_pci_get_msi_irq(ab, msi_data);
}
ab_pci->mhi_ctrl->irq = irq;
@ -292,15 +294,48 @@ static void ath11k_mhi_op_runtime_put(struct mhi_controller *mhi_cntrl)
{
}
static char *ath11k_mhi_op_callback_to_str(enum mhi_callback reason)
{
switch (reason) {
case MHI_CB_IDLE:
return "MHI_CB_IDLE";
case MHI_CB_PENDING_DATA:
return "MHI_CB_PENDING_DATA";
case MHI_CB_LPM_ENTER:
return "MHI_CB_LPM_ENTER";
case MHI_CB_LPM_EXIT:
return "MHI_CB_LPM_EXIT";
case MHI_CB_EE_RDDM:
return "MHI_CB_EE_RDDM";
case MHI_CB_EE_MISSION_MODE:
return "MHI_CB_EE_MISSION_MODE";
case MHI_CB_SYS_ERROR:
return "MHI_CB_SYS_ERROR";
case MHI_CB_FATAL_ERROR:
return "MHI_CB_FATAL_ERROR";
case MHI_CB_BW_REQ:
return "MHI_CB_BW_REQ";
default:
return "UNKNOWN";
}
};
static void ath11k_mhi_op_status_cb(struct mhi_controller *mhi_cntrl,
enum mhi_callback cb)
{
struct ath11k_base *ab = dev_get_drvdata(mhi_cntrl->cntrl_dev);
ath11k_dbg(ab, ATH11K_DBG_BOOT, "mhi notify status reason %s\n",
ath11k_mhi_op_callback_to_str(cb));
switch (cb) {
case MHI_CB_SYS_ERROR:
ath11k_warn(ab, "firmware crashed: MHI_CB_SYS_ERROR\n");
break;
case MHI_CB_EE_RDDM:
if (!(test_bit(ATH11K_FLAG_UNREGISTERING, &ab->dev_flags)))
queue_work(ab->workqueue_aux, &ab->reset_work);
break;
default:
break;
}
@ -371,7 +406,7 @@ int ath11k_mhi_register(struct ath11k_pci *ab_pci)
return ret;
}
if (!test_bit(ATH11K_PCI_FLAG_MULTI_MSI_VECTORS, &ab_pci->flags))
if (!test_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags))
mhi_ctrl->irq_flags = IRQF_SHARED | IRQF_NOBALANCING;
if (test_bit(ATH11K_FLAG_FIXED_MEM_RGN, &ab->dev_flags)) {
@ -428,216 +463,62 @@ void ath11k_mhi_unregister(struct ath11k_pci *ab_pci)
mhi_free_controller(mhi_ctrl);
}
static char *ath11k_mhi_state_to_str(enum ath11k_mhi_state mhi_state)
{
switch (mhi_state) {
case ATH11K_MHI_INIT:
return "INIT";
case ATH11K_MHI_DEINIT:
return "DEINIT";
case ATH11K_MHI_POWER_ON:
return "POWER_ON";
case ATH11K_MHI_POWER_OFF:
return "POWER_OFF";
case ATH11K_MHI_FORCE_POWER_OFF:
return "FORCE_POWER_OFF";
case ATH11K_MHI_SUSPEND:
return "SUSPEND";
case ATH11K_MHI_RESUME:
return "RESUME";
case ATH11K_MHI_TRIGGER_RDDM:
return "TRIGGER_RDDM";
case ATH11K_MHI_RDDM_DONE:
return "RDDM_DONE";
default:
return "UNKNOWN";
}
};
static void ath11k_mhi_set_state_bit(struct ath11k_pci *ab_pci,
enum ath11k_mhi_state mhi_state)
{
struct ath11k_base *ab = ab_pci->ab;
switch (mhi_state) {
case ATH11K_MHI_INIT:
set_bit(ATH11K_MHI_INIT, &ab_pci->mhi_state);
break;
case ATH11K_MHI_DEINIT:
clear_bit(ATH11K_MHI_INIT, &ab_pci->mhi_state);
break;
case ATH11K_MHI_POWER_ON:
set_bit(ATH11K_MHI_POWER_ON, &ab_pci->mhi_state);
break;
case ATH11K_MHI_POWER_OFF:
case ATH11K_MHI_FORCE_POWER_OFF:
clear_bit(ATH11K_MHI_POWER_ON, &ab_pci->mhi_state);
clear_bit(ATH11K_MHI_TRIGGER_RDDM, &ab_pci->mhi_state);
clear_bit(ATH11K_MHI_RDDM_DONE, &ab_pci->mhi_state);
break;
case ATH11K_MHI_SUSPEND:
set_bit(ATH11K_MHI_SUSPEND, &ab_pci->mhi_state);
break;
case ATH11K_MHI_RESUME:
clear_bit(ATH11K_MHI_SUSPEND, &ab_pci->mhi_state);
break;
case ATH11K_MHI_TRIGGER_RDDM:
set_bit(ATH11K_MHI_TRIGGER_RDDM, &ab_pci->mhi_state);
break;
case ATH11K_MHI_RDDM_DONE:
set_bit(ATH11K_MHI_RDDM_DONE, &ab_pci->mhi_state);
break;
default:
ath11k_err(ab, "unhandled mhi state (%d)\n", mhi_state);
}
}
static int ath11k_mhi_check_state_bit(struct ath11k_pci *ab_pci,
enum ath11k_mhi_state mhi_state)
{
struct ath11k_base *ab = ab_pci->ab;
switch (mhi_state) {
case ATH11K_MHI_INIT:
if (!test_bit(ATH11K_MHI_INIT, &ab_pci->mhi_state))
return 0;
break;
case ATH11K_MHI_DEINIT:
case ATH11K_MHI_POWER_ON:
if (test_bit(ATH11K_MHI_INIT, &ab_pci->mhi_state) &&
!test_bit(ATH11K_MHI_POWER_ON, &ab_pci->mhi_state))
return 0;
break;
case ATH11K_MHI_FORCE_POWER_OFF:
if (test_bit(ATH11K_MHI_POWER_ON, &ab_pci->mhi_state))
return 0;
break;
case ATH11K_MHI_POWER_OFF:
case ATH11K_MHI_SUSPEND:
if (test_bit(ATH11K_MHI_POWER_ON, &ab_pci->mhi_state) &&
!test_bit(ATH11K_MHI_SUSPEND, &ab_pci->mhi_state))
return 0;
break;
case ATH11K_MHI_RESUME:
if (test_bit(ATH11K_MHI_SUSPEND, &ab_pci->mhi_state))
return 0;
break;
case ATH11K_MHI_TRIGGER_RDDM:
if (test_bit(ATH11K_MHI_POWER_ON, &ab_pci->mhi_state) &&
!test_bit(ATH11K_MHI_TRIGGER_RDDM, &ab_pci->mhi_state))
return 0;
break;
case ATH11K_MHI_RDDM_DONE:
return 0;
default:
ath11k_err(ab, "unhandled mhi state: %s(%d)\n",
ath11k_mhi_state_to_str(mhi_state), mhi_state);
}
ath11k_err(ab, "failed to set mhi state %s(%d) in current mhi state (0x%lx)\n",
ath11k_mhi_state_to_str(mhi_state), mhi_state,
ab_pci->mhi_state);
return -EINVAL;
}
static int ath11k_mhi_set_state(struct ath11k_pci *ab_pci,
enum ath11k_mhi_state mhi_state)
{
struct ath11k_base *ab = ab_pci->ab;
int ret;
ret = ath11k_mhi_check_state_bit(ab_pci, mhi_state);
if (ret)
goto out;
ath11k_dbg(ab, ATH11K_DBG_PCI, "setting mhi state: %s(%d)\n",
ath11k_mhi_state_to_str(mhi_state), mhi_state);
switch (mhi_state) {
case ATH11K_MHI_INIT:
ret = mhi_prepare_for_power_up(ab_pci->mhi_ctrl);
break;
case ATH11K_MHI_DEINIT:
mhi_unprepare_after_power_down(ab_pci->mhi_ctrl);
ret = 0;
break;
case ATH11K_MHI_POWER_ON:
ret = mhi_sync_power_up(ab_pci->mhi_ctrl);
break;
case ATH11K_MHI_POWER_OFF:
mhi_power_down(ab_pci->mhi_ctrl, true);
ret = 0;
break;
case ATH11K_MHI_FORCE_POWER_OFF:
mhi_power_down(ab_pci->mhi_ctrl, false);
ret = 0;
break;
case ATH11K_MHI_SUSPEND:
ret = mhi_pm_suspend(ab_pci->mhi_ctrl);
break;
case ATH11K_MHI_RESUME:
/* Do force MHI resume as some devices like QCA6390, WCN6855
* are not in M3 state but they are functional. So just ignore
* the MHI state while resuming.
*/
ret = mhi_pm_resume_force(ab_pci->mhi_ctrl);
break;
case ATH11K_MHI_TRIGGER_RDDM:
ret = mhi_force_rddm_mode(ab_pci->mhi_ctrl);
break;
case ATH11K_MHI_RDDM_DONE:
break;
default:
ath11k_err(ab, "unhandled MHI state (%d)\n", mhi_state);
ret = -EINVAL;
}
if (ret)
goto out;
ath11k_mhi_set_state_bit(ab_pci, mhi_state);
return 0;
out:
ath11k_err(ab, "failed to set mhi state: %s(%d)\n",
ath11k_mhi_state_to_str(mhi_state), mhi_state);
return ret;
}
int ath11k_mhi_start(struct ath11k_pci *ab_pci)
{
struct ath11k_base *ab = ab_pci->ab;
int ret;
ab_pci->mhi_ctrl->timeout_ms = MHI_TIMEOUT_DEFAULT_MS;
ret = ath11k_mhi_set_state(ab_pci, ATH11K_MHI_INIT);
if (ret)
goto out;
ret = mhi_prepare_for_power_up(ab_pci->mhi_ctrl);
if (ret) {
ath11k_warn(ab, "failed to prepare mhi: %d", ret);
return ret;
}
ret = ath11k_mhi_set_state(ab_pci, ATH11K_MHI_POWER_ON);
if (ret)
goto out;
ret = mhi_sync_power_up(ab_pci->mhi_ctrl);
if (ret) {
ath11k_warn(ab, "failed to power up mhi: %d", ret);
return ret;
}
return 0;
out:
return ret;
}
void ath11k_mhi_stop(struct ath11k_pci *ab_pci)
{
ath11k_mhi_set_state(ab_pci, ATH11K_MHI_POWER_OFF);
ath11k_mhi_set_state(ab_pci, ATH11K_MHI_DEINIT);
mhi_power_down(ab_pci->mhi_ctrl, true);
mhi_unprepare_after_power_down(ab_pci->mhi_ctrl);
}
void ath11k_mhi_suspend(struct ath11k_pci *ab_pci)
int ath11k_mhi_suspend(struct ath11k_pci *ab_pci)
{
ath11k_mhi_set_state(ab_pci, ATH11K_MHI_SUSPEND);
struct ath11k_base *ab = ab_pci->ab;
int ret;
ret = mhi_pm_suspend(ab_pci->mhi_ctrl);
if (ret) {
ath11k_warn(ab, "failed to suspend mhi: %d", ret);
return ret;
}
return 0;
}
void ath11k_mhi_resume(struct ath11k_pci *ab_pci)
int ath11k_mhi_resume(struct ath11k_pci *ab_pci)
{
ath11k_mhi_set_state(ab_pci, ATH11K_MHI_RESUME);
struct ath11k_base *ab = ab_pci->ab;
int ret;
/* Force the MHI resume: some devices such as QCA6390 and WCN6855 are
* functional even though they are not in M3 state, so ignore the current
* MHI state while resuming.
*/
ret = mhi_pm_resume_force(ab_pci->mhi_ctrl);
if (ret) {
ath11k_warn(ab, "failed to resume mhi: %d", ret);
return ret;
}
return 0;
}


@ -16,19 +16,6 @@
#define MHICTRL 0x38
#define MHICTRL_RESET_MASK 0x2
enum ath11k_mhi_state {
ATH11K_MHI_INIT,
ATH11K_MHI_DEINIT,
ATH11K_MHI_POWER_ON,
ATH11K_MHI_POWER_OFF,
ATH11K_MHI_FORCE_POWER_OFF,
ATH11K_MHI_SUSPEND,
ATH11K_MHI_RESUME,
ATH11K_MHI_TRIGGER_RDDM,
ATH11K_MHI_RDDM,
ATH11K_MHI_RDDM_DONE,
};
int ath11k_mhi_start(struct ath11k_pci *ar_pci);
void ath11k_mhi_stop(struct ath11k_pci *ar_pci);
int ath11k_mhi_register(struct ath11k_pci *ar_pci);
@ -36,7 +23,7 @@ void ath11k_mhi_unregister(struct ath11k_pci *ar_pci);
void ath11k_mhi_set_mhictrl_reset(struct ath11k_base *ab);
void ath11k_mhi_clear_vector(struct ath11k_base *ab);
void ath11k_mhi_suspend(struct ath11k_pci *ar_pci);
void ath11k_mhi_resume(struct ath11k_pci *ar_pci);
int ath11k_mhi_suspend(struct ath11k_pci *ar_pci);
int ath11k_mhi_resume(struct ath11k_pci *ar_pci);
#endif

[diff omitted: file too large to display]

@ -1,6 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2019-2020 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _ATH11K_PCI_H
#define _ATH11K_PCI_H
@ -52,23 +53,8 @@
#define WLAON_QFPROM_PWR_CTRL_REG 0x01f8031c
#define QFPROM_PWR_CTRL_VDD4BLOW_MASK 0x4
struct ath11k_msi_user {
char *name;
int num_vectors;
u32 base_vector;
};
struct ath11k_msi_config {
int total_vectors;
int total_users;
struct ath11k_msi_user *users;
};
enum ath11k_pci_flags {
ATH11K_PCI_FLAG_INIT_DONE,
ATH11K_PCI_FLAG_IS_MSI_64,
ATH11K_PCI_ASPM_RESTORE,
ATH11K_PCI_FLAG_MULTI_MSI_VECTORS,
};
struct ath11k_pci {
@ -76,10 +62,8 @@ struct ath11k_pci {
struct ath11k_base *ab;
u16 dev_id;
char amss_path[100];
u32 msi_ep_base_data;
struct mhi_controller *mhi_ctrl;
const struct ath11k_msi_config *msi_config;
unsigned long mhi_state;
u32 register_window;
/* protects register_window above */
@ -88,8 +72,6 @@ struct ath11k_pci {
/* enum ath11k_pci_flags */
unsigned long flags;
u16 link_ctl;
unsigned long irq_flags;
};
static inline struct ath11k_pci *ath11k_pci_priv(struct ath11k_base *ab)
@ -97,11 +79,5 @@ static inline struct ath11k_pci *ath11k_pci_priv(struct ath11k_base *ab)
return (struct ath11k_pci *)ab->drv_priv;
}
int ath11k_pci_get_user_msi_assignment(struct ath11k_pci *ar_pci, char *user_name,
int *num_vectors, u32 *user_base_data,
u32 *base_vector);
int ath11k_pci_get_msi_irq(struct device *dev, unsigned int vector);
void ath11k_pci_write32(struct ath11k_base *ab, u32 offset, u32 value);
u32 ath11k_pci_read32(struct ath11k_base *ab, u32 offset);
int ath11k_pci_get_msi_irq(struct ath11k_base *ab, unsigned int vector);
#endif


@ -0,0 +1,748 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include "core.h"
#include "pcic.h"
#include "debug.h"
static const char *irq_name[ATH11K_IRQ_NUM_MAX] = {
"bhi",
"mhi-er0",
"mhi-er1",
"ce0",
"ce1",
"ce2",
"ce3",
"ce4",
"ce5",
"ce6",
"ce7",
"ce8",
"ce9",
"ce10",
"ce11",
"host2wbm-desc-feed",
"host2reo-re-injection",
"host2reo-command",
"host2rxdma-monitor-ring3",
"host2rxdma-monitor-ring2",
"host2rxdma-monitor-ring1",
"reo2ost-exception",
"wbm2host-rx-release",
"reo2host-status",
"reo2host-destination-ring4",
"reo2host-destination-ring3",
"reo2host-destination-ring2",
"reo2host-destination-ring1",
"rxdma2host-monitor-destination-mac3",
"rxdma2host-monitor-destination-mac2",
"rxdma2host-monitor-destination-mac1",
"ppdu-end-interrupts-mac3",
"ppdu-end-interrupts-mac2",
"ppdu-end-interrupts-mac1",
"rxdma2host-monitor-status-ring-mac3",
"rxdma2host-monitor-status-ring-mac2",
"rxdma2host-monitor-status-ring-mac1",
"host2rxdma-host-buf-ring-mac3",
"host2rxdma-host-buf-ring-mac2",
"host2rxdma-host-buf-ring-mac1",
"rxdma2host-destination-ring-mac3",
"rxdma2host-destination-ring-mac2",
"rxdma2host-destination-ring-mac1",
"host2tcl-input-ring4",
"host2tcl-input-ring3",
"host2tcl-input-ring2",
"host2tcl-input-ring1",
"wbm2host-tx-completions-ring3",
"wbm2host-tx-completions-ring2",
"wbm2host-tx-completions-ring1",
"tcl2host-status-ring",
};
static const struct ath11k_msi_config ath11k_msi_config[] = {
{
.total_vectors = 32,
.total_users = 4,
.users = (struct ath11k_msi_user[]) {
{ .name = "MHI", .num_vectors = 3, .base_vector = 0 },
{ .name = "CE", .num_vectors = 10, .base_vector = 3 },
{ .name = "WAKE", .num_vectors = 1, .base_vector = 13 },
{ .name = "DP", .num_vectors = 18, .base_vector = 14 },
},
.hw_rev = ATH11K_HW_QCA6390_HW20,
},
{
.total_vectors = 16,
.total_users = 3,
.users = (struct ath11k_msi_user[]) {
{ .name = "MHI", .num_vectors = 3, .base_vector = 0 },
{ .name = "CE", .num_vectors = 5, .base_vector = 3 },
{ .name = "DP", .num_vectors = 8, .base_vector = 8 },
},
.hw_rev = ATH11K_HW_QCN9074_HW10,
},
{
.total_vectors = 32,
.total_users = 4,
.users = (struct ath11k_msi_user[]) {
{ .name = "MHI", .num_vectors = 3, .base_vector = 0 },
{ .name = "CE", .num_vectors = 10, .base_vector = 3 },
{ .name = "WAKE", .num_vectors = 1, .base_vector = 13 },
{ .name = "DP", .num_vectors = 18, .base_vector = 14 },
},
.hw_rev = ATH11K_HW_WCN6855_HW20,
},
{
.total_vectors = 32,
.total_users = 4,
.users = (struct ath11k_msi_user[]) {
{ .name = "MHI", .num_vectors = 3, .base_vector = 0 },
{ .name = "CE", .num_vectors = 10, .base_vector = 3 },
{ .name = "WAKE", .num_vectors = 1, .base_vector = 13 },
{ .name = "DP", .num_vectors = 18, .base_vector = 14 },
},
.hw_rev = ATH11K_HW_WCN6855_HW21,
},
{
.total_vectors = 28,
.total_users = 2,
.users = (struct ath11k_msi_user[]) {
{ .name = "CE", .num_vectors = 10, .base_vector = 0 },
{ .name = "DP", .num_vectors = 18, .base_vector = 10 },
},
.hw_rev = ATH11K_HW_WCN6750_HW10,
},
};
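In each entry the per-user base vectors tile the MSI space back to back, so the users account for every vector the device exposes; a short worked check using only the numbers from the table above:

/* MSI vector budget per supported chip:
 *   QCA6390 / WCN6855 (hw2.0, hw2.1): MHI 3 + CE 10 + WAKE 1 + DP 18 = 32
 *   QCN9074:                          MHI 3 + CE 5           + DP 8  = 16
 *   WCN6750 (no MHI or WAKE users):           CE 10          + DP 18 = 28
 * each sum matches that entry's total_vectors
 */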
int ath11k_pcic_init_msi_config(struct ath11k_base *ab)
{
const struct ath11k_msi_config *msi_config;
int i;
for (i = 0; i < ARRAY_SIZE(ath11k_msi_config); i++) {
msi_config = &ath11k_msi_config[i];
if (msi_config->hw_rev == ab->hw_rev)
break;
}
if (i == ARRAY_SIZE(ath11k_msi_config)) {
ath11k_err(ab, "failed to fetch msi config, unsupported hw version: 0x%x\n",
ab->hw_rev);
return -EINVAL;
}
ab->pci.msi.config = msi_config;
return 0;
}
EXPORT_SYMBOL(ath11k_pcic_init_msi_config);
static inline u32 ath11k_pcic_get_window_start(struct ath11k_base *ab,
u32 offset)
{
u32 window_start = 0;
if ((offset ^ HAL_SEQ_WCSS_UMAC_OFFSET) < ATH11K_PCI_WINDOW_RANGE_MASK)
window_start = ab->hw_params.dp_window_idx * ATH11K_PCI_WINDOW_START;
else if ((offset ^ HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab)) <
ATH11K_PCI_WINDOW_RANGE_MASK)
window_start = ab->hw_params.ce_window_idx * ATH11K_PCI_WINDOW_START;
return window_start;
}
void ath11k_pcic_write32(struct ath11k_base *ab, u32 offset, u32 value)
{
u32 window_start;
int ret = 0;
/* for offset beyond BAR + 4K - 32, may
* need to wakeup the device to access.
*/
if (test_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags) &&
offset >= ATH11K_PCI_ACCESS_ALWAYS_OFF && ab->pci.ops->wakeup)
ret = ab->pci.ops->wakeup(ab);
if (offset < ATH11K_PCI_WINDOW_START) {
iowrite32(value, ab->mem + offset);
} else if (ab->hw_params.static_window_map) {
window_start = ath11k_pcic_get_window_start(ab, offset);
iowrite32(value, ab->mem + window_start +
(offset & ATH11K_PCI_WINDOW_RANGE_MASK));
} else if (ab->pci.ops->window_write32) {
ab->pci.ops->window_write32(ab, offset, value);
}
if (test_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags) &&
offset >= ATH11K_PCI_ACCESS_ALWAYS_OFF && ab->pci.ops->release &&
!ret)
ab->pci.ops->release(ab);
}
EXPORT_SYMBOL(ath11k_pcic_write32);
u32 ath11k_pcic_read32(struct ath11k_base *ab, u32 offset)
{
u32 val = 0;
u32 window_start;
int ret = 0;
/* for offset beyond BAR + 4K - 32, may
* need to wakeup the device to access.
*/
if (test_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags) &&
offset >= ATH11K_PCI_ACCESS_ALWAYS_OFF && ab->pci.ops->wakeup)
ret = ab->pci.ops->wakeup(ab);
if (offset < ATH11K_PCI_WINDOW_START) {
val = ioread32(ab->mem + offset);
} else if (ab->hw_params.static_window_map) {
window_start = ath11k_pcic_get_window_start(ab, offset);
val = ioread32(ab->mem + window_start +
(offset & ATH11K_PCI_WINDOW_RANGE_MASK));
} else if (ab->pci.ops->window_read32) {
val = ab->pci.ops->window_read32(ab, offset);
}
if (test_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags) &&
offset >= ATH11K_PCI_ACCESS_ALWAYS_OFF && ab->pci.ops->release &&
!ret)
ab->pci.ops->release(ab);
return val;
}
EXPORT_SYMBOL(ath11k_pcic_read32);
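For offsets past the first 512 KiB of BAR0, the accessors above either use the static window map or hand off to the bus-specific window ops; a minimal sketch of how such an offset decomposes in the dynamic-window case, assuming the window index lives in bits 24:19 (ATH11K_PCI_WINDOW_VALUE_MASK from pcic.h) and using hypothetical helper names:

/* illustrative only: constants mirror ATH11K_PCI_WINDOW_* in pcic.h */
#define EXAMPLE_WINDOW_RANGE_MASK	0x0007ffff	/* GENMASK(18, 0) */
#define EXAMPLE_WINDOW_VALUE_MASK	0x01f80000	/* GENMASK(24, 19) */

/* which 512 KiB window must be selected before the access */
static inline u32 example_window_index(u32 offset)
{
	return (offset & EXAMPLE_WINDOW_VALUE_MASK) >> 19;
}

/* where the register sits inside the selected window */
static inline u32 example_window_offset(u32 offset)
{
	return offset & EXAMPLE_WINDOW_RANGE_MASK;
}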
void ath11k_pcic_get_msi_address(struct ath11k_base *ab, u32 *msi_addr_lo,
u32 *msi_addr_hi)
{
*msi_addr_lo = ab->pci.msi.addr_lo;
*msi_addr_hi = ab->pci.msi.addr_hi;
}
EXPORT_SYMBOL(ath11k_pcic_get_msi_address);
int ath11k_pcic_get_user_msi_assignment(struct ath11k_base *ab, char *user_name,
int *num_vectors, u32 *user_base_data,
u32 *base_vector)
{
const struct ath11k_msi_config *msi_config = ab->pci.msi.config;
int idx;
for (idx = 0; idx < msi_config->total_users; idx++) {
if (strcmp(user_name, msi_config->users[idx].name) == 0) {
*num_vectors = msi_config->users[idx].num_vectors;
*base_vector = msi_config->users[idx].base_vector;
*user_base_data = *base_vector + ab->pci.msi.ep_base_data;
ath11k_dbg(ab, ATH11K_DBG_PCI,
"Assign MSI to user: %s, num_vectors: %d, user_base_data: %u, base_vector: %u\n",
user_name, *num_vectors, *user_base_data,
*base_vector);
return 0;
}
}
ath11k_err(ab, "Failed to find MSI assignment for %s!\n", user_name);
return -EINVAL;
}
EXPORT_SYMBOL(ath11k_pcic_get_user_msi_assignment);
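A consumer of this API asks for its named block and derives per-instance vectors from the returned base; a minimal sketch for the "DP" user, reusing the same round-robin that ath11k_pcic_ext_irq_config() applies further down (the helper name is hypothetical):

/* illustrative only: resolve which MSI vector services ext irq
 * group 'grp'; e.g. (grp % 18) + 14 on QCA6390/WCN6855
 */
static int example_dp_group_vector(struct ath11k_base *ab, int grp)
{
	u32 user_base_data = 0, base_vector = 0;
	int num_vectors = 0, ret;

	ret = ath11k_pcic_get_user_msi_assignment(ab, "DP", &num_vectors,
						  &user_base_data,
						  &base_vector);
	if (ret)
		return ret;

	return (grp % num_vectors) + base_vector;
}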
void ath11k_pcic_get_ce_msi_idx(struct ath11k_base *ab, u32 ce_id, u32 *msi_idx)
{
u32 i, msi_data_idx;
for (i = 0, msi_data_idx = 0; i < ab->hw_params.ce_count; i++) {
if (ath11k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
continue;
if (ce_id == i)
break;
msi_data_idx++;
}
*msi_idx = msi_data_idx;
}
EXPORT_SYMBOL(ath11k_pcic_get_ce_msi_idx);
static void ath11k_pcic_free_ext_irq(struct ath11k_base *ab)
{
int i, j;
for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
for (j = 0; j < irq_grp->num_irq; j++)
free_irq(ab->irq_num[irq_grp->irqs[j]], irq_grp);
netif_napi_del(&irq_grp->napi);
}
}
void ath11k_pcic_free_irq(struct ath11k_base *ab)
{
int i, irq_idx;
for (i = 0; i < ab->hw_params.ce_count; i++) {
if (ath11k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
continue;
irq_idx = ATH11K_PCI_IRQ_CE0_OFFSET + i;
free_irq(ab->irq_num[irq_idx], &ab->ce.ce_pipe[i]);
}
ath11k_pcic_free_ext_irq(ab);
}
EXPORT_SYMBOL(ath11k_pcic_free_irq);
static void ath11k_pcic_ce_irq_enable(struct ath11k_base *ab, u16 ce_id)
{
u32 irq_idx;
/* In case of one MSI vector, we handle irq enable/disable in a
* uniform way since we only have one irq
*/
if (!test_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags))
return;
irq_idx = ATH11K_PCI_IRQ_CE0_OFFSET + ce_id;
enable_irq(ab->irq_num[irq_idx]);
}
static void ath11k_pcic_ce_irq_disable(struct ath11k_base *ab, u16 ce_id)
{
u32 irq_idx;
/* In case of one MSI vector, we handle irq enable/disable in a
* uniform way since we only have one irq
*/
if (!test_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags))
return;
irq_idx = ATH11K_PCI_IRQ_CE0_OFFSET + ce_id;
disable_irq_nosync(ab->irq_num[irq_idx]);
}
static void ath11k_pcic_ce_irqs_disable(struct ath11k_base *ab)
{
int i;
clear_bit(ATH11K_FLAG_CE_IRQ_ENABLED, &ab->dev_flags);
for (i = 0; i < ab->hw_params.ce_count; i++) {
if (ath11k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
continue;
ath11k_pcic_ce_irq_disable(ab, i);
}
}
static void ath11k_pcic_sync_ce_irqs(struct ath11k_base *ab)
{
int i;
int irq_idx;
for (i = 0; i < ab->hw_params.ce_count; i++) {
if (ath11k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
continue;
irq_idx = ATH11K_PCI_IRQ_CE0_OFFSET + i;
synchronize_irq(ab->irq_num[irq_idx]);
}
}
static void ath11k_pcic_ce_tasklet(struct tasklet_struct *t)
{
struct ath11k_ce_pipe *ce_pipe = from_tasklet(ce_pipe, t, intr_tq);
int irq_idx = ATH11K_PCI_IRQ_CE0_OFFSET + ce_pipe->pipe_num;
ath11k_ce_per_engine_service(ce_pipe->ab, ce_pipe->pipe_num);
enable_irq(ce_pipe->ab->irq_num[irq_idx]);
}
static irqreturn_t ath11k_pcic_ce_interrupt_handler(int irq, void *arg)
{
struct ath11k_ce_pipe *ce_pipe = arg;
struct ath11k_base *ab = ce_pipe->ab;
int irq_idx = ATH11K_PCI_IRQ_CE0_OFFSET + ce_pipe->pipe_num;
if (!test_bit(ATH11K_FLAG_CE_IRQ_ENABLED, &ab->dev_flags))
return IRQ_HANDLED;
/* last interrupt received for this CE */
ce_pipe->timestamp = jiffies;
disable_irq_nosync(ab->irq_num[irq_idx]);
tasklet_schedule(&ce_pipe->intr_tq);
return IRQ_HANDLED;
}
static void ath11k_pcic_ext_grp_disable(struct ath11k_ext_irq_grp *irq_grp)
{
struct ath11k_base *ab = irq_grp->ab;
int i;
/* In case of one MSI vector, we handle irq enable/disable
* in a uniform way since we only have one irq
*/
if (!test_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags))
return;
for (i = 0; i < irq_grp->num_irq; i++)
disable_irq_nosync(irq_grp->ab->irq_num[irq_grp->irqs[i]]);
}
static void __ath11k_pcic_ext_irq_disable(struct ath11k_base *sc)
{
int i;
clear_bit(ATH11K_FLAG_EXT_IRQ_ENABLED, &sc->dev_flags);
for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
struct ath11k_ext_irq_grp *irq_grp = &sc->ext_irq_grp[i];
ath11k_pcic_ext_grp_disable(irq_grp);
if (irq_grp->napi_enabled) {
napi_synchronize(&irq_grp->napi);
napi_disable(&irq_grp->napi);
irq_grp->napi_enabled = false;
}
}
}
static void ath11k_pcic_ext_grp_enable(struct ath11k_ext_irq_grp *irq_grp)
{
struct ath11k_base *ab = irq_grp->ab;
int i;
/* In case of one MSI vector, we handle irq enable/disable in a
* uniform way since we only have one irq
*/
if (!test_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags))
return;
for (i = 0; i < irq_grp->num_irq; i++)
enable_irq(irq_grp->ab->irq_num[irq_grp->irqs[i]]);
}
void ath11k_pcic_ext_irq_enable(struct ath11k_base *ab)
{
int i;
set_bit(ATH11K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags);
for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
if (!irq_grp->napi_enabled) {
napi_enable(&irq_grp->napi);
irq_grp->napi_enabled = true;
}
ath11k_pcic_ext_grp_enable(irq_grp);
}
}
EXPORT_SYMBOL(ath11k_pcic_ext_irq_enable);
static void ath11k_pcic_sync_ext_irqs(struct ath11k_base *ab)
{
int i, j, irq_idx;
for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
for (j = 0; j < irq_grp->num_irq; j++) {
irq_idx = irq_grp->irqs[j];
synchronize_irq(ab->irq_num[irq_idx]);
}
}
}
void ath11k_pcic_ext_irq_disable(struct ath11k_base *ab)
{
__ath11k_pcic_ext_irq_disable(ab);
ath11k_pcic_sync_ext_irqs(ab);
}
EXPORT_SYMBOL(ath11k_pcic_ext_irq_disable);
static int ath11k_pcic_ext_grp_napi_poll(struct napi_struct *napi, int budget)
{
struct ath11k_ext_irq_grp *irq_grp = container_of(napi,
struct ath11k_ext_irq_grp,
napi);
struct ath11k_base *ab = irq_grp->ab;
int work_done;
int i;
work_done = ath11k_dp_service_srng(ab, irq_grp, budget);
if (work_done < budget) {
napi_complete_done(napi, work_done);
for (i = 0; i < irq_grp->num_irq; i++)
enable_irq(irq_grp->ab->irq_num[irq_grp->irqs[i]]);
}
if (work_done > budget)
work_done = budget;
return work_done;
}
static irqreturn_t ath11k_pcic_ext_interrupt_handler(int irq, void *arg)
{
struct ath11k_ext_irq_grp *irq_grp = arg;
struct ath11k_base *ab = irq_grp->ab;
int i;
if (!test_bit(ATH11K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags))
return IRQ_HANDLED;
ath11k_dbg(irq_grp->ab, ATH11K_DBG_PCI, "ext irq:%d\n", irq);
/* last interrupt received for this group */
irq_grp->timestamp = jiffies;
for (i = 0; i < irq_grp->num_irq; i++)
disable_irq_nosync(irq_grp->ab->irq_num[irq_grp->irqs[i]]);
napi_schedule(&irq_grp->napi);
return IRQ_HANDLED;
}
static int
ath11k_pcic_get_msi_irq(struct ath11k_base *ab, unsigned int vector)
{
if (!ab->pci.ops->get_msi_irq) {
WARN_ONCE(1, "get_msi_irq pci op not defined");
return -EOPNOTSUPP;
}
return ab->pci.ops->get_msi_irq(ab, vector);
}
static int ath11k_pcic_ext_irq_config(struct ath11k_base *ab)
{
int i, j, ret, num_vectors = 0;
u32 user_base_data = 0, base_vector = 0;
unsigned long irq_flags;
ret = ath11k_pcic_get_user_msi_assignment(ab, "DP", &num_vectors,
&user_base_data,
&base_vector);
if (ret < 0)
return ret;
irq_flags = IRQF_SHARED;
if (!test_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags))
irq_flags |= IRQF_NOBALANCING;
for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
u32 num_irq = 0;
irq_grp->ab = ab;
irq_grp->grp_id = i;
init_dummy_netdev(&irq_grp->napi_ndev);
netif_napi_add(&irq_grp->napi_ndev, &irq_grp->napi,
ath11k_pcic_ext_grp_napi_poll, NAPI_POLL_WEIGHT);
if (ab->hw_params.ring_mask->tx[i] ||
ab->hw_params.ring_mask->rx[i] ||
ab->hw_params.ring_mask->rx_err[i] ||
ab->hw_params.ring_mask->rx_wbm_rel[i] ||
ab->hw_params.ring_mask->reo_status[i] ||
ab->hw_params.ring_mask->rxdma2host[i] ||
ab->hw_params.ring_mask->host2rxdma[i] ||
ab->hw_params.ring_mask->rx_mon_status[i]) {
num_irq = 1;
}
irq_grp->num_irq = num_irq;
irq_grp->irqs[0] = ATH11K_PCI_IRQ_DP_OFFSET + i;
for (j = 0; j < irq_grp->num_irq; j++) {
int irq_idx = irq_grp->irqs[j];
int vector = (i % num_vectors) + base_vector;
int irq = ath11k_pcic_get_msi_irq(ab, vector);
if (irq < 0)
return irq;
ab->irq_num[irq_idx] = irq;
ath11k_dbg(ab, ATH11K_DBG_PCI,
"irq:%d group:%d\n", irq, i);
irq_set_status_flags(irq, IRQ_DISABLE_UNLAZY);
ret = request_irq(irq, ath11k_pcic_ext_interrupt_handler,
irq_flags, "DP_EXT_IRQ", irq_grp);
if (ret) {
ath11k_err(ab, "failed request irq %d: %d\n",
vector, ret);
return ret;
}
}
ath11k_pcic_ext_grp_disable(irq_grp);
}
return 0;
}
int ath11k_pcic_config_irq(struct ath11k_base *ab)
{
struct ath11k_ce_pipe *ce_pipe;
u32 msi_data_start;
u32 msi_data_count, msi_data_idx;
u32 msi_irq_start;
unsigned int msi_data;
int irq, i, ret, irq_idx;
unsigned long irq_flags;
ret = ath11k_pcic_get_user_msi_assignment(ab, "CE", &msi_data_count,
&msi_data_start, &msi_irq_start);
if (ret)
return ret;
irq_flags = IRQF_SHARED;
if (!test_bit(ATH11K_FLAG_MULTI_MSI_VECTORS, &ab->dev_flags))
irq_flags |= IRQF_NOBALANCING;
/* Configure CE irqs */
for (i = 0, msi_data_idx = 0; i < ab->hw_params.ce_count; i++) {
if (ath11k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
continue;
msi_data = (msi_data_idx % msi_data_count) + msi_irq_start;
irq = ath11k_pcic_get_msi_irq(ab, msi_data);
if (irq < 0)
return irq;
ce_pipe = &ab->ce.ce_pipe[i];
irq_idx = ATH11K_PCI_IRQ_CE0_OFFSET + i;
tasklet_setup(&ce_pipe->intr_tq, ath11k_pcic_ce_tasklet);
ret = request_irq(irq, ath11k_pcic_ce_interrupt_handler,
irq_flags, irq_name[irq_idx], ce_pipe);
if (ret) {
ath11k_err(ab, "failed to request irq %d: %d\n",
irq_idx, ret);
return ret;
}
ab->irq_num[irq_idx] = irq;
msi_data_idx++;
ath11k_pcic_ce_irq_disable(ab, i);
}
ret = ath11k_pcic_ext_irq_config(ab);
if (ret)
return ret;
return 0;
}
EXPORT_SYMBOL(ath11k_pcic_config_irq);
void ath11k_pcic_ce_irqs_enable(struct ath11k_base *ab)
{
int i;
set_bit(ATH11K_FLAG_CE_IRQ_ENABLED, &ab->dev_flags);
for (i = 0; i < ab->hw_params.ce_count; i++) {
if (ath11k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
continue;
ath11k_pcic_ce_irq_enable(ab, i);
}
}
EXPORT_SYMBOL(ath11k_pcic_ce_irqs_enable);
static void ath11k_pcic_kill_tasklets(struct ath11k_base *ab)
{
int i;
for (i = 0; i < ab->hw_params.ce_count; i++) {
struct ath11k_ce_pipe *ce_pipe = &ab->ce.ce_pipe[i];
if (ath11k_ce_get_attr_flags(ab, i) & CE_ATTR_DIS_INTR)
continue;
tasklet_kill(&ce_pipe->intr_tq);
}
}
void ath11k_pcic_ce_irq_disable_sync(struct ath11k_base *ab)
{
ath11k_pcic_ce_irqs_disable(ab);
ath11k_pcic_sync_ce_irqs(ab);
ath11k_pcic_kill_tasklets(ab);
}
EXPORT_SYMBOL(ath11k_pcic_ce_irq_disable_sync);
void ath11k_pcic_stop(struct ath11k_base *ab)
{
ath11k_pcic_ce_irq_disable_sync(ab);
ath11k_ce_cleanup_pipes(ab);
}
EXPORT_SYMBOL(ath11k_pcic_stop);
int ath11k_pcic_start(struct ath11k_base *ab)
{
set_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags);
ath11k_pcic_ce_irqs_enable(ab);
ath11k_ce_rx_post_buf(ab);
return 0;
}
EXPORT_SYMBOL(ath11k_pcic_start);
int ath11k_pcic_map_service_to_pipe(struct ath11k_base *ab, u16 service_id,
u8 *ul_pipe, u8 *dl_pipe)
{
const struct service_to_pipe *entry;
bool ul_set = false, dl_set = false;
int i;
for (i = 0; i < ab->hw_params.svc_to_ce_map_len; i++) {
entry = &ab->hw_params.svc_to_ce_map[i];
if (__le32_to_cpu(entry->service_id) != service_id)
continue;
switch (__le32_to_cpu(entry->pipedir)) {
case PIPEDIR_NONE:
break;
case PIPEDIR_IN:
WARN_ON(dl_set);
*dl_pipe = __le32_to_cpu(entry->pipenum);
dl_set = true;
break;
case PIPEDIR_OUT:
WARN_ON(ul_set);
*ul_pipe = __le32_to_cpu(entry->pipenum);
ul_set = true;
break;
case PIPEDIR_INOUT:
WARN_ON(dl_set);
WARN_ON(ul_set);
*dl_pipe = __le32_to_cpu(entry->pipenum);
*ul_pipe = __le32_to_cpu(entry->pipenum);
dl_set = true;
ul_set = true;
break;
}
}
if (WARN_ON(!ul_set || !dl_set))
return -ENOENT;
return 0;
}
EXPORT_SYMBOL(ath11k_pcic_map_service_to_pipe);
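For reference, a transport caller resolves the pipes for one service roughly like this; a small sketch assuming the WMI control service id from htc.h (the wrapper name is hypothetical):

/* illustrative only */
static int example_wmi_pipes(struct ath11k_base *ab)
{
	u8 ul_pipe, dl_pipe;
	int ret;

	ret = ath11k_pcic_map_service_to_pipe(ab, ATH11K_HTC_SVC_ID_WMI_CONTROL,
					      &ul_pipe, &dl_pipe);
	if (ret)
		return ret;

	/* ul_pipe: host->target commands, dl_pipe: target->host events */
	return 0;
}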


@ -0,0 +1,46 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _ATH11K_PCI_CMN_H
#define _ATH11K_PCI_CMN_H
#include "core.h"
#define ATH11K_PCI_IRQ_CE0_OFFSET 3
#define ATH11K_PCI_IRQ_DP_OFFSET 14
#define ATH11K_PCI_WINDOW_ENABLE_BIT 0x40000000
#define ATH11K_PCI_WINDOW_REG_ADDRESS 0x310c
#define ATH11K_PCI_WINDOW_VALUE_MASK GENMASK(24, 19)
#define ATH11K_PCI_WINDOW_START 0x80000
#define ATH11K_PCI_WINDOW_RANGE_MASK GENMASK(18, 0)
/* BAR0 + 4k is always accessible, and no
* need to force wakeup.
* 4K - 32 = 0xFE0
*/
#define ATH11K_PCI_ACCESS_ALWAYS_OFF 0xFE0
int ath11k_pcic_get_user_msi_assignment(struct ath11k_base *ab, char *user_name,
int *num_vectors, u32 *user_base_data,
u32 *base_vector);
void ath11k_pcic_write32(struct ath11k_base *ab, u32 offset, u32 value);
u32 ath11k_pcic_read32(struct ath11k_base *ab, u32 offset);
void ath11k_pcic_get_msi_address(struct ath11k_base *ab, u32 *msi_addr_lo,
u32 *msi_addr_hi);
void ath11k_pcic_get_ce_msi_idx(struct ath11k_base *ab, u32 ce_id, u32 *msi_idx);
void ath11k_pcic_free_irq(struct ath11k_base *ab);
int ath11k_pcic_config_irq(struct ath11k_base *ab);
void ath11k_pcic_ext_irq_enable(struct ath11k_base *ab);
void ath11k_pcic_ext_irq_disable(struct ath11k_base *ab);
void ath11k_pcic_stop(struct ath11k_base *ab);
int ath11k_pcic_start(struct ath11k_base *ab);
int ath11k_pcic_map_service_to_pipe(struct ath11k_base *ab, u16 service_id,
u8 *ul_pipe, u8 *dl_pipe);
void ath11k_pcic_ce_irqs_enable(struct ath11k_base *ab);
void ath11k_pcic_ce_irq_disable_sync(struct ath11k_base *ab);
int ath11k_pcic_init_msi_config(struct ath11k_base *ab);
#endif


@ -1,12 +1,30 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include "core.h"
#include "peer.h"
#include "debug.h"
static struct ath11k_peer *ath11k_peer_find_list_by_id(struct ath11k_base *ab,
int peer_id)
{
struct ath11k_peer *peer;
lockdep_assert_held(&ab->base_lock);
list_for_each_entry(peer, &ab->peers, list) {
if (peer->peer_id != peer_id)
continue;
return peer;
}
return NULL;
}
struct ath11k_peer *ath11k_peer_find(struct ath11k_base *ab, int vdev_id,
const u8 *addr)
{
@ -26,25 +44,6 @@ struct ath11k_peer *ath11k_peer_find(struct ath11k_base *ab, int vdev_id,
return NULL;
}
static struct ath11k_peer *ath11k_peer_find_by_pdev_idx(struct ath11k_base *ab,
u8 pdev_idx, const u8 *addr)
{
struct ath11k_peer *peer;
lockdep_assert_held(&ab->base_lock);
list_for_each_entry(peer, &ab->peers, list) {
if (peer->pdev_idx != pdev_idx)
continue;
if (!ether_addr_equal(peer->addr, addr))
continue;
return peer;
}
return NULL;
}
struct ath11k_peer *ath11k_peer_find_by_addr(struct ath11k_base *ab,
const u8 *addr)
{
@ -52,14 +51,13 @@ struct ath11k_peer *ath11k_peer_find_by_addr(struct ath11k_base *ab,
lockdep_assert_held(&ab->base_lock);
list_for_each_entry(peer, &ab->peers, list) {
if (!ether_addr_equal(peer->addr, addr))
continue;
if (!ab->rhead_peer_addr)
return NULL;
return peer;
}
peer = rhashtable_lookup_fast(ab->rhead_peer_addr, addr,
ab->rhash_peer_addr_param);
return NULL;
return peer;
}
struct ath11k_peer *ath11k_peer_find_by_id(struct ath11k_base *ab,
@ -69,11 +67,13 @@ struct ath11k_peer *ath11k_peer_find_by_id(struct ath11k_base *ab,
lockdep_assert_held(&ab->base_lock);
list_for_each_entry(peer, &ab->peers, list)
if (peer_id == peer->peer_id)
return peer;
if (!ab->rhead_peer_id)
return NULL;
return NULL;
peer = rhashtable_lookup_fast(ab->rhead_peer_id, &peer_id,
ab->rhash_peer_id_param);
return peer;
}
struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab,
@ -99,7 +99,7 @@ void ath11k_peer_unmap_event(struct ath11k_base *ab, u16 peer_id)
spin_lock_bh(&ab->base_lock);
peer = ath11k_peer_find_by_id(ab, peer_id);
peer = ath11k_peer_find_list_by_id(ab, peer_id);
if (!peer) {
ath11k_warn(ab, "peer-unmap-event: unknown peer id %d\n",
peer_id);
@ -167,6 +167,76 @@ static int ath11k_wait_for_peer_common(struct ath11k_base *ab, int vdev_id,
return 0;
}
static inline int ath11k_peer_rhash_insert(struct ath11k_base *ab,
struct rhashtable *rtbl,
struct rhash_head *rhead,
struct rhashtable_params *params,
void *key)
{
struct ath11k_peer *tmp;
lockdep_assert_held(&ab->tbl_mtx_lock);
tmp = rhashtable_lookup_get_insert_fast(rtbl, rhead, *params);
if (!tmp)
return 0;
else if (IS_ERR(tmp))
return PTR_ERR(tmp);
else
return -EEXIST;
}
static inline int ath11k_peer_rhash_remove(struct ath11k_base *ab,
struct rhashtable *rtbl,
struct rhash_head *rhead,
struct rhashtable_params *params)
{
int ret;
lockdep_assert_held(&ab->tbl_mtx_lock);
ret = rhashtable_remove_fast(rtbl, rhead, *params);
if (ret && ret != -ENOENT)
return ret;
return 0;
}
static int ath11k_peer_rhash_add(struct ath11k_base *ab, struct ath11k_peer *peer)
{
int ret;
lockdep_assert_held(&ab->base_lock);
lockdep_assert_held(&ab->tbl_mtx_lock);
if (!ab->rhead_peer_id || !ab->rhead_peer_addr)
return -EPERM;
ret = ath11k_peer_rhash_insert(ab, ab->rhead_peer_id, &peer->rhash_id,
&ab->rhash_peer_id_param, &peer->peer_id);
if (ret) {
ath11k_warn(ab, "failed to add peer %pM with id %d in rhash_id ret %d\n",
peer->addr, peer->peer_id, ret);
return ret;
}
ret = ath11k_peer_rhash_insert(ab, ab->rhead_peer_addr, &peer->rhash_addr,
&ab->rhash_peer_addr_param, &peer->addr);
if (ret) {
ath11k_warn(ab, "failed to add peer %pM with id %d in rhash_addr ret %d\n",
peer->addr, peer->peer_id, ret);
goto err_clean;
}
return 0;
err_clean:
ath11k_peer_rhash_remove(ab, ab->rhead_peer_id, &peer->rhash_id,
&ab->rhash_peer_id_param);
return ret;
}
void ath11k_peer_cleanup(struct ath11k *ar, u32 vdev_id)
{
struct ath11k_peer *peer, *tmp;
@ -174,6 +244,7 @@ void ath11k_peer_cleanup(struct ath11k *ar, u32 vdev_id)
lockdep_assert_held(&ar->conf_mutex);
mutex_lock(&ab->tbl_mtx_lock);
spin_lock_bh(&ab->base_lock);
list_for_each_entry_safe(peer, tmp, &ab->peers, list) {
if (peer->vdev_id != vdev_id)
@ -182,12 +253,14 @@ void ath11k_peer_cleanup(struct ath11k *ar, u32 vdev_id)
ath11k_warn(ab, "removing stale peer %pM from vdev_id %d\n",
peer->addr, vdev_id);
ath11k_peer_rhash_delete(ab, peer);
list_del(&peer->list);
kfree(peer);
ar->num_peers--;
}
spin_unlock_bh(&ab->base_lock);
mutex_unlock(&ab->tbl_mtx_lock);
}
static int ath11k_wait_for_peer_deleted(struct ath11k *ar, int vdev_id, const u8 *addr)
@ -217,17 +290,38 @@ int ath11k_wait_for_peer_delete_done(struct ath11k *ar, u32 vdev_id,
return 0;
}
int ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, u8 *addr)
static int __ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, const u8 *addr)
{
int ret;
struct ath11k_peer *peer;
struct ath11k_base *ab = ar->ab;
lockdep_assert_held(&ar->conf_mutex);
mutex_lock(&ab->tbl_mtx_lock);
spin_lock_bh(&ab->base_lock);
peer = ath11k_peer_find_by_addr(ab, addr);
if (!peer) {
spin_unlock_bh(&ab->base_lock);
mutex_unlock(&ab->tbl_mtx_lock);
ath11k_warn(ab,
"failed to find peer vdev_id %d addr %pM in delete\n",
vdev_id, addr);
return -EINVAL;
}
ath11k_peer_rhash_delete(ab, peer);
spin_unlock_bh(&ab->base_lock);
mutex_unlock(&ab->tbl_mtx_lock);
reinit_completion(&ar->peer_delete_done);
ret = ath11k_wmi_send_peer_delete_cmd(ar, addr, vdev_id);
if (ret) {
ath11k_warn(ar->ab,
ath11k_warn(ab,
"failed to delete peer vdev_id %d addr %pM ret %d\n",
vdev_id, addr, ret);
return ret;
@ -237,6 +331,19 @@ int ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, u8 *addr)
if (ret)
return ret;
return 0;
}
int ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, u8 *addr)
{
int ret;
lockdep_assert_held(&ar->conf_mutex);
ret = __ath11k_peer_delete(ar, vdev_id, addr);
if (ret)
return ret;
ar->num_peers--;
return 0;
@ -263,7 +370,7 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
}
spin_lock_bh(&ar->ab->base_lock);
peer = ath11k_peer_find_by_pdev_idx(ar->ab, ar->pdev_idx, param->peer_addr);
peer = ath11k_peer_find_by_addr(ar->ab, param->peer_addr);
if (peer) {
spin_unlock_bh(&ar->ab->base_lock);
return -EINVAL;
@ -283,11 +390,13 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
if (ret)
return ret;
mutex_lock(&ar->ab->tbl_mtx_lock);
spin_lock_bh(&ar->ab->base_lock);
peer = ath11k_peer_find(ar->ab, param->vdev_id, param->peer_addr);
if (!peer) {
spin_unlock_bh(&ar->ab->base_lock);
mutex_unlock(&ar->ab->tbl_mtx_lock);
ath11k_warn(ar->ab, "failed to find peer %pM on vdev %i after creation\n",
param->peer_addr, param->vdev_id);
@ -295,6 +404,13 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
goto cleanup;
}
ret = ath11k_peer_rhash_add(ar->ab, peer);
if (ret) {
spin_unlock_bh(&ar->ab->base_lock);
mutex_unlock(&ar->ab->tbl_mtx_lock);
goto cleanup;
}
peer->pdev_idx = ar->pdev_idx;
peer->sta = sta;
@ -319,26 +435,213 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
ar->num_peers++;
spin_unlock_bh(&ar->ab->base_lock);
mutex_unlock(&ar->ab->tbl_mtx_lock);
return 0;
cleanup:
reinit_completion(&ar->peer_delete_done);
fbret = ath11k_wmi_send_peer_delete_cmd(ar, param->peer_addr,
param->vdev_id);
if (fbret) {
ath11k_warn(ar->ab, "failed to delete peer vdev_id %d addr %pM\n",
param->vdev_id, param->peer_addr);
goto exit;
}
fbret = ath11k_wait_for_peer_delete_done(ar, param->vdev_id,
param->peer_addr);
fbret = __ath11k_peer_delete(ar, param->vdev_id, param->peer_addr);
if (fbret)
ath11k_warn(ar->ab, "failed wait for peer %pM delete done id %d fallback ret %d\n",
ath11k_warn(ar->ab, "failed peer %pM delete vdev_id %d fallback ret %d\n",
param->peer_addr, param->vdev_id, fbret);
exit:
return ret;
}
int ath11k_peer_rhash_delete(struct ath11k_base *ab, struct ath11k_peer *peer)
{
int ret;
lockdep_assert_held(&ab->base_lock);
lockdep_assert_held(&ab->tbl_mtx_lock);
if (!ab->rhead_peer_id || !ab->rhead_peer_addr)
return -EPERM;
ret = ath11k_peer_rhash_remove(ab, ab->rhead_peer_addr, &peer->rhash_addr,
&ab->rhash_peer_addr_param);
if (ret) {
ath11k_warn(ab, "failed to remove peer %pM id %d in rhash_addr ret %d\n",
peer->addr, peer->peer_id, ret);
return ret;
}
ret = ath11k_peer_rhash_remove(ab, ab->rhead_peer_id, &peer->rhash_id,
&ab->rhash_peer_id_param);
if (ret) {
ath11k_warn(ab, "failed to remove peer %pM id %d in rhash_id ret %d\n",
peer->addr, peer->peer_id, ret);
return ret;
}
return 0;
}
static int ath11k_peer_rhash_id_tbl_init(struct ath11k_base *ab)
{
struct rhashtable_params *param;
struct rhashtable *rhash_id_tbl;
int ret;
size_t size;
lockdep_assert_held(&ab->tbl_mtx_lock);
if (ab->rhead_peer_id)
return 0;
size = sizeof(*ab->rhead_peer_id);
rhash_id_tbl = kzalloc(size, GFP_KERNEL);
if (!rhash_id_tbl) {
ath11k_warn(ab, "failed to init rhash id table due to no mem (size %zu)\n",
size);
return -ENOMEM;
}
param = &ab->rhash_peer_id_param;
param->key_offset = offsetof(struct ath11k_peer, peer_id);
param->head_offset = offsetof(struct ath11k_peer, rhash_id);
param->key_len = sizeof_field(struct ath11k_peer, peer_id);
param->automatic_shrinking = true;
param->nelem_hint = ab->num_radios * TARGET_NUM_PEERS_PDEV(ab);
ret = rhashtable_init(rhash_id_tbl, param);
if (ret) {
ath11k_warn(ab, "failed to init peer id rhash table %d\n", ret);
goto err_free;
}
spin_lock_bh(&ab->base_lock);
if (!ab->rhead_peer_id) {
ab->rhead_peer_id = rhash_id_tbl;
} else {
spin_unlock_bh(&ab->base_lock);
goto cleanup_tbl;
}
spin_unlock_bh(&ab->base_lock);
return 0;
cleanup_tbl:
rhashtable_destroy(rhash_id_tbl);
err_free:
kfree(rhash_id_tbl);
return ret;
}
static int ath11k_peer_rhash_addr_tbl_init(struct ath11k_base *ab)
{
struct rhashtable_params *param;
struct rhashtable *rhash_addr_tbl;
int ret;
size_t size;
lockdep_assert_held(&ab->tbl_mtx_lock);
if (ab->rhead_peer_addr)
return 0;
size = sizeof(*ab->rhead_peer_addr);
rhash_addr_tbl = kzalloc(size, GFP_KERNEL);
if (!rhash_addr_tbl) {
ath11k_warn(ab, "failed to init rhash addr table due to no mem (size %zu)\n",
size);
return -ENOMEM;
}
param = &ab->rhash_peer_addr_param;
param->key_offset = offsetof(struct ath11k_peer, addr);
param->head_offset = offsetof(struct ath11k_peer, rhash_addr);
param->key_len = sizeof_field(struct ath11k_peer, addr);
param->automatic_shrinking = true;
param->nelem_hint = ab->num_radios * TARGET_NUM_PEERS_PDEV(ab);
ret = rhashtable_init(rhash_addr_tbl, param);
if (ret) {
ath11k_warn(ab, "failed to init peer addr rhash table %d\n", ret);
goto err_free;
}
spin_lock_bh(&ab->base_lock);
if (!ab->rhead_peer_addr) {
ab->rhead_peer_addr = rhash_addr_tbl;
} else {
spin_unlock_bh(&ab->base_lock);
goto cleanup_tbl;
}
spin_unlock_bh(&ab->base_lock);
return 0;
cleanup_tbl:
rhashtable_destroy(rhash_addr_tbl);
err_free:
kfree(rhash_addr_tbl);
return ret;
}
static inline void ath11k_peer_rhash_id_tbl_destroy(struct ath11k_base *ab)
{
lockdep_assert_held(&ab->tbl_mtx_lock);
if (!ab->rhead_peer_id)
return;
rhashtable_destroy(ab->rhead_peer_id);
kfree(ab->rhead_peer_id);
ab->rhead_peer_id = NULL;
}
static inline void ath11k_peer_rhash_addr_tbl_destroy(struct ath11k_base *ab)
{
lockdep_assert_held(&ab->tbl_mtx_lock);
if (!ab->rhead_peer_addr)
return;
rhashtable_destroy(ab->rhead_peer_addr);
kfree(ab->rhead_peer_addr);
ab->rhead_peer_addr = NULL;
}
int ath11k_peer_rhash_tbl_init(struct ath11k_base *ab)
{
int ret;
mutex_lock(&ab->tbl_mtx_lock);
ret = ath11k_peer_rhash_id_tbl_init(ab);
if (ret)
goto out;
ret = ath11k_peer_rhash_addr_tbl_init(ab);
if (ret)
goto cleanup_tbl;
mutex_unlock(&ab->tbl_mtx_lock);
return 0;
cleanup_tbl:
ath11k_peer_rhash_id_tbl_destroy(ab);
out:
mutex_unlock(&ab->tbl_mtx_lock);
return ret;
}
void ath11k_peer_rhash_tbl_destroy(struct ath11k_base *ab)
{
mutex_lock(&ab->tbl_mtx_lock);
ath11k_peer_rhash_addr_tbl_destroy(ab);
ath11k_peer_rhash_id_tbl_destroy(ab);
mutex_unlock(&ab->tbl_mtx_lock);
}
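Both peer tables follow the stock rhashtable recipe of keying directly on a struct member; a minimal, self-contained sketch of that recipe on a generic entry type, assuming only the in-kernel rhashtable API (all names here are illustrative):

#include <linux/rhashtable.h>

struct example_entry {
	u32 id;			/* lookup key */
	struct rhash_head node;	/* hash list pointer */
};

static const struct rhashtable_params example_params = {
	.key_offset  = offsetof(struct example_entry, id),
	.head_offset = offsetof(struct example_entry, node),
	.key_len     = sizeof_field(struct example_entry, id),
	.automatic_shrinking = true,
};

static int example_use(struct rhashtable *tbl, struct example_entry *e, u32 id)
{
	struct example_entry *found;
	int ret;

	ret = rhashtable_init(tbl, &example_params);
	if (ret)
		return ret;

	ret = rhashtable_insert_fast(tbl, &e->node, example_params);
	if (ret)
		goto out;

	found = rhashtable_lookup_fast(tbl, &id, example_params);
	if (found)
		rhashtable_remove_fast(tbl, &found->node, example_params);
out:
	rhashtable_destroy(tbl);
	return ret;
}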


@ -1,6 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH11K_PEER_H
@ -20,6 +21,11 @@ struct ath11k_peer {
struct ieee80211_key_conf *keys[WMI_MAX_KEY_INDEX + 1];
struct dp_rx_tid rx_tid[IEEE80211_NUM_TIDS + 1];
/* peer id based rhashtable list pointer */
struct rhash_head rhash_id;
/* peer addr based rhashtable list pointer */
struct rhash_head rhash_addr;
/* Info used in MMIC verification of
* RX fragments
*/
@ -47,5 +53,7 @@ int ath11k_wait_for_peer_delete_done(struct ath11k *ar, u32 vdev_id,
const u8 *addr);
struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab,
int vdev_id);
int ath11k_peer_rhash_tbl_init(struct ath11k_base *ab);
void ath11k_peer_rhash_tbl_destroy(struct ath11k_base *ab);
int ath11k_peer_rhash_delete(struct ath11k_base *ab, struct ath11k_peer *peer);
#endif /* _PEER_H_ */


@ -1,6 +1,7 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/elf.h>
@ -12,9 +13,14 @@
#include <linux/of_address.h>
#include <linux/ioport.h>
#include <linux/firmware.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
#define SLEEP_CLOCK_SELECT_INTERNAL_BIT 0x02
#define HOST_CSTATE_BIT 0x04
#define PLATFORM_CAP_PCIE_GLOBAL_RESET 0x08
#define FW_BUILD_ID_MASK "QC_IMAGE_VERSION_STRING="
bool ath11k_cold_boot_cal = 1;
EXPORT_SYMBOL(ath11k_cold_boot_cal);
@ -745,6 +751,68 @@ static struct qmi_elem_info qmi_wlanfw_cap_req_msg_v01_ei[] = {
},
};
static struct qmi_elem_info qmi_wlanfw_device_info_req_msg_v01_ei[] = {
{
.data_type = QMI_EOTI,
.array_type = NO_ARRAY,
.tlv_type = QMI_COMMON_TLV_TYPE,
},
};
static struct qmi_elem_info qmi_wlfw_device_info_resp_msg_v01_ei[] = {
{
.data_type = QMI_STRUCT,
.elem_len = 1,
.elem_size = sizeof(struct qmi_response_type_v01),
.array_type = NO_ARRAY,
.tlv_type = 0x02,
.offset = offsetof(struct qmi_wlanfw_device_info_resp_msg_v01,
resp),
.ei_array = qmi_response_type_v01_ei,
},
{
.data_type = QMI_OPT_FLAG,
.elem_len = 1,
.elem_size = sizeof(u8),
.array_type = NO_ARRAY,
.tlv_type = 0x10,
.offset = offsetof(struct qmi_wlanfw_device_info_resp_msg_v01,
bar_addr_valid),
},
{
.data_type = QMI_UNSIGNED_8_BYTE,
.elem_len = 1,
.elem_size = sizeof(u64),
.array_type = NO_ARRAY,
.tlv_type = 0x10,
.offset = offsetof(struct qmi_wlanfw_device_info_resp_msg_v01,
bar_addr),
},
{
.data_type = QMI_OPT_FLAG,
.elem_len = 1,
.elem_size = sizeof(u8),
.array_type = NO_ARRAY,
.tlv_type = 0x11,
.offset = offsetof(struct qmi_wlanfw_device_info_resp_msg_v01,
bar_size_valid),
},
{
.data_type = QMI_UNSIGNED_4_BYTE,
.elem_len = 1,
.elem_size = sizeof(u32),
.array_type = NO_ARRAY,
.tlv_type = 0x11,
.offset = offsetof(struct qmi_wlanfw_device_info_resp_msg_v01,
bar_size),
},
{
.data_type = QMI_EOTI,
.array_type = NO_ARRAY,
.tlv_type = QMI_COMMON_TLV_TYPE,
},
};
static struct qmi_elem_info qmi_wlanfw_rf_chip_info_s_v01_ei[] = {
{
.data_type = QMI_UNSIGNED_4_BYTE,
@ -1645,7 +1713,7 @@ static int ath11k_qmi_host_cap_send(struct ath11k_base *ab)
req.bdf_support_valid = 1;
req.bdf_support = 1;
if (ab->bus_params.m3_fw_support) {
if (ab->hw_params.m3_fw_support) {
req.m3_support_valid = 1;
req.m3_support = 1;
req.m3_cache_support_valid = 1;
@ -1674,6 +1742,9 @@ static int ath11k_qmi_host_cap_send(struct ath11k_base *ab)
req.nm_modem |= SLEEP_CLOCK_SELECT_INTERNAL_BIT;
}
if (ab->hw_params.global_reset)
req.nm_modem |= PLATFORM_CAP_PCIE_GLOBAL_RESET;
ath11k_dbg(ab, ATH11K_DBG_QMI, "qmi host cap request\n");
ret = qmi_txn_init(&ab->qmi.handle, &txn,
@ -1728,10 +1799,6 @@ static int ath11k_qmi_fw_ind_register_send(struct ath11k_base *ab)
req->client_id = QMI_WLANFW_CLIENT_ID;
req->fw_ready_enable_valid = 1;
req->fw_ready_enable = 1;
req->request_mem_enable_valid = 1;
req->request_mem_enable = 1;
req->fw_mem_ready_enable_valid = 1;
req->fw_mem_ready_enable = 1;
req->cal_done_enable_valid = 1;
req->cal_done_enable = 1;
req->fw_init_done_enable_valid = 1;
@ -1740,6 +1807,17 @@ static int ath11k_qmi_fw_ind_register_send(struct ath11k_base *ab)
req->pin_connect_result_enable_valid = 0;
req->pin_connect_result_enable = 0;
/* WCN6750 doesn't request for DDR memory via QMI,
* instead it uses a fixed 12MB reserved memory
* region in DDR.
*/
if (!ab->hw_params.fixed_fw_mem) {
req->request_mem_enable_valid = 1;
req->request_mem_enable = 1;
req->fw_mem_ready_enable_valid = 1;
req->fw_mem_ready_enable = 1;
}
ret = qmi_txn_init(handle, &txn,
qmi_wlanfw_ind_register_resp_msg_v01_ei, resp);
if (ret < 0)
@ -1797,7 +1875,7 @@ static int ath11k_qmi_respond_fw_mem_request(struct ath11k_base *ab)
* failure to FW and FW will then request multiple blocks of small
* chunk size memory.
*/
if (!(ab->bus_params.fixed_mem_region ||
if (!(ab->hw_params.fixed_mem_region ||
test_bit(ATH11K_FLAG_FIXED_MEM_RGN, &ab->dev_flags)) &&
ab->qmi.target_mem_delayed) {
delayed = true;
@ -1867,7 +1945,7 @@ static void ath11k_qmi_free_target_mem_chunk(struct ath11k_base *ab)
int i;
for (i = 0; i < ab->qmi.mem_seg_count; i++) {
if ((ab->bus_params.fixed_mem_region ||
if ((ab->hw_params.fixed_mem_region ||
test_bit(ATH11K_FLAG_FIXED_MEM_RGN, &ab->dev_flags)) &&
ab->qmi.target_mem[i].iaddr)
iounmap(ab->qmi.target_mem[i].iaddr);
@ -2001,6 +2079,80 @@ static int ath11k_qmi_assign_target_mem_chunk(struct ath11k_base *ab)
return 0;
}
static int ath11k_qmi_request_device_info(struct ath11k_base *ab)
{
struct qmi_wlanfw_device_info_req_msg_v01 req = {};
struct qmi_wlanfw_device_info_resp_msg_v01 resp = {};
struct qmi_txn txn;
void __iomem *bar_addr_va;
int ret;
/* device info message req is only sent for hybrid bus devices */
if (!ab->hw_params.hybrid_bus_type)
return 0;
ret = qmi_txn_init(&ab->qmi.handle, &txn,
qmi_wlfw_device_info_resp_msg_v01_ei, &resp);
if (ret < 0)
goto out;
ret = qmi_send_request(&ab->qmi.handle, NULL, &txn,
QMI_WLANFW_DEVICE_INFO_REQ_V01,
QMI_WLANFW_DEVICE_INFO_REQ_MSG_V01_MAX_LEN,
qmi_wlanfw_device_info_req_msg_v01_ei, &req);
if (ret < 0) {
qmi_txn_cancel(&txn);
ath11k_warn(ab, "failed to send qmi target device info request: %d\n",
ret);
goto out;
}
ret = qmi_txn_wait(&txn, msecs_to_jiffies(ATH11K_QMI_WLANFW_TIMEOUT_MS));
if (ret < 0) {
ath11k_warn(ab, "failed to wait qmi target device info request: %d\n",
ret);
goto out;
}
if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
ath11k_warn(ab, "qmi device info request failed: %d %d\n",
resp.resp.result, resp.resp.error);
ret = -EINVAL;
goto out;
}
if (!resp.bar_addr_valid || !resp.bar_size_valid) {
ath11k_warn(ab, "qmi device info response invalid: %d %d\n",
resp.resp.result, resp.resp.error);
ret = -EINVAL;
goto out;
}
if (!resp.bar_addr ||
resp.bar_size != ATH11K_QMI_DEVICE_BAR_SIZE) {
ath11k_warn(ab, "qmi device info invalid address and size: %llu %u\n",
resp.bar_addr, resp.bar_size);
ret = -EINVAL;
goto out;
}
bar_addr_va = devm_ioremap(ab->dev, resp.bar_addr, resp.bar_size);
if (!bar_addr_va) {
ath11k_warn(ab, "qmi device info ioremap failed\n");
ab->mem_len = 0;
ret = -EIO;
goto out;
}
ab->mem = bar_addr_va;
ab->mem_len = resp.bar_size;
return 0;
out:
return ret;
}
static int ath11k_qmi_request_target_cap(struct ath11k_base *ab)
{
struct qmi_wlanfw_cap_req_msg_v01 req;
@ -2008,6 +2160,8 @@ static int ath11k_qmi_request_target_cap(struct ath11k_base *ab)
struct qmi_txn txn;
int ret = 0;
int r;
char *fw_build_id;
int fw_build_id_mask_len;
memset(&req, 0, sizeof(req));
memset(&resp, 0, sizeof(resp));
@ -2073,6 +2227,11 @@ static int ath11k_qmi_request_target_cap(struct ath11k_base *ab)
ath11k_dbg(ab, ATH11K_DBG_QMI, "qmi cal data supported from eeprom\n");
}
fw_build_id = ab->qmi.target.fw_build_id;
fw_build_id_mask_len = strlen(FW_BUILD_ID_MASK);
if (!strncmp(fw_build_id, FW_BUILD_ID_MASK, fw_build_id_mask_len))
fw_build_id = fw_build_id + fw_build_id_mask_len;
ath11k_info(ab, "chip_id 0x%x chip_family 0x%x board_id 0x%x soc_id 0x%x\n",
ab->qmi.target.chip_id, ab->qmi.target.chip_family,
ab->qmi.target.board_id, ab->qmi.target.soc_id);
@ -2080,7 +2239,11 @@ static int ath11k_qmi_request_target_cap(struct ath11k_base *ab)
ath11k_info(ab, "fw_version 0x%x fw_build_timestamp %s fw_build_id %s",
ab->qmi.target.fw_version,
ab->qmi.target.fw_build_timestamp,
ab->qmi.target.fw_build_id);
fw_build_id);
r = ath11k_core_check_smbios(ab);
if (r)
ath11k_dbg(ab, ATH11K_DBG_QMI, "SMBIOS bdf variant name not set.\n");
r = ath11k_core_check_dt(ab);
if (r)
@ -2107,7 +2270,7 @@ static int ath11k_qmi_load_file_target_mem(struct ath11k_base *ab,
memset(&resp, 0, sizeof(resp));
if (ab->bus_params.fixed_bdf_addr) {
if (ab->hw_params.fixed_bdf_addr) {
bdf_addr = ioremap(ab->hw_params.bdf_addr, ab->hw_params.fw.board_size);
if (!bdf_addr) {
ath11k_warn(ab, "qmi ioremap error for bdf_addr\n");
@ -2136,7 +2299,7 @@ static int ath11k_qmi_load_file_target_mem(struct ath11k_base *ab,
req->end = 1;
}
if (ab->bus_params.fixed_bdf_addr ||
if (ab->hw_params.fixed_bdf_addr ||
type == ATH11K_QMI_FILE_TYPE_EEPROM) {
req->data_valid = 0;
req->end = 1;
@ -2145,7 +2308,7 @@ static int ath11k_qmi_load_file_target_mem(struct ath11k_base *ab,
memcpy(req->data, temp, req->data_len);
}
if (ab->bus_params.fixed_bdf_addr) {
if (ab->hw_params.fixed_bdf_addr) {
if (type == ATH11K_QMI_FILE_TYPE_CALDATA)
bdf_addr += ab->hw_params.fw.cal_offset;
@ -2184,7 +2347,7 @@ static int ath11k_qmi_load_file_target_mem(struct ath11k_base *ab,
goto err_iounmap;
}
if (ab->bus_params.fixed_bdf_addr ||
if (ab->hw_params.fixed_bdf_addr ||
type == ATH11K_QMI_FILE_TYPE_EEPROM) {
remaining = 0;
} else {
@ -2197,7 +2360,7 @@ static int ath11k_qmi_load_file_target_mem(struct ath11k_base *ab,
}
err_iounmap:
if (ab->bus_params.fixed_bdf_addr)
if (ab->hw_params.fixed_bdf_addr)
iounmap(bdf_addr);
err_free_req:
@ -2336,7 +2499,7 @@ static void ath11k_qmi_m3_free(struct ath11k_base *ab)
{
struct m3_mem_region *m3_mem = &ab->qmi.m3_mem;
if (!ab->bus_params.m3_fw_support || !m3_mem->vaddr)
if (!ab->hw_params.m3_fw_support || !m3_mem->vaddr)
return;
dma_free_coherent(ab->dev, m3_mem->size,
@ -2356,7 +2519,7 @@ static int ath11k_qmi_wlanfw_m3_info_send(struct ath11k_base *ab)
memset(&req, 0, sizeof(req));
memset(&resp, 0, sizeof(resp));
if (ab->bus_params.m3_fw_support) {
if (ab->hw_params.m3_fw_support) {
ret = ath11k_qmi_m3_load(ab);
if (ret) {
ath11k_err(ab, "failed to load m3 firmware: %d", ret);
@ -2684,27 +2847,6 @@ ath11k_qmi_driver_event_post(struct ath11k_qmi *qmi,
return 0;
}
static int ath11k_qmi_event_server_arrive(struct ath11k_qmi *qmi)
{
struct ath11k_base *ab = qmi->ab;
int ret;
ret = ath11k_qmi_fw_ind_register_send(ab);
if (ret < 0) {
ath11k_warn(ab, "failed to send qmi firmware indication: %d\n",
ret);
return ret;
}
ret = ath11k_qmi_host_cap_send(ab);
if (ret < 0) {
ath11k_warn(ab, "failed to send qmi host cap: %d\n", ret);
return ret;
}
return ret;
}
static int ath11k_qmi_event_mem_request(struct ath11k_qmi *qmi)
{
struct ath11k_base *ab = qmi->ab;
@ -2731,6 +2873,12 @@ static int ath11k_qmi_event_load_bdf(struct ath11k_qmi *qmi)
return ret;
}
ret = ath11k_qmi_request_device_info(ab);
if (ret < 0) {
ath11k_warn(ab, "failed to request qmi device info: %d\n", ret);
return ret;
}
if (ab->hw_params.supports_regdb)
ath11k_qmi_load_bdf_qmi(ab, true);
@ -2740,9 +2888,33 @@ static int ath11k_qmi_event_load_bdf(struct ath11k_qmi *qmi)
return ret;
}
ret = ath11k_qmi_wlanfw_m3_info_send(ab);
return 0;
}
static int ath11k_qmi_event_server_arrive(struct ath11k_qmi *qmi)
{
struct ath11k_base *ab = qmi->ab;
int ret;
ret = ath11k_qmi_fw_ind_register_send(ab);
if (ret < 0) {
ath11k_warn(ab, "failed to send qmi m3 info req: %d\n", ret);
ath11k_warn(ab, "failed to send qmi firmware indication: %d\n",
ret);
return ret;
}
ret = ath11k_qmi_host_cap_send(ab);
if (ret < 0) {
ath11k_warn(ab, "failed to send qmi host cap: %d\n", ret);
return ret;
}
if (!ab->hw_params.fixed_fw_mem)
return ret;
ret = ath11k_qmi_event_load_bdf(qmi);
if (ret < 0) {
ath11k_warn(ab, "qmi failed to download BDF:%d\n", ret);
return ret;
}
@ -2775,7 +2947,7 @@ static void ath11k_qmi_msg_mem_request_cb(struct qmi_handle *qmi_hdl,
msg->mem_seg[i].type, msg->mem_seg[i].size);
}
if (ab->bus_params.fixed_mem_region ||
if (ab->hw_params.fixed_mem_region ||
test_bit(ATH11K_FLAG_FIXED_MEM_RGN, &ab->dev_flags)) {
ret = ath11k_qmi_assign_target_mem_chunk(ab);
if (ret) {
@ -2942,8 +3114,18 @@ static void ath11k_qmi_driver_event_work(struct work_struct *work)
break;
case ATH11K_QMI_EVENT_FW_MEM_READY:
ret = ath11k_qmi_event_load_bdf(qmi);
if (ret < 0)
if (ret < 0) {
set_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags);
break;
}
ret = ath11k_qmi_wlanfw_m3_info_send(ab);
if (ret < 0) {
ath11k_warn(ab,
"failed to send qmi m3 info req: %d\n", ret);
set_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags);
}
break;
case ATH11K_QMI_EVENT_FW_READY:
clear_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags);


@ -1,6 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2022, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH11K_QMI_H
@ -20,6 +21,7 @@
#define ATH11K_QMI_WLFW_SERVICE_INS_ID_V01_QCA6390 0x01
#define ATH11K_QMI_WLFW_SERVICE_INS_ID_V01_IPQ8074 0x02
#define ATH11K_QMI_WLFW_SERVICE_INS_ID_V01_QCN9074 0x07
#define ATH11K_QMI_WLFW_SERVICE_INS_ID_V01_WCN6750 0x03
#define ATH11K_QMI_WLANFW_MAX_TIMESTAMP_LEN_V01 32
#define ATH11K_QMI_RESP_LEN_MAX 8192
#define ATH11K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01 52
@ -36,6 +38,8 @@
#define ATH11K_FIRMWARE_MODE_OFF 4
#define ATH11K_COLD_BOOT_FW_RESET_DELAY (40 * HZ)
#define ATH11K_QMI_DEVICE_BAR_SIZE 0x200000
struct ath11k_base;
enum ath11k_qmi_file_type {
@ -285,10 +289,12 @@ struct qmi_wlanfw_fw_cold_cal_done_ind_msg_v01 {
char placeholder;
};
#define QMI_WLANFW_CAP_REQ_MSG_V01_MAX_LEN 0
#define QMI_WLANFW_CAP_RESP_MSG_V01_MAX_LEN 235
#define QMI_WLANFW_CAP_REQ_V01 0x0024
#define QMI_WLANFW_CAP_RESP_V01 0x0024
#define QMI_WLANFW_CAP_REQ_MSG_V01_MAX_LEN 0
#define QMI_WLANFW_CAP_RESP_MSG_V01_MAX_LEN 235
#define QMI_WLANFW_CAP_REQ_V01 0x0024
#define QMI_WLANFW_CAP_RESP_V01 0x0024
#define QMI_WLANFW_DEVICE_INFO_REQ_V01 0x004C
#define QMI_WLANFW_DEVICE_INFO_REQ_MSG_V01_MAX_LEN 0
enum qmi_wlanfw_pipedir_enum_v01 {
QMI_WLFW_PIPEDIR_NONE_V01 = 0,
@ -381,6 +387,18 @@ struct qmi_wlanfw_cap_req_msg_v01 {
char placeholder;
};
struct qmi_wlanfw_device_info_req_msg_v01 {
char placeholder;
};
struct qmi_wlanfw_device_info_resp_msg_v01 {
struct qmi_response_type_v01 resp;
u64 bar_addr;
u32 bar_size;
u8 bar_addr_valid;
u8 bar_size_valid;
};
#define QMI_WLANFW_BDF_DOWNLOAD_REQ_MSG_V01_MAX_LEN 6182
#define QMI_WLANFW_BDF_DOWNLOAD_RESP_MSG_V01_MAX_LEN 7
#define QMI_WLANFW_BDF_DOWNLOAD_RESP_V01 0x0025


@ -83,6 +83,7 @@ ath11k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
*/
if (ar->ab->hw_params.current_cc_support) {
memcpy(&set_current_param.alpha2, request->alpha2, 2);
memcpy(&ar->alpha2, &set_current_param.alpha2, 2);
ret = ath11k_wmi_send_set_current_country_cmd(ar, &set_current_param);
if (ret)
ath11k_warn(ar->ab,
@ -102,7 +103,7 @@ ath11k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
ar->regdom_set_by_user = true;
}
int ath11k_reg_update_chan_list(struct ath11k *ar)
int ath11k_reg_update_chan_list(struct ath11k *ar, bool wait)
{
struct ieee80211_supported_band **bands;
struct scan_chan_list_params *params;
@ -111,7 +112,32 @@ int ath11k_reg_update_chan_list(struct ath11k *ar)
struct channel_param *ch;
enum nl80211_band band;
int num_channels = 0;
int i, ret;
int i, ret, left;
if (wait && ar->state_11d != ATH11K_11D_IDLE) {
left = wait_for_completion_timeout(&ar->completed_11d_scan,
ATH11K_SCAN_TIMEOUT_HZ);
if (!left) {
ath11k_dbg(ar->ab, ATH11K_DBG_REG,
"failed to receive 11d scan complete: timed out\n");
ar->state_11d = ATH11K_11D_IDLE;
}
ath11k_dbg(ar->ab, ATH11K_DBG_REG,
"reg 11d scan wait left time %d\n", left);
}
if (wait &&
(ar->scan.state == ATH11K_SCAN_STARTING ||
ar->scan.state == ATH11K_SCAN_RUNNING)) {
left = wait_for_completion_timeout(&ar->scan.completed,
ATH11K_SCAN_TIMEOUT_HZ);
if (!left)
ath11k_dbg(ar->ab, ATH11K_DBG_REG,
"failed to receive hw scan complete: timed out\n");
ath11k_dbg(ar->ab, ATH11K_DBG_REG,
"reg hw scan wait left time %d\n", left);
}
bands = hw->wiphy->bands;
for (band = 0; band < NUM_NL80211_BANDS; band++) {
@ -193,11 +219,6 @@ int ath11k_reg_update_chan_list(struct ath11k *ar)
ret = ath11k_wmi_send_scan_chan_list_cmd(ar, params);
kfree(params);
if (ar->pending_11d) {
complete(&ar->finish_11d_ch_list);
ar->pending_11d = false;
}
return ret;
}
@ -263,15 +284,8 @@ int ath11k_regd_update(struct ath11k *ar)
goto err;
}
if (ar->pending_11d)
complete(&ar->finish_11d_scan);
rtnl_lock();
wiphy_lock(ar->hw->wiphy);
if (ar->pending_11d)
reinit_completion(&ar->finish_11d_ch_list);
ret = regulatory_set_wiphy_regd_sync(ar->hw->wiphy, regd_copy);
wiphy_unlock(ar->hw->wiphy);
rtnl_unlock();
@ -282,7 +296,7 @@ int ath11k_regd_update(struct ath11k *ar)
goto err;
if (ar->state == ATH11K_STATE_ON) {
ret = ath11k_reg_update_chan_list(ar);
ret = ath11k_reg_update_chan_list(ar, true);
if (ret)
goto err;
}


@ -32,5 +32,5 @@ struct ieee80211_regdomain *
ath11k_reg_build_regd(struct ath11k_base *ab,
struct cur_regulatory_info *reg_info, bool intersect);
int ath11k_regd_update(struct ath11k *ar);
int ath11k_reg_update_chan_list(struct ath11k *ar);
int ath11k_reg_update_chan_list(struct ath11k *ar, bool wait);
#endif


@ -212,7 +212,10 @@ static int ath11k_spectral_scan_config(struct ath11k *ar,
return -ENODEV;
arvif->spectral_enabled = (mode != ATH11K_SPECTRAL_DISABLED);
spin_lock_bh(&ar->spectral.lock);
ar->spectral.mode = mode;
spin_unlock_bh(&ar->spectral.lock);
ret = ath11k_wmi_vdev_spectral_enable(ar, arvif->vdev_id,
ATH11K_WMI_SPECTRAL_TRIGGER_CMD_CLEAR,
@ -843,9 +846,6 @@ static inline void ath11k_spectral_ring_free(struct ath11k *ar)
{
struct ath11k_spectral *sp = &ar->spectral;
if (!sp->enabled)
return;
ath11k_dbring_srng_cleanup(ar, &sp->rx_ring);
ath11k_dbring_buf_cleanup(ar, &sp->rx_ring);
}
@ -897,15 +897,16 @@ void ath11k_spectral_deinit(struct ath11k_base *ab)
if (!sp->enabled)
continue;
ath11k_spectral_debug_unregister(ar);
ath11k_spectral_ring_free(ar);
mutex_lock(&ar->conf_mutex);
ath11k_spectral_scan_config(ar, ATH11K_SPECTRAL_DISABLED);
mutex_unlock(&ar->conf_mutex);
spin_lock_bh(&sp->lock);
sp->mode = ATH11K_SPECTRAL_DISABLED;
sp->enabled = false;
spin_unlock_bh(&sp->lock);
ath11k_spectral_debug_unregister(ar);
ath11k_spectral_ring_free(ar);
}
}


@ -1,6 +1,7 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2021, Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/skbuff.h>
#include <linux/ctype.h>
@ -390,6 +391,10 @@ ath11k_pull_mac_phy_cap_svc_ready_ext(struct ath11k_pdev_wmi *wmi_handle,
ab->target_pdev_ids[ab->target_pdev_count].pdev_id = mac_phy_caps->pdev_id;
ab->target_pdev_count++;
if (!(mac_phy_caps->supported_bands & WMI_HOST_WLAN_2G_CAP) &&
!(mac_phy_caps->supported_bands & WMI_HOST_WLAN_5G_CAP))
return -EINVAL;
/* Take non-zero tx/rx chainmask. If tx/rx chainmask differs from
* band to band for a single radio, need to see how this should be
* handled.
@ -397,7 +402,9 @@ ath11k_pull_mac_phy_cap_svc_ready_ext(struct ath11k_pdev_wmi *wmi_handle,
if (mac_phy_caps->supported_bands & WMI_HOST_WLAN_2G_CAP) {
pdev_cap->tx_chain_mask = mac_phy_caps->tx_chain_mask_2g;
pdev_cap->rx_chain_mask = mac_phy_caps->rx_chain_mask_2g;
} else if (mac_phy_caps->supported_bands & WMI_HOST_WLAN_5G_CAP) {
}
if (mac_phy_caps->supported_bands & WMI_HOST_WLAN_5G_CAP) {
pdev_cap->vht_cap = mac_phy_caps->vht_cap_info_5g;
pdev_cap->vht_mcs = mac_phy_caps->vht_supp_mcs_5g;
pdev_cap->he_mcs = mac_phy_caps->he_supp_mcs_5g;
@ -407,8 +414,6 @@ ath11k_pull_mac_phy_cap_svc_ready_ext(struct ath11k_pdev_wmi *wmi_handle,
WMI_NSS_RATIO_ENABLE_DISABLE_GET(mac_phy_caps->nss_ratio);
pdev_cap->nss_ratio_info =
WMI_NSS_RATIO_INFO_GET(mac_phy_caps->nss_ratio);
} else {
return -EINVAL;
}
/* tx/rx chainmask reported from fw depends on the actual hw chains used,
@ -2015,7 +2020,10 @@ void ath11k_wmi_start_scan_init(struct ath11k *ar,
{
/* setup commonly used values */
arg->scan_req_id = 1;
arg->scan_priority = WMI_SCAN_PRIORITY_LOW;
if (ar->state_11d == ATH11K_11D_PREPARING)
arg->scan_priority = WMI_SCAN_PRIORITY_MEDIUM;
else
arg->scan_priority = WMI_SCAN_PRIORITY_LOW;
arg->dwell_time_active = 50;
arg->dwell_time_active_2g = 0;
arg->dwell_time_passive = 150;
@ -5786,9 +5794,9 @@ static int ath11k_wmi_tlv_rssi_chain_parse(struct ath11k_base *ab,
arvif->bssid,
NULL);
if (!sta) {
ath11k_warn(ab, "not found station for bssid %pM\n",
arvif->bssid);
ret = -EPROTO;
ath11k_dbg(ab, ATH11K_DBG_WMI,
"not found station of bssid %pM for rssi chain\n",
arvif->bssid);
goto exit;
}
@ -5886,8 +5894,9 @@ static int ath11k_wmi_tlv_fw_stats_data_parse(struct ath11k_base *ab,
"wmi stats vdev id %d snr %d\n",
src->vdev_id, src->beacon_snr);
} else {
ath11k_warn(ab, "not found station for bssid %pM\n",
arvif->bssid);
ath11k_dbg(ab, ATH11K_DBG_WMI,
"not found station of bssid %pM for vdev stat\n",
arvif->bssid);
}
}
@ -6350,8 +6359,10 @@ static void ath11k_wmi_op_ep_tx_credits(struct ath11k_base *ab)
static int ath11k_reg_11d_new_cc_event(struct ath11k_base *ab, struct sk_buff *skb)
{
const struct wmi_11d_new_cc_ev *ev;
struct ath11k *ar;
struct ath11k_pdev *pdev;
const void **tb;
int ret;
int ret, i;
tb = ath11k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
if (IS_ERR(tb)) {
@ -6377,6 +6388,13 @@ static int ath11k_reg_11d_new_cc_event(struct ath11k_base *ab, struct sk_buff *s
kfree(tb);
for (i = 0; i < ab->num_radios; i++) {
pdev = &ab->pdevs[i];
ar = pdev->ar;
ar->state_11d = ATH11K_11D_IDLE;
complete(&ar->completed_11d_scan);
}
queue_work(ab->workqueue, &ab->update_11d_work);
return 0;
@ -7285,47 +7303,64 @@ static void ath11k_vdev_install_key_compl_event(struct ath11k_base *ab,
rcu_read_unlock();
}
static void ath11k_service_available_event(struct ath11k_base *ab, struct sk_buff *skb)
static int ath11k_wmi_tlv_services_parser(struct ath11k_base *ab,
u16 tag, u16 len,
const void *ptr, void *data)
{
const void **tb;
const struct wmi_service_available_event *ev;
int ret;
u32 *wmi_ext2_service_bitmap;
int i, j;
tb = ath11k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
if (IS_ERR(tb)) {
ret = PTR_ERR(tb);
ath11k_warn(ab, "failed to parse tlv: %d\n", ret);
return;
switch (tag) {
case WMI_TAG_SERVICE_AVAILABLE_EVENT:
ev = (struct wmi_service_available_event *)ptr;
for (i = 0, j = WMI_MAX_SERVICE;
i < WMI_SERVICE_SEGMENT_BM_SIZE32 && j < WMI_MAX_EXT_SERVICE;
i++) {
do {
if (ev->wmi_service_segment_bitmap[i] &
BIT(j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32))
set_bit(j, ab->wmi_ab.svc_map);
} while (++j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32);
}
ath11k_dbg(ab, ATH11K_DBG_WMI,
"wmi_ext_service_bitmap 0:0x%04x, 1:0x%04x, 2:0x%04x, 3:0x%04x",
ev->wmi_service_segment_bitmap[0],
ev->wmi_service_segment_bitmap[1],
ev->wmi_service_segment_bitmap[2],
ev->wmi_service_segment_bitmap[3]);
break;
case WMI_TAG_ARRAY_UINT32:
wmi_ext2_service_bitmap = (u32 *)ptr;
for (i = 0, j = WMI_MAX_EXT_SERVICE;
i < WMI_SERVICE_SEGMENT_BM_SIZE32 && j < WMI_MAX_EXT2_SERVICE;
i++) {
do {
if (wmi_ext2_service_bitmap[i] &
BIT(j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32))
set_bit(j, ab->wmi_ab.svc_map);
} while (++j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32);
}
ath11k_dbg(ab, ATH11K_DBG_WMI,
"wmi_ext2_service__bitmap 0:0x%04x, 1:0x%04x, 2:0x%04x, 3:0x%04x",
wmi_ext2_service_bitmap[0], wmi_ext2_service_bitmap[1],
wmi_ext2_service_bitmap[2], wmi_ext2_service_bitmap[3]);
break;
}
return 0;
}
ev = tb[WMI_TAG_SERVICE_AVAILABLE_EVENT];
if (!ev) {
ath11k_warn(ab, "failed to fetch svc available ev");
kfree(tb);
return;
}
static void ath11k_service_available_event(struct ath11k_base *ab, struct sk_buff *skb)
{
int ret;
/* TODO: Use wmi_service_segment_offset information to get the service
* especially when more services are advertised in multiple service
* available events.
*/
for (i = 0, j = WMI_MAX_SERVICE;
i < WMI_SERVICE_SEGMENT_BM_SIZE32 && j < WMI_MAX_EXT_SERVICE;
i++) {
do {
if (ev->wmi_service_segment_bitmap[i] &
BIT(j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32))
set_bit(j, ab->wmi_ab.svc_map);
} while (++j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32);
}
ath11k_dbg(ab, ATH11K_DBG_WMI,
"wmi_ext_service_bitmap 0:0x%x, 1:0x%x, 2:0x%x, 3:0x%x",
ev->wmi_service_segment_bitmap[0], ev->wmi_service_segment_bitmap[1],
ev->wmi_service_segment_bitmap[2], ev->wmi_service_segment_bitmap[3]);
kfree(tb);
ret = ath11k_wmi_tlv_iter(ab, skb->data, skb->len,
ath11k_wmi_tlv_services_parser,
NULL);
if (ret)
ath11k_warn(ab, "failed to parse services available tlv %d\n", ret);
}
static void ath11k_peer_assoc_conf_event(struct ath11k_base *ab, struct sk_buff *skb)
@ -7765,6 +7800,56 @@ exit:
kfree(tb);
}
static void ath11k_wmi_gtk_offload_status_event(struct ath11k_base *ab,
struct sk_buff *skb)
{
const void **tb;
const struct wmi_gtk_offload_status_event *ev;
struct ath11k_vif *arvif;
__be64 replay_ctr_be;
u64 replay_ctr;
int ret;
tb = ath11k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
if (IS_ERR(tb)) {
ret = PTR_ERR(tb);
ath11k_warn(ab, "failed to parse tlv: %d\n", ret);
return;
}
ev = tb[WMI_TAG_GTK_OFFLOAD_STATUS_EVENT];
if (!ev) {
ath11k_warn(ab, "failed to fetch gtk offload status ev");
kfree(tb);
return;
}
arvif = ath11k_mac_get_arvif_by_vdev_id(ab, ev->vdev_id);
if (!arvif) {
ath11k_warn(ab, "failed to get arvif for vdev_id:%d\n",
ev->vdev_id);
kfree(tb);
return;
}
ath11k_dbg(ab, ATH11K_DBG_WMI, "wmi gtk offload event refresh_cnt %d\n",
ev->refresh_cnt);
ath11k_dbg_dump(ab, ATH11K_DBG_WMI, "replay_cnt",
NULL, ev->replay_ctr.counter, GTK_REPLAY_COUNTER_BYTES);
replay_ctr = ev->replay_ctr.word1;
replay_ctr = (replay_ctr << 32) | ev->replay_ctr.word0;
arvif->rekey_data.replay_ctr = replay_ctr;
/* supplicant expects big-endian replay counter */
replay_ctr_be = cpu_to_be64(replay_ctr);
ieee80211_gtk_rekey_notify(arvif->vif, arvif->bssid,
(void *)&replay_ctr_be, GFP_ATOMIC);
kfree(tb);
}
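The replay counter handling above is just a word-combine plus an endianness conversion; a tiny sketch of that step in isolation (the helper name is hypothetical):

/* illustrative only: fold the two 32-bit words reported by firmware
 * into the big-endian 64-bit counter that the supplicant expects
 */
static __be64 example_replay_ctr_be(u32 word0, u32 word1)
{
	u64 ctr = ((u64)word1 << 32) | word0;

	return cpu_to_be64(ctr);
}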
static void ath11k_wmi_tlv_op_rx(struct ath11k_base *ab, struct sk_buff *skb)
{
struct wmi_cmd_hdr *cmd_hdr;
@ -7896,6 +7981,9 @@ static void ath11k_wmi_tlv_op_rx(struct ath11k_base *ab, struct sk_buff *skb)
case WMI_DIAG_EVENTID:
ath11k_wmi_diag_event(ab, skb);
break;
case WMI_GTK_OFFLOAD_STATUS_EVENTID:
ath11k_wmi_gtk_offload_status_event(ab, skb);
break;
/* TODO: Add remaining events */
default:
ath11k_dbg(ab, ATH11K_DBG_WMI, "Unknown eventid: 0x%x\n", id);
@ -8143,7 +8231,7 @@ int ath11k_wmi_attach(struct ath11k_base *ab)
ab->wmi_ab.preferred_hw_mode = WMI_HOST_HW_MODE_MAX;
/* It's overwritten when service_ext_ready is handled */
if (ab->hw_params.single_pdev_only)
if (ab->hw_params.single_pdev_only && ab->hw_params.num_rxmda_per_pdev > 1)
ab->wmi_ab.preferred_hw_mode = WMI_HOST_HW_MODE_SINGLE;
/* TODO: Init remaining wmi soc resources required */
@ -8165,6 +8253,39 @@ void ath11k_wmi_detach(struct ath11k_base *ab)
ath11k_wmi_free_dbring_caps(ab);
}
int ath11k_wmi_hw_data_filter_cmd(struct ath11k *ar, u32 vdev_id,
u32 filter_bitmap, bool enable)
{
struct wmi_hw_data_filter_cmd *cmd;
struct sk_buff *skb;
int len;
len = sizeof(*cmd);
skb = ath11k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_hw_data_filter_cmd *)skb->data;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_HW_DATA_FILTER_CMD) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
cmd->vdev_id = vdev_id;
cmd->enable = enable;
/* Set all modes in case of disable */
if (cmd->enable)
cmd->hw_filter_bitmap = filter_bitmap;
else
cmd->hw_filter_bitmap = ((u32)~0U);
ath11k_dbg(ar->ab, ATH11K_DBG_WMI,
"wmi hw data filter enable %d filter_bitmap 0x%x\n",
enable, filter_bitmap);
return ath11k_wmi_cmd_send(ar->wmi, skb, WMI_HW_DATA_FILTER_CMDID);
}
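/* Illustrative sketch, not part of the driver: the tlv_header packing used by
 * the command builders in this file. WMI_TLV_TAG and WMI_TLV_LEN are the
 * existing bit-field masks of the 32-bit TLV header word; the helper name is
 * hypothetical.
 */
static u32 example_wmi_tlv_hdr(u32 tag, u32 payload_len)
{
	return FIELD_PREP(WMI_TLV_TAG, tag) |
	       FIELD_PREP(WMI_TLV_LEN, payload_len);
}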
int ath11k_wmi_wow_host_wakeup_ind(struct ath11k *ar)
{
struct wmi_wow_host_wakeup_ind *cmd;
@@ -8235,3 +8356,606 @@ int ath11k_wmi_scan_prob_req_oui(struct ath11k *ar,
return ath11k_wmi_cmd_send(ar->wmi, skb, WMI_SCAN_PROB_REQ_OUI_CMDID);
}
int ath11k_wmi_wow_add_wakeup_event(struct ath11k *ar, u32 vdev_id,
enum wmi_wow_wakeup_event event,
u32 enable)
{
struct wmi_wow_add_del_event_cmd *cmd;
struct sk_buff *skb;
size_t len;
len = sizeof(*cmd);
skb = ath11k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_wow_add_del_event_cmd *)skb->data;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_WOW_ADD_DEL_EVT_CMD) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
cmd->vdev_id = vdev_id;
cmd->is_add = enable;
cmd->event_bitmap = (1 << event);
ath11k_dbg(ar->ab, ATH11K_DBG_WMI, "wmi tlv wow add wakeup event %s enable %d vdev_id %d\n",
wow_wakeup_event(event), enable, vdev_id);
return ath11k_wmi_cmd_send(ar->wmi, skb, WMI_WOW_ENABLE_DISABLE_WAKE_EVENT_CMDID);
}
int ath11k_wmi_wow_add_pattern(struct ath11k *ar, u32 vdev_id, u32 pattern_id,
const u8 *pattern, const u8 *mask,
int pattern_len, int pattern_offset)
{
struct wmi_wow_add_pattern_cmd *cmd;
struct wmi_wow_bitmap_pattern *bitmap;
struct wmi_tlv *tlv;
struct sk_buff *skb;
u8 *ptr;
size_t len;
len = sizeof(*cmd) +
sizeof(*tlv) + /* array struct */
sizeof(*bitmap) + /* bitmap */
sizeof(*tlv) + /* empty ipv4 sync */
sizeof(*tlv) + /* empty ipv6 sync */
sizeof(*tlv) + /* empty magic */
sizeof(*tlv) + /* empty info timeout */
sizeof(*tlv) + sizeof(u32); /* ratelimit interval */
skb = ath11k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
/* cmd */
ptr = (u8 *)skb->data;
cmd = (struct wmi_wow_add_pattern_cmd *)ptr;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_WOW_ADD_PATTERN_CMD) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
cmd->vdev_id = vdev_id;
cmd->pattern_id = pattern_id;
cmd->pattern_type = WOW_BITMAP_PATTERN;
ptr += sizeof(*cmd);
/* bitmap */
tlv = (struct wmi_tlv *)ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_ARRAY_STRUCT) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*bitmap));
ptr += sizeof(*tlv);
bitmap = (struct wmi_wow_bitmap_pattern *)ptr;
bitmap->tlv_header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_WOW_BITMAP_PATTERN_T) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*bitmap) - TLV_HDR_SIZE);
memcpy(bitmap->patternbuf, pattern, pattern_len);
ath11k_ce_byte_swap(bitmap->patternbuf, roundup(pattern_len, 4));
memcpy(bitmap->bitmaskbuf, mask, pattern_len);
ath11k_ce_byte_swap(bitmap->bitmaskbuf, roundup(pattern_len, 4));
bitmap->pattern_offset = pattern_offset;
bitmap->pattern_len = pattern_len;
bitmap->bitmask_len = pattern_len;
bitmap->pattern_id = pattern_id;
ptr += sizeof(*bitmap);
/* ipv4 sync */
tlv = (struct wmi_tlv *)ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_ARRAY_STRUCT) |
FIELD_PREP(WMI_TLV_LEN, 0);
ptr += sizeof(*tlv);
/* ipv6 sync */
tlv = (struct wmi_tlv *)ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_ARRAY_STRUCT) |
FIELD_PREP(WMI_TLV_LEN, 0);
ptr += sizeof(*tlv);
/* magic */
tlv = (struct wmi_tlv *)ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_ARRAY_STRUCT) |
FIELD_PREP(WMI_TLV_LEN, 0);
ptr += sizeof(*tlv);
/* pattern info timeout */
tlv = (struct wmi_tlv *)ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_ARRAY_UINT32) |
FIELD_PREP(WMI_TLV_LEN, 0);
ptr += sizeof(*tlv);
/* ratelimit interval */
tlv = (struct wmi_tlv *)ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_ARRAY_UINT32) |
FIELD_PREP(WMI_TLV_LEN, sizeof(u32));
ath11k_dbg(ar->ab, ATH11K_DBG_WMI, "wmi tlv wow add pattern vdev_id %d pattern_id %d pattern_offset %d\n",
vdev_id, pattern_id, pattern_offset);
return ath11k_wmi_cmd_send(ar->wmi, skb, WMI_WOW_ADD_WAKE_PATTERN_CMDID);
}
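/* Illustrative note, not part of the driver: the byte swaps above work on
 * 4-byte units, so a 6-byte pattern is swapped across roundup(6, 4) = 8
 * bytes, i.e. the copied pattern plus its padding inside patternbuf, while
 * pattern_len itself stays 6.
 */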
int ath11k_wmi_wow_del_pattern(struct ath11k *ar, u32 vdev_id, u32 pattern_id)
{
struct wmi_wow_del_pattern_cmd *cmd;
struct sk_buff *skb;
size_t len;
len = sizeof(*cmd);
skb = ath11k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_wow_del_pattern_cmd *)skb->data;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_WOW_DEL_PATTERN_CMD) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
cmd->vdev_id = vdev_id;
cmd->pattern_id = pattern_id;
cmd->pattern_type = WOW_BITMAP_PATTERN;
ath11k_dbg(ar->ab, ATH11K_DBG_WMI, "wmi tlv wow del pattern vdev_id %d pattern_id %d\n",
vdev_id, pattern_id);
return ath11k_wmi_cmd_send(ar->wmi, skb, WMI_WOW_DEL_WAKE_PATTERN_CMDID);
}
static struct sk_buff *
ath11k_wmi_op_gen_config_pno_start(struct ath11k *ar,
u32 vdev_id,
struct wmi_pno_scan_req *pno)
{
struct nlo_configured_parameters *nlo_list;
struct wmi_wow_nlo_config_cmd *cmd;
struct wmi_tlv *tlv;
struct sk_buff *skb;
u32 *channel_list;
size_t len, nlo_list_len, channel_list_len;
u8 *ptr;
u32 i;
len = sizeof(*cmd) +
sizeof(*tlv) +
/* TLV place holder for array of structures
* nlo_configured_parameters(nlo_list)
*/
sizeof(*tlv);
/* TLV place holder for array of uint32 channel_list */
channel_list_len = sizeof(u32) * pno->a_networks[0].channel_count;
len += channel_list_len;
nlo_list_len = sizeof(*nlo_list) * pno->uc_networks_count;
len += nlo_list_len;
skb = ath11k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return ERR_PTR(-ENOMEM);
ptr = (u8 *)skb->data;
cmd = (struct wmi_wow_nlo_config_cmd *)ptr;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_NLO_CONFIG_CMD) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
cmd->vdev_id = pno->vdev_id;
cmd->flags = WMI_NLO_CONFIG_START | WMI_NLO_CONFIG_SSID_HIDE_EN;
/* current FW does not support min-max range for dwell time */
cmd->active_dwell_time = pno->active_max_time;
cmd->passive_dwell_time = pno->passive_max_time;
if (pno->do_passive_scan)
cmd->flags |= WMI_NLO_CONFIG_SCAN_PASSIVE;
cmd->fast_scan_period = pno->fast_scan_period;
cmd->slow_scan_period = pno->slow_scan_period;
cmd->fast_scan_max_cycles = pno->fast_scan_max_cycles;
cmd->delay_start_time = pno->delay_start_time;
if (pno->enable_pno_scan_randomization) {
cmd->flags |= WMI_NLO_CONFIG_SPOOFED_MAC_IN_PROBE_REQ |
WMI_NLO_CONFIG_RANDOM_SEQ_NO_IN_PROBE_REQ;
ether_addr_copy(cmd->mac_addr.addr, pno->mac_addr);
ether_addr_copy(cmd->mac_mask.addr, pno->mac_addr_mask);
ath11k_ce_byte_swap(cmd->mac_addr.addr, 8);
ath11k_ce_byte_swap(cmd->mac_mask.addr, 8);
}
ptr += sizeof(*cmd);
/* nlo_configured_parameters(nlo_list) */
cmd->no_of_ssids = pno->uc_networks_count;
tlv = (struct wmi_tlv *)ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_ARRAY_STRUCT) |
FIELD_PREP(WMI_TLV_LEN, nlo_list_len);
ptr += sizeof(*tlv);
nlo_list = (struct nlo_configured_parameters *)ptr;
for (i = 0; i < cmd->no_of_ssids; i++) {
tlv = (struct wmi_tlv *)(&nlo_list[i].tlv_header);
tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_BYTE) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*nlo_list) - sizeof(*tlv));
nlo_list[i].ssid.valid = true;
nlo_list[i].ssid.ssid.ssid_len = pno->a_networks[i].ssid.ssid_len;
memcpy(nlo_list[i].ssid.ssid.ssid,
pno->a_networks[i].ssid.ssid,
nlo_list[i].ssid.ssid.ssid_len);
ath11k_ce_byte_swap(nlo_list[i].ssid.ssid.ssid,
roundup(nlo_list[i].ssid.ssid.ssid_len, 4));
if (pno->a_networks[i].rssi_threshold &&
pno->a_networks[i].rssi_threshold > -300) {
nlo_list[i].rssi_cond.valid = true;
nlo_list[i].rssi_cond.rssi =
pno->a_networks[i].rssi_threshold;
}
nlo_list[i].bcast_nw_type.valid = true;
nlo_list[i].bcast_nw_type.bcast_nw_type =
pno->a_networks[i].bcast_nw_type;
}
ptr += nlo_list_len;
cmd->num_of_channels = pno->a_networks[0].channel_count;
tlv = (struct wmi_tlv *)ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_UINT32) |
FIELD_PREP(WMI_TLV_LEN, channel_list_len);
ptr += sizeof(*tlv);
channel_list = (u32 *)ptr;
for (i = 0; i < cmd->num_of_channels; i++)
channel_list[i] = pno->a_networks[0].channels[i];
ath11k_dbg(ar->ab, ATH11K_DBG_WMI, "wmi tlv start pno config vdev_id %d\n",
vdev_id);
return skb;
}
static struct sk_buff *ath11k_wmi_op_gen_config_pno_stop(struct ath11k *ar,
u32 vdev_id)
{
struct wmi_wow_nlo_config_cmd *cmd;
struct sk_buff *skb;
size_t len;
len = sizeof(*cmd);
skb = ath11k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return ERR_PTR(-ENOMEM);
cmd = (struct wmi_wow_nlo_config_cmd *)skb->data;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_NLO_CONFIG_CMD) |
FIELD_PREP(WMI_TLV_LEN, len - TLV_HDR_SIZE);
cmd->vdev_id = vdev_id;
cmd->flags = WMI_NLO_CONFIG_STOP;
ath11k_dbg(ar->ab, ATH11K_DBG_WMI,
"wmi tlv stop pno config vdev_id %d\n", vdev_id);
return skb;
}
int ath11k_wmi_wow_config_pno(struct ath11k *ar, u32 vdev_id,
struct wmi_pno_scan_req *pno_scan)
{
struct sk_buff *skb;
if (pno_scan->enable)
skb = ath11k_wmi_op_gen_config_pno_start(ar, vdev_id, pno_scan);
else
skb = ath11k_wmi_op_gen_config_pno_stop(ar, vdev_id);
if (IS_ERR_OR_NULL(skb))
return -ENOMEM;
return ath11k_wmi_cmd_send(ar->wmi, skb, WMI_NETWORK_LIST_OFFLOAD_CONFIG_CMDID);
}
static void ath11k_wmi_fill_ns_offload(struct ath11k *ar,
struct ath11k_arp_ns_offload *offload,
u8 **ptr,
bool enable,
bool ext)
{
struct wmi_ns_offload_tuple *ns;
struct wmi_tlv *tlv;
u8 *buf_ptr = *ptr;
u32 ns_cnt, ns_ext_tuples;
int i, max_offloads;
ns_cnt = offload->ipv6_count;
tlv = (struct wmi_tlv *)buf_ptr;
if (ext) {
ns_ext_tuples = offload->ipv6_count - WMI_MAX_NS_OFFLOADS;
tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_STRUCT) |
FIELD_PREP(WMI_TLV_LEN, ns_ext_tuples * sizeof(*ns));
i = WMI_MAX_NS_OFFLOADS;
max_offloads = offload->ipv6_count;
} else {
tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_STRUCT) |
FIELD_PREP(WMI_TLV_LEN, WMI_MAX_NS_OFFLOADS * sizeof(*ns));
i = 0;
max_offloads = WMI_MAX_NS_OFFLOADS;
}
buf_ptr += sizeof(*tlv);
for (; i < max_offloads; i++) {
ns = (struct wmi_ns_offload_tuple *)buf_ptr;
ns->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_NS_OFFLOAD_TUPLE) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*ns) - TLV_HDR_SIZE);
if (enable) {
if (i < ns_cnt)
ns->flags |= WMI_NSOL_FLAGS_VALID;
memcpy(ns->target_ipaddr[0], offload->ipv6_addr[i], 16);
memcpy(ns->solicitation_ipaddr, offload->self_ipv6_addr[i], 16);
ath11k_ce_byte_swap(ns->target_ipaddr[0], 16);
ath11k_ce_byte_swap(ns->solicitation_ipaddr, 16);
if (offload->ipv6_type[i])
ns->flags |= WMI_NSOL_FLAGS_IS_IPV6_ANYCAST;
memcpy(ns->target_mac.addr, offload->mac_addr, ETH_ALEN);
ath11k_ce_byte_swap(ns->target_mac.addr, 8);
if (ns->target_mac.word0 != 0 ||
ns->target_mac.word1 != 0) {
ns->flags |= WMI_NSOL_FLAGS_MAC_VALID;
}
ath11k_dbg(ar->ab, ATH11K_DBG_WMI,
"wmi index %d ns_solicited %pI6 target %pI6",
i, ns->solicitation_ipaddr,
ns->target_ipaddr[0]);
}
buf_ptr += sizeof(*ns);
}
*ptr = buf_ptr;
}
static void ath11k_wmi_fill_arp_offload(struct ath11k *ar,
struct ath11k_arp_ns_offload *offload,
u8 **ptr,
bool enable)
{
struct wmi_arp_offload_tuple *arp;
struct wmi_tlv *tlv;
u8 *buf_ptr = *ptr;
int i;
/* fill arp tuple */
tlv = (struct wmi_tlv *)buf_ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_STRUCT) |
FIELD_PREP(WMI_TLV_LEN, WMI_MAX_ARP_OFFLOADS * sizeof(*arp));
buf_ptr += sizeof(*tlv);
for (i = 0; i < WMI_MAX_ARP_OFFLOADS; i++) {
arp = (struct wmi_arp_offload_tuple *)buf_ptr;
arp->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARP_OFFLOAD_TUPLE) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*arp) - TLV_HDR_SIZE);
if (enable && i < offload->ipv4_count) {
/* Copy the target ip addr and flags */
arp->flags = WMI_ARPOL_FLAGS_VALID;
memcpy(arp->target_ipaddr, offload->ipv4_addr[i], 4);
ath11k_ce_byte_swap(arp->target_ipaddr, 4);
ath11k_dbg(ar->ab, ATH11K_DBG_WMI, "wmi arp offload address %pI4",
arp->target_ipaddr);
}
buf_ptr += sizeof(*arp);
}
*ptr = buf_ptr;
}
int ath11k_wmi_arp_ns_offload(struct ath11k *ar,
struct ath11k_vif *arvif, bool enable)
{
struct ath11k_arp_ns_offload *offload;
struct wmi_set_arp_ns_offload_cmd *cmd;
struct wmi_tlv *tlv;
struct sk_buff *skb;
u8 *buf_ptr;
size_t len;
u8 ns_cnt, ns_ext_tuples = 0;
offload = &arvif->arp_ns_offload;
ns_cnt = offload->ipv6_count;
len = sizeof(*cmd) +
sizeof(*tlv) +
WMI_MAX_NS_OFFLOADS * sizeof(struct wmi_ns_offload_tuple) +
sizeof(*tlv) +
WMI_MAX_ARP_OFFLOADS * sizeof(struct wmi_arp_offload_tuple);
if (ns_cnt > WMI_MAX_NS_OFFLOADS) {
ns_ext_tuples = ns_cnt - WMI_MAX_NS_OFFLOADS;
len += sizeof(*tlv) +
ns_ext_tuples * sizeof(struct wmi_ns_offload_tuple);
}
skb = ath11k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
buf_ptr = skb->data;
cmd = (struct wmi_set_arp_ns_offload_cmd *)buf_ptr;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_SET_ARP_NS_OFFLOAD_CMD) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
cmd->flags = 0;
cmd->vdev_id = arvif->vdev_id;
cmd->num_ns_ext_tuples = ns_ext_tuples;
buf_ptr += sizeof(*cmd);
ath11k_wmi_fill_ns_offload(ar, offload, &buf_ptr, enable, 0);
ath11k_wmi_fill_arp_offload(ar, offload, &buf_ptr, enable);
if (ns_ext_tuples)
ath11k_wmi_fill_ns_offload(ar, offload, &buf_ptr, enable, 1);
return ath11k_wmi_cmd_send(ar->wmi, skb, WMI_SET_ARP_NS_OFFLOAD_CMDID);
}
int ath11k_wmi_gtk_rekey_offload(struct ath11k *ar,
struct ath11k_vif *arvif, bool enable)
{
struct wmi_gtk_rekey_offload_cmd *cmd;
struct ath11k_rekey_data *rekey_data = &arvif->rekey_data;
int len;
struct sk_buff *skb;
__le64 replay_ctr;
len = sizeof(*cmd);
skb = ath11k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_gtk_rekey_offload_cmd *)skb->data;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_GTK_OFFLOAD_CMD) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
cmd->vdev_id = arvif->vdev_id;
if (enable) {
cmd->flags = GTK_OFFLOAD_ENABLE_OPCODE;
/* the length in rekey_data and cmd is equal */
memcpy(cmd->kck, rekey_data->kck, sizeof(cmd->kck));
ath11k_ce_byte_swap(cmd->kck, GTK_OFFLOAD_KEK_BYTES);
memcpy(cmd->kek, rekey_data->kek, sizeof(cmd->kek));
ath11k_ce_byte_swap(cmd->kek, GTK_OFFLOAD_KEK_BYTES);
replay_ctr = cpu_to_le64(rekey_data->replay_ctr);
memcpy(cmd->replay_ctr, &replay_ctr,
sizeof(replay_ctr));
ath11k_ce_byte_swap(cmd->replay_ctr, GTK_REPLAY_COUNTER_BYTES);
} else {
cmd->flags = GTK_OFFLOAD_DISABLE_OPCODE;
}
ath11k_dbg(ar->ab, ATH11K_DBG_WMI, "offload gtk rekey vdev: %d %d\n",
arvif->vdev_id, enable);
return ath11k_wmi_cmd_send(ar->wmi, skb, WMI_GTK_OFFLOAD_CMDID);
}
int ath11k_wmi_gtk_rekey_getinfo(struct ath11k *ar,
struct ath11k_vif *arvif)
{
struct wmi_gtk_rekey_offload_cmd *cmd;
int len;
struct sk_buff *skb;
len = sizeof(*cmd);
skb = ath11k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_gtk_rekey_offload_cmd *)skb->data;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_GTK_OFFLOAD_CMD) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
cmd->vdev_id = arvif->vdev_id;
cmd->flags = GTK_OFFLOAD_REQUEST_STATUS_OPCODE;
ath11k_dbg(ar->ab, ATH11K_DBG_WMI, "get gtk rekey vdev_id: %d\n",
arvif->vdev_id);
return ath11k_wmi_cmd_send(ar->wmi, skb, WMI_GTK_OFFLOAD_CMDID);
}
int ath11k_wmi_pdev_set_bios_sar_table_param(struct ath11k *ar, const u8 *sar_val)
{
struct ath11k_pdev_wmi *wmi = ar->wmi;
struct wmi_pdev_set_sar_table_cmd *cmd;
struct wmi_tlv *tlv;
struct sk_buff *skb;
u8 *buf_ptr;
u32 len, sar_len_aligned, rsvd_len_aligned;
sar_len_aligned = roundup(BIOS_SAR_TABLE_LEN, sizeof(u32));
rsvd_len_aligned = roundup(BIOS_SAR_RSVD1_LEN, sizeof(u32));
len = sizeof(*cmd) +
TLV_HDR_SIZE + sar_len_aligned +
TLV_HDR_SIZE + rsvd_len_aligned;
skb = ath11k_wmi_alloc_skb(wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_pdev_set_sar_table_cmd *)skb->data;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_PDEV_SET_BIOS_SAR_TABLE_CMD) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
cmd->pdev_id = ar->pdev->pdev_id;
cmd->sar_len = BIOS_SAR_TABLE_LEN;
cmd->rsvd_len = BIOS_SAR_RSVD1_LEN;
buf_ptr = skb->data + sizeof(*cmd);
tlv = (struct wmi_tlv *)buf_ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_BYTE) |
FIELD_PREP(WMI_TLV_LEN, sar_len_aligned);
buf_ptr += TLV_HDR_SIZE;
memcpy(buf_ptr, sar_val, BIOS_SAR_TABLE_LEN);
buf_ptr += sar_len_aligned;
tlv = (struct wmi_tlv *)buf_ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_BYTE) |
FIELD_PREP(WMI_TLV_LEN, rsvd_len_aligned);
return ath11k_wmi_cmd_send(wmi, skb, WMI_PDEV_SET_BIOS_SAR_TABLE_CMDID);
}
int ath11k_wmi_pdev_set_bios_geo_table_param(struct ath11k *ar)
{
struct ath11k_pdev_wmi *wmi = ar->wmi;
struct wmi_pdev_set_geo_table_cmd *cmd;
struct wmi_tlv *tlv;
struct sk_buff *skb;
u8 *buf_ptr;
u32 len, rsvd_len_aligned;
rsvd_len_aligned = roundup(BIOS_SAR_RSVD2_LEN, sizeof(u32));
len = sizeof(*cmd) + TLV_HDR_SIZE + rsvd_len_aligned;
skb = ath11k_wmi_alloc_skb(wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_pdev_set_geo_table_cmd *)skb->data;
cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_PDEV_SET_BIOS_GEO_TABLE_CMD) |
FIELD_PREP(WMI_TLV_LEN, sizeof(*cmd) - TLV_HDR_SIZE);
cmd->pdev_id = ar->pdev->pdev_id;
cmd->rsvd_len = BIOS_SAR_RSVD2_LEN;
buf_ptr = skb->data + sizeof(*cmd);
tlv = (struct wmi_tlv *)buf_ptr;
tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_BYTE) |
FIELD_PREP(WMI_TLV_LEN, rsvd_len_aligned);
return ath11k_wmi_cmd_send(wmi, skb, WMI_PDEV_SET_BIOS_GEO_TABLE_CMDID);
}
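/* Illustrative arithmetic, not part of the driver: the byte-array TLVs built
 * above are padded to 4-byte units, so BIOS_SAR_TABLE_LEN (22) travels as a
 * 24-byte TLV, BIOS_SAR_RSVD1_LEN (6) as 8 bytes and BIOS_SAR_RSVD2_LEN (18)
 * as 20 bytes; that is what the roundup(..., sizeof(u32)) calls compute.
 */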


@@ -13,6 +13,7 @@ struct ath11k_base;
struct ath11k;
struct ath11k_fw_stats;
struct ath11k_fw_dbglog;
struct ath11k_vif;
#define PSOC_HOST_MAX_NUM_SS (8)
@@ -284,6 +285,11 @@ enum wmi_tlv_cmd_id {
WMI_PDEV_SET_SRG_OBSS_BSSID_ENABLE_BITMAP_CMDID,
WMI_PDEV_SET_NON_SRG_OBSS_COLOR_ENABLE_BITMAP_CMDID,
WMI_PDEV_SET_NON_SRG_OBSS_BSSID_ENABLE_BITMAP_CMDID,
WMI_PDEV_GET_TPC_STATS_CMDID,
WMI_PDEV_ENABLE_DURATION_BASED_TX_MODE_SELECTION_CMDID,
WMI_PDEV_GET_DPD_STATUS_CMDID,
WMI_PDEV_SET_BIOS_SAR_TABLE_CMDID,
WMI_PDEV_SET_BIOS_GEO_TABLE_CMDID,
WMI_VDEV_CREATE_CMDID = WMI_TLV_CMD(WMI_GRP_VDEV),
WMI_VDEV_DELETE_CMDID,
WMI_VDEV_START_REQUEST_CMDID,
@@ -1858,6 +1864,8 @@ enum wmi_tlv_tag {
WMI_TAG_PDEV_SRG_OBSS_BSSID_ENABLE_BITMAP_CMD,
WMI_TAG_PDEV_NON_SRG_OBSS_COLOR_ENABLE_BITMAP_CMD,
WMI_TAG_PDEV_NON_SRG_OBSS_BSSID_ENABLE_BITMAP_CMD,
WMI_TAG_PDEV_SET_BIOS_SAR_TABLE_CMD = 0x3D8,
WMI_TAG_PDEV_SET_BIOS_GEO_TABLE_CMD,
WMI_TAG_MAX
};
@@ -1991,6 +1999,7 @@ enum wmi_tlv_service {
WMI_TLV_SERVICE_ACK_TIMEOUT = 126,
WMI_TLV_SERVICE_PDEV_BSS_CHANNEL_INFO_64 = 127,
/* The first 128 bits */
WMI_MAX_SERVICE = 128,
WMI_TLV_SERVICE_CHAN_LOAD_INFO = 128,
@@ -2083,7 +2092,12 @@ enum wmi_tlv_service {
WMI_TLV_SERVICE_EXT2_MSG = 220,
WMI_TLV_SERVICE_SRG_SRP_SPATIAL_REUSE_SUPPORT = 249,
/* The second 128 bits */
WMI_MAX_EXT_SERVICE = 256,
WMI_TLV_SERVICE_BIOS_SAR_SUPPORT = 326,
/* The third 128 bits */
WMI_MAX_EXT2_SERVICE = 384
};
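/* Illustrative sketch, not part of the driver: with the service space split
 * into 128-bit segments (WMI_MAX_SERVICE, WMI_MAX_EXT_SERVICE,
 * WMI_MAX_EXT2_SERVICE), a bit such as WMI_TLV_SERVICE_BIOS_SAR_SUPPORT (326)
 * falls into the third segment. Hypothetical helper, assuming the segment
 * bitmap is an array of u32 words carrying 32 service bits each.
 */
static bool example_ext_service_enabled(const u32 *segment_bitmap, u32 service_id)
{
	u32 bit = service_id % WMI_MAX_SERVICE;	/* offset inside one 128-bit segment */

	return segment_bitmap[bit / 32] & BIT(bit % 32);
}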
enum {
@@ -3088,9 +3102,6 @@ enum scan_dwelltime_adaptive_mode {
SCAN_DWELL_MODE_STATIC = 4
};
#define WLAN_SCAN_MAX_NUM_SSID 10
#define WLAN_SCAN_MAX_NUM_BSSID 10
#define WLAN_SSID_MAX_LEN 32
struct element_info {
@@ -3105,7 +3116,6 @@ struct wlan_ssid {
#define WMI_IE_BITMAP_SIZE 8
#define WMI_SCAN_MAX_NUM_SSID 0x0A
/* prefix used by scan requestor ids on the host */
#define WMI_HOST_SCAN_REQUESTOR_ID_PREFIX 0xA000
@@ -3113,10 +3123,6 @@ struct wlan_ssid {
/* host cycles through the lower 12 bits to generate ids */
#define WMI_HOST_SCAN_REQ_ID_PREFIX 0xA000
#define WLAN_SCAN_PARAMS_MAX_SSID 16
#define WLAN_SCAN_PARAMS_MAX_BSSID 4
#define WLAN_SCAN_PARAMS_MAX_IE_LEN 256
/* Values lower than this may be refused by some firmware revisions with a scan
* completion with a timedout reason.
*/
@@ -3312,8 +3318,8 @@ struct scan_req_params {
u32 n_probes;
u32 *chan_list;
u32 notify_scan_events;
struct wlan_ssid ssid[WLAN_SCAN_PARAMS_MAX_SSID];
struct wmi_mac_addr bssid_list[WLAN_SCAN_PARAMS_MAX_BSSID];
struct element_info extraie;
struct element_info htcap;
struct element_info vhtcap;
@@ -5377,7 +5383,7 @@ struct ath11k_wmi_base {
struct completion service_ready;
struct completion unified_ready;
DECLARE_BITMAP(svc_map, WMI_MAX_EXT2_SERVICE);
wait_queue_head_t tx_credits_wq;
const struct wmi_peer_flags_map *peer_flags;
u32 num_mem_chunks;
@@ -5390,6 +5396,19 @@ struct ath11k_wmi_base {
struct ath11k_targ_cap *targ_cap;
};
/* Definition of HW data filtering */
enum hw_data_filter_type {
WMI_HW_DATA_FILTER_DROP_NON_ARP_BC = BIT(0),
WMI_HW_DATA_FILTER_DROP_NON_ICMPV6_MC = BIT(1),
};
struct wmi_hw_data_filter_cmd {
u32 tlv_header;
u32 vdev_id;
u32 enable;
u32 hw_filter_bitmap;
} __packed;
/* WOW structures */
enum wmi_wow_wakeup_event {
WOW_BMISS_EVENT = 0,
@@ -5534,6 +5553,45 @@ static inline const char *wow_reason(enum wmi_wow_wake_reason reason)
#undef C2S
struct wmi_wow_ev_arg {
u32 vdev_id;
u32 flag;
enum wmi_wow_wake_reason wake_reason;
u32 data_len;
};
enum wmi_tlv_pattern_type {
WOW_PATTERN_MIN = 0,
WOW_BITMAP_PATTERN = WOW_PATTERN_MIN,
WOW_IPV4_SYNC_PATTERN,
WOW_IPV6_SYNC_PATTERN,
WOW_WILD_CARD_PATTERN,
WOW_TIMER_PATTERN,
WOW_MAGIC_PATTERN,
WOW_IPV6_RA_PATTERN,
WOW_IOAC_PKT_PATTERN,
WOW_IOAC_TMR_PATTERN,
WOW_PATTERN_MAX
};
#define WOW_DEFAULT_BITMAP_PATTERN_SIZE 148
#define WOW_DEFAULT_BITMASK_SIZE 148
#define WOW_MIN_PATTERN_SIZE 1
#define WOW_MAX_PATTERN_SIZE 148
#define WOW_MAX_PKT_OFFSET 128
#define WOW_HDR_LEN (sizeof(struct ieee80211_hdr_3addr) + \
sizeof(struct rfc1042_hdr))
#define WOW_MAX_REDUCE (WOW_HDR_LEN - sizeof(struct ethhdr) - \
offsetof(struct ieee80211_hdr_3addr, addr1))
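/* Illustrative arithmetic, not part of the driver: with a 24-byte 802.11
 * 3-addr header and an 8-byte RFC1042 header, WOW_HDR_LEN is 32, so
 * WOW_MAX_REDUCE = 32 - 14 - 4 = 14; in native-wifi decap mode the advertised
 * pattern_max_len and max_pkt_offset shrink by this amount (148 -> 134 and
 * 128 -> 114).
 */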
struct wmi_wow_add_del_event_cmd {
u32 tlv_header;
u32 vdev_id;
u32 is_add;
u32 event_bitmap;
} __packed;
struct wmi_wow_enable_cmd {
u32 tlv_header;
u32 enable;
@@ -5546,13 +5604,309 @@ struct wmi_wow_host_wakeup_ind {
u32 reserved;
} __packed;
struct wmi_tlv_wow_event_info {
u32 vdev_id;
u32 flag;
u32 wake_reason;
u32 data_len;
} __packed;
struct wmi_wow_bitmap_pattern {
u32 tlv_header;
u8 patternbuf[WOW_DEFAULT_BITMAP_PATTERN_SIZE];
u8 bitmaskbuf[WOW_DEFAULT_BITMASK_SIZE];
u32 pattern_offset;
u32 pattern_len;
u32 bitmask_len;
u32 pattern_id;
} __packed;
struct wmi_wow_add_pattern_cmd {
u32 tlv_header;
u32 vdev_id;
u32 pattern_id;
u32 pattern_type;
} __packed;
struct wmi_wow_del_pattern_cmd {
u32 tlv_header;
u32 vdev_id;
u32 pattern_id;
u32 pattern_type;
} __packed;
#define WMI_PNO_MAX_SCHED_SCAN_PLANS 2
#define WMI_PNO_MAX_SCHED_SCAN_PLAN_INT 7200
#define WMI_PNO_MAX_SCHED_SCAN_PLAN_ITRNS 100
#define WMI_PNO_MAX_NETW_CHANNELS 26
#define WMI_PNO_MAX_NETW_CHANNELS_EX 60
#define WMI_PNO_MAX_SUPP_NETWORKS WLAN_SCAN_PARAMS_MAX_SSID
#define WMI_PNO_MAX_IE_LENGTH WLAN_SCAN_PARAMS_MAX_IE_LEN
/* size based on dot11 declaration without extra IEs as we will not carry those for PNO */
#define WMI_PNO_MAX_PB_REQ_SIZE 450
#define WMI_PNO_24G_DEFAULT_CH 1
#define WMI_PNO_5G_DEFAULT_CH 36
#define WMI_ACTIVE_MAX_CHANNEL_TIME 40
#define WMI_PASSIVE_MAX_CHANNEL_TIME 110
/* SSID broadcast type */
enum wmi_ssid_bcast_type {
BCAST_UNKNOWN = 0,
BCAST_NORMAL = 1,
BCAST_HIDDEN = 2,
};
#define WMI_NLO_MAX_SSIDS 16
#define WMI_NLO_MAX_CHAN 48
#define WMI_NLO_CONFIG_STOP BIT(0)
#define WMI_NLO_CONFIG_START BIT(1)
#define WMI_NLO_CONFIG_RESET BIT(2)
#define WMI_NLO_CONFIG_SLOW_SCAN BIT(4)
#define WMI_NLO_CONFIG_FAST_SCAN BIT(5)
#define WMI_NLO_CONFIG_SSID_HIDE_EN BIT(6)
/* This bit is used to indicate if EPNO or supplicant PNO is enabled.
* Only one of them can be enabled at a given time
*/
#define WMI_NLO_CONFIG_ENLO BIT(7)
#define WMI_NLO_CONFIG_SCAN_PASSIVE BIT(8)
#define WMI_NLO_CONFIG_ENLO_RESET BIT(9)
#define WMI_NLO_CONFIG_SPOOFED_MAC_IN_PROBE_REQ BIT(10)
#define WMI_NLO_CONFIG_RANDOM_SEQ_NO_IN_PROBE_REQ BIT(11)
#define WMI_NLO_CONFIG_ENABLE_IE_WHITELIST_IN_PROBE_REQ BIT(12)
#define WMI_NLO_CONFIG_ENABLE_CNLO_RSSI_CONFIG BIT(13)
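/* Illustrative sketch, not part of the driver: how the NLO flags above are
 * typically combined for a scan-start request, mirroring the PNO start
 * builder in wmi.c. The helper name is hypothetical.
 */
static u32 example_nlo_start_flags(bool passive, bool randomize)
{
	u32 flags = WMI_NLO_CONFIG_START | WMI_NLO_CONFIG_SSID_HIDE_EN;

	if (passive)
		flags |= WMI_NLO_CONFIG_SCAN_PASSIVE;
	if (randomize)
		flags |= WMI_NLO_CONFIG_SPOOFED_MAC_IN_PROBE_REQ |
			 WMI_NLO_CONFIG_RANDOM_SEQ_NO_IN_PROBE_REQ;

	return flags;
}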
struct wmi_nlo_ssid_param {
u32 valid;
struct wmi_ssid ssid;
} __packed;
struct wmi_nlo_enc_param {
u32 valid;
u32 enc_type;
} __packed;
struct wmi_nlo_auth_param {
u32 valid;
u32 auth_type;
} __packed;
struct wmi_nlo_bcast_nw_param {
u32 valid;
u32 bcast_nw_type;
} __packed;
struct wmi_nlo_rssi_param {
u32 valid;
s32 rssi;
} __packed;
struct nlo_configured_parameters {
/* TLV tag and len;*/
u32 tlv_header;
struct wmi_nlo_ssid_param ssid;
struct wmi_nlo_enc_param enc_type;
struct wmi_nlo_auth_param auth_type;
struct wmi_nlo_rssi_param rssi_cond;
/* indicates if the SSID is hidden or not */
struct wmi_nlo_bcast_nw_param bcast_nw_type;
} __packed;
struct wmi_network_type {
struct wmi_ssid ssid;
u32 authentication;
u32 encryption;
u32 bcast_nw_type;
u8 channel_count;
u16 channels[WMI_PNO_MAX_NETW_CHANNELS_EX];
s32 rssi_threshold;
};
struct wmi_pno_scan_req {
u8 enable;
u8 vdev_id;
u8 uc_networks_count;
struct wmi_network_type a_networks[WMI_PNO_MAX_SUPP_NETWORKS];
u32 fast_scan_period;
u32 slow_scan_period;
u8 fast_scan_max_cycles;
bool do_passive_scan;
u32 delay_start_time;
u32 active_min_time;
u32 active_max_time;
u32 passive_min_time;
u32 passive_max_time;
/* mac address randomization attributes */
u32 enable_pno_scan_randomization;
u8 mac_addr[ETH_ALEN];
u8 mac_addr_mask[ETH_ALEN];
};
struct wmi_wow_nlo_config_cmd {
u32 tlv_header;
u32 flags;
u32 vdev_id;
u32 fast_scan_max_cycles;
u32 active_dwell_time;
u32 passive_dwell_time;
u32 probe_bundle_size;
/* ART = IRT */
u32 rest_time;
/* Max value that can be reached after SBM */
u32 max_rest_time;
/* SBM */
u32 scan_backoff_multiplier;
/* SCBM */
u32 fast_scan_period;
/* specific to windows */
u32 slow_scan_period;
u32 no_of_ssids;
u32 num_of_channels;
/* NLO scan start delay time in milliseconds */
u32 delay_start_time;
/* MAC Address to use in Probe Req as SA */
struct wmi_mac_addr mac_addr;
/* Mask on which MAC has to be randomized */
struct wmi_mac_addr mac_mask;
/* IE bitmap to use in Probe Req */
u32 ie_bitmap[8];
/* Number of vendor OUIs. In the TLV vendor_oui[] */
u32 num_vendor_oui;
/* Number of connected NLO band preferences */
u32 num_cnlo_band_pref;
/* The TLVs will follow.
* nlo_configured_parameters nlo_list[];
* u32 channel_list[num_of_channels];
*/
} __packed;
#define WMI_MAX_NS_OFFLOADS 2
#define WMI_MAX_ARP_OFFLOADS 2
#define WMI_ARPOL_FLAGS_VALID BIT(0)
#define WMI_ARPOL_FLAGS_MAC_VALID BIT(1)
#define WMI_ARPOL_FLAGS_REMOTE_IP_VALID BIT(2)
struct wmi_arp_offload_tuple {
u32 tlv_header;
u32 flags;
u8 target_ipaddr[4];
u8 remote_ipaddr[4];
struct wmi_mac_addr target_mac;
} __packed;
#define WMI_NSOL_FLAGS_VALID BIT(0)
#define WMI_NSOL_FLAGS_MAC_VALID BIT(1)
#define WMI_NSOL_FLAGS_REMOTE_IP_VALID BIT(2)
#define WMI_NSOL_FLAGS_IS_IPV6_ANYCAST BIT(3)
#define WMI_NSOL_MAX_TARGET_IPS 2
struct wmi_ns_offload_tuple {
u32 tlv_header;
u32 flags;
u8 target_ipaddr[WMI_NSOL_MAX_TARGET_IPS][16];
u8 solicitation_ipaddr[16];
u8 remote_ipaddr[16];
struct wmi_mac_addr target_mac;
} __packed;
struct wmi_set_arp_ns_offload_cmd {
u32 tlv_header;
u32 flags;
u32 vdev_id;
u32 num_ns_ext_tuples;
/* The TLVs follow:
* wmi_ns_offload_tuple ns_tuples[WMI_MAX_NS_OFFLOADS];
* wmi_arp_offload_tuple arp_tuples[WMI_MAX_ARP_OFFLOADS];
* wmi_ns_offload_tuple ns_ext_tuples[num_ns_ext_tuples];
*/
} __packed;
#define GTK_OFFLOAD_OPCODE_MASK 0xFF000000
#define GTK_OFFLOAD_ENABLE_OPCODE 0x01000000
#define GTK_OFFLOAD_DISABLE_OPCODE 0x02000000
#define GTK_OFFLOAD_REQUEST_STATUS_OPCODE 0x04000000
#define GTK_OFFLOAD_KEK_BYTES 16
#define GTK_OFFLOAD_KCK_BYTES 16
#define GTK_REPLAY_COUNTER_BYTES 8
#define WMI_MAX_KEY_LEN 32
#define IGTK_PN_SIZE 6
struct wmi_replayc_cnt {
union {
u8 counter[GTK_REPLAY_COUNTER_BYTES];
struct {
u32 word0;
u32 word1;
} __packed;
} __packed;
} __packed;
struct wmi_gtk_offload_status_event {
u32 vdev_id;
u32 flags;
u32 refresh_cnt;
struct wmi_replayc_cnt replay_ctr;
u8 igtk_key_index;
u8 igtk_key_length;
u8 igtk_key_rsc[IGTK_PN_SIZE];
u8 igtk_key[WMI_MAX_KEY_LEN];
u8 gtk_key_index;
u8 gtk_key_length;
u8 gtk_key_rsc[GTK_REPLAY_COUNTER_BYTES];
u8 gtk_key[WMI_MAX_KEY_LEN];
} __packed;
struct wmi_gtk_rekey_offload_cmd {
u32 tlv_header;
u32 vdev_id;
u32 flags;
u8 kek[GTK_OFFLOAD_KEK_BYTES];
u8 kck[GTK_OFFLOAD_KCK_BYTES];
u8 replay_ctr[GTK_REPLAY_COUNTER_BYTES];
} __packed;
#define BIOS_SAR_TABLE_LEN (22)
#define BIOS_SAR_RSVD1_LEN (6)
#define BIOS_SAR_RSVD2_LEN (18)
struct wmi_pdev_set_sar_table_cmd {
u32 tlv_header;
u32 pdev_id;
u32 sar_len;
u32 rsvd_len;
} __packed;
struct wmi_pdev_set_geo_table_cmd {
u32 tlv_header;
u32 pdev_id;
u32 rsvd_len;
} __packed;
int ath11k_wmi_cmd_send(struct ath11k_pdev_wmi *wmi, struct sk_buff *skb,
u32 cmd_id);
struct sk_buff *ath11k_wmi_alloc_skb(struct ath11k_wmi_base *wmi_sc, u32 len);
@@ -5714,4 +6068,24 @@ int ath11k_wmi_scan_prob_req_oui(struct ath11k *ar,
const u8 mac_addr[ETH_ALEN]);
int ath11k_wmi_fw_dbglog_cfg(struct ath11k *ar, u32 *module_id_bitmap,
struct ath11k_fw_dbglog *dbglog);
int ath11k_wmi_wow_config_pno(struct ath11k *ar, u32 vdev_id,
struct wmi_pno_scan_req *pno_scan);
int ath11k_wmi_wow_del_pattern(struct ath11k *ar, u32 vdev_id, u32 pattern_id);
int ath11k_wmi_wow_add_pattern(struct ath11k *ar, u32 vdev_id, u32 pattern_id,
const u8 *pattern, const u8 *mask,
int pattern_len, int pattern_offset);
int ath11k_wmi_wow_add_wakeup_event(struct ath11k *ar, u32 vdev_id,
enum wmi_wow_wakeup_event event,
u32 enable);
int ath11k_wmi_hw_data_filter_cmd(struct ath11k *ar, u32 vdev_id,
u32 filter_bitmap, bool enable);
int ath11k_wmi_arp_ns_offload(struct ath11k *ar,
struct ath11k_vif *arvif, bool enable);
int ath11k_wmi_gtk_rekey_offload(struct ath11k *ar,
struct ath11k_vif *arvif, bool enable);
int ath11k_wmi_gtk_rekey_getinfo(struct ath11k *ar,
struct ath11k_vif *arvif);
int ath11k_wmi_pdev_set_bios_sar_table_param(struct ath11k *ar, const u8 *sar_val);
int ath11k_wmi_pdev_set_bios_geo_table_param(struct ath11k *ar);
#endif


@@ -6,11 +6,24 @@
#include <linux/delay.h>
#include "mac.h"
#include <net/mac80211.h>
#include "core.h"
#include "hif.h"
#include "debug.h"
#include "wmi.h"
#include "wow.h"
#include "dp_rx.h"
static const struct wiphy_wowlan_support ath11k_wowlan_support = {
.flags = WIPHY_WOWLAN_DISCONNECT |
WIPHY_WOWLAN_MAGIC_PKT |
WIPHY_WOWLAN_SUPPORTS_GTK_REKEY |
WIPHY_WOWLAN_GTK_REKEY_FAILURE,
.pattern_min_len = WOW_MIN_PATTERN_SIZE,
.pattern_max_len = WOW_MAX_PATTERN_SIZE,
.max_pkt_offset = WOW_MAX_PKT_OFFSET,
};
int ath11k_wow_enable(struct ath11k_base *ab)
{
@@ -71,3 +84,753 @@ int ath11k_wow_wakeup(struct ath11k_base *ab)
return 0;
}
static int ath11k_wow_vif_cleanup(struct ath11k_vif *arvif)
{
struct ath11k *ar = arvif->ar;
int i, ret;
for (i = 0; i < WOW_EVENT_MAX; i++) {
ret = ath11k_wmi_wow_add_wakeup_event(ar, arvif->vdev_id, i, 0);
if (ret) {
ath11k_warn(ar->ab, "failed to issue wow wakeup for event %s on vdev %i: %d\n",
wow_wakeup_event(i), arvif->vdev_id, ret);
return ret;
}
}
for (i = 0; i < ar->wow.max_num_patterns; i++) {
ret = ath11k_wmi_wow_del_pattern(ar, arvif->vdev_id, i);
if (ret) {
ath11k_warn(ar->ab, "failed to delete wow pattern %d for vdev %i: %d\n",
i, arvif->vdev_id, ret);
return ret;
}
}
return 0;
}
static int ath11k_wow_cleanup(struct ath11k *ar)
{
struct ath11k_vif *arvif;
int ret;
lockdep_assert_held(&ar->conf_mutex);
list_for_each_entry(arvif, &ar->arvifs, list) {
ret = ath11k_wow_vif_cleanup(arvif);
if (ret) {
ath11k_warn(ar->ab, "failed to clean wow wakeups on vdev %i: %d\n",
arvif->vdev_id, ret);
return ret;
}
}
return 0;
}
/* Convert a 802.3 format to a 802.11 format.
 *          +------------+-----------+--------+----------------+
 * 802.3:   |dest mac(6B)|src mac(6B)|type(2B)|     body...    |
 *          +------------+-----------+--------+----------------+
 *                |__          |_______       |____________    |________
 *                   |                 |                   |            |
 *          +--+------------+----+-----------+------+--------+-----------+
 * 802.11:  |4B|dest mac(6B)| 6B |src mac(6B)|  8B  |type(2B)|  body...  |
 *          +--+------------+----+-----------+------+--------+-----------+
 */
static void ath11k_wow_convert_8023_to_80211(struct cfg80211_pkt_pattern *new,
const struct cfg80211_pkt_pattern *old)
{
u8 hdr_8023_pattern[ETH_HLEN] = {};
u8 hdr_8023_bit_mask[ETH_HLEN] = {};
u8 hdr_80211_pattern[WOW_HDR_LEN] = {};
u8 hdr_80211_bit_mask[WOW_HDR_LEN] = {};
int total_len = old->pkt_offset + old->pattern_len;
int hdr_80211_end_offset;
struct ieee80211_hdr_3addr *new_hdr_pattern =
(struct ieee80211_hdr_3addr *)hdr_80211_pattern;
struct ieee80211_hdr_3addr *new_hdr_mask =
(struct ieee80211_hdr_3addr *)hdr_80211_bit_mask;
struct ethhdr *old_hdr_pattern = (struct ethhdr *)hdr_8023_pattern;
struct ethhdr *old_hdr_mask = (struct ethhdr *)hdr_8023_bit_mask;
int hdr_len = sizeof(*new_hdr_pattern);
struct rfc1042_hdr *new_rfc_pattern =
(struct rfc1042_hdr *)(hdr_80211_pattern + hdr_len);
struct rfc1042_hdr *new_rfc_mask =
(struct rfc1042_hdr *)(hdr_80211_bit_mask + hdr_len);
int rfc_len = sizeof(*new_rfc_pattern);
memcpy(hdr_8023_pattern + old->pkt_offset,
old->pattern, ETH_HLEN - old->pkt_offset);
memcpy(hdr_8023_bit_mask + old->pkt_offset,
old->mask, ETH_HLEN - old->pkt_offset);
/* Copy destination address */
memcpy(new_hdr_pattern->addr1, old_hdr_pattern->h_dest, ETH_ALEN);
memcpy(new_hdr_mask->addr1, old_hdr_mask->h_dest, ETH_ALEN);
/* Copy source address */
memcpy(new_hdr_pattern->addr3, old_hdr_pattern->h_source, ETH_ALEN);
memcpy(new_hdr_mask->addr3, old_hdr_mask->h_source, ETH_ALEN);
/* Copy logic link type */
memcpy(&new_rfc_pattern->snap_type,
&old_hdr_pattern->h_proto,
sizeof(old_hdr_pattern->h_proto));
memcpy(&new_rfc_mask->snap_type,
&old_hdr_mask->h_proto,
sizeof(old_hdr_mask->h_proto));
/* Compute new pkt_offset */
if (old->pkt_offset < ETH_ALEN)
new->pkt_offset = old->pkt_offset +
offsetof(struct ieee80211_hdr_3addr, addr1);
else if (old->pkt_offset < offsetof(struct ethhdr, h_proto))
new->pkt_offset = old->pkt_offset +
offsetof(struct ieee80211_hdr_3addr, addr3) -
offsetof(struct ethhdr, h_source);
else
new->pkt_offset = old->pkt_offset + hdr_len + rfc_len - ETH_HLEN;
/* Compute new hdr end offset */
if (total_len > ETH_HLEN)
hdr_80211_end_offset = hdr_len + rfc_len;
else if (total_len > offsetof(struct ethhdr, h_proto))
hdr_80211_end_offset = hdr_len + rfc_len + total_len - ETH_HLEN;
else if (total_len > ETH_ALEN)
hdr_80211_end_offset = total_len - ETH_ALEN +
offsetof(struct ieee80211_hdr_3addr, addr3);
else
hdr_80211_end_offset = total_len +
offsetof(struct ieee80211_hdr_3addr, addr1);
new->pattern_len = hdr_80211_end_offset - new->pkt_offset;
memcpy((u8 *)new->pattern,
hdr_80211_pattern + new->pkt_offset,
new->pattern_len);
memcpy((u8 *)new->mask,
hdr_80211_bit_mask + new->pkt_offset,
new->pattern_len);
if (total_len > ETH_HLEN) {
/* Copy frame body */
memcpy((u8 *)new->pattern + new->pattern_len,
(void *)old->pattern + ETH_HLEN - old->pkt_offset,
total_len - ETH_HLEN);
memcpy((u8 *)new->mask + new->pattern_len,
(void *)old->mask + ETH_HLEN - old->pkt_offset,
total_len - ETH_HLEN);
new->pattern_len += total_len - ETH_HLEN;
}
}
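/* Worked example for the conversion above, illustrative only: a pattern that
 * matches the EtherType at 802.3 pkt_offset 12 with pattern_len 2 falls into
 * the final offset branch, so new->pkt_offset = 12 + 24 + 8 - 14 = 30, which
 * is exactly where snap_type sits after the 24-byte 802.11 header and the
 * first 6 bytes of the RFC1042 header.
 */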
static int ath11k_wmi_pno_check_and_convert(struct ath11k *ar, u32 vdev_id,
struct cfg80211_sched_scan_request *nd_config,
struct wmi_pno_scan_req *pno)
{
int i, j;
u8 ssid_len;
pno->enable = 1;
pno->vdev_id = vdev_id;
pno->uc_networks_count = nd_config->n_match_sets;
if (!pno->uc_networks_count ||
pno->uc_networks_count > WMI_PNO_MAX_SUPP_NETWORKS)
return -EINVAL;
if (nd_config->n_channels > WMI_PNO_MAX_NETW_CHANNELS_EX)
return -EINVAL;
/* Filling per profile params */
for (i = 0; i < pno->uc_networks_count; i++) {
ssid_len = nd_config->match_sets[i].ssid.ssid_len;
if (ssid_len == 0 || ssid_len > 32)
return -EINVAL;
pno->a_networks[i].ssid.ssid_len = ssid_len;
memcpy(pno->a_networks[i].ssid.ssid,
nd_config->match_sets[i].ssid.ssid,
nd_config->match_sets[i].ssid.ssid_len);
pno->a_networks[i].authentication = 0;
pno->a_networks[i].encryption = 0;
pno->a_networks[i].bcast_nw_type = 0;
/* Copying list of valid channel into request */
pno->a_networks[i].channel_count = nd_config->n_channels;
pno->a_networks[i].rssi_threshold = nd_config->match_sets[i].rssi_thold;
for (j = 0; j < nd_config->n_channels; j++) {
pno->a_networks[i].channels[j] =
nd_config->channels[j]->center_freq;
}
}
/* set scan to passive if no SSIDs are specified in the request */
if (nd_config->n_ssids == 0)
pno->do_passive_scan = true;
else
pno->do_passive_scan = false;
for (i = 0; i < nd_config->n_ssids; i++) {
j = 0;
while (j < pno->uc_networks_count) {
if (pno->a_networks[j].ssid.ssid_len ==
nd_config->ssids[i].ssid_len &&
(memcmp(pno->a_networks[j].ssid.ssid,
nd_config->ssids[i].ssid,
pno->a_networks[j].ssid.ssid_len) == 0)) {
pno->a_networks[j].bcast_nw_type = BCAST_HIDDEN;
break;
}
j++;
}
}
if (nd_config->n_scan_plans == 2) {
pno->fast_scan_period = nd_config->scan_plans[0].interval * MSEC_PER_SEC;
pno->fast_scan_max_cycles = nd_config->scan_plans[0].iterations;
pno->slow_scan_period =
nd_config->scan_plans[1].interval * MSEC_PER_SEC;
} else if (nd_config->n_scan_plans == 1) {
pno->fast_scan_period = nd_config->scan_plans[0].interval * MSEC_PER_SEC;
pno->fast_scan_max_cycles = 1;
pno->slow_scan_period = nd_config->scan_plans[0].interval * MSEC_PER_SEC;
} else {
ath11k_warn(ar->ab, "Invalid number of scan plans %d !!",
nd_config->n_scan_plans);
}
if (nd_config->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) {
/* enable mac randomization */
pno->enable_pno_scan_randomization = 1;
memcpy(pno->mac_addr, nd_config->mac_addr, ETH_ALEN);
memcpy(pno->mac_addr_mask, nd_config->mac_addr_mask, ETH_ALEN);
}
pno->delay_start_time = nd_config->delay;
/* Current FW does not support min-max range for dwell time */
pno->active_max_time = WMI_ACTIVE_MAX_CHANNEL_TIME;
pno->passive_max_time = WMI_PASSIVE_MAX_CHANNEL_TIME;
return 0;
}
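/* Illustrative example, not part of the driver: for a single sched-scan plan
 * with a 10 second interval, the conversion above yields
 * fast_scan_period = slow_scan_period = 10000 ms and fast_scan_max_cycles = 1;
 * with two plans the first drives the fast period and cycle count and the
 * second the slow period.
 */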
static int ath11k_vif_wow_set_wakeups(struct ath11k_vif *arvif,
struct cfg80211_wowlan *wowlan)
{
int ret, i;
unsigned long wow_mask = 0;
struct ath11k *ar = arvif->ar;
const struct cfg80211_pkt_pattern *patterns = wowlan->patterns;
int pattern_id = 0;
/* Setup requested WOW features */
switch (arvif->vdev_type) {
case WMI_VDEV_TYPE_IBSS:
__set_bit(WOW_BEACON_EVENT, &wow_mask);
fallthrough;
case WMI_VDEV_TYPE_AP:
__set_bit(WOW_DEAUTH_RECVD_EVENT, &wow_mask);
__set_bit(WOW_DISASSOC_RECVD_EVENT, &wow_mask);
__set_bit(WOW_PROBE_REQ_WPS_IE_EVENT, &wow_mask);
__set_bit(WOW_AUTH_REQ_EVENT, &wow_mask);
__set_bit(WOW_ASSOC_REQ_EVENT, &wow_mask);
__set_bit(WOW_HTT_EVENT, &wow_mask);
__set_bit(WOW_RA_MATCH_EVENT, &wow_mask);
break;
case WMI_VDEV_TYPE_STA:
if (wowlan->disconnect) {
__set_bit(WOW_DEAUTH_RECVD_EVENT, &wow_mask);
__set_bit(WOW_DISASSOC_RECVD_EVENT, &wow_mask);
__set_bit(WOW_BMISS_EVENT, &wow_mask);
__set_bit(WOW_CSA_IE_EVENT, &wow_mask);
}
if (wowlan->magic_pkt)
__set_bit(WOW_MAGIC_PKT_RECVD_EVENT, &wow_mask);
if (wowlan->nd_config) {
struct wmi_pno_scan_req *pno;
int ret;
pno = kzalloc(sizeof(*pno), GFP_KERNEL);
if (!pno)
return -ENOMEM;
ar->nlo_enabled = true;
ret = ath11k_wmi_pno_check_and_convert(ar, arvif->vdev_id,
wowlan->nd_config, pno);
if (!ret) {
ath11k_wmi_wow_config_pno(ar, arvif->vdev_id, pno);
__set_bit(WOW_NLO_DETECTED_EVENT, &wow_mask);
}
kfree(pno);
}
break;
default:
break;
}
for (i = 0; i < wowlan->n_patterns; i++) {
u8 bitmask[WOW_MAX_PATTERN_SIZE] = {};
u8 ath_pattern[WOW_MAX_PATTERN_SIZE] = {};
u8 ath_bitmask[WOW_MAX_PATTERN_SIZE] = {};
struct cfg80211_pkt_pattern new_pattern = {};
struct cfg80211_pkt_pattern old_pattern = patterns[i];
int j;
new_pattern.pattern = ath_pattern;
new_pattern.mask = ath_bitmask;
if (patterns[i].pattern_len > WOW_MAX_PATTERN_SIZE)
continue;
/* convert bytemask to bitmask */
for (j = 0; j < patterns[i].pattern_len; j++)
if (patterns[i].mask[j / 8] & BIT(j % 8))
bitmask[j] = 0xff;
old_pattern.mask = bitmask;
if (ar->wmi->wmi_ab->wlan_resource_config.rx_decap_mode ==
ATH11K_HW_TXRX_NATIVE_WIFI) {
if (patterns[i].pkt_offset < ETH_HLEN) {
u8 pattern_ext[WOW_MAX_PATTERN_SIZE] = {};
memcpy(pattern_ext, old_pattern.pattern,
old_pattern.pattern_len);
old_pattern.pattern = pattern_ext;
ath11k_wow_convert_8023_to_80211(&new_pattern,
&old_pattern);
} else {
new_pattern = old_pattern;
new_pattern.pkt_offset += WOW_HDR_LEN - ETH_HLEN;
}
}
if (WARN_ON(new_pattern.pattern_len > WOW_MAX_PATTERN_SIZE))
return -EINVAL;
ret = ath11k_wmi_wow_add_pattern(ar, arvif->vdev_id,
pattern_id,
new_pattern.pattern,
new_pattern.mask,
new_pattern.pattern_len,
new_pattern.pkt_offset);
if (ret) {
ath11k_warn(ar->ab, "failed to add pattern %i to vdev %i: %d\n",
pattern_id,
arvif->vdev_id, ret);
return ret;
}
pattern_id++;
__set_bit(WOW_PATTERN_MATCH_EVENT, &wow_mask);
}
for (i = 0; i < WOW_EVENT_MAX; i++) {
if (!test_bit(i, &wow_mask))
continue;
ret = ath11k_wmi_wow_add_wakeup_event(ar, arvif->vdev_id, i, 1);
if (ret) {
ath11k_warn(ar->ab, "failed to enable wakeup event %s on vdev %i: %d\n",
wow_wakeup_event(i), arvif->vdev_id, ret);
return ret;
}
}
return 0;
}
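/* Illustrative example for the bytemask-to-bitmask expansion above, not part
 * of the driver: for a 4-byte pattern with cfg80211 mask byte 0x05 (bits 0
 * and 2 set), bytes 0 and 2 of the expanded bitmask become 0xff while bytes 1
 * and 3 stay 0x00, which is the per-byte mask layout the firmware pattern
 * command expects.
 */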
static int ath11k_wow_set_wakeups(struct ath11k *ar,
struct cfg80211_wowlan *wowlan)
{
struct ath11k_vif *arvif;
int ret;
lockdep_assert_held(&ar->conf_mutex);
list_for_each_entry(arvif, &ar->arvifs, list) {
ret = ath11k_vif_wow_set_wakeups(arvif, wowlan);
if (ret) {
ath11k_warn(ar->ab, "failed to set wow wakeups on vdev %i: %d\n",
arvif->vdev_id, ret);
return ret;
}
}
return 0;
}
static int ath11k_vif_wow_clean_nlo(struct ath11k_vif *arvif)
{
int ret = 0;
struct ath11k *ar = arvif->ar;
switch (arvif->vdev_type) {
case WMI_VDEV_TYPE_STA:
if (ar->nlo_enabled) {
struct wmi_pno_scan_req *pno;
pno = kzalloc(sizeof(*pno), GFP_KERNEL);
if (!pno)
return -ENOMEM;
pno->enable = 0;
ar->nlo_enabled = false;
ret = ath11k_wmi_wow_config_pno(ar, arvif->vdev_id, pno);
kfree(pno);
}
break;
default:
break;
}
return ret;
}
static int ath11k_wow_nlo_cleanup(struct ath11k *ar)
{
struct ath11k_vif *arvif;
int ret;
lockdep_assert_held(&ar->conf_mutex);
list_for_each_entry(arvif, &ar->arvifs, list) {
ret = ath11k_vif_wow_clean_nlo(arvif);
if (ret) {
ath11k_warn(ar->ab, "failed to clean nlo settings on vdev %i: %d\n",
arvif->vdev_id, ret);
return ret;
}
}
return 0;
}
static int ath11k_wow_set_hw_filter(struct ath11k *ar)
{
struct ath11k_vif *arvif;
u32 bitmap;
int ret;
lockdep_assert_held(&ar->conf_mutex);
list_for_each_entry(arvif, &ar->arvifs, list) {
bitmap = WMI_HW_DATA_FILTER_DROP_NON_ICMPV6_MC |
WMI_HW_DATA_FILTER_DROP_NON_ARP_BC;
ret = ath11k_wmi_hw_data_filter_cmd(ar, arvif->vdev_id,
bitmap,
true);
if (ret) {
ath11k_warn(ar->ab, "failed to set hw data filter on vdev %i: %d\n",
arvif->vdev_id, ret);
return ret;
}
}
return 0;
}
static int ath11k_wow_clear_hw_filter(struct ath11k *ar)
{
struct ath11k_vif *arvif;
int ret;
lockdep_assert_held(&ar->conf_mutex);
list_for_each_entry(arvif, &ar->arvifs, list) {
ret = ath11k_wmi_hw_data_filter_cmd(ar, arvif->vdev_id, 0, false);
if (ret) {
ath11k_warn(ar->ab, "failed to clear hw data filter on vdev %i: %d\n",
arvif->vdev_id, ret);
return ret;
}
}
return 0;
}
static int ath11k_wow_arp_ns_offload(struct ath11k *ar, bool enable)
{
struct ath11k_vif *arvif;
int ret;
lockdep_assert_held(&ar->conf_mutex);
list_for_each_entry(arvif, &ar->arvifs, list) {
if (arvif->vdev_type != WMI_VDEV_TYPE_STA)
continue;
ret = ath11k_wmi_arp_ns_offload(ar, arvif, enable);
if (ret) {
ath11k_warn(ar->ab, "failed to set arp ns offload vdev %i: enable %d, ret %d\n",
arvif->vdev_id, enable, ret);
return ret;
}
}
return 0;
}
static int ath11k_gtk_rekey_offload(struct ath11k *ar, bool enable)
{
struct ath11k_vif *arvif;
int ret;
lockdep_assert_held(&ar->conf_mutex);
list_for_each_entry(arvif, &ar->arvifs, list) {
if (arvif->vdev_type != WMI_VDEV_TYPE_STA ||
!arvif->is_up ||
!arvif->rekey_data.enable_offload)
continue;
/* get rekey info before disable rekey offload */
if (!enable) {
ret = ath11k_wmi_gtk_rekey_getinfo(ar, arvif);
if (ret) {
ath11k_warn(ar->ab, "failed to request rekey info vdev %i, ret %d\n",
arvif->vdev_id, ret);
return ret;
}
}
ret = ath11k_wmi_gtk_rekey_offload(ar, arvif, enable);
if (ret) {
ath11k_warn(ar->ab, "failed to offload gtk reky vdev %i: enable %d, ret %d\n",
arvif->vdev_id, enable, ret);
return ret;
}
}
return 0;
}
static int ath11k_wow_protocol_offload(struct ath11k *ar, bool enable)
{
int ret;
ret = ath11k_wow_arp_ns_offload(ar, enable);
if (ret) {
ath11k_warn(ar->ab, "failed to offload ARP and NS %d %d\n",
enable, ret);
return ret;
}
ret = ath11k_gtk_rekey_offload(ar, enable);
if (ret) {
ath11k_warn(ar->ab, "failed to offload gtk rekey %d %d\n",
enable, ret);
return ret;
}
return 0;
}
int ath11k_wow_op_suspend(struct ieee80211_hw *hw,
struct cfg80211_wowlan *wowlan)
{
struct ath11k *ar = hw->priv;
int ret;
mutex_lock(&ar->conf_mutex);
ret = ath11k_dp_rx_pktlog_stop(ar->ab, true);
if (ret) {
ath11k_warn(ar->ab,
"failed to stop dp rx (and timer) pktlog during wow suspend: %d\n",
ret);
goto exit;
}
ret = ath11k_wow_cleanup(ar);
if (ret) {
ath11k_warn(ar->ab, "failed to clear wow wakeup events: %d\n",
ret);
goto exit;
}
ret = ath11k_wow_set_wakeups(ar, wowlan);
if (ret) {
ath11k_warn(ar->ab, "failed to set wow wakeup events: %d\n",
ret);
goto cleanup;
}
ret = ath11k_wow_protocol_offload(ar, true);
if (ret) {
ath11k_warn(ar->ab, "failed to set wow protocol offload events: %d\n",
ret);
goto cleanup;
}
ath11k_mac_drain_tx(ar);
ret = ath11k_mac_wait_tx_complete(ar);
if (ret) {
ath11k_warn(ar->ab, "failed to wait tx complete: %d\n", ret);
goto cleanup;
}
ret = ath11k_wow_set_hw_filter(ar);
if (ret) {
ath11k_warn(ar->ab, "failed to set hw filter: %d\n",
ret);
goto cleanup;
}
ret = ath11k_wow_enable(ar->ab);
if (ret) {
ath11k_warn(ar->ab, "failed to start wow: %d\n", ret);
goto cleanup;
}
ret = ath11k_dp_rx_pktlog_stop(ar->ab, false);
if (ret) {
ath11k_warn(ar->ab,
"failed to stop dp rx pktlog during wow suspend: %d\n",
ret);
goto cleanup;
}
ath11k_ce_stop_shadow_timers(ar->ab);
ath11k_dp_stop_shadow_timers(ar->ab);
ath11k_hif_irq_disable(ar->ab);
ath11k_hif_ce_irq_disable(ar->ab);
ret = ath11k_hif_suspend(ar->ab);
if (ret) {
ath11k_warn(ar->ab, "failed to suspend hif: %d\n", ret);
goto wakeup;
}
goto exit;
wakeup:
ath11k_wow_wakeup(ar->ab);
cleanup:
ath11k_wow_cleanup(ar);
exit:
mutex_unlock(&ar->conf_mutex);
return ret ? 1 : 0;
}
void ath11k_wow_op_set_wakeup(struct ieee80211_hw *hw, bool enabled)
{
struct ath11k *ar = hw->priv;
mutex_lock(&ar->conf_mutex);
device_set_wakeup_enable(ar->ab->dev, enabled);
mutex_unlock(&ar->conf_mutex);
}
int ath11k_wow_op_resume(struct ieee80211_hw *hw)
{
struct ath11k *ar = hw->priv;
int ret;
mutex_lock(&ar->conf_mutex);
ret = ath11k_hif_resume(ar->ab);
if (ret) {
ath11k_warn(ar->ab, "failed to resume hif: %d\n", ret);
goto exit;
}
ath11k_hif_ce_irq_enable(ar->ab);
ath11k_hif_irq_enable(ar->ab);
ret = ath11k_dp_rx_pktlog_start(ar->ab);
if (ret) {
ath11k_warn(ar->ab, "failed to start rx pktlog from wow: %d\n", ret);
goto exit;
}
ret = ath11k_wow_wakeup(ar->ab);
if (ret) {
ath11k_warn(ar->ab, "failed to wakeup from wow: %d\n", ret);
goto exit;
}
ret = ath11k_wow_nlo_cleanup(ar);
if (ret) {
ath11k_warn(ar->ab, "failed to cleanup nlo: %d\n", ret);
goto exit;
}
ret = ath11k_wow_clear_hw_filter(ar);
if (ret) {
ath11k_warn(ar->ab, "failed to clear hw filter: %d\n", ret);
goto exit;
}
ret = ath11k_wow_protocol_offload(ar, false);
if (ret) {
ath11k_warn(ar->ab, "failed to clear wow protocol offload events: %d\n",
ret);
goto exit;
}
exit:
if (ret) {
switch (ar->state) {
case ATH11K_STATE_ON:
ar->state = ATH11K_STATE_RESTARTING;
ret = 1;
break;
case ATH11K_STATE_OFF:
case ATH11K_STATE_RESTARTING:
case ATH11K_STATE_RESTARTED:
case ATH11K_STATE_WEDGED:
ath11k_warn(ar->ab, "encountered unexpected device state %d on resume, cannot recover\n",
ar->state);
ret = -EIO;
break;
}
}
mutex_unlock(&ar->conf_mutex);
return ret;
}
int ath11k_wow_init(struct ath11k *ar)
{
if (!test_bit(WMI_TLV_SERVICE_WOW, ar->wmi->wmi_ab->svc_map))
return 0;
ar->wow.wowlan_support = ath11k_wowlan_support;
if (ar->wmi->wmi_ab->wlan_resource_config.rx_decap_mode ==
ATH11K_HW_TXRX_NATIVE_WIFI) {
ar->wow.wowlan_support.pattern_max_len -= WOW_MAX_REDUCE;
ar->wow.wowlan_support.max_pkt_offset -= WOW_MAX_REDUCE;
}
if (test_bit(WMI_TLV_SERVICE_NLO, ar->wmi->wmi_ab->svc_map)) {
ar->wow.wowlan_support.flags |= WIPHY_WOWLAN_NET_DETECT;
ar->wow.wowlan_support.max_nd_match_sets = WMI_PNO_MAX_SUPP_NETWORKS;
}
ar->wow.max_num_patterns = ATH11K_WOW_PATTERNS;
ar->wow.wowlan_support.n_patterns = ar->wow.max_num_patterns;
ar->hw->wiphy->wowlan = &ar->wow.wowlan_support;
device_set_wakeup_capable(ar->ab->dev, true);
return 0;
}


@@ -3,8 +3,53 @@
* Copyright (c) 2020 The Linux Foundation. All rights reserved.
*/
#ifndef _WOW_H_
#define _WOW_H_
struct ath11k_wow {
u32 max_num_patterns;
struct completion wakeup_completed;
struct wiphy_wowlan_support wowlan_support;
};
struct rfc1042_hdr {
u8 llc_dsap;
u8 llc_ssap;
u8 llc_ctrl;
u8 snap_oui[3];
__be16 snap_type;
} __packed;
#define ATH11K_WOW_RETRY_NUM 3
#define ATH11K_WOW_RETRY_WAIT_MS 200
#define ATH11K_WOW_PATTERNS 22
#ifdef CONFIG_PM
int ath11k_wow_init(struct ath11k *ar);
int ath11k_wow_op_suspend(struct ieee80211_hw *hw,
struct cfg80211_wowlan *wowlan);
int ath11k_wow_op_resume(struct ieee80211_hw *hw);
void ath11k_wow_op_set_wakeup(struct ieee80211_hw *hw, bool enabled);
int ath11k_wow_enable(struct ath11k_base *ab);
int ath11k_wow_wakeup(struct ath11k_base *ab);
#else
static inline int ath11k_wow_init(struct ath11k *ar)
{
return 0;
}
static inline int ath11k_wow_enable(struct ath11k_base *ab)
{
return 0;
}
static inline int ath11k_wow_wakeup(struct ath11k_base *ab)
{
return 0;
}
#endif /* CONFIG_PM */
#endif /* _WOW_H_ */


@@ -1538,7 +1538,7 @@ static int ath6kl_htc_rx_alloc(struct htc_target *target,
queue, n_msg);
/*
* This is due to unavailability of buffers to rx entire data.
* Return no error so that free buffers from queue can be used
* to receive partial data.
*/


@@ -98,13 +98,9 @@ static int ath_ahb_probe(struct platform_device *pdev)
return -ENOMEM;
}
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
ath9k_fill_chanctx_ops();
hw = ieee80211_alloc_hw(sizeof(struct ath_softc), &ath9k_ops);


@@ -301,10 +301,11 @@ ar9002_set_txdesc(struct ath_hw *ah, void *ds, struct ath_tx_info *i)
WRITE_ONCE(ads->ds_ctl5, set11nPktDurRTSCTS(i->rates, 2)
| set11nPktDurRTSCTS(i->rates, 3));
WRITE_ONCE(ads->ds_ctl7,
set11nRateFlags(i->rates, 0) | set11nChainSel(i->rates, 0)
| set11nRateFlags(i->rates, 1) | set11nChainSel(i->rates, 1)
| set11nRateFlags(i->rates, 2) | set11nChainSel(i->rates, 2)
| set11nRateFlags(i->rates, 3) | set11nChainSel(i->rates, 3)
| SM(i->rtscts_rate, AR_RTSCTSRate));
WRITE_ONCE(ads->ds_ctl9, SM(i->txpower[1], AR_XmitPower1));


@@ -177,7 +177,7 @@ static void ar9003_hw_iqcal_collect(struct ath_hw *ah)
int i;
/* Accumulate IQ cal measures for active chains */
for (i = 0; i < AR9300_MAX_CHAINS; i++) {
if (ah->txchainmask & BIT(i)) {
ah->totalPowerMeasI[i] +=
REG_READ(ah, AR_PHY_CAL_MEAS_0(i));


@@ -3911,7 +3911,7 @@ static void ar9003_hw_atten_apply(struct ath_hw *ah, struct ath9k_channel *chan)
}
/* Test value. if 0 then attenuation is unused. Don't load anything. */
for (i = 0; i < AR9300_MAX_CHAINS; i++) {
if (ah->txchainmask & BIT(i)) {
value = ar9003_hw_atten_chain_get(ah, i, chan);
REG_RMW_FIELD(ah, ext_atten_reg[i],
@@ -4747,7 +4757,7 @@ static void ar9003_hw_get_target_power_eeprom(struct ath_hw *ah,
}
static int ar9003_hw_cal_pier_get(struct ath_hw *ah,
int mode,
bool is2ghz,
int ipier,
int ichain,
int *pfrequency,
@@ -4757,7 +4757,6 @@ static int ar9003_hw_cal_pier_get(struct ath_hw *ah,
{
u8 *pCalPier;
struct ar9300_cal_data_per_freq_op_loop *pCalPierStruct;
int is2GHz;
struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
struct ath_common *common = ath9k_hw_common(ah);
@@ -4768,17 +4767,7 @@ static int ar9003_hw_cal_pier_get(struct ath_hw *ah,
return -1;
}
if (mode) { /* 5GHz */
if (ipier >= AR9300_NUM_5G_CAL_PIERS) {
ath_dbg(common, EEPROM,
"Invalid 5GHz cal pier index, must be less than %d\n",
AR9300_NUM_5G_CAL_PIERS);
return -1;
}
pCalPier = &(eep->calFreqPier5G[ipier]);
pCalPierStruct = &(eep->calPierData5G[ichain][ipier]);
is2GHz = 0;
} else {
if (is2ghz) {
if (ipier >= AR9300_NUM_2G_CAL_PIERS) {
ath_dbg(common, EEPROM,
"Invalid 2GHz cal pier index, must be less than %d\n",
@@ -4788,10 +4777,18 @@ static int ar9003_hw_cal_pier_get(struct ath_hw *ah,
pCalPier = &(eep->calFreqPier2G[ipier]);
pCalPierStruct = &(eep->calPierData2G[ichain][ipier]);
is2GHz = 1;
} else {
if (ipier >= AR9300_NUM_5G_CAL_PIERS) {
ath_dbg(common, EEPROM,
"Invalid 5GHz cal pier index, must be less than %d\n",
AR9300_NUM_5G_CAL_PIERS);
return -1;
}
pCalPier = &(eep->calFreqPier5G[ipier]);
pCalPierStruct = &(eep->calPierData5G[ichain][ipier]);
}
*pfrequency = ath9k_hw_fbin2freq(*pCalPier, is2GHz);
*pfrequency = ath9k_hw_fbin2freq(*pCalPier, is2ghz);
*pcorrection = pCalPierStruct->refPower;
*ptemperature = pCalPierStruct->tempMeas;
*pvoltage = pCalPierStruct->voltMeas;
@@ -4960,7 +4957,6 @@ tempslope:
static int ar9003_hw_calibration_apply(struct ath_hw *ah, int frequency)
{
int ichain, ipier, npier;
int mode;
int lfrequency[AR9300_MAX_CHAINS],
lcorrection[AR9300_MAX_CHAINS],
ltemperature[AR9300_MAX_CHAINS], lvoltage[AR9300_MAX_CHAINS],
@@ -4976,12 +4972,12 @@ static int ar9003_hw_calibration_apply(struct ath_hw *ah, int frequency)
int pfrequency, pcorrection, ptemperature, pvoltage,
pnf_cal, pnf_pwr;
struct ath_common *common = ath9k_hw_common(ah);
bool is2ghz = frequency < 4000;
mode = (frequency >= 4000);
if (mode)
npier = AR9300_NUM_5G_CAL_PIERS;
else
if (is2ghz)
npier = AR9300_NUM_2G_CAL_PIERS;
else
npier = AR9300_NUM_5G_CAL_PIERS;
for (ichain = 0; ichain < AR9300_MAX_CHAINS; ichain++) {
lfrequency[ichain] = 0;
@@ -4990,7 +4986,7 @@ static int ar9003_hw_calibration_apply(struct ath_hw *ah, int frequency)
/* identify best lower and higher frequency calibration measurement */
for (ichain = 0; ichain < AR9300_MAX_CHAINS; ichain++) {
for (ipier = 0; ipier < npier; ipier++) {
if (!ar9003_hw_cal_pier_get(ah, mode, ipier, ichain,
if (!ar9003_hw_cal_pier_get(ah, is2ghz, ipier, ichain,
&pfrequency, &pcorrection,
&ptemperature, &pvoltage,
&pnf_cal, &pnf_pwr)) {
@@ -5126,13 +5122,13 @@ static int ar9003_hw_calibration_apply(struct ath_hw *ah, int frequency)
frequency, correction[0], correction[1], correction[2]);
/* Store calibrated noise floor values */
for (ichain = 0; ichain < AR5416_MAX_CHAINS; ichain++)
if (mode) {
ah->nf_5g.cal[ichain] = nf_cal[ichain];
ah->nf_5g.pwr[ichain] = nf_pwr[ichain];
} else {
for (ichain = 0; ichain < AR9300_MAX_CHAINS; ichain++)
if (is2ghz) {
ah->nf_2g.cal[ichain] = nf_cal[ichain];
ah->nf_2g.pwr[ichain] = nf_pwr[ichain];
} else {
ah->nf_5g.cal[ichain] = nf_cal[ichain];
ah->nf_5g.pwr[ichain] = nf_pwr[ichain];
}
return 0;
@@ -5449,8 +5445,6 @@ static void ath9k_hw_ar9300_set_txpower(struct ath_hw *ah,
{
struct ath_regulatory *regulatory = ath9k_hw_regulatory(ah);
struct ath_common *common = ath9k_hw_common(ah);
struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
struct ar9300_modal_eep_header *modal_hdr;
u8 targetPowerValT2[ar9300RateSize];
u8 target_power_val_t2_eep[ar9300RateSize];
u8 targetPowerValT2_tpc[ar9300RateSize];
@@ -5465,17 +5459,12 @@ static void ath9k_hw_ar9300_set_txpower(struct ath_hw *ah,
ar9003_hw_get_target_power_eeprom(ah, chan, targetPowerValT2);
if (ar9003_is_paprd_enabled(ah)) {
if (IS_CHAN_2GHZ(chan))
modal_hdr = &eep->modalHeader2G;
else
modal_hdr = &eep->modalHeader5G;
ah->paprd_ratemask =
le32_to_cpu(modal_hdr->papdRateMaskHt20) &
ar9003_get_paprd_rate_mask_ht20(ah, IS_CHAN_2GHZ(chan)) &
AR9300_PAPRD_RATE_MASK;
ah->paprd_ratemask_ht40 =
le32_to_cpu(modal_hdr->papdRateMaskHt40) &
ar9003_get_paprd_rate_mask_ht40(ah, IS_CHAN_2GHZ(chan)) &
AR9300_PAPRD_RATE_MASK;
paprd_scale_factor = ar9003_get_paprd_scale_factor(ah, chan);
@@ -5592,30 +5581,40 @@ u8 *ar9003_get_spur_chan_ptr(struct ath_hw *ah, bool is2ghz)
return ar9003_modal_header(ah, is2ghz)->spurChans;
}
u32 ar9003_get_paprd_rate_mask_ht20(struct ath_hw *ah, bool is2ghz)
{
return le32_to_cpu(ar9003_modal_header(ah, is2ghz)->papdRateMaskHt20);
}
u32 ar9003_get_paprd_rate_mask_ht40(struct ath_hw *ah, bool is2ghz)
{
return le32_to_cpu(ar9003_modal_header(ah, is2ghz)->papdRateMaskHt40);
}
unsigned int ar9003_get_paprd_scale_factor(struct ath_hw *ah,
struct ath9k_channel *chan)
{
struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
bool is2ghz = IS_CHAN_2GHZ(chan);
if (IS_CHAN_2GHZ(chan))
return MS(le32_to_cpu(eep->modalHeader2G.papdRateMaskHt20),
if (is2ghz)
return MS(ar9003_get_paprd_rate_mask_ht20(ah, is2ghz),
AR9300_PAPRD_SCALE_1);
else {
if (chan->channel >= 5700)
return MS(le32_to_cpu(eep->modalHeader5G.papdRateMaskHt20),
return MS(ar9003_get_paprd_rate_mask_ht20(ah, is2ghz),
AR9300_PAPRD_SCALE_1);
else if (chan->channel >= 5400)
return MS(le32_to_cpu(eep->modalHeader5G.papdRateMaskHt40),
return MS(ar9003_get_paprd_rate_mask_ht40(ah, is2ghz),
AR9300_PAPRD_SCALE_2);
else
return MS(le32_to_cpu(eep->modalHeader5G.papdRateMaskHt40),
return MS(ar9003_get_paprd_rate_mask_ht40(ah, is2ghz),
AR9300_PAPRD_SCALE_1);
}
}
static u8 ar9003_get_eepmisc(struct ath_hw *ah)
{
return ah->eeprom.map4k.baseEepHeader.eepMisc;
return ah->eeprom.ar9300_eep.baseEepHeader.opCapFlags.eepMisc;
}
const struct eeprom_ops eep_ar9300_ops = {

@ -363,6 +363,8 @@ u32 ar9003_hw_ant_ctrl_common_2_get(struct ath_hw *ah, bool is2ghz);
u8 *ar9003_get_spur_chan_ptr(struct ath_hw *ah, bool is_2ghz);
u32 ar9003_get_paprd_rate_mask_ht20(struct ath_hw *ah, bool is2ghz);
u32 ar9003_get_paprd_rate_mask_ht40(struct ath_hw *ah, bool is2ghz);
unsigned int ar9003_get_paprd_scale_factor(struct ath_hw *ah,
struct ath9k_channel *chan);

@ -144,10 +144,11 @@ ar9003_set_txdesc(struct ath_hw *ah, void *ds, struct ath_tx_info *i)
WRITE_ONCE(ads->ctl16, set11nPktDurRTSCTS(i->rates, 2)
| set11nPktDurRTSCTS(i->rates, 3));
WRITE_ONCE(ads->ctl18, set11nRateFlags(i->rates, 0)
| set11nRateFlags(i->rates, 1)
| set11nRateFlags(i->rates, 2)
| set11nRateFlags(i->rates, 3)
WRITE_ONCE(ads->ctl18,
set11nRateFlags(i->rates, 0) | set11nChainSel(i->rates, 0)
| set11nRateFlags(i->rates, 1) | set11nChainSel(i->rates, 1)
| set11nRateFlags(i->rates, 2) | set11nChainSel(i->rates, 2)
| set11nRateFlags(i->rates, 3) | set11nChainSel(i->rates, 3)
| SM(i->rtscts_rate, AR_RTSCTSRate));
WRITE_ONCE(ads->ctl19, AR_Not_Sounding);

@ -21,7 +21,7 @@
void ar9003_paprd_enable(struct ath_hw *ah, bool val)
{
struct ath9k_channel *chan = ah->curchan;
struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
bool is2ghz = IS_CHAN_2GHZ(chan);
/*
* 3 bits for modalHeader5G.papdRateMaskHt20
@ -36,17 +36,17 @@ void ar9003_paprd_enable(struct ath_hw *ah, bool val)
* -- disable PAPRD for lower band 5GHz
*/
if (IS_CHAN_5GHZ(chan)) {
if (!is2ghz) {
if (chan->channel >= UPPER_5G_SUB_BAND_START) {
if (le32_to_cpu(eep->modalHeader5G.papdRateMaskHt20)
if (ar9003_get_paprd_rate_mask_ht20(ah, is2ghz)
& BIT(30))
val = false;
} else if (chan->channel >= MID_5G_SUB_BAND_START) {
if (le32_to_cpu(eep->modalHeader5G.papdRateMaskHt20)
if (ar9003_get_paprd_rate_mask_ht20(ah, is2ghz)
& BIT(29))
val = false;
} else {
if (le32_to_cpu(eep->modalHeader5G.papdRateMaskHt20)
if (ar9003_get_paprd_rate_mask_ht20(ah, is2ghz)
& BIT(28))
val = false;
}
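
A stand-alone sketch of the 5 GHz sub-band gating shown above: one bit in the HT20 PAPRD rate mask disables PAPRD per sub-band. The bit positions (28..30) follow the hunk; the sub-band thresholds are assumptions modelled on the 5400/5700 MHz boundaries used in the scale-factor code earlier, and the constant names only mirror the kernel ones:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BIT(n)                  (1u << (n))
/* Assumed values; the real constants live in the ath9k headers. */
#define MID_5G_SUB_BAND_START   5400
#define UPPER_5G_SUB_BAND_START 5700

/* Return true when PAPRD may stay enabled for this 5 GHz channel. */
static bool paprd_allowed_5ghz(uint32_t ht20_mask, int chan_mhz)
{
        if (chan_mhz >= UPPER_5G_SUB_BAND_START)
                return !(ht20_mask & BIT(30));   /* upper sub-band disable bit */
        if (chan_mhz >= MID_5G_SUB_BAND_START)
                return !(ht20_mask & BIT(29));   /* mid sub-band disable bit */
        return !(ht20_mask & BIT(28));           /* lower sub-band disable bit */
}

int main(void)
{
        uint32_t mask = BIT(29);                 /* disable only the mid sub-band */

        printf("5200 MHz: %d\n", paprd_allowed_5ghz(mask, 5200));  /* 1 */
        printf("5500 MHz: %d\n", paprd_allowed_5ghz(mask, 5500));  /* 0 */
        printf("5745 MHz: %d\n", paprd_allowed_5ghz(mask, 5745));  /* 1 */
        return 0;
}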

@ -523,21 +523,10 @@ static void ar9003_hw_spur_mitigate_ofdm(struct ath_hw *ah,
int synth_freq;
int range = 10;
int freq_offset = 0;
int mode;
u8* spurChansPtr;
u8 *spur_fbin_ptr = ar9003_get_spur_chan_ptr(ah, IS_CHAN_2GHZ(chan));
unsigned int i;
struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep;
if (IS_CHAN_5GHZ(chan)) {
spurChansPtr = &(eep->modalHeader5G.spurChans[0]);
mode = 0;
}
else {
spurChansPtr = &(eep->modalHeader2G.spurChans[0]);
mode = 1;
}
if (spurChansPtr[0] == 0)
if (spur_fbin_ptr[0] == 0)
return; /* No spur in the mode */
if (IS_CHAN_HT40(chan)) {
@ -554,16 +543,18 @@ static void ar9003_hw_spur_mitigate_ofdm(struct ath_hw *ah,
ar9003_hw_spur_ofdm_clear(ah);
for (i = 0; i < AR_EEPROM_MODAL_SPURS && spurChansPtr[i]; i++) {
freq_offset = ath9k_hw_fbin2freq(spurChansPtr[i], mode);
for (i = 0; i < AR_EEPROM_MODAL_SPURS && spur_fbin_ptr[i]; i++) {
freq_offset = ath9k_hw_fbin2freq(spur_fbin_ptr[i],
IS_CHAN_2GHZ(chan));
freq_offset -= synth_freq;
if (abs(freq_offset) < range) {
ar9003_hw_spur_ofdm_work(ah, chan, freq_offset,
range, synth_freq);
if (AR_SREV_9565(ah) && (i < 4)) {
freq_offset = ath9k_hw_fbin2freq(spurChansPtr[i + 1],
mode);
freq_offset =
ath9k_hw_fbin2freq(spur_fbin_ptr[i + 1],
IS_CHAN_2GHZ(chan));
freq_offset -= synth_freq;
if (abs(freq_offset) < range)
ar9003_hw_spur_ofdm_9565(ah, freq_offset);

@ -720,7 +720,7 @@
#define AR_CH0_TOP2 (AR_SREV_9300(ah) ? 0x1628c : \
(AR_SREV_9462(ah) ? 0x16290 : 0x16284))
#define AR_CH0_TOP2_XPABIASLVL (AR_SREV_9561(ah) ? 0x1e00 : 0xf000)
#define AR_CH0_TOP2_XPABIASLVL_S 12
#define AR_CH0_TOP2_XPABIASLVL_S (AR_SREV_9561(ah) ? 9 : 12)
#define AR_CH0_XTAL (AR_SREV_9300(ah) ? 0x16294 : \
((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16298 : \

@ -36,7 +36,7 @@ static ssize_t read_file_node_aggr(struct file *file, char __user *user_buf,
if (buf == NULL)
return -ENOMEM;
if (!an->sta->ht_cap.ht_supported) {
if (!an->sta->deflink.ht_cap.ht_supported) {
len = scnprintf(buf, size, "%s\n",
"HT not supported");
goto exit;
@ -186,7 +186,7 @@ static ssize_t read_file_node_recv(struct file *file, char __user *user_buf,
band = ah->curchan->chan->band;
rstats = &an->rx_rate_stats;
if (!sta->ht_cap.ht_supported)
if (!sta->deflink.ht_cap.ht_supported)
goto legacy;
len += scnprintf(buf + len, size - len,

@ -368,10 +368,9 @@ static int __hif_usb_tx(struct hif_device_usb *hif_dev)
__skb_queue_head_init(&tx_buf->skb_queue);
list_move_tail(&tx_buf->list, &hif_dev->tx.tx_buf);
hif_dev->tx.tx_buf_cnt++;
}
if (!ret)
} else {
TX_STAT_INC(buf_queued);
}
return ret;
}

@ -491,7 +491,7 @@ static int ath9k_htc_add_station(struct ath9k_htc_priv *priv,
ista->index = sta_idx;
tsta.is_vif_sta = 0;
maxampdu = 1 << (IEEE80211_HT_MAX_AMPDU_FACTOR +
sta->ht_cap.ampdu_factor);
sta->deflink.ht_cap.ampdu_factor);
tsta.maxampdu = cpu_to_be16(maxampdu);
} else {
memcpy(&tsta.macaddr, vif->addr, ETH_ALEN);
@ -602,7 +602,7 @@ static void ath9k_htc_setup_rate(struct ath9k_htc_priv *priv,
sband = priv->hw->wiphy->bands[priv->hw->conf.chandef.chan->band];
for (i = 0, j = 0; i < sband->n_bitrates; i++) {
if (sta->supp_rates[sband->band] & BIT(i)) {
if (sta->deflink.supp_rates[sband->band] & BIT(i)) {
trate->rates.legacy_rates.rs_rates[j]
= (sband->bitrates[i].bitrate * 2) / 10;
j++;
@ -610,9 +610,9 @@ static void ath9k_htc_setup_rate(struct ath9k_htc_priv *priv,
}
trate->rates.legacy_rates.rs_nrates = j;
if (sta->ht_cap.ht_supported) {
if (sta->deflink.ht_cap.ht_supported) {
for (i = 0, j = 0; i < 77; i++) {
if (sta->ht_cap.mcs.rx_mask[i/8] & (1<<(i%8)))
if (sta->deflink.ht_cap.mcs.rx_mask[i/8] & (1<<(i%8)))
trate->rates.ht_rates.rs_rates[j++] = i;
if (j == ATH_HTC_RATE_MAX)
break;
@ -620,18 +620,18 @@ static void ath9k_htc_setup_rate(struct ath9k_htc_priv *priv,
trate->rates.ht_rates.rs_nrates = j;
caps = WLAN_RC_HT_FLAG;
if (sta->ht_cap.cap & IEEE80211_HT_CAP_RX_STBC)
if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_RX_STBC)
caps |= ATH_RC_TX_STBC_FLAG;
if (sta->ht_cap.mcs.rx_mask[1])
if (sta->deflink.ht_cap.mcs.rx_mask[1])
caps |= WLAN_RC_DS_FLAG;
if ((sta->ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40) &&
(conf_is_ht40(&priv->hw->conf)))
if ((sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40) &&
(conf_is_ht40(&priv->hw->conf)))
caps |= WLAN_RC_40_FLAG;
if (conf_is_ht40(&priv->hw->conf) &&
(sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40))
(sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_SGI_40))
caps |= WLAN_RC_SGI_FLAG;
else if (conf_is_ht20(&priv->hw->conf) &&
(sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20))
(sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_SGI_20))
caps |= WLAN_RC_SGI_FLAG;
}
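
These sta->ht_cap / sta->supp_rates reads, like the matching hunks in the other drivers below, now go through sta->deflink, the default-link substruct added while preparing station handling for Multi-Link Operation. A reduced sketch of the data-structure shape, with simplified stand-ins for the real mac80211 structs:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the mac80211 structures touched in these hunks. */
struct ht_cap {
        bool     ht_supported;
        uint16_t cap;
        uint8_t  ampdu_factor;
};

struct link_sta {                 /* per-link data, one instance per MLO link */
        struct ht_cap ht_cap;
        uint32_t      supp_rates[2];
};

struct sta {
        uint8_t         addr[6];
        struct link_sta deflink;  /* the default link used by non-MLO drivers */
};

int main(void)
{
        struct sta sta = {
                .deflink.ht_cap = { .ht_supported = true, .ampdu_factor = 3 },
        };

        /* What used to be sta.ht_cap.ampdu_factor is now read via deflink. */
        if (sta.deflink.ht_cap.ht_supported)
                printf("ampdu_factor = %d\n", sta.deflink.ht_cap.ampdu_factor);
        return 0;
}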

@ -1016,6 +1016,14 @@ static bool ath9k_rx_prepare(struct ath9k_htc_priv *priv,
goto rx_next;
}
if (rxstatus->rs_keyix >= ATH_KEYMAX &&
rxstatus->rs_keyix != ATH9K_RXKEYIX_INVALID) {
ath_dbg(common, ANY,
"Invalid keyix, dropping (keyix: %d)\n",
rxstatus->rs_keyix);
goto rx_next;
}
/* Get the RX status information */
memset(rx_status, 0, sizeof(struct ieee80211_rx_status));

@ -35,8 +35,10 @@
|((_series)[_index].RateFlags & ATH9K_RATESERIES_HALFGI ? \
AR_GI##_index : 0) \
|((_series)[_index].RateFlags & ATH9K_RATESERIES_STBC ? \
AR_STBC##_index : 0) \
|SM((_series)[_index].ChSel, AR_ChainSel##_index))
AR_STBC##_index : 0))
#define set11nChainSel(_series, _index) \
(SM((_series)[_index].ChSel, AR_ChainSel##_index))
#define CCK_SIFS_TIME 10
#define CCK_PREAMBLE_BITS 144
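
The chain-select bits are split out of set11nRateFlags into a separate set11nChainSel macro, which the ctl18 hunk earlier combines with the rate flags explicitly. A small user-space sketch of the shift-and-mask packing involved; the field mask and shift values are invented, and SM is assumed to be the usual "(value << FIELD_S) & FIELD" helper:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Invented field definition for rate series 0; real values live in the ath9k headers. */
#define AR_ChainSel0     0x0001c000
#define AR_ChainSel0_S   14

/* Assumed shape of the SM() helper: shift a value into a register field. */
#define SM(_v, _f)       (((uint32_t)(_v) << _f##_S) & _f)

struct rate_series { uint8_t ChSel; };

#define set11nChainSel(_series, _index) \
        (SM((_series)[_index].ChSel, AR_ChainSel##_index))

int main(void)
{
        struct rate_series rates[1] = { { .ChSel = 0x5 } };   /* chains 0 and 2 */
        uint32_t ctl18 = set11nChainSel(rates, 0);

        printf("ctl18 chain-select bits = 0x%08" PRIx32 "\n", ctl18);  /* 0x00014000 */
        return 0;
}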

@ -2048,7 +2048,7 @@ static int ath9k_ampdu_action(struct ieee80211_hw *hw,
case IEEE80211_AMPDU_TX_OPERATIONAL:
atid = ath_node_to_tid(an, tid);
atid->baw_size = IEEE80211_MIN_AMPDU_BUF <<
sta->ht_cap.ampdu_factor;
sta->deflink.ht_cap.ampdu_factor;
break;
default:
ath_err(ath9k_hw_common(sc->sc_ah), "Unknown AMPDU action\n");

@ -834,8 +834,8 @@
((_ah)->hw_version.macRev >= AR_SREV_REVISION_5416_22)) || \
((_ah)->hw_version.macVersion >= AR_SREV_VERSION_9100))
#define AR_SREV_9100(ah) \
((ah->hw_version.macVersion) == AR_SREV_VERSION_9100)
#define AR_SREV_9100(_ah) \
(((_ah)->hw_version.macVersion == AR_SREV_VERSION_9100))
#define AR_SREV_9100_OR_LATER(_ah) \
(((_ah)->hw_version.macVersion >= AR_SREV_VERSION_9100))
@ -891,7 +891,7 @@
#define AR_SREV_9300_20_OR_LATER(_ah) \
((_ah)->hw_version.macVersion >= AR_SREV_VERSION_9300)
#define AR_SREV_9300_22(_ah) \
(AR_SREV_9300(ah) && \
(AR_SREV_9300((_ah)) && \
((_ah)->hw_version.macRev == AR_SREV_REVISION_9300_22))
#define AR_SREV_9330(_ah) \
@ -994,8 +994,8 @@
(((_ah)->hw_version.macVersion == AR_SREV_VERSION_9561))
#define AR_SREV_SOC(_ah) \
(AR_SREV_9340(_ah) || AR_SREV_9531(_ah) || AR_SREV_9550(ah) || \
AR_SREV_9561(ah))
(AR_SREV_9340(_ah) || AR_SREV_9531(_ah) || AR_SREV_9550(_ah) || \
AR_SREV_9561(_ah))
/* NOTE: When adding chips newer than Peacock, add chip check here */
#define AR_SREV_9580_10_OR_LATER(_ah) \

@ -1271,7 +1271,7 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
int phy;
if (!rates[i].count || (rates[i].idx < 0))
continue;
break;
rix = rates[i].idx;
info->rates[i].Tries = rates[i].count;
@ -1574,10 +1574,10 @@ int ath_tx_aggr_start(struct ath_softc *sc, struct ieee80211_sta *sta,
* in HT IBSS when a beacon with HT-info is received after the station
* has already been added.
*/
if (sta->ht_cap.ht_supported) {
if (sta->deflink.ht_cap.ht_supported) {
an->maxampdu = (1 << (IEEE80211_HT_MAX_AMPDU_FACTOR +
sta->ht_cap.ampdu_factor)) - 1;
density = ath9k_parse_mpdudensity(sta->ht_cap.ampdu_density);
sta->deflink.ht_cap.ampdu_factor)) - 1;
density = ath9k_parse_mpdudensity(sta->deflink.ht_cap.ampdu_density);
an->mpdudensity = density;
}

@ -1306,8 +1306,8 @@ static int carl9170_op_sta_add(struct ieee80211_hw *hw,
atomic_set(&sta_info->pending_frames, 0);
if (sta->ht_cap.ht_supported) {
if (sta->ht_cap.ampdu_density > 6) {
if (sta->deflink.ht_cap.ht_supported) {
if (sta->deflink.ht_cap.ampdu_density > 6) {
/*
* HW does support 16us AMPDU density.
* No HT-Xmit for station.
@ -1319,7 +1319,7 @@ static int carl9170_op_sta_add(struct ieee80211_hw *hw,
for (i = 0; i < ARRAY_SIZE(sta_info->agg); i++)
RCU_INIT_POINTER(sta_info->agg[i], NULL);
sta_info->ampdu_max_len = 1 << (3 + sta->ht_cap.ampdu_factor);
sta_info->ampdu_max_len = 1 << (3 + sta->deflink.ht_cap.ampdu_factor);
sta_info->ht_sta = true;
}
@ -1335,7 +1335,7 @@ static int carl9170_op_sta_remove(struct ieee80211_hw *hw,
unsigned int i;
bool cleanup = false;
if (sta->ht_cap.ht_supported) {
if (sta->deflink.ht_cap.ht_supported) {
sta_info->ht_sta = false;

@ -1044,8 +1044,9 @@ static int carl9170_tx_prepare(struct ar9170 *ar,
if (unlikely(!sta || !cvif))
goto err_out;
factor = min_t(unsigned int, 1u, sta->ht_cap.ampdu_factor);
density = sta->ht_cap.ampdu_density;
factor = min_t(unsigned int, 1u,
sta->deflink.ht_cap.ampdu_factor);
density = sta->deflink.ht_cap.ampdu_density;
if (density) {
/*
@ -1558,6 +1559,9 @@ static struct carl9170_vif_info *carl9170_pick_beaconing_vif(struct ar9170 *ar)
goto out;
}
} while (ar->beacon_enabled && i--);
/* no entry found in list */
return NULL;
}
out:

@ -2626,7 +2626,12 @@ enum tx_rate_info {
HAL_TX_RATE_SGI = 0x8,
/* Rate with Long guard interval */
HAL_TX_RATE_LGI = 0x10
HAL_TX_RATE_LGI = 0x10,
/* VHT rates */
HAL_TX_RATE_VHT20 = 0x20,
HAL_TX_RATE_VHT40 = 0x40,
HAL_TX_RATE_VHT80 = 0x80,
};
struct ani_global_class_a_stats_info {

@ -192,70 +192,74 @@ static inline u8 get_sta_index(struct ieee80211_vif *vif,
sta_priv->sta_index;
}
#define DEFINE(s) [s] = #s
static const char * const wcn36xx_caps_names[] = {
"MCC", /* 0 */
"P2P", /* 1 */
"DOT11AC", /* 2 */
"SLM_SESSIONIZATION", /* 3 */
"DOT11AC_OPMODE", /* 4 */
"SAP32STA", /* 5 */
"TDLS", /* 6 */
"P2P_GO_NOA_DECOUPLE_INIT_SCAN",/* 7 */
"WLANACTIVE_OFFLOAD", /* 8 */
"BEACON_OFFLOAD", /* 9 */
"SCAN_OFFLOAD", /* 10 */
"ROAM_OFFLOAD", /* 11 */
"BCN_MISS_OFFLOAD", /* 12 */
"STA_POWERSAVE", /* 13 */
"STA_ADVANCED_PWRSAVE", /* 14 */
"AP_UAPSD", /* 15 */
"AP_DFS", /* 16 */
"BLOCKACK", /* 17 */
"PHY_ERR", /* 18 */
"BCN_FILTER", /* 19 */
"RTT", /* 20 */
"RATECTRL", /* 21 */
"WOW", /* 22 */
"WLAN_ROAM_SCAN_OFFLOAD", /* 23 */
"SPECULATIVE_PS_POLL", /* 24 */
"SCAN_SCH", /* 25 */
"IBSS_HEARTBEAT_OFFLOAD", /* 26 */
"WLAN_SCAN_OFFLOAD", /* 27 */
"WLAN_PERIODIC_TX_PTRN", /* 28 */
"ADVANCE_TDLS", /* 29 */
"BATCH_SCAN", /* 30 */
"FW_IN_TX_PATH", /* 31 */
"EXTENDED_NSOFFLOAD_SLOT", /* 32 */
"CH_SWITCH_V1", /* 33 */
"HT40_OBSS_SCAN", /* 34 */
"UPDATE_CHANNEL_LIST", /* 35 */
"WLAN_MCADDR_FLT", /* 36 */
"WLAN_CH144", /* 37 */
"NAN", /* 38 */
"TDLS_SCAN_COEXISTENCE", /* 39 */
"LINK_LAYER_STATS_MEAS", /* 40 */
"MU_MIMO", /* 41 */
"EXTENDED_SCAN", /* 42 */
"DYNAMIC_WMM_PS", /* 43 */
"MAC_SPOOFED_SCAN", /* 44 */
"BMU_ERROR_GENERIC_RECOVERY", /* 45 */
"DISA", /* 46 */
"FW_STATS", /* 47 */
"WPS_PRBRSP_TMPL", /* 48 */
"BCN_IE_FLT_DELTA", /* 49 */
"TDLS_OFF_CHANNEL", /* 51 */
"RTT3", /* 52 */
"MGMT_FRAME_LOGGING", /* 53 */
"ENHANCED_TXBD_COMPLETION", /* 54 */
"LOGGING_ENHANCEMENT", /* 55 */
"EXT_SCAN_ENHANCED", /* 56 */
"MEMORY_DUMP_SUPPORTED", /* 57 */
"PER_PKT_STATS_SUPPORTED", /* 58 */
"EXT_LL_STAT", /* 60 */
"WIFI_CONFIG", /* 61 */
"ANTENNA_DIVERSITY_SELECTION", /* 62 */
DEFINE(MCC),
DEFINE(P2P),
DEFINE(DOT11AC),
DEFINE(SLM_SESSIONIZATION),
DEFINE(DOT11AC_OPMODE),
DEFINE(SAP32STA),
DEFINE(TDLS),
DEFINE(P2P_GO_NOA_DECOUPLE_INIT_SCAN),
DEFINE(WLANACTIVE_OFFLOAD),
DEFINE(BEACON_OFFLOAD),
DEFINE(SCAN_OFFLOAD),
DEFINE(ROAM_OFFLOAD),
DEFINE(BCN_MISS_OFFLOAD),
DEFINE(STA_POWERSAVE),
DEFINE(STA_ADVANCED_PWRSAVE),
DEFINE(AP_UAPSD),
DEFINE(AP_DFS),
DEFINE(BLOCKACK),
DEFINE(PHY_ERR),
DEFINE(BCN_FILTER),
DEFINE(RTT),
DEFINE(RATECTRL),
DEFINE(WOW),
DEFINE(WLAN_ROAM_SCAN_OFFLOAD),
DEFINE(SPECULATIVE_PS_POLL),
DEFINE(SCAN_SCH),
DEFINE(IBSS_HEARTBEAT_OFFLOAD),
DEFINE(WLAN_SCAN_OFFLOAD),
DEFINE(WLAN_PERIODIC_TX_PTRN),
DEFINE(ADVANCE_TDLS),
DEFINE(BATCH_SCAN),
DEFINE(FW_IN_TX_PATH),
DEFINE(EXTENDED_NSOFFLOAD_SLOT),
DEFINE(CH_SWITCH_V1),
DEFINE(HT40_OBSS_SCAN),
DEFINE(UPDATE_CHANNEL_LIST),
DEFINE(WLAN_MCADDR_FLT),
DEFINE(WLAN_CH144),
DEFINE(NAN),
DEFINE(TDLS_SCAN_COEXISTENCE),
DEFINE(LINK_LAYER_STATS_MEAS),
DEFINE(MU_MIMO),
DEFINE(EXTENDED_SCAN),
DEFINE(DYNAMIC_WMM_PS),
DEFINE(MAC_SPOOFED_SCAN),
DEFINE(BMU_ERROR_GENERIC_RECOVERY),
DEFINE(DISA),
DEFINE(FW_STATS),
DEFINE(WPS_PRBRSP_TMPL),
DEFINE(BCN_IE_FLT_DELTA),
DEFINE(TDLS_OFF_CHANNEL),
DEFINE(RTT3),
DEFINE(MGMT_FRAME_LOGGING),
DEFINE(ENHANCED_TXBD_COMPLETION),
DEFINE(LOGGING_ENHANCEMENT),
DEFINE(EXT_SCAN_ENHANCED),
DEFINE(MEMORY_DUMP_SUPPORTED),
DEFINE(PER_PKT_STATS_SUPPORTED),
DEFINE(EXT_LL_STAT),
DEFINE(WIFI_CONFIG),
DEFINE(ANTENNA_DIVERSITY_SELECTION),
};
#undef DEFINE
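
The DEFINE(s) helper builds the capability-name table as designated initializers keyed by the enum value itself, so the strings cannot drift relative to the enum the way the old positional list with hand-written index comments could (note the jumps from 49 to 51 and from 58 to 60 above). A minimal sketch of the same pattern with a made-up enum:

#include <stdio.h>

/* Made-up capability enum standing in for the firmware capability bitmap. */
enum demo_caps {
        MCC,
        P2P,
        DOT11AC,
        DEMO_CAPS_MAX
};

/* [s] = #s  ->  designated initializer at index 's' holding the name "s". */
#define DEFINE(s) [s] = #s

static const char * const demo_cap_names[] = {
        DEFINE(MCC),
        DEFINE(P2P),
        DEFINE(DOT11AC),
};

#undef DEFINE

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const char *demo_get_cap_name(enum demo_caps x)
{
        if (x >= ARRAY_SIZE(demo_cap_names) || !demo_cap_names[x])
                return "UNKNOWN";
        return demo_cap_names[x];
}

int main(void)
{
        printf("%s\n", demo_get_cap_name(P2P));           /* P2P */
        printf("%s\n", demo_get_cap_name(DEMO_CAPS_MAX)); /* UNKNOWN */
        return 0;
}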
static const char *wcn36xx_get_cap_name(enum place_holder_in_cap_bitmap x)
{
if (x >= ARRAY_SIZE(wcn36xx_caps_names))
@ -788,7 +792,7 @@ static void wcn36xx_update_allowed_rates(struct ieee80211_sta *sta,
int i, size;
u16 *rates_table;
struct wcn36xx_sta *sta_priv = wcn36xx_sta_to_priv(sta);
u32 rates = sta->supp_rates[band];
u32 rates = sta->deflink.supp_rates[band];
memset(&sta_priv->supported_rates, 0,
sizeof(sta_priv->supported_rates));
@ -814,20 +818,20 @@ static void wcn36xx_update_allowed_rates(struct ieee80211_sta *sta,
}
}
if (sta->ht_cap.ht_supported) {
BUILD_BUG_ON(sizeof(sta->ht_cap.mcs.rx_mask) >
sizeof(sta_priv->supported_rates.supported_mcs_set));
if (sta->deflink.ht_cap.ht_supported) {
BUILD_BUG_ON(sizeof(sta->deflink.ht_cap.mcs.rx_mask) >
sizeof(sta_priv->supported_rates.supported_mcs_set));
memcpy(sta_priv->supported_rates.supported_mcs_set,
sta->ht_cap.mcs.rx_mask,
sizeof(sta->ht_cap.mcs.rx_mask));
sta->deflink.ht_cap.mcs.rx_mask,
sizeof(sta->deflink.ht_cap.mcs.rx_mask));
}
if (sta->vht_cap.vht_supported) {
if (sta->deflink.vht_cap.vht_supported) {
sta_priv->supported_rates.op_rate_mode = STA_11ac;
sta_priv->supported_rates.vht_rx_mcs_map =
sta->vht_cap.vht_mcs.rx_mcs_map;
sta->deflink.vht_cap.vht_mcs.rx_mcs_map;
sta_priv->supported_rates.vht_tx_mcs_map =
sta->vht_cap.vht_mcs.tx_mcs_map;
sta->deflink.vht_cap.vht_mcs.tx_mcs_map;
}
}
@ -1400,6 +1404,21 @@ static int wcn36xx_get_survey(struct ieee80211_hw *hw, int idx,
return 0;
}
static void wcn36xx_sta_statistics(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct ieee80211_sta *sta, struct station_info *sinfo)
{
struct wcn36xx *wcn;
u8 sta_index;
int status;
wcn = hw->priv;
sta_index = get_sta_index(vif, wcn36xx_sta_to_priv(sta));
status = wcn36xx_smd_get_stats(wcn, sta_index, HAL_GLOBAL_CLASS_A_STATS_INFO, sinfo);
if (status)
wcn36xx_err("wcn36xx_smd_get_stats failed\n");
}
static const struct ieee80211_ops wcn36xx_ops = {
.start = wcn36xx_start,
.stop = wcn36xx_stop,
@ -1423,6 +1442,7 @@ static const struct ieee80211_ops wcn36xx_ops = {
.set_rts_threshold = wcn36xx_set_rts_threshold,
.sta_add = wcn36xx_sta_add,
.sta_remove = wcn36xx_sta_remove,
.sta_statistics = wcn36xx_sta_statistics,
.ampdu_action = wcn36xx_ampdu_action,
#if IS_ENABLED(CONFIG_IPV6)
.ipv6_addr_change = wcn36xx_ipv6_addr_change,

@ -208,9 +208,9 @@ static void wcn36xx_smd_set_bss_nw_type(struct wcn36xx *wcn,
{
if (NL80211_BAND_5GHZ == WCN36XX_BAND(wcn))
bss_params->nw_type = WCN36XX_HAL_11A_NW_TYPE;
else if (sta && sta->ht_cap.ht_supported)
else if (sta && sta->deflink.ht_cap.ht_supported)
bss_params->nw_type = WCN36XX_HAL_11N_NW_TYPE;
else if (sta && (sta->supp_rates[NL80211_BAND_2GHZ] & 0x7f))
else if (sta && (sta->deflink.supp_rates[NL80211_BAND_2GHZ] & 0x7f))
bss_params->nw_type = WCN36XX_HAL_11G_NW_TYPE;
else
bss_params->nw_type = WCN36XX_HAL_11B_NW_TYPE;
@ -225,9 +225,10 @@ static void wcn36xx_smd_set_bss_ht_params(struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
struct wcn36xx_hal_config_bss_params *bss_params)
{
if (sta && sta->ht_cap.ht_supported) {
unsigned long caps = sta->ht_cap.cap;
bss_params->ht = sta->ht_cap.ht_supported;
if (sta && sta->deflink.ht_cap.ht_supported) {
unsigned long caps = sta->deflink.ht_cap.cap;
bss_params->ht = sta->deflink.ht_cap.ht_supported;
bss_params->tx_channel_width_set = is_cap_supported(caps,
IEEE80211_HT_CAP_SUP_WIDTH_20_40);
bss_params->lsig_tx_op_protection_full_support =
@ -250,23 +251,24 @@ wcn36xx_smd_set_bss_vht_params(struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
struct wcn36xx_hal_config_bss_params_v1 *bss)
{
if (sta && sta->vht_cap.vht_supported)
if (sta && sta->deflink.vht_cap.vht_supported)
bss->vht_capable = 1;
}
static void wcn36xx_smd_set_sta_ht_params(struct ieee80211_sta *sta,
struct wcn36xx_hal_config_sta_params *sta_params)
{
if (sta->ht_cap.ht_supported) {
unsigned long caps = sta->ht_cap.cap;
sta_params->ht_capable = sta->ht_cap.ht_supported;
if (sta->deflink.ht_cap.ht_supported) {
unsigned long caps = sta->deflink.ht_cap.cap;
sta_params->ht_capable = sta->deflink.ht_cap.ht_supported;
sta_params->tx_channel_width_set = is_cap_supported(caps,
IEEE80211_HT_CAP_SUP_WIDTH_20_40);
sta_params->lsig_txop_protection = is_cap_supported(caps,
IEEE80211_HT_CAP_LSIG_TXOP_PROT);
sta_params->max_ampdu_size = sta->ht_cap.ampdu_factor;
sta_params->max_ampdu_density = sta->ht_cap.ampdu_density;
sta_params->max_ampdu_size = sta->deflink.ht_cap.ampdu_factor;
sta_params->max_ampdu_density = sta->deflink.ht_cap.ampdu_density;
/* max_amsdu_size: 1 : 3839 bytes, 0 : 7935 bytes (max) */
sta_params->max_amsdu_size = !is_cap_supported(caps,
IEEE80211_HT_CAP_MAX_AMSDU);
@ -287,10 +289,10 @@ static void wcn36xx_smd_set_sta_vht_params(struct wcn36xx *wcn,
struct ieee80211_sta *sta,
struct wcn36xx_hal_config_sta_params_v1 *sta_params)
{
if (sta->vht_cap.vht_supported) {
unsigned long caps = sta->vht_cap.cap;
if (sta->deflink.vht_cap.vht_supported) {
unsigned long caps = sta->deflink.vht_cap.cap;
sta_params->vht_capable = sta->vht_cap.vht_supported;
sta_params->vht_capable = sta->deflink.vht_cap.vht_supported;
sta_params->vht_ldpc_enabled =
is_cap_supported(caps, IEEE80211_VHT_CAP_RXLDPC);
if (get_feat_caps(wcn->fw_feat_caps, MU_MIMO)) {
@ -308,9 +310,10 @@ static void wcn36xx_smd_set_sta_vht_params(struct wcn36xx *wcn,
static void wcn36xx_smd_set_sta_ht_ldpc_params(struct ieee80211_sta *sta,
struct wcn36xx_hal_config_sta_params_v1 *sta_params)
{
if (sta->ht_cap.ht_supported) {
if (sta->deflink.ht_cap.ht_supported) {
sta_params->ht_ldpc_enabled =
is_cap_supported(sta->ht_cap.cap, IEEE80211_HT_CAP_LDPC_CODING);
is_cap_supported(sta->deflink.ht_cap.cap,
IEEE80211_HT_CAP_LDPC_CODING);
}
}
@ -2627,6 +2630,62 @@ out:
return ret;
}
int wcn36xx_smd_get_stats(struct wcn36xx *wcn, u8 sta_index, u32 stats_mask,
struct station_info *sinfo)
{
struct wcn36xx_hal_stats_req_msg msg_body;
struct wcn36xx_hal_stats_rsp_msg *rsp;
void *rsp_body;
int ret;
if (stats_mask & ~HAL_GLOBAL_CLASS_A_STATS_INFO) {
wcn36xx_err("stats_mask 0x%x contains unimplemented types\n",
stats_mask);
return -EINVAL;
}
mutex_lock(&wcn->hal_mutex);
INIT_HAL_MSG(msg_body, WCN36XX_HAL_GET_STATS_REQ);
msg_body.sta_id = sta_index;
msg_body.stats_mask = stats_mask;
PREPARE_HAL_BUF(wcn->hal_buf, msg_body);
ret = wcn36xx_smd_send_and_wait(wcn, msg_body.header.len);
if (ret) {
wcn36xx_err("sending hal_get_stats failed\n");
goto out;
}
ret = wcn36xx_smd_rsp_status_check(wcn->hal_buf, wcn->hal_rsp_len);
if (ret) {
wcn36xx_err("hal_get_stats response failed err=%d\n", ret);
goto out;
}
rsp = (struct wcn36xx_hal_stats_rsp_msg *)wcn->hal_buf;
rsp_body = (wcn->hal_buf + sizeof(struct wcn36xx_hal_stats_rsp_msg));
if (rsp->stats_mask != stats_mask) {
wcn36xx_err("stats_mask 0x%x differs from requested 0x%x\n",
rsp->stats_mask, stats_mask);
goto out;
}
if (rsp->stats_mask & HAL_GLOBAL_CLASS_A_STATS_INFO) {
struct ani_global_class_a_stats_info *stats_info = rsp_body;
wcn36xx_process_tx_rate(stats_info, &sinfo->txrate);
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
rsp_body += sizeof(struct ani_global_class_a_stats_info);
}
out:
mutex_unlock(&wcn->hal_mutex);
return ret;
}
static int wcn36xx_smd_trigger_ba_rsp(void *buf, int len, struct add_ba_info *ba_info)
{
struct wcn36xx_hal_trigger_ba_rsp_candidate *candidate;
@ -3092,9 +3151,9 @@ static int wcn36xx_smd_gtk_offload_get_info_rsp(struct wcn36xx *wcn,
cpu_to_le64(rsp->key_replay_counter);
ieee80211_gtk_rekey_notify(vif, vif->bss_conf.bssid,
(void *)&replay_ctr, GFP_KERNEL);
wcn36xx_dbg(WCN36XX_DBG_HAL,
"GTK replay counter increment %llu\n",
rsp->key_replay_counter);
wcn36xx_dbg(WCN36XX_DBG_HAL,
"GTK replay counter increment %llu\n",
rsp->key_replay_counter);
}
wcn36xx_dbg(WCN36XX_DBG_HAL,
@ -3316,6 +3375,7 @@ int wcn36xx_smd_rsp_process(struct rpmsg_device *rpdev,
case WCN36XX_HAL_ADD_BA_SESSION_RSP:
case WCN36XX_HAL_ADD_BA_RSP:
case WCN36XX_HAL_DEL_BA_RSP:
case WCN36XX_HAL_GET_STATS_RSP:
case WCN36XX_HAL_TRIGGER_BA_RSP:
case WCN36XX_HAL_UPDATE_CFG_RSP:
case WCN36XX_HAL_JOIN_RSP:

@ -138,6 +138,8 @@ int wcn36xx_smd_add_ba_session(struct wcn36xx *wcn,
int wcn36xx_smd_add_ba(struct wcn36xx *wcn, u8 session_id);
int wcn36xx_smd_del_ba(struct wcn36xx *wcn, u16 tid, u8 direction, u8 sta_index);
int wcn36xx_smd_trigger_ba(struct wcn36xx *wcn, u8 sta_index, u16 tid, u16 *ssn);
int wcn36xx_smd_get_stats(struct wcn36xx *wcn, u8 sta_index, u32 stats_mask,
struct station_info *sinfo);
int wcn36xx_smd_update_cfg(struct wcn36xx *wcn, u32 cfg_id, u32 value);

@ -699,3 +699,32 @@ int wcn36xx_start_tx(struct wcn36xx *wcn,
return ret;
}
void wcn36xx_process_tx_rate(struct ani_global_class_a_stats_info *stats, struct rate_info *info)
{
/* tx_rate is in units of 500kbps; mac80211 wants them in 100kbps */
if (stats->tx_rate_flags & HAL_TX_RATE_LEGACY)
info->legacy = stats->tx_rate * 5;
info->flags = 0;
info->mcs = stats->mcs_index;
info->nss = 1;
if (stats->tx_rate_flags & (HAL_TX_RATE_HT20 | HAL_TX_RATE_HT40))
info->flags |= RATE_INFO_FLAGS_MCS;
if (stats->tx_rate_flags & (HAL_TX_RATE_VHT20 | HAL_TX_RATE_VHT40 | HAL_TX_RATE_VHT80))
info->flags |= RATE_INFO_FLAGS_VHT_MCS;
if (stats->tx_rate_flags & HAL_TX_RATE_SGI)
info->flags |= RATE_INFO_FLAGS_SHORT_GI;
if (stats->tx_rate_flags & (HAL_TX_RATE_HT20 | HAL_TX_RATE_VHT20))
info->bw = RATE_INFO_BW_20;
if (stats->tx_rate_flags & (HAL_TX_RATE_HT40 | HAL_TX_RATE_VHT40))
info->bw = RATE_INFO_BW_40;
if (stats->tx_rate_flags & HAL_TX_RATE_VHT80)
info->bw = RATE_INFO_BW_80;
}
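
The new helper maps the firmware's tx_rate_flags onto mac80211's rate_info. A stand-alone sketch of the same mapping follows, with local stand-ins for the cfg80211 types and flag values; the SGI and VHT bits follow the HAL enum shown earlier, while the LEGACY/HT20/HT40 bit values are assumptions. The 500 kbps to 100 kbps legacy conversion is worked through for a 54 Mbps frame (tx_rate = 108 units of 500 kbps gives legacy = 540 units of 100 kbps):

#include <stdint.h>
#include <stdio.h>

/* Firmware flag bits; VHT/SGI follow the enum above, the rest are assumed. */
#define TX_RATE_LEGACY 0x01
#define TX_RATE_HT20   0x02
#define TX_RATE_HT40   0x04
#define TX_RATE_SGI    0x08
#define TX_RATE_VHT20  0x20
#define TX_RATE_VHT40  0x40
#define TX_RATE_VHT80  0x80

/* Stand-ins for the cfg80211 rate_info flags/bandwidths (values invented). */
#define FLAG_MCS      0x1
#define FLAG_VHT_MCS  0x2
#define FLAG_SHORT_GI 0x4
enum bw { BW_20, BW_40, BW_80 };

struct rate_info {
        uint16_t legacy;   /* units of 100 kbps */
        uint8_t  mcs, nss, flags;
        enum bw  bw;
};

static void process_tx_rate(uint32_t tx_rate, uint32_t rate_flags,
                            uint8_t mcs_index, struct rate_info *info)
{
        /* Firmware reports tx_rate in 500 kbps units; rate_info wants 100 kbps. */
        if (rate_flags & TX_RATE_LEGACY)
                info->legacy = tx_rate * 5;

        info->flags = 0;
        info->mcs = mcs_index;
        info->nss = 1;

        if (rate_flags & (TX_RATE_HT20 | TX_RATE_HT40))
                info->flags |= FLAG_MCS;
        if (rate_flags & (TX_RATE_VHT20 | TX_RATE_VHT40 | TX_RATE_VHT80))
                info->flags |= FLAG_VHT_MCS;
        if (rate_flags & TX_RATE_SGI)
                info->flags |= FLAG_SHORT_GI;

        if (rate_flags & (TX_RATE_HT20 | TX_RATE_VHT20))
                info->bw = BW_20;
        if (rate_flags & (TX_RATE_HT40 | TX_RATE_VHT40))
                info->bw = BW_40;
        if (rate_flags & TX_RATE_VHT80)
                info->bw = BW_80;
}

int main(void)
{
        struct rate_info info = { 0 };

        process_tx_rate(108, TX_RATE_LEGACY, 0, &info);
        printf("legacy rate = %u x 100 kbps (54 Mbps)\n", info.legacy); /* 540 */
        return 0;
}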

@ -164,5 +164,6 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb);
int wcn36xx_start_tx(struct wcn36xx *wcn,
struct wcn36xx_sta *sta_priv,
struct sk_buff *skb);
void wcn36xx_process_tx_rate(struct ani_global_class_a_stats_info *stats, struct rate_info *info);
#endif /* _TXRX_H_ */

@ -1653,10 +1653,9 @@ static int wil_cfg80211_add_key(struct wiphy *wiphy,
params->seq_len, params->seq);
return -EINVAL;
}
}
if (!IS_ERR(cs))
} else {
wil_del_rx_key(key_index, key_usage, cs);
}
if (params->seq && params->seq_len != IEEE80211_GCMP_PN_LEN) {
wil_err(wil,

@ -457,17 +457,17 @@ int wil_if_add(struct wil6210_priv *wil)
if (wil->use_enhanced_dma_hw) {
netif_napi_add(&wil->napi_ndev, &wil->napi_rx,
wil6210_netdev_poll_rx_edma,
WIL6210_NAPI_BUDGET);
NAPI_POLL_WEIGHT);
netif_tx_napi_add(&wil->napi_ndev,
&wil->napi_tx, wil6210_netdev_poll_tx_edma,
WIL6210_NAPI_BUDGET);
NAPI_POLL_WEIGHT);
} else {
netif_napi_add(&wil->napi_ndev, &wil->napi_rx,
wil6210_netdev_poll_rx,
WIL6210_NAPI_BUDGET);
NAPI_POLL_WEIGHT);
netif_tx_napi_add(&wil->napi_ndev,
&wil->napi_tx, wil6210_netdev_poll_tx,
WIL6210_NAPI_BUDGET);
NAPI_POLL_WEIGHT);
}
wil_update_net_queues_bh(wil, vif, NULL, true);

@ -445,10 +445,9 @@ int wil_pm_runtime_get(struct wil6210_priv *wil)
int rc;
struct device *dev = wil_to_dev(wil);
rc = pm_runtime_get_sync(dev);
rc = pm_runtime_resume_and_get(dev);
if (rc < 0) {
wil_err(wil, "pm_runtime_get_sync() failed, rc = %d\n", rc);
pm_runtime_put_noidle(dev);
wil_err(wil, "pm_runtime_resume_and_get() failed, rc = %d\n", rc);
return rc;
}

@ -82,7 +82,6 @@ static inline u32 WIL_GET_BITS(u32 x, int b0, int b1)
#define WIL6210_MAX_TX_RINGS (24) /* HW limit */
#define WIL6210_MAX_CID (20) /* max number of stations */
#define WIL6210_RX_DESC_MAX_CID (8) /* HW limit */
#define WIL6210_NAPI_BUDGET (16) /* arbitrary */
#define WIL_MAX_AMPDU_SIZE (64 * 1024) /* FW/HW limit */
#define WIL_MAX_AGG_WSIZE (32) /* FW/HW limit */
#define WIL_MAX_AMPDU_SIZE_128 (128 * 1024) /* FW/HW limit */

@ -582,7 +582,7 @@ static void b43_nphy_adjust_lna_gain_table(struct b43_wldev *dev)
u16 data[4];
s16 gain[2];
u16 minmax[2];
static const u16 lna_gain[4] = { -2, 10, 19, 25 };
static const s16 lna_gain[4] = { -2, 10, 19, 25 };
if (nphy->hang_avoid)
b43_nphy_stay_in_carrier_search(dev, 1);

@ -1123,7 +1123,7 @@ void b43legacy_phy_lo_b_measure(struct b43legacy_wldev *dev)
struct b43legacy_phy *phy = &dev->phy;
u16 regstack[12] = { 0 };
u16 mls;
u16 fval;
s16 fval;
int i;
int j;
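
Both type changes above (u16 to s16 for the b43 LNA gain table and for the b43legacy fval) address the same pitfall: a negative constant such as -2 stored in an unsigned 16-bit slot becomes 65534, and every later signed computation silently uses the wrong value. A two-line demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        static const uint16_t bad_gain[4]  = { (uint16_t)-2, 10, 19, 25 }; /* -2 wraps to 65534 */
        static const int16_t  good_gain[4] = { -2, 10, 19, 25 };

        /* A typical "gain + offset" computation gives nonsense with the u16 table. */
        printf("u16 table: %d\n", bad_gain[0] + 6);   /* 65540, not 4 */
        printf("s16 table: %d\n", good_gain[0] + 6);  /* 4 */
        return 0;
}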

@ -1119,9 +1119,21 @@ void brcmf_sdio_wowl_config(struct device *dev, bool enabled)
{
struct brcmf_bus *bus_if = dev_get_drvdata(dev);
struct brcmf_sdio_dev *sdiodev = bus_if->bus_priv.sdio;
mmc_pm_flag_t pm_caps = sdio_get_host_pm_caps(sdiodev->func1);
brcmf_dbg(SDIO, "Configuring WOWL, enabled=%d\n", enabled);
sdiodev->wowl_enabled = enabled;
/* Power must be preserved to be able to support WOWL. */
if (!(pm_caps & MMC_PM_KEEP_POWER))
goto notsup;
if (sdiodev->settings->bus.sdio.oob_irq_supported ||
pm_caps & MMC_PM_WAKE_SDIO_IRQ) {
sdiodev->wowl_enabled = enabled;
brcmf_dbg(SDIO, "Configuring WOWL, enabled=%d\n", enabled);
return;
}
notsup:
brcmf_dbg(SDIO, "WOWL not supported\n");
}
#ifdef CONFIG_PM_SLEEP
@ -1130,7 +1142,7 @@ static int brcmf_ops_sdio_suspend(struct device *dev)
struct sdio_func *func;
struct brcmf_bus *bus_if;
struct brcmf_sdio_dev *sdiodev;
mmc_pm_flag_t pm_caps, sdio_flags;
mmc_pm_flag_t sdio_flags;
int ret = 0;
func = container_of(dev, struct sdio_func, dev);
@ -1142,20 +1154,15 @@ static int brcmf_ops_sdio_suspend(struct device *dev)
bus_if = dev_get_drvdata(dev);
sdiodev = bus_if->bus_priv.sdio;
pm_caps = sdio_get_host_pm_caps(func);
if (pm_caps & MMC_PM_KEEP_POWER) {
/* preserve card power during suspend */
if (sdiodev->wowl_enabled) {
brcmf_sdiod_freezer_on(sdiodev);
brcmf_sdio_wd_timer(sdiodev->bus, 0);
sdio_flags = MMC_PM_KEEP_POWER;
if (sdiodev->wowl_enabled) {
if (sdiodev->settings->bus.sdio.oob_irq_supported)
enable_irq_wake(sdiodev->settings->bus.sdio.oob_irq_nr);
else
sdio_flags |= MMC_PM_WAKE_SDIO_IRQ;
}
if (sdiodev->settings->bus.sdio.oob_irq_supported)
enable_irq_wake(sdiodev->settings->bus.sdio.oob_irq_nr);
else
sdio_flags |= MMC_PM_WAKE_SDIO_IRQ;
if (sdio_set_host_pm_flags(sdiodev->func1, sdio_flags))
brcmf_err("Failed to set pm_flags %x\n", sdio_flags);
@ -1176,21 +1183,19 @@ static int brcmf_ops_sdio_resume(struct device *dev)
struct brcmf_bus *bus_if = dev_get_drvdata(dev);
struct brcmf_sdio_dev *sdiodev = bus_if->bus_priv.sdio;
struct sdio_func *func = container_of(dev, struct sdio_func, dev);
mmc_pm_flag_t pm_caps = sdio_get_host_pm_caps(func);
int ret = 0;
brcmf_dbg(SDIO, "Enter: F%d\n", func->num);
if (func->num != 2)
return 0;
if (!(pm_caps & MMC_PM_KEEP_POWER)) {
if (!sdiodev->wowl_enabled) {
/* bus was powered off and device removed, probe again */
ret = brcmf_sdiod_probe(sdiodev);
if (ret)
brcmf_err("Failed to probe device on resume\n");
} else {
if (sdiodev->wowl_enabled &&
sdiodev->settings->bus.sdio.oob_irq_supported)
if (sdiodev->settings->bus.sdio.oob_irq_supported)
disable_irq_wake(sdiodev->settings->bus.sdio.oob_irq_nr);
brcmf_sdiod_freezer_off(sdiodev);

@ -7481,6 +7481,7 @@ static bool brmcf_use_iso3166_ccode_fallback(struct brcmf_pub *drvr)
{
switch (drvr->bus_if->chip) {
case BRCM_CC_4345_CHIP_ID:
case BRCM_CC_43602_CHIP_ID:
return true;
default:
return false;

@ -868,7 +868,7 @@ brcms_ops_ampdu_action(struct ieee80211_hw *hw,
spin_lock_bh(&wl->lock);
brcms_c_ampdu_tx_operational(wl->wlc, tid, buf_size,
(1 << (IEEE80211_HT_MAX_AMPDU_FACTOR +
sta->ht_cap.ampdu_factor)) - 1);
sta->deflink.ht_cap.ampdu_factor)) - 1);
spin_unlock_bh(&wl->lock);
/* Power save wakeup */
break;

@ -3501,7 +3501,7 @@ static void ipw2100_msg_free(struct ipw2100_priv *priv)
priv->msg_buffers = NULL;
}
static ssize_t show_pci(struct device *d, struct device_attribute *attr,
static ssize_t pci_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct pci_dev *pci_dev = to_pci_dev(d);
@ -3521,34 +3521,34 @@ static ssize_t show_pci(struct device *d, struct device_attribute *attr,
return out - buf;
}
static DEVICE_ATTR(pci, 0444, show_pci, NULL);
static DEVICE_ATTR_RO(pci);
static ssize_t show_cfg(struct device *d, struct device_attribute *attr,
static ssize_t cfg_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw2100_priv *p = dev_get_drvdata(d);
return sprintf(buf, "0x%08x\n", (int)p->config);
}
static DEVICE_ATTR(cfg, 0444, show_cfg, NULL);
static DEVICE_ATTR_RO(cfg);
static ssize_t show_status(struct device *d, struct device_attribute *attr,
static ssize_t status_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw2100_priv *p = dev_get_drvdata(d);
return sprintf(buf, "0x%08x\n", (int)p->status);
}
static DEVICE_ATTR(status, 0444, show_status, NULL);
static DEVICE_ATTR_RO(status);
static ssize_t show_capability(struct device *d, struct device_attribute *attr,
static ssize_t capability_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw2100_priv *p = dev_get_drvdata(d);
return sprintf(buf, "0x%08x\n", (int)p->capability);
}
static DEVICE_ATTR(capability, 0444, show_capability, NULL);
static DEVICE_ATTR_RO(capability);
#define IPW2100_REG(x) { IPW_ ##x, #x }
static const struct {
@ -3785,7 +3785,7 @@ IPW2100_ORD(STAT_TX_HOST_REQUESTS, "requested Host Tx's (MSDU)"),
IPW2100_ORD(NIC_MANF_DATE_TIME, "MANF Date/Time STAMP"),
IPW2100_ORD(UCODE_VERSION, "Ucode Version"),};
static ssize_t show_registers(struct device *d, struct device_attribute *attr,
static ssize_t registers_show(struct device *d, struct device_attribute *attr,
char *buf)
{
int i;
@ -3805,9 +3805,9 @@ static ssize_t show_registers(struct device *d, struct device_attribute *attr,
return out - buf;
}
static DEVICE_ATTR(registers, 0444, show_registers, NULL);
static DEVICE_ATTR_RO(registers);
static ssize_t show_hardware(struct device *d, struct device_attribute *attr,
static ssize_t hardware_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
@ -3846,9 +3846,9 @@ static ssize_t show_hardware(struct device *d, struct device_attribute *attr,
return out - buf;
}
static DEVICE_ATTR(hardware, 0444, show_hardware, NULL);
static DEVICE_ATTR_RO(hardware);
static ssize_t show_memory(struct device *d, struct device_attribute *attr,
static ssize_t memory_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
@ -3905,7 +3905,7 @@ static ssize_t show_memory(struct device *d, struct device_attribute *attr,
return len;
}
static ssize_t store_memory(struct device *d, struct device_attribute *attr,
static ssize_t memory_store(struct device *d, struct device_attribute *attr,
const char *buf, size_t count)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
@ -3940,9 +3940,9 @@ static ssize_t store_memory(struct device *d, struct device_attribute *attr,
return count;
}
static DEVICE_ATTR(memory, 0644, show_memory, store_memory);
static DEVICE_ATTR_RW(memory);
static ssize_t show_ordinals(struct device *d, struct device_attribute *attr,
static ssize_t ordinals_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
@ -3976,9 +3976,9 @@ static ssize_t show_ordinals(struct device *d, struct device_attribute *attr,
return len;
}
static DEVICE_ATTR(ordinals, 0444, show_ordinals, NULL);
static DEVICE_ATTR_RO(ordinals);
static ssize_t show_stats(struct device *d, struct device_attribute *attr,
static ssize_t stats_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
@ -3997,7 +3997,7 @@ static ssize_t show_stats(struct device *d, struct device_attribute *attr,
return out - buf;
}
static DEVICE_ATTR(stats, 0444, show_stats, NULL);
static DEVICE_ATTR_RO(stats);
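
The ipw2100/ipw2200 hunks in this area only rename the sysfs callbacks (show_foo to foo_show, store_foo to foo_store) so the DEVICE_ATTR_RO/RW/ADMIN_* helpers can derive both the attribute name and the callback names from a single token. A user-space sketch of why the naming convention matters, with a simplified attr struct standing in for struct device_attribute and macros written in the spirit of the kernel ones:

#include <stdio.h>

/* Simplified stand-in for struct device_attribute. */
struct demo_attr {
        const char *name;
        int       (*show)(char *buf);
        int       (*store)(const char *buf);
};

/* Token-pasting helpers in the spirit of DEVICE_ATTR_RO/DEVICE_ATTR_RW:
 * the callbacks must be named <name>_show / <name>_store for this to link. */
#define DEMO_ATTR_RO(_name) \
        struct demo_attr dev_attr_##_name = { #_name, _name##_show, NULL }
#define DEMO_ATTR_RW(_name) \
        struct demo_attr dev_attr_##_name = { #_name, _name##_show, _name##_store }

static int status_show(char *buf)          { return sprintf(buf, "0x0\n"); }
static int scan_age_show(char *buf)        { return sprintf(buf, "30\n"); }
static int scan_age_store(const char *buf) { (void)buf; return 0; }

static DEMO_ATTR_RO(status);
static DEMO_ATTR_RW(scan_age);

int main(void)
{
        char buf[16];

        dev_attr_status.show(buf);
        printf("%s: %s", dev_attr_status.name, buf);   /* status: 0x0 */
        dev_attr_scan_age.show(buf);
        printf("%s: %s", dev_attr_scan_age.name, buf); /* scan_age: 30 */
        return 0;
}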
static int ipw2100_switch_mode(struct ipw2100_priv *priv, u32 mode)
{
@ -4043,7 +4043,7 @@ static int ipw2100_switch_mode(struct ipw2100_priv *priv, u32 mode)
return 0;
}
static ssize_t show_internals(struct device *d, struct device_attribute *attr,
static ssize_t internals_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
@ -4095,9 +4095,9 @@ static ssize_t show_internals(struct device *d, struct device_attribute *attr,
return len;
}
static DEVICE_ATTR(internals, 0444, show_internals, NULL);
static DEVICE_ATTR_RO(internals);
static ssize_t show_bssinfo(struct device *d, struct device_attribute *attr,
static ssize_t bssinfo_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
@ -4140,7 +4140,7 @@ static ssize_t show_bssinfo(struct device *d, struct device_attribute *attr,
return out - buf;
}
static DEVICE_ATTR(bssinfo, 0444, show_bssinfo, NULL);
static DEVICE_ATTR_RO(bssinfo);
#ifdef CONFIG_IPW2100_DEBUG
static ssize_t debug_level_show(struct device_driver *d, char *buf)
@ -4165,7 +4165,7 @@ static ssize_t debug_level_store(struct device_driver *d,
static DRIVER_ATTR_RW(debug_level);
#endif /* CONFIG_IPW2100_DEBUG */
static ssize_t show_fatal_error(struct device *d,
static ssize_t fatal_error_show(struct device *d,
struct device_attribute *attr, char *buf)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
@ -4190,7 +4190,7 @@ static ssize_t show_fatal_error(struct device *d,
return out - buf;
}
static ssize_t store_fatal_error(struct device *d,
static ssize_t fatal_error_store(struct device *d,
struct device_attribute *attr, const char *buf,
size_t count)
{
@ -4199,16 +4199,16 @@ static ssize_t store_fatal_error(struct device *d,
return count;
}
static DEVICE_ATTR(fatal_error, 0644, show_fatal_error, store_fatal_error);
static DEVICE_ATTR_RW(fatal_error);
static ssize_t show_scan_age(struct device *d, struct device_attribute *attr,
static ssize_t scan_age_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
return sprintf(buf, "%d\n", priv->ieee->scan_age);
}
static ssize_t store_scan_age(struct device *d, struct device_attribute *attr,
static ssize_t scan_age_store(struct device *d, struct device_attribute *attr,
const char *buf, size_t count)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
@ -4232,9 +4232,9 @@ static ssize_t store_scan_age(struct device *d, struct device_attribute *attr,
return strnlen(buf, count);
}
static DEVICE_ATTR(scan_age, 0644, show_scan_age, store_scan_age);
static DEVICE_ATTR_RW(scan_age);
static ssize_t show_rf_kill(struct device *d, struct device_attribute *attr,
static ssize_t rf_kill_show(struct device *d, struct device_attribute *attr,
char *buf)
{
/* 0 - RF kill not enabled
@ -4278,7 +4278,7 @@ static int ipw_radio_kill_sw(struct ipw2100_priv *priv, int disable_radio)
return 1;
}
static ssize_t store_rf_kill(struct device *d, struct device_attribute *attr,
static ssize_t rf_kill_store(struct device *d, struct device_attribute *attr,
const char *buf, size_t count)
{
struct ipw2100_priv *priv = dev_get_drvdata(d);
@ -4286,7 +4286,7 @@ static ssize_t store_rf_kill(struct device *d, struct device_attribute *attr,
return count;
}
static DEVICE_ATTR(rf_kill, 0644, show_rf_kill, store_rf_kill);
static DEVICE_ATTR_RW(rf_kill);
static struct attribute *ipw2100_sysfs_entries[] = {
&dev_attr_hardware.attr,

@ -1259,7 +1259,7 @@ static struct ipw_fw_error *ipw_alloc_error_log(struct ipw_priv *priv)
return error;
}
static ssize_t show_event_log(struct device *d,
static ssize_t event_log_show(struct device *d,
struct device_attribute *attr, char *buf)
{
struct ipw_priv *priv = dev_get_drvdata(d);
@ -1289,9 +1289,9 @@ static ssize_t show_event_log(struct device *d,
return len;
}
static DEVICE_ATTR(event_log, 0444, show_event_log, NULL);
static DEVICE_ATTR_RO(event_log);
static ssize_t show_error(struct device *d,
static ssize_t error_show(struct device *d,
struct device_attribute *attr, char *buf)
{
struct ipw_priv *priv = dev_get_drvdata(d);
@ -1326,7 +1326,7 @@ static ssize_t show_error(struct device *d,
return len;
}
static ssize_t clear_error(struct device *d,
static ssize_t error_store(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
@ -1337,9 +1337,9 @@ static ssize_t clear_error(struct device *d,
return count;
}
static DEVICE_ATTR(error, 0644, show_error, clear_error);
static DEVICE_ATTR_RW(error);
static ssize_t show_cmd_log(struct device *d,
static ssize_t cmd_log_show(struct device *d,
struct device_attribute *attr, char *buf)
{
struct ipw_priv *priv = dev_get_drvdata(d);
@ -1364,12 +1364,12 @@ static ssize_t show_cmd_log(struct device *d,
return len;
}
static DEVICE_ATTR(cmd_log, 0444, show_cmd_log, NULL);
static DEVICE_ATTR_RO(cmd_log);
#ifdef CONFIG_IPW2200_PROMISCUOUS
static void ipw_prom_free(struct ipw_priv *priv);
static int ipw_prom_alloc(struct ipw_priv *priv);
static ssize_t store_rtap_iface(struct device *d,
static ssize_t rtap_iface_store(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
@ -1414,7 +1414,7 @@ static ssize_t store_rtap_iface(struct device *d,
return count;
}
static ssize_t show_rtap_iface(struct device *d,
static ssize_t rtap_iface_show(struct device *d,
struct device_attribute *attr,
char *buf)
{
@ -1429,9 +1429,9 @@ static ssize_t show_rtap_iface(struct device *d,
}
}
static DEVICE_ATTR(rtap_iface, 0600, show_rtap_iface, store_rtap_iface);
static DEVICE_ATTR_ADMIN_RW(rtap_iface);
static ssize_t store_rtap_filter(struct device *d,
static ssize_t rtap_filter_store(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
@ -1451,7 +1451,7 @@ static ssize_t store_rtap_filter(struct device *d,
return count;
}
static ssize_t show_rtap_filter(struct device *d,
static ssize_t rtap_filter_show(struct device *d,
struct device_attribute *attr,
char *buf)
{
@ -1460,17 +1460,17 @@ static ssize_t show_rtap_filter(struct device *d,
priv->prom_priv ? priv->prom_priv->filter : 0);
}
static DEVICE_ATTR(rtap_filter, 0600, show_rtap_filter, store_rtap_filter);
static DEVICE_ATTR_ADMIN_RW(rtap_filter);
#endif
static ssize_t show_scan_age(struct device *d, struct device_attribute *attr,
static ssize_t scan_age_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw_priv *priv = dev_get_drvdata(d);
return sprintf(buf, "%d\n", priv->ieee->scan_age);
}
static ssize_t store_scan_age(struct device *d, struct device_attribute *attr,
static ssize_t scan_age_store(struct device *d, struct device_attribute *attr,
const char *buf, size_t count)
{
struct ipw_priv *priv = dev_get_drvdata(d);
@ -1504,16 +1504,16 @@ static ssize_t store_scan_age(struct device *d, struct device_attribute *attr,
return len;
}
static DEVICE_ATTR(scan_age, 0644, show_scan_age, store_scan_age);
static DEVICE_ATTR_RW(scan_age);
static ssize_t show_led(struct device *d, struct device_attribute *attr,
static ssize_t led_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw_priv *priv = dev_get_drvdata(d);
return sprintf(buf, "%d\n", (priv->config & CFG_NO_LED) ? 0 : 1);
}
static ssize_t store_led(struct device *d, struct device_attribute *attr,
static ssize_t led_store(struct device *d, struct device_attribute *attr,
const char *buf, size_t count)
{
struct ipw_priv *priv = dev_get_drvdata(d);
@ -1537,36 +1537,36 @@ static ssize_t store_led(struct device *d, struct device_attribute *attr,
return count;
}
static DEVICE_ATTR(led, 0644, show_led, store_led);
static DEVICE_ATTR_RW(led);
static ssize_t show_status(struct device *d,
static ssize_t status_show(struct device *d,
struct device_attribute *attr, char *buf)
{
struct ipw_priv *p = dev_get_drvdata(d);
return sprintf(buf, "0x%08x\n", (int)p->status);
}
static DEVICE_ATTR(status, 0444, show_status, NULL);
static DEVICE_ATTR_RO(status);
static ssize_t show_cfg(struct device *d, struct device_attribute *attr,
static ssize_t cfg_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw_priv *p = dev_get_drvdata(d);
return sprintf(buf, "0x%08x\n", (int)p->config);
}
static DEVICE_ATTR(cfg, 0444, show_cfg, NULL);
static DEVICE_ATTR_RO(cfg);
static ssize_t show_nic_type(struct device *d,
static ssize_t nic_type_show(struct device *d,
struct device_attribute *attr, char *buf)
{
struct ipw_priv *priv = dev_get_drvdata(d);
return sprintf(buf, "TYPE: %d\n", priv->nic_type);
}
static DEVICE_ATTR(nic_type, 0444, show_nic_type, NULL);
static DEVICE_ATTR_RO(nic_type);
static ssize_t show_ucode_version(struct device *d,
static ssize_t ucode_version_show(struct device *d,
struct device_attribute *attr, char *buf)
{
u32 len = sizeof(u32), tmp = 0;
@ -1578,9 +1578,9 @@ static ssize_t show_ucode_version(struct device *d,
return sprintf(buf, "0x%08x\n", tmp);
}
static DEVICE_ATTR(ucode_version, 0644, show_ucode_version, NULL);
static DEVICE_ATTR_RO(ucode_version);
static ssize_t show_rtc(struct device *d, struct device_attribute *attr,
static ssize_t rtc_show(struct device *d, struct device_attribute *attr,
char *buf)
{
u32 len = sizeof(u32), tmp = 0;
@ -1592,20 +1592,20 @@ static ssize_t show_rtc(struct device *d, struct device_attribute *attr,
return sprintf(buf, "0x%08x\n", tmp);
}
static DEVICE_ATTR(rtc, 0644, show_rtc, NULL);
static DEVICE_ATTR_RO(rtc);
/*
* Add a device attribute to view/control the delay between eeprom
* operations.
*/
static ssize_t show_eeprom_delay(struct device *d,
static ssize_t eeprom_delay_show(struct device *d,
struct device_attribute *attr, char *buf)
{
struct ipw_priv *p = dev_get_drvdata(d);
int n = p->eeprom_delay;
return sprintf(buf, "%i\n", n);
}
static ssize_t store_eeprom_delay(struct device *d,
static ssize_t eeprom_delay_store(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
@ -1614,9 +1614,9 @@ static ssize_t store_eeprom_delay(struct device *d,
return strnlen(buf, count);
}
static DEVICE_ATTR(eeprom_delay, 0644, show_eeprom_delay, store_eeprom_delay);
static DEVICE_ATTR_RW(eeprom_delay);
static ssize_t show_command_event_reg(struct device *d,
static ssize_t command_event_reg_show(struct device *d,
struct device_attribute *attr, char *buf)
{
u32 reg = 0;
@ -1625,7 +1625,7 @@ static ssize_t show_command_event_reg(struct device *d,
reg = ipw_read_reg32(p, IPW_INTERNAL_CMD_EVENT);
return sprintf(buf, "0x%08x\n", reg);
}
static ssize_t store_command_event_reg(struct device *d,
static ssize_t command_event_reg_store(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
@ -1637,10 +1637,9 @@ static ssize_t store_command_event_reg(struct device *d,
return strnlen(buf, count);
}
static DEVICE_ATTR(command_event_reg, 0644,
show_command_event_reg, store_command_event_reg);
static DEVICE_ATTR_RW(command_event_reg);
static ssize_t show_mem_gpio_reg(struct device *d,
static ssize_t mem_gpio_reg_show(struct device *d,
struct device_attribute *attr, char *buf)
{
u32 reg = 0;
@ -1649,7 +1648,7 @@ static ssize_t show_mem_gpio_reg(struct device *d,
reg = ipw_read_reg32(p, 0x301100);
return sprintf(buf, "0x%08x\n", reg);
}
static ssize_t store_mem_gpio_reg(struct device *d,
static ssize_t mem_gpio_reg_store(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
@ -1661,9 +1660,9 @@ static ssize_t store_mem_gpio_reg(struct device *d,
return strnlen(buf, count);
}
static DEVICE_ATTR(mem_gpio_reg, 0644, show_mem_gpio_reg, store_mem_gpio_reg);
static DEVICE_ATTR_RW(mem_gpio_reg);
static ssize_t show_indirect_dword(struct device *d,
static ssize_t indirect_dword_show(struct device *d,
struct device_attribute *attr, char *buf)
{
u32 reg = 0;
@ -1676,7 +1675,7 @@ static ssize_t show_indirect_dword(struct device *d,
return sprintf(buf, "0x%08x\n", reg);
}
static ssize_t store_indirect_dword(struct device *d,
static ssize_t indirect_dword_store(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
@ -1687,10 +1686,9 @@ static ssize_t store_indirect_dword(struct device *d,
return strnlen(buf, count);
}
static DEVICE_ATTR(indirect_dword, 0644,
show_indirect_dword, store_indirect_dword);
static DEVICE_ATTR_RW(indirect_dword);
static ssize_t show_indirect_byte(struct device *d,
static ssize_t indirect_byte_show(struct device *d,
struct device_attribute *attr, char *buf)
{
u8 reg = 0;
@ -1703,7 +1701,7 @@ static ssize_t show_indirect_byte(struct device *d,
return sprintf(buf, "0x%02x\n", reg);
}
static ssize_t store_indirect_byte(struct device *d,
static ssize_t indirect_byte_store(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
@ -1714,10 +1712,9 @@ static ssize_t store_indirect_byte(struct device *d,
return strnlen(buf, count);
}
static DEVICE_ATTR(indirect_byte, 0644,
show_indirect_byte, store_indirect_byte);
static DEVICE_ATTR_RW(indirect_byte);
static ssize_t show_direct_dword(struct device *d,
static ssize_t direct_dword_show(struct device *d,
struct device_attribute *attr, char *buf)
{
u32 reg = 0;
@ -1730,7 +1727,7 @@ static ssize_t show_direct_dword(struct device *d,
return sprintf(buf, "0x%08x\n", reg);
}
static ssize_t store_direct_dword(struct device *d,
static ssize_t direct_dword_store(struct device *d,
struct device_attribute *attr,
const char *buf, size_t count)
{
@ -1741,7 +1738,7 @@ static ssize_t store_direct_dword(struct device *d,
return strnlen(buf, count);
}
static DEVICE_ATTR(direct_dword, 0644, show_direct_dword, store_direct_dword);
static DEVICE_ATTR_RW(direct_dword);
static int rf_kill_active(struct ipw_priv *priv)
{
@ -1756,7 +1753,7 @@ static int rf_kill_active(struct ipw_priv *priv)
return (priv->status & STATUS_RF_KILL_HW) ? 1 : 0;
}
static ssize_t show_rf_kill(struct device *d, struct device_attribute *attr,
static ssize_t rf_kill_show(struct device *d, struct device_attribute *attr,
char *buf)
{
/* 0 - RF kill not enabled
@ -1802,7 +1799,7 @@ static int ipw_radio_kill_sw(struct ipw_priv *priv, int disable_radio)
return 1;
}
static ssize_t store_rf_kill(struct device *d, struct device_attribute *attr,
static ssize_t rf_kill_store(struct device *d, struct device_attribute *attr,
const char *buf, size_t count)
{
struct ipw_priv *priv = dev_get_drvdata(d);
@ -1812,9 +1809,9 @@ static ssize_t store_rf_kill(struct device *d, struct device_attribute *attr,
return count;
}
static DEVICE_ATTR(rf_kill, 0644, show_rf_kill, store_rf_kill);
static DEVICE_ATTR_RW(rf_kill);
static ssize_t show_speed_scan(struct device *d, struct device_attribute *attr,
static ssize_t speed_scan_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw_priv *priv = dev_get_drvdata(d);
@ -1829,7 +1826,7 @@ static ssize_t show_speed_scan(struct device *d, struct device_attribute *attr,
return sprintf(buf, "0\n");
}
static ssize_t store_speed_scan(struct device *d, struct device_attribute *attr,
static ssize_t speed_scan_store(struct device *d, struct device_attribute *attr,
const char *buf, size_t count)
{
struct ipw_priv *priv = dev_get_drvdata(d);
@ -1865,16 +1862,16 @@ static ssize_t store_speed_scan(struct device *d, struct device_attribute *attr,
return count;
}
static DEVICE_ATTR(speed_scan, 0644, show_speed_scan, store_speed_scan);
static DEVICE_ATTR_RW(speed_scan);
static ssize_t show_net_stats(struct device *d, struct device_attribute *attr,
static ssize_t net_stats_show(struct device *d, struct device_attribute *attr,
char *buf)
{
struct ipw_priv *priv = dev_get_drvdata(d);
return sprintf(buf, "%c\n", (priv->config & CFG_NET_STATS) ? '1' : '0');
}
static ssize_t store_net_stats(struct device *d, struct device_attribute *attr,
static ssize_t net_stats_store(struct device *d, struct device_attribute *attr,
const char *buf, size_t count)
{
struct ipw_priv *priv = dev_get_drvdata(d);
@ -1886,9 +1883,9 @@ static ssize_t store_net_stats(struct device *d, struct device_attribute *attr,
return count;
}
static DEVICE_ATTR(net_stats, 0644, show_net_stats, store_net_stats);
static DEVICE_ATTR_RW(net_stats);
static ssize_t show_channels(struct device *d,
static ssize_t channels_show(struct device *d,
struct device_attribute *attr,
char *buf)
{
@ -1932,7 +1929,7 @@ static ssize_t show_channels(struct device *d,
return len;
}
static DEVICE_ATTR(channels, 0400, show_channels, NULL);
static DEVICE_ATTR_ADMIN_RO(channels);
static void notify_wx_assoc_event(struct ipw_priv *priv)
{

@ -383,7 +383,7 @@ netdev_tx_t libipw_xmit(struct sk_buff *skb, struct net_device *dev)
/* Each fragment may need to have room for encryption
* pre/postfix */
if (host_encrypt)
if (host_encrypt && crypt && crypt->ops)
bytes_per_frag -= crypt->ops->extra_mpdu_prefix_len +
crypt->ops->extra_mpdu_postfix_len;

@ -354,13 +354,13 @@ il3945_rs_rate_init(struct il_priv *il, struct ieee80211_sta *sta, u8 sta_id)
* after assoc.. */
for (i = sband->n_bitrates - 1; i >= 0; i--) {
if (sta->supp_rates[sband->band] & (1 << i)) {
if (sta->deflink.supp_rates[sband->band] & (1 << i)) {
rs_sta->last_txrate_idx = i;
break;
}
}
il->_3945.sta_supp_rates = sta->supp_rates[sband->band];
il->_3945.sta_supp_rates = sta->deflink.supp_rates[sband->band];
/* For 5 GHz band it start at IL_FIRST_OFDM_RATE */
if (sband->band == NL80211_BAND_5GHZ) {
rs_sta->last_txrate_idx += IL_FIRST_OFDM_RATE;
@ -631,7 +631,7 @@ il3945_rs_get_rate(void *il_r, struct ieee80211_sta *sta, void *il_sta,
il_sta = NULL;
}
rate_mask = sta->supp_rates[sband->band];
rate_mask = sta->deflink.supp_rates[sband->band];
/* get user max rate if set */
max_rate_idx = fls(txrc->rate_idx_mask) - 1;

@ -627,7 +627,7 @@ il4965_rs_toggle_antenna(u32 valid_ant, u32 *rate_n_flags,
static bool
il4965_rs_use_green(struct il_priv *il, struct ieee80211_sta *sta)
{
return (sta->ht_cap.cap & IEEE80211_HT_CAP_GRN_FLD) &&
return (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_GRN_FLD) &&
!il->ht.non_gf_sta_present;
}
@ -970,7 +970,7 @@ il4965_rs_tx_status(void *il_r, struct ieee80211_supported_band *sband,
lq_sta->last_rate_n_flags = tx_rate;
done:
/* See if there's a better rate or modulation mode to try. */
if (sta->supp_rates[sband->band])
if (sta->deflink.supp_rates[sband->band])
il4965_rs_rate_scale_perform(il, skb, sta, lq_sta);
}
@ -1164,7 +1164,7 @@ il4965_rs_switch_to_mimo2(struct il_priv *il, struct il_lq_sta *lq_sta,
s32 rate;
s8 is_green = lq_sta->is_green;
if (!conf_is_ht(conf) || !sta->ht_cap.ht_supported)
if (!conf_is_ht(conf) || !sta->deflink.ht_cap.ht_supported)
return -1;
if (sta->smps_mode == IEEE80211_SMPS_STATIC)
@ -1182,7 +1182,7 @@ il4965_rs_switch_to_mimo2(struct il_priv *il, struct il_lq_sta *lq_sta,
tbl->max_search = IL_MAX_SEARCH;
rate_mask = lq_sta->active_mimo2_rate;
if (il_is_ht40_tx_allowed(il, &sta->ht_cap))
if (il_is_ht40_tx_allowed(il, &sta->deflink.ht_cap))
tbl->is_ht40 = 1;
else
tbl->is_ht40 = 0;
@ -1217,7 +1217,7 @@ il4965_rs_switch_to_siso(struct il_priv *il, struct il_lq_sta *lq_sta,
u8 is_green = lq_sta->is_green;
s32 rate;
if (!conf_is_ht(conf) || !sta->ht_cap.ht_supported)
if (!conf_is_ht(conf) || !sta->deflink.ht_cap.ht_supported)
return -1;
D_RATE("LQ: try to switch to SISO\n");
@ -1228,7 +1228,7 @@ il4965_rs_switch_to_siso(struct il_priv *il, struct il_lq_sta *lq_sta,
tbl->max_search = IL_MAX_SEARCH;
rate_mask = lq_sta->active_siso_rate;
if (il_is_ht40_tx_allowed(il, &sta->ht_cap))
if (il_is_ht40_tx_allowed(il, &sta->deflink.ht_cap))
tbl->is_ht40 = 1;
else
tbl->is_ht40 = 0;
@ -1384,7 +1384,7 @@ il4965_rs_move_siso_to_other(struct il_priv *il, struct il_lq_sta *lq_sta,
struct il_scale_tbl_info *search_tbl =
&(lq_sta->lq_info[(1 - lq_sta->active_tbl)]);
struct il_rate_scale_data *win = &(tbl->win[idx]);
struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
u32 sz =
(sizeof(struct il_scale_tbl_info) -
(sizeof(struct il_rate_scale_data) * RATE_COUNT));
@ -1507,7 +1507,7 @@ il4965_rs_move_mimo2_to_other(struct il_priv *il, struct il_lq_sta *lq_sta,
struct il_scale_tbl_info *search_tbl =
&(lq_sta->lq_info[(1 - lq_sta->active_tbl)]);
struct il_rate_scale_data *win = &(tbl->win[idx]);
struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
u32 sz =
(sizeof(struct il_scale_tbl_info) -
(sizeof(struct il_rate_scale_data) * RATE_COUNT));
@ -1760,7 +1760,7 @@ il4965_rs_rate_scale_perform(struct il_priv *il, struct sk_buff *skb,
(info->flags & IEEE80211_TX_CTL_NO_ACK))
return;
lq_sta->supp_rates = sta->supp_rates[lq_sta->band];
lq_sta->supp_rates = sta->deflink.supp_rates[lq_sta->band];
tid = il4965_rs_tl_add_packet(lq_sta, hdr);
if (tid != MAX_TID_COUNT && (lq_sta->tx_agg_tid_en & (1 << tid))) {
@ -2271,7 +2271,7 @@ il4965_rs_rate_init(struct il_priv *il, struct ieee80211_sta *sta, u8 sta_id)
int i, j;
struct ieee80211_hw *hw = il->hw;
struct ieee80211_conf *conf = &il->hw->conf;
struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
struct il_station_priv *sta_priv;
struct il_lq_sta *lq_sta;
struct ieee80211_supported_band *sband;
@ -2288,7 +2288,7 @@ il4965_rs_rate_init(struct il_priv *il, struct ieee80211_sta *sta, u8 sta_id)
win[i]);
lq_sta->flush_timer = 0;
lq_sta->supp_rates = sta->supp_rates[sband->band];
lq_sta->supp_rates = sta->deflink.supp_rates[sband->band];
for (j = 0; j < LQ_SIZE; j++)
for (i = 0; i < RATE_COUNT; i++)
il4965_rs_rate_scale_clear_win(&lq_sta->lq_info[j].

@@ -1863,7 +1863,7 @@ EXPORT_SYMBOL(il_send_add_sta);
 static void
 il_set_ht_add_station(struct il_priv *il, u8 idx, struct ieee80211_sta *sta)
 {
-	struct ieee80211_sta_ht_cap *sta_ht_inf = &sta->ht_cap;
+	struct ieee80211_sta_ht_cap *sta_ht_inf = &sta->deflink.ht_cap;
 	__le32 sta_flags;
 
 	if (!sta || !sta_ht_inf->ht_supported)
@@ -1900,7 +1900,7 @@ il_set_ht_add_station(struct il_priv *il, u8 idx, struct ieee80211_sta *sta)
 	    cpu_to_le32((u32) sta_ht_inf->
 			ampdu_density << STA_FLG_AGG_MPDU_DENSITY_POS);
 
-	if (il_is_ht40_tx_allowed(il, &sta->ht_cap))
+	if (il_is_ht40_tx_allowed(il, &sta->deflink.ht_cap))
 		sta_flags |= STA_FLG_HT40_EN_MSK;
 	else
 		sta_flags &= ~STA_FLG_HT40_EN_MSK;
@@ -5222,7 +5222,7 @@ il_ht_conf(struct il_priv *il, struct ieee80211_vif *vif)
 	rcu_read_lock();
 	sta = ieee80211_find_sta(vif, bss_conf->bssid);
 	if (sta) {
-		struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
+		struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
 		int maxstreams;
 
 		maxstreams =

@@ -1039,7 +1039,7 @@ static void rs_tx_status(void *priv_r, struct ieee80211_supported_band *sband,
 	lq_sta->last_rate_n_flags = tx_rate;
 done:
 	/* See if there's a better rate or modulation mode to try. */
-	if (sta && sta->supp_rates[sband->band])
+	if (sta && sta->deflink.supp_rates[sband->band])
 		rs_rate_scale_perform(priv, skb, sta, lq_sta);
 
 	if (priv->lib->bt_params && priv->lib->bt_params->advanced_bt_coexist)
@@ -1239,7 +1239,7 @@ static int rs_switch_to_mimo2(struct iwl_priv *priv,
 	struct iwl_station_priv *sta_priv = (void *)sta->drv_priv;
 	struct iwl_rxon_context *ctx = sta_priv->ctx;
 
-	if (!conf_is_ht(conf) || !sta->ht_cap.ht_supported)
+	if (!conf_is_ht(conf) || !sta->deflink.ht_cap.ht_supported)
 		return -1;
 
 	if (sta->smps_mode == IEEE80211_SMPS_STATIC)
@@ -1294,7 +1294,7 @@ static int rs_switch_to_mimo3(struct iwl_priv *priv,
 	struct iwl_station_priv *sta_priv = (void *)sta->drv_priv;
 	struct iwl_rxon_context *ctx = sta_priv->ctx;
 
-	if (!conf_is_ht(conf) || !sta->ht_cap.ht_supported)
+	if (!conf_is_ht(conf) || !sta->deflink.ht_cap.ht_supported)
 		return -1;
 
 	if (sta->smps_mode == IEEE80211_SMPS_STATIC)
@@ -1350,7 +1350,7 @@ static int rs_switch_to_siso(struct iwl_priv *priv,
 	struct iwl_station_priv *sta_priv = (void *)sta->drv_priv;
 	struct iwl_rxon_context *ctx = sta_priv->ctx;
 
-	if (!conf_is_ht(conf) || !sta->ht_cap.ht_supported)
+	if (!conf_is_ht(conf) || !sta->deflink.ht_cap.ht_supported)
 		return -1;
 
 	IWL_DEBUG_RATE(priv, "LQ: try to switch to SISO\n");
@@ -1570,7 +1570,7 @@ static void rs_move_siso_to_other(struct iwl_priv *priv,
 	struct iwl_scale_tbl_info *search_tbl =
 		&(lq_sta->lq_info[(1 - lq_sta->active_tbl)]);
 	struct iwl_rate_scale_data *window = &(tbl->win[index]);
-	struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
+	struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
 	u32 sz = (sizeof(struct iwl_scale_tbl_info) -
 		  (sizeof(struct iwl_rate_scale_data) * IWL_RATE_COUNT));
 	u8 start_action;
@@ -1740,7 +1740,7 @@ static void rs_move_mimo2_to_other(struct iwl_priv *priv,
 	struct iwl_scale_tbl_info *search_tbl =
 		&(lq_sta->lq_info[(1 - lq_sta->active_tbl)]);
 	struct iwl_rate_scale_data *window = &(tbl->win[index]);
-	struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
+	struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
 	u32 sz = (sizeof(struct iwl_scale_tbl_info) -
 		  (sizeof(struct iwl_rate_scale_data) * IWL_RATE_COUNT));
 	u8 start_action;
@@ -1908,7 +1908,7 @@ static void rs_move_mimo3_to_other(struct iwl_priv *priv,
 	struct iwl_scale_tbl_info *search_tbl =
 		&(lq_sta->lq_info[(1 - lq_sta->active_tbl)]);
 	struct iwl_rate_scale_data *window = &(tbl->win[index]);
-	struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
+	struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
 	u32 sz = (sizeof(struct iwl_scale_tbl_info) -
 		  (sizeof(struct iwl_rate_scale_data) * IWL_RATE_COUNT));
 	u8 start_action;
@@ -2212,7 +2212,7 @@ static void rs_rate_scale_perform(struct iwl_priv *priv,
 	    info->flags & IEEE80211_TX_CTL_NO_ACK)
 		return;
 
-	lq_sta->supp_rates = sta->supp_rates[lq_sta->band];
+	lq_sta->supp_rates = sta->deflink.supp_rates[lq_sta->band];
 
 	tid = rs_tl_add_packet(lq_sta, hdr);
 	if ((tid != IWL_MAX_TID_COUNT) &&
@@ -2763,7 +2763,7 @@ void iwl_rs_rate_init(struct iwl_priv *priv, struct ieee80211_sta *sta, u8 sta_i
 	int i, j;
 	struct ieee80211_hw *hw = priv->hw;
 	struct ieee80211_conf *conf = &priv->hw->conf;
-	struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
+	struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
 	struct iwl_station_priv *sta_priv;
 	struct iwl_lq_sta *lq_sta;
 	struct ieee80211_supported_band *sband;
@@ -2781,7 +2781,7 @@ void iwl_rs_rate_init(struct iwl_priv *priv, struct ieee80211_sta *sta, u8 sta_i
 			rs_rate_scale_clear_window(&lq_sta->lq_info[j].win[i]);
 
 	lq_sta->flush_timer = 0;
-	lq_sta->supp_rates = sta->supp_rates[sband->band];
+	lq_sta->supp_rates = sta->deflink.supp_rates[sband->band];
 
 	IWL_DEBUG_RATE(priv, "LQ: *** rate scale station global init for station %d ***\n",
 		       sta_id);
@@ -2798,7 +2798,7 @@ void iwl_rs_rate_init(struct iwl_priv *priv, struct ieee80211_sta *sta, u8 sta_i
 	/*
 	 * active legacy rates as per supported rates bitmap
 	 */
-	supp = sta->supp_rates[sband->band];
+	supp = sta->deflink.supp_rates[sband->band];
 	lq_sta->active_legacy_rate = 0;
 	for_each_set_bit(i, &supp, BITS_PER_LONG)
 		lq_sta->active_legacy_rate |= BIT(sband->bitrates[i].hw_value);

@@ -1280,7 +1280,7 @@ static void iwlagn_check_needed_chains(struct iwl_priv *priv,
 			break;
 		}
 
-		ht_cap = &sta->ht_cap;
+		ht_cap = &sta->deflink.ht_cap;
 
 		need_multiple = true;

@@ -139,7 +139,7 @@ bool iwl_is_ht40_tx_allowed(struct iwl_priv *priv,
 	if (!sta)
 		return true;
 
-	return sta->bandwidth >= IEEE80211_STA_RX_BW_40;
+	return sta->deflink.bandwidth >= IEEE80211_STA_RX_BW_40;
 }
 
 static void iwl_sta_calc_ht_flags(struct iwl_priv *priv,
@@ -147,7 +147,7 @@ static void iwl_sta_calc_ht_flags(struct iwl_priv *priv,
 				  struct iwl_rxon_context *ctx,
 				  __le32 *flags, __le32 *mask)
 {
-	struct ieee80211_sta_ht_cap *sta_ht_inf = &sta->ht_cap;
+	struct ieee80211_sta_ht_cap *sta_ht_inf = &sta->deflink.ht_cap;
 
 	*mask = STA_FLG_RTS_MIMO_PROT_MSK |
 		STA_FLG_MIMO_DIS_MSK |

@@ -26,7 +26,7 @@ struct iwl_fw_ini_hcmd {
 	u8 id;
 	u8 group;
 	__le16 reserved;
-	u8 data[0];
+	u8 data[];
 } __packed; /* FW_DEBUG_TLV_HCMD_DATA_API_S_VER_1 */
 
 /**
@@ -275,7 +275,7 @@ struct iwl_fw_ini_conf_set_tlv {
 	__le32 time_point;
 	__le32 set_type;
 	__le32 addr_offset;
-	struct iwl_fw_ini_addr_val addr_val[0];
+	struct iwl_fw_ini_addr_val addr_val[];
 } __packed; /* FW_TLV_DEBUG_CONFIG_SET_API_S_VER_1 */
 
 /**

@@ -240,7 +240,7 @@ struct iwl_mfu_assert_dump_notif {
 	__le16 index_num;
 	__le16 parts_num;
 	__le32 data_size;
-	__le32 data[0];
+	__le32 data[];
 } __packed; /* MFU_DUMP_ASSERT_API_S_VER_1 */
 
 /**
@@ -276,7 +276,7 @@ struct iwl_mvm_marker {
 	u8 marker_id;
 	__le16 reserved;
 	__le64 timestamp;
-	__le32 metadata[0];
+	__le32 metadata[];
 } __packed; /* MARKER_API_S_VER_1 */
 
 /**

@@ -33,7 +33,7 @@ struct iwl_mcast_filter_cmd {
 	u8 pass_all;
 	u8 bssid[6];
 	u8 reserved[2];
-	u8 addr_list[0];
+	u8 addr_list[];
 } __packed; /* MCAST_FILTERING_CMD_API_S_VER_1 */
 
 #endif /* __iwl_fw_api_filter_h__ */
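
The firmware API header hunks in this area are all the same one-line cleanup: zero-length arrays used as variable-length trailers become C99 flexible array members. The struct layout is unchanged, but "[]" lets the compiler's array-bounds checking see the trailing data, whereas "[0]" is a GNU extension that hides it. A self-contained sketch of the idiom (plain malloc here; kernel code would normally size such allocations with the struct_size() helper) — the struct below is an illustrative stand-in, not the real iwlwifi definition:

/* Fixed header followed by a variable-length trailer in one allocation. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct addr_list_cmd {
	uint8_t pass_all;
	uint8_t reserved[3];
	uint8_t addr_list[];	/* was: addr_list[0] -- flexible array member */
};

int main(void)
{
	const uint8_t macs[2][6] = {
		{ 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
		{ 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 },
	};
	/* Allocate the header plus the trailing array in one block. */
	struct addr_list_cmd *cmd = malloc(sizeof(*cmd) + sizeof(macs));

	if (!cmd)
		return 1;
	cmd->pass_all = 0;
	memcpy(cmd->addr_list, macs, sizeof(macs));
	printf("first addr byte: 0x%02x\n", cmd->addr_list[0]);
	free(cmd);
	return 0;
}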

@@ -1165,7 +1165,7 @@ struct iwl_scan_offload_profiles_query_v1 {
 	u8 resume_while_scanning;
 	u8 self_recovery;
 	__le16 reserved;
-	struct iwl_scan_offload_profile_match_v1 matches[0];
+	struct iwl_scan_offload_profile_match_v1 matches[];
 } __packed; /* SCAN_OFFLOAD_PROFILES_QUERY_RSP_S_VER_2 */
 
 /**
@@ -1209,7 +1209,7 @@ struct iwl_scan_offload_profiles_query {
 	u8 resume_while_scanning;
 	u8 self_recovery;
 	__le16 reserved;
-	struct iwl_scan_offload_profile_match matches[0];
+	struct iwl_scan_offload_profile_match matches[];
 } __packed; /* SCAN_OFFLOAD_PROFILES_QUERY_RSP_S_VER_3 */
 
 /**

@@ -477,7 +477,7 @@ struct iwl_mvm_wep_key_cmd {
 	u8 decryption_type;
 	u8 flags;
 	u8 reserved;
-	struct iwl_mvm_wep_key wep_key[0];
+	struct iwl_mvm_wep_key wep_key[];
 } __packed; /* SEC_CURR_WEP_KEY_CMD_API_S_VER_2 */
 
 /**

@@ -132,7 +132,7 @@ struct iwl_tdls_config_cmd {
 	__le32 pti_req_data_offset;
 	struct iwl_tx_cmd pti_req_tx_cmd;
-	u8 pti_req_template[0];
+	u8 pti_req_template[];
 } __packed; /* TDLS_CONFIG_CMD_API_S_VER_1 */
 
 /**

@@ -84,7 +84,7 @@ struct iwl_fw_error_dump_data {
 struct iwl_fw_error_dump_file {
 	__le32 barker;
 	__le32 file_len;
-	u8 data[0];
+	u8 data[];
 } __packed;
 
 /**

@@ -145,7 +145,7 @@ struct iwl_tlv_ucode_header {
 	 * Note that each TLV is padded to a length
 	 * that is a multiple of 4 for alignment.
	 */
-	u8 data[0];
+	u8 data[];
 };
 
 /*
@@ -603,7 +603,7 @@ struct iwl_fw_dbg_dest_tlv_v1 {
 	__le32 wrap_count;
 	u8 base_shift;
 	u8 end_shift;
-	struct iwl_fw_dbg_reg_op reg_ops[0];
+	struct iwl_fw_dbg_reg_op reg_ops[];
 } __packed;
 
 /* Mask of the register for defining the LDBG MAC2SMEM buffer SMEM size */
@@ -623,14 +623,14 @@ struct iwl_fw_dbg_dest_tlv {
 	__le32 wrap_count;
 	u8 base_shift;
 	u8 size_shift;
-	struct iwl_fw_dbg_reg_op reg_ops[0];
+	struct iwl_fw_dbg_reg_op reg_ops[];
 } __packed;
 
 struct iwl_fw_dbg_conf_hcmd {
 	u8 id;
 	u8 reserved;
 	__le16 len;
-	u8 data[0];
+	u8 data[];
 } __packed;
 
 /**
@@ -705,7 +705,7 @@ struct iwl_fw_dbg_trigger_tlv {
 	u8 flags;
 	u8 reserved[5];
-	u8 data[0];
+	u8 data[];
 } __packed;
 
 #define FW_DBG_START_FROM_ALIVE 0

@@ -298,7 +298,7 @@ struct iwl_sap_hdr {
 	__le16 type;
 	__le16 len;
 	__le32 seq_num;
-	u8 payload[0];
+	u8 payload[];
 };
 
 /**

@@ -915,7 +915,7 @@ iwl_mvm_get_wowlan_config(struct iwl_mvm *mvm,
 	/* TODO: wowlan_config_cmd->wowlan_ba_teardown_tids */
 
 	wowlan_config_cmd->is_11n_connection =
-					ap_sta->ht_cap.ht_supported;
+					ap_sta->deflink.ht_cap.ht_supported;
 	wowlan_config_cmd->flags = ENABLE_L3_FILTERING |
 		ENABLE_NBNS_FILTERING | ENABLE_DHCP_FILTERING;

@@ -1877,8 +1877,8 @@ static void iwl_mvm_set_pkt_ext_from_he_ppe(struct iwl_mvm *mvm,
 					    struct ieee80211_sta *sta,
 					    struct iwl_he_pkt_ext_v2 *pkt_ext)
 {
-	u8 nss = (sta->he_cap.ppe_thres[0] & IEEE80211_PPE_THRES_NSS_MASK) + 1;
-	u8 *ppe = &sta->he_cap.ppe_thres[0];
+	u8 nss = (sta->deflink.he_cap.ppe_thres[0] & IEEE80211_PPE_THRES_NSS_MASK) + 1;
+	u8 *ppe = &sta->deflink.he_cap.ppe_thres[0];
 	u8 ru_index_bitmap =
 		u8_get_bits(*ppe,
 			    IEEE80211_PPE_THRES_RU_INDEX_BITMASK_MASK);
@@ -1993,7 +1993,7 @@ static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
 		return;
 	}
 
-	if (!sta->he_cap.has_he) {
+	if (!sta->deflink.he_cap.has_he) {
 		rcu_read_unlock();
 		return;
 	}
@@ -2005,17 +2005,17 @@ static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
 		flags |= STA_CTXT_HE_RU_2MHZ_BLOCK;
 
 	/* HTC flags */
-	if (sta->he_cap.he_cap_elem.mac_cap_info[0] &
+	if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[0] &
 	    IEEE80211_HE_MAC_CAP0_HTC_HE)
 		sta_ctxt_cmd.htc_flags |= cpu_to_le32(IWL_HE_HTC_SUPPORT);
-	if ((sta->he_cap.he_cap_elem.mac_cap_info[1] &
+	if ((sta->deflink.he_cap.he_cap_elem.mac_cap_info[1] &
 	     IEEE80211_HE_MAC_CAP1_LINK_ADAPTATION) ||
-	    (sta->he_cap.he_cap_elem.mac_cap_info[2] &
+	    (sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] &
 	     IEEE80211_HE_MAC_CAP2_LINK_ADAPTATION)) {
 		u8 link_adap =
-			((sta->he_cap.he_cap_elem.mac_cap_info[2] &
+			((sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] &
 			  IEEE80211_HE_MAC_CAP2_LINK_ADAPTATION) << 1) +
-			 (sta->he_cap.he_cap_elem.mac_cap_info[1] &
+			 (sta->deflink.he_cap.he_cap_elem.mac_cap_info[1] &
 			  IEEE80211_HE_MAC_CAP1_LINK_ADAPTATION);
 
 		if (link_adap == 2)
@@ -2025,12 +2025,12 @@ static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
 			sta_ctxt_cmd.htc_flags |=
 				cpu_to_le32(IWL_HE_HTC_LINK_ADAP_BOTH);
 	}
-	if (sta->he_cap.he_cap_elem.mac_cap_info[2] & IEEE80211_HE_MAC_CAP2_BSR)
+	if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] & IEEE80211_HE_MAC_CAP2_BSR)
 		sta_ctxt_cmd.htc_flags |= cpu_to_le32(IWL_HE_HTC_BSR_SUPP);
-	if (sta->he_cap.he_cap_elem.mac_cap_info[3] &
+	if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[3] &
 	    IEEE80211_HE_MAC_CAP3_OMI_CONTROL)
 		sta_ctxt_cmd.htc_flags |= cpu_to_le32(IWL_HE_HTC_OMI_SUPP);
-	if (sta->he_cap.he_cap_elem.mac_cap_info[4] & IEEE80211_HE_MAC_CAP4_BQR)
+	if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[4] & IEEE80211_HE_MAC_CAP4_BQR)
 		sta_ctxt_cmd.htc_flags |= cpu_to_le32(IWL_HE_HTC_BQR_SUPP);
 
 	/*
@@ -2041,7 +2041,7 @@ static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
 	       sizeof(sta_ctxt_cmd.pkt_ext));
 
 	/* If PPE Thresholds exist, parse them into a FW-familiar format. */
-	if (sta->he_cap.he_cap_elem.phy_cap_info[6] &
+	if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[6] &
 	    IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT) {
 		iwl_mvm_set_pkt_ext_from_he_ppe(mvm, sta,
 						&sta_ctxt_cmd.pkt_ext);
@@ -2050,7 +2050,7 @@ static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
 	 * according to Common Nominal Packet Padding fiels. */
 	} else {
 		u8 nominal_padding =
-			u8_get_bits(sta->he_cap.he_cap_elem.phy_cap_info[9],
+			u8_get_bits(sta->deflink.he_cap.he_cap_elem.phy_cap_info[9],
 				    IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_MASK);
 		if (nominal_padding != IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_RESERVED)
 			iwl_mvm_set_pkt_ext_from_nominal_padding(&sta_ctxt_cmd.pkt_ext,
@@ -2058,11 +2058,11 @@ static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
 								 &flags);
 	}
 
-	if (sta->he_cap.he_cap_elem.mac_cap_info[2] &
+	if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] &
 	    IEEE80211_HE_MAC_CAP2_32BIT_BA_BITMAP)
 		flags |= STA_CTXT_HE_32BIT_BA_BITMAP;
 
-	if (sta->he_cap.he_cap_elem.mac_cap_info[2] &
+	if (sta->deflink.he_cap.he_cap_elem.mac_cap_info[2] &
 	    IEEE80211_HE_MAC_CAP2_ACK_EN)
 		flags |= STA_CTXT_HE_ACK_ENABLED;
 
@@ -3157,7 +3157,7 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
 		}
 
 		if (vif->type == NL80211_IFTYPE_STATION)
-			vif->bss_conf.he_support = sta->he_cap.has_he;
+			vif->bss_conf.he_support = sta->deflink.he_cap.has_he;
 
 		if (sta->tdls &&
 		    (vif->p2p ||
@@ -3189,17 +3189,17 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
 	} else if (old_state == IEEE80211_STA_AUTH &&
 		   new_state == IEEE80211_STA_ASSOC) {
 		if (vif->type == NL80211_IFTYPE_AP) {
-			vif->bss_conf.he_support = sta->he_cap.has_he;
+			vif->bss_conf.he_support = sta->deflink.he_cap.has_he;
 			mvmvif->ap_assoc_sta_count++;
 			iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
 			if (vif->bss_conf.he_support &&
 			    !iwlwifi_mod_params.disable_11ax)
 				iwl_mvm_cfg_he_sta(mvm, vif, mvm_sta->sta_id);
 		} else if (vif->type == NL80211_IFTYPE_STATION) {
-			vif->bss_conf.he_support = sta->he_cap.has_he;
+			vif->bss_conf.he_support = sta->deflink.he_cap.has_he;
 
 			mvmvif->he_ru_2mhz_block = false;
-			if (sta->he_cap.has_he)
+			if (sta->deflink.he_cap.has_he)
 				iwl_mvm_check_he_obss_narrow_bw_ru(hw, vif);
 
 			iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
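
The iwl_mvm_cfg_he_sta() hunks above are again the deflink conversion, but they also show the recurring shape of this code: test an HE MAC capability bit advertised by the peer and, if set, OR the matching firmware flag into htc_flags. A compile-only sketch of that translation, using stand-in names and bit values rather than the real mac80211/iwlwifi constants:

/* Capability-bit -> firmware-flag translation pattern.
 * All names and values below are simplified stand-ins for illustration.
 */
#include <stdint.h>
#include <stdio.h>

#define HE_MAC_CAP2_BSR		0x01	/* stand-in for IEEE80211_HE_MAC_CAP2_BSR */
#define HE_MAC_CAP3_OMI		0x02	/* stand-in for ..._CAP3_OMI_CONTROL */

#define FW_HTC_BSR_SUPP		0x10	/* stand-in for IWL_HE_HTC_BSR_SUPP */
#define FW_HTC_OMI_SUPP		0x20	/* stand-in for IWL_HE_HTC_OMI_SUPP */

struct he_cap_elem {
	uint8_t mac_cap_info[6];
};

static uint32_t build_htc_flags(const struct he_cap_elem *elem)
{
	uint32_t htc_flags = 0;

	/* Same shape as the hunks above: check a peer capability bit,
	 * then set the corresponding firmware flag.
	 */
	if (elem->mac_cap_info[2] & HE_MAC_CAP2_BSR)
		htc_flags |= FW_HTC_BSR_SUPP;
	if (elem->mac_cap_info[3] & HE_MAC_CAP3_OMI)
		htc_flags |= FW_HTC_OMI_SUPP;

	return htc_flags;
}

int main(void)
{
	struct he_cap_elem elem = {
		.mac_cap_info = { 0, 0, HE_MAC_CAP2_BSR, 0, 0, 0 },
	};

	printf("htc_flags = 0x%x\n", build_htc_flags(&elem));
	return 0;
}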

Some files were not shown because too many files have changed in this diff Show More