Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Conflicts:

drivers/net/ethernet/google/gve/gve.h
  3ce9345580 ("gve: Secure enough bytes in the first TX desc for all TCP pkts")
  75eaae158b ("gve: Add XDP DROP and TX support for GQI-QPL format")
https://lore.kernel.org/all/20230406104927.45d176f5@canb.auug.org.au/
https://lore.kernel.org/all/c5872985-1a95-0bc8-9dcc-b6f23b439e9d@tessares.net/

Adjacent changes:

net/can/isotp.c
  051737439e ("can: isotp: fix race between isotp_sendsmg() and isotp_release()")
  96d1c81e6a ("can: isotp: add module parameter for maximum pdu size")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit d9c960675a (Jakub Kicinski, 2023-04-06 11:58:36 -07:00)
203 changed files with 1549 additions and 730 deletions

@@ -36,7 +36,6 @@ problems and bugs in particular.
reporting-issues
reporting-regressions
security-bugs
bug-hunting
bug-bisect
tainted-kernels

@@ -395,7 +395,7 @@ might want to be aware of; it for example explains how to add your issue to the
list of tracked regressions, to ensure it won't fall through the cracks.
What qualifies as security issue is left to your judgment. Consider reading
Documentation/admin-guide/security-bugs.rst before proceeding, as it
Documentation/process/security-bugs.rst before proceeding, as it
provides additional details how to best handle security issues.
An issue is a 'really severe problem' when something totally unacceptably bad
@@ -1269,7 +1269,7 @@ them when sending the report by mail. If you filed it in a bug tracker, forward
the report's text to these addresses; but on top of it put a small note where
you mention that you filed it with a link to the ticket.
See Documentation/admin-guide/security-bugs.rst for more information.
See Documentation/process/security-bugs.rst for more information.
Duties after the report went out

@@ -96,9 +96,11 @@ $defs:
2: Lower Slew rate (slower edges)
3: Reserved (No adjustments)
bias-bus-hold: true
bias-pull-down: true
bias-pull-up: true
bias-disable: true
input-enable: true
output-high: true
output-low: true

@@ -138,7 +138,7 @@ required reading:
philosophy and is very important for people moving to Linux from
development on other Operating Systems.
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`
:ref:`Documentation/process/security-bugs.rst <securitybugs>`
If you feel you have found a security problem in the Linux kernel,
please follow the steps in this document to help notify the kernel
developers, and help solve the issue.

@@ -35,6 +35,14 @@ Below are the essential guides that every developer should read.
kernel-enforcement-statement
kernel-driver-statement
For security issues, see:
.. toctree::
:maxdepth: 1
security-bugs
embargoed-hardware-issues
Other guides to the community that are of interest to most developers are:
.. toctree::
@@ -47,7 +55,6 @@ Other guides to the community that are of interest to most developers are:
submit-checklist
kernel-docs
deprecated
embargoed-hardware-issues
maintainers
researcher-guidelines

@@ -68,7 +68,7 @@ Before contributing, carefully read the appropriate documentation:
* Documentation/process/development-process.rst
* Documentation/process/submitting-patches.rst
* Documentation/admin-guide/reporting-issues.rst
* Documentation/admin-guide/security-bugs.rst
* Documentation/process/security-bugs.rst
Then send a patch (including a commit log with all the details listed
below) and follow up on any feedback from other developers.

@@ -39,7 +39,7 @@ Procedure for submitting patches to the -stable tree
Security patches should not be handled (solely) by the -stable review
process but should follow the procedures in
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.
:ref:`Documentation/process/security-bugs.rst <securitybugs>`.
For all other submissions, choose one of the following procedures
-----------------------------------------------------------------

@@ -254,7 +254,7 @@ If you have a patch that fixes an exploitable security bug, send that patch
to security@kernel.org. For severe bugs, a short embargo may be considered
to allow distributors to get the patch out to users; in such cases,
obviously, the patch should not be sent to any public lists. See also
Documentation/admin-guide/security-bugs.rst.
Documentation/process/security-bugs.rst.
Patches that fix a severe bug in a released kernel should be directed
toward the stable maintainers by putting a line like this::

@@ -1,6 +1,6 @@
.. include:: ../disclaimer-ita.rst
:Original: :ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`
:Original: :ref:`Documentation/process/security-bugs.rst <securitybugs>`
.. _it_securitybugs:

@@ -272,7 +272,7 @@ embargo potrebbe essere preso in considerazione per dare il tempo alle
distribuzioni di prendere la patch e renderla disponibile ai loro utenti;
in questo caso, ovviamente, la patch non dovrebbe essere inviata su alcuna
lista di discussione pubblica. Leggete anche
Documentation/admin-guide/security-bugs.rst.
Documentation/process/security-bugs.rst.
Patch che correggono bachi importanti su un kernel già rilasciato, dovrebbero
essere inviate ai manutentori dei kernel stabili aggiungendo la seguente riga::

@@ -167,7 +167,7 @@ linux-api@vger.kernel.org に送ることを勧めます。
このドキュメントは Linux 開発の思想を理解するのに非常に重要です。
そして、他のOSでの開発者が Linux に移る時にとても重要です。
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`
:ref:`Documentation/process/security-bugs.rst <securitybugs>`
もし Linux カーネルでセキュリティ問題を発見したように思ったら、こ
のドキュメントのステップに従ってカーネル開発者に連絡し、問題解決を
支援してください。

@@ -157,7 +157,7 @@ mtk.manpages@gmail.com의 메인테이너에게 보낼 것을 권장한다.
리눅스로 전향하는 사람들에게는 매우 중요하다.
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`
:ref:`Documentation/process/security-bugs.rst <securitybugs>`
여러분들이 리눅스 커널의 보안 문제를 발견했다고 생각한다면 이 문서에
나온 단계에 따라서 커널 개발자들에게 알리고 그 문제를 해결할 수 있도록
도와 달라.

@@ -135,7 +135,7 @@ de obligada lectura:
de Linux y es muy importante para las personas que se mudan a Linux
tras desarrollar otros sistemas operativos.
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`
:ref:`Documentation/process/security-bugs.rst <securitybugs>`
Si cree que ha encontrado un problema de seguridad en el kernel de
Linux, siga los pasos de este documento para ayudar a notificar a los
desarrolladores del kernel y ayudar a resolver el problema.

@@ -276,7 +276,7 @@ parche a security@kernel.org. Para errores graves, se debe mantener un
poco de discreción y permitir que los distribuidores entreguen el parche a
los usuarios; en esos casos, obviamente, el parche no debe enviarse a
ninguna lista pública. Revise también
Documentation/admin-guide/security-bugs.rst.
Documentation/process/security-bugs.rst.
Los parches que corrigen un error grave en un kernel en uso deben dirigirse
hacia los maintainers estables poniendo una línea como esta::

@@ -1,6 +1,6 @@
.. include:: ../disclaimer-zh_CN.rst
:Original: :doc:`../../../admin-guide/security-bugs`
:Original: :doc:`../../../process/security-bugs`
:译者:

@@ -125,7 +125,7 @@ Linux内核代码中包含有大量的文档。这些文档对于学习如何与
这篇文档对于理解Linux的开发哲学至关重要。对于将开发平台从其他操作系
统转移到Linux的人来说也很重要。
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`
:ref:`Documentation/process/security-bugs.rst <securitybugs>`
如果你认为自己发现了Linux内核的安全性问题请根据这篇文档中的步骤来
提醒其他内核开发者并帮助解决这个问题。

@@ -2,7 +2,7 @@
.. include:: ../disclaimer-zh_TW.rst
:Original: :doc:`../../../admin-guide/security-bugs`
:Original: :doc:`../../../process/security-bugs`
:譯者:

@@ -128,7 +128,7 @@ Linux內核代碼中包含有大量的文檔。這些文檔對於學習如何與
這篇文檔對於理解Linux的開發哲學至關重要。對於將開發平台從其他操作系
統轉移到Linux的人來說也很重要。
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`
:ref:`Documentation/process/security-bugs.rst <securitybugs>`
如果你認爲自己發現了Linux內核的安全性問題請根據這篇文檔中的步驟來
提醒其他內核開發者並幫助解決這個問題。

@@ -8296,11 +8296,11 @@ ENOSYS for the others.
8.35 KVM_CAP_PMU_CAPABILITY
---------------------------
:Capability KVM_CAP_PMU_CAPABILITY
:Capability: KVM_CAP_PMU_CAPABILITY
:Architectures: x86
:Type: vm
:Parameters: arg[0] is bitmask of PMU virtualization capabilities.
:Returns 0 on success, -EINVAL when arg[0] contains invalid bits
:Returns: 0 on success, -EINVAL when arg[0] contains invalid bits
This capability alters PMU virtualization in KVM.

@@ -73,7 +73,7 @@ Tips for patch submitters
and ideally, should come with a patch proposal. Please do not send
automated reports to this list either. Such bugs will be handled
better and faster in the usual public places. See
Documentation/admin-guide/security-bugs.rst for details.
Documentation/process/security-bugs.rst for details.
8. Happy hacking.
@@ -18312,8 +18312,9 @@ F: drivers/s390/block/dasd*
F: include/linux/dasd_mod.h
S390 IOMMU (PCI)
M: Niklas Schnelle <schnelle@linux.ibm.com>
M: Matthew Rosato <mjrosato@linux.ibm.com>
M: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
R: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
F: drivers/iommu/s390-iommu.c
@@ -18822,7 +18823,7 @@ F: include/uapi/linux/sed*
SECURITY CONTACT
M: Security Officers <security@kernel.org>
S: Supported
F: Documentation/admin-guide/security-bugs.rst
F: Documentation/process/security-bugs.rst
SECURITY SUBSYSTEM
M: Paul Moore <paul@paul-moore.com>

@@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 3
SUBLEVEL = 0
EXTRAVERSION = -rc4
EXTRAVERSION = -rc5
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*

@@ -220,6 +220,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_VCPU_ATTRIBUTES:
case KVM_CAP_PTP_KVM:
case KVM_CAP_ARM_SYSTEM_SUSPEND:
case KVM_CAP_IRQFD_RESAMPLE:
r = 1;
break;
case KVM_CAP_SET_GUEST_DEBUG2:

@@ -5,6 +5,8 @@
#include <asm/bmips.h>
#include <asm/io.h>
bool bmips_rac_flush_disable;
void arch_sync_dma_for_cpu_all(void)
{
void __iomem *cbr = BMIPS_GET_CBR();
@@ -15,6 +17,9 @@ void arch_sync_dma_for_cpu_all(void)
boot_cpu_type() != CPU_BMIPS4380)
return;
if (unlikely(bmips_rac_flush_disable))
return;
/* Flush stale data out of the readahead cache */
cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG);
__raw_writel(cfg | 0x100, cbr + BMIPS_RAC_CONFIG);

@@ -35,6 +35,8 @@
#define REG_BCM6328_OTP ((void __iomem *)CKSEG1ADDR(0x1000062c))
#define BCM6328_TP1_DISABLED BIT(9)
extern bool bmips_rac_flush_disable;
static const unsigned long kbase = VMLINUX_LOAD_ADDRESS & 0xfff00000;
struct bmips_quirk {
@@ -104,6 +106,12 @@ static void bcm6358_quirks(void)
* disable SMP for now
*/
bmips_smp_enabled = 0;
/*
* RAC flush causes kernel panics on BCM6358 when booting from TP1
* because the bootloader is not initializing it properly.
*/
bmips_rac_flush_disable = !!(read_c0_brcm_cmt_local() & (1 << 31));
}
static void bcm6368_quirks(void)

@@ -148,6 +148,11 @@ static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
*/
}
static inline bool __pte_protnone(unsigned long pte)
{
return (pte & (pgprot_val(PAGE_NONE) | _PAGE_RWX)) == pgprot_val(PAGE_NONE);
}
static inline bool __pte_flags_need_flush(unsigned long oldval,
unsigned long newval)
{
@@ -164,8 +169,8 @@ static inline bool __pte_flags_need_flush(unsigned long oldval,
/*
* We do not expect kernel mappings or non-PTEs or not-present PTEs.
*/
VM_WARN_ON_ONCE(oldval & _PAGE_PRIVILEGED);
VM_WARN_ON_ONCE(newval & _PAGE_PRIVILEGED);
VM_WARN_ON_ONCE(!__pte_protnone(oldval) && oldval & _PAGE_PRIVILEGED);
VM_WARN_ON_ONCE(!__pte_protnone(newval) && newval & _PAGE_PRIVILEGED);
VM_WARN_ON_ONCE(!(oldval & _PAGE_PTE));
VM_WARN_ON_ONCE(!(newval & _PAGE_PTE));
VM_WARN_ON_ONCE(!(oldval & _PAGE_PRESENT));

@@ -290,6 +290,9 @@ static int gpr_set(struct task_struct *target, const struct user_regset *regset,
static int ppr_get(struct task_struct *target, const struct user_regset *regset,
struct membuf to)
{
if (!target->thread.regs)
return -EINVAL;
return membuf_write(&to, &target->thread.regs->ppr, sizeof(u64));
}
@@ -297,6 +300,9 @@ static int ppr_set(struct task_struct *target, const struct user_regset *regset,
unsigned int pos, unsigned int count, const void *kbuf,
const void __user *ubuf)
{
if (!target->thread.regs)
return -EINVAL;
return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&target->thread.regs->ppr, 0, sizeof(u64));
}

@@ -576,6 +576,12 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
break;
#endif
#ifdef CONFIG_HAVE_KVM_IRQFD
case KVM_CAP_IRQFD_RESAMPLE:
r = !xive_enabled();
break;
#endif
case KVM_CAP_PPC_ALLOC_HTAB:
r = hv_enabled;
break;

@@ -856,6 +856,13 @@ int pseries_vas_dlpar_cpu(void)
{
int new_nr_creds, rc;
/*
* NX-GZIP is not enabled. Nothing to do for DLPAR event
*/
if (!copypaste_feat)
return 0;
rc = h_query_vas_capabilities(H_QUERY_VAS_CAPABILITIES,
vascaps[VAS_GZIP_DEF_FEAT_TYPE].feat,
(u64)virt_to_phys(&hv_cop_caps));
@@ -1012,6 +1019,7 @@ static int __init pseries_vas_init(void)
* Linux supports user space COPY/PASTE only with Radix
*/
if (!radix_enabled()) {
copypaste_feat = false;
pr_err("API is supported only with radix page tables\n");
return -ENOTSUPP;
}

@@ -126,6 +126,7 @@ config RISCV
select OF_IRQ
select PCI_DOMAINS_GENERIC if PCI
select PCI_MSI if PCI
select RISCV_ALTERNATIVE if !XIP_KERNEL
select RISCV_INTC
select RISCV_TIMER if RISCV_SBI
select SIFIVE_PLIC
@@ -401,9 +402,8 @@ config RISCV_ISA_C
config RISCV_ISA_SVPBMT
bool "SVPBMT extension support"
depends on 64BIT && MMU
depends on !XIP_KERNEL
depends on RISCV_ALTERNATIVE
default y
select RISCV_ALTERNATIVE
help
Adds support to dynamically detect the presence of the SVPBMT
ISA-extension (Supervisor-mode: page-based memory types) and
@@ -428,8 +428,8 @@ config TOOLCHAIN_HAS_ZBB
config RISCV_ISA_ZBB
bool "Zbb extension support for bit manipulation instructions"
depends on TOOLCHAIN_HAS_ZBB
depends on !XIP_KERNEL && MMU
select RISCV_ALTERNATIVE
depends on MMU
depends on RISCV_ALTERNATIVE
default y
help
Adds support to dynamically detect the presence of the ZBB
@@ -443,9 +443,9 @@ config RISCV_ISA_ZBB
config RISCV_ISA_ZICBOM
bool "Zicbom extension support for non-coherent DMA operation"
depends on !XIP_KERNEL && MMU
depends on MMU
depends on RISCV_ALTERNATIVE
default y
select RISCV_ALTERNATIVE
select RISCV_DMA_NONCOHERENT
help
Adds support to dynamically detect the presence of the ZICBOM

@@ -2,8 +2,7 @@ menu "CPU errata selection"
config ERRATA_SIFIVE
bool "SiFive errata"
depends on !XIP_KERNEL
select RISCV_ALTERNATIVE
depends on RISCV_ALTERNATIVE
help
All SiFive errata Kconfig depend on this Kconfig. Disabling
this Kconfig will disable all SiFive errata. Please say "Y"
@@ -35,8 +34,7 @@ config ERRATA_SIFIVE_CIP_1200
config ERRATA_THEAD
bool "T-HEAD errata"
depends on !XIP_KERNEL
select RISCV_ALTERNATIVE
depends on RISCV_ALTERNATIVE
help
All T-HEAD errata Kconfig depend on this Kconfig. Disabling
this Kconfig will disable all T-HEAD errata. Please say "Y"

@@ -57,18 +57,31 @@ struct riscv_isa_ext_data {
unsigned int isa_ext_id;
};
unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap);
#define riscv_isa_extension_mask(ext) BIT_MASK(RISCV_ISA_EXT_##ext)
bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit);
#define riscv_isa_extension_available(isa_bitmap, ext) \
__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_##ext)
static __always_inline bool
riscv_has_extension_likely(const unsigned long ext)
{
compiletime_assert(ext < RISCV_ISA_EXT_MAX,
"ext must be < RISCV_ISA_EXT_MAX");
asm_volatile_goto(
ALTERNATIVE("j %l[l_no]", "nop", 0, %[ext], 1)
:
: [ext] "i" (ext)
:
: l_no);
if (IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
asm_volatile_goto(
ALTERNATIVE("j %l[l_no]", "nop", 0, %[ext], 1)
:
: [ext] "i" (ext)
:
: l_no);
} else {
if (!__riscv_isa_extension_available(NULL, ext))
goto l_no;
}
return true;
l_no:
@@ -81,26 +94,23 @@ riscv_has_extension_unlikely(const unsigned long ext)
compiletime_assert(ext < RISCV_ISA_EXT_MAX,
"ext must be < RISCV_ISA_EXT_MAX");
asm_volatile_goto(
ALTERNATIVE("nop", "j %l[l_yes]", 0, %[ext], 1)
:
: [ext] "i" (ext)
:
: l_yes);
if (IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
asm_volatile_goto(
ALTERNATIVE("nop", "j %l[l_yes]", 0, %[ext], 1)
:
: [ext] "i" (ext)
:
: l_yes);
} else {
if (__riscv_isa_extension_available(NULL, ext))
goto l_yes;
}
return false;
l_yes:
return true;
}
unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap);
#define riscv_isa_extension_mask(ext) BIT_MASK(RISCV_ISA_EXT_##ext)
bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit);
#define riscv_isa_extension_available(isa_bitmap, ext) \
__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_##ext)
#endif
#endif /* _ASM_RISCV_HWCAP_H */

@@ -271,10 +271,18 @@ static int handle_prog(struct kvm_vcpu *vcpu)
* handle_external_interrupt - used for external interruption interceptions
* @vcpu: virtual cpu
*
* This interception only occurs if the CPUSTAT_EXT_INT bit was set, or if
* the new PSW does not have external interrupts disabled. In the first case,
* we've got to deliver the interrupt manually, and in the second case, we
* drop to userspace to handle the situation there.
* This interception occurs if:
* - the CPUSTAT_EXT_INT bit was already set when the external interrupt
* occurred. In this case, the interrupt needs to be injected manually to
* preserve interrupt priority.
* - the external new PSW has external interrupts enabled, which will cause an
* interruption loop. We drop to userspace in this case.
*
* The latter case can be detected by inspecting the external mask bit in the
* external new psw.
*
* Under PV, only the latter case can occur, since interrupt priorities are
* handled in the ultravisor.
*/
static int handle_external_interrupt(struct kvm_vcpu *vcpu)
{
@@ -285,10 +293,18 @@ static int handle_external_interrupt(struct kvm_vcpu *vcpu)
vcpu->stat.exit_external_interrupt++;
rc = read_guest_lc(vcpu, __LC_EXT_NEW_PSW, &newpsw, sizeof(psw_t));
if (rc)
return rc;
/* We can not handle clock comparator or timer interrupt with bad PSW */
if (kvm_s390_pv_cpu_is_protected(vcpu)) {
newpsw = vcpu->arch.sie_block->gpsw;
} else {
rc = read_guest_lc(vcpu, __LC_EXT_NEW_PSW, &newpsw, sizeof(psw_t));
if (rc)
return rc;
}
/*
* Clock comparator or timer interrupt with external interrupt enabled
* will cause interrupt loop. Drop to userspace.
*/
if ((eic == EXT_IRQ_CLK_COMP || eic == EXT_IRQ_CPU_TIMER) &&
(newpsw.mask & PSW_MASK_EXT))
return -EOPNOTSUPP;

@@ -573,6 +573,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_S390_VCPU_RESETS:
case KVM_CAP_SET_GUEST_DEBUG:
case KVM_CAP_S390_DIAG318:
case KVM_CAP_IRQFD_RESAMPLE:
r = 1;
break;
case KVM_CAP_SET_GUEST_DEBUG2:

@@ -358,12 +358,16 @@ static void __init ms_hyperv_init_platform(void)
* To mirror what Windows does we should extract CPU management
* features and use the ReservedIdentityBit to detect if Linux is the
* root partition. But that requires negotiating CPU management
* interface (a process to be finalized).
* interface (a process to be finalized). For now, use the privilege
* flag as the indicator for running as root.
*
* For now, use the privilege flag as the indicator for running as
* root.
* Hyper-V should never specify running as root and as a Confidential
* VM. But to protect against a compromised/malicious Hyper-V trying
* to exploit root behavior to expose Confidential VM memory, ignore
* the root partition setting if also a Confidential VM.
*/
if (cpuid_ebx(HYPERV_CPUID_FEATURES) & HV_CPU_MANAGEMENT) {
if ((ms_hyperv.priv_high & HV_CPU_MANAGEMENT) &&
!(ms_hyperv.priv_high & HV_ISOLATION)) {
hv_root_partition = true;
pr_info("Hyper-V: running as root partition\n");
}

@@ -368,9 +368,39 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
mask_after = e->fields.mask;
if (mask_before != mask_after)
kvm_fire_mask_notifiers(ioapic->kvm, KVM_IRQCHIP_IOAPIC, index, mask_after);
if (e->fields.trig_mode == IOAPIC_LEVEL_TRIG
&& ioapic->irr & (1 << index))
ioapic_service(ioapic, index, false);
if (e->fields.trig_mode == IOAPIC_LEVEL_TRIG &&
ioapic->irr & (1 << index) && !e->fields.mask && !e->fields.remote_irr) {
/*
* Pending status in irr may be outdated: the IRQ line may have
* already been deasserted by a device while the IRQ was masked.
* This occurs, for instance, if the interrupt is handled in a
* Linux guest as a oneshot interrupt (IRQF_ONESHOT). In this
* case the guest acknowledges the interrupt to the device in
* its threaded irq handler, i.e. after the EOI but before
* unmasking, so at the time of unmasking the IRQ line is
* already down but our pending irr bit is still set. In such
* cases, injecting this pending interrupt to the guest is
* buggy: the guest will receive an extra unwanted interrupt.
*
* So we need to check here if the IRQ is actually still pending.
* As we are generally not able to probe the IRQ line status
* directly, we do it through irqfd resampler. Namely, we clear
* the pending status and notify the resampler that this interrupt
* is done, without actually injecting it into the guest. If the
* IRQ line is actually already deasserted, we are done. If it is
* still asserted, a new interrupt will be shortly triggered
* through irqfd and injected into the guest.
*
* If, however, it's not possible to resample (no irqfd resampler
* registered for this irq), then unconditionally inject this
* pending interrupt into the guest, so the guest will not miss
* an interrupt, although may get an extra unwanted interrupt.
*/
if (kvm_notify_irqfd_resampler(ioapic->kvm, KVM_IRQCHIP_IOAPIC, index))
ioapic->irr &= ~(1 << index);
else
ioapic_service(ioapic, index, false);
}
if (e->fields.delivery_mode == APIC_DM_FIXED) {
struct kvm_lapic_irq irq;
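
For reference, the guest-side oneshot pattern the new comment describes looks roughly like this (a hedged sketch with hypothetical example_* names, not code from this commit). With IRQF_ONESHOT the line stays masked until the threaded handler returns, so the device is acked, and the line deasserted, before the unmask that the ioapic emulation observes:

#include <linux/interrupt.h>

static irqreturn_t example_hardirq(int irq, void *dev_id)
{
	/* The IRQ remains masked until example_thread() completes */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t example_thread(int irq, void *dev_id)
{
	/* The device is acked here while the IRQ is still masked, so the
	 * ioapic's pending (irr) bit can already be stale at unmask time. */
	return IRQ_HANDLED;
}

static int example_request(unsigned int irq, void *dev_id)
{
	return request_threaded_irq(irq, example_hardirq, example_thread,
				    IRQF_ONESHOT, "example", dev_id);
}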

@@ -12,6 +12,11 @@ int hv_remote_flush_tlb_with_range(struct kvm *kvm,
int hv_remote_flush_tlb(struct kvm *kvm);
void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp);
#else /* !CONFIG_HYPERV */
static inline int hv_remote_flush_tlb(struct kvm *kvm)
{
return -EOPNOTSUPP;
}
static inline void hv_track_root_tdp(struct kvm_vcpu *vcpu, hpa_t root_tdp)
{
}

@@ -3729,7 +3729,7 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
}
static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
static void svm_flush_tlb_asid(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -3753,6 +3753,37 @@ static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
svm->current_vmcb->asid_generation--;
}
static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
{
hpa_t root_tdp = vcpu->arch.mmu->root.hpa;
/*
* When running on Hyper-V with EnlightenedNptTlb enabled, explicitly
* flush the NPT mappings via hypercall as flushing the ASID only
* affects virtual to physical mappings, it does not invalidate guest
* physical to host physical mappings.
*/
if (svm_hv_is_enlightened_tlb_enabled(vcpu) && VALID_PAGE(root_tdp))
hyperv_flush_guest_mapping(root_tdp);
svm_flush_tlb_asid(vcpu);
}
static void svm_flush_tlb_all(struct kvm_vcpu *vcpu)
{
/*
* When running on Hyper-V with EnlightenedNptTlb enabled, remote TLB
* flushes should be routed to hv_remote_flush_tlb() without requesting
* a "regular" remote flush. Reaching this point means either there's
* a KVM bug or a prior hv_remote_flush_tlb() call failed, both of
* which might be fatal to the guest. Yell, but try to recover.
*/
if (WARN_ON_ONCE(svm_hv_is_enlightened_tlb_enabled(vcpu)))
hv_remote_flush_tlb(vcpu->kvm);
svm_flush_tlb_asid(vcpu);
}
static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -4745,10 +4776,10 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.set_rflags = svm_set_rflags,
.get_if_flag = svm_get_if_flag,
.flush_tlb_all = svm_flush_tlb_current,
.flush_tlb_all = svm_flush_tlb_all,
.flush_tlb_current = svm_flush_tlb_current,
.flush_tlb_gva = svm_flush_tlb_gva,
.flush_tlb_guest = svm_flush_tlb_current,
.flush_tlb_guest = svm_flush_tlb_asid,
.vcpu_pre_run = svm_vcpu_pre_run,
.vcpu_run = svm_vcpu_run,

@@ -6,6 +6,8 @@
#ifndef __ARCH_X86_KVM_SVM_ONHYPERV_H__
#define __ARCH_X86_KVM_SVM_ONHYPERV_H__
#include <asm/mshyperv.h>
#if IS_ENABLED(CONFIG_HYPERV)
#include "kvm_onhyperv.h"
@ -15,6 +17,14 @@ static struct kvm_x86_ops svm_x86_ops;
int svm_hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu);
static inline bool svm_hv_is_enlightened_tlb_enabled(struct kvm_vcpu *vcpu)
{
struct hv_vmcb_enlightenments *hve = &to_svm(vcpu)->vmcb->control.hv_enlightenments;
return ms_hyperv.nested_features & HV_X64_NESTED_ENLIGHTENED_TLB &&
!!hve->hv_enlightenments_control.enlightened_npt_tlb;
}
static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
{
struct hv_vmcb_enlightenments *hve = &vmcb->control.hv_enlightenments;
@@ -80,6 +90,11 @@ static inline void svm_hv_update_vp_id(struct vmcb *vmcb, struct kvm_vcpu *vcpu)
}
#else
static inline bool svm_hv_is_enlightened_tlb_enabled(struct kvm_vcpu *vcpu)
{
return false;
}
static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
{
}

@@ -3868,7 +3868,12 @@ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu)
exit_qual = 0;
}
if (ex->has_error_code) {
/*
* Unlike AMD's Paged Real Mode, which reports an error code on #PF
* VM-Exits even if the CPU is in Real Mode, Intel VMX never sets the
* "has error code" flags on VM-Exit if the CPU is in Real Mode.
*/
if (ex->has_error_code && is_protmode(vcpu)) {
/*
* Intel CPUs do not generate error codes with bits 31:16 set,
* and more importantly VMX disallows setting bits 31:16 in the

@@ -4432,6 +4432,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_VAPIC:
case KVM_CAP_ENABLE_CAP:
case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
case KVM_CAP_IRQFD_RESAMPLE:
r = 1;
break;
case KVM_CAP_EXIT_HYPERCALL:
@@ -8903,6 +8904,8 @@ restart:
}
if (ctxt->have_exception) {
WARN_ON_ONCE(vcpu->mmio_needed && !vcpu->mmio_is_write);
vcpu->mmio_needed = false;
r = 1;
inject_emulated_exception(vcpu);
} else if (vcpu->arch.pio.count) {
@@ -9906,13 +9909,20 @@ int kvm_check_nested_events(struct kvm_vcpu *vcpu)
static void kvm_inject_exception(struct kvm_vcpu *vcpu)
{
/*
* Suppress the error code if the vCPU is in Real Mode, as Real Mode
* exceptions don't report error codes. The presence of an error code
* is carried with the exception and only stripped when the exception
* is injected as intercepted #PF VM-Exits for AMD's Paged Real Mode do
* report an error code despite the CPU being in Real Mode.
*/
vcpu->arch.exception.has_error_code &= is_protmode(vcpu);
trace_kvm_inj_exception(vcpu->arch.exception.vector,
vcpu->arch.exception.has_error_code,
vcpu->arch.exception.error_code,
vcpu->arch.exception.injected);
if (vcpu->arch.exception.error_code && !is_protmode(vcpu))
vcpu->arch.exception.error_code = false;
static_call(kvm_x86_inject_exception)(vcpu);
}

@@ -461,26 +461,22 @@ ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32
job->cmd_buf_vpu_addr = bo->vpu_addr + commands_offset;
ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, buf_count,
&acquire_ctx);
ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx);
if (ret) {
ivpu_warn(vdev, "Failed to lock reservations: %d\n", ret);
return ret;
}
for (i = 0; i < buf_count; i++) {
ret = dma_resv_reserve_fences(job->bos[i]->base.resv, 1);
if (ret) {
ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret);
goto unlock_reservations;
}
ret = dma_resv_reserve_fences(bo->base.resv, 1);
if (ret) {
ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret);
goto unlock_reservations;
}
for (i = 0; i < buf_count; i++)
dma_resv_add_fence(job->bos[i]->base.resv, job->done_fence, DMA_RESV_USAGE_WRITE);
dma_resv_add_fence(bo->base.resv, job->done_fence, DMA_RESV_USAGE_WRITE);
unlock_reservations:
drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, buf_count, &acquire_ctx);
drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx);
wmb(); /* Flush write combining buffers */
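
The ordering the new code preserves, lock the reservation, reserve a fence slot, then add the fence, is the dma_resv API contract; a minimal sketch under assumed names (example_fence_bo() is hypothetical):

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>

static int example_fence_bo(struct drm_gem_object *obj, struct dma_fence *fence)
{
	struct ww_acquire_ctx ctx;
	int ret;

	ret = drm_gem_lock_reservations(&obj, 1, &ctx);
	if (ret)
		return ret;

	/* A slot must be reserved before dma_resv_add_fence() may be called */
	ret = dma_resv_reserve_fences(obj->resv, 1);
	if (!ret)
		dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_WRITE);

	drm_gem_unlock_reservations(&obj, 1, &ctx);
	return ret;
}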

@@ -140,32 +140,28 @@ int ivpu_pm_suspend_cb(struct device *dev)
{
struct drm_device *drm = dev_get_drvdata(dev);
struct ivpu_device *vdev = to_ivpu_device(drm);
int ret;
unsigned long timeout;
ivpu_dbg(vdev, PM, "Suspend..\n");
ret = ivpu_suspend(vdev);
if (ret && vdev->pm->suspend_reschedule_counter) {
ivpu_dbg(vdev, PM, "Failed to enter idle, rescheduling suspend, retries left %d\n",
vdev->pm->suspend_reschedule_counter);
pm_schedule_suspend(dev, vdev->timeout.reschedule_suspend);
vdev->pm->suspend_reschedule_counter--;
return -EBUSY;
} else if (!vdev->pm->suspend_reschedule_counter) {
ivpu_warn(vdev, "Failed to enter idle, force suspend\n");
ivpu_pm_prepare_cold_boot(vdev);
} else {
ivpu_pm_prepare_warm_boot(vdev);
timeout = jiffies + msecs_to_jiffies(vdev->timeout.tdr);
while (!ivpu_hw_is_idle(vdev)) {
cond_resched();
if (time_after_eq(jiffies, timeout)) {
ivpu_err(vdev, "Failed to enter idle on system suspend\n");
return -EBUSY;
}
}
vdev->pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT;
ivpu_suspend(vdev);
ivpu_pm_prepare_warm_boot(vdev);
pci_save_state(to_pci_dev(dev));
pci_set_power_state(to_pci_dev(dev), PCI_D3hot);
ivpu_dbg(vdev, PM, "Suspend done.\n");
return ret;
return 0;
}
int ivpu_pm_resume_cb(struct device *dev)

@@ -459,85 +459,67 @@ out_free:
Notification Handling
-------------------------------------------------------------------------- */
/*
* acpi_bus_notify
* ---------------
* Callback for all 'system-level' device notifications (values 0x00-0x7F).
/**
* acpi_bus_notify - Global system-level (0x00-0x7F) notifications handler
* @handle: Target ACPI object.
* @type: Notification type.
* @data: Ignored.
*
* This only handles notifications related to device hotplug.
*/
static void acpi_bus_notify(acpi_handle handle, u32 type, void *data)
{
struct acpi_device *adev;
u32 ost_code = ACPI_OST_SC_NON_SPECIFIC_FAILURE;
bool hotplug_event = false;
switch (type) {
case ACPI_NOTIFY_BUS_CHECK:
acpi_handle_debug(handle, "ACPI_NOTIFY_BUS_CHECK event\n");
hotplug_event = true;
break;
case ACPI_NOTIFY_DEVICE_CHECK:
acpi_handle_debug(handle, "ACPI_NOTIFY_DEVICE_CHECK event\n");
hotplug_event = true;
break;
case ACPI_NOTIFY_DEVICE_WAKE:
acpi_handle_debug(handle, "ACPI_NOTIFY_DEVICE_WAKE event\n");
break;
return;
case ACPI_NOTIFY_EJECT_REQUEST:
acpi_handle_debug(handle, "ACPI_NOTIFY_EJECT_REQUEST event\n");
hotplug_event = true;
break;
case ACPI_NOTIFY_DEVICE_CHECK_LIGHT:
acpi_handle_debug(handle, "ACPI_NOTIFY_DEVICE_CHECK_LIGHT event\n");
/* TBD: Exactly what does 'light' mean? */
break;
return;
case ACPI_NOTIFY_FREQUENCY_MISMATCH:
acpi_handle_err(handle, "Device cannot be configured due "
"to a frequency mismatch\n");
break;
return;
case ACPI_NOTIFY_BUS_MODE_MISMATCH:
acpi_handle_err(handle, "Device cannot be configured due "
"to a bus mode mismatch\n");
break;
return;
case ACPI_NOTIFY_POWER_FAULT:
acpi_handle_err(handle, "Device has suffered a power fault\n");
break;
return;
default:
acpi_handle_debug(handle, "Unknown event type 0x%x\n", type);
break;
}
adev = acpi_get_acpi_dev(handle);
if (!adev)
goto err;
if (adev->dev.driver) {
struct acpi_driver *driver = to_acpi_driver(adev->dev.driver);
if (driver && driver->ops.notify &&
(driver->flags & ACPI_DRIVER_ALL_NOTIFY_EVENTS))
driver->ops.notify(adev, type);
}
if (!hotplug_event) {
acpi_put_acpi_dev(adev);
return;
}
if (ACPI_SUCCESS(acpi_hotplug_schedule(adev, type)))
adev = acpi_get_acpi_dev(handle);
if (adev && ACPI_SUCCESS(acpi_hotplug_schedule(adev, type)))
return;
acpi_put_acpi_dev(adev);
err:
acpi_evaluate_ost(handle, type, ost_code, NULL);
acpi_evaluate_ost(handle, type, ACPI_OST_SC_NON_SPECIFIC_FAILURE, NULL);
}
static void acpi_notify_device(acpi_handle handle, u32 event, void *data)
@@ -562,42 +544,51 @@ static u32 acpi_device_fixed_event(void *data)
return ACPI_INTERRUPT_HANDLED;
}
static int acpi_device_install_notify_handler(struct acpi_device *device)
static int acpi_device_install_notify_handler(struct acpi_device *device,
struct acpi_driver *acpi_drv)
{
acpi_status status;
if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON)
if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON) {
status =
acpi_install_fixed_event_handler(ACPI_EVENT_POWER_BUTTON,
acpi_device_fixed_event,
device);
else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON)
} else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON) {
status =
acpi_install_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON,
acpi_device_fixed_event,
device);
else
status = acpi_install_notify_handler(device->handle,
ACPI_DEVICE_NOTIFY,
} else {
u32 type = acpi_drv->flags & ACPI_DRIVER_ALL_NOTIFY_EVENTS ?
ACPI_ALL_NOTIFY : ACPI_DEVICE_NOTIFY;
status = acpi_install_notify_handler(device->handle, type,
acpi_notify_device,
device);
}
if (ACPI_FAILURE(status))
return -EINVAL;
return 0;
}
static void acpi_device_remove_notify_handler(struct acpi_device *device)
static void acpi_device_remove_notify_handler(struct acpi_device *device,
struct acpi_driver *acpi_drv)
{
if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON)
if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON) {
acpi_remove_fixed_event_handler(ACPI_EVENT_POWER_BUTTON,
acpi_device_fixed_event);
else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON)
} else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON) {
acpi_remove_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON,
acpi_device_fixed_event);
else
acpi_remove_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
} else {
u32 type = acpi_drv->flags & ACPI_DRIVER_ALL_NOTIFY_EVENTS ?
ACPI_ALL_NOTIFY : ACPI_DEVICE_NOTIFY;
acpi_remove_notify_handler(device->handle, type,
acpi_notify_device);
}
}
/* Handle events targeting \_SB device (at present only graceful shutdown) */
@@ -1039,7 +1030,7 @@ static int acpi_device_probe(struct device *dev)
acpi_drv->name, acpi_dev->pnp.bus_id);
if (acpi_drv->ops.notify) {
ret = acpi_device_install_notify_handler(acpi_dev);
ret = acpi_device_install_notify_handler(acpi_dev, acpi_drv);
if (ret) {
if (acpi_drv->ops.remove)
acpi_drv->ops.remove(acpi_dev);
@@ -1062,7 +1053,7 @@ static void acpi_device_remove(struct device *dev)
struct acpi_driver *acpi_drv = to_acpi_driver(dev->driver);
if (acpi_drv->ops.notify)
acpi_device_remove_notify_handler(acpi_dev);
acpi_device_remove_notify_handler(acpi_dev, acpi_drv);
if (acpi_drv->ops.remove)
acpi_drv->ops.remove(acpi_dev);

@@ -474,12 +474,18 @@ int detect_cache_attributes(unsigned int cpu)
populate_leaves:
/*
* populate_cache_leaves() may completely setup the cache leaves and
* shared_cpu_map or it may leave it partially setup.
* If LLC is valid the cache leaves were already populated so just go to
* update the cpu map.
*/
ret = populate_cache_leaves(cpu);
if (ret)
goto free_ci;
if (!last_level_cache_is_valid(cpu)) {
/*
* populate_cache_leaves() may completely setup the cache leaves and
* shared_cpu_map or it may leave it partially setup.
*/
ret = populate_cache_leaves(cpu);
if (ret)
goto free_ci;
}
/*
* For systems using DT for cache hierarchy, fw_token

@@ -1010,9 +1010,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
/* This is safe, since we have a reference from open(). */
__module_get(THIS_MODULE);
/* suppress uevents while reconfiguring the device */
dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 1);
/*
* If we don't hold exclusive handle for the device, upgrade to it
* here to avoid changing device under exclusive owner.
@@ -1067,6 +1064,9 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
}
}
/* suppress uevents while reconfiguring the device */
dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 1);
disk_force_media_change(lo->lo_disk, DISK_EVENT_MEDIA_CHANGE);
set_disk_ro(lo->lo_disk, (lo->lo_flags & LO_FLAGS_READ_ONLY) != 0);
@@ -1109,17 +1109,17 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
if (partscan)
clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
/* enable and uncork uevent now that we are done */
dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
loop_global_unlock(lo, is_loop);
if (partscan)
loop_reread_partitions(lo);
if (!(mode & FMODE_EXCL))
bd_abort_claiming(bdev, loop_configure);
error = 0;
done:
/* enable and uncork uevent now that we are done */
dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
return error;
return 0;
out_unlock:
loop_global_unlock(lo, is_loop);
@@ -1130,7 +1130,7 @@ out_putf:
fput(file);
/* This is safe: open() is still holding a reference. */
module_put(THIS_MODULE);
goto done;
return error;
}
static void __loop_clr_fd(struct loop_device *lo, bool release)
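
The invariant this reordering restores is that the suppress is paired with an uncork on every exit path before userspace is told about the change; schematically (a hedged sketch, example_reconfigure() is a hypothetical stand-in for the body of loop_configure()):

#include <linux/blkdev.h>
#include <linux/device.h>

static int example_reconfigure(struct gendisk *disk)
{
	return 0;	/* hypothetical reconfiguration step */
}

static int example_change_media(struct gendisk *disk)
{
	int err;

	dev_set_uevent_suppress(disk_to_dev(disk), 1);
	err = example_reconfigure(disk);
	dev_set_uevent_suppress(disk_to_dev(disk), 0);

	/* Only announce the change once the device is fully set up */
	if (!err)
		kobject_uevent(&disk_to_dev(disk)->kobj, KOBJ_CHANGE);
	return err;
}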

@@ -232,7 +232,7 @@ static int intel_dp_dsc_mst_compute_link_config(struct intel_encoder *encoder,
return slots;
}
intel_link_compute_m_n(crtc_state->pipe_bpp,
intel_link_compute_m_n(crtc_state->dsc.compressed_bpp,
crtc_state->lane_count,
adjusted_mode->crtc_clock,
crtc_state->port_clock,

@@ -1067,11 +1067,12 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
.interruptible = true,
.no_wait_gpu = true, /* should be idle already */
};
int err;
GEM_BUG_ON(!bo->ttm || !(bo->ttm->page_flags & TTM_TT_FLAG_SWAPPED));
ret = ttm_bo_validate(bo, i915_ttm_sys_placement(), &ctx);
if (ret) {
err = ttm_bo_validate(bo, i915_ttm_sys_placement(), &ctx);
if (err) {
dma_resv_unlock(bo->base.resv);
return VM_FAULT_SIGBUS;
}

@@ -2018,6 +2018,8 @@ process_csb(struct intel_engine_cs *engine, struct i915_request **inactive)
* inspecting the queue to see if we need to resumbit.
*/
if (*prev != *execlists->active) { /* elide lite-restores */
struct intel_context *prev_ce = NULL, *active_ce = NULL;
/*
* Note the inherent discrepancy between the HW runtime,
* recorded as part of the context switch, and the CPU
@@ -2029,9 +2031,15 @@ process_csb(struct intel_engine_cs *engine, struct i915_request **inactive)
* and correct overselves later when updating from HW.
*/
if (*prev)
lrc_runtime_stop((*prev)->context);
prev_ce = (*prev)->context;
if (*execlists->active)
lrc_runtime_start((*execlists->active)->context);
active_ce = (*execlists->active)->context;
if (prev_ce != active_ce) {
if (prev_ce)
lrc_runtime_stop(prev_ce);
if (active_ce)
lrc_runtime_start(active_ce);
}
new_timeslice(execlists);
}

@@ -235,6 +235,13 @@ static void delayed_huc_load_fini(struct intel_huc *huc)
i915_sw_fence_fini(&huc->delayed_load.fence);
}
int intel_huc_sanitize(struct intel_huc *huc)
{
delayed_huc_load_complete(huc);
intel_uc_fw_sanitize(&huc->fw);
return 0;
}
static bool vcs_supported(struct intel_gt *gt)
{
intel_engine_mask_t mask = gt->info.engine_mask;

@@ -41,6 +41,7 @@ struct intel_huc {
} delayed_load;
};
int intel_huc_sanitize(struct intel_huc *huc);
void intel_huc_init_early(struct intel_huc *huc);
int intel_huc_init(struct intel_huc *huc);
void intel_huc_fini(struct intel_huc *huc);
@@ -54,12 +55,6 @@ bool intel_huc_is_authenticated(struct intel_huc *huc);
void intel_huc_register_gsc_notifier(struct intel_huc *huc, struct bus_type *bus);
void intel_huc_unregister_gsc_notifier(struct intel_huc *huc, struct bus_type *bus);
static inline int intel_huc_sanitize(struct intel_huc *huc)
{
intel_uc_fw_sanitize(&huc->fw);
return 0;
}
static inline bool intel_huc_is_supported(struct intel_huc *huc)
{
return intel_uc_fw_is_supported(&huc->fw);

@@ -4638,13 +4638,13 @@ int i915_perf_add_config_ioctl(struct drm_device *dev, void *data,
err = oa_config->id;
goto sysfs_err;
}
mutex_unlock(&perf->metrics_lock);
id = oa_config->id;
drm_dbg(&perf->i915->drm,
"Added config %s id=%i\n", oa_config->uuid, oa_config->id);
mutex_unlock(&perf->metrics_lock);
return oa_config->id;
return id;
sysfs_err:
mutex_unlock(&perf->metrics_lock);

@@ -363,6 +363,35 @@ nv50_outp_atomic_check_view(struct drm_encoder *encoder,
return 0;
}
static void
nv50_outp_atomic_fix_depth(struct drm_encoder *encoder, struct drm_crtc_state *crtc_state)
{
struct nv50_head_atom *asyh = nv50_head_atom(crtc_state);
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
struct drm_display_mode *mode = &asyh->state.adjusted_mode;
unsigned int max_rate, mode_rate;
switch (nv_encoder->dcb->type) {
case DCB_OUTPUT_DP:
max_rate = nv_encoder->dp.link_nr * nv_encoder->dp.link_bw;
/* we don't support more than 10 anyway */
asyh->or.bpc = min_t(u8, asyh->or.bpc, 10);
/* reduce the bpc until it works out */
while (asyh->or.bpc > 6) {
mode_rate = DIV_ROUND_UP(mode->clock * asyh->or.bpc * 3, 8);
if (mode_rate <= max_rate)
break;
asyh->or.bpc -= 2;
}
break;
default:
break;
}
}
static int
nv50_outp_atomic_check(struct drm_encoder *encoder,
struct drm_crtc_state *crtc_state,
@@ -381,6 +410,9 @@ nv50_outp_atomic_check(struct drm_encoder *encoder,
if (crtc_state->mode_changed || crtc_state->connectors_changed)
asyh->or.bpc = connector->display_info.bpc;
/* We might have to reduce the bpc */
nv50_outp_atomic_fix_depth(encoder, crtc_state);
return 0;
}
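
To make the loop above concrete (illustrative numbers, not from this commit): a 297,000 kHz mode at the 10 bpc cap needs mode_rate = DIV_ROUND_UP(297000 * 10 * 3, 8) = 1,113,750; if that exceeds max_rate, the loop retries at 8 bpc (891,000) and finally settles at 6 bpc (668,250), the lowest depth it will report.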

@@ -263,8 +263,6 @@ nouveau_dp_irq(struct work_struct *work)
}
/* TODO:
* - Use the minimum possible BPC here, once we add support for the max bpc
* property.
* - Validate against the DP caps advertised by the GPU (we don't check these
* yet)
*/
@@ -276,7 +274,11 @@ nv50_dp_mode_valid(struct drm_connector *connector,
{
const unsigned int min_clock = 25000;
unsigned int max_rate, mode_rate, ds_max_dotclock, clock = mode->clock;
const u8 bpp = connector->display_info.bpc * 3;
/* Check with the minmum bpc always, so we can advertise better modes.
* In particlar not doing this causes modes to be dropped on HDR
* displays as we might check with a bpc of 16 even.
*/
const u8 bpp = 6 * 3;
if (mode->flags & DRM_MODE_FLAG_INTERLACE && !outp->caps.dp_interlace)
return MODE_NO_INTERLACE;

@@ -504,6 +504,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
if (IS_ERR(pages[i])) {
mutex_unlock(&bo->base.pages_lock);
ret = PTR_ERR(pages[i]);
pages[i] = NULL;
goto err_pages;
}
}

@@ -409,6 +409,10 @@ void vmbus_disconnect(void)
*/
struct vmbus_channel *relid2channel(u32 relid)
{
if (vmbus_connection.channels == NULL) {
pr_warn_once("relid2channel: relid=%d: No channels mapped!\n", relid);
return NULL;
}
if (WARN_ON(relid >= MAX_CHANNEL_RELIDS))
return NULL;
return READ_ONCE(vmbus_connection.channels[relid]);

@@ -781,9 +781,6 @@ static void xpad_process_packet(struct usb_xpad *xpad, u16 cmd, unsigned char *d
input_report_key(dev, BTN_C, data[8]);
input_report_key(dev, BTN_Z, data[9]);
/* Profile button has a value of 0-3, so it is reported as an axis */
if (xpad->mapping & MAP_PROFILE_BUTTON)
input_report_abs(dev, ABS_PROFILE, data[34]);
input_sync(dev);
}
@@ -1061,6 +1058,10 @@ static void xpadone_process_packet(struct usb_xpad *xpad, u16 cmd, unsigned char
(__u16) le16_to_cpup((__le16 *)(data + 8)));
}
/* Profile button has a value of 0-3, so it is reported as an axis */
if (xpad->mapping & MAP_PROFILE_BUTTON)
input_report_abs(dev, ABS_PROFILE, data[34]);
/* paddle handling */
/* based on SDL's SDL_hidapi_xboxone.c */
if (xpad->mapping & MAP_PADDLES) {

@@ -852,8 +852,8 @@ static void alps_process_packet_v6(struct psmouse *psmouse)
x = y = z = 0;
/* Divide 4 since trackpoint's speed is too fast */
input_report_rel(dev2, REL_X, (char)x / 4);
input_report_rel(dev2, REL_Y, -((char)y / 4));
input_report_rel(dev2, REL_X, (s8)x / 4);
input_report_rel(dev2, REL_Y, -((s8)y / 4));
psmouse_report_standard_buttons(dev2, packet[3]);
@@ -1104,8 +1104,8 @@ static void alps_process_trackstick_packet_v7(struct psmouse *psmouse)
((packet[3] & 0x20) << 1);
z = (packet[5] & 0x3f) | ((packet[3] & 0x80) >> 1);
input_report_rel(dev2, REL_X, (char)x);
input_report_rel(dev2, REL_Y, -((char)y));
input_report_rel(dev2, REL_X, (s8)x);
input_report_rel(dev2, REL_Y, -((s8)y));
input_report_abs(dev2, ABS_PRESSURE, z);
psmouse_report_standard_buttons(dev2, packet[1]);
@@ -2294,20 +2294,20 @@ static int alps_get_v3_v7_resolution(struct psmouse *psmouse, int reg_pitch)
if (reg < 0)
return reg;
x_pitch = (char)(reg << 4) >> 4; /* sign extend lower 4 bits */
x_pitch = (s8)(reg << 4) >> 4; /* sign extend lower 4 bits */
x_pitch = 50 + 2 * x_pitch; /* In 0.1 mm units */
y_pitch = (char)reg >> 4; /* sign extend upper 4 bits */
y_pitch = (s8)reg >> 4; /* sign extend upper 4 bits */
y_pitch = 36 + 2 * y_pitch; /* In 0.1 mm units */
reg = alps_command_mode_read_reg(psmouse, reg_pitch + 1);
if (reg < 0)
return reg;
x_electrode = (char)(reg << 4) >> 4; /* sign extend lower 4 bits */
x_electrode = (s8)(reg << 4) >> 4; /* sign extend lower 4 bits */
x_electrode = 17 + x_electrode;
y_electrode = (char)reg >> 4; /* sign extend upper 4 bits */
y_electrode = (s8)reg >> 4; /* sign extend upper 4 bits */
y_electrode = 13 + y_electrode;
x_phys = x_pitch * (x_electrode - 1); /* In 0.1 mm units */
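
The point of the (s8) conversions, here and in the focaltech change below, is that the signedness of plain char is implementation-defined, and it is unsigned on several architectures the kernel supports (e.g. arm, s390), so the old casts silently dropped negative deltas there. A small hypothetical illustration:

#include <linux/types.h>

static void example_sign_extension(void)
{
	unsigned char raw = 0xFC;	/* device encodes -4 */
	int as_char = (char)raw;	/* 252 where plain char is unsigned */
	int as_s8 = (s8)raw;		/* -4 on every architecture */

	/* The nibble extraction behaves the same way: */
	int reg = 0x9A;
	int lo = (s8)(reg << 4) >> 4;	/* low nibble 0xA sign-extends to -6 */
	int hi = (s8)reg >> 4;		/* high nibble 0x9 sign-extends to -7 */
}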

@@ -202,8 +202,8 @@ static void focaltech_process_rel_packet(struct psmouse *psmouse,
state->pressed = packet[0] >> 7;
finger1 = ((packet[0] >> 4) & 0x7) - 1;
if (finger1 < FOC_MAX_FINGERS) {
state->fingers[finger1].x += (char)packet[1];
state->fingers[finger1].y += (char)packet[2];
state->fingers[finger1].x += (s8)packet[1];
state->fingers[finger1].y += (s8)packet[2];
} else {
psmouse_err(psmouse, "First finger in rel packet invalid: %d\n",
finger1);
@@ -218,8 +218,8 @@
*/
finger2 = ((packet[3] >> 4) & 0x7) - 1;
if (finger2 < FOC_MAX_FINGERS) {
state->fingers[finger2].x += (char)packet[4];
state->fingers[finger2].y += (char)packet[5];
state->fingers[finger2].x += (s8)packet[4];
state->fingers[finger2].y += (s8)packet[5];
}
}

@@ -610,6 +610,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
},
.driver_data = (void *)(SERIO_QUIRK_NOMUX)
},
{
/* Fujitsu Lifebook A574/H */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
DMI_MATCH(DMI_PRODUCT_NAME, "FMVA0501PZ"),
},
.driver_data = (void *)(SERIO_QUIRK_NOMUX)
},
{
/* Gigabyte M912 */
.matches = {
@@ -1116,6 +1124,20 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
},
{
/*
* Setting SERIO_QUIRK_NOMUX or SERIO_QUIRK_RESET_ALWAYS makes
* the keyboard very laggy for ~5 seconds after boot and
* sometimes also after resume.
* However both are required for the keyboard to not fail
* completely sometimes after boot or resume.
*/
.matches = {
DMI_MATCH(DMI_BOARD_NAME, "N150CU"),
},
.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
},
{
.matches = {
DMI_MATCH(DMI_BOARD_NAME, "NH5xAx"),
@@ -1123,6 +1145,20 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
},
{
/*
* Setting SERIO_QUIRK_NOMUX or SERIO_QUIRK_RESET_ALWAYS makes
* the keyboard very laggy for ~5 seconds after boot and
* sometimes also after resume.
* However both are required for the keyboard to not fail
* completely sometimes after boot or resume.
*/
.matches = {
DMI_MATCH(DMI_BOARD_NAME, "NHxxRZQ"),
},
.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
},
{
.matches = {
DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),

@@ -124,10 +124,18 @@ static const unsigned long goodix_irq_flags[] = {
static const struct dmi_system_id nine_bytes_report[] = {
#if defined(CONFIG_DMI) && defined(CONFIG_X86)
{
.ident = "Lenovo YogaBook",
/* YB1-X91L/F and YB1-X90L/F */
/* Lenovo Yoga Book X90F / X90L */
.matches = {
DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9")
DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
}
},
{
/* Lenovo Yoga Book X91F / X91L */
.matches = {
/* Non exact match to match F + L versions */
DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X91"),
}
},
#endif

@@ -1415,23 +1415,26 @@ static struct iommu_device *exynos_iommu_probe_device(struct device *dev)
return &data->iommu;
}
static void exynos_iommu_release_device(struct device *dev)
static void exynos_iommu_set_platform_dma(struct device *dev)
{
struct exynos_iommu_owner *owner = dev_iommu_priv_get(dev);
struct sysmmu_drvdata *data;
if (owner->domain) {
struct iommu_group *group = iommu_group_get(dev);
if (group) {
#ifndef CONFIG_ARM
WARN_ON(owner->domain !=
iommu_group_default_domain(group));
#endif
exynos_iommu_detach_device(owner->domain, dev);
iommu_group_put(group);
}
}
}
static void exynos_iommu_release_device(struct device *dev)
{
struct exynos_iommu_owner *owner = dev_iommu_priv_get(dev);
struct sysmmu_drvdata *data;
exynos_iommu_set_platform_dma(dev);
list_for_each_entry(data, &owner->controllers, owner_node)
device_link_del(data->link);
@@ -1479,7 +1482,7 @@ static const struct iommu_ops exynos_iommu_ops = {
.domain_alloc = exynos_iommu_domain_alloc,
.device_group = generic_device_group,
#ifdef CONFIG_ARM
.set_platform_dma_ops = exynos_iommu_release_device,
.set_platform_dma_ops = exynos_iommu_set_platform_dma,
#endif
.probe_device = exynos_iommu_probe_device,
.release_device = exynos_iommu_release_device,

@@ -1071,7 +1071,8 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
}
err = -EINVAL;
if (cap_sagaw(iommu->cap) == 0) {
if (!cap_sagaw(iommu->cap) &&
(!ecap_smts(iommu->ecap) || ecap_slts(iommu->ecap))) {
pr_info("%s: No supported address widths. Not attempting DMA translation.\n",
iommu->name);
drhd->ignored = 1;

@@ -641,6 +641,8 @@ struct iommu_pmu {
DECLARE_BITMAP(used_mask, IOMMU_PMU_IDX_MAX);
struct perf_event *event_list[IOMMU_PMU_IDX_MAX];
unsigned char irq_name[16];
struct hlist_node cpuhp_node;
int cpu;
};
#define IOMMU_IRQ_ID_OFFSET_PRQ (DMAR_UNITS_SUPPORTED)

@@ -311,14 +311,12 @@ static int set_ioapic_sid(struct irte *irte, int apic)
if (!irte)
return -1;
down_read(&dmar_global_lock);
for (i = 0; i < MAX_IO_APICS; i++) {
if (ir_ioapic[i].iommu && ir_ioapic[i].id == apic) {
sid = (ir_ioapic[i].bus << 8) | ir_ioapic[i].devfn;
break;
}
}
up_read(&dmar_global_lock);
if (sid == 0) {
pr_warn("Failed to set source-id of IOAPIC (%d)\n", apic);
@@ -338,14 +336,12 @@ static int set_hpet_sid(struct irte *irte, u8 id)
if (!irte)
return -1;
down_read(&dmar_global_lock);
for (i = 0; i < MAX_HPET_TBS; i++) {
if (ir_hpet[i].iommu && ir_hpet[i].id == id) {
sid = (ir_hpet[i].bus << 8) | ir_hpet[i].devfn;
break;
}
}
up_read(&dmar_global_lock);
if (sid == 0) {
pr_warn("Failed to set source-id of HPET block (%d)\n", id);
@@ -1339,9 +1335,7 @@ static int intel_irq_remapping_alloc(struct irq_domain *domain,
if (!data)
goto out_free_parent;
down_read(&dmar_global_lock);
index = alloc_irte(iommu, &data->irq_2_iommu, nr_irqs);
up_read(&dmar_global_lock);
if (index < 0) {
pr_warn("Failed to allocate IRTE\n");
kfree(data);

@@ -773,19 +773,34 @@ static void iommu_pmu_unset_interrupt(struct intel_iommu *iommu)
iommu->perf_irq = 0;
}
static int iommu_pmu_cpu_online(unsigned int cpu)
static int iommu_pmu_cpu_online(unsigned int cpu, struct hlist_node *node)
{
struct iommu_pmu *iommu_pmu = hlist_entry_safe(node, typeof(*iommu_pmu), cpuhp_node);
if (cpumask_empty(&iommu_pmu_cpu_mask))
cpumask_set_cpu(cpu, &iommu_pmu_cpu_mask);
if (cpumask_test_cpu(cpu, &iommu_pmu_cpu_mask))
iommu_pmu->cpu = cpu;
return 0;
}
static int iommu_pmu_cpu_offline(unsigned int cpu)
static int iommu_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
{
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
int target;
struct iommu_pmu *iommu_pmu = hlist_entry_safe(node, typeof(*iommu_pmu), cpuhp_node);
int target = cpumask_first(&iommu_pmu_cpu_mask);
/*
* The iommu_pmu_cpu_mask has been updated when offline the CPU
* for the first iommu_pmu. Migrate the other iommu_pmu to the
* new target.
*/
if (target < nr_cpu_ids && target != iommu_pmu->cpu) {
perf_pmu_migrate_context(&iommu_pmu->pmu, cpu, target);
iommu_pmu->cpu = target;
return 0;
}
if (!cpumask_test_and_clear_cpu(cpu, &iommu_pmu_cpu_mask))
return 0;
@@ -795,45 +810,50 @@ static int iommu_pmu_cpu_offline(unsigned int cpu)
if (target < nr_cpu_ids)
cpumask_set_cpu(target, &iommu_pmu_cpu_mask);
else
target = -1;
return 0;
rcu_read_lock();
for_each_iommu(iommu, drhd) {
if (!iommu->pmu)
continue;
perf_pmu_migrate_context(&iommu->pmu->pmu, cpu, target);
}
rcu_read_unlock();
perf_pmu_migrate_context(&iommu_pmu->pmu, cpu, target);
iommu_pmu->cpu = target;
return 0;
}
static int nr_iommu_pmu;
static enum cpuhp_state iommu_cpuhp_slot;
static int iommu_pmu_cpuhp_setup(struct iommu_pmu *iommu_pmu)
{
int ret;
if (nr_iommu_pmu++)
return 0;
if (!nr_iommu_pmu) {
ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
"driver/iommu/intel/perfmon:online",
iommu_pmu_cpu_online,
iommu_pmu_cpu_offline);
if (ret < 0)
return ret;
iommu_cpuhp_slot = ret;
}
ret = cpuhp_setup_state(CPUHP_AP_PERF_X86_IOMMU_PERF_ONLINE,
"driver/iommu/intel/perfmon:online",
iommu_pmu_cpu_online,
iommu_pmu_cpu_offline);
if (ret)
nr_iommu_pmu = 0;
ret = cpuhp_state_add_instance(iommu_cpuhp_slot, &iommu_pmu->cpuhp_node);
if (ret) {
if (!nr_iommu_pmu)
cpuhp_remove_multi_state(iommu_cpuhp_slot);
return ret;
}
nr_iommu_pmu++;
return ret;
return 0;
}
static void iommu_pmu_cpuhp_free(struct iommu_pmu *iommu_pmu)
{
cpuhp_state_remove_instance(iommu_cpuhp_slot, &iommu_pmu->cpuhp_node);
if (--nr_iommu_pmu)
return;
cpuhp_remove_state(CPUHP_AP_PERF_X86_IOMMU_PERF_ONLINE);
cpuhp_remove_multi_state(iommu_cpuhp_slot);
}
void iommu_pmu_register(struct intel_iommu *iommu)
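
Condensed, the multi-instance hotplug pattern the driver converts to looks like this (a sketch with assumed example_* names): the dynamic state is registered once, and each PMU then adds its own hlist node so the callbacks fire per instance instead of walking every IOMMU:

#include <linux/cpuhotplug.h>

static enum cpuhp_state example_slot;

static int example_online(unsigned int cpu, struct hlist_node *node)
{
	return 0;	/* adopt @cpu for the instance behind @node */
}

static int example_offline(unsigned int cpu, struct hlist_node *node)
{
	return 0;	/* migrate the instance's perf context off @cpu */
}

static int example_register(struct hlist_node *node)
{
	int ret;

	if (!example_slot) {
		ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
					      "example:online",
					      example_online, example_offline);
		if (ret < 0)
			return ret;
		example_slot = ret;
	}
	return cpuhp_state_add_instance(example_slot, node);
}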

@@ -294,9 +294,9 @@ static void batch_clear_carry(struct pfn_batch *batch, unsigned int keep_pfns)
batch->npfns[batch->end - 1] < keep_pfns);
batch->total_pfns = keep_pfns;
batch->npfns[0] = keep_pfns;
batch->pfns[0] = batch->pfns[batch->end - 1] +
(batch->npfns[batch->end - 1] - keep_pfns);
batch->npfns[0] = keep_pfns;
batch->end = 0;
}
@@ -1142,6 +1142,7 @@ struct iopt_pages *iopt_alloc_pages(void __user *uptr, unsigned long length,
bool writable)
{
struct iopt_pages *pages;
unsigned long end;
/*
* The iommu API uses size_t as the length, and protect the DIV_ROUND_UP
@@ -1150,6 +1151,9 @@
if (length > SIZE_MAX - PAGE_SIZE || length == 0)
return ERR_PTR(-EINVAL);
if (check_add_overflow((unsigned long)uptr, length, &end))
return ERR_PTR(-EOVERFLOW);
pages = kzalloc(sizeof(*pages), GFP_KERNEL_ACCOUNT);
if (!pages)
return ERR_PTR(-ENOMEM);
@@ -1203,13 +1207,21 @@ iopt_area_unpin_domain(struct pfn_batch *batch, struct iopt_area *area,
unsigned long start =
max(start_index, *unmapped_end_index);
if (IS_ENABLED(CONFIG_IOMMUFD_TEST) &&
batch->total_pfns)
WARN_ON(*unmapped_end_index -
batch->total_pfns !=
start_index);
batch_from_domain(batch, domain, area, start,
last_index);
batch_last_index = start + batch->total_pfns - 1;
batch_last_index = start_index + batch->total_pfns - 1;
} else {
batch_last_index = last_index;
}
if (IS_ENABLED(CONFIG_IOMMUFD_TEST))
WARN_ON(batch_last_index > real_last_index);
/*
* unmaps must always 'cut' at a place where the pfns are not
* contiguous to pair with the maps that always install
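
The new guard uses check_add_overflow() from <linux/overflow.h>, which performs the addition, stores the (possibly wrapped) sum through the third argument, and returns true if the result overflowed; a small hypothetical example:

#include <linux/overflow.h>

static bool example_range_wraps(unsigned long uptr, unsigned long length)
{
	unsigned long end;

	/* true when uptr + length wraps past ULONG_MAX */
	return check_add_overflow(uptr, length, &end);
}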

@@ -6260,7 +6260,6 @@ static void __md_stop(struct mddev *mddev)
module_put(pers->owner);
clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
percpu_ref_exit(&mddev->writes_pending);
percpu_ref_exit(&mddev->active_io);
bioset_exit(&mddev->bio_set);
bioset_exit(&mddev->sync_set);
@@ -6273,6 +6272,7 @@ void md_stop(struct mddev *mddev)
*/
__md_stop_writes(mddev);
__md_stop(mddev);
percpu_ref_exit(&mddev->writes_pending);
}
EXPORT_SYMBOL_GPL(md_stop);
@@ -7843,6 +7843,7 @@ static void md_free_disk(struct gendisk *disk)
{
struct mddev *mddev = disk->private_data;
percpu_ref_exit(&mddev->writes_pending);
mddev_free(mddev);
}

@@ -1098,7 +1098,7 @@ static int imx290_runtime_suspend(struct device *dev)
}
static const struct dev_pm_ops imx290_pm_ops = {
SET_RUNTIME_PM_OPS(imx290_runtime_suspend, imx290_runtime_resume, NULL)
RUNTIME_PM_OPS(imx290_runtime_suspend, imx290_runtime_resume, NULL)
};
/* ----------------------------------------------------------------------------
@@ -1362,8 +1362,8 @@ static struct i2c_driver imx290_i2c_driver = {
.remove = imx290_remove,
.driver = {
.name = "imx290",
.pm = &imx290_pm_ops,
.of_match_table = of_match_ptr(imx290_of_match),
.pm = pm_ptr(&imx290_pm_ops),
.of_match_table = imx290_of_match,
},
};
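
The move from SET_RUNTIME_PM_OPS() to RUNTIME_PM_OPS() plus pm_ptr(), and the dropped of_match_ptr() wrapper, keeps the callbacks and the match table always visible to the compiler (no __maybe_unused or #ifdef needed) while still letting dead-code elimination discard the PM ops when CONFIG_PM is disabled. Roughly (a sketch; the example_* names are assumed):

#include <linux/pm_runtime.h>

static int example_runtime_suspend(struct device *dev)
{
	return 0;
}

static int example_runtime_resume(struct device *dev)
{
	return 0;
}

static const struct dev_pm_ops example_pm_ops = {
	RUNTIME_PM_OPS(example_runtime_suspend, example_runtime_resume, NULL)
};

/* In the driver definition:
 *	.pm = pm_ptr(&example_pm_ops),
 * pm_ptr() evaluates to NULL when CONFIG_PM is off, so the ops and the
 * callbacks above are dropped without any #ifdef.
 */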

@@ -38,8 +38,8 @@ static void venus_reset_cpu(struct venus_core *core)
writel(fw_size, wrapper_base + WRAPPER_FW_END_ADDR);
writel(0, wrapper_base + WRAPPER_CPA_START_ADDR);
writel(fw_size, wrapper_base + WRAPPER_CPA_END_ADDR);
writel(0, wrapper_base + WRAPPER_NONPIX_START_ADDR);
writel(0, wrapper_base + WRAPPER_NONPIX_END_ADDR);
writel(fw_size, wrapper_base + WRAPPER_NONPIX_START_ADDR);
writel(fw_size, wrapper_base + WRAPPER_NONPIX_END_ADDR);
if (IS_V6(core)) {
/* Bring XTSS out of reset */

View File

@ -3343,7 +3343,19 @@ static struct spi_mem_driver spi_nor_driver = {
.remove = spi_nor_remove,
.shutdown = spi_nor_shutdown,
};
module_spi_mem_driver(spi_nor_driver);
static int __init spi_nor_module_init(void)
{
return spi_mem_driver_register(&spi_nor_driver);
}
module_init(spi_nor_module_init);
static void __exit spi_nor_module_exit(void)
{
spi_mem_driver_unregister(&spi_nor_driver);
spi_nor_debugfs_shutdown();
}
module_exit(spi_nor_module_exit);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Huang Shijie <shijie8@gmail.com>");

View File

@ -711,8 +711,10 @@ static inline struct spi_nor *mtd_to_spi_nor(struct mtd_info *mtd)
#ifdef CONFIG_DEBUG_FS
void spi_nor_debugfs_register(struct spi_nor *nor);
void spi_nor_debugfs_shutdown(void);
#else
static inline void spi_nor_debugfs_register(struct spi_nor *nor) {}
static inline void spi_nor_debugfs_shutdown(void) {}
#endif
#endif /* __LINUX_MTD_SPI_NOR_INTERNAL_H */

View File

@ -226,13 +226,13 @@ static void spi_nor_debugfs_unregister(void *data)
nor->debugfs_root = NULL;
}
static struct dentry *rootdir;
void spi_nor_debugfs_register(struct spi_nor *nor)
{
struct dentry *rootdir, *d;
struct dentry *d;
int ret;
/* Create rootdir once. Will never be deleted again. */
rootdir = debugfs_lookup(SPI_NOR_DEBUGFS_ROOT, NULL);
if (!rootdir)
rootdir = debugfs_create_dir(SPI_NOR_DEBUGFS_ROOT, NULL);
@ -247,3 +247,8 @@ void spi_nor_debugfs_register(struct spi_nor *nor)
debugfs_create_file("capabilities", 0444, d, nor,
&spi_nor_capabilities_fops);
}
void spi_nor_debugfs_shutdown(void)
{
debugfs_remove(rootdir);
}
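With rootdir promoted to file scope, module exit can finally drop the shared debugfs directory. The pattern in isolation, with placeholder names (debugfs_remove() is recursive and NULL-safe):

	static struct dentry *rootdir;

	static void example_register(void)
	{
		if (!rootdir)
			rootdir = debugfs_create_dir("example", NULL);
	}

	static void example_shutdown(void)
	{
		debugfs_remove(rootdir);	/* recursive, tolerates NULL */
		rootdir = NULL;
	}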

View File

@ -5613,7 +5613,7 @@ static const struct mv88e6xxx_ops mv88e6393x_ops = {
* .port_set_upstream_port method.
*/
.set_egress_port = mv88e6393x_set_egress_port,
.watchdog_ops = &mv88e6390_watchdog_ops,
.watchdog_ops = &mv88e6393x_watchdog_ops,
.mgmt_rsvd2cpu = mv88e6393x_port_mgmt_rsvd2cpu,
.pot_clear = mv88e6xxx_g2_pot_clear,
.reset = mv88e6352_g1_reset,

View File

@ -943,6 +943,26 @@ const struct mv88e6xxx_irq_ops mv88e6390_watchdog_ops = {
.irq_free = mv88e6390_watchdog_free,
};
static int mv88e6393x_watchdog_action(struct mv88e6xxx_chip *chip, int irq)
{
mv88e6390_watchdog_action(chip, irq);
/* Fix for clearing the force WD event bit.
* Unreleased erratum on mv88e6393x.
*/
mv88e6xxx_g2_write(chip, MV88E6390_G2_WDOG_CTL,
MV88E6390_G2_WDOG_CTL_UPDATE |
MV88E6390_G2_WDOG_CTL_PTR_EVENT);
return IRQ_HANDLED;
}
const struct mv88e6xxx_irq_ops mv88e6393x_watchdog_ops = {
.irq_action = mv88e6393x_watchdog_action,
.irq_setup = mv88e6390_watchdog_setup,
.irq_free = mv88e6390_watchdog_free,
};
static irqreturn_t mv88e6xxx_g2_watchdog_thread_fn(int irq, void *dev_id)
{
struct mv88e6xxx_chip *chip = dev_id;

View File

@ -369,6 +369,7 @@ int mv88e6xxx_g2_device_mapping_write(struct mv88e6xxx_chip *chip, int target,
extern const struct mv88e6xxx_irq_ops mv88e6097_watchdog_ops;
extern const struct mv88e6xxx_irq_ops mv88e6250_watchdog_ops;
extern const struct mv88e6xxx_irq_ops mv88e6390_watchdog_ops;
extern const struct mv88e6xxx_irq_ops mv88e6393x_watchdog_ops;
extern const struct mv88e6xxx_avb_ops mv88e6165_avb_ops;
extern const struct mv88e6xxx_avb_ops mv88e6352_avb_ops;

View File

@ -507,6 +507,11 @@ struct bufdesc_ex {
/* i.MX6Q adds pm_qos support */
#define FEC_QUIRK_HAS_PMQOS BIT(23)
/* Not all FEC hardware block MDIOs support accesses in C45 mode.
* Older blocks in the ColdFire parts do not support it.
*/
#define FEC_QUIRK_HAS_MDIO_C45 BIT(24)
struct bufdesc_prop {
int qid;
/* Address of Rx and Tx buffers */

View File

@ -100,18 +100,19 @@ struct fec_devinfo {
static const struct fec_devinfo fec_imx25_info = {
.quirks = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
FEC_QUIRK_HAS_FRREG,
FEC_QUIRK_HAS_FRREG | FEC_QUIRK_HAS_MDIO_C45,
};
static const struct fec_devinfo fec_imx27_info = {
.quirks = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
.quirks = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG |
FEC_QUIRK_HAS_MDIO_C45,
};
static const struct fec_devinfo fec_imx28_info = {
.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
FEC_QUIRK_HAS_FRREG | FEC_QUIRK_CLEAR_SETUP_MII |
FEC_QUIRK_NO_HARD_RESET,
FEC_QUIRK_NO_HARD_RESET | FEC_QUIRK_HAS_MDIO_C45,
};
static const struct fec_devinfo fec_imx6q_info = {
@ -119,11 +120,12 @@ static const struct fec_devinfo fec_imx6q_info = {
FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR006358 |
FEC_QUIRK_HAS_RACC | FEC_QUIRK_CLEAR_SETUP_MII |
FEC_QUIRK_HAS_PMQOS,
FEC_QUIRK_HAS_PMQOS | FEC_QUIRK_HAS_MDIO_C45,
};
static const struct fec_devinfo fec_mvf600_info = {
.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_RACC,
.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_RACC |
FEC_QUIRK_HAS_MDIO_C45,
};
static const struct fec_devinfo fec_imx6x_info = {
@ -132,7 +134,8 @@ static const struct fec_devinfo fec_imx6x_info = {
FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE |
FEC_QUIRK_CLEAR_SETUP_MII | FEC_QUIRK_HAS_MULTI_QUEUES,
FEC_QUIRK_CLEAR_SETUP_MII | FEC_QUIRK_HAS_MULTI_QUEUES |
FEC_QUIRK_HAS_MDIO_C45,
};
static const struct fec_devinfo fec_imx6ul_info = {
@ -140,7 +143,8 @@ static const struct fec_devinfo fec_imx6ul_info = {
FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR007885 |
FEC_QUIRK_BUG_CAPTURE | FEC_QUIRK_HAS_RACC |
FEC_QUIRK_HAS_COALESCE | FEC_QUIRK_CLEAR_SETUP_MII,
FEC_QUIRK_HAS_COALESCE | FEC_QUIRK_CLEAR_SETUP_MII |
FEC_QUIRK_HAS_MDIO_C45,
};
static const struct fec_devinfo fec_imx8mq_info = {
@ -150,7 +154,8 @@ static const struct fec_devinfo fec_imx8mq_info = {
FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE |
FEC_QUIRK_CLEAR_SETUP_MII | FEC_QUIRK_HAS_MULTI_QUEUES |
FEC_QUIRK_HAS_EEE | FEC_QUIRK_WAKEUP_FROM_INT2,
FEC_QUIRK_HAS_EEE | FEC_QUIRK_WAKEUP_FROM_INT2 |
FEC_QUIRK_HAS_MDIO_C45,
};
static const struct fec_devinfo fec_imx8qm_info = {
@ -160,14 +165,15 @@ static const struct fec_devinfo fec_imx8qm_info = {
FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE |
FEC_QUIRK_CLEAR_SETUP_MII | FEC_QUIRK_HAS_MULTI_QUEUES |
FEC_QUIRK_DELAYED_CLKS_SUPPORT,
FEC_QUIRK_DELAYED_CLKS_SUPPORT | FEC_QUIRK_HAS_MDIO_C45,
};
static const struct fec_devinfo fec_s32v234_info = {
.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE,
FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
FEC_QUIRK_HAS_MDIO_C45,
};
static struct platform_device_id fec_devtype[] = {
@ -2434,8 +2440,10 @@ static int fec_enet_mii_init(struct platform_device *pdev)
fep->mii_bus->name = "fec_enet_mii_bus";
fep->mii_bus->read = fec_enet_mdio_read_c22;
fep->mii_bus->write = fec_enet_mdio_write_c22;
fep->mii_bus->read_c45 = fec_enet_mdio_read_c45;
fep->mii_bus->write_c45 = fec_enet_mdio_write_c45;
if (fep->quirks & FEC_QUIRK_HAS_MDIO_C45) {
fep->mii_bus->read_c45 = fec_enet_mdio_read_c45;
fep->mii_bus->write_c45 = fec_enet_mdio_write_c45;
}
snprintf(fep->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
pdev->name, fep->dev_id + 1);
fep->mii_bus->priv = fep;
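Leaving read_c45/write_c45 unset on hardware without Clause 45 support lets the MDIO core fail C45 transactions cleanly (-EOPNOTSUPP) rather than mis-driving them as C22 cycles. A sketch of the registration pattern, with hypothetical accessor names and quirk flag:

	bus->read = my_mdio_read_c22;
	bus->write = my_mdio_write_c22;
	if (quirks & HAS_MDIO_C45) {		/* assumed quirk flag */
		bus->read_c45 = my_mdio_read_c45;
		bus->write_c45 = my_mdio_write_c45;
	}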

View File

@ -51,6 +51,8 @@
#define GVE_TX_MAX_HEADER_SIZE 182
#define GVE_GQ_TX_MIN_PKT_DESC_BYTES 182
/* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */
struct gve_rx_desc_queue {
struct gve_rx_desc *desc_ring; /* the descriptor ring */

View File

@ -351,8 +351,8 @@ static inline int gve_skb_fifo_bytes_required(struct gve_tx_ring *tx,
int bytes;
int hlen;
hlen = skb_is_gso(skb) ? skb_checksum_start_offset(skb) +
tcp_hdrlen(skb) : skb_headlen(skb);
hlen = skb_is_gso(skb) ? skb_checksum_start_offset(skb) + tcp_hdrlen(skb) :
min_t(int, GVE_GQ_TX_MIN_PKT_DESC_BYTES, skb->len);
pad_bytes = gve_tx_fifo_pad_alloc_one_frag(&tx->tx_fifo,
hlen);
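Under the new lower bound, a worked example for the non-GSO branch (packet sizes assumed for illustration):

	/* 100-byte skb: the whole packet fits in the first descriptor */
	hlen = min_t(int, GVE_GQ_TX_MIN_PKT_DESC_BYTES, 100);	/* == 100 */
	/* 1400-byte skb: secure only the 182-byte minimum up front */
	hlen = min_t(int, GVE_GQ_TX_MIN_PKT_DESC_BYTES, 1400);	/* == 182 */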
@ -522,13 +522,11 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st
pkt_desc = &tx->desc[idx];
l4_hdr_offset = skb_checksum_start_offset(skb);
/* If the skb is gso, then we want the tcp header in the first segment
* otherwise we want the linear portion of the skb (which will contain
* the checksum because skb->csum_start and skb->csum_offset are given
* relative to skb->head) in the first segment.
/* If the skb is gso, then we want the tcp header alone in the first segment
* otherwise we want the minimum required by the gVNIC spec.
*/
hlen = is_gso ? l4_hdr_offset + tcp_hdrlen(skb) :
skb_headlen(skb);
min_t(int, GVE_GQ_TX_MIN_PKT_DESC_BYTES, skb->len);
info->skb = skb;
/* We don't want to split the header, so if necessary, pad to the end

View File

@ -541,6 +541,21 @@ static void ice_vc_fdir_rem_prof_all(struct ice_vf *vf)
}
}
/**
* ice_vc_fdir_reset_cnt_all - reset all FDIR counters for this VF FDIR
* @fdir: pointer to the VF FDIR structure
*/
static void ice_vc_fdir_reset_cnt_all(struct ice_vf_fdir *fdir)
{
enum ice_fltr_ptype flow;
for (flow = ICE_FLTR_PTYPE_NONF_NONE;
flow < ICE_FLTR_PTYPE_MAX; flow++) {
fdir->fdir_fltr_cnt[flow][0] = 0;
fdir->fdir_fltr_cnt[flow][1] = 0;
}
}
/**
* ice_vc_fdir_has_prof_conflict
* @vf: pointer to the VF structure
@ -1871,7 +1886,7 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
v_ret = VIRTCHNL_STATUS_SUCCESS;
stat->status = VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE;
dev_dbg(dev, "VF %d: set FDIR context failed\n", vf->vf_id);
goto err_free_conf;
goto err_rem_entry;
}
ret = ice_vc_fdir_write_fltr(vf, conf, true, is_tun);
@ -1880,15 +1895,16 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
stat->status = VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE;
dev_err(dev, "VF %d: writing FDIR rule failed, ret:%d\n",
vf->vf_id, ret);
goto err_rem_entry;
goto err_clr_irq;
}
exit:
kfree(stat);
return ret;
err_rem_entry:
err_clr_irq:
ice_vc_fdir_clear_irq_ctx(vf);
err_rem_entry:
ice_vc_fdir_remove_entry(vf, conf, conf->flow_id);
err_free_conf:
devm_kfree(dev, conf);
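The relabelled gotos restore the usual unwind discipline: error labels run in reverse order of setup, so each one undoes only what has already succeeded. A generic sketch with placeholder names:

	ret = setup_a();
	if (ret)
		return ret;
	ret = setup_b();
	if (ret)
		goto undo_a;
	ret = setup_c();
	if (ret)
		goto undo_b;
	return 0;

	undo_b:
		teardown_b();
	undo_a:
		teardown_a();
		return ret;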
@ -1997,6 +2013,7 @@ void ice_vf_fdir_init(struct ice_vf *vf)
spin_lock_init(&fdir->ctx_lock);
fdir->ctx_irq.flags = 0;
fdir->ctx_done.flags = 0;
ice_vc_fdir_reset_cnt_all(fdir);
}
/**

View File

@ -729,6 +729,7 @@ static void mtk_mac_link_up(struct phylink_config *config,
MAC_MCR_FORCE_RX_FC);
/* Configure speed */
mac->speed = speed;
switch (speed) {
case SPEED_2500:
case SPEED_1000:
@ -3232,6 +3233,9 @@ found:
if (dp->index >= MTK_QDMA_NUM_QUEUES)
return NOTIFY_DONE;
if (mac->speed > 0 && mac->speed <= s.base.speed)
s.base.speed = 0;
mtk_set_queue_speed(eth, dp->index + 3, s.base.speed);
return NOTIFY_DONE;

View File

@ -251,7 +251,6 @@ static void intel_speed_mode_2500(struct net_device *ndev, void *intel_data)
priv->plat->mdio_bus_data->xpcs_an_inband = false;
} else {
priv->plat->max_speed = 1000;
priv->plat->mdio_bus_data->xpcs_an_inband = true;
}
}

View File

@ -1134,20 +1134,26 @@ static void stmmac_check_pcs_mode(struct stmmac_priv *priv)
static int stmmac_init_phy(struct net_device *dev)
{
struct stmmac_priv *priv = netdev_priv(dev);
struct fwnode_handle *phy_fwnode;
struct fwnode_handle *fwnode;
int ret;
if (!phylink_expects_phy(priv->phylink))
return 0;
fwnode = of_fwnode_handle(priv->plat->phylink_node);
if (!fwnode)
fwnode = dev_fwnode(priv->device);
if (fwnode)
ret = phylink_fwnode_phy_connect(priv->phylink, fwnode, 0);
phy_fwnode = fwnode_get_phy_node(fwnode);
else
phy_fwnode = NULL;
/* Some DT bindings do not set-up the PHY handle. Let's try to
* manually parse it
*/
if (!fwnode || ret) {
if (!phy_fwnode || IS_ERR(phy_fwnode)) {
int addr = priv->plat->phy_addr;
struct phy_device *phydev;
@ -1163,6 +1169,9 @@ static int stmmac_init_phy(struct net_device *dev)
}
ret = phylink_connect_phy(priv->phylink, phydev);
} else {
fwnode_handle_put(phy_fwnode);
ret = phylink_fwnode_phy_connect(priv->phylink, fwnode, 0);
}
if (!priv->plat->pmt) {
@ -6622,6 +6631,8 @@ int stmmac_xdp_open(struct net_device *dev)
goto init_error;
}
stmmac_reset_queues_param(priv);
/* DMA CSR Channel configuration */
for (chan = 0; chan < dma_csr_ch; chan++) {
stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan);
@ -6948,7 +6959,7 @@ static void stmmac_napi_del(struct net_device *dev)
int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt)
{
struct stmmac_priv *priv = netdev_priv(dev);
int ret = 0;
int ret = 0, i;
if (netif_running(dev))
stmmac_release(dev);
@ -6957,6 +6968,10 @@ int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt)
priv->plat->rx_queues_to_use = rx_cnt;
priv->plat->tx_queues_to_use = tx_cnt;
if (!netif_is_rxfh_configured(dev))
for (i = 0; i < ARRAY_SIZE(priv->rss.table); i++)
priv->rss.table[i] = ethtool_rxfh_indir_default(i,
rx_cnt);
stmmac_napi_add(dev);
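In the stmmac_init_phy() hunk above, fwnode_get_phy_node() hands back a counted reference (or an ERR_PTR), which is why the connect path calls fwnode_handle_put() once the PHY is attached. The rule in isolation:

	struct fwnode_handle *phy = fwnode_get_phy_node(fwnode);

	if (!IS_ERR(phy)) {
		/* ... use the PHY node ... */
		fwnode_handle_put(phy);	/* balance the implicit get */
	}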

View File

@ -2957,7 +2957,8 @@ err_free_phylink:
am65_cpsw_nuss_phylink_cleanup(common);
am65_cpts_release(common->cpts);
err_of_clear:
of_platform_device_destroy(common->mdio_dev, NULL);
if (common->mdio_dev)
of_platform_device_destroy(common->mdio_dev, NULL);
err_pm_clear:
pm_runtime_put_sync(dev);
pm_runtime_disable(dev);
@ -2987,7 +2988,8 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev)
am65_cpts_release(common->cpts);
am65_cpsw_disable_serdes_phy(common);
of_platform_device_destroy(common->mdio_dev, NULL);
if (common->mdio_dev)
of_platform_device_destroy(common->mdio_dev, NULL);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);

View File

@ -1583,6 +1583,25 @@ void phylink_destroy(struct phylink *pl)
}
EXPORT_SYMBOL_GPL(phylink_destroy);
/**
* phylink_expects_phy() - Determine if phylink expects a phy to be attached
* @pl: a pointer to a &struct phylink returned from phylink_create()
*
* When using fixed-link mode, or in-band mode with 1000base-X or 2500base-X,
* no PHY is needed.
*
* Returns true if phylink will be expecting a PHY.
*/
bool phylink_expects_phy(struct phylink *pl)
{
if (pl->cfg_link_an_mode == MLO_AN_FIXED ||
(pl->cfg_link_an_mode == MLO_AN_INBAND &&
phy_interface_mode_is_8023z(pl->link_config.interface)))
return false;
return true;
}
EXPORT_SYMBOL_GPL(phylink_expects_phy);
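A sketch of how a MAC driver might consume the new helper, mirroring the stmmac_init_phy() change above (function name hypothetical):

	static int example_init_phy(struct phylink *pl,
				    struct fwnode_handle *fwnode)
	{
		/* fixed-link and 802.3z in-band modes need no PHY at all */
		if (!phylink_expects_phy(pl))
			return 0;

		return phylink_fwnode_phy_connect(pl, fwnode, 0);
	}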
static void phylink_phy_change(struct phy_device *phydev, bool up)
{
struct phylink *pl = phydev->phylink;

View File

@ -406,6 +406,10 @@ static const struct sfp_quirk sfp_quirks[] = {
SFP_QUIRK_F("HALNy", "HL-GSFP", sfp_fixup_halny_gsfp),
// HG MXPD-483II-F 2.5G supports 2500Base-X, but incorrectly reports
// 2600MBd in their EEPROM
SFP_QUIRK_M("HG GENUINE", "MXPD-483II", sfp_quirk_2500basex),
// Huawei MA5671A can operate at 2500base-X, but reports 1.2GBd NRZ in
// their EEPROM
SFP_QUIRK("HUAWEI", "MA5671A", sfp_quirk_2500basex,

View File

@ -16,7 +16,7 @@
#include "pci.h"
#include "pcic.h"
#define MHI_TIMEOUT_DEFAULT_MS 90000
#define MHI_TIMEOUT_DEFAULT_MS 20000
#define RDDM_DUMP_SIZE 0x420000
static struct mhi_channel_config ath11k_mhi_channels_qca6390[] = {

View File

@ -994,15 +994,34 @@ static const struct sdio_device_id brcmf_sdmmc_ids[] = {
MODULE_DEVICE_TABLE(sdio, brcmf_sdmmc_ids);
static void brcmf_sdiod_acpi_set_power_manageable(struct device *dev,
int val)
static void brcmf_sdiod_acpi_save_power_manageable(struct brcmf_sdio_dev *sdiodev)
{
#if IS_ENABLED(CONFIG_ACPI)
struct acpi_device *adev;
adev = ACPI_COMPANION(dev);
adev = ACPI_COMPANION(&sdiodev->func1->dev);
if (adev)
adev->flags.power_manageable = 0;
sdiodev->func1_power_manageable = adev->flags.power_manageable;
adev = ACPI_COMPANION(&sdiodev->func2->dev);
if (adev)
sdiodev->func2_power_manageable = adev->flags.power_manageable;
#endif
}
static void brcmf_sdiod_acpi_set_power_manageable(struct brcmf_sdio_dev *sdiodev,
int enable)
{
#if IS_ENABLED(CONFIG_ACPI)
struct acpi_device *adev;
adev = ACPI_COMPANION(&sdiodev->func1->dev);
if (adev)
adev->flags.power_manageable = enable ? sdiodev->func1_power_manageable : 0;
adev = ACPI_COMPANION(&sdiodev->func2->dev);
if (adev)
adev->flags.power_manageable = enable ? sdiodev->func2_power_manageable : 0;
#endif
}
@ -1012,7 +1031,6 @@ static int brcmf_ops_sdio_probe(struct sdio_func *func,
int err;
struct brcmf_sdio_dev *sdiodev;
struct brcmf_bus *bus_if;
struct device *dev;
brcmf_dbg(SDIO, "Enter\n");
brcmf_dbg(SDIO, "Class=%x\n", func->class);
@ -1020,14 +1038,9 @@ static int brcmf_ops_sdio_probe(struct sdio_func *func,
brcmf_dbg(SDIO, "sdio device ID: 0x%04x\n", func->device);
brcmf_dbg(SDIO, "Function#: %d\n", func->num);
dev = &func->dev;
/* Set MMC_QUIRK_LENIENT_FN0 for this card */
func->card->quirks |= MMC_QUIRK_LENIENT_FN0;
/* prohibit ACPI power management for this device */
brcmf_sdiod_acpi_set_power_manageable(dev, 0);
/* Consume func num 1 but dont do anything with it. */
if (func->num == 1)
return 0;
@ -1059,6 +1072,7 @@ static int brcmf_ops_sdio_probe(struct sdio_func *func,
dev_set_drvdata(&sdiodev->func1->dev, bus_if);
sdiodev->dev = &sdiodev->func1->dev;
brcmf_sdiod_acpi_save_power_manageable(sdiodev);
brcmf_sdiod_change_state(sdiodev, BRCMF_SDIOD_DOWN);
brcmf_dbg(SDIO, "F2 found, calling brcmf_sdiod_probe...\n");
@ -1124,6 +1138,8 @@ void brcmf_sdio_wowl_config(struct device *dev, bool enabled)
if (sdiodev->settings->bus.sdio.oob_irq_supported ||
pm_caps & MMC_PM_WAKE_SDIO_IRQ) {
/* Stop ACPI from turning off the device when wowl is enabled */
brcmf_sdiod_acpi_set_power_manageable(sdiodev, !enabled);
sdiodev->wowl_enabled = enabled;
brcmf_dbg(SDIO, "Configuring WOWL, enabled=%d\n", enabled);
return;

View File

@ -188,6 +188,8 @@ struct brcmf_sdio_dev {
char nvram_name[BRCMF_FW_NAME_LEN];
char clm_name[BRCMF_FW_NAME_LEN];
bool wowl_enabled;
bool func1_power_manageable;
bool func2_power_manageable;
enum brcmf_sdiod_state state;
struct brcmf_sdiod_freezer *freezer;
const struct firmware *clm_fw;

View File

@ -512,15 +512,15 @@ mt7603_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
!(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
return -EOPNOTSUPP;
if (cmd == SET_KEY) {
key->hw_key_idx = wcid->idx;
wcid->hw_key_idx = idx;
} else {
if (cmd != SET_KEY) {
if (idx == wcid->hw_key_idx)
wcid->hw_key_idx = -1;
key = NULL;
return 0;
}
key->hw_key_idx = wcid->idx;
wcid->hw_key_idx = idx;
mt76_wcid_key_setup(&dev->mt76, wcid, key);
return mt7603_wtbl_set_key(dev, wcid->idx, key);
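The same reshape recurs in the mt7615, mt76x02, mt7915, mt7921 and mt7996 hunks below: key removal is handled first and exits early, so everything after it deals only with SET_KEY. In outline:

	if (cmd != SET_KEY) {
		/* disable path: forget the hw key index and bail out */
		if (idx == wcid->hw_key_idx)
			wcid->hw_key_idx = -1;
		return 0;
	}

	/* from here on, only SET_KEY handling remains */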

View File

@ -1193,8 +1193,7 @@ EXPORT_SYMBOL_GPL(mt7615_mac_enable_rtscts);
static int
mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
struct ieee80211_key_conf *key,
enum mt76_cipher_type cipher, u16 cipher_mask,
enum set_key_cmd cmd)
enum mt76_cipher_type cipher, u16 cipher_mask)
{
u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx) + 30 * 4;
u8 data[32] = {};
@ -1203,27 +1202,18 @@ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
return -EINVAL;
mt76_rr_copy(dev, addr, data, sizeof(data));
if (cmd == SET_KEY) {
if (cipher == MT_CIPHER_TKIP) {
/* Rx/Tx MIC keys are swapped */
memcpy(data, key->key, 16);
memcpy(data + 16, key->key + 24, 8);
memcpy(data + 24, key->key + 16, 8);
} else {
if (cipher_mask == BIT(cipher))
memcpy(data, key->key, key->keylen);
else if (cipher != MT_CIPHER_BIP_CMAC_128)
memcpy(data, key->key, 16);
if (cipher == MT_CIPHER_BIP_CMAC_128)
memcpy(data + 16, key->key, 16);
}
if (cipher == MT_CIPHER_TKIP) {
/* Rx/Tx MIC keys are swapped */
memcpy(data, key->key, 16);
memcpy(data + 16, key->key + 24, 8);
memcpy(data + 24, key->key + 16, 8);
} else {
if (cipher_mask == BIT(cipher))
memcpy(data, key->key, key->keylen);
else if (cipher != MT_CIPHER_BIP_CMAC_128)
memcpy(data, key->key, 16);
if (cipher == MT_CIPHER_BIP_CMAC_128)
memset(data + 16, 0, 16);
else if (cipher_mask)
memset(data, 0, 16);
if (!cipher_mask)
memset(data, 0, sizeof(data));
memcpy(data + 16, key->key, 16);
}
mt76_wr_copy(dev, addr, data, sizeof(data));
@ -1234,7 +1224,7 @@ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
static int
mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
enum mt76_cipher_type cipher, u16 cipher_mask,
int keyidx, enum set_key_cmd cmd)
int keyidx)
{
u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx), w0, w1;
@ -1253,9 +1243,7 @@ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
else
w0 &= ~MT_WTBL_W0_RX_IK_VALID;
if (cmd == SET_KEY &&
(cipher != MT_CIPHER_BIP_CMAC_128 ||
cipher_mask == BIT(cipher))) {
if (cipher != MT_CIPHER_BIP_CMAC_128 || cipher_mask == BIT(cipher)) {
w0 &= ~MT_WTBL_W0_KEY_IDX;
w0 |= FIELD_PREP(MT_WTBL_W0_KEY_IDX, keyidx);
}
@ -1272,19 +1260,10 @@ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
static void
mt7615_mac_wtbl_update_cipher(struct mt7615_dev *dev, struct mt76_wcid *wcid,
enum mt76_cipher_type cipher, u16 cipher_mask,
enum set_key_cmd cmd)
enum mt76_cipher_type cipher, u16 cipher_mask)
{
u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx);
if (!cipher_mask) {
mt76_clear(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE);
return;
}
if (cmd != SET_KEY)
return;
if (cipher == MT_CIPHER_BIP_CMAC_128 &&
cipher_mask & ~BIT(MT_CIPHER_BIP_CMAC_128))
return;
@ -1295,8 +1274,7 @@ mt7615_mac_wtbl_update_cipher(struct mt7615_dev *dev, struct mt76_wcid *wcid,
int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
struct mt76_wcid *wcid,
struct ieee80211_key_conf *key,
enum set_key_cmd cmd)
struct ieee80211_key_conf *key)
{
enum mt76_cipher_type cipher;
u16 cipher_mask = wcid->cipher;
@ -1306,19 +1284,14 @@ int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
if (cipher == MT_CIPHER_NONE)
return -EOPNOTSUPP;
if (cmd == SET_KEY)
cipher_mask |= BIT(cipher);
else
cipher_mask &= ~BIT(cipher);
mt7615_mac_wtbl_update_cipher(dev, wcid, cipher, cipher_mask, cmd);
err = mt7615_mac_wtbl_update_key(dev, wcid, key, cipher, cipher_mask,
cmd);
cipher_mask |= BIT(cipher);
mt7615_mac_wtbl_update_cipher(dev, wcid, cipher, cipher_mask);
err = mt7615_mac_wtbl_update_key(dev, wcid, key, cipher, cipher_mask);
if (err < 0)
return err;
err = mt7615_mac_wtbl_update_pk(dev, wcid, cipher, cipher_mask,
key->keyidx, cmd);
key->keyidx);
if (err < 0)
return err;
@ -1329,13 +1302,12 @@ int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
int mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
struct mt76_wcid *wcid,
struct ieee80211_key_conf *key,
enum set_key_cmd cmd)
struct ieee80211_key_conf *key)
{
int err;
spin_lock_bh(&dev->mt76.lock);
err = __mt7615_mac_wtbl_set_key(dev, wcid, key, cmd);
err = __mt7615_mac_wtbl_set_key(dev, wcid, key);
spin_unlock_bh(&dev->mt76.lock);
return err;

View File

@ -391,18 +391,17 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
if (cmd == SET_KEY)
*wcid_keyidx = idx;
else if (idx == *wcid_keyidx)
*wcid_keyidx = -1;
else
else {
if (idx == *wcid_keyidx)
*wcid_keyidx = -1;
goto out;
}
mt76_wcid_key_setup(&dev->mt76, wcid,
cmd == SET_KEY ? key : NULL);
mt76_wcid_key_setup(&dev->mt76, wcid, key);
if (mt76_is_mmio(&dev->mt76))
err = mt7615_mac_wtbl_set_key(dev, wcid, key, cmd);
err = mt7615_mac_wtbl_set_key(dev, wcid, key);
else
err = __mt7615_mac_wtbl_set_key(dev, wcid, key, cmd);
err = __mt7615_mac_wtbl_set_key(dev, wcid, key);
out:
mt7615_mutex_release(dev);

View File

@ -490,11 +490,9 @@ int mt7615_mac_write_txwi(struct mt7615_dev *dev, __le32 *txwi,
void mt7615_mac_set_timing(struct mt7615_phy *phy);
int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
struct mt76_wcid *wcid,
struct ieee80211_key_conf *key,
enum set_key_cmd cmd);
struct ieee80211_key_conf *key);
int mt7615_mac_wtbl_set_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
struct ieee80211_key_conf *key,
enum set_key_cmd cmd);
struct ieee80211_key_conf *key);
void mt7615_mac_reset_work(struct work_struct *work);
u32 mt7615_mac_get_sta_tid_sn(struct mt7615_dev *dev, int wcid, u8 tid);

View File

@ -454,20 +454,20 @@ int mt76x02_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
msta = sta ? (struct mt76x02_sta *)sta->drv_priv : NULL;
wcid = msta ? &msta->wcid : &mvif->group_wcid;
if (cmd == SET_KEY) {
key->hw_key_idx = wcid->idx;
wcid->hw_key_idx = idx;
if (key->flags & IEEE80211_KEY_FLAG_RX_MGMT) {
key->flags |= IEEE80211_KEY_FLAG_SW_MGMT_TX;
wcid->sw_iv = true;
}
} else {
if (cmd != SET_KEY) {
if (idx == wcid->hw_key_idx) {
wcid->hw_key_idx = -1;
wcid->sw_iv = false;
}
key = NULL;
return 0;
}
key->hw_key_idx = wcid->idx;
wcid->hw_key_idx = idx;
if (key->flags & IEEE80211_KEY_FLAG_RX_MGMT) {
key->flags |= IEEE80211_KEY_FLAG_SW_MGMT_TX;
wcid->sw_iv = true;
}
mt76_wcid_key_setup(&dev->mt76, wcid, key);

View File

@ -410,16 +410,15 @@ static int mt7915_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
mt7915_mcu_add_bss_info(phy, vif, true);
}
if (cmd == SET_KEY)
if (cmd == SET_KEY) {
*wcid_keyidx = idx;
else if (idx == *wcid_keyidx)
*wcid_keyidx = -1;
else
} else {
if (idx == *wcid_keyidx)
*wcid_keyidx = -1;
goto out;
}
mt76_wcid_key_setup(&dev->mt76, wcid,
cmd == SET_KEY ? key : NULL);
mt76_wcid_key_setup(&dev->mt76, wcid, key);
err = mt76_connac_mcu_add_key(&dev->mt76, vif, &msta->bip,
key, MCU_EXT_CMD(STA_REC_UPDATE),
&msta->wcid, cmd);

View File

@ -171,12 +171,12 @@ mt7921_mac_init_band(struct mt7921_dev *dev, u8 band)
u8 mt7921_check_offload_capability(struct device *dev, const char *fw_wm)
{
struct mt7921_fw_features *features = NULL;
const struct mt76_connac2_fw_trailer *hdr;
struct mt7921_realease_info *rel_info;
const struct firmware *fw;
int ret, i, offset = 0;
const u8 *data, *end;
u8 offload_caps = 0;
ret = request_firmware(&fw, fw_wm, dev);
if (ret)
@ -208,7 +208,10 @@ u8 mt7921_check_offload_capability(struct device *dev, const char *fw_wm)
data += sizeof(*rel_info);
if (rel_info->tag == MT7921_FW_TAG_FEATURE) {
struct mt7921_fw_features *features;
features = (struct mt7921_fw_features *)data;
offload_caps = features->data;
break;
}
@ -218,7 +221,7 @@ u8 mt7921_check_offload_capability(struct device *dev, const char *fw_wm)
out:
release_firmware(fw);
return features ? features->data : 0;
return offload_caps;
}
EXPORT_SYMBOL_GPL(mt7921_check_offload_capability);
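The fix above copies the capability byte out of the firmware image before releasing it; previously, features pointed into fw->data, which release_firmware() frees. The general rule, sketched with a hypothetical parser:

	const struct firmware *fw;
	u8 caps = 0;

	if (request_firmware(&fw, fw_name, dev))
		return 0;

	caps = parse_caps(fw->data, fw->size);	/* copy by value */
	release_firmware(fw);			/* fw->data is invalid now */
	return caps;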

View File

@ -569,16 +569,15 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
mt7921_mutex_acquire(dev);
if (cmd == SET_KEY)
if (cmd == SET_KEY) {
*wcid_keyidx = idx;
else if (idx == *wcid_keyidx)
*wcid_keyidx = -1;
else
} else {
if (idx == *wcid_keyidx)
*wcid_keyidx = -1;
goto out;
}
mt76_wcid_key_setup(&dev->mt76, wcid,
cmd == SET_KEY ? key : NULL);
mt76_wcid_key_setup(&dev->mt76, wcid, key);
err = mt76_connac_mcu_add_key(&dev->mt76, vif, &msta->bip,
key, MCU_UNI_CMD(STA_REC_UPDATE),
&msta->wcid, cmd);

View File

@ -20,7 +20,7 @@ static const struct pci_device_id mt7921_pci_device_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x0608),
.driver_data = (kernel_ulong_t)MT7921_FIRMWARE_WM },
{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x0616),
.driver_data = (kernel_ulong_t)MT7921_FIRMWARE_WM },
.driver_data = (kernel_ulong_t)MT7922_FIRMWARE_WM },
{ },
};

View File

@ -351,16 +351,15 @@ static int mt7996_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
mt7996_mcu_add_bss_info(phy, vif, true);
}
if (cmd == SET_KEY)
if (cmd == SET_KEY) {
*wcid_keyidx = idx;
else if (idx == *wcid_keyidx)
*wcid_keyidx = -1;
else
} else {
if (idx == *wcid_keyidx)
*wcid_keyidx = -1;
goto out;
}
mt76_wcid_key_setup(&dev->mt76, wcid,
cmd == SET_KEY ? key : NULL);
mt76_wcid_key_setup(&dev->mt76, wcid, key);
err = mt7996_mcu_add_key(&dev->mt76, vif, &msta->bip,
key, MCU_WMWA_UNI_CMD(STA_REC_UPDATE),
&msta->wcid, cmd);

Some files were not shown because too many files have changed in this diff