Linux 5.19-rc6

-----BEGIN PGP SIGNATURE-----
 
 iQFSBAABCAA8FiEEq68RxlopcLEwq+PEeb4+QwBBGIYFAmLLR2MeHHRvcnZhbGRz
 QGxpbnV4LWZvdW5kYXRpb24ub3JnAAoJEHm+PkMAQRiG+hMH/jKGMOAbicR/CRq8
 WLKmpb1eTJP2dbeiEs5amBk9DZQhqjx6tIQRCpZoGxBL+XWq7DX2fRLkAT56yS5/
 NwferpR6IR9GlhjbfczF0JuQkP6eRUXnLrIKS5MViLI5QrCI80kkj4/mdqUXSiBV
 cMfXl5T1j+pb3zHUVXjnmvY+77q6rZTPoGxa/l8d6MaIhAg+jhu2E1HaSaSCX/YK
 TViq7ciI9cXoFV9yqhLkkBdGjBV8VQsKmeWEcA738bdSy1WAJSV1SVTJqLFvwdPI
 PM1asxkPoQ7jRrwsY4G8pZ3zPskJMS4Qwdn64HK+no2AKhJt2p6MePD1XblcrGHK
 QNStMY0=
 =LfuD
 -----END PGP SIGNATURE-----

Merge tag 'v5.19-rc6' into tip:x86/kdump

Merge rc6 to pick up dependent changes to the bootparam UAPI header.

Signed-off-by: Borislav Petkov <bp@suse.de>
Borislav Petkov 2022-07-11 09:58:01 +02:00
commit 5a88c48f41
415 changed files with 4857 additions and 2485 deletions


@ -64,6 +64,9 @@ Bart Van Assche <bvanassche@acm.org> <bart.vanassche@sandisk.com>
Bart Van Assche <bvanassche@acm.org> <bart.vanassche@wdc.com>
Ben Gardner <bgardner@wabtec.com>
Ben M Cahill <ben.m.cahill@intel.com>
Ben Widawsky <bwidawsk@kernel.org> <ben@bwidawsk.net>
Ben Widawsky <bwidawsk@kernel.org> <ben.widawsky@intel.com>
Ben Widawsky <bwidawsk@kernel.org> <benjamin.widawsky@intel.com>
Björn Steinbrink <B.Steinbrink@gmx.de>
Björn Töpel <bjorn@kernel.org> <bjorn.topel@gmail.com>
Björn Töpel <bjorn@kernel.org> <bjorn.topel@intel.com>


@ -67,7 +67,7 @@ if:
then:
properties:
clocks:
maxItems: 2
minItems: 2
required:
- clock-names


@ -13,6 +13,12 @@ EDD Interfaces
.. kernel-doc:: drivers/firmware/edd.c
:internal:
Generic System Framebuffers Interface
-------------------------------------
.. kernel-doc:: drivers/firmware/sysfb.c
:export:
Intel Stratix10 SoC Service Layer
---------------------------------
Some features of the Intel Stratix10 SoC require a level of privilege


@ -6,6 +6,15 @@
netdev FAQ
==========
tl;dr
-----
- designate your patch to a tree - ``[PATCH net]`` or ``[PATCH net-next]``
- for fixes the ``Fixes:`` tag is required, regardless of the tree
- don't post large series (> 15 patches), break them up
- don't repost your patches within one 24h period
- reverse xmas tree
What is netdev?
---------------
It is a mailing list for all network-related Linux stuff. This
@ -136,6 +145,20 @@ it to the maintainer to figure out what is the most recent and current
version that should be applied. If there is any doubt, the maintainer
will reply and ask what should be done.
How do I divide my work into patches?
-------------------------------------
Put yourself in the shoes of the reviewer. Each patch is read separately
and therefore should constitute a comprehensible step towards your stated
goal.
Avoid sending series longer than 15 patches. A larger series takes longer
to review, as reviewers will defer looking at it until they find a large
chunk of time. A small series can be reviewed in a short time, so maintainers
just do it. As a result, a sequence of smaller series gets merged more quickly
and with better review coverage. Re-posting large series also increases the
mailing list traffic.
I made changes to only a few patches in a patch series should I resend only those changed?
------------------------------------------------------------------------------------------
No, please resend the entire patch series and make sure you do number your
@ -183,6 +206,19 @@ it is requested that you make it look like this::
* another line of text
*/
What is "reverse xmas tree"?
----------------------------
Netdev has a convention for ordering local variables in functions.
Order the variable declaration lines longest to shortest, e.g.::
struct scatterlist *sg;
struct sk_buff *skb;
int err, i;
If there are dependencies between the variables that prevent this ordering,
move the initialization out of line.
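For illustration only (this sketch is not part of the FAQ text added by the
commit; ``example_open()`` and ``struct my_priv`` are made-up names, while
``netdev_priv()`` is the usual private-data accessor), a dependent
initialization moved out of line might look like::

	static int example_open(struct net_device *ndev)
	{
		struct my_priv *priv;	/* initializer would depend on ndev */
		int err = 0;

		priv = netdev_priv(ndev);	/* dependent init done out of line */
		/* ... use priv ... */
		return err;
	}

The declaration keeps its place in the longest-to-shortest ordering; only the
assignment moves into the function body.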
I am working in existing code which uses non-standard formatting. Which formatting should I use?
------------------------------------------------------------------------------------------------
Make your code follow the most recent guidelines, so that eventually all code


@ -426,7 +426,6 @@ F: drivers/acpi/*thermal*
ACPI VIOT DRIVER
M: Jean-Philippe Brucker <jean-philippe@linaro.org>
L: linux-acpi@vger.kernel.org
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Maintained
F: drivers/acpi/viot.c
@ -960,7 +959,6 @@ F: drivers/video/fbdev/geode/
AMD IOMMU (AMD-VI)
M: Joerg Roedel <joro@8bytes.org>
R: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
@ -2540,6 +2538,7 @@ W: http://www.armlinux.org.uk/
ARM/QUALCOMM SUPPORT
M: Andy Gross <agross@kernel.org>
M: Bjorn Andersson <bjorn.andersson@linaro.org>
R: Konrad Dybcio <konrad.dybcio@somainline.org>
L: linux-arm-msm@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux.git
@ -3617,16 +3616,18 @@ S: Maintained
F: Documentation/devicetree/bindings/iio/accel/bosch,bma400.yaml
F: drivers/iio/accel/bma400*
BPF (Safe dynamic programs and tools)
BPF [GENERAL] (Safe Dynamic Programs and Tools)
M: Alexei Starovoitov <ast@kernel.org>
M: Daniel Borkmann <daniel@iogearbox.net>
M: Andrii Nakryiko <andrii@kernel.org>
R: Martin KaFai Lau <kafai@fb.com>
R: Song Liu <songliubraving@fb.com>
R: Martin KaFai Lau <martin.lau@linux.dev>
R: Song Liu <song@kernel.org>
R: Yonghong Song <yhs@fb.com>
R: John Fastabend <john.fastabend@gmail.com>
R: KP Singh <kpsingh@kernel.org>
L: netdev@vger.kernel.org
R: Stanislav Fomichev <sdf@google.com>
R: Hao Luo <haoluo@google.com>
R: Jiri Olsa <jolsa@kernel.org>
L: bpf@vger.kernel.org
S: Supported
W: https://bpf.io/
@ -3658,12 +3659,9 @@ F: scripts/pahole-version.sh
F: tools/bpf/
F: tools/lib/bpf/
F: tools/testing/selftests/bpf/
N: bpf
K: bpf
BPF JIT for ARM
M: Shubham Bansal <illusionist.neo@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Odd Fixes
F: arch/arm/net/
@ -3672,7 +3670,6 @@ BPF JIT for ARM64
M: Daniel Borkmann <daniel@iogearbox.net>
M: Alexei Starovoitov <ast@kernel.org>
M: Zi Shen Lim <zlim.lnx@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: arch/arm64/net/
@ -3680,14 +3677,12 @@ F: arch/arm64/net/
BPF JIT for MIPS (32-BIT AND 64-BIT)
M: Johan Almbladh <johan.almbladh@anyfinetworks.com>
M: Paul Burton <paulburton@kernel.org>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/mips/net/
BPF JIT for NFP NICs
M: Jakub Kicinski <kuba@kernel.org>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Odd Fixes
F: drivers/net/ethernet/netronome/nfp/bpf/
@ -3695,7 +3690,6 @@ F: drivers/net/ethernet/netronome/nfp/bpf/
BPF JIT for POWERPC (32-BIT AND 64-BIT)
M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
M: Michael Ellerman <mpe@ellerman.id.au>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: arch/powerpc/net/
@ -3703,7 +3697,6 @@ F: arch/powerpc/net/
BPF JIT for RISC-V (32-bit)
M: Luke Nelson <luke.r.nels@gmail.com>
M: Xi Wang <xi.wang@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/riscv/net/
@ -3711,7 +3704,6 @@ X: arch/riscv/net/bpf_jit_comp64.c
BPF JIT for RISC-V (64-bit)
M: Björn Töpel <bjorn@kernel.org>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/riscv/net/
@ -3721,7 +3713,6 @@ BPF JIT for S390
M: Ilya Leoshkevich <iii@linux.ibm.com>
M: Heiko Carstens <hca@linux.ibm.com>
M: Vasily Gorbik <gor@linux.ibm.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: arch/s390/net/
@ -3729,14 +3720,12 @@ X: arch/s390/net/pnet.c
BPF JIT for SPARC (32-BIT AND 64-BIT)
M: David S. Miller <davem@davemloft.net>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Odd Fixes
F: arch/sparc/net/
BPF JIT for X86 32-BIT
M: Wang YanQing <udknight@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Odd Fixes
F: arch/x86/net/bpf_jit_comp32.c
@ -3744,13 +3733,60 @@ F: arch/x86/net/bpf_jit_comp32.c
BPF JIT for X86 64-BIT
M: Alexei Starovoitov <ast@kernel.org>
M: Daniel Borkmann <daniel@iogearbox.net>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: arch/x86/net/
X: arch/x86/net/bpf_jit_comp32.c
BPF LSM (Security Audit and Enforcement using BPF)
BPF [CORE]
M: Alexei Starovoitov <ast@kernel.org>
M: Daniel Borkmann <daniel@iogearbox.net>
R: John Fastabend <john.fastabend@gmail.com>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/verifier.c
F: kernel/bpf/tnum.c
F: kernel/bpf/core.c
F: kernel/bpf/syscall.c
F: kernel/bpf/dispatcher.c
F: kernel/bpf/trampoline.c
F: include/linux/bpf*
F: include/linux/filter.h
BPF [BTF]
M: Martin KaFai Lau <martin.lau@linux.dev>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/btf.c
F: include/linux/btf*
BPF [TRACING]
M: Song Liu <song@kernel.org>
R: Jiri Olsa <jolsa@kernel.org>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/trace/bpf_trace.c
F: kernel/bpf/stackmap.c
BPF [NETWORKING] (tc BPF, sock_addr)
M: Martin KaFai Lau <martin.lau@linux.dev>
M: Daniel Borkmann <daniel@iogearbox.net>
R: John Fastabend <john.fastabend@gmail.com>
L: bpf@vger.kernel.org
L: netdev@vger.kernel.org
S: Maintained
F: net/core/filter.c
F: net/sched/act_bpf.c
F: net/sched/cls_bpf.c
BPF [NETWORKING] (struct_ops, reuseport)
M: Martin KaFai Lau <martin.lau@linux.dev>
L: bpf@vger.kernel.org
L: netdev@vger.kernel.org
S: Maintained
F: kernel/bpf/bpf_struct*
BPF [SECURITY & LSM] (Security Audit and Enforcement using BPF)
M: KP Singh <kpsingh@kernel.org>
R: Florent Revest <revest@chromium.org>
R: Brendan Jackman <jackmanb@chromium.org>
@ -3761,7 +3797,27 @@ F: include/linux/bpf_lsm.h
F: kernel/bpf/bpf_lsm.c
F: security/bpf/
BPF L7 FRAMEWORK
BPF [STORAGE & CGROUPS]
M: Martin KaFai Lau <martin.lau@linux.dev>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/cgroup.c
F: kernel/bpf/*storage.c
F: kernel/bpf/bpf_lru*
BPF [RINGBUF]
M: Andrii Nakryiko <andrii@kernel.org>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/ringbuf.c
BPF [ITERATOR]
M: Yonghong Song <yhs@fb.com>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/*iter.c
BPF [L7 FRAMEWORK] (sockmap)
M: John Fastabend <john.fastabend@gmail.com>
M: Jakub Sitnicki <jakub@cloudflare.com>
L: netdev@vger.kernel.org
@ -3774,13 +3830,31 @@ F: net/ipv4/tcp_bpf.c
F: net/ipv4/udp_bpf.c
F: net/unix/unix_bpf.c
BPFTOOL
BPF [LIBRARY] (libbpf)
M: Andrii Nakryiko <andrii@kernel.org>
L: bpf@vger.kernel.org
S: Maintained
F: tools/lib/bpf/
BPF [TOOLING] (bpftool)
M: Quentin Monnet <quentin@isovalent.com>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/disasm.*
F: tools/bpf/bpftool/
BPF [SELFTESTS] (Test Runners & Infrastructure)
M: Andrii Nakryiko <andrii@kernel.org>
R: Mykola Lysenko <mykolal@fb.com>
L: bpf@vger.kernel.org
S: Maintained
F: tools/testing/selftests/bpf/
BPF [MISC]
L: bpf@vger.kernel.org
S: Odd Fixes
K: (?:\b|_)bpf(?:\b|_)
BROADCOM B44 10/100 ETHERNET DRIVER
M: Michael Chan <michael.chan@broadcom.com>
L: netdev@vger.kernel.org
@ -4976,6 +5050,7 @@ Q: http://patchwork.kernel.org/project/linux-clk/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux.git
F: Documentation/devicetree/bindings/clock/
F: drivers/clk/
F: include/dt-bindings/clock/
F: include/linux/clk-pr*
F: include/linux/clk/
F: include/linux/of_clk.h
@ -5026,7 +5101,7 @@ COMPUTE EXPRESS LINK (CXL)
M: Alison Schofield <alison.schofield@intel.com>
M: Vishal Verma <vishal.l.verma@intel.com>
M: Ira Weiny <ira.weiny@intel.com>
M: Ben Widawsky <ben.widawsky@intel.com>
M: Ben Widawsky <bwidawsk@kernel.org>
M: Dan Williams <dan.j.williams@intel.com>
L: linux-cxl@vger.kernel.org
S: Maintained
@ -5978,7 +6053,6 @@ DMA MAPPING HELPERS
M: Christoph Hellwig <hch@lst.de>
M: Marek Szyprowski <m.szyprowski@samsung.com>
R: Robin Murphy <robin.murphy@arm.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Supported
W: http://git.infradead.org/users/hch/dma-mapping.git
@ -5991,7 +6065,6 @@ F: kernel/dma/
DMA MAPPING BENCHMARK
M: Xiang Chen <chenxiang66@hisilicon.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
F: kernel/dma/map_benchmark.c
F: tools/testing/selftests/dma/
@ -7576,7 +7649,6 @@ F: drivers/gpu/drm/exynos/exynos_dp*
EXYNOS SYSMMU (IOMMU) driver
M: Marek Szyprowski <m.szyprowski@samsung.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Maintained
F: drivers/iommu/exynos-iommu.c
@ -9835,7 +9907,10 @@ INTEL ASoC DRIVERS
M: Cezary Rojewski <cezary.rojewski@intel.com>
M: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
M: Liam Girdwood <liam.r.girdwood@linux.intel.com>
M: Jie Yang <yang.jie@linux.intel.com>
M: Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
M: Bard Liao <yung-chuan.liao@linux.intel.com>
M: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
M: Kai Vehmanen <kai.vehmanen@linux.intel.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Supported
F: sound/soc/intel/
@ -9998,7 +10073,6 @@ F: drivers/hid/intel-ish-hid/
INTEL IOMMU (VT-d)
M: David Woodhouse <dwmw2@infradead.org>
M: Lu Baolu <baolu.lu@linux.intel.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
@ -10378,7 +10452,6 @@ F: include/linux/iomap.h
IOMMU DRIVERS
M: Joerg Roedel <joro@8bytes.org>
M: Will Deacon <will@kernel.org>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
@ -12538,7 +12611,6 @@ F: drivers/i2c/busses/i2c-mt65xx.c
MEDIATEK IOMMU DRIVER
M: Yong Wu <yong.wu@mediatek.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
S: Supported
@ -14397,9 +14469,8 @@ F: Documentation/devicetree/bindings/sound/nxp,tfa989x.yaml
F: sound/soc/codecs/tfa989x.c
NXP-NCI NFC DRIVER
R: Charles Gorand <charles.gorand@effinnov.com>
L: linux-nfc@lists.01.org (subscribers-only)
S: Supported
S: Orphan
F: Documentation/devicetree/bindings/net/nfc/nxp,nci.yaml
F: drivers/nfc/nxp-nci
@ -15788,7 +15859,7 @@ F: drivers/pinctrl/freescale/
PIN CONTROLLER - INTEL
M: Mika Westerberg <mika.westerberg@linux.intel.com>
M: Andy Shevchenko <andy@kernel.org>
S: Maintained
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git
F: drivers/pinctrl/intel/
@ -16310,7 +16381,7 @@ F: drivers/crypto/qat/
QCOM AUDIO (ASoC) DRIVERS
M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
M: Banajit Goswami <bgoswami@codeaurora.org>
M: Banajit Goswami <bgoswami@quicinc.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Supported
F: sound/soc/codecs/lpass-va-macro.c
@ -16591,7 +16662,6 @@ F: drivers/i2c/busses/i2c-qcom-cci.c
QUALCOMM IOMMU
M: Rob Clark <robdclark@gmail.com>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
L: linux-arm-msm@vger.kernel.org
S: Maintained
@ -18105,6 +18175,7 @@ F: drivers/misc/sgi-xp/
SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS
M: Karsten Graul <kgraul@linux.ibm.com>
M: Wenjia Zhang <wenjia@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
W: http://www.ibm.com/developerworks/linux/linux390/
@ -18737,8 +18808,10 @@ F: sound/soc/
SOUND - SOUND OPEN FIRMWARE (SOF) DRIVERS
M: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
M: Liam Girdwood <lgirdwood@gmail.com>
M: Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
M: Bard Liao <yung-chuan.liao@linux.intel.com>
M: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
M: Kai Vehmanen <kai.vehmanen@linux.intel.com>
R: Kai Vehmanen <kai.vehmanen@linux.intel.com>
M: Daniel Baluta <daniel.baluta@nxp.com>
L: sound-open-firmware@alsa-project.org (moderated for non-subscribers)
S: Supported
@ -19217,7 +19290,6 @@ F: arch/x86/boot/video*
SWIOTLB SUBSYSTEM
M: Christoph Hellwig <hch@infradead.org>
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Supported
W: http://git.infradead.org/users/hch/dma-mapping.git
@ -21893,7 +21965,6 @@ XEN SWIOTLB SUBSYSTEM
M: Juergen Gross <jgross@suse.com>
M: Stefano Stabellini <sstabellini@kernel.org>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)
L: iommu@lists.linux-foundation.org
L: iommu@lists.linux.dev
S: Supported
F: arch/x86/xen/*swiotlb*


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 19
SUBLEVEL = 0
EXTRAVERSION = -rc4
EXTRAVERSION = -rc6
NAME = Superb Owl
# *DOCUMENTATION*


@ -233,10 +233,9 @@
status = "okay";
eeprom@53 {
compatible = "atmel,24c32";
compatible = "atmel,24c02";
reg = <0x53>;
pagesize = <16>;
size = <128>;
status = "okay";
};
};


@ -329,21 +329,21 @@
status = "okay";
eeprom@50 {
compatible = "atmel,24c32";
compatible = "atmel,24c02";
reg = <0x50>;
pagesize = <16>;
status = "okay";
};
eeprom@52 {
compatible = "atmel,24c32";
compatible = "atmel,24c02";
reg = <0x52>;
pagesize = <16>;
status = "disabled";
};
eeprom@53 {
compatible = "atmel,24c32";
compatible = "atmel,24c02";
reg = <0x53>;
pagesize = <16>;
status = "disabled";


@ -216,10 +216,8 @@
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_usdhc2>;
bus-width = <4>;
no-1-8-v;
non-removable;
cap-sd-highspeed;
sd-uhs-ddr50;
mmc-ddr-1_8v;
vmmc-supply = <&reg_wifi>;
enable-sdio-wakeup;
status = "okay";


@ -27,6 +27,37 @@
reg = <0x16>;
#reset-cells = <1>;
};
scmi_voltd: protocol@17 {
reg = <0x17>;
scmi_reguls: regulators {
#address-cells = <1>;
#size-cells = <0>;
scmi_reg11: reg11@0 {
reg = <0>;
regulator-name = "reg11";
regulator-min-microvolt = <1100000>;
regulator-max-microvolt = <1100000>;
};
scmi_reg18: reg18@1 {
voltd-name = "reg18";
reg = <1>;
regulator-name = "reg18";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};
scmi_usb33: usb33@2 {
reg = <2>;
regulator-name = "usb33";
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
};
};
};
};
@ -45,3 +76,30 @@
};
};
};
&reg11 {
status = "disabled";
};
&reg18 {
status = "disabled";
};
&usb33 {
status = "disabled";
};
&usbotg_hs {
usb33d-supply = <&scmi_usb33>;
};
&usbphyc {
vdda1v1-supply = <&scmi_reg11>;
vdda1v8-supply = <&scmi_reg18>;
};
/delete-node/ &clk_hse;
/delete-node/ &clk_hsi;
/delete-node/ &clk_lse;
/delete-node/ &clk_lsi;
/delete-node/ &clk_csi;


@ -565,7 +565,7 @@
compatible = "st,stm32-cec";
reg = <0x40016000 0x400>;
interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&rcc CEC_K>, <&clk_lse>;
clocks = <&rcc CEC_K>, <&rcc CEC>;
clock-names = "cec", "hdmi-cec";
status = "disabled";
};
@ -1474,7 +1474,7 @@
usbh_ohci: usb@5800c000 {
compatible = "generic-ohci";
reg = <0x5800c000 0x1000>;
clocks = <&rcc USBH>, <&usbphyc>;
clocks = <&usbphyc>, <&rcc USBH>;
resets = <&rcc USBH_R>;
interrupts = <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
@ -1483,7 +1483,7 @@
usbh_ehci: usb@5800d000 {
compatible = "generic-ehci";
reg = <0x5800d000 0x1000>;
clocks = <&rcc USBH>;
clocks = <&usbphyc>, <&rcc USBH>;
resets = <&rcc USBH_R>;
interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
companion = <&usbh_ohci>;


@ -29,6 +29,10 @@
clocks = <&scmi_clk CK_SCMI_MPU>;
};
&dsi {
clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
};
&gpioz {
clocks = <&scmi_clk CK_SCMI_GPIOZ>;
};


@ -35,6 +35,7 @@
};
&dsi {
phy-dsi-supply = <&scmi_reg18>;
clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
};


@ -34,6 +34,10 @@
resets = <&scmi_reset RST_SCMI_CRYP1>;
};
&dsi {
clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
};
&gpioz {
clocks = <&scmi_clk CK_SCMI_GPIOZ>;
};


@ -36,6 +36,7 @@
};
&dsi {
phy-dsi-supply = <&scmi_reg18>;
clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
};


@ -93,6 +93,7 @@ CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_DRM=y
CONFIG_DRM_PANEL_SEIKO_43WVF1G=y
CONFIG_DRM_MXSFB=y
CONFIG_FB=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_LCD_CLASS_DEVICE=y
CONFIG_BACKLIGHT_CLASS_DEVICE=y


@ -202,7 +202,7 @@ static const struct wakeup_source_info ws_info[] = {
static const struct of_device_id sama5d2_ws_ids[] = {
{ .compatible = "atmel,sama5d2-gem", .data = &ws_info[0] },
{ .compatible = "atmel,at91rm9200-rtc", .data = &ws_info[1] },
{ .compatible = "atmel,sama5d2-rtc", .data = &ws_info[1] },
{ .compatible = "atmel,sama5d3-udc", .data = &ws_info[2] },
{ .compatible = "atmel,at91rm9200-ohci", .data = &ws_info[2] },
{ .compatible = "usb-ohci", .data = &ws_info[2] },
@ -213,24 +213,24 @@ static const struct of_device_id sama5d2_ws_ids[] = {
};
static const struct of_device_id sam9x60_ws_ids[] = {
{ .compatible = "atmel,at91sam9x5-rtc", .data = &ws_info[1] },
{ .compatible = "microchip,sam9x60-rtc", .data = &ws_info[1] },
{ .compatible = "atmel,at91rm9200-ohci", .data = &ws_info[2] },
{ .compatible = "usb-ohci", .data = &ws_info[2] },
{ .compatible = "atmel,at91sam9g45-ehci", .data = &ws_info[2] },
{ .compatible = "usb-ehci", .data = &ws_info[2] },
{ .compatible = "atmel,at91sam9260-rtt", .data = &ws_info[4] },
{ .compatible = "microchip,sam9x60-rtt", .data = &ws_info[4] },
{ .compatible = "cdns,sam9x60-macb", .data = &ws_info[5] },
{ /* sentinel */ }
};
static const struct of_device_id sama7g5_ws_ids[] = {
{ .compatible = "atmel,at91sam9x5-rtc", .data = &ws_info[1] },
{ .compatible = "microchip,sama7g5-rtc", .data = &ws_info[1] },
{ .compatible = "microchip,sama7g5-ohci", .data = &ws_info[2] },
{ .compatible = "usb-ohci", .data = &ws_info[2] },
{ .compatible = "atmel,at91sam9g45-ehci", .data = &ws_info[2] },
{ .compatible = "usb-ehci", .data = &ws_info[2] },
{ .compatible = "microchip,sama7g5-sdhci", .data = &ws_info[3] },
{ .compatible = "atmel,at91sam9260-rtt", .data = &ws_info[4] },
{ .compatible = "microchip,sama7g5-rtt", .data = &ws_info[4] },
{ /* sentinel */ }
};
@ -1079,7 +1079,7 @@ securam_fail:
return ret;
}
static void at91_pm_secure_init(void)
static void __init at91_pm_secure_init(void)
{
int suspend_mode;
struct arm_smccc_res res;


@ -71,6 +71,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible,
}
sram_base = of_iomap(node, 0);
of_node_put(node);
if (!sram_base) {
pr_err("Couldn't map SRAM registers\n");
return;
@ -91,6 +92,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible,
}
scu_base = of_iomap(node, 0);
of_node_put(node);
if (!scu_base) {
pr_err("Couldn't map SCU registers\n");
return;


@ -63,11 +63,12 @@ out:
unsigned long __pfn_to_mfn(unsigned long pfn)
{
struct rb_node *n = phys_to_mach.rb_node;
struct rb_node *n;
struct xen_p2m_entry *entry;
unsigned long irqflags;
read_lock_irqsave(&p2m_lock, irqflags);
n = phys_to_mach.rb_node;
while (n) {
entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
if (entry->pfn <= pfn &&
@ -152,10 +153,11 @@ bool __set_phys_to_machine_multi(unsigned long pfn,
int rc;
unsigned long irqflags;
struct xen_p2m_entry *p2m_entry;
struct rb_node *n = phys_to_mach.rb_node;
struct rb_node *n;
if (mfn == INVALID_P2M_ENTRY) {
write_lock_irqsave(&p2m_lock, irqflags);
n = phys_to_mach.rb_node;
while (n) {
p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
if (p2m_entry->pfn <= pfn &&


@ -395,41 +395,41 @@
&iomuxc {
pinctrl_eqos: eqosgrp {
fsl,pins = <
MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3
MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3
MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91
MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91
MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91
MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91
MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91
MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91
MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f
MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f
MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f
MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f
MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f
MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f
MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22 0x19
MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2
MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2
MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90
MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90
MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90
MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90
MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90
MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90
MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16
MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16
MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16
MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16
MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16
MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16
MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22 0x10
>;
};
pinctrl_fec: fecgrp {
fsl,pins = <
MX8MP_IOMUXC_SAI1_RXD2__ENET1_MDC 0x3
MX8MP_IOMUXC_SAI1_RXD3__ENET1_MDIO 0x3
MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0 0x91
MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1 0x91
MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2 0x91
MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3 0x91
MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC 0x91
MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL 0x91
MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0 0x1f
MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1 0x1f
MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2 0x1f
MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3 0x1f
MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL 0x1f
MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC 0x1f
MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02 0x19
MX8MP_IOMUXC_SAI1_RXD2__ENET1_MDC 0x2
MX8MP_IOMUXC_SAI1_RXD3__ENET1_MDIO 0x2
MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0 0x90
MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1 0x90
MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2 0x90
MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3 0x90
MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC 0x90
MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL 0x90
MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0 0x16
MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1 0x16
MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2 0x16
MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3 0x16
MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL 0x16
MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC 0x16
MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02 0x10
>;
};
@ -461,28 +461,28 @@
pinctrl_gpio_led: gpioledgrp {
fsl,pins = <
MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16 0x19
MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16 0x140
>;
};
pinctrl_i2c1: i2c1grp {
fsl,pins = <
MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL 0x400001c3
MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA 0x400001c3
MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL 0x400001c2
MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA 0x400001c2
>;
};
pinctrl_i2c3: i2c3grp {
fsl,pins = <
MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c3
MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c3
MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c2
MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c2
>;
};
pinctrl_i2c5: i2c5grp {
fsl,pins = <
MX8MP_IOMUXC_SPDIF_RX__I2C5_SDA 0x400001c3
MX8MP_IOMUXC_SPDIF_TX__I2C5_SCL 0x400001c3
MX8MP_IOMUXC_SPDIF_RX__I2C5_SDA 0x400001c2
MX8MP_IOMUXC_SPDIF_TX__I2C5_SCL 0x400001c2
>;
};
@ -500,20 +500,20 @@
pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
fsl,pins = <
MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x41
MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x40
>;
};
pinctrl_uart2: uart2grp {
fsl,pins = <
MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x49
MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x49
MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x140
MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x140
>;
};
pinctrl_usb1_vbus: usb1grp {
fsl,pins = <
MX8MP_IOMUXC_GPIO1_IO14__USB2_OTG_PWR 0x19
MX8MP_IOMUXC_GPIO1_IO14__USB2_OTG_PWR 0x10
>;
};
@ -525,7 +525,7 @@
MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d0
MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d0
MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d0
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
>;
};
@ -537,7 +537,7 @@
MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d4
MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d4
MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
>;
};
@ -549,7 +549,7 @@
MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d6
MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d6
MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d6
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
>;
};


@ -110,28 +110,28 @@
&iomuxc {
pinctrl_eqos: eqosgrp {
fsl,pins = <
MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3
MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3
MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91
MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91
MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91
MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91
MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91
MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91
MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f
MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f
MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f
MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f
MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f
MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f
MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x19
MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2
MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2
MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90
MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90
MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90
MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90
MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90
MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90
MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16
MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16
MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16
MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16
MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16
MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16
MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x10
>;
};
pinctrl_uart2: uart2grp {
fsl,pins = <
MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x49
MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x49
MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x40
MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x40
>;
};
@ -151,7 +151,7 @@
MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d0
MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d0
MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d0
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
>;
};
@ -163,13 +163,13 @@
pinctrl_reg_usb1: regusb1grp {
fsl,pins = <
MX8MP_IOMUXC_GPIO1_IO14__GPIO1_IO14 0x19
MX8MP_IOMUXC_GPIO1_IO14__GPIO1_IO14 0x10
>;
};
pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
fsl,pins = <
MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x41
MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x40
>;
};
};


@ -116,48 +116,48 @@
&iomuxc {
pinctrl_eqos: eqosgrp {
fsl,pins = <
MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3
MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3
MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91
MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91
MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91
MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91
MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91
MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91
MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f
MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f
MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f
MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f
MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f
MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f
MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2
MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2
MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90
MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90
MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90
MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90
MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90
MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90
MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16
MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16
MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16
MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16
MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16
MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16
MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20 0x10
>;
};
pinctrl_i2c2: i2c2grp {
fsl,pins = <
MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c3
MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c3
MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c2
MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c2
>;
};
pinctrl_i2c2_gpio: i2c2gpiogrp {
fsl,pins = <
MX8MP_IOMUXC_I2C2_SCL__GPIO5_IO16 0x1e3
MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17 0x1e3
MX8MP_IOMUXC_I2C2_SCL__GPIO5_IO16 0x1e2
MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17 0x1e2
>;
};
pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
fsl,pins = <
MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x41
MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x40
>;
};
pinctrl_uart1: uart1grp {
fsl,pins = <
MX8MP_IOMUXC_UART1_RXD__UART1_DCE_RX 0x49
MX8MP_IOMUXC_UART1_TXD__UART1_DCE_TX 0x49
MX8MP_IOMUXC_UART1_RXD__UART1_DCE_RX 0x40
MX8MP_IOMUXC_UART1_TXD__UART1_DCE_TX 0x40
>;
};
@ -175,7 +175,7 @@
MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d0
MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d0
MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d0
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
>;
};
@ -187,7 +187,7 @@
MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d4
MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d4
MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
>;
};
@ -199,7 +199,7 @@
MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d6
MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d6
MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d6
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
>;
};
};


@ -622,15 +622,15 @@
pinctrl_hog: hoggrp {
fsl,pins = <
MX8MP_IOMUXC_GPIO1_IO09__GPIO1_IO09 0x40000041 /* DIO0 */
MX8MP_IOMUXC_GPIO1_IO11__GPIO1_IO11 0x40000041 /* DIO1 */
MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14 0x40000041 /* M2SKT_OFF# */
MX8MP_IOMUXC_SD2_DATA2__GPIO2_IO17 0x40000159 /* PCIE1_WDIS# */
MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18 0x40000159 /* PCIE2_WDIS# */
MX8MP_IOMUXC_SD2_CMD__GPIO2_IO14 0x40000159 /* PCIE3_WDIS# */
MX8MP_IOMUXC_NAND_DATA00__GPIO3_IO06 0x40000041 /* M2SKT_RST# */
MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18 0x40000159 /* M2SKT_WDIS# */
MX8MP_IOMUXC_NAND_ALE__GPIO3_IO00 0x40000159 /* M2SKT_GDIS# */
MX8MP_IOMUXC_GPIO1_IO09__GPIO1_IO09 0x40000040 /* DIO0 */
MX8MP_IOMUXC_GPIO1_IO11__GPIO1_IO11 0x40000040 /* DIO1 */
MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14 0x40000040 /* M2SKT_OFF# */
MX8MP_IOMUXC_SD2_DATA2__GPIO2_IO17 0x40000150 /* PCIE1_WDIS# */
MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18 0x40000150 /* PCIE2_WDIS# */
MX8MP_IOMUXC_SD2_CMD__GPIO2_IO14 0x40000150 /* PCIE3_WDIS# */
MX8MP_IOMUXC_NAND_DATA00__GPIO3_IO06 0x40000040 /* M2SKT_RST# */
MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18 0x40000150 /* M2SKT_WDIS# */
MX8MP_IOMUXC_NAND_ALE__GPIO3_IO00 0x40000150 /* M2SKT_GDIS# */
MX8MP_IOMUXC_SAI3_TXD__GPIO5_IO01 0x40000104 /* UART_TERM */
MX8MP_IOMUXC_SAI3_TXFS__GPIO4_IO31 0x40000104 /* UART_RS485 */
MX8MP_IOMUXC_SAI3_TXC__GPIO5_IO00 0x40000104 /* UART_HALF */
@ -639,47 +639,47 @@
pinctrl_accel: accelgrp {
fsl,pins = <
MX8MP_IOMUXC_GPIO1_IO07__GPIO1_IO07 0x159
MX8MP_IOMUXC_GPIO1_IO07__GPIO1_IO07 0x150
>;
};
pinctrl_eqos: eqosgrp {
fsl,pins = <
MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3
MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3
MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91
MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91
MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91
MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91
MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91
MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91
MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f
MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f
MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f
MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f
MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f
MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f
MX8MP_IOMUXC_SAI3_RXD__GPIO4_IO30 0x141 /* RST# */
MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28 0x159 /* IRQ# */
MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2
MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2
MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90
MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90
MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90
MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90
MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90
MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90
MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16
MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16
MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16
MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16
MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16
MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16
MX8MP_IOMUXC_SAI3_RXD__GPIO4_IO30 0x140 /* RST# */
MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28 0x150 /* IRQ# */
>;
};
pinctrl_fec: fecgrp {
fsl,pins = <
MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0 0x91
MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1 0x91
MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2 0x91
MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3 0x91
MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC 0x91
MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL 0x91
MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0 0x1f
MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1 0x1f
MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2 0x1f
MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3 0x1f
MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL 0x1f
MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC 0x1f
MX8MP_IOMUXC_SAI1_RXFS__ENET1_1588_EVENT0_IN 0x141
MX8MP_IOMUXC_SAI1_RXC__ENET1_1588_EVENT0_OUT 0x141
MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0 0x90
MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1 0x90
MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2 0x90
MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3 0x90
MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC 0x90
MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL 0x90
MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0 0x16
MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1 0x16
MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2 0x16
MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3 0x16
MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL 0x16
MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC 0x16
MX8MP_IOMUXC_SAI1_RXFS__ENET1_1588_EVENT0_IN 0x140
MX8MP_IOMUXC_SAI1_RXC__ENET1_1588_EVENT0_OUT 0x140
>;
};
@ -692,61 +692,61 @@
pinctrl_gsc: gscgrp {
fsl,pins = <
MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20 0x159
MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20 0x150
>;
};
pinctrl_i2c1: i2c1grp {
fsl,pins = <
MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL 0x400001c3
MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA 0x400001c3
MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL 0x400001c2
MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA 0x400001c2
>;
};
pinctrl_i2c2: i2c2grp {
fsl,pins = <
MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c3
MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c3
MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c2
MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c2
>;
};
pinctrl_i2c3: i2c3grp {
fsl,pins = <
MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c3
MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c3
MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c2
MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c2
>;
};
pinctrl_i2c4: i2c4grp {
fsl,pins = <
MX8MP_IOMUXC_I2C4_SCL__I2C4_SCL 0x400001c3
MX8MP_IOMUXC_I2C4_SDA__I2C4_SDA 0x400001c3
MX8MP_IOMUXC_I2C4_SCL__I2C4_SCL 0x400001c2
MX8MP_IOMUXC_I2C4_SDA__I2C4_SDA 0x400001c2
>;
};
pinctrl_ksz: kszgrp {
fsl,pins = <
MX8MP_IOMUXC_SAI3_RXC__GPIO4_IO29 0x159 /* IRQ# */
MX8MP_IOMUXC_SAI3_MCLK__GPIO5_IO02 0x141 /* RST# */
MX8MP_IOMUXC_SAI3_RXC__GPIO4_IO29 0x150 /* IRQ# */
MX8MP_IOMUXC_SAI3_MCLK__GPIO5_IO02 0x140 /* RST# */
>;
};
pinctrl_gpio_leds: ledgrp {
fsl,pins = <
MX8MP_IOMUXC_SD2_DATA0__GPIO2_IO15 0x19
MX8MP_IOMUXC_SD2_DATA1__GPIO2_IO16 0x19
MX8MP_IOMUXC_SD2_DATA0__GPIO2_IO15 0x10
MX8MP_IOMUXC_SD2_DATA1__GPIO2_IO16 0x10
>;
};
pinctrl_pmic: pmicgrp {
fsl,pins = <
MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x141
MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x140
>;
};
pinctrl_pps: ppsgrp {
fsl,pins = <
MX8MP_IOMUXC_GPIO1_IO12__GPIO1_IO12 0x141
MX8MP_IOMUXC_GPIO1_IO12__GPIO1_IO12 0x140
>;
};
@ -758,13 +758,13 @@
pinctrl_reg_usb2: regusb2grp {
fsl,pins = <
MX8MP_IOMUXC_GPIO1_IO06__GPIO1_IO06 0x141
MX8MP_IOMUXC_GPIO1_IO06__GPIO1_IO06 0x140
>;
};
pinctrl_reg_wifi: regwifigrp {
fsl,pins = <
MX8MP_IOMUXC_NAND_DATA03__GPIO3_IO09 0x119
MX8MP_IOMUXC_NAND_DATA03__GPIO3_IO09 0x110
>;
};
@ -811,7 +811,7 @@
pinctrl_uart3_gpio: uart3gpiogrp {
fsl,pins = <
MX8MP_IOMUXC_NAND_DATA02__GPIO3_IO08 0x119
MX8MP_IOMUXC_NAND_DATA02__GPIO3_IO08 0x110
>;
};


@ -595,7 +595,7 @@
pgc_ispdwp: power-domain@18 {
#power-domain-cells = <0>;
reg = <IMX8MP_POWER_DOMAIN_MEDIAMIX_ISPDWP>;
clocks = <&clk IMX8MP_CLK_MEDIA_ISP_DIV>;
clocks = <&clk IMX8MP_CLK_MEDIA_ISP_ROOT>;
};
};
};


@ -74,7 +74,7 @@
vdd_l17_29-supply = <&vph_pwr>;
vdd_l20_21-supply = <&vph_pwr>;
vdd_l25-supply = <&pm8994_s5>;
vdd_lvs1_2 = <&pm8994_s4>;
vdd_lvs1_2-supply = <&pm8994_s4>;
/* S1, S2, S6 and S12 are managed by RPMPD */


@ -171,7 +171,7 @@
vdd_l17_29-supply = <&vph_pwr>;
vdd_l20_21-supply = <&vph_pwr>;
vdd_l25-supply = <&pm8994_s5>;
vdd_lvs1_2 = <&pm8994_s4>;
vdd_lvs1_2-supply = <&pm8994_s4>;
/* S1, S2, S6 and S12 are managed by RPMPD */


@ -100,7 +100,7 @@
CPU6: cpu@102 {
device_type = "cpu";
compatible = "arm,cortex-a57";
reg = <0x0 0x101>;
reg = <0x0 0x102>;
enable-method = "psci";
next-level-cache = <&L2_1>;
};
@ -108,7 +108,7 @@
CPU7: cpu@103 {
device_type = "cpu";
compatible = "arm,cortex-a57";
reg = <0x0 0x101>;
reg = <0x0 0x103>;
enable-method = "psci";
next-level-cache = <&L2_1>;
};


@ -5,7 +5,7 @@
* Copyright 2021 Google LLC.
*/
#include "sc7180-trogdor.dtsi"
/* This file must be included after sc7180-trogdor.dtsi */
/ {
/* BOARD-SPECIFIC TOP LEVEL NODES */


@ -5,7 +5,7 @@
* Copyright 2020 Google LLC.
*/
#include "sc7180-trogdor.dtsi"
/* This file must be included after sc7180-trogdor.dtsi */
&ap_sar_sensor {
semtech,cs0-ground;


@ -4244,7 +4244,7 @@
power-domains = <&dispcc MDSS_GDSC>;
clocks = <&gcc GCC_DISP_AHB_CLK>,
clocks = <&dispcc DISP_CC_MDSS_AHB_CLK>,
<&dispcc DISP_CC_MDSS_MDP_CLK>;
clock-names = "iface", "core";


@ -2853,6 +2853,16 @@
reg = <0x0 0x17100000 0x0 0x10000>, /* GICD */
<0x0 0x17180000 0x0 0x200000>; /* GICR * 8 */
interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <2>;
#size-cells = <2>;
ranges;
gic_its: msi-controller@17140000 {
compatible = "arm,gic-v3-its";
reg = <0x0 0x17140000 0x0 0x20000>;
msi-controller;
#msi-cells = <1>;
};
};
timer@17420000 {
@ -3037,8 +3047,8 @@
iommus = <&apps_smmu 0xe0 0x0>;
interconnects = <&aggre1_noc MASTER_UFS_MEM &mc_virt SLAVE_EBI1>,
<&gem_noc MASTER_APPSS_PROC &config_noc SLAVE_UFS_MEM_CFG>;
interconnects = <&aggre1_noc MASTER_UFS_MEM 0 &mc_virt SLAVE_EBI1 0>,
<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_UFS_MEM_CFG 0>;
interconnect-names = "ufs-ddr", "cpu-ufs";
clock-names =
"core_clk",


@ -214,6 +214,19 @@ static pte_t get_clear_contig(struct mm_struct *mm,
return orig_pte;
}
static pte_t get_clear_contig_flush(struct mm_struct *mm,
unsigned long addr,
pte_t *ptep,
unsigned long pgsize,
unsigned long ncontig)
{
pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
return orig_pte;
}
/*
* Changing some bits of contiguous entries requires us to follow a
* Break-Before-Make approach, breaking the whole contiguous set
@ -447,19 +460,20 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
int ncontig, i;
size_t pgsize = 0;
unsigned long pfn = pte_pfn(pte), dpfn;
struct mm_struct *mm = vma->vm_mm;
pgprot_t hugeprot;
pte_t orig_pte;
if (!pte_cont(pte))
return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
ncontig = find_num_contig(mm, addr, ptep, &pgsize);
dpfn = pgsize >> PAGE_SHIFT;
if (!__cont_access_flags_changed(ptep, pte, ncontig))
return 0;
orig_pte = get_clear_contig(vma->vm_mm, addr, ptep, pgsize, ncontig);
orig_pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
/* Make sure we don't lose the dirty or young state */
if (pte_dirty(orig_pte))
@ -470,7 +484,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
hugeprot = pte_pgprot(pte);
for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
return 1;
}
@ -492,7 +506,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
ncontig = find_num_contig(mm, addr, ptep, &pgsize);
dpfn = pgsize >> PAGE_SHIFT;
pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
pte = pte_wrprotect(pte);
hugeprot = pte_pgprot(pte);
@ -505,17 +519,15 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep)
{
struct mm_struct *mm = vma->vm_mm;
size_t pgsize;
int ncontig;
pte_t orig_pte;
if (!pte_cont(READ_ONCE(*ptep)))
return ptep_clear_flush(vma, addr, ptep);
ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
orig_pte = get_clear_contig(vma->vm_mm, addr, ptep, pgsize, ncontig);
flush_tlb_range(vma, addr, addr + pgsize * ncontig);
return orig_pte;
ncontig = find_num_contig(mm, addr, ptep, &pgsize);
return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
}
static int __init hugetlbpage_init(void)


@ -54,7 +54,6 @@ config LOONGARCH
select GENERIC_CMOS_UPDATE
select GENERIC_CPU_AUTOPROBE
select GENERIC_ENTRY
select GENERIC_FIND_FIRST_BIT
select GENERIC_GETTIMEOFDAY
select GENERIC_IRQ_MULTI_HANDLER
select GENERIC_IRQ_PROBE
@ -77,7 +76,6 @@ config LOONGARCH
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
select HAVE_ASM_MODVERSIONS
select HAVE_CONTEXT_TRACKING
select HAVE_COPY_THREAD_TLS
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DMA_CONTIGUOUS
select HAVE_EXIT_THREAD
@ -86,8 +84,6 @@ config LOONGARCH
select HAVE_IOREMAP_PROT
select HAVE_IRQ_EXIT_ON_IRQ_STACK
select HAVE_IRQ_TIME_ACCOUNTING
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
select HAVE_PERF_EVENTS


@ -48,6 +48,5 @@
#define fcsr1 $r1
#define fcsr2 $r2
#define fcsr3 $r3
#define vcsr16 $r16
#endif /* _ASM_FPREGDEF_H */


@ -6,6 +6,7 @@
#define _ASM_PAGE_H
#include <linux/const.h>
#include <asm/addrspace.h>
/*
* PAGE_SHIFT determines the page size


@ -80,7 +80,6 @@ BUILD_FPR_ACCESS(64)
struct loongarch_fpu {
unsigned int fcsr;
unsigned int vcsr;
uint64_t fcc; /* 8x8 */
union fpureg fpr[NUM_FPU_REGS];
};
@ -161,7 +160,6 @@ struct thread_struct {
*/ \
.fpu = { \
.fcsr = 0, \
.vcsr = 0, \
.fcc = 0, \
.fpr = {{{0,},},}, \
}, \


@ -166,7 +166,6 @@ void output_thread_fpu_defines(void)
OFFSET(THREAD_FCSR, loongarch_fpu, fcsr);
OFFSET(THREAD_FCC, loongarch_fpu, fcc);
OFFSET(THREAD_VCSR, loongarch_fpu, vcsr);
BLANK();
}


@ -146,16 +146,6 @@
movgr2fcsr fcsr0, \tmp0
.endm
.macro sc_save_vcsr base, tmp0
movfcsr2gr \tmp0, vcsr16
EX st.w \tmp0, \base, 0
.endm
.macro sc_restore_vcsr base, tmp0
EX ld.w \tmp0, \base, 0
movgr2fcsr vcsr16, \tmp0
.endm
/*
* Save a thread's fp context.
*/


@ -429,7 +429,6 @@ int __init init_numa_memory(void)
return 0;
}
EXPORT_SYMBOL(init_numa_memory);
#endif
void __init paging_init(void)


@ -21,6 +21,7 @@ ccflags-vdso += $(filter --target=%,$(KBUILD_CFLAGS))
endif
cflags-vdso := $(ccflags-vdso) \
-isystem $(shell $(CC) -print-file-name=include) \
$(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \
-O2 -g -fno-strict-aliasing -fno-common -fno-builtin -G0 \
-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \


@ -25,7 +25,7 @@ struct or1k_frameinfo {
/*
* Verify a frameinfo structure. The return address should be a valid text
* address. The frame pointer may be null if its the last frame, otherwise
* the frame pointer should point to a location in the stack after the the
* the frame pointer should point to a location in the stack after the
* top of the next frame up.
*/
static inline int or1k_frameinfo_valid(struct or1k_frameinfo *frameinfo)


@ -224,8 +224,13 @@ int main(void)
BLANK();
DEFINE(ASM_SIGFRAME_SIZE, PARISC_RT_SIGFRAME_SIZE);
DEFINE(SIGFRAME_CONTEXT_REGS, offsetof(struct rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE);
#ifdef CONFIG_64BIT
DEFINE(ASM_SIGFRAME_SIZE32, PARISC_RT_SIGFRAME_SIZE32);
DEFINE(SIGFRAME_CONTEXT_REGS32, offsetof(struct compat_rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE32);
#else
DEFINE(ASM_SIGFRAME_SIZE32, PARISC_RT_SIGFRAME_SIZE);
DEFINE(SIGFRAME_CONTEXT_REGS32, offsetof(struct rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE);
#endif
BLANK();
DEFINE(ICACHE_BASE, offsetof(struct pdc_cache_info, ic_base));
DEFINE(ICACHE_STRIDE, offsetof(struct pdc_cache_info, ic_stride));


@ -146,7 +146,7 @@ static int emulate_ldw(struct pt_regs *regs, int toreg, int flop)
" depw %%r0,31,2,%4\n"
"1: ldw 0(%%sr1,%4),%0\n"
"2: ldw 4(%%sr1,%4),%3\n"
" subi 32,%4,%2\n"
" subi 32,%2,%2\n"
" mtctl %2,11\n"
" vshd %0,%3,%0\n"
"3: \n"


@ -358,6 +358,10 @@ config ARCH_SUSPEND_NONZERO_CPU
def_bool y
depends on PPC_POWERNV || PPC_PSERIES
config ARCH_HAS_ADD_PAGES
def_bool y
depends on ARCH_ENABLE_MEMORY_HOTPLUG
config PPC_DCR_NATIVE
bool


@ -0,0 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_BPF_PERF_EVENT_H
#define _ASM_POWERPC_BPF_PERF_EVENT_H
#include <asm/ptrace.h>
typedef struct user_pt_regs bpf_user_pt_regs_t;
#endif /* _ASM_POWERPC_BPF_PERF_EVENT_H */


@ -1,9 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef _UAPI__ASM_BPF_PERF_EVENT_H__
#define _UAPI__ASM_BPF_PERF_EVENT_H__
#include <asm/ptrace.h>
typedef struct user_pt_regs bpf_user_pt_regs_t;
#endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */


@ -13,7 +13,7 @@
# If you really need to reference something from prom_init.o add
# it to the list below:
grep "^CONFIG_KASAN=y$" .config >/dev/null
grep "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} >/dev/null
if [ $? -eq 0 ]
then
MEM_FUNCS="__memcpy __memset"


@ -105,6 +105,37 @@ void __ref arch_remove_linear_mapping(u64 start, u64 size)
vm_unmap_aliases();
}
/*
* After memory hotplug the variables max_pfn, max_low_pfn and high_memory need
* updating.
*/
static void update_end_of_memory_vars(u64 start, u64 size)
{
unsigned long end_pfn = PFN_UP(start + size);
if (end_pfn > max_pfn) {
max_pfn = end_pfn;
max_low_pfn = end_pfn;
high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1;
}
}
int __ref add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
struct mhp_params *params)
{
int ret;
ret = __add_pages(nid, start_pfn, nr_pages, params);
if (ret)
return ret;
/* update max_pfn, max_low_pfn and high_memory */
update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
nr_pages << PAGE_SHIFT);
return ret;
}
int __ref arch_add_memory(int nid, u64 start, u64 size,
struct mhp_params *params)
{
@ -115,7 +146,7 @@ int __ref arch_add_memory(int nid, u64 start, u64 size,
rc = arch_create_linear_mapping(nid, start, size, params);
if (rc)
return rc;
rc = __add_pages(nid, start_pfn, nr_pages, params);
rc = add_pages(nid, start_pfn, nr_pages, params);
if (rc)
arch_remove_linear_mapping(start, size);
return rc;


@ -96,8 +96,8 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
pgdp = pgd_offset_k(ea);
p4dp = p4d_offset(pgdp, ea);
if (p4d_none(*p4dp)) {
pmdp = early_alloc_pgtable(PMD_TABLE_SIZE);
p4d_populate(&init_mm, p4dp, pmdp);
pudp = early_alloc_pgtable(PUD_TABLE_SIZE);
p4d_populate(&init_mm, p4dp, pudp);
}
pudp = pud_offset(p4dp, ea);
if (pud_none(*pudp)) {
@ -106,7 +106,7 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
}
pmdp = pmd_offset(pudp, ea);
if (!pmd_present(*pmdp)) {
ptep = early_alloc_pgtable(PAGE_SIZE);
ptep = early_alloc_pgtable(PTE_TABLE_SIZE);
pmd_populate_kernel(&init_mm, pmdp, ptep);
}
ptep = pte_offset_kernel(pmdp, ea);


@ -176,12 +176,8 @@ static int __init pnv_get_random_long_early(unsigned long *v)
NULL) != pnv_get_random_long_early)
return 0;
for_each_compatible_node(dn, NULL, "ibm,power-rng") {
if (rng_create(dn))
continue;
/* Create devices for hwrng driver */
of_platform_device_create(dn, NULL, NULL);
}
for_each_compatible_node(dn, NULL, "ibm,power-rng")
rng_create(dn);
if (!ppc_md.get_random_seed)
return 0;
@ -205,10 +201,18 @@ void __init pnv_rng_init(void)
static int __init pnv_rng_late_init(void)
{
struct device_node *dn;
unsigned long v;
/* In case it wasn't called during init for some other reason. */
if (ppc_md.get_random_seed == pnv_get_random_long_early)
pnv_get_random_long_early(&v);
if (ppc_md.get_random_seed == powernv_get_random_long) {
for_each_compatible_node(dn, NULL, "ibm,power-rng")
of_platform_device_create(dn, NULL, NULL);
}
return 0;
}
machine_subsys_initcall(powernv, pnv_rng_late_init);


@ -15,6 +15,7 @@
#include <linux/of_fdt.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/bitmap.h>
#include <linux/cpumask.h>
#include <linux/mm.h>
#include <linux/delay.h>
@ -57,7 +58,7 @@ static int __init xive_irq_bitmap_add(int base, int count)
spin_lock_init(&xibm->lock);
xibm->base = base;
xibm->count = count;
xibm->bitmap = kzalloc(xibm->count, GFP_KERNEL);
xibm->bitmap = bitmap_zalloc(xibm->count, GFP_KERNEL);
if (!xibm->bitmap) {
kfree(xibm);
return -ENOMEM;
@ -75,7 +76,7 @@ static void xive_irq_bitmap_remove_all(void)
list_for_each_entry_safe(xibm, tmp, &xive_irq_bitmaps, list) {
list_del(&xibm->list);
kfree(xibm->bitmap);
bitmap_free(xibm->bitmap);
kfree(xibm);
}
}


@ -484,7 +484,6 @@ config KEXEC
config KEXEC_FILE
bool "kexec file based system call"
select KEXEC_CORE
select BUILD_BIN2C
depends on CRYPTO
depends on CRYPTO_SHA256
depends on CRYPTO_SHA256_S390


@ -4,232 +4,15 @@
*
* Copyright IBM Corp. 2017, 2020
* Author(s): Harald Freudenberger
*
* The s390_arch_random_generate() function may be called from random.c
* in interrupt context. So this implementation does the best to be very
* fast. There is a buffer of random data which is asynchronously checked
* and filled by a workqueue thread.
* If there are enough bytes in the buffer the s390_arch_random_generate()
* just delivers these bytes. Otherwise false is returned until the
* worker thread refills the buffer.
* The worker fills the rng buffer by pulling fresh entropy from the
* high quality (but slow) true hardware random generator. This entropy
* is then spread over the buffer with an pseudo random generator PRNG.
* As the arch_get_random_seed_long() fetches 8 bytes and the calling
* function add_interrupt_randomness() counts this as 1 bit entropy the
* distribution needs to make sure there is in fact 1 bit entropy contained
* in 8 bytes of the buffer. The current values pull 32 byte entropy
* and scatter this into a 2048 byte buffer. So 8 byte in the buffer
* will contain 1 bit of entropy.
* The worker thread is rescheduled based on the charge level of the
* buffer but at least with 500 ms delay to avoid too much CPU consumption.
* So the max. amount of rng data delivered via arch_get_random_seed is
* limited to 4k bytes per second.
*/
#include <linux/kernel.h>
#include <linux/atomic.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/static_key.h>
#include <linux/workqueue.h>
#include <linux/moduleparam.h>
#include <asm/cpacf.h>
DEFINE_STATIC_KEY_FALSE(s390_arch_random_available);
atomic64_t s390_arch_random_counter = ATOMIC64_INIT(0);
EXPORT_SYMBOL(s390_arch_random_counter);
#define ARCH_REFILL_TICKS (HZ/2)
#define ARCH_PRNG_SEED_SIZE 32
#define ARCH_RNG_BUF_SIZE 2048
static DEFINE_SPINLOCK(arch_rng_lock);
static u8 *arch_rng_buf;
static unsigned int arch_rng_buf_idx;
static void arch_rng_refill_buffer(struct work_struct *);
static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);
bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
{
/* max hunk is ARCH_RNG_BUF_SIZE */
if (nbytes > ARCH_RNG_BUF_SIZE)
return false;
/* lock rng buffer */
if (!spin_trylock(&arch_rng_lock))
return false;
/* try to resolve the requested amount of bytes from the buffer */
arch_rng_buf_idx -= nbytes;
if (arch_rng_buf_idx < ARCH_RNG_BUF_SIZE) {
memcpy(buf, arch_rng_buf + arch_rng_buf_idx, nbytes);
atomic64_add(nbytes, &s390_arch_random_counter);
spin_unlock(&arch_rng_lock);
return true;
}
/* not enough bytes in rng buffer, refill is done asynchronously */
spin_unlock(&arch_rng_lock);
return false;
}
EXPORT_SYMBOL(s390_arch_random_generate);
static void arch_rng_refill_buffer(struct work_struct *unused)
{
unsigned int delay = ARCH_REFILL_TICKS;
spin_lock(&arch_rng_lock);
if (arch_rng_buf_idx > ARCH_RNG_BUF_SIZE) {
/* buffer is exhausted and needs refill */
u8 seed[ARCH_PRNG_SEED_SIZE];
u8 prng_wa[240];
/* fetch ARCH_PRNG_SEED_SIZE bytes of entropy */
cpacf_trng(NULL, 0, seed, sizeof(seed));
/* blow this entropy up to ARCH_RNG_BUF_SIZE with PRNG */
memset(prng_wa, 0, sizeof(prng_wa));
cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
&prng_wa, NULL, 0, seed, sizeof(seed));
cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN,
&prng_wa, arch_rng_buf, ARCH_RNG_BUF_SIZE, NULL, 0);
arch_rng_buf_idx = ARCH_RNG_BUF_SIZE;
}
delay += (ARCH_REFILL_TICKS * arch_rng_buf_idx) / ARCH_RNG_BUF_SIZE;
spin_unlock(&arch_rng_lock);
/* kick next check */
queue_delayed_work(system_long_wq, &arch_rng_work, delay);
}
/*
* Here follows the implementation of s390_arch_get_random_long().
*
* The random longs to be pulled by arch_get_random_long() are
* prepared in a 4K buffer which is filled from the NIST 800-90
* compliant s390 drbg. By default the random long buffer is refilled
* 256 times before the drbg itself needs a reseed. The reseed of the
* drbg is done with 32 bytes fetched from the high quality (but slow)
* trng which is assumed to deliver 100% entropy. So the 32 * 8 = 256
* bits of entropy are spread over 256 * 4KB = 1MB serving 131072
* arch_get_random_long() invocations before a reseed is needed.
*
* How often the 4K random long buffer is refilled with the drbg
* before the drbg is reseeded can be adjusted. There is a module
* parameter 's390_arch_rnd_long_drbg_reseed' accessible via
* /sys/module/arch_random/parameters/rndlong_drbg_reseed
* or as kernel command line parameter
* arch_random.rndlong_drbg_reseed=<value>
* This parameter tells how often the drbg fills the 4K buffer before
* it is re-seeded by fresh entropy from the trng.
* A value of 16 results in reseeding the drbg at every 16 * 4 KB = 64
* KB with 32 bytes of fresh entropy pulled from the trng. So a value
* of 16 would result in 256 bits entropy per 64 KB.
* A value of 256 results in 1MB of drbg output before a reseed of the
* drbg is done. So this would spread the 256 bits of entropy among 1MB.
* Setting this parameter to 0 forces the reseed to take place every
* time the 4K buffer is depleted, so the entropy rises to 256 bits
* entropy per 4K or 0.5 bit of entropy per arch_get_random_long(). Setting
* this parameter to a negative value disables all of this effort:
* arch_get_random_long() then returns false, indicating that the
* arch_get_random_long() feature is disabled entirely.
*/
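The entropy-density examples quoted for rndlong_drbg_reseed above follow one pattern: a value of N means N refills of the 4K long buffer (512 longs) per 32 byte (256 bit) TRNG reseed, with 0 treated as a reseed on every refill. The stand-alone C sketch below (again an editorial illustration, not kernel code) reproduces the quoted numbers.

#include <stdio.h>

int main(void)
{
        const int reseed_values[] = { 0, 16, 256 };
        const double seed_bits = 256.0;        /* 32 byte trng seed per reseed */
        const double longs_per_refill = 512.0; /* 4K buffer / 8 byte longs */
        unsigned int i;

        for (i = 0; i < sizeof(reseed_values) / sizeof(reseed_values[0]); i++) {
                int n = reseed_values[i];
                double refills = n ? n : 1;    /* 0 -> reseed on every refill */

                printf("rndlong_drbg_reseed=%3d -> %4.0f KB per reseed, %.5f bit entropy per long\n",
                       n, refills * 4.0, seed_bits / (refills * longs_per_refill));
        }
        return 0;
}

This prints 4 KB at 0.5 bit per long, 64 KB at 0.03125 bit per long and 1024 KB at roughly 0.00195 bit per long, consistent with the 0, 16 and 256 cases described in the comment.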
static unsigned long rndlong_buf[512];
static DEFINE_SPINLOCK(rndlong_lock);
static int rndlong_buf_index;
static int rndlong_drbg_reseed = 256;
module_param_named(rndlong_drbg_reseed, rndlong_drbg_reseed, int, 0600);
MODULE_PARM_DESC(rndlong_drbg_reseed, "s390 arch_get_random_long() drbg reseed");
static inline void refill_rndlong_buf(void)
{
static u8 prng_ws[240];
static int drbg_counter;
if (--drbg_counter < 0) {
/* need to re-seed the drbg */
u8 seed[32];
/* fetch seed from trng */
cpacf_trng(NULL, 0, seed, sizeof(seed));
/* seed drbg */
memset(prng_ws, 0, sizeof(prng_ws));
cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
&prng_ws, NULL, 0, seed, sizeof(seed));
/* re-init counter for drbg */
drbg_counter = rndlong_drbg_reseed;
}
/* fill the arch_get_random_long buffer from drbg */
cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN, &prng_ws,
(u8 *) rndlong_buf, sizeof(rndlong_buf),
NULL, 0);
}
bool s390_arch_get_random_long(unsigned long *v)
{
bool rc = false;
unsigned long flags;
/* arch_get_random_long() disabled ? */
if (rndlong_drbg_reseed < 0)
return false;
/* try to lock the random long lock */
if (!spin_trylock_irqsave(&rndlong_lock, flags))
return false;
if (--rndlong_buf_index >= 0) {
/* deliver next long value from the buffer */
*v = rndlong_buf[rndlong_buf_index];
rc = true;
goto out;
}
/* buffer is depleted and needs refill */
if (in_interrupt()) {
/* delay refill in interrupt context to next caller */
rndlong_buf_index = 0;
goto out;
}
/* refill random long buffer */
refill_rndlong_buf();
rndlong_buf_index = ARRAY_SIZE(rndlong_buf);
/* and provide one random long */
*v = rndlong_buf[--rndlong_buf_index];
rc = true;
out:
spin_unlock_irqrestore(&rndlong_lock, flags);
return rc;
}
EXPORT_SYMBOL(s390_arch_get_random_long);
static int __init s390_arch_random_init(void)
{
/* all the needed PRNO subfunctions available ? */
if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG) &&
cpacf_query_func(CPACF_PRNO, CPACF_PRNO_SHA512_DRNG_GEN)) {
/* alloc arch random working buffer */
arch_rng_buf = kmalloc(ARCH_RNG_BUF_SIZE, GFP_KERNEL);
if (!arch_rng_buf)
return -ENOMEM;
/* kick worker queue job to fill the random buffer */
queue_delayed_work(system_long_wq,
&arch_rng_work, ARCH_REFILL_TICKS);
/* enable arch random to the outside world */
static_branch_enable(&s390_arch_random_available);
}
return 0;
}
arch_initcall(s390_arch_random_init);

View File

@ -15,17 +15,13 @@
#include <linux/static_key.h>
#include <linux/atomic.h>
#include <asm/cpacf.h>
DECLARE_STATIC_KEY_FALSE(s390_arch_random_available);
extern atomic64_t s390_arch_random_counter;
bool s390_arch_get_random_long(unsigned long *v);
bool s390_arch_random_generate(u8 *buf, unsigned int nbytes);
static inline bool __must_check arch_get_random_long(unsigned long *v)
{
if (static_branch_likely(&s390_arch_random_available))
return s390_arch_get_random_long(v);
return false;
}
@ -37,7 +33,9 @@ static inline bool __must_check arch_get_random_int(unsigned int *v)
static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
{
if (static_branch_likely(&s390_arch_random_available)) {
return s390_arch_random_generate((u8 *)v, sizeof(*v));
cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
atomic64_add(sizeof(*v), &s390_arch_random_counter);
return true;
}
return false;
}
@ -45,7 +43,9 @@ static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
{
if (static_branch_likely(&s390_arch_random_available)) {
return s390_arch_random_generate((u8 *)v, sizeof(*v));
cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
atomic64_add(sizeof(*v), &s390_arch_random_counter);
return true;
}
return false;
}

View File

@ -133,9 +133,9 @@ struct slibe {
* @sb_count: number of storage blocks
* @sba: storage block element addresses
* @dcount: size of storage block elements
* @user0: user defineable value
* @res4: reserved paramater
* @user1: user defineable value
* @user0: user definable value
* @res4: reserved parameter
* @user1: user definable value
*/
struct qaob {
u64 res0[6];

View File

@ -875,6 +875,11 @@ static void __init setup_randomness(void)
if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
memblock_free(vmms, PAGE_SIZE);
#ifdef CONFIG_ARCH_RANDOM
if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG))
static_branch_enable(&s390_arch_random_available);
#endif
}
/*

View File

@ -48,7 +48,6 @@ OBJCOPYFLAGS_purgatory.ro += --remove-section='.note.*'
$(obj)/purgatory.ro: $(obj)/purgatory $(obj)/purgatory.chk FORCE
$(call if_changed,objcopy)
$(obj)/kexec-purgatory.o: $(obj)/kexec-purgatory.S $(obj)/purgatory.ro FORCE
$(call if_changed_rule,as_o_S)
$(obj)/kexec-purgatory.o: $(obj)/purgatory.ro
obj-$(CONFIG_ARCH_HAS_KEXEC_PURGATORY) += kexec-purgatory.o
obj-y += kexec-purgatory.o

View File

@ -110,6 +110,7 @@ void kernel_add_identity_map(unsigned long start, unsigned long end)
void initialize_identity_maps(void *rmode)
{
unsigned long cmdline;
struct setup_data *sd;
/* Exclude the encryption mask from __PHYSICAL_MASK */
physical_mask &= ~sme_me_mask;
@ -163,6 +164,18 @@ void initialize_identity_maps(void *rmode)
cmdline = get_cmd_line_ptr();
kernel_add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE);
/*
* Also map the setup_data entries passed via boot_params in case they
* need to be accessed by the uncompressed kernel via the identity mapping.
*/
sd = (struct setup_data *)boot_params->hdr.setup_data;
while (sd) {
unsigned long sd_addr = (unsigned long)sd;
kernel_add_identity_map(sd_addr, sd_addr + sizeof(*sd) + sd->len);
sd = (struct setup_data *)sd->next;
}
sev_prep_identity_maps(top_level_pgt);
/* Load the new page-table. */

View File

@ -120,6 +120,9 @@ void *extend_brk(size_t size, size_t align);
static char __brk_##name[size]
extern void probe_roms(void);
void clear_bss(void);
#ifdef __i386__
asmlinkage void __init i386_start_kernel(void);

View File

@ -16,7 +16,7 @@
#define SETUP_INDIRECT (1<<31)
/* SETUP_INDIRECT | max(SETUP_*) */
#define SETUP_TYPE_MAX (SETUP_INDIRECT | SETUP_JAILHOUSE)
#define SETUP_TYPE_MAX (SETUP_INDIRECT | SETUP_CC_BLOB)
/* ram_size flags */
#define RAMDISK_IMAGE_START_MASK 0x07FF

View File

@ -11,6 +11,16 @@
/* Refer to drivers/acpi/cppc_acpi.c for the description of functions */
bool cpc_supported_by_cpu(void)
{
switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_AMD:
case X86_VENDOR_HYGON:
return boot_cpu_has(X86_FEATURE_CPPC);
}
return false;
}
bool cpc_ffh_supported(void)
{
return true;

View File

@ -426,10 +426,12 @@ void __init do_early_exception(struct pt_regs *regs, int trapnr)
/* Don't add a printk in there. printk relies on the PDA which is not initialized
yet. */
static void __init clear_bss(void)
void __init clear_bss(void)
{
memset(__bss_start, 0,
(unsigned long) __bss_stop - (unsigned long) __bss_start);
memset(__brk_base, 0,
(unsigned long) __brk_limit - (unsigned long) __brk_base);
}
static unsigned long get_cmd_line_ptr(void)

View File

@ -385,7 +385,7 @@ SECTIONS
__end_of_kernel_reserve = .;
. = ALIGN(PAGE_SIZE);
.brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) {
.brk : AT(ADDR(.brk) - LOAD_OFFSET) {
__brk_base = .;
. += 64 * 1024; /* 64k alignment slop space */
*(.bss..brk) /* areas brk users have reserved */

View File

@ -1183,15 +1183,19 @@ static void __init xen_domu_set_legacy_features(void)
extern void early_xen_iret_patch(void);
/* First C function to be called on Xen boot */
asmlinkage __visible void __init xen_start_kernel(void)
asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
{
struct physdev_set_iopl set_iopl;
unsigned long initrd_start = 0;
int rc;
if (!xen_start_info)
if (!si)
return;
clear_bss();
xen_start_info = si;
__text_gen_insn(&early_xen_iret_patch,
JMP32_INSN_OPCODE, &early_xen_iret_patch, &xen_iret,
JMP32_INSN_SIZE);

View File

@ -48,15 +48,6 @@ SYM_CODE_START(startup_xen)
ANNOTATE_NOENDBR
cld
/* Clear .bss */
xor %eax,%eax
mov $__bss_start, %rdi
mov $__bss_stop, %rcx
sub %rdi, %rcx
shr $3, %rcx
rep stosq
mov %rsi, xen_start_info
mov initial_stack(%rip), %rsp
/* Set up %gs.
@ -71,6 +62,7 @@ SYM_CODE_START(startup_xen)
cdq
wrmsr
mov %rsi, %rdi
call xen_start_kernel
SYM_CODE_END(startup_xen)
__FINIT

View File

@ -666,6 +666,18 @@ config CRYPTO_CRC32_MIPS
CRC32c and CRC32 CRC algorithms implemented using mips crypto
instructions, when available.
config CRYPTO_CRC32_S390
tristate "CRC-32 algorithms"
depends on S390
select CRYPTO_HASH
select CRC32
help
Select this option if you want to use hardware accelerated
implementations of CRC algorithms. With this option, you
can optimize the computation of CRC-32 (IEEE 802.3 Ethernet)
and CRC-32C (Castagnoli).
It is available with IBM z13 or later.
config CRYPTO_XXHASH
tristate "xxHash hash algorithm"
@ -898,6 +910,16 @@ config CRYPTO_SHA512_SSSE3
Extensions version 1 (AVX1), or Advanced Vector Extensions
version 2 (AVX2) instructions, when available.
config CRYPTO_SHA512_S390
tristate "SHA384 and SHA512 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA512 secure hash standard.
It is available as of z10.
config CRYPTO_SHA1_OCTEON
tristate "SHA1 digest algorithm (OCTEON)"
depends on CPU_CAVIUM_OCTEON
@ -930,6 +952,16 @@ config CRYPTO_SHA1_PPC_SPE
SHA-1 secure hash standard (DFIPS 180-4) implemented
using powerpc SPE SIMD instruction set.
config CRYPTO_SHA1_S390
tristate "SHA1 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
It is available as of z990.
config CRYPTO_SHA256
tristate "SHA224 and SHA256 digest algorithm"
select CRYPTO_HASH
@ -970,6 +1002,16 @@ config CRYPTO_SHA256_SPARC64
SHA-256 secure hash standard (DFIPS 180-2) implemented
using sparc64 crypto instructions, when available.
config CRYPTO_SHA256_S390
tristate "SHA256 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA256 secure hash standard (DFIPS 180-2).
It is available as of z9.
config CRYPTO_SHA512
tristate "SHA384 and SHA512 digest algorithms"
select CRYPTO_HASH
@ -1010,6 +1052,26 @@ config CRYPTO_SHA3
References:
http://keccak.noekeon.org/
config CRYPTO_SHA3_256_S390
tristate "SHA3_224 and SHA3_256 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA3_256 secure hash standard.
It is available as of z14.
config CRYPTO_SHA3_512_S390
tristate "SHA3_384 and SHA3_512 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA3_512 secure hash standard.
It is available as of z14.
config CRYPTO_SM3
tristate
@ -1070,6 +1132,16 @@ config CRYPTO_GHASH_CLMUL_NI_INTEL
This is the x86_64 CLMUL-NI accelerated implementation of
GHASH, the hash function used in GCM (Galois/Counter mode).
config CRYPTO_GHASH_S390
tristate "GHASH hash function"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of GHASH,
the hash function used in GCM (Galois/Counter mode).
It is available as of z196.
comment "Ciphers"
config CRYPTO_AES
@ -1185,6 +1257,23 @@ config CRYPTO_AES_PPC_SPE
architecture specific assembler implementations that work on 1KB
tables or 256 bytes S-boxes.
config CRYPTO_AES_S390
tristate "AES cipher algorithms"
depends on S390
select CRYPTO_ALGAPI
select CRYPTO_SKCIPHER
help
This is the s390 hardware accelerated implementation of the
AES cipher algorithms (FIPS-197).
As of z9 the ECB and CBC modes are hardware accelerated
for 128 bit keys.
As of z10 the ECB and CBC modes are hardware accelerated
for all AES key sizes.
As of z196 the CTR mode is hardware accelerated for all AES
key sizes and XTS mode is hardware accelerated for 256 and
512 bit keys.
config CRYPTO_ANUBIS
tristate "Anubis cipher algorithm"
depends on CRYPTO_USER_API_ENABLE_OBSOLETE
@ -1415,6 +1504,19 @@ config CRYPTO_DES3_EDE_X86_64
algorithm are provided; regular processing one input block and
one that processes three blocks parallel.
config CRYPTO_DES_S390
tristate "DES and Triple DES cipher algorithms"
depends on S390
select CRYPTO_ALGAPI
select CRYPTO_SKCIPHER
select CRYPTO_LIB_DES
help
This is the s390 hardware accelerated implementation of the
DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
As of z990 the ECB and CBC mode are hardware accelerated.
As of z196 the CTR mode is hardware accelerated.
config CRYPTO_FCRYPT
tristate "FCrypt cipher algorithm"
select CRYPTO_ALGAPI
@ -1474,6 +1576,18 @@ config CRYPTO_CHACHA_MIPS
select CRYPTO_SKCIPHER
select CRYPTO_ARCH_HAVE_LIB_CHACHA
config CRYPTO_CHACHA_S390
tristate "ChaCha20 stream cipher"
depends on S390
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CHACHA
help
This is the s390 SIMD implementation of the ChaCha20 stream
cipher (RFC 7539).
It is available as of z13.
config CRYPTO_SEED
tristate "SEED cipher algorithm"
depends on CRYPTO_USER_API_ENABLE_OBSOLETE

View File

@ -73,6 +73,7 @@ module_param(device_id_scheme, bool, 0444);
static int only_lcd = -1;
module_param(only_lcd, int, 0444);
static bool has_backlight;
static int register_count;
static DEFINE_MUTEX(register_count_mutex);
static DEFINE_MUTEX(video_list_lock);
@ -1222,6 +1223,9 @@ acpi_video_bus_get_one_device(struct acpi_device *device,
acpi_video_device_bind(video, data);
acpi_video_device_find_cap(data);
if (data->cap._BCM && data->cap._BCL)
has_backlight = true;
mutex_lock(&video->device_list_lock);
list_add_tail(&data->entry, &video->video_device_list);
mutex_unlock(&video->device_list_lock);
@ -2249,6 +2253,7 @@ void acpi_video_unregister(void)
if (register_count) {
acpi_bus_unregister_driver(&acpi_video_bus);
register_count = 0;
has_backlight = false;
}
mutex_unlock(&register_count_mutex);
}
@ -2270,13 +2275,7 @@ void acpi_video_unregister_backlight(void)
bool acpi_video_handles_brightness_key_presses(void)
{
bool have_video_busses;
mutex_lock(&video_list_lock);
have_video_busses = !list_empty(&video_bus_head);
mutex_unlock(&video_list_lock);
return have_video_busses &&
return has_backlight &&
(report_key_events & REPORT_BRIGHTNESS_KEY_EVENTS);
}
EXPORT_SYMBOL(acpi_video_handles_brightness_key_presses);

View File

@ -298,7 +298,7 @@ EXPORT_SYMBOL_GPL(osc_cpc_flexible_adr_space_confirmed);
bool osc_sb_native_usb4_support_confirmed;
EXPORT_SYMBOL_GPL(osc_sb_native_usb4_support_confirmed);
bool osc_sb_cppc_not_supported;
bool osc_sb_cppc2_support_acked;
static u8 sb_uuid_str[] = "0811B06E-4A27-44F9-8D60-3CBBC22E7B48";
static void acpi_bus_osc_negotiate_platform_control(void)
@ -358,11 +358,6 @@ static void acpi_bus_osc_negotiate_platform_control(void)
return;
}
#ifdef CONFIG_ACPI_CPPC_LIB
osc_sb_cppc_not_supported = !(capbuf_ret[OSC_SUPPORT_DWORD] &
(OSC_SB_CPC_SUPPORT | OSC_SB_CPCV2_SUPPORT));
#endif
/*
* Now run _OSC again with query flag clear and with the caps
* supported by both the OS and the platform.
@ -376,6 +371,10 @@ static void acpi_bus_osc_negotiate_platform_control(void)
capbuf_ret = context.ret.pointer;
if (context.ret.length > OSC_SUPPORT_DWORD) {
#ifdef CONFIG_ACPI_CPPC_LIB
osc_sb_cppc2_support_acked = capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_CPCV2_SUPPORT;
#endif
osc_sb_apei_support_acked =
capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT;
osc_pc_lpi_support_confirmed =

View File

@ -577,6 +577,19 @@ bool __weak cpc_ffh_supported(void)
return false;
}
/**
* cpc_supported_by_cpu() - check if CPPC is supported by CPU
*
* Check if the architectural support for CPPC is present even
* if the _OSC hasn't prescribed it
*
* Return: true for supported, false for not supported
*/
bool __weak cpc_supported_by_cpu(void)
{
return false;
}
/**
* pcc_data_alloc() - Allocate the pcc_data memory for pcc subspace
*
@ -684,8 +697,11 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
acpi_status status;
int ret = -ENODATA;
if (osc_sb_cppc_not_supported)
return -ENODEV;
if (!osc_sb_cppc2_support_acked) {
pr_debug("CPPC v2 _OSC not acked\n");
if (!cpc_supported_by_cpu())
return -ENODEV;
}
/* Parse the ACPI _CPC table for this CPU. */
status = acpi_evaluate_object_typed(handle, "_CPC", NULL, &output,

View File

@ -90,7 +90,7 @@ static void cs5535_set_piomode(struct ata_port *ap, struct ata_device *adev)
static const u16 pio_cmd_timings[5] = {
0xF7F4, 0x53F3, 0x13F1, 0x5131, 0x1131
};
u32 reg, dummy;
u32 reg, __maybe_unused dummy;
struct ata_device *pair = ata_dev_pair(adev);
int mode = adev->pio_mode - XFER_PIO_0;
@ -129,7 +129,7 @@ static void cs5535_set_dmamode(struct ata_port *ap, struct ata_device *adev)
static const u32 mwdma_timings[3] = {
0x7F0FFFF3, 0x7F035352, 0x7F024241
};
u32 reg, dummy;
u32 reg, __maybe_unused dummy;
int mode = adev->dma_mode;
rdmsr(ATAC_CH0D0_DMA + 2 * adev->devno, reg, dummy);

View File

@ -486,7 +486,18 @@ static void device_link_release_fn(struct work_struct *work)
/* Ensure that all references to the link object have been dropped. */
device_link_synchronize_removal();
pm_runtime_release_supplier(link, true);
pm_runtime_release_supplier(link);
/*
* If supplier_preactivated is set, the link has been dropped between
* the pm_runtime_get_suppliers() and pm_runtime_put_suppliers() calls
* in __driver_probe_device(). In that case, drop the supplier's
* PM-runtime usage counter to remove the reference taken by
* pm_runtime_get_suppliers().
*/
if (link->supplier_preactivated)
pm_runtime_put_noidle(link->supplier);
pm_request_idle(link->supplier);
put_device(link->consumer);
put_device(link->supplier);

View File

@ -308,13 +308,10 @@ static int rpm_get_suppliers(struct device *dev)
/**
* pm_runtime_release_supplier - Drop references to device link's supplier.
* @link: Target device link.
* @check_idle: Whether or not to check if the supplier device is idle.
*
* Drop all runtime PM references associated with @link to its supplier device
* and if @check_idle is set, check if that device is idle (and so it can be
* suspended).
* Drop all runtime PM references associated with @link to its supplier device.
*/
void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
void pm_runtime_release_supplier(struct device_link *link)
{
struct device *supplier = link->supplier;
@ -327,9 +324,6 @@ void pm_runtime_release_supplier(struct device_link *link, bool check_idle)
while (refcount_dec_not_one(&link->rpm_active) &&
atomic_read(&supplier->power.usage_count) > 0)
pm_runtime_put_noidle(supplier);
if (check_idle)
pm_request_idle(supplier);
}
static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
@ -337,8 +331,11 @@ static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)
struct device_link *link;
list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
device_links_read_lock_held())
pm_runtime_release_supplier(link, try_to_suspend);
device_links_read_lock_held()) {
pm_runtime_release_supplier(link);
if (try_to_suspend)
pm_request_idle(link->supplier);
}
}
static void rpm_put_suppliers(struct device *dev)
@ -1771,7 +1768,6 @@ void pm_runtime_get_suppliers(struct device *dev)
if (link->flags & DL_FLAG_PM_RUNTIME) {
link->supplier_preactivated = true;
pm_runtime_get_sync(link->supplier);
refcount_inc(&link->rpm_active);
}
device_links_read_unlock(idx);
@ -1791,19 +1787,8 @@ void pm_runtime_put_suppliers(struct device *dev)
list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
device_links_read_lock_held())
if (link->supplier_preactivated) {
bool put;
link->supplier_preactivated = false;
spin_lock_irq(&dev->power.lock);
put = pm_runtime_status_suspended(dev) &&
refcount_dec_not_one(&link->rpm_active);
spin_unlock_irq(&dev->power.lock);
if (put)
pm_runtime_put(link->supplier);
pm_runtime_put(link->supplier);
}
device_links_read_unlock(idx);
@ -1838,7 +1823,8 @@ void pm_runtime_drop_link(struct device_link *link)
return;
pm_runtime_drop_link_count(link->consumer);
pm_runtime_release_supplier(link, true);
pm_runtime_release_supplier(link);
pm_request_idle(link->supplier);
}
static bool pm_runtime_need_not_resume(struct device *dev)

View File

@ -152,6 +152,10 @@ static unsigned int xen_blkif_max_ring_order;
module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, 0444);
MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring");
static bool __read_mostly xen_blkif_trusted = true;
module_param_named(trusted, xen_blkif_trusted, bool, 0644);
MODULE_PARM_DESC(trusted, "Is the backend trusted");
#define BLK_RING_SIZE(info) \
__CONST_RING_SIZE(blkif, XEN_PAGE_SIZE * (info)->nr_ring_pages)
@ -210,6 +214,7 @@ struct blkfront_info
unsigned int feature_discard:1;
unsigned int feature_secdiscard:1;
unsigned int feature_persistent:1;
unsigned int bounce:1;
unsigned int discard_granularity;
unsigned int discard_alignment;
/* Number of 4KB segments handled */
@ -310,8 +315,8 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num)
if (!gnt_list_entry)
goto out_of_memory;
if (info->feature_persistent) {
granted_page = alloc_page(GFP_NOIO);
if (info->bounce) {
granted_page = alloc_page(GFP_NOIO | __GFP_ZERO);
if (!granted_page) {
kfree(gnt_list_entry);
goto out_of_memory;
@ -330,7 +335,7 @@ out_of_memory:
list_for_each_entry_safe(gnt_list_entry, n,
&rinfo->grants, node) {
list_del(&gnt_list_entry->node);
if (info->feature_persistent)
if (info->bounce)
__free_page(gnt_list_entry->page);
kfree(gnt_list_entry);
i--;
@ -376,7 +381,7 @@ static struct grant *get_grant(grant_ref_t *gref_head,
/* Assign a gref to this page */
gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
BUG_ON(gnt_list_entry->gref == -ENOSPC);
if (info->feature_persistent)
if (info->bounce)
grant_foreign_access(gnt_list_entry, info);
else {
/* Grant access to the GFN passed by the caller */
@ -400,7 +405,7 @@ static struct grant *get_indirect_grant(grant_ref_t *gref_head,
/* Assign a gref to this page */
gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
BUG_ON(gnt_list_entry->gref == -ENOSPC);
if (!info->feature_persistent) {
if (!info->bounce) {
struct page *indirect_page;
/* Fetch a pre-allocated page to use for indirect grefs */
@ -703,7 +708,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
.grant_idx = 0,
.segments = NULL,
.rinfo = rinfo,
.need_copy = rq_data_dir(req) && info->feature_persistent,
.need_copy = rq_data_dir(req) && info->bounce,
};
/*
@ -981,11 +986,12 @@ static void xlvbd_flush(struct blkfront_info *info)
{
blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
info->feature_fua ? true : false);
pr_info("blkfront: %s: %s %s %s %s %s\n",
pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
info->gd->disk_name, flush_info(info),
"persistent grants:", info->feature_persistent ?
"enabled;" : "disabled;", "indirect descriptors:",
info->max_indirect_segments ? "enabled;" : "disabled;");
info->max_indirect_segments ? "enabled;" : "disabled;",
"bounce buffer:", info->bounce ? "enabled" : "disabled;");
}
static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
@ -1207,7 +1213,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
if (!list_empty(&rinfo->indirect_pages)) {
struct page *indirect_page, *n;
BUG_ON(info->feature_persistent);
BUG_ON(info->bounce);
list_for_each_entry_safe(indirect_page, n, &rinfo->indirect_pages, lru) {
list_del(&indirect_page->lru);
__free_page(indirect_page);
@ -1224,7 +1230,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
NULL);
rinfo->persistent_gnts_c--;
}
if (info->feature_persistent)
if (info->bounce)
__free_page(persistent_gnt->page);
kfree(persistent_gnt);
}
@ -1245,7 +1251,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
for (j = 0; j < segs; j++) {
persistent_gnt = rinfo->shadow[i].grants_used[j];
gnttab_end_foreign_access(persistent_gnt->gref, NULL);
if (info->feature_persistent)
if (info->bounce)
__free_page(persistent_gnt->page);
kfree(persistent_gnt);
}
@ -1428,7 +1434,7 @@ static int blkif_completion(unsigned long *id,
data.s = s;
num_sg = s->num_sg;
if (bret->operation == BLKIF_OP_READ && info->feature_persistent) {
if (bret->operation == BLKIF_OP_READ && info->bounce) {
for_each_sg(s->sg, sg, num_sg, i) {
BUG_ON(sg->offset + sg->length > PAGE_SIZE);
@ -1487,7 +1493,7 @@ static int blkif_completion(unsigned long *id,
* Add the used indirect page back to the list of
* available pages for indirect grefs.
*/
if (!info->feature_persistent) {
if (!info->bounce) {
indirect_page = s->indirect_grants[i]->page;
list_add(&indirect_page->lru, &rinfo->indirect_pages);
}
@ -1764,6 +1770,10 @@ static int talk_to_blkback(struct xenbus_device *dev,
if (!info)
return -ENODEV;
/* Check if backend is trusted. */
info->bounce = !xen_blkif_trusted ||
!xenbus_read_unsigned(dev->nodename, "trusted", 1);
max_page_order = xenbus_read_unsigned(info->xbdev->otherend,
"max-ring-page-order", 0);
ring_page_order = min(xen_blkif_max_ring_order, max_page_order);
@ -2173,17 +2183,18 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
if (err)
goto out_of_memory;
if (!info->feature_persistent && info->max_indirect_segments) {
if (!info->bounce && info->max_indirect_segments) {
/*
* We are using indirect descriptors but not persistent
* grants, we need to allocate a set of pages that can be
* We are using indirect descriptors but don't have a bounce
* buffer, we need to allocate a set of pages that can be
* used for mapping indirect grefs
*/
int num = INDIRECT_GREFS(grants) * BLK_RING_SIZE(info);
BUG_ON(!list_empty(&rinfo->indirect_pages));
for (i = 0; i < num; i++) {
struct page *indirect_page = alloc_page(GFP_KERNEL);
struct page *indirect_page = alloc_page(GFP_KERNEL |
__GFP_ZERO);
if (!indirect_page)
goto out_of_memory;
list_add(&indirect_page->lru, &rinfo->indirect_pages);
@ -2276,6 +2287,8 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
info->feature_persistent =
!!xenbus_read_unsigned(info->xbdev->otherend,
"feature-persistent", 0);
if (info->feature_persistent)
info->bounce = true;
indirect_segments = xenbus_read_unsigned(info->xbdev->otherend,
"feature-max-indirect-segments", 0);
@ -2547,6 +2560,13 @@ static void blkfront_delay_work(struct work_struct *work)
struct blkfront_info *info;
bool need_schedule_work = false;
/*
* Note that when using bounce buffers but not persistent grants
* there's no need to run blkfront_delay_work because grants are
* revoked in blkif_completion or else an error is reported and the
* connection is closed.
*/
mutex_lock(&blkfront_mutex);
list_for_each_entry(info, &info_list, info_list) {

View File

@ -111,6 +111,7 @@ int stm32_rcc_reset_init(struct device *dev, const struct of_device_id *match,
if (!reset_data)
return -ENOMEM;
spin_lock_init(&reset_data->lock);
reset_data->membase = base;
reset_data->rcdev.owner = THIS_MODULE;
reset_data->rcdev.ops = &stm32_reset_ops;

View File

@ -566,6 +566,28 @@ static int amd_pstate_cpu_exit(struct cpufreq_policy *policy)
return 0;
}
static int amd_pstate_cpu_resume(struct cpufreq_policy *policy)
{
int ret;
ret = amd_pstate_enable(true);
if (ret)
pr_err("failed to enable amd-pstate during resume, return %d\n", ret);
return ret;
}
static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy)
{
int ret;
ret = amd_pstate_enable(false);
if (ret)
pr_err("failed to disable amd-pstate during suspend, return %d\n", ret);
return ret;
}
/* Sysfs attributes */
/*
@ -636,6 +658,8 @@ static struct cpufreq_driver amd_pstate_driver = {
.target = amd_pstate_target,
.init = amd_pstate_cpu_init,
.exit = amd_pstate_cpu_exit,
.suspend = amd_pstate_cpu_suspend,
.resume = amd_pstate_cpu_resume,
.set_boost = amd_pstate_set_boost,
.name = "amd-pstate",
.attr = amd_pstate_attr,

View File

@ -127,6 +127,7 @@ static const struct of_device_id blocklist[] __initconst = {
{ .compatible = "mediatek,mt8173", },
{ .compatible = "mediatek,mt8176", },
{ .compatible = "mediatek,mt8183", },
{ .compatible = "mediatek,mt8186", },
{ .compatible = "mediatek,mt8365", },
{ .compatible = "mediatek,mt8516", },

View File

@ -470,6 +470,10 @@ static int pmac_cpufreq_init_MacRISC3(struct device_node *cpunode)
if (slew_done_gpio_np)
slew_done_gpio = read_gpio(slew_done_gpio_np);
of_node_put(volt_gpio_np);
of_node_put(freq_gpio_np);
of_node_put(slew_done_gpio_np);
/* If we use the frequency GPIOs, calculate the min/max speeds based
* on the bus frequencies
*/

View File

@ -442,6 +442,9 @@ static int qcom_cpufreq_hw_cpu_online(struct cpufreq_policy *policy)
struct platform_device *pdev = cpufreq_get_driver_data();
int ret;
if (data->throttle_irq <= 0)
return 0;
ret = irq_set_affinity_hint(data->throttle_irq, policy->cpus);
if (ret)
dev_err(&pdev->dev, "Failed to set CPU affinity of %s[%d]\n",
@ -469,6 +472,9 @@ static int qcom_cpufreq_hw_cpu_offline(struct cpufreq_policy *policy)
static void qcom_cpufreq_hw_lmh_exit(struct qcom_cpufreq_data *data)
{
if (data->throttle_irq <= 0)
return;
free_irq(data->throttle_irq, data);
}

View File

@ -275,6 +275,7 @@ static int qoriq_cpufreq_probe(struct platform_device *pdev)
np = of_find_matching_node(NULL, qoriq_cpufreq_blacklist);
if (np) {
of_node_put(np);
dev_info(&pdev->dev, "Disabling due to erratum A-008083");
return -ENODEV;
}

View File

@ -133,98 +133,6 @@ config CRYPTO_PAES_S390
Select this option if you want to use the paes cipher
for example to use protected key encrypted devices.
config CRYPTO_SHA1_S390
tristate "SHA1 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
It is available as of z990.
config CRYPTO_SHA256_S390
tristate "SHA256 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA256 secure hash standard (DFIPS 180-2).
It is available as of z9.
config CRYPTO_SHA512_S390
tristate "SHA384 and SHA512 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA512 secure hash standard.
It is available as of z10.
config CRYPTO_SHA3_256_S390
tristate "SHA3_224 and SHA3_256 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA3_256 secure hash standard.
It is available as of z14.
config CRYPTO_SHA3_512_S390
tristate "SHA3_384 and SHA3_512 digest algorithm"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of the
SHA3_512 secure hash standard.
It is available as of z14.
config CRYPTO_DES_S390
tristate "DES and Triple DES cipher algorithms"
depends on S390
select CRYPTO_ALGAPI
select CRYPTO_SKCIPHER
select CRYPTO_LIB_DES
help
This is the s390 hardware accelerated implementation of the
DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
As of z990 the ECB and CBC mode are hardware accelerated.
As of z196 the CTR mode is hardware accelerated.
config CRYPTO_AES_S390
tristate "AES cipher algorithms"
depends on S390
select CRYPTO_ALGAPI
select CRYPTO_SKCIPHER
help
This is the s390 hardware accelerated implementation of the
AES cipher algorithms (FIPS-197).
As of z9 the ECB and CBC modes are hardware accelerated
for 128 bit keys.
As of z10 the ECB and CBC modes are hardware accelerated
for all AES key sizes.
As of z196 the CTR mode is hardware accelerated for all AES
key sizes and XTS mode is hardware accelerated for 256 and
512 bit keys.
config CRYPTO_CHACHA_S390
tristate "ChaCha20 stream cipher"
depends on S390
select CRYPTO_SKCIPHER
select CRYPTO_LIB_CHACHA_GENERIC
select CRYPTO_ARCH_HAVE_LIB_CHACHA
help
This is the s390 SIMD implementation of the ChaCha20 stream
cipher (RFC 7539).
It is available as of z13.
config S390_PRNG
tristate "Pseudo random number generator device driver"
depends on S390
@ -238,29 +146,6 @@ config S390_PRNG
It is available as of z9.
config CRYPTO_GHASH_S390
tristate "GHASH hash function"
depends on S390
select CRYPTO_HASH
help
This is the s390 hardware accelerated implementation of GHASH,
the hash function used in GCM (Galois/Counter mode).
It is available as of z196.
config CRYPTO_CRC32_S390
tristate "CRC-32 algorithms"
depends on S390
select CRYPTO_HASH
select CRC32
help
Select this option if you want to use hardware accelerated
implementations of CRC algorithms. With this option, you
can optimize the computation of CRC-32 (IEEE 802.3 Ethernet)
and CRC-32C (Castagnoli).
It is available with IBM z13 or later.
config CRYPTO_DEV_NIAGARA2
tristate "Niagara2 Stream Processing Unit driver"
select CRYPTO_LIB_DES

View File

@ -85,17 +85,9 @@ static int sp_get_irqs(struct sp_device *sp)
struct sp_platform *sp_platform = sp->dev_specific;
struct device *dev = sp->dev;
struct platform_device *pdev = to_platform_device(dev);
unsigned int i, count;
int ret;
for (i = 0, count = 0; i < pdev->num_resources; i++) {
struct resource *res = &pdev->resource[i];
if (resource_type(res) == IORESOURCE_IRQ)
count++;
}
sp_platform->irq_count = count;
sp_platform->irq_count = platform_irq_count(pdev);
ret = platform_get_irq(pdev, 0);
if (ret < 0) {
@ -104,7 +96,7 @@ static int sp_get_irqs(struct sp_device *sp)
}
sp->psp_irq = ret;
if (count == 1) {
if (sp_platform->irq_count == 1) {
sp->ccp_irq = ret;
} else {
ret = platform_get_irq(pdev, 1);

View File

@ -197,7 +197,7 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
else
cxld->target_type = CXL_DECODER_ACCELERATOR;
if (is_cxl_endpoint(to_cxl_port(cxld->dev.parent)))
if (is_endpoint_decoder(&cxld->dev))
return 0;
target_list.value =

View File

@ -355,11 +355,13 @@ static int cxl_to_mem_cmd(struct cxl_mem_command *mem_cmd,
return -EBUSY;
/* Check the input buffer is the expected size */
if (info->size_in != send_cmd->in.size)
if ((info->size_in != CXL_VARIABLE_PAYLOAD) &&
(info->size_in != send_cmd->in.size))
return -ENOMEM;
/* Check the output buffer is at least large enough */
if (send_cmd->out.size < info->size_out)
if ((info->size_out != CXL_VARIABLE_PAYLOAD) &&
(send_cmd->out.size < info->size_out))
return -ENOMEM;
*mem_cmd = (struct cxl_mem_command) {

View File

@ -272,7 +272,7 @@ static const struct device_type cxl_decoder_root_type = {
.groups = cxl_decoder_root_attribute_groups,
};
static bool is_endpoint_decoder(struct device *dev)
bool is_endpoint_decoder(struct device *dev)
{
return dev->type == &cxl_decoder_endpoint_type;
}

View File

@ -340,6 +340,7 @@ struct cxl_dport *cxl_find_dport_by_dev(struct cxl_port *port,
struct cxl_decoder *to_cxl_decoder(struct device *dev);
bool is_root_decoder(struct device *dev);
bool is_endpoint_decoder(struct device *dev);
bool is_cxl_decoder(struct device *dev);
struct cxl_decoder *cxl_root_decoder_alloc(struct cxl_port *port,
unsigned int nr_targets);

View File

@ -300,13 +300,13 @@ struct cxl_mbox_identify {
} __packed;
struct cxl_mbox_get_lsa {
u32 offset;
u32 length;
__le32 offset;
__le32 length;
} __packed;
struct cxl_mbox_set_lsa {
u32 offset;
u32 reserved;
__le32 offset;
__le32 reserved;
u8 data[];
} __packed;

View File

@ -29,6 +29,7 @@ static int create_endpoint(struct cxl_memdev *cxlmd,
{
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct cxl_port *endpoint;
int rc;
endpoint = devm_cxl_add_port(&parent_port->dev, &cxlmd->dev,
cxlds->component_reg_phys, parent_port);
@ -37,13 +38,17 @@ static int create_endpoint(struct cxl_memdev *cxlmd,
dev_dbg(&cxlmd->dev, "add: %s\n", dev_name(&endpoint->dev));
rc = cxl_endpoint_autoremove(cxlmd, endpoint);
if (rc)
return rc;
if (!endpoint->dev.driver) {
dev_err(&cxlmd->dev, "%s failed probe\n",
dev_name(&endpoint->dev));
return -ENXIO;
}
return cxl_endpoint_autoremove(cxlmd, endpoint);
return 0;
}
static void enable_suspend(void *data)

View File

@ -108,8 +108,8 @@ static int cxl_pmem_get_config_data(struct cxl_dev_state *cxlds,
return -EINVAL;
get_lsa = (struct cxl_mbox_get_lsa) {
.offset = cmd->in_offset,
.length = cmd->in_length,
.offset = cpu_to_le32(cmd->in_offset),
.length = cpu_to_le32(cmd->in_length),
};
rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_GET_LSA, &get_lsa,
@ -139,7 +139,7 @@ static int cxl_pmem_set_config_data(struct cxl_dev_state *cxlds,
return -ENOMEM;
*set_lsa = (struct cxl_mbox_set_lsa) {
.offset = cmd->in_offset,
.offset = cpu_to_le32(cmd->in_offset),
};
memcpy(set_lsa->data, cmd->in_buf, cmd->in_length);

View File

@ -123,7 +123,7 @@ void devfreq_get_freq_range(struct devfreq *devfreq,
unsigned long *min_freq,
unsigned long *max_freq)
{
unsigned long *freq_table = devfreq->profile->freq_table;
unsigned long *freq_table = devfreq->freq_table;
s32 qos_min_freq, qos_max_freq;
lockdep_assert_held(&devfreq->lock);
@ -133,11 +133,11 @@ void devfreq_get_freq_range(struct devfreq *devfreq,
* The devfreq drivers can initialize this in either ascending or
* descending order and devfreq core supports both.
*/
if (freq_table[0] < freq_table[devfreq->profile->max_state - 1]) {
if (freq_table[0] < freq_table[devfreq->max_state - 1]) {
*min_freq = freq_table[0];
*max_freq = freq_table[devfreq->profile->max_state - 1];
*max_freq = freq_table[devfreq->max_state - 1];
} else {
*min_freq = freq_table[devfreq->profile->max_state - 1];
*min_freq = freq_table[devfreq->max_state - 1];
*max_freq = freq_table[0];
}
@ -169,8 +169,8 @@ static int devfreq_get_freq_level(struct devfreq *devfreq, unsigned long freq)
{
int lev;
for (lev = 0; lev < devfreq->profile->max_state; lev++)
if (freq == devfreq->profile->freq_table[lev])
for (lev = 0; lev < devfreq->max_state; lev++)
if (freq == devfreq->freq_table[lev])
return lev;
return -EINVAL;
@ -178,7 +178,6 @@ static int devfreq_get_freq_level(struct devfreq *devfreq, unsigned long freq)
static int set_freq_table(struct devfreq *devfreq)
{
struct devfreq_dev_profile *profile = devfreq->profile;
struct dev_pm_opp *opp;
unsigned long freq;
int i, count;
@ -188,25 +187,22 @@ static int set_freq_table(struct devfreq *devfreq)
if (count <= 0)
return -EINVAL;
profile->max_state = count;
profile->freq_table = devm_kcalloc(devfreq->dev.parent,
profile->max_state,
sizeof(*profile->freq_table),
GFP_KERNEL);
if (!profile->freq_table) {
profile->max_state = 0;
devfreq->max_state = count;
devfreq->freq_table = devm_kcalloc(devfreq->dev.parent,
devfreq->max_state,
sizeof(*devfreq->freq_table),
GFP_KERNEL);
if (!devfreq->freq_table)
return -ENOMEM;
}
for (i = 0, freq = 0; i < profile->max_state; i++, freq++) {
for (i = 0, freq = 0; i < devfreq->max_state; i++, freq++) {
opp = dev_pm_opp_find_freq_ceil(devfreq->dev.parent, &freq);
if (IS_ERR(opp)) {
devm_kfree(devfreq->dev.parent, profile->freq_table);
profile->max_state = 0;
devm_kfree(devfreq->dev.parent, devfreq->freq_table);
return PTR_ERR(opp);
}
dev_pm_opp_put(opp);
profile->freq_table[i] = freq;
devfreq->freq_table[i] = freq;
}
return 0;
@ -246,7 +242,7 @@ int devfreq_update_status(struct devfreq *devfreq, unsigned long freq)
if (lev != prev_lev) {
devfreq->stats.trans_table[
(prev_lev * devfreq->profile->max_state) + lev]++;
(prev_lev * devfreq->max_state) + lev]++;
devfreq->stats.total_trans++;
}
@ -835,6 +831,9 @@ struct devfreq *devfreq_add_device(struct device *dev,
if (err < 0)
goto err_dev;
mutex_lock(&devfreq->lock);
} else {
devfreq->freq_table = devfreq->profile->freq_table;
devfreq->max_state = devfreq->profile->max_state;
}
devfreq->scaling_min_freq = find_available_min_freq(devfreq);
@ -870,8 +869,8 @@ struct devfreq *devfreq_add_device(struct device *dev,
devfreq->stats.trans_table = devm_kzalloc(&devfreq->dev,
array3_size(sizeof(unsigned int),
devfreq->profile->max_state,
devfreq->profile->max_state),
devfreq->max_state,
devfreq->max_state),
GFP_KERNEL);
if (!devfreq->stats.trans_table) {
mutex_unlock(&devfreq->lock);
@ -880,7 +879,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
}
devfreq->stats.time_in_state = devm_kcalloc(&devfreq->dev,
devfreq->profile->max_state,
devfreq->max_state,
sizeof(*devfreq->stats.time_in_state),
GFP_KERNEL);
if (!devfreq->stats.time_in_state) {
@ -932,8 +931,9 @@ struct devfreq *devfreq_add_device(struct device *dev,
err = devfreq->governor->event_handler(devfreq, DEVFREQ_GOV_START,
NULL);
if (err) {
dev_err(dev, "%s: Unable to start governor for the device\n",
__func__);
dev_err_probe(dev, err,
"%s: Unable to start governor for the device\n",
__func__);
goto err_init;
}
create_sysfs_files(devfreq, devfreq->governor);
@ -1665,9 +1665,9 @@ static ssize_t available_frequencies_show(struct device *d,
mutex_lock(&df->lock);
for (i = 0; i < df->profile->max_state; i++)
for (i = 0; i < df->max_state; i++)
count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
"%lu ", df->profile->freq_table[i]);
"%lu ", df->freq_table[i]);
mutex_unlock(&df->lock);
/* Truncate the trailing space */
@ -1690,7 +1690,7 @@ static ssize_t trans_stat_show(struct device *dev,
if (!df->profile)
return -EINVAL;
max_state = df->profile->max_state;
max_state = df->max_state;
if (max_state == 0)
return sprintf(buf, "Not Supported.\n");
@ -1707,19 +1707,17 @@ static ssize_t trans_stat_show(struct device *dev,
len += sprintf(buf + len, " :");
for (i = 0; i < max_state; i++)
len += sprintf(buf + len, "%10lu",
df->profile->freq_table[i]);
df->freq_table[i]);
len += sprintf(buf + len, " time(ms)\n");
for (i = 0; i < max_state; i++) {
if (df->profile->freq_table[i]
== df->previous_freq) {
if (df->freq_table[i] == df->previous_freq)
len += sprintf(buf + len, "*");
} else {
else
len += sprintf(buf + len, " ");
}
len += sprintf(buf + len, "%10lu:",
df->profile->freq_table[i]);
len += sprintf(buf + len, "%10lu:", df->freq_table[i]);
for (j = 0; j < max_state; j++)
len += sprintf(buf + len, "%10u",
df->stats.trans_table[(i * max_state) + j]);
@ -1743,7 +1741,7 @@ static ssize_t trans_stat_store(struct device *dev,
if (!df->profile)
return -EINVAL;
if (df->profile->max_state == 0)
if (df->max_state == 0)
return count;
err = kstrtoint(buf, 10, &value);
@ -1751,11 +1749,11 @@ static ssize_t trans_stat_store(struct device *dev,
return -EINVAL;
mutex_lock(&df->lock);
memset(df->stats.time_in_state, 0, (df->profile->max_state *
memset(df->stats.time_in_state, 0, (df->max_state *
sizeof(*df->stats.time_in_state)));
memset(df->stats.trans_table, 0, array3_size(sizeof(unsigned int),
df->profile->max_state,
df->profile->max_state));
df->max_state,
df->max_state));
df->stats.total_trans = 0;
df->stats.last_update = get_jiffies_64();
mutex_unlock(&df->lock);

View File

@ -519,15 +519,19 @@ static int of_get_devfreq_events(struct device_node *np,
count = of_get_child_count(events_np);
desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL);
if (!desc)
if (!desc) {
of_node_put(events_np);
return -ENOMEM;
}
info->num_events = count;
of_id = of_match_device(exynos_ppmu_id_match, dev);
if (of_id)
info->ppmu_type = (enum exynos_ppmu_type)of_id->data;
else
else {
of_node_put(events_np);
return -EINVAL;
}
j = 0;
for_each_child_of_node(events_np, node) {

View File

@ -447,9 +447,9 @@ static int exynos_bus_probe(struct platform_device *pdev)
}
}
max_state = bus->devfreq->profile->max_state;
min_freq = (bus->devfreq->profile->freq_table[0] / 1000);
max_freq = (bus->devfreq->profile->freq_table[max_state - 1] / 1000);
max_state = bus->devfreq->max_state;
min_freq = (bus->devfreq->freq_table[0] / 1000);
max_freq = (bus->devfreq->freq_table[max_state - 1] / 1000);
pr_info("exynos-bus: new bus device registered: %s (%6ld KHz ~ %6ld KHz)\n",
dev_name(dev), min_freq, max_freq);

View File

@ -1,4 +1,4 @@
// SPDX-License-Identifier: GPL-2.0-only
// SPDX-License-Identifier: GPL-2.0-only
/*
* linux/drivers/devfreq/governor_passive.c
*
@ -14,10 +14,9 @@
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/devfreq.h>
#include <linux/units.h>
#include "governor.h"
#define HZ_PER_KHZ 1000
static struct devfreq_cpu_data *
get_parent_cpu_data(struct devfreq_passive_data *p_data,
struct cpufreq_policy *policy)
@ -34,6 +33,20 @@ get_parent_cpu_data(struct devfreq_passive_data *p_data,
return NULL;
}
static void delete_parent_cpu_data(struct devfreq_passive_data *p_data)
{
struct devfreq_cpu_data *parent_cpu_data, *tmp;
list_for_each_entry_safe(parent_cpu_data, tmp, &p_data->cpu_data_list, node) {
list_del(&parent_cpu_data->node);
if (parent_cpu_data->opp_table)
dev_pm_opp_put_opp_table(parent_cpu_data->opp_table);
kfree(parent_cpu_data);
}
}
static unsigned long get_target_freq_by_required_opp(struct device *p_dev,
struct opp_table *p_opp_table,
struct opp_table *opp_table,
@ -131,18 +144,18 @@ static int get_target_freq_with_devfreq(struct devfreq *devfreq,
goto out;
/* Use interpolation if required opps is not available */
for (i = 0; i < parent_devfreq->profile->max_state; i++)
if (parent_devfreq->profile->freq_table[i] == *freq)
for (i = 0; i < parent_devfreq->max_state; i++)
if (parent_devfreq->freq_table[i] == *freq)
break;
if (i == parent_devfreq->profile->max_state)
if (i == parent_devfreq->max_state)
return -EINVAL;
if (i < devfreq->profile->max_state) {
child_freq = devfreq->profile->freq_table[i];
if (i < devfreq->max_state) {
child_freq = devfreq->freq_table[i];
} else {
count = devfreq->profile->max_state;
child_freq = devfreq->profile->freq_table[count - 1];
count = devfreq->max_state;
child_freq = devfreq->freq_table[count - 1];
}
out:
@ -222,8 +235,7 @@ static int cpufreq_passive_unregister_notifier(struct devfreq *devfreq)
{
struct devfreq_passive_data *p_data
= (struct devfreq_passive_data *)devfreq->data;
struct devfreq_cpu_data *parent_cpu_data;
int cpu, ret = 0;
int ret;
if (p_data->nb.notifier_call) {
ret = cpufreq_unregister_notifier(&p_data->nb,
@ -232,27 +244,9 @@ static int cpufreq_passive_unregister_notifier(struct devfreq *devfreq)
return ret;
}
for_each_possible_cpu(cpu) {
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
if (!policy) {
ret = -EINVAL;
continue;
}
delete_parent_cpu_data(p_data);
parent_cpu_data = get_parent_cpu_data(p_data, policy);
if (!parent_cpu_data) {
cpufreq_cpu_put(policy);
continue;
}
list_del(&parent_cpu_data->node);
if (parent_cpu_data->opp_table)
dev_pm_opp_put_opp_table(parent_cpu_data->opp_table);
kfree(parent_cpu_data);
cpufreq_cpu_put(policy);
}
return ret;
return 0;
}
static int cpufreq_passive_register_notifier(struct devfreq *devfreq)
@ -336,7 +330,6 @@ err_free_cpu_data:
err_put_policy:
cpufreq_cpu_put(policy);
err:
WARN_ON(cpufreq_passive_unregister_notifier(devfreq));
return ret;
}
@ -407,8 +400,7 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
if (!p_data)
return -EINVAL;
if (!p_data->this)
p_data->this = devfreq;
p_data->this = devfreq;
switch (event) {
case DEVFREQ_GOV_START:

View File

@ -1900,6 +1900,11 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan)
for (i = 0; i < init_nr_desc_per_channel; i++) {
desc = at_xdmac_alloc_desc(chan, GFP_KERNEL);
if (!desc) {
if (i == 0) {
dev_warn(chan2dev(chan),
"can't allocate any descriptors\n");
return -EIO;
}
dev_warn(chan2dev(chan),
"only %d descriptors have been allocated\n", i);
break;

View File

@ -675,16 +675,10 @@ static int dmatest_func(void *data)
/*
* src and dst buffers are freed by ourselves below
*/
if (params->polled) {
if (params->polled)
flags = DMA_CTRL_ACK;
} else {
if (dma_has_cap(DMA_INTERRUPT, dev->cap_mask)) {
flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
} else {
pr_err("Channel does not support interrupt!\n");
goto err_pq_array;
}
}
else
flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
ktime = ktime_get();
while (!(kthread_should_stop() ||
@ -912,7 +906,6 @@ error_unmap_continue:
runtime = ktime_to_us(ktime);
ret = 0;
err_pq_array:
kfree(dma_pq);
err_srcs_array:
kfree(srcs);

View File

@ -1164,8 +1164,9 @@ static int dma_chan_pause(struct dma_chan *dchan)
BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT;
axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
} else {
val = BIT(chan->id) << DMAC_CHAN_SUSP2_SHIFT |
BIT(chan->id) << DMAC_CHAN_SUSP2_WE_SHIFT;
val = axi_dma_ioread32(chan->chip, DMAC_CHSUSPREG);
val |= BIT(chan->id) << DMAC_CHAN_SUSP2_SHIFT |
BIT(chan->id) << DMAC_CHAN_SUSP2_WE_SHIFT;
axi_dma_iowrite32(chan->chip, DMAC_CHSUSPREG, val);
}
@ -1190,12 +1191,13 @@ static inline void axi_chan_resume(struct axi_dma_chan *chan)
{
u32 val;
val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
if (chan->chip->dw->hdata->reg_map_8_channels) {
val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
val &= ~(BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT);
val |= (BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);
axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
} else {
val = axi_dma_ioread32(chan->chip, DMAC_CHSUSPREG);
val &= ~(BIT(chan->id) << DMAC_CHAN_SUSP2_SHIFT);
val |= (BIT(chan->id) << DMAC_CHAN_SUSP2_WE_SHIFT);
axi_dma_iowrite32(chan->chip, DMAC_CHSUSPREG, val);

View File

@ -716,10 +716,7 @@ static void idxd_device_wqs_clear_state(struct idxd_device *idxd)
struct idxd_wq *wq = idxd->wqs[i];
mutex_lock(&wq->wq_lock);
if (wq->state == IDXD_WQ_ENABLED) {
idxd_wq_disable_cleanup(wq);
wq->state = IDXD_WQ_DISABLED;
}
idxd_wq_disable_cleanup(wq);
idxd_wq_device_reset_cleanup(wq);
mutex_unlock(&wq->wq_lock);
}

View File

@ -512,15 +512,16 @@ static int idxd_probe(struct idxd_device *idxd)
dev_dbg(dev, "IDXD reset complete\n");
if (IS_ENABLED(CONFIG_INTEL_IDXD_SVM) && sva) {
if (iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA))
if (iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA)) {
dev_warn(dev, "Unable to turn on user SVA feature.\n");
else
} else {
set_bit(IDXD_FLAG_USER_PASID_ENABLED, &idxd->flags);
if (idxd_enable_system_pasid(idxd))
dev_warn(dev, "No in-kernel DMA with PASID.\n");
else
set_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags);
if (idxd_enable_system_pasid(idxd))
dev_warn(dev, "No in-kernel DMA with PASID.\n");
else
set_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags);
}
} else if (!sva) {
dev_warn(dev, "User forced SVA off via module param.\n");
}

View File

@ -891,7 +891,7 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
* SDMA stops cyclic channel when DMA request triggers a channel and no SDMA
* owned buffer is available (i.e. BD_DONE was set too late).
*/
if (!is_sdma_channel_enabled(sdmac->sdma, sdmac->channel)) {
if (sdmac->desc && !is_sdma_channel_enabled(sdmac->sdma, sdmac->channel)) {
dev_warn(sdmac->sdma->dev, "restart cyclic channel %d\n", sdmac->channel);
sdma_enable_channel(sdmac->sdma, sdmac->channel);
}
@ -2346,7 +2346,7 @@ MODULE_DESCRIPTION("i.MX SDMA driver");
#if IS_ENABLED(CONFIG_SOC_IMX6Q)
MODULE_FIRMWARE("imx/sdma/sdma-imx6q.bin");
#endif
#if IS_ENABLED(CONFIG_SOC_IMX7D)
#if IS_ENABLED(CONFIG_SOC_IMX7D) || IS_ENABLED(CONFIG_SOC_IMX8M)
MODULE_FIRMWARE("imx/sdma/sdma-imx7d.bin");
#endif
MODULE_LICENSE("GPL");

View File

@ -1593,11 +1593,12 @@ static int intel_ldma_probe(struct platform_device *pdev)
d->core_clk = devm_clk_get_optional(dev, NULL);
if (IS_ERR(d->core_clk))
return PTR_ERR(d->core_clk);
clk_prepare_enable(d->core_clk);
d->rst = devm_reset_control_get_optional(dev, NULL);
if (IS_ERR(d->rst))
return PTR_ERR(d->rst);
clk_prepare_enable(d->core_clk);
reset_control_deassert(d->rst);
ret = devm_add_action_or_reset(dev, ldma_clk_disable, d);

View File

@ -2589,7 +2589,7 @@ static struct dma_pl330_desc *pl330_get_desc(struct dma_pl330_chan *pch)
/* If the DMAC pool is empty, alloc new */
if (!desc) {
DEFINE_SPINLOCK(lock);
static DEFINE_SPINLOCK(lock);
LIST_HEAD(pool);
if (!add_desc(&pool, &lock, GFP_ATOMIC, 1))

Some files were not shown because too many files have changed in this diff.