forked from Minki/linux
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 updates from Martin Schwidefsky:

 - Improvements for the spectre defense:
    * The spectre related code is consolidated to a single file
      nospec-branch.c
    * Automatic enable/disable for the spectre v2 defenses (expoline
      vs. nobp)
    * Syslog messages for spectre v2 are added
    * Enable CONFIG_GENERIC_CPU_VULNERABILITIES and define the attribute
      functions for spectre v1 and v2

 - Add helper macros for assembler alternatives and use them to shorten
   the code in entry.S.

 - Add support for persistent configuration data via the SCLP Store
   Data interface. The H/W interface requires a page table that uses 4K
   pages only; the code to set up such an address space is added as well.

 - Enable virtio GPU emulation in QEMU. To do this the depends
   statements for a few common Kconfig options are modified.

 - Add support for format-3 channel path descriptors and add a binary
   sysfs interface to export the associated utility strings.

 - Add a sysfs attribute to control the IFCC handling in case of
   constant channel errors.

 - The vfio-ccw changes from Cornelia.

 - Bug fixes and cleanups.
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (40 commits)
  s390/kvm: improve stack frame constants in entry.S
  s390/lpp: use assembler alternatives for the LPP instruction
  s390/entry.S: use assembler alternatives
  s390: add assembler macros for CPU alternatives
  s390: add sysfs attributes for spectre
  s390: report spectre mitigation via syslog
  s390: add automatic detection of the spectre defense
  s390: move nobp parameter functions to nospec-branch.c
  s390/cio: add util_string sysfs attribute
  s390/chsc: query utility strings via fmt3 channel path descriptor
  s390/cio: rename struct channel_path_desc
  s390/cio: fix unbind of io_subchannel_driver
  s390/qdio: split up CCQ handling for EQBS / SQBS
  s390/qdio: don't retry EQBS after CCQ 96
  s390/qdio: restrict buffer merging to eligible devices
  s390/qdio: don't merge ERROR output buffers
  s390/qdio: simplify math in get_*_buffer_frontier()
  s390/decompressor: trim uncompressed image head during the build
  s390/crypto: Fix kernel crash on aes_s390 module remove.
  s390/defkeymap: fix global init to zero
  ...
commit becdce1c66
@@ -28,7 +28,7 @@ every detail. More information/reference could be found here:
     https://en.wikipedia.org/wiki/Channel_I/O
 - s390 architecture:
     s390 Principles of Operation manual (IBM Form. No. SA22-7832)
-- The existing Qemu code which implements a simple emulated channel
+- The existing QEMU code which implements a simple emulated channel
   subsystem could also be a good reference. It makes it easier to follow
   the flow.
     qemu/hw/s390x/css.c
@@ -39,22 +39,22 @@ For vfio mediated device framework:
 Motivation of vfio-ccw
 ----------------------
 
-Currently, a guest virtualized via qemu/kvm on s390 only sees
+Typically, a guest virtualized via QEMU/KVM on s390 only sees
 paravirtualized virtio devices via the "Virtio Over Channel I/O
 (virtio-ccw)" transport. This makes virtio devices discoverable via
 standard operating system algorithms for handling channel devices.
 
 However this is not enough. On s390 for the majority of devices, which
 use the standard Channel I/O based mechanism, we also need to provide
-the functionality of passing through them to a Qemu virtual machine.
+the functionality of passing through them to a QEMU virtual machine.
 This includes devices that don't have a virtio counterpart (e.g. tape
 drives) or that have specific characteristics which guests want to
 exploit.
 
 For passing a device to a guest, we want to use the same interface as
-everybody else, namely vfio. Thus, we would like to introduce vfio
-support for channel devices. And we would like to name this new vfio
-device "vfio-ccw".
+everybody else, namely vfio. We implement this vfio support for channel
+devices via the vfio mediated device framework and the subchannel device
+driver "vfio_ccw".
 
 Access patterns of CCW devices
 ------------------------------
@@ -99,7 +99,7 @@ As mentioned above, we realize vfio-ccw with a mdev implementation.
 Channel I/O does not have IOMMU hardware support, so the physical
 vfio-ccw device does not have an IOMMU level translation or isolation.
 
-Sub-channel I/O instructions are all privileged instructions, When
+Subchannel I/O instructions are all privileged instructions. When
 handling the I/O instruction interception, vfio-ccw has the software
 policing and translation how the channel program is programmed before
 it gets sent to hardware.
@@ -121,7 +121,7 @@ devices:
 - The vfio_mdev driver for the mediated vfio ccw device.
   This is provided by the mdev framework. It is a vfio device driver for
   the mdev that created by vfio_ccw.
-  It realize a group of vfio device driver callbacks, adds itself to a
+  It realizes a group of vfio device driver callbacks, adds itself to a
   vfio group, and registers itself to the mdev framework as a mdev
   driver.
   It uses a vfio iommu backend that uses the existing map and unmap
@@ -178,7 +178,7 @@ vfio-ccw I/O region
 
 An I/O region is used to accept channel program request from user
 space and store I/O interrupt result for user space to retrieve. The
-defination of the region is:
+definition of the region is:
 
 struct ccw_io_region {
 #define ORB_AREA_SIZE 12
@@ -198,30 +198,23 @@ irb_area stores the I/O result.
 
 ret_code stores a return code for each access of the region.
 
-vfio-ccw patches overview
--------------------------
+vfio-ccw operation details
+--------------------------
 
-For now, our patches are rebased on the latest mdev implementation.
-vfio-ccw follows what vfio-pci did on the s390 paltform and uses
-vfio-iommu-type1 as the vfio iommu backend. It's a good start to launch
-the code review for vfio-ccw. Note that the implementation is far from
-complete yet; but we'd like to get feedback for the general
-architecture.
+vfio-ccw follows what vfio-pci did on the s390 platform and uses
+vfio-iommu-type1 as the vfio iommu backend.
 
 * CCW translation APIs
-- Description:
-  These introduce a group of APIs (start with 'cp_') to do CCW
-  translation. The CCWs passed in by a user space program are
-  organized with their guest physical memory addresses. These APIs
-  will copy the CCWs into the kernel space, and assemble a runnable
-  kernel channel program by updating the guest physical addresses with
-  their corresponding host physical addresses.
-- Patches:
-  vfio: ccw: introduce channel program interfaces
+A group of APIs (start with 'cp_') to do CCW translation. The CCWs
+passed in by a user space program are organized with their guest
+physical memory addresses. These APIs will copy the CCWs into kernel
+space, and assemble a runnable kernel channel program by updating the
+guest physical addresses with their corresponding host physical addresses.
+Note that we have to use IDALs even for direct-access CCWs, as the
+referenced memory can be located anywhere, including above 2G.
 
 * vfio_ccw device driver
-- Description:
-  The following patches utilizes the CCW translation APIs and introduce
+This driver utilizes the CCW translation APIs and introduces
 vfio_ccw, which is the driver for the I/O subchannel devices you want
 to pass through.
 vfio_ccw implements the following vfio ioctls:
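The translation step described above can be sketched in plain C. This is not the kernel's actual cp_* API; the CCW layout is simplified and the `example_xlate` mapping is an illustrative assumption. The point is the rewrite of guest-physical data addresses into host addresses before the chain is issued to the device.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified format-1 CCW: command code, flags, count, data address. */
struct ccw1 {
	uint8_t  cmd_code;
	uint8_t  flags;
	uint16_t count;
	uint32_t cda;	/* data address (guest-physical in this sketch) */
};

/* Toy translate callback: maps one guest address to a host address. */
typedef uint32_t (*xlate_fn_t)(uint32_t guest_addr);

/* Rewrite every data address in a copied channel program so the chain
 * becomes runnable against host memory, mirroring what the cp_* APIs
 * described above do in spirit. */
static void cp_translate_sketch(struct ccw1 *chain, size_t len,
				xlate_fn_t xlate)
{
	for (size_t i = 0; i < len; i++)
		chain[i].cda = xlate(chain[i].cda);
}

/* Example mapping: identity plus a fixed host offset (pure assumption,
 * a real implementation pins pages and looks up per-page mappings). */
static uint32_t example_xlate(uint32_t guest_addr)
{
	return guest_addr + 0x80000000u;
}
```

The real code additionally copies the CCWs into kernel space and builds IDALs, since the referenced guest memory can live anywhere, including above 2G.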
@@ -236,20 +229,14 @@ architecture.
   This also provides the SET_IRQ ioctl to setup an event notifier to
   notify the user space program the I/O completion in an asynchronous
   way.
-- Patches:
-  vfio: ccw: basic implementation for vfio_ccw driver
-  vfio: ccw: introduce ccw_io_region
-  vfio: ccw: realize VFIO_DEVICE_GET_REGION_INFO ioctl
-  vfio: ccw: realize VFIO_DEVICE_RESET ioctl
-  vfio: ccw: realize VFIO_DEVICE_G(S)ET_IRQ_INFO ioctls
 
-The user of vfio-ccw is not limited to Qemu, while Qemu is definitely a
+The use of vfio-ccw is not limited to QEMU, while QEMU is definitely a
 good example to get understand how these patches work. Here is a little
-bit more detail how an I/O request triggered by the Qemu guest will be
+bit more detail how an I/O request triggered by the QEMU guest will be
 handled (without error handling).
 
 Explanation:
-Q1-Q7: Qemu side process.
+Q1-Q7: QEMU side process.
 K1-K5: Kernel side process.
 
 Q1. Get I/O region info during initialization.
@@ -263,7 +250,7 @@ Q4. Write the guest channel program and ORB to the I/O region.
 K2. Translate the guest channel program to a host kernel space
     channel program, which becomes runnable for a real device.
 K3. With the necessary information contained in the orb passed in
-    by Qemu, issue the ccwchain to the device.
+    by QEMU, issue the ccwchain to the device.
 K4. Return the ssch CC code.
 Q5. Return the CC code to the guest.
@@ -271,7 +258,7 @@ Q5. Return the CC code to the guest.
 
 K5. Interrupt handler gets the I/O result and write the result to
     the I/O region.
-K6. Signal Qemu to retrieve the result.
+K6. Signal QEMU to retrieve the result.
 Q6. Get the signal and event handler reads out the result from the I/O
     region.
 Q7. Update the irb for the guest.
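The I/O region the two sides exchange (ORB in at Q4, IRB and return code out at K5/Q6) can be sketched as a packed C struct. Only ORB_AREA_SIZE (12) appears in the text above; the SCSW and IRB area sizes here are assumed values for illustration, not taken from the document.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ORB_AREA_SIZE  12	/* from the definition quoted above */
#define SCSW_AREA_SIZE 12	/* assumed size, for illustration only */
#define IRB_AREA_SIZE  96	/* assumed size, for illustration only */

/* Sketch of the region layout: user space writes the ORB area (Q4),
 * the kernel stores the I/O result in the IRB area (K5) and a return
 * code for each access. */
struct ccw_io_region {
	uint8_t  orb_area[ORB_AREA_SIZE];
	uint8_t  scsw_area[SCSW_AREA_SIZE];
	uint8_t  irb_area[IRB_AREA_SIZE];
	uint32_t ret_code;
} __attribute__((packed));
```

Because the region is accessed as a flat byte range through the vfio device fd, a packed layout with fixed offsets is what makes the user/kernel contract stable.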
@@ -289,10 +276,20 @@ More information for DASD and ECKD could be found here:
 https://en.wikipedia.org/wiki/Direct-access_storage_device
 https://en.wikipedia.org/wiki/Count_key_data
 
-Together with the corresponding work in Qemu, we can bring the passed
+Together with the corresponding work in QEMU, we can bring the passed
 through DASD/ECKD device online in a guest now and use it as a block
 device.
 
+While the current code allows the guest to start channel programs via
+START SUBCHANNEL, support for HALT SUBCHANNEL or CLEAR SUBCHANNEL is
+not yet implemented.
+
+vfio-ccw supports classic (command mode) channel I/O only. Transport
+mode (HPF) is not supported.
+
+QDIO subchannels are currently not supported. Classic devices other than
+DASD/ECKD might work, but have not been tested.
+
 Reference
 ---------
 1. ESA/s390 Principles of Operation manual (IBM Form. No. SA22-7832)
@@ -120,6 +120,7 @@ config S390
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CPU_AUTOPROBE
 	select GENERIC_CPU_DEVICES if !SMP
+	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_FIND_FIRST_BIT
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_TIME_VSYSCALL
@@ -576,7 +577,7 @@ choice
 config EXPOLINE_OFF
 	bool "spectre_v2=off"
 
-config EXPOLINE_MEDIUM
+config EXPOLINE_AUTO
 	bool "spectre_v2=auto"
 
 config EXPOLINE_FULL
@@ -47,9 +47,6 @@ cflags-$(CONFIG_MARCH_Z14_TUNE) += -mtune=z14
 
 cflags-y += -Wa,-I$(srctree)/arch/$(ARCH)/include
 
-#KBUILD_IMAGE is necessary for make rpm
-KBUILD_IMAGE	:=arch/s390/boot/image
-
 #
 # Prevent tail-call optimizations, to get clearer backtraces:
 #
@@ -84,7 +81,7 @@ ifdef CONFIG_EXPOLINE
     CC_FLAGS_EXPOLINE += -mfunction-return=thunk
     CC_FLAGS_EXPOLINE += -mindirect-branch-table
     export CC_FLAGS_EXPOLINE
-    cflags-y += $(CC_FLAGS_EXPOLINE)
+    cflags-y += $(CC_FLAGS_EXPOLINE) -DCC_USING_EXPOLINE
   endif
 endif
@@ -126,6 +123,9 @@ tools := arch/s390/tools
 
 all: image bzImage
 
+#KBUILD_IMAGE is necessary for packaging targets like rpm-pkg, deb-pkg...
+KBUILD_IMAGE := $(boot)/bzImage
+
 install: vmlinux
 	$(Q)$(MAKE) $(build)=$(boot) $@
@@ -29,11 +29,16 @@ LDFLAGS_vmlinux := --oformat $(LD_BFD) -e startup -T
 $(obj)/vmlinux: $(obj)/vmlinux.lds $(OBJECTS)
 	$(call if_changed,ld)
 
-sed-sizes := -e 's/^\([0-9a-fA-F]*\) . \(__bss_start\|_end\)$$/\#define SZ\2 0x\1/p'
+TRIM_HEAD_SIZE := 0x11000
+
+sed-sizes := -e 's/^\([0-9a-fA-F]*\) . \(__bss_start\|_end\)$$/\#define SZ\2 (0x\1 - $(TRIM_HEAD_SIZE))/p'
 
 quiet_cmd_sizes = GEN     $@
       cmd_sizes = $(NM) $< | sed -n $(sed-sizes) > $@
 
+quiet_cmd_trim_head = TRIM    $@
+      cmd_trim_head = tail -c +$$(($(TRIM_HEAD_SIZE) + 1)) $< > $@
+
 $(obj)/sizes.h: vmlinux
 	$(call if_changed,sizes)
@@ -43,10 +48,13 @@ $(obj)/head.o: $(obj)/sizes.h
 CFLAGS_misc.o += -I$(objtree)/$(obj)
 $(obj)/misc.o: $(obj)/sizes.h
 
-OBJCOPYFLAGS_vmlinux.bin := -R .comment -S
-$(obj)/vmlinux.bin: vmlinux
+OBJCOPYFLAGS_vmlinux.bin.full := -R .comment -S
+$(obj)/vmlinux.bin.full: vmlinux
 	$(call if_changed,objcopy)
 
+$(obj)/vmlinux.bin: $(obj)/vmlinux.bin.full
+	$(call if_changed,trim_head)
+
 vmlinux.bin.all-y := $(obj)/vmlinux.bin
 
 suffix-$(CONFIG_KERNEL_GZIP)  := gz
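The TRIM_HEAD_SIZE bookkeeping above has two parts: symbol-derived sizes in sizes.h must shrink by 0x11000 bytes once the image head is cut off, and `tail -c +N` takes a 1-based start position, hence the `+ 1` in cmd_trim_head. A small C model of both expressions (function names are illustrative, not from the build):

```c
#include <assert.h>

#define TRIM_HEAD_SIZE 0x11000UL

/* Mirrors the sed rewrite "#define SZ\2 (0x... - $(TRIM_HEAD_SIZE))":
 * a symbol address in the untrimmed image becomes a size relative to
 * the trimmed image start. */
static unsigned long trimmed_size(unsigned long symbol_addr)
{
	return symbol_addr - TRIM_HEAD_SIZE;
}

/* Mirrors cmd_trim_head: tail -c +N starts output at byte N (1-based),
 * so skipping exactly TRIM_HEAD_SIZE bytes needs N = TRIM_HEAD_SIZE + 1. */
static unsigned long tail_start_byte(void)
{
	return TRIM_HEAD_SIZE + 1;
}
```

Getting the off-by-one wrong in either place would shift the decompressed payload by a byte, so both expressions have to agree on the same TRIM_HEAD_SIZE constant.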
@@ -23,12 +23,10 @@ ENTRY(startup_continue)
 	aghi	%r15,-160
 	brasl	%r14,decompress_kernel
 # Set up registers for memory mover. We move the decompressed image to
-# 0x11000, starting at offset 0x11000 in the decompressed image so
-# that code living at 0x11000 in the image will end up at 0x11000 in
-# memory.
+# 0x11000, where startup_continue of the decompressed image is supposed
+# to be.
 	lgr	%r4,%r2
 	lg	%r2,.Loffset-.LPG1(%r13)
 	la	%r4,0(%r2,%r4)
 	lg	%r3,.Lmvsize-.LPG1(%r13)
 	lgr	%r5,%r3
 # Move the memory mover someplace safe so it doesn't overwrite itself.
@@ -27,8 +27,8 @@
 /* Symbols defined by linker scripts */
 extern char input_data[];
 extern int input_len;
-extern char _text, _end;
-extern char _bss, _ebss;
+extern char _end[];
+extern char _bss[], _ebss[];
 
 static void error(char *m);
@@ -144,7 +144,7 @@ unsigned long decompress_kernel(void)
 {
 	void *output, *kernel_end;
 
-	output = (void *) ALIGN((unsigned long) &_end + HEAP_SIZE, PAGE_SIZE);
+	output = (void *) ALIGN((unsigned long) _end + HEAP_SIZE, PAGE_SIZE);
 	kernel_end = output + SZ__bss_start;
 	check_ipl_parmblock((void *) 0, (unsigned long) kernel_end);
@@ -166,8 +166,8 @@ unsigned long decompress_kernel(void)
 	 * Clear bss section. free_mem_ptr and free_mem_end_ptr need to be
 	 * initialized afterwards since they reside in bss.
 	 */
-	memset(&_bss, 0, &_ebss - &_bss);
-	free_mem_ptr = (unsigned long) &_end;
+	memset(_bss, 0, _ebss - _bss);
+	free_mem_ptr = (unsigned long) _end;
 	free_mem_end_ptr = free_mem_ptr + HEAP_SIZE;
 
 	__decompress(input_data, input_len, NULL, NULL, output, 0, NULL, error);
@@ -52,6 +52,7 @@ SECTIONS
 	/* Sections to be discarded */
 	/DISCARD/ : {
 		*(.eh_frame)
 		*(__ex_table)
+		*(*__ksymtab*)
 	}
 }
@@ -1047,6 +1047,7 @@ static struct aead_alg gcm_aes_aead = {
 
 static struct crypto_alg *aes_s390_algs_ptr[5];
 static int aes_s390_algs_num;
+static struct aead_alg *aes_s390_aead_alg;
 
 static int aes_s390_register_alg(struct crypto_alg *alg)
 {
@@ -1065,7 +1066,8 @@ static void aes_s390_fini(void)
 	if (ctrblk)
 		free_page((unsigned long) ctrblk);
 
-	crypto_unregister_aead(&gcm_aes_aead);
+	if (aes_s390_aead_alg)
+		crypto_unregister_aead(aes_s390_aead_alg);
 }
 
 static int __init aes_s390_init(void)
@@ -1123,6 +1125,7 @@ static int __init aes_s390_init(void)
 		ret = crypto_register_aead(&gcm_aes_aead);
 		if (ret)
 			goto out_err;
+		aes_s390_aead_alg = &gcm_aes_aead;
 	}
 
 	return 0;
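The aes_s390 fix above follows a common teardown pattern: remember a registration only after it succeeded, and unregister only what was remembered, so module unload cannot touch an alg that was never registered. A minimal userspace sketch with stand-in types (the crypto_* helpers here are fakes, not the kernel crypto API):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the kernel's aead_alg object (assumption for this sketch). */
struct aead_alg { int registered; };

static struct aead_alg gcm_aes_aead;
static struct aead_alg *aes_s390_aead_alg; /* NULL until registration worked */

static int crypto_register_aead(struct aead_alg *alg)
{
	alg->registered = 1;
	return 0;	/* a real register call can fail */
}

static void crypto_unregister_aead(struct aead_alg *alg)
{
	alg->registered = 0;
}

/* Only remember the alg once registration succeeded. */
static void aes_init_sketch(void)
{
	if (crypto_register_aead(&gcm_aes_aead) == 0)
		aes_s390_aead_alg = &gcm_aes_aead;
}

/* Only unregister what was remembered; a NULL pointer means init never
 * got that far, and unregistering would crash (the bug being fixed). */
static void aes_fini_sketch(void)
{
	if (aes_s390_aead_alg)
		crypto_unregister_aead(aes_s390_aead_alg);
	aes_s390_aead_alg = NULL;
}
```

Calling the fini path before a successful init is now a harmless no-op, which is exactly what the "Fix kernel crash on aes_s390 module remove" commit needs.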
arch/s390/include/asm/alternative-asm.h (new file, 108 lines)
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_S390_ALTERNATIVE_ASM_H
+#define _ASM_S390_ALTERNATIVE_ASM_H
+
+#ifdef __ASSEMBLY__
+
+/*
+ * Check the length of an instruction sequence. The length may not be larger
+ * than 254 bytes and it has to be divisible by 2.
+ */
+.macro alt_len_check start,end
+	.if ( \end - \start ) > 254
+	.error "cpu alternatives does not support instructions blocks > 254 bytes\n"
+	.endif
+	.if ( \end - \start ) % 2
+	.error "cpu alternatives instructions length is odd\n"
+	.endif
+.endm
+
+/*
+ * Issue one struct alt_instr descriptor entry (need to put it into
+ * the section .altinstructions, see below). This entry contains
+ * enough information for the alternatives patching code to patch an
+ * instruction. See apply_alternatives().
+ */
+.macro alt_entry orig_start, orig_end, alt_start, alt_end, feature
+	.long	\orig_start - .
+	.long	\alt_start - .
+	.word	\feature
+	.byte	\orig_end - \orig_start
+	.byte	\alt_end - \alt_start
+.endm
+
+/*
+ * Fill up @bytes with nops. The macro emits 6-byte nop instructions
+ * for the bulk of the area, possibly followed by a 4-byte and/or
+ * a 2-byte nop if the size of the area is not divisible by 6.
+ */
+.macro alt_pad_fill bytes
+	.fill	( \bytes ) / 6, 6, 0xc0040000
+	.fill	( \bytes ) % 6 / 4, 4, 0x47000000
+	.fill	( \bytes ) % 6 % 4 / 2, 2, 0x0700
+.endm
+
+/*
+ * Fill up @bytes with nops. If the number of bytes is larger
+ * than 6, emit a jg instruction to branch over all nops, then
+ * fill an area of size (@bytes - 6) with nop instructions.
+ */
+.macro alt_pad bytes
+	.if ( \bytes > 0 )
+	.if ( \bytes > 6 )
+	jg	. + \bytes
+	alt_pad_fill \bytes - 6
+	.else
+	alt_pad_fill \bytes
+	.endif
+	.endif
+.endm
+
+/*
+ * Define an alternative between two instructions. If @feature is
+ * present, early code in apply_alternatives() replaces @oldinstr with
+ * @newinstr. ".skip" directive takes care of proper instruction padding
+ * in case @newinstr is longer than @oldinstr.
+ */
+.macro ALTERNATIVE oldinstr, newinstr, feature
+	.pushsection .altinstr_replacement,"ax"
+770:	\newinstr
+771:	.popsection
+772:	\oldinstr
+773:	alt_len_check 770b, 771b
+	alt_len_check 772b, 773b
+	alt_pad ( ( 771b - 770b ) - ( 773b - 772b ) )
+774:	.pushsection .altinstructions,"a"
+	alt_entry 772b, 774b, 770b, 771b, \feature
+	.popsection
+.endm
+
+/*
+ * Define an alternative between two instructions. If @feature is
+ * present, early code in apply_alternatives() replaces @oldinstr with
+ * @newinstr. ".skip" directive takes care of proper instruction padding
+ * in case @newinstr is longer than @oldinstr.
+ */
+.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
+	.pushsection .altinstr_replacement,"ax"
+770:	\newinstr1
+771:	\newinstr2
+772:	.popsection
+773:	\oldinstr
+774:	alt_len_check 770b, 771b
+	alt_len_check 771b, 772b
+	alt_len_check 773b, 774b
+	.if ( 771b - 770b > 772b - 771b )
+	alt_pad ( ( 771b - 770b ) - ( 774b - 773b ) )
+	.else
+	alt_pad ( ( 772b - 771b ) - ( 774b - 773b ) )
+	.endif
+775:	.pushsection .altinstructions,"a"
+	alt_entry 773b, 775b, 770b, 771b,\feature1
+	alt_entry 773b, 775b, 771b, 772b,\feature2
+	.popsection
+.endm
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_S390_ALTERNATIVE_ASM_H */
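The arithmetic in alt_pad_fill can be checked in isolation: an even byte count is covered by bytes/6 six-byte nops, then at most one 4-byte nop and at most one 2-byte nop for the remainder. A small C model of the same three `.fill` expressions:

```c
#include <assert.h>

/* Repeat counts for the three .fill directives in alt_pad_fill:
 * 6-byte nops for the bulk, then 0 or 1 four-byte nop, then 0 or 1
 * two-byte nop, exactly covering any even number of bytes. */
struct nop_fill {
	unsigned n6;	/* count of 6-byte nops (0xc0040000...) */
	unsigned n4;	/* count of 4-byte nops (0x47000000)    */
	unsigned n2;	/* count of 2-byte nops (0x0700)        */
};

static struct nop_fill alt_pad_fill_counts(unsigned bytes)
{
	struct nop_fill f;

	f.n6 = bytes / 6;
	f.n4 = bytes % 6 / 4;
	f.n2 = bytes % 6 % 4 / 2;
	return f;
}
```

Since alt_len_check rejects odd lengths, the remainder after the 6- and 4-byte nops is always 0 or 2, so the three counts always sum back to the requested size.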
@@ -230,5 +230,5 @@ int ccw_device_siosl(struct ccw_device *);
 
 extern void ccw_device_get_schid(struct ccw_device *, struct subchannel_id *);
 
-struct channel_path_desc *ccw_device_get_chp_desc(struct ccw_device *, int);
+struct channel_path_desc_fmt0 *ccw_device_get_chp_desc(struct ccw_device *, int);
 #endif /* _S390_CCWDEV_H_ */
@@ -9,7 +9,7 @@
 #include <uapi/asm/chpid.h>
 #include <asm/cio.h>
 
-struct channel_path_desc {
+struct channel_path_desc_fmt0 {
 	u8 flags;
 	u8 lsn;
 	u8 desc;
@@ -227,7 +227,7 @@ struct esw_eadm {
  * a field is valid; a field not being valid is always passed as %0.
  * If a unit check occurred, @ecw may contain sense data; this is retrieved
  * by the common I/O layer itself if the device doesn't support concurrent
- * sense (so that the device driver never needs to perform basic sene itself).
+ * sense (so that the device driver never needs to perform basic sense itself).
  * For unsolicited interrupts, the irb is passed as-is (expect for sense data,
  * if applicable).
  */
@@ -29,12 +29,12 @@
 /* CPU measurement facility support */
 static inline int cpum_cf_avail(void)
 {
-	return MACHINE_HAS_LPP && test_facility(67);
+	return test_facility(40) && test_facility(67);
 }
 
 static inline int cpum_sf_avail(void)
 {
-	return MACHINE_HAS_LPP && test_facility(68);
+	return test_facility(40) && test_facility(68);
 }
@@ -32,8 +32,10 @@ struct css_general_char {
 	u32 fcx : 1;	 /* bit  88 */
 	u32 : 19;
 	u32 alt_ssi : 1; /* bit 108 */
-	u32:1;
-	u32 narf:1;	 /* bit 110 */
+	u32 : 1;
+	u32 narf : 1;	 /* bit 110 */
+	u32 : 12;
+	u32 util_str : 1;/* bit 123 */
 } __packed;
 
 extern struct css_general_char css_general_characteristics;
@@ -6,12 +6,10 @@
 
 #include <linux/types.h>
 
-extern int nospec_call_disable;
-extern int nospec_return_disable;
+extern int nospec_disable;
 
 void nospec_init_branches(void);
-void nospec_call_revert(s32 *start, s32 *end);
-void nospec_return_revert(s32 *start, s32 *end);
+void nospec_revert(s32 *start, s32 *end);
 
 #endif /* __ASSEMBLY__ */
@@ -151,4 +151,7 @@ void vmem_map_init(void);
 void *vmem_crst_alloc(unsigned long val);
 pte_t *vmem_pte_alloc(void);
 
+unsigned long base_asce_alloc(unsigned long addr, unsigned long num_pages);
+void base_asce_free(unsigned long asce);
+
 #endif /* _S390_PGALLOC_H */
@@ -390,10 +390,10 @@ static inline int scsw_cmd_is_valid_key(union scsw *scsw)
 }
 
 /**
- * scsw_cmd_is_valid_sctl - check fctl field validity
+ * scsw_cmd_is_valid_sctl - check sctl field validity
  * @scsw: pointer to scsw
  *
- * Return non-zero if the fctl field of the specified command mode scsw is
+ * Return non-zero if the sctl field of the specified command mode scsw is
  * valid, zero otherwise.
  */
 static inline int scsw_cmd_is_valid_sctl(union scsw *scsw)
@@ -25,7 +25,6 @@
 #define MACHINE_FLAG_DIAG44	_BITUL(6)
 #define MACHINE_FLAG_EDAT1	_BITUL(7)
 #define MACHINE_FLAG_EDAT2	_BITUL(8)
-#define MACHINE_FLAG_LPP	_BITUL(9)
 #define MACHINE_FLAG_TOPOLOGY	_BITUL(10)
 #define MACHINE_FLAG_TE		_BITUL(11)
 #define MACHINE_FLAG_TLB_LC	_BITUL(12)
@@ -66,7 +65,6 @@ extern void detect_memory_memblock(void);
 #define MACHINE_HAS_DIAG44	(S390_lowcore.machine_flags & MACHINE_FLAG_DIAG44)
 #define MACHINE_HAS_EDAT1	(S390_lowcore.machine_flags & MACHINE_FLAG_EDAT1)
 #define MACHINE_HAS_EDAT2	(S390_lowcore.machine_flags & MACHINE_FLAG_EDAT2)
-#define MACHINE_HAS_LPP		(S390_lowcore.machine_flags & MACHINE_FLAG_LPP)
 #define MACHINE_HAS_TOPOLOGY	(S390_lowcore.machine_flags & MACHINE_FLAG_TOPOLOGY)
 #define MACHINE_HAS_TE		(S390_lowcore.machine_flags & MACHINE_FLAG_TE)
 #define MACHINE_HAS_TLB_LC	(S390_lowcore.machine_flags & MACHINE_FLAG_TLB_LC)
@@ -68,25 +68,27 @@ typedef struct dasd_information2_t {
 #define DASD_FORMAT_CDL 2
 /*
  * values to be used for dasd_information_t.features
- * 0x00: default features
- * 0x01: readonly (ro)
- * 0x02: use diag discipline (diag)
- * 0x04: set the device initially online (internal use only)
- * 0x08: enable ERP related logging
- * 0x10: allow I/O to fail on lost paths
- * 0x20: allow I/O to fail when a lock was stolen
- * 0x40: give access to raw eckd data
- * 0x80: enable discard support
+ * 0x000: default features
+ * 0x001: readonly (ro)
+ * 0x002: use diag discipline (diag)
+ * 0x004: set the device initially online (internal use only)
+ * 0x008: enable ERP related logging
+ * 0x010: allow I/O to fail on lost paths
+ * 0x020: allow I/O to fail when a lock was stolen
+ * 0x040: give access to raw eckd data
+ * 0x080: enable discard support
+ * 0x100: enable autodisable for IFCC errors (default)
 */
-#define DASD_FEATURE_DEFAULT	0x00
-#define DASD_FEATURE_READONLY	0x01
-#define DASD_FEATURE_USEDIAG	0x02
-#define DASD_FEATURE_INITIAL_ONLINE	0x04
-#define DASD_FEATURE_ERPLOG	0x08
-#define DASD_FEATURE_FAILFAST	0x10
-#define DASD_FEATURE_FAILONSLCK	0x20
-#define DASD_FEATURE_USERAW	0x40
-#define DASD_FEATURE_DISCARD	0x80
+#define DASD_FEATURE_READONLY	0x001
+#define DASD_FEATURE_USEDIAG	0x002
+#define DASD_FEATURE_INITIAL_ONLINE	0x004
+#define DASD_FEATURE_ERPLOG	0x008
+#define DASD_FEATURE_FAILFAST	0x010
+#define DASD_FEATURE_FAILONSLCK	0x020
+#define DASD_FEATURE_USERAW	0x040
+#define DASD_FEATURE_DISCARD	0x080
+#define DASD_FEATURE_PATH_AUTODISABLE 0x100
+#define DASD_FEATURE_DEFAULT	DASD_FEATURE_PATH_AUTODISABLE
 
 #define DASD_PARTN_BITS 2
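The widened feature flags above stay single-bit, so they still compose with plain bitwise operations; only the constants grew a digit to make room for the new PATH_AUTODISABLE bit, which also becomes the default. A small C sketch using the values from the hunk (the dasd_feature_set helper is illustrative, not a kernel function):

```c
#include <assert.h>

/* Feature bits exactly as defined in the hunk above. */
#define DASD_FEATURE_READONLY         0x001
#define DASD_FEATURE_USEDIAG          0x002
#define DASD_FEATURE_INITIAL_ONLINE   0x004
#define DASD_FEATURE_ERPLOG           0x008
#define DASD_FEATURE_FAILFAST         0x010
#define DASD_FEATURE_FAILONSLCK       0x020
#define DASD_FEATURE_USERAW           0x040
#define DASD_FEATURE_DISCARD          0x080
#define DASD_FEATURE_PATH_AUTODISABLE 0x100
#define DASD_FEATURE_DEFAULT          DASD_FEATURE_PATH_AUTODISABLE

/* Test whether a feature bit is set in a device's feature word. */
static int dasd_feature_set(unsigned int features, unsigned int flag)
{
	return (features & flag) != 0;
}
```

Because DASD_FEATURE_DEFAULT is now an alias for PATH_AUTODISABLE rather than 0, freshly configured devices get the IFCC autodisable behavior unless it is explicitly cleared.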
@@ -61,11 +61,11 @@ obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o als.o
 obj-y	+= sysinfo.o jump_label.o lgr.o os_info.o machine_kexec.o pgm_check.o
 obj-y	+= runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o
 obj-y	+= entry.o reipl.o relocate_kernel.o kdebugfs.o alternative.o
+obj-y	+= nospec-branch.o
 
 extra-y	+= head.o head64.o vmlinux.lds
 
-obj-$(CONFIG_EXPOLINE)	+= nospec-branch.o
-CFLAGS_REMOVE_expoline.o	+= $(CC_FLAGS_EXPOLINE)
+CFLAGS_REMOVE_nospec-branch.o	+= $(CC_FLAGS_EXPOLINE)
 
 obj-$(CONFIG_MODULES)	+= module.o
 obj-$(CONFIG_SMP)	+= smp.o
@@ -2,6 +2,7 @@
 #include <linux/module.h>
 #include <asm/alternative.h>
 #include <asm/facility.h>
+#include <asm/nospec-branch.h>
 
 #define MAX_PATCH_LEN (255 - 1)
@@ -15,29 +16,6 @@ static int __init disable_alternative_instructions(char *str)
 
 early_param("noaltinstr", disable_alternative_instructions);
 
-static int __init nobp_setup_early(char *str)
-{
-	bool enabled;
-	int rc;
-
-	rc = kstrtobool(str, &enabled);
-	if (rc)
-		return rc;
-	if (enabled && test_facility(82))
-		__set_facility(82, S390_lowcore.alt_stfle_fac_list);
-	else
-		__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
-	return 0;
-}
-early_param("nobp", nobp_setup_early);
-
-static int __init nospec_setup_early(char *str)
-{
-	__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
-	return 0;
-}
-early_param("nospec", nospec_setup_early);
-
 struct brcl_insn {
 	u16 opc;
 	s32 disp;
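The nobp_setup_early function being moved above combines kstrtobool-style parsing with a facility check: the nobp defense is armed only when the user asked for it and the machine provides facility 82. A userspace sketch of the same decision logic, with parse_bool_sketch standing in for kstrtobool and a plain boolean standing in for test_facility(82) (both are assumptions of this sketch):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Minimal stand-in for kstrtobool: accept the common on/off
 * spellings, reject anything else with an error. */
static int parse_bool_sketch(const char *s, bool *res)
{
	if (!strcmp(s, "1") || !strcmp(s, "y") || !strcmp(s, "on")) {
		*res = true;
		return 0;
	}
	if (!strcmp(s, "0") || !strcmp(s, "n") || !strcmp(s, "off")) {
		*res = false;
		return 0;
	}
	return -1;
}

/* Returns whether the nobp control (facility bit 82 in the real code)
 * should be set: the parameter must parse, be enabled, and the
 * machine must actually provide the facility. */
static bool nobp_setup_sketch(const char *arg, bool machine_has_fac82)
{
	bool enabled;

	if (parse_bool_sketch(arg, &enabled))
		return false;
	return enabled && machine_has_fac82;
}
```

A parse failure leaves the control untouched in the real code; here it simply reports the bit as not set, which is enough to show the and-of-three-conditions shape.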
@@ -63,6 +63,7 @@ int main(void)
 	OFFSET(__SF_SIE_CONTROL, stack_frame, empty1[0]);
 	OFFSET(__SF_SIE_SAVEAREA, stack_frame, empty1[1]);
 	OFFSET(__SF_SIE_REASON, stack_frame, empty1[2]);
+	OFFSET(__SF_SIE_FLAGS, stack_frame, empty1[3]);
 	BLANK();
 	/* timeval/timezone offsets for use by vdso */
 	OFFSET(__VDSO_UPD_COUNT, vdso_data, tb_update_count);
@@ -67,7 +67,7 @@ static noinline __init void init_kernel_storage_key(void)
 #if PAGE_DEFAULT_KEY
 	unsigned long end_pfn, init_pfn;
 
-	end_pfn = PFN_UP(__pa(&_end));
+	end_pfn = PFN_UP(__pa(_end));
 
 	for (init_pfn = 0 ; init_pfn < end_pfn; init_pfn++)
 		page_set_storage_key(init_pfn << PAGE_SHIFT,
|
||||
S390_lowcore.machine_flags |= MACHINE_FLAG_EDAT2;
|
||||
if (test_facility(3))
|
||||
S390_lowcore.machine_flags |= MACHINE_FLAG_IDTE;
|
||||
if (test_facility(40))
|
||||
S390_lowcore.machine_flags |= MACHINE_FLAG_LPP;
|
||||
if (test_facility(50) && test_facility(73)) {
|
||||
S390_lowcore.machine_flags |= MACHINE_FLAG_TE;
|
||||
__ctl_set_bit(0, 55);
|
||||
|
@@ -11,6 +11,7 @@
 
 #include <linux/init.h>
 #include <linux/linkage.h>
+#include <asm/alternative-asm.h>
 #include <asm/processor.h>
 #include <asm/cache.h>
 #include <asm/ctl_reg.h>
@@ -57,6 +58,8 @@ _CIF_WORK = (_CIF_MCCK_PENDING | _CIF_ASCE_PRIMARY | \
 		 _CIF_ASCE_SECONDARY | _CIF_FPU)
 _PIF_WORK	= (_PIF_PER_TRAP | _PIF_SYSCALL_RESTART)
 
+_LPP_OFFSET	= __LC_LPP
+
 #define BASED(name) name-cleanup_critical(%r13)
 
 	.macro	TRACE_IRQS_ON
@@ -162,65 +165,22 @@ _PIF_WORK = (_PIF_PER_TRAP | _PIF_SYSCALL_RESTART)
 	.endm
 
 	.macro	BPOFF
-	.pushsection .altinstr_replacement, "ax"
-660:	.long	0xb2e8c000
-	.popsection
-661:	.long	0x47000000
-	.pushsection .altinstructions, "a"
-	.long 661b - .
-	.long 660b - .
-	.word 82
-	.byte 4
-	.byte 4
-	.popsection
+	ALTERNATIVE "", ".long 0xb2e8c000", 82
 	.endm
 
 	.macro	BPON
-	.pushsection .altinstr_replacement, "ax"
-662:	.long	0xb2e8d000
-	.popsection
-663:	.long	0x47000000
-	.pushsection .altinstructions, "a"
-	.long 663b - .
-	.long 662b - .
-	.word 82
-	.byte 4
-	.byte 4
-	.popsection
+	ALTERNATIVE "", ".long 0xb2e8d000", 82
 	.endm
 
 	.macro BPENTER tif_ptr,tif_mask
-	.pushsection .altinstr_replacement, "ax"
-662:	.word	0xc004, 0x0000, 0x0000	# 6 byte nop
-	.word	0xc004, 0x0000, 0x0000	# 6 byte nop
-	.popsection
-664:	TSTMSK	\tif_ptr,\tif_mask
-	jz	. + 8
-	.long	0xb2e8d000
-	.pushsection .altinstructions, "a"
-	.long 664b - .
-	.long 662b - .
-	.word 82
-	.byte 12
-	.byte 12
-	.popsection
+	ALTERNATIVE "TSTMSK \tif_ptr,\tif_mask; jz .+8; .long 0xb2e8d000", \
+		    "", 82
 	.endm
 
 	.macro BPEXIT tif_ptr,tif_mask
 	TSTMSK	\tif_ptr,\tif_mask
-	.pushsection .altinstr_replacement, "ax"
-662:	jnz	. + 8
-	.long	0xb2e8d000
-	.popsection
-664:	jz	. + 8
-	.long	0xb2e8c000
-	.pushsection .altinstructions, "a"
-	.long 664b - .
-	.long 662b - .
-	.word 82
-	.byte 8
-	.byte 8
-	.popsection
+	ALTERNATIVE "jz .+8; .long 0xb2e8c000", \
+		    "jnz .+8; .long 0xb2e8d000", 82
 	.endm
 
 #ifdef CONFIG_EXPOLINE
@@ -323,10 +283,8 @@ ENTRY(__switch_to)
 	aghi	%r3,__TASK_pid
 	mvc	__LC_CURRENT_PID(4,%r0),0(%r3)	# store pid of next
 	lmg	%r6,%r15,__SF_GPRS(%r15)	# load gprs of next task
-	TSTMSK	__LC_MACHINE_FLAGS,MACHINE_FLAG_LPP
-	jz	0f
-	.insn	s,0xb2800000,__LC_LPP		# set program parameter
-0:	BR_R1USE_R14
+	ALTERNATIVE "", ".insn s,0xb2800000,_LPP_OFFSET", 40
+	BR_R1USE_R14
 
 .L__critical_start:
 
@@ -339,10 +297,10 @@ ENTRY(__switch_to)
 ENTRY(sie64a)
 	stmg	%r6,%r14,__SF_GPRS(%r15)	# save kernel registers
 	lg	%r12,__LC_CURRENT
-	stg	%r2,__SF_EMPTY(%r15)		# save control block pointer
-	stg	%r3,__SF_EMPTY+8(%r15)		# save guest register save area
-	xc	__SF_EMPTY+16(8,%r15),__SF_EMPTY+16(%r15) # reason code = 0
-	mvc	__SF_EMPTY+24(8,%r15),__TI_flags(%r12)	# copy thread flags
+	stg	%r2,__SF_SIE_CONTROL(%r15)	# save control block pointer
+	stg	%r3,__SF_SIE_SAVEAREA(%r15)	# save guest register save area
+	xc	__SF_SIE_REASON(8,%r15),__SF_SIE_REASON(%r15) # reason code = 0
+	mvc	__SF_SIE_FLAGS(8,%r15),__TI_flags(%r12)	# copy thread flags
 	TSTMSK	__LC_CPU_FLAGS,_CIF_FPU		# load guest fp/vx registers ?
 	jno	.Lsie_load_guest_gprs
 	brasl	%r14,load_fpu_regs		# load guest fp/vx regs
@@ -353,18 +311,18 @@ ENTRY(sie64a)
 	jz	.Lsie_gmap
 	lctlg	%c1,%c1,__GMAP_ASCE(%r14)	# load primary asce
 .Lsie_gmap:
-	lg	%r14,__SF_EMPTY(%r15)		# get control block pointer
+	lg	%r14,__SF_SIE_CONTROL(%r15)	# get control block pointer
 	oi	__SIE_PROG0C+3(%r14),1		# we are going into SIE now
 	tm	__SIE_PROG20+3(%r14),3		# last exit...
 	jnz	.Lsie_skip
 	TSTMSK	__LC_CPU_FLAGS,_CIF_FPU
 	jo	.Lsie_skip			# exit if fp/vx regs changed
-	BPEXIT	__SF_EMPTY+24(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
+	BPEXIT	__SF_SIE_FLAGS(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
 .Lsie_entry:
 	sie	0(%r14)
 .Lsie_exit:
 	BPOFF
-	BPENTER	__SF_EMPTY+24(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
+	BPENTER	__SF_SIE_FLAGS(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
 .Lsie_skip:
 	ni	__SIE_PROG0C+3(%r14),0xfe	# no longer in SIE
 	lctlg	%c1,%c1,__LC_USER_ASCE		# load primary asce
@@ -383,7 +341,7 @@ ENTRY(sie64a)
 	nopr	7
 	.globl sie_exit
 sie_exit:
-	lg	%r14,__SF_EMPTY+8(%r15)		# load guest register save area
+	lg	%r14,__SF_SIE_SAVEAREA(%r15)	# load guest register save area
 	stmg	%r0,%r13,0(%r14)		# save guest gprs 0-13
 	xgr	%r0,%r0				# clear guest registers to
 	xgr	%r1,%r1				# prevent speculative use
@@ -392,11 +350,11 @@ sie_exit:
 	xgr	%r4,%r4
 	xgr	%r5,%r5
 	lmg	%r6,%r14,__SF_GPRS(%r15)	# restore kernel registers
-	lg	%r2,__SF_EMPTY+16(%r15)		# return exit reason code
+	lg	%r2,__SF_SIE_REASON(%r15)	# return exit reason code
 	BR_R1USE_R14
 .Lsie_fault:
 	lghi	%r14,-EFAULT
-	stg	%r14,__SF_EMPTY+16(%r15)	# set exit reason code
+	stg	%r14,__SF_SIE_REASON(%r15)	# set exit reason code
 	j	sie_exit
 
 	EX_TABLE(.Lrewind_pad6,.Lsie_fault)
@@ -685,7 +643,7 @@ ENTRY(pgm_check_handler)
 	slg	%r14,BASED(.Lsie_critical_start)
 	clg	%r14,BASED(.Lsie_critical_length)
 	jhe	0f
-	lg	%r14,__SF_EMPTY(%r15)		# get control block pointer
+	lg	%r14,__SF_SIE_CONTROL(%r15)	# get control block pointer
 	ni	__SIE_PROG0C+3(%r14),0xfe	# no longer in SIE
 	lctlg	%c1,%c1,__LC_USER_ASCE		# load primary asce
 	larl	%r9,sie_exit			# skip forward to sie_exit
@@ -1285,10 +1243,8 @@ ENTRY(mcck_int_handler)
 # PSW restart interrupt handler
 #
 ENTRY(restart_int_handler)
-	TSTMSK	__LC_MACHINE_FLAGS,MACHINE_FLAG_LPP
-	jz	0f
-	.insn	s,0xb2800000,__LC_LPP
-0:	stg	%r15,__LC_SAVE_AREA_RESTART
+	ALTERNATIVE "", ".insn s,0xb2800000,_LPP_OFFSET", 40
+	stg	%r15,__LC_SAVE_AREA_RESTART
 	lg	%r15,__LC_RESTART_STACK
 	aghi	%r15,-__PT_SIZE			# create pt_regs on stack
 	xc	0(__PT_SIZE,%r15),0(%r15)
@@ -1397,8 +1353,8 @@ cleanup_critical:
 	clg	%r9,BASED(.Lsie_crit_mcck_length)
 	jh	1f
 	oi	__LC_CPU_FLAGS+7, _CIF_MCCK_GUEST
-1:	BPENTER __SF_EMPTY+24(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
-	lg	%r9,__SF_EMPTY(%r15)		# get control block pointer
+1:	BPENTER __SF_SIE_FLAGS(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
+	lg	%r9,__SF_SIE_CONTROL(%r15)	# get control block pointer
 	ni	__SIE_PROG0C+3(%r9),0xfe	# no longer in SIE
 	lctlg	%c1,%c1,__LC_USER_ASCE		# load primary asce
 	larl	%r9,sie_exit			# skip forward to sie_exit
 
@@ -159,7 +159,7 @@ int module_frob_arch_sections(Elf_Ehdr *hdr, Elf_Shdr *sechdrs,
 	me->core_layout.size += me->arch.got_size;
 	me->arch.plt_offset = me->core_layout.size;
 	if (me->arch.plt_size) {
-		if (IS_ENABLED(CONFIG_EXPOLINE) && !nospec_call_disable)
+		if (IS_ENABLED(CONFIG_EXPOLINE) && !nospec_disable)
 			me->arch.plt_size += PLT_ENTRY_SIZE;
 		me->core_layout.size += me->arch.plt_size;
 	}
@@ -318,8 +318,7 @@ static int apply_rela(Elf_Rela *rela, Elf_Addr base, Elf_Sym *symtab,
 			info->plt_offset;
 		ip[0] = 0x0d10e310;	/* basr 1,0  */
 		ip[1] = 0x100a0004;	/* lg	1,10(1) */
-		if (IS_ENABLED(CONFIG_EXPOLINE) &&
-		    !nospec_call_disable) {
+		if (IS_ENABLED(CONFIG_EXPOLINE) && !nospec_disable) {
 			unsigned int *ij;
 			ij = me->core_layout.base +
 				me->arch.plt_offset +
@@ -440,7 +439,7 @@ int module_finalize(const Elf_Ehdr *hdr,
 	void *aseg;
 
 	if (IS_ENABLED(CONFIG_EXPOLINE) &&
-	    !nospec_call_disable && me->arch.plt_size) {
+	    !nospec_disable && me->arch.plt_size) {
 		unsigned int *ij;
 
 		ij = me->core_layout.base + me->arch.plt_offset +
@@ -467,11 +466,11 @@ int module_finalize(const Elf_Ehdr *hdr,
 
 		if (IS_ENABLED(CONFIG_EXPOLINE) &&
 		    (!strcmp(".nospec_call_table", secname)))
-			nospec_call_revert(aseg, aseg + s->sh_size);
+			nospec_revert(aseg, aseg + s->sh_size);
 
 		if (IS_ENABLED(CONFIG_EXPOLINE) &&
 		    (!strcmp(".nospec_return_table", secname)))
-			nospec_return_revert(aseg, aseg + s->sh_size);
+			nospec_revert(aseg, aseg + s->sh_size);
 	}
 
 	jump_label_apply_nops(me);
 
@@ -1,32 +1,108 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/module.h>
+#include <linux/device.h>
 #include <asm/nospec-branch.h>
 
-int nospec_call_disable = IS_ENABLED(CONFIG_EXPOLINE_OFF);
-int nospec_return_disable = !IS_ENABLED(CONFIG_EXPOLINE_FULL);
+static int __init nobp_setup_early(char *str)
+{
+	bool enabled;
+	int rc;
+
+	rc = kstrtobool(str, &enabled);
+	if (rc)
+		return rc;
+	if (enabled && test_facility(82)) {
+		/*
+		 * The user explicitely requested nobp=1, enable it and
+		 * disable the expoline support.
+		 */
+		__set_facility(82, S390_lowcore.alt_stfle_fac_list);
+		if (IS_ENABLED(CONFIG_EXPOLINE))
+			nospec_disable = 1;
+	} else {
+		__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
+	}
+	return 0;
+}
+early_param("nobp", nobp_setup_early);
+
+static int __init nospec_setup_early(char *str)
+{
+	__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
+	return 0;
+}
+early_param("nospec", nospec_setup_early);
+
+static int __init nospec_report(void)
+{
+	if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable)
+		pr_info("Spectre V2 mitigation: execute trampolines.\n");
+	if (__test_facility(82, S390_lowcore.alt_stfle_fac_list))
+		pr_info("Spectre V2 mitigation: limited branch prediction.\n");
+	return 0;
+}
+arch_initcall(nospec_report);
+
+#ifdef CONFIG_SYSFS
+ssize_t cpu_show_spectre_v1(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
+
+ssize_t cpu_show_spectre_v2(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable)
+		return sprintf(buf, "Mitigation: execute trampolines\n");
+	if (__test_facility(82, S390_lowcore.alt_stfle_fac_list))
+		return sprintf(buf, "Mitigation: limited branch prediction.\n");
+	return sprintf(buf, "Vulnerable\n");
+}
+#endif
+
+#ifdef CONFIG_EXPOLINE
+
+int nospec_disable = IS_ENABLED(CONFIG_EXPOLINE_OFF);
 
 static int __init nospectre_v2_setup_early(char *str)
 {
-	nospec_call_disable = 1;
-	nospec_return_disable = 1;
+	nospec_disable = 1;
 	return 0;
 }
 early_param("nospectre_v2", nospectre_v2_setup_early);
 
+static int __init spectre_v2_auto_early(void)
+{
+	if (IS_ENABLED(CC_USING_EXPOLINE)) {
+		/*
+		 * The kernel has been compiled with expolines.
+		 * Keep expolines enabled and disable nobp.
+		 */
+		nospec_disable = 0;
+		__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
+	}
+	/*
+	 * If the kernel has not been compiled with expolines the
+	 * nobp setting decides what is done, this depends on the
+	 * CONFIG_KERNEL_NOBP option and the nobp/nospec parameters.
+	 */
+	return 0;
+}
+#ifdef CONFIG_EXPOLINE_AUTO
+early_initcall(spectre_v2_auto_early);
+#endif
+
 static int __init spectre_v2_setup_early(char *str)
 {
 	if (str && !strncmp(str, "on", 2)) {
-		nospec_call_disable = 0;
-		nospec_return_disable = 0;
-	}
-	if (str && !strncmp(str, "off", 3)) {
-		nospec_call_disable = 1;
-		nospec_return_disable = 1;
-	}
-	if (str && !strncmp(str, "auto", 4)) {
-		nospec_call_disable = 0;
-		nospec_return_disable = 1;
+		nospec_disable = 0;
+		__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
 	}
+	if (str && !strncmp(str, "off", 3))
+		nospec_disable = 1;
+	if (str && !strncmp(str, "auto", 4))
+		spectre_v2_auto_early();
 	return 0;
 }
 early_param("spectre_v2", spectre_v2_setup_early);
@@ -79,15 +155,9 @@ static void __init_or_module __nospec_revert(s32 *start, s32 *end)
 	}
 }
 
-void __init_or_module nospec_call_revert(s32 *start, s32 *end)
+void __init_or_module nospec_revert(s32 *start, s32 *end)
 {
-	if (nospec_call_disable)
-		__nospec_revert(start, end);
-}
-
-void __init_or_module nospec_return_revert(s32 *start, s32 *end)
-{
-	if (nospec_return_disable)
+	if (nospec_disable)
 		__nospec_revert(start, end);
 }
 
@@ -95,6 +165,8 @@ extern s32 __nospec_call_start[], __nospec_call_end[];
 extern s32 __nospec_return_start[], __nospec_return_end[];
 void __init nospec_init_branches(void)
 {
-	nospec_call_revert(__nospec_call_start, __nospec_call_end);
-	nospec_return_revert(__nospec_return_start, __nospec_return_end);
+	nospec_revert(__nospec_call_start, __nospec_call_end);
+	nospec_revert(__nospec_return_start, __nospec_return_end);
 }
 
 #endif /* CONFIG_EXPOLINE */
@@ -221,6 +221,8 @@ static void __init conmode_default(void)
 		SET_CONSOLE_SCLP;
 #endif
 	}
+	if (IS_ENABLED(CONFIG_VT) && IS_ENABLED(CONFIG_DUMMY_CONSOLE))
+		conswitchp = &dummy_con;
 }
 
 #ifdef CONFIG_CRASH_DUMP
@@ -413,12 +415,12 @@ static void __init setup_resources(void)
 	struct memblock_region *reg;
 	int j;
 
-	code_resource.start = (unsigned long) &_text;
-	code_resource.end = (unsigned long) &_etext - 1;
-	data_resource.start = (unsigned long) &_etext;
-	data_resource.end = (unsigned long) &_edata - 1;
-	bss_resource.start = (unsigned long) &__bss_start;
-	bss_resource.end = (unsigned long) &__bss_stop - 1;
+	code_resource.start = (unsigned long) _text;
+	code_resource.end = (unsigned long) _etext - 1;
+	data_resource.start = (unsigned long) _etext;
+	data_resource.end = (unsigned long) _edata - 1;
+	bss_resource.start = (unsigned long) __bss_start;
+	bss_resource.end = (unsigned long) __bss_stop - 1;
 
 	for_each_memblock(memory, reg) {
 		res = memblock_virt_alloc(sizeof(*res), 8);
@@ -667,7 +669,7 @@ static void __init check_initrd(void)
  */
 static void __init reserve_kernel(void)
 {
-	unsigned long start_pfn = PFN_UP(__pa(&_end));
+	unsigned long start_pfn = PFN_UP(__pa(_end));
 
 #ifdef CONFIG_DMA_API_DEBUG
 	/*
@@ -888,9 +890,9 @@ void __init setup_arch(char **cmdline_p)
 
 	/* Is init_mm really needed? */
 	init_mm.start_code = PAGE_OFFSET;
-	init_mm.end_code = (unsigned long) &_etext;
-	init_mm.end_data = (unsigned long) &_edata;
-	init_mm.brk = (unsigned long) &_end;
+	init_mm.end_code = (unsigned long) _etext;
+	init_mm.end_data = (unsigned long) _edata;
+	init_mm.brk = (unsigned long) _end;
 
 	parse_early_param();
 #ifdef CONFIG_CRASH_DUMP
 
@@ -153,8 +153,8 @@ int pfn_is_nosave(unsigned long pfn)
 {
 	unsigned long nosave_begin_pfn = PFN_DOWN(__pa(&__nosave_begin));
 	unsigned long nosave_end_pfn = PFN_DOWN(__pa(&__nosave_end));
-	unsigned long end_rodata_pfn = PFN_DOWN(__pa(&__end_rodata)) - 1;
-	unsigned long stext_pfn = PFN_DOWN(__pa(&_stext));
+	unsigned long end_rodata_pfn = PFN_DOWN(__pa(__end_rodata)) - 1;
+	unsigned long stext_pfn = PFN_DOWN(__pa(_stext));
 
 	/* Always save lowcore pages (LC protection might be enabled). */
 	if (pfn <= LC_PAGES)
 
@@ -24,8 +24,8 @@ enum address_markers_idx {
 
 static struct addr_marker address_markers[] = {
 	[IDENTITY_NR]	  = {0, "Identity Mapping"},
-	[KERNEL_START_NR] = {(unsigned long)&_stext, "Kernel Image Start"},
-	[KERNEL_END_NR]	  = {(unsigned long)&_end, "Kernel Image End"},
+	[KERNEL_START_NR] = {(unsigned long)_stext, "Kernel Image Start"},
+	[KERNEL_END_NR]	  = {(unsigned long)_end, "Kernel Image End"},
 	[VMEMMAP_NR]	  = {0, "vmemmap Area"},
 	[VMALLOC_NR]	  = {0, "vmalloc Area"},
 	[MODULES_NR]	  = {0, "Modules Area"},
 
@@ -6,8 +6,9 @@
  *    Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
  */
 
-#include <linux/mm.h>
 #include <linux/sysctl.h>
 #include <linux/slab.h>
+#include <linux/mm.h>
 #include <asm/mmu_context.h>
 #include <asm/pgalloc.h>
 #include <asm/gmap.h>
@@ -366,3 +367,293 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
 	if ((*batch)->nr == MAX_TABLE_BATCH)
 		tlb_flush_mmu(tlb);
 }
+
+/*
+ * Base infrastructure required to generate basic asces, region, segment,
+ * and page tables that do not make use of enhanced features like EDAT1.
+ */
+
+static struct kmem_cache *base_pgt_cache;
+
+static unsigned long base_pgt_alloc(void)
+{
+	u64 *table;
+
+	table = kmem_cache_alloc(base_pgt_cache, GFP_KERNEL);
+	if (table)
+		memset64(table, _PAGE_INVALID, PTRS_PER_PTE);
+	return (unsigned long) table;
+}
+
+static void base_pgt_free(unsigned long table)
+{
+	kmem_cache_free(base_pgt_cache, (void *) table);
+}
+
+static unsigned long base_crst_alloc(unsigned long val)
+{
+	unsigned long table;
+
+	table = __get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
+	if (table)
+		crst_table_init((unsigned long *)table, val);
+	return table;
+}
+
+static void base_crst_free(unsigned long table)
+{
+	free_pages(table, CRST_ALLOC_ORDER);
+}
+
+#define BASE_ADDR_END_FUNC(NAME, SIZE)					\
+static inline unsigned long base_##NAME##_addr_end(unsigned long addr,	\
+						   unsigned long end)	\
+{									\
+	unsigned long next = (addr + (SIZE)) & ~((SIZE) - 1);		\
+									\
+	return (next - 1) < (end - 1) ? next : end;			\
+}
+
+BASE_ADDR_END_FUNC(page,    _PAGE_SIZE)
+BASE_ADDR_END_FUNC(segment, _SEGMENT_SIZE)
+BASE_ADDR_END_FUNC(region3, _REGION3_SIZE)
+BASE_ADDR_END_FUNC(region2, _REGION2_SIZE)
+BASE_ADDR_END_FUNC(region1, _REGION1_SIZE)
+
+static inline unsigned long base_lra(unsigned long address)
+{
+	unsigned long real;
+
+	asm volatile(
+		"	lra	%0,0(%1)\n"
+		: "=d" (real) : "a" (address) : "cc");
+	return real;
+}
+
+static int base_page_walk(unsigned long origin, unsigned long addr,
+			  unsigned long end, int alloc)
+{
+	unsigned long *pte, next;
+
+	if (!alloc)
+		return 0;
+	pte = (unsigned long *) origin;
+	pte += (addr & _PAGE_INDEX) >> _PAGE_SHIFT;
+	do {
+		next = base_page_addr_end(addr, end);
+		*pte = base_lra(addr);
+	} while (pte++, addr = next, addr < end);
+	return 0;
+}
+
+static int base_segment_walk(unsigned long origin, unsigned long addr,
+			     unsigned long end, int alloc)
+{
+	unsigned long *ste, next, table;
+	int rc;
+
+	ste = (unsigned long *) origin;
+	ste += (addr & _SEGMENT_INDEX) >> _SEGMENT_SHIFT;
+	do {
+		next = base_segment_addr_end(addr, end);
+		if (*ste & _SEGMENT_ENTRY_INVALID) {
+			if (!alloc)
+				continue;
+			table = base_pgt_alloc();
+			if (!table)
+				return -ENOMEM;
+			*ste = table | _SEGMENT_ENTRY;
+		}
+		table = *ste & _SEGMENT_ENTRY_ORIGIN;
+		rc = base_page_walk(table, addr, next, alloc);
+		if (rc)
+			return rc;
+		if (!alloc)
+			base_pgt_free(table);
+		cond_resched();
+	} while (ste++, addr = next, addr < end);
+	return 0;
+}
+
+static int base_region3_walk(unsigned long origin, unsigned long addr,
+			     unsigned long end, int alloc)
+{
+	unsigned long *rtte, next, table;
+	int rc;
+
+	rtte = (unsigned long *) origin;
+	rtte += (addr & _REGION3_INDEX) >> _REGION3_SHIFT;
+	do {
+		next = base_region3_addr_end(addr, end);
+		if (*rtte & _REGION_ENTRY_INVALID) {
+			if (!alloc)
+				continue;
+			table = base_crst_alloc(_SEGMENT_ENTRY_EMPTY);
+			if (!table)
+				return -ENOMEM;
+			*rtte = table | _REGION3_ENTRY;
+		}
+		table = *rtte & _REGION_ENTRY_ORIGIN;
+		rc = base_segment_walk(table, addr, next, alloc);
+		if (rc)
+			return rc;
+		if (!alloc)
+			base_crst_free(table);
+	} while (rtte++, addr = next, addr < end);
+	return 0;
+}
+
+static int base_region2_walk(unsigned long origin, unsigned long addr,
+			     unsigned long end, int alloc)
+{
+	unsigned long *rste, next, table;
+	int rc;
+
+	rste = (unsigned long *) origin;
+	rste += (addr & _REGION2_INDEX) >> _REGION2_SHIFT;
+	do {
+		next = base_region2_addr_end(addr, end);
+		if (*rste & _REGION_ENTRY_INVALID) {
+			if (!alloc)
+				continue;
+			table = base_crst_alloc(_REGION3_ENTRY_EMPTY);
+			if (!table)
+				return -ENOMEM;
+			*rste = table | _REGION2_ENTRY;
+		}
+		table = *rste & _REGION_ENTRY_ORIGIN;
+		rc = base_region3_walk(table, addr, next, alloc);
+		if (rc)
+			return rc;
+		if (!alloc)
+			base_crst_free(table);
+	} while (rste++, addr = next, addr < end);
+	return 0;
+}
+
+static int base_region1_walk(unsigned long origin, unsigned long addr,
+			     unsigned long end, int alloc)
+{
+	unsigned long *rfte, next, table;
+	int rc;
+
+	rfte = (unsigned long *) origin;
+	rfte += (addr & _REGION1_INDEX) >> _REGION1_SHIFT;
+	do {
+		next = base_region1_addr_end(addr, end);
+		if (*rfte & _REGION_ENTRY_INVALID) {
+			if (!alloc)
+				continue;
+			table = base_crst_alloc(_REGION2_ENTRY_EMPTY);
+			if (!table)
+				return -ENOMEM;
+			*rfte = table | _REGION1_ENTRY;
+		}
+		table = *rfte & _REGION_ENTRY_ORIGIN;
+		rc = base_region2_walk(table, addr, next, alloc);
+		if (rc)
+			return rc;
+		if (!alloc)
+			base_crst_free(table);
+	} while (rfte++, addr = next, addr < end);
+	return 0;
+}
+
+/**
+ * base_asce_free - free asce and tables returned from base_asce_alloc()
+ * @asce: asce to be freed
+ *
+ * Frees all region, segment, and page tables that were allocated with a
+ * corresponding base_asce_alloc() call.
+ */
+void base_asce_free(unsigned long asce)
+{
+	unsigned long table = asce & _ASCE_ORIGIN;
+
+	if (!asce)
+		return;
+	switch (asce & _ASCE_TYPE_MASK) {
+	case _ASCE_TYPE_SEGMENT:
+		base_segment_walk(table, 0, _REGION3_SIZE, 0);
+		break;
+	case _ASCE_TYPE_REGION3:
+		base_region3_walk(table, 0, _REGION2_SIZE, 0);
+		break;
+	case _ASCE_TYPE_REGION2:
+		base_region2_walk(table, 0, _REGION1_SIZE, 0);
+		break;
+	case _ASCE_TYPE_REGION1:
+		base_region1_walk(table, 0, -_PAGE_SIZE, 0);
+		break;
+	}
+	base_crst_free(table);
+}
+
+static int base_pgt_cache_init(void)
+{
+	static DEFINE_MUTEX(base_pgt_cache_mutex);
+	unsigned long sz = _PAGE_TABLE_SIZE;
+
+	if (base_pgt_cache)
+		return 0;
+	mutex_lock(&base_pgt_cache_mutex);
+	if (!base_pgt_cache)
+		base_pgt_cache = kmem_cache_create("base_pgt", sz, sz, 0, NULL);
+	mutex_unlock(&base_pgt_cache_mutex);
+	return base_pgt_cache ? 0 : -ENOMEM;
+}
+
+/**
+ * base_asce_alloc - create kernel mapping without enhanced DAT features
+ * @addr: virtual start address of kernel mapping
+ * @num_pages: number of consecutive pages
+ *
+ * Generate an asce, including all required region, segment and page tables,
+ * that can be used to access the virtual kernel mapping. The difference is
+ * that the returned asce does not make use of any enhanced DAT features like
+ * e.g. large pages. This is required for some I/O functions that pass an
+ * asce, like e.g. some service call requests.
+ *
+ * Note: the returned asce may NEVER be attached to any cpu. It may only be
+ *	 used for I/O requests. tlb entries that might result because the
+ *	 asce was attached to a cpu won't be cleared.
+ */
+unsigned long base_asce_alloc(unsigned long addr, unsigned long num_pages)
+{
+	unsigned long asce, table, end;
+	int rc;
+
+	if (base_pgt_cache_init())
+		return 0;
+	end = addr + num_pages * PAGE_SIZE;
+	if (end <= _REGION3_SIZE) {
+		table = base_crst_alloc(_SEGMENT_ENTRY_EMPTY);
+		if (!table)
+			return 0;
+		rc = base_segment_walk(table, addr, end, 1);
+		asce = table | _ASCE_TYPE_SEGMENT | _ASCE_TABLE_LENGTH;
+	} else if (end <= _REGION2_SIZE) {
+		table = base_crst_alloc(_REGION3_ENTRY_EMPTY);
+		if (!table)
+			return 0;
+		rc = base_region3_walk(table, addr, end, 1);
+		asce = table | _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
+	} else if (end <= _REGION1_SIZE) {
+		table = base_crst_alloc(_REGION2_ENTRY_EMPTY);
+		if (!table)
+			return 0;
+		rc = base_region2_walk(table, addr, end, 1);
+		asce = table | _ASCE_TYPE_REGION2 | _ASCE_TABLE_LENGTH;
+	} else {
+		table = base_crst_alloc(_REGION1_ENTRY_EMPTY);
+		if (!table)
+			return 0;
+		rc = base_region1_walk(table, addr, end, 1);
+		asce = table | _ASCE_TYPE_REGION1 | _ASCE_TABLE_LENGTH;
+	}
+	if (rc) {
+		base_asce_free(asce);
+		asce = 0;
+	}
+	return asce;
+}
|
@ -3918,8 +3918,13 @@ static int dasd_generic_requeue_all_requests(struct dasd_device *device)
|
||||
cqr = refers;
|
||||
}
|
||||
|
||||
if (cqr->block)
|
||||
list_del_init(&cqr->blocklist);
|
||||
/*
|
||||
* _dasd_requeue_request already checked for a valid
|
||||
* blockdevice, no need to check again
|
||||
* all erp requests (cqr->refers) have a cqr->block
|
||||
* pointer copy from the original cqr
|
||||
*/
|
||||
list_del_init(&cqr->blocklist);
|
||||
cqr->block->base->discipline->free_cp(
|
||||
cqr, (struct request *) cqr->callback_data);
|
||||
}
|
||||
|
@@ -2214,15 +2214,28 @@ static void dasd_3990_erp_disable_path(struct dasd_device *device, __u8 lpum)
 {
 	int pos = pathmask_to_pos(lpum);
 
+	if (!(device->features & DASD_FEATURE_PATH_AUTODISABLE)) {
+		dev_err(&device->cdev->dev,
+			"Path %x.%02x (pathmask %02x) is operational despite excessive IFCCs\n",
+			device->path[pos].cssid, device->path[pos].chpid, lpum);
+		goto out;
+	}
+
 	/* no remaining path, cannot disable */
-	if (!(dasd_path_get_opm(device) & ~lpum))
-		return;
+	if (!(dasd_path_get_opm(device) & ~lpum)) {
+		dev_err(&device->cdev->dev,
+			"Last path %x.%02x (pathmask %02x) is operational despite excessive IFCCs\n",
+			device->path[pos].cssid, device->path[pos].chpid, lpum);
+		goto out;
+	}
 
 	dev_err(&device->cdev->dev,
 		"Path %x.%02x (pathmask %02x) is disabled - IFCC threshold exceeded\n",
 		device->path[pos].cssid, device->path[pos].chpid, lpum);
 	dasd_path_remove_opm(device, lpum);
 	dasd_path_add_ifccpm(device, lpum);
+
+out:
 	device->path[pos].errorclk = 0;
 	atomic_set(&device->path[pos].error_count, 0);
 }
 
@@ -1550,9 +1550,49 @@ dasd_path_threshold_store(struct device *dev, struct device_attribute *attr,
 	dasd_put_device(device);
 	return count;
 }
 
 static DEVICE_ATTR(path_threshold, 0644, dasd_path_threshold_show,
 		   dasd_path_threshold_store);
+
+/*
+ * configure if path is disabled after IFCC/CCC error threshold is
+ * exceeded
+ */
+static ssize_t
+dasd_path_autodisable_show(struct device *dev,
+			   struct device_attribute *attr, char *buf)
+{
+	struct dasd_devmap *devmap;
+	int flag;
+
+	devmap = dasd_find_busid(dev_name(dev));
+	if (!IS_ERR(devmap))
+		flag = (devmap->features & DASD_FEATURE_PATH_AUTODISABLE) != 0;
+	else
+		flag = (DASD_FEATURE_DEFAULT &
+			DASD_FEATURE_PATH_AUTODISABLE) != 0;
+	return snprintf(buf, PAGE_SIZE, flag ? "1\n" : "0\n");
+}
+
+static ssize_t
+dasd_path_autodisable_store(struct device *dev,
+			    struct device_attribute *attr,
+			    const char *buf, size_t count)
+{
+	unsigned int val;
+	int rc;
+
+	if (kstrtouint(buf, 0, &val) || val > 1)
+		return -EINVAL;
+
+	rc = dasd_set_feature(to_ccwdev(dev),
+			      DASD_FEATURE_PATH_AUTODISABLE, val);
+
+	return rc ? : count;
+}
+
+static DEVICE_ATTR(path_autodisable, 0644,
+		   dasd_path_autodisable_show,
+		   dasd_path_autodisable_store);
 /*
  * interval for IFCC/CCC checks
  * meaning time with no IFCC/CCC error before the error counter
@@ -1623,6 +1663,7 @@ static struct attribute * dasd_attrs[] = {
 	&dev_attr_host_access_count.attr,
 	&dev_attr_path_masks.attr,
 	&dev_attr_path_threshold.attr,
+	&dev_attr_path_autodisable.attr,
 	&dev_attr_path_interval.attr,
 	&dev_attr_path_reset.attr,
 	&dev_attr_hpf.attr,
 
@@ -214,24 +214,25 @@ static void set_ch_t(struct ch_t *geo, __u32 cyl, __u8 head)
 	geo->head |= head;
 }
 
-static int check_XRC(struct ccw1 *ccw, struct DE_eckd_data *data,
-		     struct dasd_device *device)
+static int set_timestamp(struct ccw1 *ccw, struct DE_eckd_data *data,
+			 struct dasd_device *device)
 {
 	struct dasd_eckd_private *private = device->private;
 	int rc;
 
-	if (!private->rdc_data.facilities.XRC_supported)
+	rc = get_phys_clock(&data->ep_sys_time);
+	/*
+	 * Ignore return code if XRC is not supported or
+	 * sync clock is switched off
+	 */
+	if ((rc && !private->rdc_data.facilities.XRC_supported) ||
+	    rc == -EOPNOTSUPP || rc == -EACCES)
 		return 0;
 
 	/* switch on System Time Stamp - needed for XRC Support */
 	data->ga_extended |= 0x08; /* switch on 'Time Stamp Valid'   */
 	data->ga_extended |= 0x02; /* switch on 'Extended Parameter' */
 
-	rc = get_phys_clock(&data->ep_sys_time);
-	/* Ignore return code if sync clock is switched off. */
-	if (rc == -EOPNOTSUPP || rc == -EACCES)
-		rc = 0;
-
 	if (ccw) {
 		ccw->count = sizeof(struct DE_eckd_data);
 		ccw->flags |= CCW_FLAG_SLI;
@@ -286,12 +287,12 @@ define_extent(struct ccw1 *ccw, struct DE_eckd_data *data, unsigned int trk,
 	case DASD_ECKD_CCW_WRITE_KD_MT:
 		data->mask.perm = 0x02;
 		data->attributes.operation = private->attrib.operation;
-		rc = check_XRC(ccw, data, device);
+		rc = set_timestamp(ccw, data, device);
 		break;
 	case DASD_ECKD_CCW_WRITE_CKD:
 	case DASD_ECKD_CCW_WRITE_CKD_MT:
 		data->attributes.operation = DASD_BYPASS_CACHE;
-		rc = check_XRC(ccw, data, device);
+		rc = set_timestamp(ccw, data, device);
 		break;
 	case DASD_ECKD_CCW_ERASE:
 	case DASD_ECKD_CCW_WRITE_HOME_ADDRESS:
@@ -299,7 +300,7 @@ define_extent(struct ccw1 *ccw, struct DE_eckd_data *data, unsigned int trk,
 		data->mask.perm = 0x3;
 		data->mask.auth = 0x1;
 		data->attributes.operation = DASD_BYPASS_CACHE;
-		rc = check_XRC(ccw, data, device);
+		rc = set_timestamp(ccw, data, device);
 		break;
 	case DASD_ECKD_CCW_WRITE_FULL_TRACK:
 		data->mask.perm = 0x03;
@@ -310,7 +311,7 @@ define_extent(struct ccw1 *ccw, struct DE_eckd_data *data, unsigned int trk,
 		data->mask.perm = 0x02;
 		data->attributes.operation = private->attrib.operation;
 		data->blk_size = blksize;
-		rc = check_XRC(ccw, data, device);
+		rc = set_timestamp(ccw, data, device);
 		break;
 	default:
 		dev_err(&device->cdev->dev,
@@ -993,7 +994,7 @@ static int dasd_eckd_read_conf(struct dasd_device *device)
 	struct dasd_eckd_private *private, path_private;
 	struct dasd_uid *uid;
 	char print_path_uid[60], print_device_uid[60];
-	struct channel_path_desc *chp_desc;
+	struct channel_path_desc_fmt0 *chp_desc;
 	struct subchannel_id sch_id;
 
 	private = device->private;
@@ -3440,7 +3441,7 @@ static int prepare_itcw(struct itcw *itcw,
 		dedata->mask.perm = 0x02;
 		dedata->attributes.operation = basepriv->attrib.operation;
 		dedata->blk_size = blksize;
-		rc = check_XRC(NULL, dedata, basedev);
+		rc = set_timestamp(NULL, dedata, basedev);
 		dedata->ga_extended |= 0x42;
 		lredata->operation.orientation = 0x0;
 		lredata->operation.operation = 0x3F;
 
@@ -23,7 +23,7 @@ CFLAGS_REMOVE_sclp_early_core.o += $(CC_FLAGS_EXPOLINE)
 
 obj-y += ctrlchar.o keyboard.o defkeymap.o sclp.o sclp_rw.o sclp_quiesce.o \
 	 sclp_cmd.o sclp_config.o sclp_cpi_sys.o sclp_ocf.o sclp_ctl.o \
-	 sclp_early.o sclp_early_core.o
+	 sclp_early.o sclp_early_core.o sclp_sd.o
 
 obj-$(CONFIG_TN3270) += raw3270.o
 obj-$(CONFIG_TN3270_CONSOLE) += con3270.o
 
@@ -9,7 +9,9 @@
#include <linux/kbd_kern.h>
#include <linux/kbd_diacr.h>

-u_short plain_map[NR_KEYS] = {
+#include "keyboard.h"
+
+u_short ebc_plain_map[NR_KEYS] = {
	0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000,
	0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000,
	0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000,
@@ -85,12 +87,12 @@ static u_short shift_ctrl_map[NR_KEYS] = {
	0xf20a, 0xf108, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
};

-ushort *key_maps[MAX_NR_KEYMAPS] = {
-	plain_map, shift_map, NULL, NULL,
+ushort *ebc_key_maps[MAX_NR_KEYMAPS] = {
+	ebc_plain_map, shift_map, NULL, NULL,
	ctrl_map, shift_ctrl_map, NULL,
};

-unsigned int keymap_count = 4;
+unsigned int ebc_keymap_count = 4;


/*
@@ -99,7 +101,7 @@ unsigned int keymap_count = 4;
 * the default and allocate dynamically in chunks of 512 bytes.
 */

-char func_buf[] = {
+char ebc_func_buf[] = {
	'\033', '[', '[', 'A', 0,
	'\033', '[', '[', 'B', 0,
	'\033', '[', '[', 'C', 0,
@@ -123,37 +125,37 @@ char func_buf[] = {
};


-char *funcbufptr = func_buf;
-int funcbufsize = sizeof(func_buf);
-int funcbufleft = 0;          /* space left */
+char *ebc_funcbufptr = ebc_func_buf;
+int ebc_funcbufsize = sizeof(ebc_func_buf);
+int ebc_funcbufleft;          /* space left */

-char *func_table[MAX_NR_FUNC] = {
-	func_buf + 0,
-	func_buf + 5,
-	func_buf + 10,
-	func_buf + 15,
-	func_buf + 20,
-	func_buf + 25,
-	func_buf + 31,
-	func_buf + 37,
-	func_buf + 43,
-	func_buf + 49,
-	func_buf + 55,
-	func_buf + 61,
-	func_buf + 67,
-	func_buf + 73,
-	func_buf + 79,
-	func_buf + 85,
-	func_buf + 91,
-	func_buf + 97,
-	func_buf + 103,
-	func_buf + 109,
+char *ebc_func_table[MAX_NR_FUNC] = {
+	ebc_func_buf + 0,
+	ebc_func_buf + 5,
+	ebc_func_buf + 10,
+	ebc_func_buf + 15,
+	ebc_func_buf + 20,
+	ebc_func_buf + 25,
+	ebc_func_buf + 31,
+	ebc_func_buf + 37,
+	ebc_func_buf + 43,
+	ebc_func_buf + 49,
+	ebc_func_buf + 55,
+	ebc_func_buf + 61,
+	ebc_func_buf + 67,
+	ebc_func_buf + 73,
+	ebc_func_buf + 79,
+	ebc_func_buf + 85,
+	ebc_func_buf + 91,
+	ebc_func_buf + 97,
+	ebc_func_buf + 103,
+	ebc_func_buf + 109,
	NULL,
};

-struct kbdiacruc accent_table[MAX_DIACR] = {
+struct kbdiacruc ebc_accent_table[MAX_DIACR] = {
	{'^', 'c', 0003}, {'^', 'd', 0004},
	{'^', 'z', 0032}, {'^', 0012, 0000},
};

-unsigned int accent_table_size = 4;
+unsigned int ebc_accent_table_size = 4;
@@ -54,24 +54,24 @@ kbd_alloc(void) {
	kbd = kzalloc(sizeof(struct kbd_data), GFP_KERNEL);
	if (!kbd)
		goto out;
-	kbd->key_maps = kzalloc(sizeof(key_maps), GFP_KERNEL);
+	kbd->key_maps = kzalloc(sizeof(ebc_key_maps), GFP_KERNEL);
	if (!kbd->key_maps)
		goto out_kbd;
-	for (i = 0; i < ARRAY_SIZE(key_maps); i++) {
-		if (key_maps[i]) {
-			kbd->key_maps[i] = kmemdup(key_maps[i],
+	for (i = 0; i < ARRAY_SIZE(ebc_key_maps); i++) {
+		if (ebc_key_maps[i]) {
+			kbd->key_maps[i] = kmemdup(ebc_key_maps[i],
						   sizeof(u_short) * NR_KEYS,
						   GFP_KERNEL);
			if (!kbd->key_maps[i])
				goto out_maps;
		}
	}
-	kbd->func_table = kzalloc(sizeof(func_table), GFP_KERNEL);
+	kbd->func_table = kzalloc(sizeof(ebc_func_table), GFP_KERNEL);
	if (!kbd->func_table)
		goto out_maps;
-	for (i = 0; i < ARRAY_SIZE(func_table); i++) {
-		if (func_table[i]) {
-			kbd->func_table[i] = kstrdup(func_table[i],
+	for (i = 0; i < ARRAY_SIZE(ebc_func_table); i++) {
+		if (ebc_func_table[i]) {
+			kbd->func_table[i] = kstrdup(ebc_func_table[i],
						     GFP_KERNEL);
			if (!kbd->func_table[i])
				goto out_func;
@@ -81,22 +81,22 @@ kbd_alloc(void) {
	    kzalloc(sizeof(fn_handler_fn *) * NR_FN_HANDLER, GFP_KERNEL);
	if (!kbd->fn_handler)
		goto out_func;
-	kbd->accent_table = kmemdup(accent_table,
+	kbd->accent_table = kmemdup(ebc_accent_table,
				    sizeof(struct kbdiacruc) * MAX_DIACR,
				    GFP_KERNEL);
	if (!kbd->accent_table)
		goto out_fn_handler;
-	kbd->accent_table_size = accent_table_size;
+	kbd->accent_table_size = ebc_accent_table_size;
	return kbd;

out_fn_handler:
	kfree(kbd->fn_handler);
out_func:
-	for (i = 0; i < ARRAY_SIZE(func_table); i++)
+	for (i = 0; i < ARRAY_SIZE(ebc_func_table); i++)
		kfree(kbd->func_table[i]);
	kfree(kbd->func_table);
out_maps:
-	for (i = 0; i < ARRAY_SIZE(key_maps); i++)
+	for (i = 0; i < ARRAY_SIZE(ebc_key_maps); i++)
		kfree(kbd->key_maps[i]);
	kfree(kbd->key_maps);
out_kbd:
@@ -112,10 +112,10 @@ kbd_free(struct kbd_data *kbd)

	kfree(kbd->accent_table);
	kfree(kbd->fn_handler);
-	for (i = 0; i < ARRAY_SIZE(func_table); i++)
+	for (i = 0; i < ARRAY_SIZE(ebc_func_table); i++)
		kfree(kbd->func_table[i]);
	kfree(kbd->func_table);
-	for (i = 0; i < ARRAY_SIZE(key_maps); i++)
+	for (i = 0; i < ARRAY_SIZE(ebc_key_maps); i++)
		kfree(kbd->key_maps[i]);
	kfree(kbd->key_maps);
	kfree(kbd);
@@ -131,7 +131,7 @@ kbd_ascebc(struct kbd_data *kbd, unsigned char *ascebc)
	int i, j, k;

	memset(ascebc, 0x40, 256);
-	for (i = 0; i < ARRAY_SIZE(key_maps); i++) {
+	for (i = 0; i < ARRAY_SIZE(ebc_key_maps); i++) {
		keymap = kbd->key_maps[i];
		if (!keymap)
			continue;
@@ -158,7 +158,7 @@ kbd_ebcasc(struct kbd_data *kbd, unsigned char *ebcasc)
	int i, j, k;

	memset(ebcasc, ' ', 256);
-	for (i = 0; i < ARRAY_SIZE(key_maps); i++) {
+	for (i = 0; i < ARRAY_SIZE(ebc_key_maps); i++) {
		keymap = kbd->key_maps[i];
		if (!keymap)
			continue;
@@ -14,6 +14,17 @@

struct kbd_data;

+extern int ebc_funcbufsize, ebc_funcbufleft;
+extern char *ebc_func_table[MAX_NR_FUNC];
+extern char ebc_func_buf[];
+extern char *ebc_funcbufptr;
+extern unsigned int ebc_keymap_count;
+
+extern struct kbdiacruc ebc_accent_table[];
+extern unsigned int ebc_accent_table_size;
+extern unsigned short *ebc_key_maps[MAX_NR_KEYMAPS];
+extern unsigned short ebc_plain_map[NR_KEYS];
+
typedef void (fn_handler_fn)(struct kbd_data *);

/*
@@ -417,7 +417,7 @@ sclp_dispatch_evbufs(struct sccb_header *sccb)
		reg = NULL;
		list_for_each(l, &sclp_reg_list) {
			reg = list_entry(l, struct sclp_register, list);
-			if (reg->receive_mask & (1 << (32 - evbuf->type)))
+			if (reg->receive_mask & SCLP_EVTYP_MASK(evbuf->type))
				break;
			else
				reg = NULL;
@@ -618,9 +618,12 @@ struct sclp_statechangebuf {
	u16		_zeros : 12;
	u16		mask_length;
	u64		sclp_active_facility_mask;
-	sccb_mask_t	sclp_receive_mask;
-	sccb_mask_t	sclp_send_mask;
-	u32		read_data_function_mask;
+	u8		masks[2 * 1021 + 4];	/* variable length */
+	/*
+	 * u8		sclp_receive_mask[mask_length];
+	 * u8		sclp_send_mask[mask_length];
+	 * u32		read_data_function_mask;
+	 */
} __attribute__((packed));

@@ -631,14 +634,14 @@ sclp_state_change_cb(struct evbuf_header *evbuf)
	unsigned long flags;
	struct sclp_statechangebuf *scbuf;

+	BUILD_BUG_ON(sizeof(struct sclp_statechangebuf) > PAGE_SIZE);
+
	scbuf = (struct sclp_statechangebuf *) evbuf;
-	if (scbuf->mask_length != sizeof(sccb_mask_t))
-		return;
	spin_lock_irqsave(&sclp_lock, flags);
	if (scbuf->validity_sclp_receive_mask)
-		sclp_receive_mask = scbuf->sclp_receive_mask;
+		sclp_receive_mask = sccb_get_recv_mask(scbuf);
	if (scbuf->validity_sclp_send_mask)
-		sclp_send_mask = scbuf->sclp_send_mask;
+		sclp_send_mask = sccb_get_send_mask(scbuf);
	spin_unlock_irqrestore(&sclp_lock, flags);
	if (scbuf->validity_sclp_active_facility_mask)
		sclp.facilities = scbuf->sclp_active_facility_mask;
@@ -748,7 +751,7 @@ EXPORT_SYMBOL(sclp_remove_processed);

/* Prepare init mask request. Called while sclp_lock is locked. */
static inline void
-__sclp_make_init_req(u32 receive_mask, u32 send_mask)
+__sclp_make_init_req(sccb_mask_t receive_mask, sccb_mask_t send_mask)
{
	struct init_sccb *sccb;

@@ -761,12 +764,15 @@ __sclp_make_init_req(u32 receive_mask, u32 send_mask)
	sclp_init_req.callback = NULL;
	sclp_init_req.callback_data = NULL;
	sclp_init_req.sccb = sccb;
-	sccb->header.length = sizeof(struct init_sccb);
-	sccb->mask_length = sizeof(sccb_mask_t);
-	sccb->receive_mask = receive_mask;
-	sccb->send_mask = send_mask;
-	sccb->sclp_receive_mask = 0;
-	sccb->sclp_send_mask = 0;
+	sccb->header.length = sizeof(*sccb);
+	if (sclp_mask_compat_mode)
+		sccb->mask_length = SCLP_MASK_SIZE_COMPAT;
+	else
+		sccb->mask_length = sizeof(sccb_mask_t);
+	sccb_set_recv_mask(sccb, receive_mask);
+	sccb_set_send_mask(sccb, send_mask);
+	sccb_set_sclp_recv_mask(sccb, 0);
+	sccb_set_sclp_send_mask(sccb, 0);
}

/* Start init mask request. If calculate is non-zero, calculate the mask as
@@ -822,8 +828,8 @@ sclp_init_mask(int calculate)
	    sccb->header.response_code == 0x20) {
		/* Successful request */
		if (calculate) {
-			sclp_receive_mask = sccb->sclp_receive_mask;
-			sclp_send_mask = sccb->sclp_send_mask;
+			sclp_receive_mask = sccb_get_sclp_recv_mask(sccb);
+			sclp_send_mask = sccb_get_sclp_send_mask(sccb);
		} else {
			sclp_receive_mask = 0;
			sclp_send_mask = 0;
@@ -974,12 +980,18 @@ sclp_check_interface(void)
		irq_subclass_unregister(IRQ_SUBCLASS_SERVICE_SIGNAL);
		spin_lock_irqsave(&sclp_lock, flags);
		del_timer(&sclp_request_timer);
-		if (sclp_init_req.status == SCLP_REQ_DONE &&
-		    sccb->header.response_code == 0x20) {
-			rc = 0;
-			break;
-		} else
-			rc = -EBUSY;
+		rc = -EBUSY;
+		if (sclp_init_req.status == SCLP_REQ_DONE) {
+			if (sccb->header.response_code == 0x20) {
+				rc = 0;
+				break;
+			} else if (sccb->header.response_code == 0x74f0) {
+				if (!sclp_mask_compat_mode) {
+					sclp_mask_compat_mode = true;
+					retry = 0;
+				}
+			}
+		}
	}
	unregister_external_irq(EXT_IRQ_SERVICE_SIG, sclp_check_handler);
	spin_unlock_irqrestore(&sclp_lock, flags);
@@ -18,7 +18,7 @@
#define MAX_KMEM_PAGES (sizeof(unsigned long) << 3)
#define SCLP_CONSOLE_PAGES	6

-#define SCLP_EVTYP_MASK(T) (1U << (32 - (T)))
+#define SCLP_EVTYP_MASK(T) (1UL << (sizeof(sccb_mask_t) * BITS_PER_BYTE - (T)))

#define EVTYP_OPCMD		0x01
#define EVTYP_MSG		0x02
@@ -28,6 +28,7 @@
#define EVTYP_PMSGCMD		0x09
#define EVTYP_ASYNC		0x0A
#define EVTYP_CTLPROGIDENT	0x0B
+#define EVTYP_STORE_DATA	0x0C
#define EVTYP_ERRNOTIFY		0x18
#define EVTYP_VT220MSG		0x1A
#define EVTYP_SDIAS		0x1C
@@ -42,6 +43,7 @@
#define EVTYP_PMSGCMD_MASK	SCLP_EVTYP_MASK(EVTYP_PMSGCMD)
#define EVTYP_ASYNC_MASK	SCLP_EVTYP_MASK(EVTYP_ASYNC)
#define EVTYP_CTLPROGIDENT_MASK	SCLP_EVTYP_MASK(EVTYP_CTLPROGIDENT)
+#define EVTYP_STORE_DATA_MASK	SCLP_EVTYP_MASK(EVTYP_STORE_DATA)
#define EVTYP_ERRNOTIFY_MASK	SCLP_EVTYP_MASK(EVTYP_ERRNOTIFY)
#define EVTYP_VT220MSG_MASK	SCLP_EVTYP_MASK(EVTYP_VT220MSG)
#define EVTYP_SDIAS_MASK	SCLP_EVTYP_MASK(EVTYP_SDIAS)
@@ -85,7 +87,7 @@ enum sclp_pm_event {
#define SCLP_PANIC_PRIO		1
#define SCLP_PANIC_PRIO_CLIENT	0

-typedef u32 sccb_mask_t;	/* ATTENTION: assumes 32bit mask !!! */
+typedef u64 sccb_mask_t;

struct sccb_header {
	u16	length;
@@ -98,12 +100,53 @@ struct init_sccb {
	struct sccb_header header;
	u16	_reserved;
	u16	mask_length;
-	sccb_mask_t receive_mask;
-	sccb_mask_t send_mask;
-	sccb_mask_t sclp_receive_mask;
-	sccb_mask_t sclp_send_mask;
+	u8	masks[4 * 1021];	/* variable length */
+	/*
+	 * u8 receive_mask[mask_length];
+	 * u8 send_mask[mask_length];
+	 * u8 sclp_receive_mask[mask_length];
+	 * u8 sclp_send_mask[mask_length];
+	 */
} __attribute__((packed));

+#define SCLP_MASK_SIZE_COMPAT	4
+
+static inline sccb_mask_t sccb_get_mask(u8 *masks, size_t len, int i)
+{
+	sccb_mask_t res = 0;
+
+	memcpy(&res, masks + i * len, min(sizeof(res), len));
+	return res;
+}
+
+static inline void sccb_set_mask(u8 *masks, size_t len, int i, sccb_mask_t val)
+{
+	memset(masks + i * len, 0, len);
+	memcpy(masks + i * len, &val, min(sizeof(val), len));
+}
+
+#define sccb_get_generic_mask(sccb, i)					\
+({									\
+	__typeof__(sccb) __sccb = sccb;					\
+									\
+	sccb_get_mask(__sccb->masks, __sccb->mask_length, i);		\
+})
+#define sccb_get_recv_mask(sccb)	sccb_get_generic_mask(sccb, 0)
+#define sccb_get_send_mask(sccb)	sccb_get_generic_mask(sccb, 1)
+#define sccb_get_sclp_recv_mask(sccb)	sccb_get_generic_mask(sccb, 2)
+#define sccb_get_sclp_send_mask(sccb)	sccb_get_generic_mask(sccb, 3)
+
+#define sccb_set_generic_mask(sccb, i, val)				\
+({									\
+	__typeof__(sccb) __sccb = sccb;					\
+									\
+	sccb_set_mask(__sccb->masks, __sccb->mask_length, i, val);	\
+})
+#define sccb_set_recv_mask(sccb, val)	sccb_set_generic_mask(sccb, 0, val)
+#define sccb_set_send_mask(sccb, val)	sccb_set_generic_mask(sccb, 1, val)
+#define sccb_set_sclp_recv_mask(sccb, val) sccb_set_generic_mask(sccb, 2, val)
+#define sccb_set_sclp_send_mask(sccb, val) sccb_set_generic_mask(sccb, 3, val)
+
struct read_cpu_info_sccb {
	struct sccb_header	header;
	u16	nr_configured;
@@ -221,15 +264,17 @@ extern int sclp_init_state;
extern int sclp_console_pages;
extern int sclp_console_drop;
extern unsigned long sclp_console_full;
+extern bool sclp_mask_compat_mode;

extern char sclp_early_sccb[PAGE_SIZE];

void sclp_early_wait_irq(void);
int sclp_early_cmd(sclp_cmdw_t cmd, void *sccb);
unsigned int sclp_early_con_check_linemode(struct init_sccb *sccb);
+unsigned int sclp_early_con_check_vt220(struct init_sccb *sccb);
int sclp_early_set_event_mask(struct init_sccb *sccb,
-			      unsigned long receive_mask,
-			      unsigned long send_mask);
+			      sccb_mask_t receive_mask,
+			      sccb_mask_t send_mask);

/* useful inlines */
@@ -249,7 +249,7 @@ static void __init sclp_early_console_detect(struct init_sccb *sccb)
	if (sccb->header.response_code != 0x20)
		return;

-	if (sccb->sclp_send_mask & EVTYP_VT220MSG_MASK)
+	if (sclp_early_con_check_vt220(sccb))
		sclp.has_vt220 = 1;

	if (sclp_early_con_check_linemode(sccb))
@@ -14,6 +14,11 @@

char sclp_early_sccb[PAGE_SIZE] __aligned(PAGE_SIZE) __section(.data);
int sclp_init_state __section(.data) = sclp_init_state_uninitialized;
+/*
+ * Used to keep track of the size of the event masks. Qemu until version 2.11
+ * only supports 4 and needs a workaround.
+ */
+bool sclp_mask_compat_mode;

void sclp_early_wait_irq(void)
{
@@ -142,16 +147,24 @@ static void sclp_early_print_vt220(const char *str, unsigned int len)
}

int sclp_early_set_event_mask(struct init_sccb *sccb,
-			      unsigned long receive_mask,
-			      unsigned long send_mask)
+			      sccb_mask_t receive_mask,
+			      sccb_mask_t send_mask)
{
+retry:
	memset(sccb, 0, sizeof(*sccb));
	sccb->header.length = sizeof(*sccb);
-	sccb->mask_length = sizeof(sccb_mask_t);
-	sccb->receive_mask = receive_mask;
-	sccb->send_mask = send_mask;
+	if (sclp_mask_compat_mode)
+		sccb->mask_length = SCLP_MASK_SIZE_COMPAT;
+	else
+		sccb->mask_length = sizeof(sccb_mask_t);
+	sccb_set_recv_mask(sccb, receive_mask);
+	sccb_set_send_mask(sccb, send_mask);
	if (sclp_early_cmd(SCLP_CMDW_WRITE_EVENT_MASK, sccb))
		return -EIO;
+	if ((sccb->header.response_code == 0x74f0) && !sclp_mask_compat_mode) {
+		sclp_mask_compat_mode = true;
+		goto retry;
+	}
	if (sccb->header.response_code != 0x20)
		return -EIO;
	return 0;
@@ -159,19 +172,28 @@ int sclp_early_set_event_mask(struct init_sccb *sccb,

unsigned int sclp_early_con_check_linemode(struct init_sccb *sccb)
{
-	if (!(sccb->sclp_send_mask & EVTYP_OPCMD_MASK))
+	if (!(sccb_get_sclp_send_mask(sccb) & EVTYP_OPCMD_MASK))
		return 0;
-	if (!(sccb->sclp_receive_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK)))
+	if (!(sccb_get_sclp_recv_mask(sccb) & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK)))
		return 0;
	return 1;
}

+unsigned int sclp_early_con_check_vt220(struct init_sccb *sccb)
+{
+	if (sccb_get_sclp_send_mask(sccb) & EVTYP_VT220MSG_MASK)
+		return 1;
+	return 0;
+}
+
static int sclp_early_setup(int disable, int *have_linemode, int *have_vt220)
{
	unsigned long receive_mask, send_mask;
	struct init_sccb *sccb;
	int rc;

+	BUILD_BUG_ON(sizeof(struct init_sccb) > PAGE_SIZE);
+
	*have_linemode = *have_vt220 = 0;
	sccb = (struct init_sccb *) &sclp_early_sccb;
	receive_mask = disable ? 0 : EVTYP_OPCMD_MASK;
@@ -180,7 +202,7 @@ static int sclp_early_setup(int disable, int *have_linemode, int *have_vt220)
	if (rc)
		return rc;
	*have_linemode = sclp_early_con_check_linemode(sccb);
-	*have_vt220 = sccb->send_mask & EVTYP_VT220MSG_MASK;
+	*have_vt220 = !!(sccb_get_send_mask(sccb) & EVTYP_VT220MSG_MASK);
	return rc;
}
drivers/s390/char/sclp_sd.c (new file, 569 lines)
@@ -0,0 +1,569 @@
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* SCLP Store Data support and sysfs interface
|
||||
*
|
||||
* Copyright IBM Corp. 2017
|
||||
*/
|
||||
|
||||
#define KMSG_COMPONENT "sclp_sd"
|
||||
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
|
||||
|
||||
#include <linux/completion.h>
|
||||
#include <linux/kobject.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/printk.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <linux/async.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/mutex.h>
|
||||
|
||||
#include <asm/pgalloc.h>
|
||||
|
||||
#include "sclp.h"
|
||||
|
||||
#define SD_EQ_STORE_DATA 0
|
||||
#define SD_EQ_HALT 1
|
||||
#define SD_EQ_SIZE 2
|
||||
|
||||
#define SD_DI_CONFIG 3
|
||||
|
||||
struct sclp_sd_evbuf {
|
||||
struct evbuf_header hdr;
|
||||
u8 eq;
|
||||
u8 di;
|
||||
u8 rflags;
|
||||
u64 :56;
|
||||
u32 id;
|
||||
u16 :16;
|
||||
u8 fmt;
|
||||
u8 status;
|
||||
u64 sat;
|
||||
u64 sa;
|
||||
u32 esize;
|
||||
u32 dsize;
|
||||
} __packed;
|
||||
|
||||
struct sclp_sd_sccb {
|
||||
struct sccb_header hdr;
|
||||
struct sclp_sd_evbuf evbuf;
|
||||
} __packed __aligned(PAGE_SIZE);
|
||||
|
||||
/**
|
||||
* struct sclp_sd_data - Result of a Store Data request
|
||||
* @esize_bytes: Resulting esize in bytes
|
||||
* @dsize_bytes: Resulting dsize in bytes
|
||||
* @data: Pointer to data - must be released using vfree()
|
||||
*/
|
||||
struct sclp_sd_data {
|
||||
size_t esize_bytes;
|
||||
size_t dsize_bytes;
|
||||
void *data;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct sclp_sd_listener - Listener for asynchronous Store Data response
|
||||
* @list: For enqueueing this struct
|
||||
* @id: Event ID of response to listen for
|
||||
* @completion: Can be used to wait for response
|
||||
* @evbuf: Contains the resulting Store Data response after completion
|
||||
*/
|
||||
struct sclp_sd_listener {
|
||||
struct list_head list;
|
||||
u32 id;
|
||||
struct completion completion;
|
||||
struct sclp_sd_evbuf evbuf;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct sclp_sd_file - Sysfs representation of a Store Data entity
|
||||
* @kobj: Kobject
|
||||
* @data_attr: Attribute for accessing data contents
|
||||
* @data_mutex: Mutex to serialize access and updates to @data
|
||||
* @data: Data associated with this entity
|
||||
* @di: DI value associated with this entity
|
||||
*/
|
||||
struct sclp_sd_file {
|
||||
struct kobject kobj;
|
||||
struct bin_attribute data_attr;
|
||||
struct mutex data_mutex;
|
||||
struct sclp_sd_data data;
|
||||
u8 di;
|
||||
};
|
||||
#define to_sd_file(x) container_of(x, struct sclp_sd_file, kobj)
|
||||
|
||||
static struct kset *sclp_sd_kset;
|
||||
static struct sclp_sd_file *config_file;
|
||||
|
||||
static LIST_HEAD(sclp_sd_queue);
|
||||
static DEFINE_SPINLOCK(sclp_sd_queue_lock);
|
||||
|
||||
/**
|
||||
* sclp_sd_listener_add() - Add listener for Store Data responses
|
||||
* @listener: Listener to add
|
||||
*/
|
||||
static void sclp_sd_listener_add(struct sclp_sd_listener *listener)
|
||||
{
|
||||
spin_lock_irq(&sclp_sd_queue_lock);
|
||||
list_add_tail(&listener->list, &sclp_sd_queue);
|
||||
spin_unlock_irq(&sclp_sd_queue_lock);
|
||||
}
|
||||
|
||||
/**
|
||||
* sclp_sd_listener_remove() - Remove listener for Store Data responses
|
||||
* @listener: Listener to remove
|
||||
*/
|
||||
static void sclp_sd_listener_remove(struct sclp_sd_listener *listener)
|
||||
{
|
||||
spin_lock_irq(&sclp_sd_queue_lock);
|
||||
list_del(&listener->list);
|
||||
spin_unlock_irq(&sclp_sd_queue_lock);
|
||||
}
|
||||
|
||||
/**
|
||||
* sclp_sd_listener_init() - Initialize a Store Data response listener
|
||||
* @id: Event ID to listen for
|
||||
*
|
||||
* Initialize a listener for asynchronous Store Data responses. This listener
|
||||
* can afterwards be used to wait for a specific response and to retrieve
|
||||
* the associated response data.
|
||||
*/
|
||||
static void sclp_sd_listener_init(struct sclp_sd_listener *listener, u32 id)
|
||||
{
|
||||
memset(listener, 0, sizeof(*listener));
|
||||
listener->id = id;
|
||||
init_completion(&listener->completion);
|
||||
}
|
||||
|
||||
/**
|
||||
* sclp_sd_receiver() - Receiver for Store Data events
|
||||
* @evbuf_hdr: Header of received events
|
||||
*
|
||||
* Process Store Data events and complete listeners with matching event IDs.
|
||||
*/
|
||||
static void sclp_sd_receiver(struct evbuf_header *evbuf_hdr)
|
||||
{
|
||||
struct sclp_sd_evbuf *evbuf = (struct sclp_sd_evbuf *) evbuf_hdr;
|
||||
struct sclp_sd_listener *listener;
|
||||
int found = 0;
|
||||
|
||||
pr_debug("received event (id=0x%08x)\n", evbuf->id);
|
||||
spin_lock(&sclp_sd_queue_lock);
|
||||
list_for_each_entry(listener, &sclp_sd_queue, list) {
|
||||
if (listener->id != evbuf->id)
|
||||
continue;
|
||||
|
||||
listener->evbuf = *evbuf;
|
||||
complete(&listener->completion);
|
||||
found = 1;
|
||||
break;
|
||||
}
|
||||
spin_unlock(&sclp_sd_queue_lock);
|
||||
|
||||
if (!found)
|
||||
pr_debug("unsolicited event (id=0x%08x)\n", evbuf->id);
|
||||
}
|
||||
|
||||
static struct sclp_register sclp_sd_register = {
|
||||
.send_mask = EVTYP_STORE_DATA_MASK,
|
||||
.receive_mask = EVTYP_STORE_DATA_MASK,
|
||||
.receiver_fn = sclp_sd_receiver,
|
||||
};
|
||||
|
||||
/**
|
||||
* sclp_sd_sync() - Perform Store Data request synchronously
|
||||
* @page: Address of work page - must be below 2GB
|
||||
* @eq: Input EQ value
|
||||
* @di: Input DI value
|
||||
* @sat: Input SAT value
|
||||
* @sa: Input SA value used to specify the address of the target buffer
|
||||
* @dsize_ptr: Optional pointer to input and output DSIZE value
|
||||
* @esize_ptr: Optional pointer to output ESIZE value
|
||||
*
|
||||
* Perform Store Data request with specified parameters and wait for completion.
|
||||
*
|
||||
* Return %0 on success and store resulting DSIZE and ESIZE values in
|
||||
* @dsize_ptr and @esize_ptr (if provided). Return non-zero on error.
|
||||
*/
|
||||
static int sclp_sd_sync(unsigned long page, u8 eq, u8 di, u64 sat, u64 sa,
|
||||
u32 *dsize_ptr, u32 *esize_ptr)
|
||||
{
|
||||
struct sclp_sd_sccb *sccb = (void *) page;
|
||||
struct sclp_sd_listener listener;
|
||||
struct sclp_sd_evbuf *evbuf;
|
||||
int rc;
|
||||
|
||||
sclp_sd_listener_init(&listener, (u32) (addr_t) sccb);
|
||||
sclp_sd_listener_add(&listener);
|
||||
|
||||
/* Prepare SCCB */
|
||||
memset(sccb, 0, PAGE_SIZE);
|
||||
sccb->hdr.length = sizeof(sccb->hdr) + sizeof(sccb->evbuf);
|
||||
evbuf = &sccb->evbuf;
|
||||
evbuf->hdr.length = sizeof(*evbuf);
|
||||
evbuf->hdr.type = EVTYP_STORE_DATA;
|
||||
evbuf->eq = eq;
|
||||
evbuf->di = di;
|
||||
evbuf->id = listener.id;
|
||||
evbuf->fmt = 1;
|
||||
evbuf->sat = sat;
|
||||
evbuf->sa = sa;
|
||||
if (dsize_ptr)
|
||||
evbuf->dsize = *dsize_ptr;
|
||||
|
||||
/* Perform command */
|
||||
pr_debug("request (eq=%d, di=%d, id=0x%08x)\n", eq, di, listener.id);
|
||||
rc = sclp_sync_request(SCLP_CMDW_WRITE_EVENT_DATA, sccb);
|
||||
pr_debug("request done (rc=%d)\n", rc);
|
||||
if (rc)
|
||||
goto out;
|
||||
|
||||
/* Evaluate response */
|
||||
if (sccb->hdr.response_code == 0x73f0) {
|
||||
pr_debug("event not supported\n");
|
||||
rc = -EIO;
|
||||
goto out_remove;
|
||||
}
|
||||
if (sccb->hdr.response_code != 0x0020 || !(evbuf->hdr.flags & 0x80)) {
|
||||
rc = -EIO;
|
||||
goto out;
|
||||
}
|
||||
if (!(evbuf->rflags & 0x80)) {
|
||||
rc = wait_for_completion_interruptible(&listener.completion);
|
||||
if (rc)
|
||||
goto out;
|
||||
evbuf = &listener.evbuf;
|
||||
}
|
||||
switch (evbuf->status) {
|
||||
case 0:
|
||||
if (dsize_ptr)
|
||||
*dsize_ptr = evbuf->dsize;
|
||||
if (esize_ptr)
|
||||
*esize_ptr = evbuf->esize;
|
||||
pr_debug("success (dsize=%u, esize=%u)\n", evbuf->dsize,
|
||||
evbuf->esize);
|
||||
break;
|
||||
case 3:
|
||||
rc = -ENOENT;
|
||||
break;
|
||||
default:
|
||||
rc = -EIO;
|
||||
break;
|
||||
|
||||
}
|
||||
|
||||
out:
|
||||
if (rc && rc != -ENOENT) {
|
||||
/* Provide some information about what went wrong */
|
||||
pr_warn("Store Data request failed (eq=%d, di=%d, "
|
||||
"response=0x%04x, flags=0x%02x, status=%d, rc=%d)\n",
|
||||
eq, di, sccb->hdr.response_code, evbuf->hdr.flags,
|
||||
evbuf->status, rc);
|
||||
}
|
||||
|
||||
out_remove:
|
||||
sclp_sd_listener_remove(&listener);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
/**
|
||||
* sclp_sd_store_data() - Obtain data for specified Store Data entity
|
||||
* @result: Resulting data
|
||||
* @di: DI value associated with this entity
|
||||
*
|
||||
* Perform a series of Store Data requests to obtain the size and contents of
|
||||
* the specified Store Data entity.
|
||||
*
|
||||
* Return:
|
||||
* %0: Success - result is stored in @result. @result->data must be
|
||||
* released using vfree() after use.
|
||||
* %-ENOENT: No data available for this entity
|
||||
* %<0: Other error
|
||||
*/
|
||||
static int sclp_sd_store_data(struct sclp_sd_data *result, u8 di)
|
||||
{
|
||||
u32 dsize = 0, esize = 0;
|
||||
unsigned long page, asce = 0;
|
||||
void *data = NULL;
|
||||
int rc;
|
||||
|
||||
page = __get_free_page(GFP_KERNEL | GFP_DMA);
|
||||
if (!page)
|
||||
return -ENOMEM;
|
||||
|
||||
/* Get size */
|
||||
rc = sclp_sd_sync(page, SD_EQ_SIZE, di, 0, 0, &dsize, &esize);
|
||||
if (rc)
|
||||
goto out;
|
||||
if (dsize == 0)
|
||||
goto out_result;
|
||||
|
||||
/* Allocate memory */
|
||||
data = vzalloc((size_t) dsize * PAGE_SIZE);
|
||||
if (!data) {
|
||||
rc = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* Get translation table for buffer */
|
||||
asce = base_asce_alloc((unsigned long) data, dsize);
|
||||
if (!asce) {
|
||||
vfree(data);
|
||||
rc = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* Get data */
|
||||
rc = sclp_sd_sync(page, SD_EQ_STORE_DATA, di, asce, (u64) data, &dsize,
|
||||
&esize);
|
||||
if (rc) {
|
||||
/* Cancel running request if interrupted */
|
||||
if (rc == -ERESTARTSYS)
|
||||
sclp_sd_sync(page, SD_EQ_HALT, di, 0, 0, NULL, NULL);
|
||||
vfree(data);
|
||||
goto out;
|
||||
}
|
||||
|
||||
out_result:
|
||||
result->esize_bytes = (size_t) esize * PAGE_SIZE;
|
||||
result->dsize_bytes = (size_t) dsize * PAGE_SIZE;
|
||||
result->data = data;
|
||||
|
||||
out:
|
||||
base_asce_free(asce);
|
||||
free_page(page);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
/**
|
||||
* sclp_sd_data_reset() - Reset Store Data result buffer
|
||||
* @data: Data buffer to reset
|
||||
*
|
||||
* Reset @data to initial state and release associated memory.
|
||||
*/
|
||||
static void sclp_sd_data_reset(struct sclp_sd_data *data)
|
||||
{
|
||||
vfree(data->data);
|
||||
data->data = NULL;
|
||||
data->dsize_bytes = 0;
|
||||
data->esize_bytes = 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* sclp_sd_file_release() - Release function for sclp_sd_file object
|
||||
* @kobj: Kobject embedded in sclp_sd_file object
|
||||
*/
|
||||
static void sclp_sd_file_release(struct kobject *kobj)
|
||||
{
|
||||
struct sclp_sd_file *sd_file = to_sd_file(kobj);
|
||||
|
||||
sclp_sd_data_reset(&sd_file->data);
|
||||
kfree(sd_file);
|
||||
}
|
||||
|
||||
/**
|
||||
/**
 * sclp_sd_file_update() - Update contents of sclp_sd_file object
 * @sd_file: Object to update
 *
 * Obtain the current version of data associated with the Store Data entity
 * @sd_file.
 *
 * On success, return %0 and generate a KOBJ_CHANGE event to indicate that the
 * data may have changed. Return non-zero otherwise.
 */
static int sclp_sd_file_update(struct sclp_sd_file *sd_file)
{
	const char *name = kobject_name(&sd_file->kobj);
	struct sclp_sd_data data;
	int rc;

	rc = sclp_sd_store_data(&data, sd_file->di);
	if (rc) {
		if (rc == -ENOENT) {
			pr_info("No data is available for the %s data entity\n",
				name);
		}
		return rc;
	}

	mutex_lock(&sd_file->data_mutex);
	sclp_sd_data_reset(&sd_file->data);
	sd_file->data = data;
	mutex_unlock(&sd_file->data_mutex);

	pr_info("A %zu-byte %s data entity was retrieved\n", data.dsize_bytes,
		name);
	kobject_uevent(&sd_file->kobj, KOBJ_CHANGE);

	return 0;
}

/**
 * sclp_sd_file_update_async() - Wrapper for asynchronous update call
 * @data: Object to update
 */
static void sclp_sd_file_update_async(void *data, async_cookie_t cookie)
{
	struct sclp_sd_file *sd_file = data;

	sclp_sd_file_update(sd_file);
}

/**
 * reload_store() - Store function for "reload" sysfs attribute
 * @kobj: Kobject of sclp_sd_file object
 *
 * Initiate a reload of the data associated with an sclp_sd_file object.
 */
static ssize_t reload_store(struct kobject *kobj, struct kobj_attribute *attr,
			    const char *buf, size_t count)
{
	struct sclp_sd_file *sd_file = to_sd_file(kobj);

	sclp_sd_file_update(sd_file);

	return count;
}

static struct kobj_attribute reload_attr = __ATTR_WO(reload);

static struct attribute *sclp_sd_file_default_attrs[] = {
	&reload_attr.attr,
	NULL,
};

static struct kobj_type sclp_sd_file_ktype = {
	.sysfs_ops = &kobj_sysfs_ops,
	.release = sclp_sd_file_release,
	.default_attrs = sclp_sd_file_default_attrs,
};

/**
 * data_read() - Read function for "data" sysfs attribute
 * @kobj: Kobject of sclp_sd_file object
 * @buffer: Target buffer
 * @off: Requested file offset
 * @size: Requested number of bytes
 *
 * Store the requested portion of the Store Data entity contents into the
 * specified buffer. Return the number of bytes stored on success, or %0
 * on EOF.
 */
static ssize_t data_read(struct file *file, struct kobject *kobj,
			 struct bin_attribute *attr, char *buffer,
			 loff_t off, size_t size)
{
	struct sclp_sd_file *sd_file = to_sd_file(kobj);
	size_t data_size;
	char *data;

	mutex_lock(&sd_file->data_mutex);

	data = sd_file->data.data;
	data_size = sd_file->data.dsize_bytes;
	if (!data || off >= data_size) {
		size = 0;
	} else {
		if (off + size > data_size)
			size = data_size - off;
		memcpy(buffer, data + off, size);
	}

	mutex_unlock(&sd_file->data_mutex);

	return size;
}
/**
 * sclp_sd_file_create() - Add a sysfs file representing a Store Data entity
 * @name: Name of file
 * @di: DI value associated with this entity
 *
 * Create a sysfs directory with the given @name located under
 *
 *   /sys/firmware/sclp_sd/
 *
 * The files in this directory can be used to access the contents of the Store
 * Data entity associated with @DI.
 *
 * Return pointer to resulting sclp_sd_file object on success, %NULL otherwise.
 * The object must be freed by calling kobject_put() on the embedded kobject
 * pointer after use.
 */
static __init struct sclp_sd_file *sclp_sd_file_create(const char *name, u8 di)
{
	struct sclp_sd_file *sd_file;
	int rc;

	sd_file = kzalloc(sizeof(*sd_file), GFP_KERNEL);
	if (!sd_file)
		return NULL;
	sd_file->di = di;
	mutex_init(&sd_file->data_mutex);

	/* Create kobject located under /sys/firmware/sclp_sd/ */
	sd_file->kobj.kset = sclp_sd_kset;
	rc = kobject_init_and_add(&sd_file->kobj, &sclp_sd_file_ktype, NULL,
				  "%s", name);
	if (rc) {
		kobject_put(&sd_file->kobj);
		return NULL;
	}

	sysfs_bin_attr_init(&sd_file->data_attr);
	sd_file->data_attr.attr.name = "data";
	sd_file->data_attr.attr.mode = 0444;
	sd_file->data_attr.read = data_read;

	rc = sysfs_create_bin_file(&sd_file->kobj, &sd_file->data_attr);
	if (rc) {
		kobject_put(&sd_file->kobj);
		return NULL;
	}

	/*
	 * For completeness only - users interested in entity data should listen
	 * for KOBJ_CHANGE instead.
	 */
	kobject_uevent(&sd_file->kobj, KOBJ_ADD);

	/* Don't let a slow Store Data request delay further initialization */
	async_schedule(sclp_sd_file_update_async, sd_file);

	return sd_file;
}

/**
 * sclp_sd_init() - Initialize sclp_sd support and register sysfs files
 */
static __init int sclp_sd_init(void)
{
	int rc;

	rc = sclp_register(&sclp_sd_register);
	if (rc)
		return rc;

	/* Create kset named "sclp_sd" located under /sys/firmware/ */
	rc = -ENOMEM;
	sclp_sd_kset = kset_create_and_add("sclp_sd", NULL, firmware_kobj);
	if (!sclp_sd_kset)
		goto err_kset;

	rc = -EINVAL;
	config_file = sclp_sd_file_create("config", SD_DI_CONFIG);
	if (!config_file)
		goto err_config;

	return 0;

err_config:
	kset_unregister(sclp_sd_kset);
err_kset:
	sclp_unregister(&sclp_sd_register);

	return rc;
}
device_initcall(sclp_sd_init);
@@ -502,7 +502,10 @@ sclp_tty_init(void)
 	int i;
 	int rc;
 
-	if (!CONSOLE_IS_SCLP)
+	/* z/VM multiplexes the line mode output on the 32xx screen */
+	if (MACHINE_IS_VM && !CONSOLE_IS_SCLP)
+		return 0;
+	if (!sclp.has_linemode)
 		return 0;
 	driver = alloc_tty_driver(1);
 	if (!driver)
@@ -384,6 +384,28 @@ static ssize_t chp_chid_external_show(struct device *dev,
 }
 static DEVICE_ATTR(chid_external, 0444, chp_chid_external_show, NULL);
 
+static ssize_t util_string_read(struct file *filp, struct kobject *kobj,
+				struct bin_attribute *attr, char *buf,
+				loff_t off, size_t count)
+{
+	struct channel_path *chp = to_channelpath(kobj_to_dev(kobj));
+	ssize_t rc;
+
+	mutex_lock(&chp->lock);
+	rc = memory_read_from_buffer(buf, count, &off, chp->desc_fmt3.util_str,
+				     sizeof(chp->desc_fmt3.util_str));
+	mutex_unlock(&chp->lock);
+
+	return rc;
+}
+static BIN_ATTR_RO(util_string,
+		   sizeof(((struct channel_path_desc_fmt3 *)0)->util_str));
+
+static struct bin_attribute *chp_bin_attrs[] = {
+	&bin_attr_util_string,
+	NULL,
+};
+
 static struct attribute *chp_attrs[] = {
 	&dev_attr_status.attr,
 	&dev_attr_configure.attr,
@@ -396,6 +418,7 @@ static struct attribute *chp_attrs[] = {
 };
 static struct attribute_group chp_attr_group = {
 	.attrs = chp_attrs,
+	.bin_attrs = chp_bin_attrs,
 };
 static const struct attribute_group *chp_attr_groups[] = {
 	&chp_attr_group,
@@ -422,7 +445,7 @@ int chp_update_desc(struct channel_path *chp)
 {
 	int rc;
 
-	rc = chsc_determine_base_channel_path_desc(chp->chpid, &chp->desc);
+	rc = chsc_determine_fmt0_channel_path_desc(chp->chpid, &chp->desc);
 	if (rc)
 		return rc;
 
@@ -431,6 +454,7 @@ int chp_update_desc(struct channel_path *chp)
 	 * hypervisors implement the required chsc commands.
 	 */
 	chsc_determine_fmt1_channel_path_desc(chp->chpid, &chp->desc_fmt1);
+	chsc_determine_fmt3_channel_path_desc(chp->chpid, &chp->desc_fmt3);
 	chsc_get_channel_measurement_chars(chp);
 
 	return 0;
@@ -506,20 +530,20 @@ out:
  * On success return a newly allocated copy of the channel-path description
  * data associated with the given channel-path ID. Return %NULL on error.
  */
-struct channel_path_desc *chp_get_chp_desc(struct chp_id chpid)
+struct channel_path_desc_fmt0 *chp_get_chp_desc(struct chp_id chpid)
 {
 	struct channel_path *chp;
-	struct channel_path_desc *desc;
+	struct channel_path_desc_fmt0 *desc;
 
 	chp = chpid_to_chp(chpid);
 	if (!chp)
 		return NULL;
-	desc = kmalloc(sizeof(struct channel_path_desc), GFP_KERNEL);
+	desc = kmalloc(sizeof(*desc), GFP_KERNEL);
 	if (!desc)
 		return NULL;
 
 	mutex_lock(&chp->lock);
-	memcpy(desc, &chp->desc, sizeof(struct channel_path_desc));
+	memcpy(desc, &chp->desc, sizeof(*desc));
 	mutex_unlock(&chp->lock);
 	return desc;
 }
@@ -44,8 +44,9 @@ struct channel_path {
 	struct chp_id chpid;
 	struct mutex lock; /* Serialize access to below members. */
 	int state;
-	struct channel_path_desc desc;
+	struct channel_path_desc_fmt0 desc;
 	struct channel_path_desc_fmt1 desc_fmt1;
+	struct channel_path_desc_fmt3 desc_fmt3;
 	/* Channel-measurement related stuff: */
 	int cmg;
 	int shared;
@@ -61,7 +62,7 @@ static inline struct channel_path *chpid_to_chp(struct chp_id chpid)
 int chp_get_status(struct chp_id chpid);
 u8 chp_get_sch_opm(struct subchannel *sch);
 int chp_is_registered(struct chp_id chpid);
-struct channel_path_desc *chp_get_chp_desc(struct chp_id chpid);
+struct channel_path_desc_fmt0 *chp_get_chp_desc(struct chp_id chpid);
 void chp_remove_cmg_attr(struct channel_path *chp);
 int chp_add_cmg_attr(struct channel_path *chp);
 int chp_update_desc(struct channel_path *chp);
@@ -915,6 +915,8 @@ int chsc_determine_channel_path_desc(struct chp_id chpid, int fmt, int rfmt,
 		return -EINVAL;
 	if ((rfmt == 2) && !css_general_characteristics.cib)
 		return -EINVAL;
+	if ((rfmt == 3) && !css_general_characteristics.util_str)
+		return -EINVAL;
 
 	memset(page, 0, PAGE_SIZE);
 	scpd_area = page;
@@ -940,43 +942,30 @@ int chsc_determine_channel_path_desc(struct chp_id chpid, int fmt, int rfmt,
 }
 EXPORT_SYMBOL_GPL(chsc_determine_channel_path_desc);
 
-int chsc_determine_base_channel_path_desc(struct chp_id chpid,
-					  struct channel_path_desc *desc)
-{
-	struct chsc_scpd *scpd_area;
-	unsigned long flags;
-	int ret;
-
-	spin_lock_irqsave(&chsc_page_lock, flags);
-	scpd_area = chsc_page;
-	ret = chsc_determine_channel_path_desc(chpid, 0, 0, 0, 0, scpd_area);
-	if (ret)
-		goto out;
-
-	memcpy(desc, scpd_area->data, sizeof(*desc));
-out:
-	spin_unlock_irqrestore(&chsc_page_lock, flags);
-	return ret;
+#define chsc_det_chp_desc(FMT, c)					\
+int chsc_determine_fmt##FMT##_channel_path_desc(			\
+	struct chp_id chpid, struct channel_path_desc_fmt##FMT *desc)	\
+{									\
+	struct chsc_scpd *scpd_area;					\
+	unsigned long flags;						\
+	int ret;							\
+									\
+	spin_lock_irqsave(&chsc_page_lock, flags);			\
+	scpd_area = chsc_page;						\
+	ret = chsc_determine_channel_path_desc(chpid, 0, FMT, c, 0,	\
+					       scpd_area);		\
+	if (ret)							\
+		goto out;						\
+									\
+	memcpy(desc, scpd_area->data, sizeof(*desc));			\
+out:									\
+	spin_unlock_irqrestore(&chsc_page_lock, flags);			\
+	return ret;							\
 }
 
-int chsc_determine_fmt1_channel_path_desc(struct chp_id chpid,
-					  struct channel_path_desc_fmt1 *desc)
-{
-	struct chsc_scpd *scpd_area;
-	unsigned long flags;
-	int ret;
-
-	spin_lock_irqsave(&chsc_page_lock, flags);
-	scpd_area = chsc_page;
-	ret = chsc_determine_channel_path_desc(chpid, 0, 1, 1, 0, scpd_area);
-	if (ret)
-		goto out;
-
-	memcpy(desc, scpd_area->data, sizeof(*desc));
-out:
-	spin_unlock_irqrestore(&chsc_page_lock, flags);
-	return ret;
-}
+chsc_det_chp_desc(0, 0)
+chsc_det_chp_desc(1, 1)
+chsc_det_chp_desc(3, 0)
 
 static void
 chsc_initialize_cmg_chars(struct channel_path *chp, u8 cmcv,
@@ -40,6 +40,11 @@ struct channel_path_desc_fmt1 {
 	u32 zeros[2];
 } __attribute__ ((packed));
 
+struct channel_path_desc_fmt3 {
+	struct channel_path_desc_fmt1 fmt1_desc;
+	u8 util_str[64];
+};
+
 struct channel_path;
 
 struct css_chsc_char {
@@ -147,10 +152,12 @@ int __chsc_do_secm(struct channel_subsystem *css, int enable);
 int chsc_chp_vary(struct chp_id chpid, int on);
 int chsc_determine_channel_path_desc(struct chp_id chpid, int fmt, int rfmt,
 				     int c, int m, void *page);
-int chsc_determine_base_channel_path_desc(struct chp_id chpid,
-					  struct channel_path_desc *desc);
+int chsc_determine_fmt0_channel_path_desc(struct chp_id chpid,
+					  struct channel_path_desc_fmt0 *desc);
 int chsc_determine_fmt1_channel_path_desc(struct chp_id chpid,
 					  struct channel_path_desc_fmt1 *desc);
+int chsc_determine_fmt3_channel_path_desc(struct chp_id chpid,
+					  struct channel_path_desc_fmt3 *desc);
 void chsc_chp_online(struct chp_id chpid);
 void chsc_chp_offline(struct chp_id chpid);
 int chsc_get_channel_measurement_chars(struct channel_path *chp);
@@ -1073,8 +1073,7 @@ out_schedule:
 	return 0;
 }
 
-static int
-io_subchannel_remove (struct subchannel *sch)
+static int io_subchannel_remove(struct subchannel *sch)
 {
 	struct io_subchannel_private *io_priv = to_io_private(sch);
 	struct ccw_device *cdev;
@@ -1082,14 +1081,12 @@ io_subchannel_remove (struct subchannel *sch)
 	cdev = sch_get_cdev(sch);
 	if (!cdev)
 		goto out_free;
-	io_subchannel_quiesce(sch);
-	/* Set ccw device to not operational and drop reference. */
-	spin_lock_irq(cdev->ccwlock);
+
+	ccw_device_unregister(cdev);
+	spin_lock_irq(sch->lock);
 	sch_set_cdev(sch, NULL);
 	set_io_private(sch, NULL);
-	cdev->private->state = DEV_STATE_NOT_OPER;
-	spin_unlock_irq(cdev->ccwlock);
-	ccw_device_unregister(cdev);
+	spin_unlock_irq(sch->lock);
 out_free:
 	kfree(io_priv);
 	sysfs_remove_group(&sch->dev.kobj, &io_subchannel_attr_group);
@@ -1721,6 +1718,7 @@ static int ccw_device_remove(struct device *dev)
 {
 	struct ccw_device *cdev = to_ccwdev(dev);
 	struct ccw_driver *cdrv = cdev->drv;
+	struct subchannel *sch;
 	int ret;
 
 	if (cdrv->remove)
@@ -1746,7 +1744,9 @@ static int ccw_device_remove(struct device *dev)
 	ccw_device_set_timeout(cdev, 0);
 	cdev->drv = NULL;
 	cdev->private->int_class = IRQIO_CIO;
+	sch = to_subchannel(cdev->dev.parent);
 	spin_unlock_irq(cdev->ccwlock);
+	io_subchannel_quiesce(sch);
 	__disable_cmf(cdev);
 
 	return 0;
@@ -460,8 +460,8 @@ __u8 ccw_device_get_path_mask(struct ccw_device *cdev)
  * On success return a newly allocated copy of the channel-path description
  * data associated with the given channel path. Return %NULL on error.
  */
-struct channel_path_desc *ccw_device_get_chp_desc(struct ccw_device *cdev,
-						  int chp_idx)
+struct channel_path_desc_fmt0 *ccw_device_get_chp_desc(struct ccw_device *cdev,
+						       int chp_idx)
 {
 	struct subchannel *sch;
 	struct chp_id chpid;
@@ -98,22 +98,6 @@ static inline int do_siga_output(unsigned long schid, unsigned long mask,
 	return cc;
 }
 
-static inline int qdio_check_ccq(struct qdio_q *q, unsigned int ccq)
-{
-	/* all done or next buffer state different */
-	if (ccq == 0 || ccq == 32)
-		return 0;
-	/* no buffer processed */
-	if (ccq == 97)
-		return 1;
-	/* not all buffers processed */
-	if (ccq == 96)
-		return 2;
-	/* notify devices immediately */
-	DBF_ERROR("%4x ccq:%3d", SCH_NO(q), ccq);
-	return -EIO;
-}
-
 /**
  * qdio_do_eqbs - extract buffer states for QEBSM
  * @q: queue to manipulate
@@ -128,7 +112,7 @@ static inline int qdio_check_ccq(struct qdio_q *q, unsigned int ccq)
 static int qdio_do_eqbs(struct qdio_q *q, unsigned char *state,
 			int start, int count, int auto_ack)
 {
-	int rc, tmp_count = count, tmp_start = start, nr = q->nr, retried = 0;
+	int tmp_count = count, tmp_start = start, nr = q->nr;
 	unsigned int ccq = 0;
 
 	qperf_inc(q, eqbs);
@@ -138,34 +122,30 @@ static int qdio_do_eqbs(struct qdio_q *q, unsigned char *state,
 again:
 	ccq = do_eqbs(q->irq_ptr->sch_token, state, nr, &tmp_start, &tmp_count,
 		      auto_ack);
-	rc = qdio_check_ccq(q, ccq);
-	if (!rc)
-		return count - tmp_count;
 
-	if (rc == 1) {
-		DBF_DEV_EVENT(DBF_WARN, q->irq_ptr, "EQBS again:%2d", ccq);
-		goto again;
-	}
-
-	if (rc == 2) {
+	switch (ccq) {
+	case 0:
+	case 32:
+		/* all done, or next buffer state different */
+		return count - tmp_count;
+	case 96:
 		/* not all buffers processed */
 		qperf_inc(q, eqbs_partial);
 		DBF_DEV_EVENT(DBF_WARN, q->irq_ptr, "EQBS part:%02x",
 			tmp_count);
-		/*
-		 * Retry once, if that fails bail out and process the
-		 * extracted buffers before trying again.
-		 */
-		if (!retried++)
-			goto again;
-		else
-			return count - tmp_count;
+		return count - tmp_count;
+	case 97:
+		/* no buffer processed */
+		DBF_DEV_EVENT(DBF_WARN, q->irq_ptr, "EQBS again:%2d", ccq);
+		goto again;
+	default:
+		DBF_ERROR("%4x ccq:%3d", SCH_NO(q), ccq);
+		DBF_ERROR("%4x EQBS ERROR", SCH_NO(q));
+		DBF_ERROR("%3d%3d%2d", count, tmp_count, nr);
+		q->handler(q->irq_ptr->cdev, QDIO_ERROR_GET_BUF_STATE, q->nr,
+			   q->first_to_kick, count, q->irq_ptr->int_parm);
+		return 0;
 	}
-
-	DBF_ERROR("%4x EQBS ERROR", SCH_NO(q));
-	DBF_ERROR("%3d%3d%2d", count, tmp_count, nr);
-	q->handler(q->irq_ptr->cdev, QDIO_ERROR_GET_BUF_STATE,
-		   q->nr, q->first_to_kick, count, q->irq_ptr->int_parm);
-	return 0;
 }
 
 /**
@@ -185,7 +165,6 @@ static int qdio_do_sqbs(struct qdio_q *q, unsigned char state, int start,
 	unsigned int ccq = 0;
 	int tmp_count = count, tmp_start = start;
 	int nr = q->nr;
-	int rc;
 
 	if (!count)
 		return 0;
@@ -195,26 +174,32 @@ static int qdio_do_sqbs(struct qdio_q *q, unsigned char state, int start,
 		nr += q->irq_ptr->nr_input_qs;
 again:
 	ccq = do_sqbs(q->irq_ptr->sch_token, state, nr, &tmp_start, &tmp_count);
-	rc = qdio_check_ccq(q, ccq);
-	if (!rc) {
+
+	switch (ccq) {
+	case 0:
+	case 32:
+		/* all done, or active buffer adapter-owned */
 		WARN_ON_ONCE(tmp_count);
 		return count - tmp_count;
-	}
-
-	if (rc == 1 || rc == 2) {
+	case 96:
+		/* not all buffers processed */
 		DBF_DEV_EVENT(DBF_INFO, q->irq_ptr, "SQBS again:%2d", ccq);
 		qperf_inc(q, sqbs_partial);
 		goto again;
+	default:
+		DBF_ERROR("%4x ccq:%3d", SCH_NO(q), ccq);
+		DBF_ERROR("%4x SQBS ERROR", SCH_NO(q));
+		DBF_ERROR("%3d%3d%2d", count, tmp_count, nr);
+		q->handler(q->irq_ptr->cdev, QDIO_ERROR_SET_BUF_STATE, q->nr,
+			   q->first_to_kick, count, q->irq_ptr->int_parm);
+		return 0;
 	}
-
-	DBF_ERROR("%4x SQBS ERROR", SCH_NO(q));
-	DBF_ERROR("%3d%3d%2d", count, tmp_count, nr);
-	q->handler(q->irq_ptr->cdev, QDIO_ERROR_SET_BUF_STATE,
-		   q->nr, q->first_to_kick, count, q->irq_ptr->int_parm);
-	return 0;
 }
 
-/* returns number of examined buffers and their common state in *state */
+/*
+ * Returns number of examined buffers and their common state in *state.
+ * Requested number of buffers-to-examine must be > 0.
+ */
 static inline int get_buf_states(struct qdio_q *q, unsigned int bufnr,
 				 unsigned char *state, unsigned int count,
 				 int auto_ack, int merge_pending)
@@ -225,17 +210,23 @@ static inline int get_buf_states(struct qdio_q *q, unsigned int bufnr,
 	if (is_qebsm(q))
 		return qdio_do_eqbs(q, state, bufnr, count, auto_ack);
 
-	for (i = 0; i < count; i++) {
-		if (!__state) {
-			__state = q->slsb.val[bufnr];
-			if (merge_pending && __state == SLSB_P_OUTPUT_PENDING)
-				__state = SLSB_P_OUTPUT_EMPTY;
-		} else if (merge_pending) {
-			if ((q->slsb.val[bufnr] & __state) != __state)
-				break;
-		} else if (q->slsb.val[bufnr] != __state)
-			break;
+	/* get initial state: */
+	__state = q->slsb.val[bufnr];
+	if (merge_pending && __state == SLSB_P_OUTPUT_PENDING)
+		__state = SLSB_P_OUTPUT_EMPTY;
+
+	for (i = 1; i < count; i++) {
 		bufnr = next_buf(bufnr);
+
+		/* merge PENDING into EMPTY: */
+		if (merge_pending &&
+		    q->slsb.val[bufnr] == SLSB_P_OUTPUT_PENDING &&
+		    __state == SLSB_P_OUTPUT_EMPTY)
+			continue;
+
+		/* stop if next state differs from initial state: */
+		if (q->slsb.val[bufnr] != __state)
+			break;
 	}
 	*state = __state;
 	return i;
@@ -502,8 +493,8 @@ static inline void inbound_primed(struct qdio_q *q, int count)
 
 static int get_inbound_buffer_frontier(struct qdio_q *q)
 {
-	int count, stop;
 	unsigned char state = 0;
+	int count;
 
 	q->timestamp = get_tod_clock_fast();
 
@@ -512,9 +503,7 @@ static int get_inbound_buffer_frontier(struct qdio_q *q)
 	 * would return 0.
 	 */
 	count = min(atomic_read(&q->nr_buf_used), QDIO_MAX_BUFFERS_MASK);
-	stop = add_buf(q->first_to_check, count);
-
-	if (q->first_to_check == stop)
+	if (!count)
 		goto out;
 
 	/*
@@ -734,8 +723,8 @@ void qdio_inbound_processing(unsigned long data)
 
 static int get_outbound_buffer_frontier(struct qdio_q *q)
 {
-	int count, stop;
 	unsigned char state = 0;
+	int count;
 
 	q->timestamp = get_tod_clock_fast();
 
@@ -751,11 +740,11 @@ static int get_outbound_buffer_frontier(struct qdio_q *q)
 	 * would return 0.
 	 */
 	count = min(atomic_read(&q->nr_buf_used), QDIO_MAX_BUFFERS_MASK);
-	stop = add_buf(q->first_to_check, count);
-	if (q->first_to_check == stop)
+	if (!count)
 		goto out;
 
-	count = get_buf_states(q, q->first_to_check, &state, count, 0, 1);
+	count = get_buf_states(q, q->first_to_check, &state, count, 0,
+			       q->u.out.use_cq);
 	if (!count)
 		goto out;
 
@@ -124,6 +124,11 @@ static void fsm_io_request(struct vfio_ccw_private *private,
 	if (scsw->cmd.fctl & SCSW_FCTL_START_FUNC) {
 		orb = (union orb *)io_region->orb_area;
 
+		/* Don't try to build a cp if transport mode is specified. */
+		if (orb->tm.b) {
+			io_region->ret_code = -EOPNOTSUPP;
+			goto err_out;
+		}
 		io_region->ret_code = cp_init(&private->cp, mdev_dev(mdev),
 					      orb);
 		if (io_region->ret_code)
@@ -1369,7 +1369,7 @@ static void qeth_set_multiple_write_queues(struct qeth_card *card)
 static void qeth_update_from_chp_desc(struct qeth_card *card)
 {
 	struct ccw_device *ccwdev;
-	struct channel_path_desc *chp_dsc;
+	struct channel_path_desc_fmt0 *chp_dsc;
 
 	QETH_DBF_TEXT(SETUP, 2, "chp_desc");
 
@@ -11,7 +11,7 @@ if TTY
 
 config VT
 	bool "Virtual terminal" if EXPERT
-	depends on !S390 && !UML
+	depends on !UML
 	select INPUT
 	default y
 	---help---
@@ -3,7 +3,8 @@
 #
 
 menu "Graphics support"
-	depends on HAS_IOMEM
+
+if HAS_IOMEM
 
 config HAVE_FB_ATMEL
 	bool
@@ -36,6 +37,8 @@ config VIDEOMODE_HELPERS
 config HDMI
 	bool
 
+endif # HAS_IOMEM
+
 if VT
 	source "drivers/video/console/Kconfig"
 endif
@@ -8,7 +8,7 @@ config VGA_CONSOLE
 	bool "VGA text console" if EXPERT || !X86
 	depends on !4xx && !PPC_8xx && !SPARC && !M68K && !PARISC && !SUPERH && \
 		(!ARM || ARCH_FOOTBRIDGE || ARCH_INTEGRATOR || ARCH_NETWINDER) && \
-		!ARM64 && !ARC && !MICROBLAZE && !OPENRISC && !NDS32
+		!ARM64 && !ARC && !MICROBLAZE && !OPENRISC && !NDS32 && !S390
 	default y
 	help
 	  Saying Y here will allow you to use Linux in text mode through a
@@ -84,7 +84,7 @@ config MDA_CONSOLE
 
 config SGI_NEWPORT_CONSOLE
 	tristate "SGI Newport Console support"
-	depends on SGI_IP22
+	depends on SGI_IP22 && HAS_IOMEM
 	select FONT_SUPPORT
 	help
 	  Say Y here if you want the console on the Newport aka XL graphics
@@ -152,7 +152,7 @@ config FRAMEBUFFER_CONSOLE_ROTATION
 
 config STI_CONSOLE
 	bool "STI text console"
-	depends on PARISC
+	depends on PARISC && HAS_IOMEM
 	select FONT_SUPPORT
 	default y
 	help