Merge tag 'samsung-mach-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung into next/soc

Samsung updates for v4.2

- add failure (error) handling for of_iomap(), of_find_device_by_node()
  and kstrdup()

- add common PS_HOLD based poweroff for all Exynos SoCs
- add exynos_get/set_boot_addr() helpers
- constify platform_device_id and irq_domain_ops
- get current parent clock for power domain on/off
- use core_initcall to register power domain driver
- make exynos_core_restart() less verbose

- add coupled CPUidle support for Exynos3250

- fix exynos_boot_secondary() return value on timeout
- fix clk_enable() in s3c24xx adc
- fix missing of_node_put() for power domains

* tag 'samsung-mach-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung: (301 commits)
  ARM: EXYNOS: register power domain driver from core_initcall
  ARM: EXYNOS: use PS_HOLD based poweroff for all supported SoCs
  ARM: SAMSUNG: Constify platform_device_id
  ARM: EXYNOS: Constify irq_domain_ops
  ARM: EXYNOS: add coupled cpuidle support for Exynos3250
  ARM: EXYNOS: add exynos_get_boot_addr() helper
  ARM: EXYNOS: add exynos_set_boot_addr() helper
  ARM: EXYNOS: make exynos_core_restart() less verbose
  ARM: EXYNOS: fix exynos_boot_secondary() return value on timeout
  ARM: EXYNOS: Get current parent clock for power domain on/off
  ARM: SAMSUNG: fix clk_enable() WARNing in S3C24XX ADC
  ARM: EXYNOS: Add missing of_node_put() when parsing power domains
  ARM: EXYNOS: Handle of_find_device_by_node() and kstrdup() failures
  ARM: EXYNOS: Handle of of_iomap() failure
  Linux 4.1-rc4
  ....
Kevin Hilman 2015-06-11 14:44:21 -07:00
commit eec6492861
336 changed files with 3721 additions and 3838 deletions

View File

@ -19,9 +19,10 @@ Optional Properties:
domains.
- clock-names: The following clocks can be specified:
- oscclk: Oscillator clock.
- pclkN, clkN: Pairs of parent of input clock and input clock to the
devices in this power domain. Maximum of 4 pairs (N = 0 to 3)
are supported currently.
- clkN: Input clocks to the devices in this power domain. These clocks
will be reparented to oscclk before switching power domain off.
Their original parent will be brought back after turning on
the domain. Maximum of 4 clocks (N = 0 to 3) are supported.
- asbN: Clocks required by asynchronous bridges (ASB) present in
the power domain. These clock should be enabled during power
domain on/off operations.

View File

@ -8,8 +8,8 @@ Required properties:
is not Linux-only, but in case of Linux, see the "m25p_ids"
table in drivers/mtd/devices/m25p80.c for the list of supported
chips.
Must also include "nor-jedec" for any SPI NOR flash that can be
identified by the JEDEC READ ID opcode (0x9F).
Must also include "jedec,spi-nor" for any SPI NOR flash that can
be identified by the JEDEC READ ID opcode (0x9F).
- reg : Chip-Select number
- spi-max-frequency : Maximum frequency of the SPI bus the chip can operate at
@ -25,7 +25,7 @@ Example:
flash: m25p80@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "spansion,m25p80", "nor-jedec";
compatible = "spansion,m25p80", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <40000000>;
m25p,fast-read;

View File

@ -198,6 +198,9 @@ TTY_IO_ERROR If set, causes all subsequent userspace read/write
TTY_OTHER_CLOSED Device is a pty and the other side has closed.
TTY_OTHER_DONE Device is a pty and the other side has closed and
all pending input processing has been completed.
TTY_NO_WRITE_SPLIT Prevent driver from splitting up writes into
smaller chunks.
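
For illustration only (not part of this patch set), a minimal sketch of how a driver might request this behaviour; example_activate() is a made-up name, and the set_bit()-on-tty->flags idiom is assumed from how the other TTY_* bits listed above are used:

#include <linux/tty.h>

/* Hypothetical helper: opt this tty out of write chunking. */
static void example_activate(struct tty_struct *tty)
{
        /* The tty core consults this bit when sizing write chunks. */
        set_bit(TTY_NO_WRITE_SPLIT, &tty->flags);
}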

View File

@ -974,7 +974,7 @@ S: Maintained
ARM/CORTINA SYSTEMS GEMINI ARM ARCHITECTURE
M: Hans Ulli Kroll <ulli.kroll@googlemail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
T: git git://git.berlios.de/gemini-board
T: git git://github.com/ulli-kroll/linux.git
S: Maintained
F: arch/arm/mach-gemini/
@ -1201,7 +1201,7 @@ ARM/MAGICIAN MACHINE SUPPORT
M: Philipp Zabel <philipp.zabel@gmail.com>
S: Maintained
ARM/Marvell Armada 370 and Armada XP SOC support
ARM/Marvell Kirkwood and Armada 370, 375, 38x, XP SOC support
M: Jason Cooper <jason@lakedaemon.net>
M: Andrew Lunn <andrew@lunn.ch>
M: Gregory Clement <gregory.clement@free-electrons.com>
@ -1210,12 +1210,17 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-mvebu/
F: drivers/rtc/rtc-armada38x.c
F: arch/arm/boot/dts/armada*
F: arch/arm/boot/dts/kirkwood*
ARM/Marvell Berlin SoC support
M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-berlin/
F: arch/arm/boot/dts/berlin*
ARM/Marvell Dove/MV78xx0/Orion SOC support
M: Jason Cooper <jason@lakedaemon.net>
@ -1228,6 +1233,9 @@ F: arch/arm/mach-dove/
F: arch/arm/mach-mv78xx0/
F: arch/arm/mach-orion5x/
F: arch/arm/plat-orion/
F: arch/arm/boot/dts/dove*
F: arch/arm/boot/dts/orion5x*
ARM/Orion SoC/Technologic Systems TS-78xx platform support
M: Alexander Clouter <alex@digriz.org.uk>
@ -1379,6 +1387,7 @@ N: rockchip
ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
M: Kukjin Kim <kgene@kernel.org>
M: Krzysztof Kozlowski <k.kozlowski@samsung.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
S: Maintained
@ -1967,7 +1976,7 @@ S: Maintained
F: drivers/net/wireless/b43legacy/
BACKLIGHT CLASS/SUBSYSTEM
M: Jingoo Han <jg1.han@samsung.com>
M: Jingoo Han <jingoohan1@gmail.com>
M: Lee Jones <lee.jones@linaro.org>
S: Maintained
F: drivers/video/backlight/
@ -3951,7 +3960,7 @@ F: drivers/extcon/
F: Documentation/extcon/
EXYNOS DP DRIVER
M: Jingoo Han <jg1.han@samsung.com>
M: Jingoo Han <jingoohan1@gmail.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
F: drivers/gpu/drm/exynos/exynos_dp*
@ -4410,11 +4419,10 @@ F: fs/gfs2/
F: include/uapi/linux/gfs2_ondisk.h
GIGASET ISDN DRIVERS
M: Hansjoerg Lipp <hjlipp@web.de>
M: Tilman Schmidt <tilman@imap.cc>
M: Paul Bolle <pebolle@tiscali.nl>
L: gigaset307x-common@lists.sourceforge.net
W: http://gigaset307x.sourceforge.net/
S: Maintained
S: Odd Fixes
F: Documentation/isdn/README.gigaset
F: drivers/isdn/gigaset/
F: include/uapi/linux/gigaset_dev.h
@ -5087,7 +5095,7 @@ M: Hal Rosenstock <hal.rosenstock@gmail.com>
L: linux-rdma@vger.kernel.org
W: http://www.openfabrics.org/
Q: http://patchwork.kernel.org/project/linux-rdma/list/
T: git git://github.com/dledford/linux.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma.git
S: Supported
F: Documentation/infiniband/
F: drivers/infiniband/
@ -6992,6 +7000,17 @@ T: git git://git.rocketboards.org/linux-socfpga-next.git
S: Maintained
F: arch/nios2/
NOKIA N900 POWER SUPPLY DRIVERS
M: Pali Rohár <pali.rohar@gmail.com>
S: Maintained
F: include/linux/power/bq2415x_charger.h
F: include/linux/power/bq27x00_battery.h
F: include/linux/power/isp1704_charger.h
F: drivers/power/bq2415x_charger.c
F: drivers/power/bq27x00_battery.c
F: drivers/power/isp1704_charger.c
F: drivers/power/rx51_battery.c
NTB DRIVER
M: Jon Mason <jdmason@kudzu.us>
M: Dave Jiang <dave.jiang@intel.com>
@ -7580,7 +7599,7 @@ S: Maintained
F: drivers/pci/host/*rcar*
PCI DRIVER FOR SAMSUNG EXYNOS
M: Jingoo Han <jg1.han@samsung.com>
M: Jingoo Han <jingoohan1@gmail.com>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
@ -7588,7 +7607,7 @@ S: Maintained
F: drivers/pci/host/pci-exynos.c
PCI DRIVER FOR SYNOPSIS DESIGNWARE
M: Jingoo Han <jg1.han@samsung.com>
M: Jingoo Han <jingoohan1@gmail.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: drivers/pci/host/*designware*
@ -8544,7 +8563,7 @@ S: Supported
F: sound/soc/samsung/
SAMSUNG FRAMEBUFFER DRIVER
M: Jingoo Han <jg1.han@samsung.com>
M: Jingoo Han <jingoohan1@gmail.com>
L: linux-fbdev@vger.kernel.org
S: Maintained
F: drivers/video/fbdev/s3c-fb.c
@ -8849,10 +8868,11 @@ W: http://www.emulex.com
S: Supported
F: drivers/scsi/be2iscsi/
SERVER ENGINES 10Gbps NIC - BladeEngine 2 DRIVER
M: Sathya Perla <sathya.perla@emulex.com>
M: Subbu Seetharaman <subbu.seetharaman@emulex.com>
M: Ajit Khaparde <ajit.khaparde@emulex.com>
Emulex 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER
M: Sathya Perla <sathya.perla@avagotech.com>
M: Ajit Khaparde <ajit.khaparde@avagotech.com>
M: Padmanabh Ratnakar <padmanabh.ratnakar@avagotech.com>
M: Sriharsha Basavapatna <sriharsha.basavapatna@avagotech.com>
L: netdev@vger.kernel.org
W: http://www.emulex.com
S: Supported

View File

@ -1,7 +1,7 @@
VERSION = 4
PATCHLEVEL = 1
SUBLEVEL = 0
EXTRAVERSION = -rc3
EXTRAVERSION = -rc4
NAME = Hurr durr I'ma sheep
# *DOCUMENTATION*

View File

@ -2,19 +2,6 @@ menu "Kernel hacking"
source "lib/Kconfig.debug"
config EARLY_PRINTK
bool "Early printk" if EMBEDDED
default y
help
Write kernel log output directly into the VGA buffer or to a serial
port.
This is useful for kernel debugging when your machine crashes very
early before the console code is initialized. For normal operation
it is not recommended because it looks ugly and doesn't cooperate
with klogd/syslogd or the X server. You should normally say N here,
unless you want to debug such a crash.
config 16KSTACKS
bool "Use 16Kb for kernel stacks instead of 8Kb"
help

View File

@ -99,7 +99,7 @@ static inline void atomic_##op(int i, atomic_t *v) \
atomic_ops_unlock(flags); \
}
#define ATOMIC_OP_RETURN(op, c_op) \
#define ATOMIC_OP_RETURN(op, c_op, asm_op) \
static inline int atomic_##op##_return(int i, atomic_t *v) \
{ \
unsigned long flags; \

View File

@ -266,7 +266,7 @@ static inline void __cache_line_loop(unsigned long paddr, unsigned long vaddr,
* Machine specific helpers for Entire D-Cache or Per Line ops
*/
static unsigned int __before_dc_op(const int op)
static inline unsigned int __before_dc_op(const int op)
{
unsigned int reg = reg;
@ -284,7 +284,7 @@ static unsigned int __before_dc_op(const int op)
return reg;
}
static void __after_dc_op(const int op, unsigned int reg)
static inline void __after_dc_op(const int op, unsigned int reg)
{
if (op & OP_FLUSH) /* flush / flush-n-inv both wait */
while (read_aux_reg(ARC_REG_DC_CTRL) & DC_CTRL_FLUSH_STATUS);

View File

@ -69,7 +69,7 @@
mainpll: mainpll {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <2000000000>;
clock-frequency = <1000000000>;
};
/* 25 MHz reference crystal */
refclk: oscillator {

View File

@ -585,7 +585,7 @@
mainpll: mainpll {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <2000000000>;
clock-frequency = <1000000000>;
};
/* 25 MHz reference crystal */

View File

@ -502,7 +502,7 @@
mainpll: mainpll {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <2000000000>;
clock-frequency = <1000000000>;
};
};
};

View File

@ -87,6 +87,7 @@
/* connect xtal input to 25MHz reference */
clocks = <&ref25>;
clock-names = "xtal";
/* connect xtal input as source of pll0 and pll1 */
silabs,pll-source = <0 0>, <1 0>;

View File

@ -711,6 +711,7 @@
num-slots = <1>;
broken-cd;
cap-sdio-irq;
keep-power-in-suspend;
card-detect-delay = <200>;
clock-frequency = <400000000>;
samsung,dw-mshc-ciu-div = <1>;

View File

@ -674,6 +674,7 @@
num-slots = <1>;
broken-cd;
cap-sdio-irq;
keep-power-in-suspend;
card-detect-delay = <200>;
clock-frequency = <400000000>;
samsung,dw-mshc-ciu-div = <1>;

View File

@ -826,7 +826,7 @@
<&tegra_car TEGRA124_CLK_PLL_U>,
<&tegra_car TEGRA124_CLK_USBD>;
clock-names = "reg", "pll_u", "utmi-pads";
resets = <&tegra_car 59>, <&tegra_car 22>;
resets = <&tegra_car 22>, <&tegra_car 22>;
reset-names = "usb", "utmi-pads";
nvidia,hssync-start-delay = <0>;
nvidia,idle-wait-delay = <17>;
@ -838,6 +838,7 @@
nvidia,hssquelch-level = <2>;
nvidia,hsdiscon-level = <5>;
nvidia,xcvr-hsslew = <12>;
nvidia,has-utmi-pad-registers;
status = "disabled";
};
@ -862,7 +863,7 @@
<&tegra_car TEGRA124_CLK_PLL_U>,
<&tegra_car TEGRA124_CLK_USBD>;
clock-names = "reg", "pll_u", "utmi-pads";
resets = <&tegra_car 22>, <&tegra_car 22>;
resets = <&tegra_car 58>, <&tegra_car 22>;
reset-names = "usb", "utmi-pads";
nvidia,hssync-start-delay = <0>;
nvidia,idle-wait-delay = <17>;
@ -874,7 +875,6 @@
nvidia,hssquelch-level = <2>;
nvidia,hsdiscon-level = <5>;
nvidia,xcvr-hsslew = <12>;
nvidia,has-utmi-pad-registers;
status = "disabled";
};
@ -899,7 +899,7 @@
<&tegra_car TEGRA124_CLK_PLL_U>,
<&tegra_car TEGRA124_CLK_USBD>;
clock-names = "reg", "pll_u", "utmi-pads";
resets = <&tegra_car 58>, <&tegra_car 22>;
resets = <&tegra_car 59>, <&tegra_car 22>;
reset-names = "usb", "utmi-pads";
nvidia,hssync-start-delay = <0>;
nvidia,idle-wait-delay = <17>;

View File

@ -191,6 +191,7 @@
compatible = "arm,cortex-a15-pmu";
interrupts = <0 68 4>,
<0 69 4>;
interrupt-affinity = <&cpu0>, <&cpu1>;
};
oscclk6a: oscclk6a {

View File

@ -33,28 +33,28 @@
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
A9_0: cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <0>;
next-level-cache = <&L2>;
};
cpu@1 {
A9_1: cpu@1 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <1>;
next-level-cache = <&L2>;
};
cpu@2 {
A9_2: cpu@2 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <2>;
next-level-cache = <&L2>;
};
cpu@3 {
A9_3: cpu@3 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <3>;
@ -170,6 +170,7 @@
compatible = "arm,pl310-cache";
reg = <0x1e00a000 0x1000>;
interrupts = <0 43 4>;
cache-unified;
cache-level = <2>;
arm,data-latency = <1 1 1>;
arm,tag-latency = <1 1 1>;
@ -181,6 +182,8 @@
<0 61 4>,
<0 62 4>,
<0 63 4>;
interrupt-affinity = <&A9_0>, <&A9_1>, <&A9_2>, <&A9_3>;
};
dcc {

View File

@ -33,6 +33,10 @@ struct firmware_ops {
* Sets boot address of specified physical CPU
*/
int (*set_cpu_boot_addr)(int cpu, unsigned long boot_addr);
/*
* Gets boot address of specified physical CPU
*/
int (*get_cpu_boot_addr)(int cpu, unsigned long *boot_addr);
/*
* Boots specified physical CPU
*/
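
As a hedged illustration (not taken from this diff): callers reach these hooks through the call_firmware_op() macro, which returns -ENOSYS when no handler is installed, and the exynos_get_boot_addr() helper added later in this merge follows the same pattern before falling back to a raw boot register. example_read_boot_addr() below is a hypothetical name.

#include <linux/errno.h>
#include <asm/firmware.h>

/* Hypothetical caller: prefer the firmware op, report when it is absent. */
static int example_read_boot_addr(int cpu, unsigned long *boot_addr)
{
        int ret = call_firmware_op(get_cpu_boot_addr, cpu, boot_addr);

        if (ret == -ENOSYS) {
                /* No firmware registered: a real platform would read
                 * its own boot register here instead. */
                return -ENODEV;
        }
        return ret;
}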

View File

@ -159,9 +159,13 @@ extern void exynos_enter_aftr(void);
extern struct cpuidle_exynos_data cpuidle_coupled_exynos_data;
extern void exynos_set_delayed_reset_assertion(bool enable);
extern void s5p_init_cpu(void __iomem *cpuid_addr);
extern unsigned int samsung_rev(void);
extern void __iomem *cpu_boot_reg_base(void);
extern void exynos_core_restart(u32 core_id);
extern int exynos_set_boot_addr(u32 core_id, unsigned long boot_addr);
extern int exynos_get_boot_addr(u32 core_id, unsigned long *boot_addr);
static inline void pmu_raw_writel(u32 val, u32 offset)
{

View File

@ -166,6 +166,33 @@ static void __init exynos_init_io(void)
exynos_map_io();
}
/*
* Set or clear the USE_DELAYED_RESET_ASSERTION option. Used by smp code
* and suspend.
*
* This is necessary only on Exynos4 SoCs. When system is running
* USE_DELAYED_RESET_ASSERTION should be set so the ARM CLK clock down
* feature could properly detect global idle state when secondary CPU is
* powered down.
*
* However this should not be set when such system is going into suspend.
*/
void exynos_set_delayed_reset_assertion(bool enable)
{
if (of_machine_is_compatible("samsung,exynos4")) {
unsigned int tmp, core_id;
for (core_id = 0; core_id < num_possible_cpus(); core_id++) {
tmp = pmu_raw_readl(EXYNOS_ARM_CORE_OPTION(core_id));
if (enable)
tmp |= S5P_USE_DELAYED_RESET_ASSERTION;
else
tmp &= ~(S5P_USE_DELAYED_RESET_ASSERTION);
pmu_raw_writel(tmp, EXYNOS_ARM_CORE_OPTION(core_id));
}
}
}
/*
* Apparently, these SoCs are not able to wake-up from suspend using
* the PMU. Too bad. Should they suddenly become capable of such a
@ -207,7 +234,8 @@ static void __init exynos_dt_machine_init(void)
exynos_sysram_init();
#if defined(CONFIG_SMP) && defined(CONFIG_ARM_EXYNOS_CPUIDLE)
if (of_machine_is_compatible("samsung,exynos4210"))
if (of_machine_is_compatible("samsung,exynos4210") ||
of_machine_is_compatible("samsung,exynos3250"))
exynos_cpuidle.dev.platform_data = &cpuidle_coupled_exynos_data;
#endif
if (of_machine_is_compatible("samsung,exynos4210") ||

View File

@ -49,6 +49,7 @@ static int exynos_do_idle(unsigned long mode)
sysram_ns_base_addr + 0x24);
__raw_writel(EXYNOS_AFTR_MAGIC, sysram_ns_base_addr + 0x20);
if (soc_is_exynos3250()) {
flush_cache_all();
exynos_smc(SMC_CMD_SAVE, OP_TYPE_CORE,
SMC_POWERSTATE_IDLE, 0);
exynos_smc(SMC_CMD_SHUTDOWN, OP_TYPE_CLUSTER,
@ -104,6 +105,22 @@ static int exynos_set_cpu_boot_addr(int cpu, unsigned long boot_addr)
return 0;
}
static int exynos_get_cpu_boot_addr(int cpu, unsigned long *boot_addr)
{
void __iomem *boot_reg;
if (!sysram_ns_base_addr)
return -ENODEV;
boot_reg = sysram_ns_base_addr + 0x1c;
if (soc_is_exynos4412())
boot_reg += 4 * cpu;
*boot_addr = __raw_readl(boot_reg);
return 0;
}
static int exynos_cpu_suspend(unsigned long arg)
{
flush_cache_all();
@ -138,6 +155,7 @@ static int exynos_resume(void)
static const struct firmware_ops exynos_firmware_ops = {
.do_idle = IS_ENABLED(CONFIG_EXYNOS_CPU_SUSPEND) ? exynos_do_idle : NULL,
.set_cpu_boot_addr = exynos_set_cpu_boot_addr,
.get_cpu_boot_addr = exynos_get_cpu_boot_addr,
.cpu_boot = exynos_cpu_boot,
.suspend = IS_ENABLED(CONFIG_PM_SLEEP) ? exynos_suspend : NULL,
.resume = IS_ENABLED(CONFIG_EXYNOS_CPU_SUSPEND) ? exynos_resume : NULL,

View File

@ -34,30 +34,6 @@
extern void exynos4_secondary_startup(void);
/*
* Set or clear the USE_DELAYED_RESET_ASSERTION option, set on Exynos4 SoCs
* during hot-(un)plugging CPUx.
*
* The feature can be cleared safely during first boot of secondary CPU.
*
* Exynos4 SoCs require setting USE_DELAYED_RESET_ASSERTION during powering
* down a CPU so the CPU idle clock down feature could properly detect global
* idle state when CPUx is off.
*/
static void exynos_set_delayed_reset_assertion(u32 core_id, bool enable)
{
if (soc_is_exynos4()) {
unsigned int tmp;
tmp = pmu_raw_readl(EXYNOS_ARM_CORE_OPTION(core_id));
if (enable)
tmp |= S5P_USE_DELAYED_RESET_ASSERTION;
else
tmp &= ~(S5P_USE_DELAYED_RESET_ASSERTION);
pmu_raw_writel(tmp, EXYNOS_ARM_CORE_OPTION(core_id));
}
}
#ifdef CONFIG_HOTPLUG_CPU
static inline void cpu_leave_lowpower(u32 core_id)
{
@ -73,8 +49,6 @@ static inline void cpu_leave_lowpower(u32 core_id)
: "=&r" (v)
: "Ir" (CR_C), "Ir" (0x40)
: "cc");
exynos_set_delayed_reset_assertion(core_id, false);
}
static inline void platform_do_lowpower(unsigned int cpu, int *spurious)
@ -87,14 +61,6 @@ static inline void platform_do_lowpower(unsigned int cpu, int *spurious)
/* Turn the CPU off on next WFI instruction. */
exynos_cpu_power_down(core_id);
/*
* Exynos4 SoCs require setting
* USE_DELAYED_RESET_ASSERTION so the CPU idle
* clock down feature could properly detect
* global idle state when CPUx is off.
*/
exynos_set_delayed_reset_assertion(core_id, true);
wfi();
if (pen_release == core_id) {
@ -203,7 +169,7 @@ int exynos_cluster_power_state(int cluster)
S5P_CORE_LOCAL_PWR_EN);
}
void __iomem *cpu_boot_reg_base(void)
static void __iomem *cpu_boot_reg_base(void)
{
if (soc_is_exynos4210() && samsung_rev() == EXYNOS4210_REV_1_1)
return pmu_base_addr + S5P_INFORM5;
@ -229,7 +195,7 @@ static inline void __iomem *cpu_boot_reg(int cpu)
*
* Currently this is needed only when booting secondary CPU on Exynos3250.
*/
static void exynos_core_restart(u32 core_id)
void exynos_core_restart(u32 core_id)
{
u32 val;
@ -244,7 +210,6 @@ static void exynos_core_restart(u32 core_id)
val |= S5P_CORE_WAKEUP_FROM_LOCAL_CFG;
pmu_raw_writel(val, EXYNOS_ARM_CORE_STATUS(core_id));
pr_info("CPU%u: Software reset\n", core_id);
pmu_raw_writel(EXYNOS_CORE_PO_RESET(core_id), EXYNOS_SWRESET);
}
@ -282,6 +247,56 @@ static void exynos_secondary_init(unsigned int cpu)
spin_unlock(&boot_lock);
}
int exynos_set_boot_addr(u32 core_id, unsigned long boot_addr)
{
int ret;
/*
* Try to set boot address using firmware first
* and fall back to boot register if it fails.
*/
ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr);
if (ret && ret != -ENOSYS)
goto fail;
if (ret == -ENOSYS) {
void __iomem *boot_reg = cpu_boot_reg(core_id);
if (IS_ERR(boot_reg)) {
ret = PTR_ERR(boot_reg);
goto fail;
}
__raw_writel(boot_addr, boot_reg);
ret = 0;
}
fail:
return ret;
}
int exynos_get_boot_addr(u32 core_id, unsigned long *boot_addr)
{
int ret;
/*
* Try to get boot address using firmware first
* and fall back to boot register if it fails.
*/
ret = call_firmware_op(get_cpu_boot_addr, core_id, boot_addr);
if (ret && ret != -ENOSYS)
goto fail;
if (ret == -ENOSYS) {
void __iomem *boot_reg = cpu_boot_reg(core_id);
if (IS_ERR(boot_reg)) {
ret = PTR_ERR(boot_reg);
goto fail;
}
*boot_addr = __raw_readl(boot_reg);
ret = 0;
}
fail:
return ret;
}
static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
unsigned long timeout;
@ -341,22 +356,9 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
boot_addr = virt_to_phys(exynos4_secondary_startup);
/*
* Try to set boot address using firmware first
* and fall back to boot register if it fails.
*/
ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr);
if (ret && ret != -ENOSYS)
ret = exynos_set_boot_addr(core_id, boot_addr);
if (ret)
goto fail;
if (ret == -ENOSYS) {
void __iomem *boot_reg = cpu_boot_reg(core_id);
if (IS_ERR(boot_reg)) {
ret = PTR_ERR(boot_reg);
goto fail;
}
__raw_writel(boot_addr, boot_reg);
}
call_firmware_op(cpu_boot, core_id);
@ -371,8 +373,8 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
udelay(10);
}
/* No harm if this is called during first boot of secondary CPU */
exynos_set_delayed_reset_assertion(core_id, false);
if (pen_release != -1)
ret = -ETIMEDOUT;
/*
* now the secondary core is starting up let it run its
@ -420,6 +422,8 @@ static void __init exynos_smp_prepare_cpus(unsigned int max_cpus)
exynos_sysram_init();
exynos_set_delayed_reset_assertion(true);
if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
scu_enable(scu_base_addr());
@ -442,16 +446,9 @@ static void __init exynos_smp_prepare_cpus(unsigned int max_cpus)
core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
boot_addr = virt_to_phys(exynos4_secondary_startup);
ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr);
if (ret && ret != -ENOSYS)
ret = exynos_set_boot_addr(core_id, boot_addr);
if (ret)
break;
if (ret == -ENOSYS) {
void __iomem *boot_reg = cpu_boot_reg(core_id);
if (IS_ERR(boot_reg))
break;
__raw_writel(boot_addr, boot_reg);
}
}
}

View File

@ -22,6 +22,7 @@
#include <asm/firmware.h>
#include <asm/smp_scu.h>
#include <asm/suspend.h>
#include <asm/cacheflush.h>
#include <mach/map.h>
@ -209,6 +210,8 @@ static int exynos_cpu0_enter_aftr(void)
* sequence, let's wait for one of these to happen
*/
while (exynos_cpu_power_state(1)) {
unsigned long boot_addr;
/*
* The other cpu may skip idle and boot back
* up again
@ -221,7 +224,11 @@ static int exynos_cpu0_enter_aftr(void)
* boot back up again, getting stuck in the
* boot rom code
*/
if (__raw_readl(cpu_boot_reg_base()) == 0)
ret = exynos_get_boot_addr(1, &boot_addr);
if (ret)
goto fail;
ret = -1;
if (boot_addr == 0)
goto abort;
cpu_relax();
@ -233,11 +240,14 @@ static int exynos_cpu0_enter_aftr(void)
abort:
if (cpu_online(1)) {
unsigned long boot_addr = virt_to_phys(exynos_cpu_resume);
/*
* Set the boot vector to something non-zero
*/
__raw_writel(virt_to_phys(exynos_cpu_resume),
cpu_boot_reg_base());
ret = exynos_set_boot_addr(1, boot_addr);
if (ret)
goto fail;
dsb();
/*
@ -247,22 +257,42 @@ abort:
while (exynos_cpu_power_state(1) != S5P_CORE_LOCAL_PWR_EN)
cpu_relax();
if (soc_is_exynos3250()) {
while (!pmu_raw_readl(S5P_PMU_SPARE2) &&
!atomic_read(&cpu1_wakeup))
cpu_relax();
if (!atomic_read(&cpu1_wakeup))
exynos_core_restart(1);
}
while (!atomic_read(&cpu1_wakeup)) {
smp_rmb();
/*
* Poke cpu1 out of the boot rom
*/
__raw_writel(virt_to_phys(exynos_cpu_resume),
cpu_boot_reg_base());
ret = exynos_set_boot_addr(1, boot_addr);
if (ret)
goto fail;
call_firmware_op(cpu_boot, 1);
if (soc_is_exynos3250())
dsb_sev();
else
arch_send_wakeup_ipi_mask(cpumask_of(1));
}
}
fail:
return ret;
}
static int exynos_wfi_finisher(unsigned long flags)
{
if (soc_is_exynos3250())
flush_cache_all();
cpu_do_idle();
return -1;
@ -283,6 +313,9 @@ static int exynos_cpu1_powerdown(void)
*/
exynos_cpu_power_down(1);
if (soc_is_exynos3250())
pmu_raw_writel(0, S5P_PMU_SPARE2);
ret = cpu_suspend(0, exynos_wfi_finisher);
cpu_pm_exit();
@ -299,7 +332,9 @@ cpu1_aborted:
static void exynos_pre_enter_aftr(void)
{
__raw_writel(virt_to_phys(exynos_cpu_resume), cpu_boot_reg_base());
unsigned long boot_addr = virt_to_phys(exynos_cpu_resume);
(void)exynos_set_boot_addr(1, boot_addr);
}
static void exynos_post_enter_aftr(void)

View File

@ -62,6 +62,7 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
if (IS_ERR(pd->clk[i]))
break;
pd->pclk[i] = clk_get_parent(pd->clk[i]);
if (clk_set_parent(pd->clk[i], pd->oscclk))
pr_err("%s: error setting oscclk as parent to clock %d\n",
pd->name, i);
@ -90,6 +91,9 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
if (IS_ERR(pd->clk[i]))
break;
if (IS_ERR(pd->pclk[i]))
continue; /* Skip on first power up */
if (clk_set_parent(pd->clk[i], pd->pclk[i]))
pr_err("%s: error setting parent to clock%d\n",
pd->name, i);
@ -117,27 +121,37 @@ static int exynos_pd_power_off(struct generic_pm_domain *domain)
static __init int exynos4_pm_init_power_domain(void)
{
struct platform_device *pdev;
struct device_node *np;
for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") {
struct exynos_pm_domain *pd;
int on, i;
struct device *dev;
pdev = of_find_device_by_node(np);
dev = &pdev->dev;
pd = kzalloc(sizeof(*pd), GFP_KERNEL);
if (!pd) {
pr_err("%s: failed to allocate memory for domain\n",
__func__);
of_node_put(np);
return -ENOMEM;
}
pd->pd.name = kstrdup_const(strrchr(np->full_name, '/') + 1,
GFP_KERNEL);
if (!pd->pd.name) {
kfree(pd);
of_node_put(np);
return -ENOMEM;
}
pd->pd.name = kstrdup(dev_name(dev), GFP_KERNEL);
pd->name = pd->pd.name;
pd->base = of_iomap(np, 0);
if (!pd->base) {
pr_warn("%s: failed to map memory\n", __func__);
kfree(pd->pd.name);
kfree(pd);
of_node_put(np);
continue;
}
pd->pd.power_off = exynos_pd_power_off;
pd->pd.power_on = exynos_pd_power_on;
@ -145,12 +159,12 @@ static __init int exynos4_pm_init_power_domain(void)
char clk_name[8];
snprintf(clk_name, sizeof(clk_name), "asb%d", i);
pd->asb_clk[i] = clk_get(dev, clk_name);
pd->asb_clk[i] = of_clk_get_by_name(np, clk_name);
if (IS_ERR(pd->asb_clk[i]))
break;
}
pd->oscclk = clk_get(dev, "oscclk");
pd->oscclk = of_clk_get_by_name(np, "oscclk");
if (IS_ERR(pd->oscclk))
goto no_clk;
@ -158,16 +172,14 @@ static __init int exynos4_pm_init_power_domain(void)
char clk_name[8];
snprintf(clk_name, sizeof(clk_name), "clk%d", i);
pd->clk[i] = clk_get(dev, clk_name);
pd->clk[i] = of_clk_get_by_name(np, clk_name);
if (IS_ERR(pd->clk[i]))
break;
snprintf(clk_name, sizeof(clk_name), "pclk%d", i);
pd->pclk[i] = clk_get(dev, clk_name);
if (IS_ERR(pd->pclk[i])) {
clk_put(pd->clk[i]);
pd->clk[i] = ERR_PTR(-EINVAL);
break;
}
/*
* Skip setting parent on first power up.
* The parent at this time may not be useful at all.
*/
pd->pclk[i] = ERR_PTR(-EINVAL);
}
if (IS_ERR(pd->clk[0]))
@ -188,16 +200,16 @@ no_clk:
args.np = np;
args.args_count = 0;
child_domain = of_genpd_get_from_provider(&args);
if (!child_domain)
continue;
if (IS_ERR(child_domain))
goto next_pd;
if (of_parse_phandle_with_args(np, "power-domains",
"#power-domain-cells", 0, &args) != 0)
continue;
goto next_pd;
parent_domain = of_genpd_get_from_provider(&args);
if (!parent_domain)
continue;
if (IS_ERR(parent_domain))
goto next_pd;
if (pm_genpd_add_subdomain(parent_domain, child_domain))
pr_warn("%s failed to add subdomain: %s\n",
@ -205,9 +217,10 @@ no_clk:
else
pr_info("%s has as child subdomain: %s.\n",
parent_domain->name, child_domain->name);
next_pd:
of_node_put(np);
}
return 0;
}
arch_initcall(exynos4_pm_init_power_domain);
core_initcall(exynos4_pm_init_power_domain);

View File

@ -681,7 +681,7 @@ static unsigned int const exynos5420_list_disable_pmu_reg[] = {
EXYNOS5420_CMU_RESET_FSYS_SYS_PWR_REG,
};
static void exynos5_power_off(void)
static void exynos_power_off(void)
{
unsigned int tmp;
@ -872,8 +872,6 @@ static void exynos5420_pmu_init(void)
EXYNOS5420_ARM_INTR_SPREAD_USE_STANDBYWFI);
pmu_raw_writel(0x1, EXYNOS5420_UP_SCHEDULER);
pm_power_off = exynos5_power_off;
pr_info("EXYNOS5420 PMU initialized\n");
}
@ -984,6 +982,8 @@ static int exynos_pmu_probe(struct platform_device *pdev)
if (ret)
dev_warn(dev, "can't register restart handler err=%d\n", ret);
pm_power_off = exynos_power_off;
dev_dbg(dev, "Exynos PMU Driver probe done\n");
return 0;
}

View File

@ -223,7 +223,7 @@ static int exynos_pmu_domain_alloc(struct irq_domain *domain,
return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &parent_args);
}
static struct irq_domain_ops exynos_pmu_domain_ops = {
static const struct irq_domain_ops exynos_pmu_domain_ops = {
.xlate = exynos_pmu_domain_xlate,
.alloc = exynos_pmu_domain_alloc,
.free = irq_domain_free_irqs_common,
@ -342,6 +342,8 @@ static void exynos_pm_enter_sleep_mode(void)
static void exynos_pm_prepare(void)
{
exynos_set_delayed_reset_assertion(false);
/* Set wake-up mask registers */
exynos_pm_set_wakeup_mask();
@ -482,6 +484,7 @@ early_wakeup:
/* Clear SLEEP mode set in INFORM1 */
pmu_raw_writel(0x0, S5P_INFORM1);
exynos_set_delayed_reset_assertion(true);
}
static void exynos3250_pm_resume(void)
@ -723,8 +726,10 @@ void __init exynos_pm_init(void)
return;
}
if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL)))
if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL))) {
pr_warn("Outdated DT detected, suspend/resume will NOT work\n");
return;
}
pm_data = (const struct exynos_pm_data *) match->data;

View File

@ -12,6 +12,8 @@
#ifndef __GEMINI_COMMON_H__
#define __GEMINI_COMMON_H__
#include <linux/reboot.h>
struct mtd_partition;
extern void gemini_map_io(void);
@ -26,6 +28,6 @@ extern int platform_register_pflash(unsigned int size,
struct mtd_partition *parts,
unsigned int nr_parts);
extern void gemini_restart(char mode, const char *cmd);
extern void gemini_restart(enum reboot_mode mode, const char *cmd);
#endif /* __GEMINI_COMMON_H__ */

View File

@ -14,7 +14,9 @@
#include <mach/hardware.h>
#include <mach/global_reg.h>
void gemini_restart(char mode, const char *cmd)
#include "common.h"
void gemini_restart(enum reboot_mode mode, const char *cmd)
{
__raw_writel(RESET_GLOBAL | RESET_CPU1,
IO_ADDRESS(GEMINI_GLOBAL_BASE) + GLOBAL_RESET);

View File

@ -171,6 +171,12 @@
*/
#define LINKS_PER_OCP_IF 2
/*
* Address offset (in bytes) between the reset control and the reset
* status registers: 4 bytes on OMAP4
*/
#define OMAP4_RST_CTRL_ST_OFFSET 4
/**
* struct omap_hwmod_soc_ops - fn ptrs for some SoC-specific operations
* @enable_module: function to enable a module (via MODULEMODE)
@ -3016,10 +3022,12 @@ static int _omap4_deassert_hardreset(struct omap_hwmod *oh,
if (ohri->st_shift)
pr_err("omap_hwmod: %s: %s: hwmod data error: OMAP4 does not support st_shift\n",
oh->name, ohri->name);
return omap_prm_deassert_hardreset(ohri->rst_shift, 0,
return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->rst_shift,
oh->clkdm->pwrdm.ptr->prcm_partition,
oh->clkdm->pwrdm.ptr->prcm_offs,
oh->prcm.omap4.rstctrl_offs, 0);
oh->prcm.omap4.rstctrl_offs,
oh->prcm.omap4.rstctrl_offs +
OMAP4_RST_CTRL_ST_OFFSET);
}
/**
@ -3047,27 +3055,6 @@ static int _omap4_is_hardreset_asserted(struct omap_hwmod *oh,
oh->prcm.omap4.rstctrl_offs);
}
/**
* _am33xx_assert_hardreset - call AM33XX PRM hardreset fn with hwmod args
* @oh: struct omap_hwmod * to assert hardreset
* @ohri: hardreset line data
*
* Call am33xx_prminst_assert_hardreset() with parameters extracted
* from the hwmod @oh and the hardreset line data @ohri. Only
* intended for use as an soc_ops function pointer. Passes along the
* return value from am33xx_prminst_assert_hardreset(). XXX This
* function is scheduled for removal when the PRM code is moved into
* drivers/.
*/
static int _am33xx_assert_hardreset(struct omap_hwmod *oh,
struct omap_hwmod_rst_info *ohri)
{
return omap_prm_assert_hardreset(ohri->rst_shift, 0,
oh->clkdm->pwrdm.ptr->prcm_offs,
oh->prcm.omap4.rstctrl_offs);
}
/**
* _am33xx_deassert_hardreset - call AM33XX PRM hardreset fn with hwmod args
* @oh: struct omap_hwmod * to deassert hardreset
@ -3083,32 +3070,13 @@ static int _am33xx_assert_hardreset(struct omap_hwmod *oh,
static int _am33xx_deassert_hardreset(struct omap_hwmod *oh,
struct omap_hwmod_rst_info *ohri)
{
return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->st_shift, 0,
return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->st_shift,
oh->clkdm->pwrdm.ptr->prcm_partition,
oh->clkdm->pwrdm.ptr->prcm_offs,
oh->prcm.omap4.rstctrl_offs,
oh->prcm.omap4.rstst_offs);
}
/**
* _am33xx_is_hardreset_asserted - call AM33XX PRM hardreset fn with hwmod args
* @oh: struct omap_hwmod * to test hardreset
* @ohri: hardreset line data
*
* Call am33xx_prminst_is_hardreset_asserted() with parameters
* extracted from the hwmod @oh and the hardreset line data @ohri.
* Only intended for use as an soc_ops function pointer. Passes along
* the return value from am33xx_prminst_is_hardreset_asserted(). XXX
* This function is scheduled for removal when the PRM code is moved
* into drivers/.
*/
static int _am33xx_is_hardreset_asserted(struct omap_hwmod *oh,
struct omap_hwmod_rst_info *ohri)
{
return omap_prm_is_hardreset_asserted(ohri->rst_shift, 0,
oh->clkdm->pwrdm.ptr->prcm_offs,
oh->prcm.omap4.rstctrl_offs);
}
/* Public functions */
u32 omap_hwmod_read(struct omap_hwmod *oh, u16 reg_offs)
@ -3908,21 +3876,13 @@ void __init omap_hwmod_init(void)
soc_ops.init_clkdm = _init_clkdm;
soc_ops.update_context_lost = _omap4_update_context_lost;
soc_ops.get_context_lost = _omap4_get_context_lost;
} else if (soc_is_am43xx()) {
} else if (cpu_is_ti816x() || soc_is_am33xx() || soc_is_am43xx()) {
soc_ops.enable_module = _omap4_enable_module;
soc_ops.disable_module = _omap4_disable_module;
soc_ops.wait_target_ready = _omap4_wait_target_ready;
soc_ops.assert_hardreset = _omap4_assert_hardreset;
soc_ops.deassert_hardreset = _omap4_deassert_hardreset;
soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted;
soc_ops.init_clkdm = _init_clkdm;
} else if (cpu_is_ti816x() || soc_is_am33xx()) {
soc_ops.enable_module = _omap4_enable_module;
soc_ops.disable_module = _omap4_disable_module;
soc_ops.wait_target_ready = _omap4_wait_target_ready;
soc_ops.assert_hardreset = _am33xx_assert_hardreset;
soc_ops.deassert_hardreset = _am33xx_deassert_hardreset;
soc_ops.is_hardreset_asserted = _am33xx_is_hardreset_asserted;
soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted;
soc_ops.init_clkdm = _init_clkdm;
} else {
WARN(1, "omap_hwmod: unknown SoC type\n");

View File

@ -544,6 +544,44 @@ static struct omap_hwmod am43xx_hdq1w_hwmod = {
},
};
static struct omap_hwmod_class_sysconfig am43xx_vpfe_sysc = {
.rev_offs = 0x0,
.sysc_offs = 0x104,
.sysc_flags = SYSC_HAS_MIDLEMODE | SYSC_HAS_SIDLEMODE,
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
MSTANDBY_FORCE | MSTANDBY_SMART | MSTANDBY_NO),
.sysc_fields = &omap_hwmod_sysc_type2,
};
static struct omap_hwmod_class am43xx_vpfe_hwmod_class = {
.name = "vpfe",
.sysc = &am43xx_vpfe_sysc,
};
static struct omap_hwmod am43xx_vpfe0_hwmod = {
.name = "vpfe0",
.class = &am43xx_vpfe_hwmod_class,
.clkdm_name = "l3s_clkdm",
.prcm = {
.omap4 = {
.modulemode = MODULEMODE_SWCTRL,
.clkctrl_offs = AM43XX_CM_PER_VPFE0_CLKCTRL_OFFSET,
},
},
};
static struct omap_hwmod am43xx_vpfe1_hwmod = {
.name = "vpfe1",
.class = &am43xx_vpfe_hwmod_class,
.clkdm_name = "l3s_clkdm",
.prcm = {
.omap4 = {
.modulemode = MODULEMODE_SWCTRL,
.clkctrl_offs = AM43XX_CM_PER_VPFE1_CLKCTRL_OFFSET,
},
},
};
/* Interfaces */
static struct omap_hwmod_ocp_if am43xx_l3_main__l4_hs = {
.master = &am33xx_l3_main_hwmod,
@ -825,6 +863,34 @@ static struct omap_hwmod_ocp_if am43xx_l4_ls__hdq1w = {
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
static struct omap_hwmod_ocp_if am43xx_l3__vpfe0 = {
.master = &am43xx_vpfe0_hwmod,
.slave = &am33xx_l3_main_hwmod,
.clk = "l3_gclk",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
static struct omap_hwmod_ocp_if am43xx_l3__vpfe1 = {
.master = &am43xx_vpfe1_hwmod,
.slave = &am33xx_l3_main_hwmod,
.clk = "l3_gclk",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
static struct omap_hwmod_ocp_if am43xx_l4_ls__vpfe0 = {
.master = &am33xx_l4_ls_hwmod,
.slave = &am43xx_vpfe0_hwmod,
.clk = "l4ls_gclk",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
static struct omap_hwmod_ocp_if am43xx_l4_ls__vpfe1 = {
.master = &am33xx_l4_ls_hwmod,
.slave = &am43xx_vpfe1_hwmod,
.clk = "l4ls_gclk",
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
static struct omap_hwmod_ocp_if *am43xx_hwmod_ocp_ifs[] __initdata = {
&am33xx_l4_wkup__synctimer,
&am43xx_l4_ls__timer8,
@ -925,6 +991,10 @@ static struct omap_hwmod_ocp_if *am43xx_hwmod_ocp_ifs[] __initdata = {
&am43xx_l4_ls__dss_dispc,
&am43xx_l4_ls__dss_rfbi,
&am43xx_l4_ls__hdq1w,
&am43xx_l3__vpfe0,
&am43xx_l3__vpfe1,
&am43xx_l4_ls__vpfe0,
&am43xx_l4_ls__vpfe1,
NULL,
};

View File

@ -144,5 +144,6 @@
#define AM43XX_CM_PER_USBPHYOCP2SCP1_CLKCTRL_OFFSET 0x05C0
#define AM43XX_CM_PER_DSS_CLKCTRL_OFFSET 0x0a20
#define AM43XX_CM_PER_HDQ1W_CLKCTRL_OFFSET 0x04a0
#define AM43XX_CM_PER_VPFE0_CLKCTRL_OFFSET 0x0068
#define AM43XX_CM_PER_VPFE1_CLKCTRL_OFFSET 0x0070
#endif

View File

@ -87,12 +87,6 @@ u32 omap4_prminst_rmw_inst_reg_bits(u32 mask, u32 bits, u8 part, s16 inst,
return v;
}
/*
* Address offset (in bytes) between the reset control and the reset
* status registers: 4 bytes on OMAP4
*/
#define OMAP4_RST_CTRL_ST_OFFSET 4
/**
* omap4_prminst_is_hardreset_asserted - read the HW reset line state of
* submodules contained in the hwmod module
@ -141,11 +135,11 @@ int omap4_prminst_assert_hardreset(u8 shift, u8 part, s16 inst,
* omap4_prminst_deassert_hardreset - deassert a submodule hardreset line and
* wait
* @shift: register bit shift corresponding to the reset line to deassert
* @st_shift: status bit offset, not used for OMAP4+
* @st_shift: status bit offset corresponding to the reset line
* @part: PRM partition
* @inst: PRM instance offset
* @rstctrl_offs: reset register offset
* @st_offs: reset status register offset, not used for OMAP4+
* @rstst_offs: reset status register offset
*
* Some IPs like dsp, ipu or iva contain processors that require an HW
* reset line to be asserted / deasserted in order to fully enable the
@ -157,11 +151,11 @@ int omap4_prminst_assert_hardreset(u8 shift, u8 part, s16 inst,
* of reset, or -EBUSY if the submodule did not exit reset promptly.
*/
int omap4_prminst_deassert_hardreset(u8 shift, u8 st_shift, u8 part, s16 inst,
u16 rstctrl_offs, u16 st_offs)
u16 rstctrl_offs, u16 rstst_offs)
{
int c;
u32 mask = 1 << shift;
u16 rstst_offs = rstctrl_offs + OMAP4_RST_CTRL_ST_OFFSET;
u32 st_mask = 1 << st_shift;
/* Check the current status to avoid de-asserting the line twice */
if (omap4_prminst_is_hardreset_asserted(shift, part, inst,
@ -169,13 +163,13 @@ int omap4_prminst_deassert_hardreset(u8 shift, u8 st_shift, u8 part, s16 inst,
return -EEXIST;
/* Clear the reset status by writing 1 to the status bit */
omap4_prminst_rmw_inst_reg_bits(0xffffffff, mask, part, inst,
omap4_prminst_rmw_inst_reg_bits(0xffffffff, st_mask, part, inst,
rstst_offs);
/* de-assert the reset control line */
omap4_prminst_rmw_inst_reg_bits(mask, 0, part, inst, rstctrl_offs);
/* wait the status to be set */
omap_test_timeout(omap4_prminst_is_hardreset_asserted(shift, part, inst,
rstst_offs),
omap_test_timeout(omap4_prminst_is_hardreset_asserted(st_shift, part,
inst, rstst_offs),
MAX_MODULE_HARDRESET_WAIT, c);
return (c == MAX_MODULE_HARDRESET_WAIT) ? -EBUSY : 0;

View File

@ -298,15 +298,12 @@ static int __init omap_dm_timer_init_one(struct omap_dm_timer *timer,
if (IS_ERR(src))
return PTR_ERR(src);
if (clk_get_parent(timer->fclk) != src) {
r = clk_set_parent(timer->fclk, src);
if (r < 0) {
pr_warn("%s: %s cannot set source\n", __func__,
oh->name);
pr_warn("%s: %s cannot set source\n", __func__, oh->name);
clk_put(src);
return r;
}
}
clk_put(src);

View File

@ -44,11 +44,9 @@ static void __iomem *rk3288_bootram_base;
static phys_addr_t rk3288_bootram_phy;
static struct regmap *pmu_regmap;
static struct regmap *grf_regmap;
static struct regmap *sgrf_regmap;
static u32 rk3288_pmu_pwr_mode_con;
static u32 rk3288_grf_soc_con0;
static u32 rk3288_sgrf_soc_con0;
static inline u32 rk3288_l2_config(void)
@ -72,25 +70,11 @@ static void rk3288_slp_mode_set(int level)
{
u32 mode_set, mode_set1;
regmap_read(grf_regmap, RK3288_GRF_SOC_CON0, &rk3288_grf_soc_con0);
regmap_read(sgrf_regmap, RK3288_SGRF_SOC_CON0, &rk3288_sgrf_soc_con0);
regmap_read(pmu_regmap, RK3288_PMU_PWRMODE_CON,
&rk3288_pmu_pwr_mode_con);
/*
* We need set this bit GRF_FORCE_JTAG here, for the debug module,
* otherwise, it may become inaccessible after resume.
* This creates a potential security issue, as the sdmmc pins may
* accept jtag data for a short time during resume if no card is
* inserted.
* But this is of course also true for the regular boot, before we
* turn off the jtag/sdmmc autodetect.
*/
regmap_write(grf_regmap, RK3288_GRF_SOC_CON0, GRF_FORCE_JTAG |
GRF_FORCE_JTAG_WRITE);
/*
* SGRF_FAST_BOOT_EN - system to boot from FAST_BOOT_ADDR
* PCLK_WDT_GATE - disable WDT during suspend.
@ -151,9 +135,6 @@ static void rk3288_slp_mode_set_resume(void)
regmap_write(sgrf_regmap, RK3288_SGRF_SOC_CON0,
rk3288_sgrf_soc_con0 | SGRF_PCLK_WDT_GATE_WRITE
| SGRF_FAST_BOOT_EN_WRITE);
regmap_write(grf_regmap, RK3288_GRF_SOC_CON0, rk3288_grf_soc_con0 |
GRF_FORCE_JTAG_WRITE);
}
static int rockchip_lpmode_enter(unsigned long arg)
@ -212,13 +193,6 @@ static int rk3288_suspend_init(struct device_node *np)
return PTR_ERR(pmu_regmap);
}
grf_regmap = syscon_regmap_lookup_by_compatible(
"rockchip,rk3288-grf");
if (IS_ERR(grf_regmap)) {
pr_err("%s: could not find grf regmap\n", __func__);
return PTR_ERR(pmu_regmap);
}
sram_np = of_find_compatible_node(NULL, NULL,
"rockchip,rk3288-pmu-sram");
if (!sram_np) {

View File

@ -48,10 +48,6 @@ static inline void rockchip_suspend_init(void)
#define RK3288_PMU_WAKEUP_RST_CLR_CNT 0x44
#define RK3288_PMU_PWRMODE_CON1 0x90
#define RK3288_GRF_SOC_CON0 0x244
#define GRF_FORCE_JTAG BIT(12)
#define GRF_FORCE_JTAG_WRITE BIT(28)
#define RK3288_SGRF_SOC_CON0 (0x0000)
#define RK3288_SGRF_FAST_BOOT_ADDR (0x0120)
#define SGRF_PCLK_WDT_GATE BIT(6)

View File

@ -54,6 +54,7 @@
#define SEEN_DATA (1 << (BPF_MEMWORDS + 3))
#define FLAG_NEED_X_RESET (1 << 0)
#define FLAG_IMM_OVERFLOW (1 << 1)
struct jit_ctx {
const struct bpf_prog *skf;
@ -293,6 +294,15 @@ static u16 imm_offset(u32 k, struct jit_ctx *ctx)
/* PC in ARM mode == address of the instruction + 8 */
imm = offset - (8 + ctx->idx * 4);
if (imm & ~0xfff) {
/*
* literal pool is too far, signal it into flags. we
* can only detect it on the second pass unfortunately.
*/
ctx->flags |= FLAG_IMM_OVERFLOW;
return 0;
}
return imm;
}
@ -449,10 +459,21 @@ static inline void emit_udiv(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx)
return;
}
#endif
if (rm != ARM_R0)
emit(ARM_MOV_R(ARM_R0, rm), ctx);
/*
* For BPF_ALU | BPF_DIV | BPF_K instructions, rm is ARM_R4
* (r_A) and rn is ARM_R0 (r_scratch) so load rn first into
* ARM_R1 to avoid accidentally overwriting ARM_R0 with rm
* before using it as a source for ARM_R1.
*
* For BPF_ALU | BPF_DIV | BPF_X rm is ARM_R4 (r_A) and rn is
* ARM_R5 (r_X) so there is no particular register overlap
* issues.
*/
if (rn != ARM_R1)
emit(ARM_MOV_R(ARM_R1, rn), ctx);
if (rm != ARM_R0)
emit(ARM_MOV_R(ARM_R0, rm), ctx);
ctx->seen |= SEEN_CALL;
emit_mov_i(ARM_R3, (u32)jit_udiv, ctx);
@ -855,6 +876,14 @@ b_epilogue:
default:
return -1;
}
if (ctx->flags & FLAG_IMM_OVERFLOW)
/*
* this instruction generated an overflow when
* trying to access the literal pool, so
* delegate this filter to the kernel interpreter.
*/
return -1;
}
/* compute offsets only during the first pass */
@ -917,7 +946,14 @@ void bpf_jit_compile(struct bpf_prog *fp)
ctx.idx = 0;
build_prologue(&ctx);
build_body(&ctx);
if (build_body(&ctx) < 0) {
#if __LINUX_ARM_ARCH__ < 7
if (ctx.imm_count)
kfree(ctx.imms);
#endif
bpf_jit_binary_free(header);
goto out;
}
build_epilogue(&ctx);
flush_icache_range((u32)ctx.target, (u32)(ctx.target + ctx.idx));

View File

@ -389,7 +389,7 @@ static int s3c_adc_probe(struct platform_device *pdev)
if (ret)
return ret;
clk_enable(adc->clk);
clk_prepare_enable(adc->clk);
tmp = adc->prescale | S3C2410_ADCCON_PRSCEN;
@ -413,7 +413,7 @@ static int s3c_adc_remove(struct platform_device *pdev)
{
struct adc_device *adc = platform_get_drvdata(pdev);
clk_disable(adc->clk);
clk_disable_unprepare(adc->clk);
regulator_disable(adc->vdd);
return 0;
@ -475,7 +475,7 @@ static int s3c_adc_resume(struct device *dev)
#define s3c_adc_resume NULL
#endif
static struct platform_device_id s3c_adc_driver_ids[] = {
static const struct platform_device_id s3c_adc_driver_ids[] = {
{
.name = "s3c24xx-adc",
.driver_data = TYPE_ADCV1,

View File

@ -21,6 +21,20 @@
clock-output-names = "juno_mb:clk25mhz";
};
v2m_refclk1mhz: refclk1mhz {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <1000000>;
clock-output-names = "juno_mb:refclk1mhz";
};
v2m_refclk32khz: refclk32khz {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <32768>;
clock-output-names = "juno_mb:refclk32khz";
};
motherboard {
compatible = "arm,vexpress,v2p-p1", "simple-bus";
#address-cells = <2>; /* SMB chipselect number and offset */
@ -66,6 +80,15 @@
#size-cells = <1>;
ranges = <0 3 0 0x200000>;
v2m_sysctl: sysctl@020000 {
compatible = "arm,sp810", "arm,primecell";
reg = <0x020000 0x1000>;
clocks = <&v2m_refclk32khz>, <&v2m_refclk1mhz>, <&mb_clk24mhz>;
clock-names = "refclk", "timclk", "apb_pclk";
#clock-cells = <1>;
clock-output-names = "timerclken0", "timerclken1", "timerclken2", "timerclken3";
};
mmci@050000 {
compatible = "arm,pl180", "arm,primecell";
reg = <0x050000 0x1000>;
@ -106,16 +129,16 @@
compatible = "arm,sp804", "arm,primecell";
reg = <0x110000 0x10000>;
interrupts = <9>;
clocks = <&mb_clk24mhz>, <&soc_smc50mhz>;
clock-names = "timclken1", "apb_pclk";
clocks = <&v2m_sysctl 0>, <&v2m_sysctl 1>, <&mb_clk24mhz>;
clock-names = "timclken1", "timclken2", "apb_pclk";
};
v2m_timer23: timer@120000 {
compatible = "arm,sp804", "arm,primecell";
reg = <0x120000 0x10000>;
interrupts = <9>;
clocks = <&mb_clk24mhz>, <&soc_smc50mhz>;
clock-names = "timclken1", "apb_pclk";
clocks = <&v2m_sysctl 2>, <&v2m_sysctl 3>, <&mb_clk24mhz>;
clock-names = "timclken1", "timclken2", "apb_pclk";
};
rtc@170000 {

View File

@ -147,13 +147,21 @@ static int chksum_final(struct shash_desc *desc, u8 *out)
{
struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
put_unaligned_le32(ctx->crc, out);
return 0;
}
static int chksumc_final(struct shash_desc *desc, u8 *out)
{
struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
put_unaligned_le32(~ctx->crc, out);
return 0;
}
static int __chksum_finup(u32 crc, const u8 *data, unsigned int len, u8 *out)
{
put_unaligned_le32(~crc32_arm64_le_hw(crc, data, len), out);
put_unaligned_le32(crc32_arm64_le_hw(crc, data, len), out);
return 0;
}
@ -199,6 +207,14 @@ static int crc32_cra_init(struct crypto_tfm *tfm)
{
struct chksum_ctx *mctx = crypto_tfm_ctx(tfm);
mctx->key = 0;
return 0;
}
static int crc32c_cra_init(struct crypto_tfm *tfm)
{
struct chksum_ctx *mctx = crypto_tfm_ctx(tfm);
mctx->key = ~0;
return 0;
}
@ -229,7 +245,7 @@ static struct shash_alg crc32c_alg = {
.setkey = chksum_setkey,
.init = chksum_init,
.update = chksumc_update,
.final = chksum_final,
.final = chksumc_final,
.finup = chksumc_finup,
.digest = chksumc_digest,
.descsize = sizeof(struct chksum_desc_ctx),
@ -241,7 +257,7 @@ static struct shash_alg crc32c_alg = {
.cra_alignmask = 0,
.cra_ctxsize = sizeof(struct chksum_ctx),
.cra_module = THIS_MODULE,
.cra_init = crc32_cra_init,
.cra_init = crc32c_cra_init,
}
};

View File

@ -74,6 +74,9 @@ static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
static int sha1_ce_final(struct shash_desc *desc, u8 *out)
{
struct sha1_ce_state *sctx = shash_desc_ctx(desc);
sctx->finalize = 0;
kernel_neon_begin_partial(16);
sha1_base_do_finalize(desc, (sha1_block_fn *)sha1_ce_transform);
kernel_neon_end();

View File

@ -75,6 +75,9 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
static int sha256_ce_final(struct shash_desc *desc, u8 *out)
{
struct sha256_ce_state *sctx = shash_desc_ctx(desc);
sctx->finalize = 0;
kernel_neon_begin_partial(28);
sha256_base_do_finalize(desc, (sha256_block_fn *)sha2_ce_transform);
kernel_neon_end();

View File

@ -24,7 +24,6 @@
#include <asm/cacheflush.h>
#include <asm/alternative.h>
#include <asm/cpufeature.h>
#include <asm/insn.h>
#include <linux/stop_machine.h>
extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
@ -34,48 +33,6 @@ struct alt_region {
struct alt_instr *end;
};
/*
* Decode the imm field of a b/bl instruction, and return the byte
* offset as a signed value (so it can be used when computing a new
* branch target).
*/
static s32 get_branch_offset(u32 insn)
{
s32 imm = aarch64_insn_decode_immediate(AARCH64_INSN_IMM_26, insn);
/* sign-extend the immediate before turning it into a byte offset */
return (imm << 6) >> 4;
}
static u32 get_alt_insn(u8 *insnptr, u8 *altinsnptr)
{
u32 insn;
aarch64_insn_read(altinsnptr, &insn);
/* Stop the world on instructions we don't support... */
BUG_ON(aarch64_insn_is_cbz(insn));
BUG_ON(aarch64_insn_is_cbnz(insn));
BUG_ON(aarch64_insn_is_bcond(insn));
/* ... and there is probably more. */
if (aarch64_insn_is_b(insn) || aarch64_insn_is_bl(insn)) {
enum aarch64_insn_branch_type type;
unsigned long target;
if (aarch64_insn_is_b(insn))
type = AARCH64_INSN_BRANCH_NOLINK;
else
type = AARCH64_INSN_BRANCH_LINK;
target = (unsigned long)altinsnptr + get_branch_offset(insn);
insn = aarch64_insn_gen_branch_imm((unsigned long)insnptr,
target, type);
}
return insn;
}
static int __apply_alternatives(void *alt_region)
{
struct alt_instr *alt;
@ -83,9 +40,6 @@ static int __apply_alternatives(void *alt_region)
u8 *origptr, *replptr;
for (alt = region->begin; alt < region->end; alt++) {
u32 insn;
int i;
if (!cpus_have_cap(alt->cpufeature))
continue;
@ -95,12 +49,7 @@ static int __apply_alternatives(void *alt_region)
origptr = (u8 *)&alt->orig_offset + alt->orig_offset;
replptr = (u8 *)&alt->alt_offset + alt->alt_offset;
for (i = 0; i < alt->alt_len; i += sizeof(insn)) {
insn = get_alt_insn(origptr + i, replptr + i);
aarch64_insn_write(origptr + i, insn);
}
memcpy(origptr, replptr, alt->alt_len);
flush_icache_range((uintptr_t)origptr,
(uintptr_t)(origptr + alt->alt_len));
}

View File

@ -1315,15 +1315,15 @@ static int armpmu_device_probe(struct platform_device *pdev)
if (!cpu_pmu)
return -ENODEV;
irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);
if (!irqs)
return -ENOMEM;
/* Don't bother with PPIs; they're already affine */
irq = platform_get_irq(pdev, 0);
if (irq >= 0 && irq_is_percpu(irq))
return 0;
irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);
if (!irqs)
return -ENOMEM;
for (i = 0; i < pdev->num_resources; ++i) {
struct device_node *dn;
int cpu;

View File

@ -328,10 +328,12 @@ static int ptdump_init(void)
for (j = 0; j < pg_level[i].num; j++)
pg_level[i].mask |= pg_level[i].bits[j].mask;
#ifdef CONFIG_SPARSEMEM_VMEMMAP
address_markers[VMEMMAP_START_NR].start_address =
(unsigned long)virt_to_page(PAGE_OFFSET);
address_markers[VMEMMAP_END_NR].start_address =
(unsigned long)virt_to_page(high_memory);
#endif
pe = debugfs_create_file("kernel_page_tables", 0400, NULL, NULL,
&ptdump_fops);

View File

@ -487,7 +487,7 @@ emit_cond_jmp:
return -EINVAL;
}
imm64 = (u64)insn1.imm << 32 | imm;
imm64 = (u64)insn1.imm << 32 | (u32)imm;
emit_a64_mov_i64(dst, imm64, ctx);
return 1;

View File

@ -277,7 +277,7 @@ LDFLAGS += -m $(ld-emul)
ifdef CONFIG_MIPS
CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \
sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/")
sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g')
ifdef CONFIG_64BIT
CHECKFLAGS += -m64
endif

View File

@ -304,7 +304,7 @@ do { \
\
current->thread.abi = &mips_abi; \
\
current->thread.fpu.fcr31 = current_cpu_data.fpu_csr31; \
current->thread.fpu.fcr31 = boot_cpu_data.fpu_csr31; \
} while (0)
#endif /* CONFIG_32BIT */
@ -366,7 +366,7 @@ do { \
else \
current->thread.abi = &mips_abi; \
\
current->thread.fpu.fcr31 = current_cpu_data.fpu_csr31; \
current->thread.fpu.fcr31 = boot_cpu_data.fpu_csr31; \
\
p = personality(current->personality); \
if (p != PER_LINUX32 && p != PER_LINUX) \

View File

@ -45,7 +45,7 @@ extern int __cpu_logical_map[NR_CPUS];
#define SMP_DUMP 0x8
#define SMP_ASK_C0COUNT 0x10
extern volatile cpumask_t cpu_callin_map;
extern cpumask_t cpu_callin_map;
/* Mask of CPUs which are currently definitely operating coherently */
extern cpumask_t cpu_coherent_mask;

View File

@ -76,14 +76,6 @@ int arch_elf_pt_proc(void *_ehdr, void *_phdr, struct file *elf,
/* Lets see if this is an O32 ELF */
if (ehdr32->e_ident[EI_CLASS] == ELFCLASS32) {
/* FR = 1 for N32 */
if (ehdr32->e_flags & EF_MIPS_ABI2)
state->overall_fp_mode = FP_FR1;
else
/* Set a good default FPU mode for O32 */
state->overall_fp_mode = cpu_has_mips_r6 ?
FP_FRE : FP_FR0;
if (ehdr32->e_flags & EF_MIPS_FP64) {
/*
* Set MIPS_ABI_FP_OLD_64 for EF_MIPS_FP64. We will override it
@ -104,9 +96,6 @@ int arch_elf_pt_proc(void *_ehdr, void *_phdr, struct file *elf,
(char *)&abiflags,
sizeof(abiflags));
} else {
/* FR=1 is really the only option for 64-bit */
state->overall_fp_mode = FP_FR1;
if (phdr64->p_type != PT_MIPS_ABIFLAGS)
return 0;
if (phdr64->p_filesz < sizeof(abiflags))
@ -137,6 +126,7 @@ int arch_check_elf(void *_ehdr, bool has_interpreter,
struct elf32_hdr *ehdr = _ehdr;
struct mode_req prog_req, interp_req;
int fp_abi, interp_fp_abi, abi0, abi1, max_abi;
bool is_mips64;
if (!config_enabled(CONFIG_MIPS_O32_FP64_SUPPORT))
return 0;
@ -152,10 +142,22 @@ int arch_check_elf(void *_ehdr, bool has_interpreter,
abi0 = abi1 = fp_abi;
}
/* ABI limits. O32 = FP_64A, N32/N64 = FP_SOFT */
max_abi = ((ehdr->e_ident[EI_CLASS] == ELFCLASS32) &&
(!(ehdr->e_flags & EF_MIPS_ABI2))) ?
MIPS_ABI_FP_64A : MIPS_ABI_FP_SOFT;
is_mips64 = (ehdr->e_ident[EI_CLASS] == ELFCLASS64) ||
(ehdr->e_flags & EF_MIPS_ABI2);
if (is_mips64) {
/* MIPS64 code always uses FR=1, thus the default is easy */
state->overall_fp_mode = FP_FR1;
/* Disallow access to the various FPXX & FP64 ABIs */
max_abi = MIPS_ABI_FP_SOFT;
} else {
/* Default to a mode capable of running code expecting FR=0 */
state->overall_fp_mode = cpu_has_mips_r6 ? FP_FRE : FP_FR0;
/* Allow all ABIs we know about */
max_abi = MIPS_ABI_FP_64A;
}
if ((abi0 > max_abi && abi0 != MIPS_ABI_FP_UNKNOWN) ||
(abi1 > max_abi && abi1 != MIPS_ABI_FP_UNKNOWN))

View File

@ -176,7 +176,7 @@ int ptrace_setfpregs(struct task_struct *child, __u32 __user *data)
__get_user(value, data + 64);
fcr31 = child->thread.fpu.fcr31;
mask = current_cpu_data.fpu_msk31;
mask = boot_cpu_data.fpu_msk31;
child->thread.fpu.fcr31 = (value & ~mask) | (fcr31 & mask);
/* FIR may not be written. */

View File

@ -92,7 +92,7 @@ static void __init cps_smp_setup(void)
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
cpu_set(0, mt_fpu_cpumask);
cpumask_set_cpu(0, &mt_fpu_cpumask);
#endif /* CONFIG_MIPS_MT_FPAFF */
}

View File

@ -43,7 +43,7 @@
#include <asm/time.h>
#include <asm/setup.h>
volatile cpumask_t cpu_callin_map; /* Bitmask of started secondaries */
cpumask_t cpu_callin_map; /* Bitmask of started secondaries */
int __cpu_number_map[NR_CPUS]; /* Map physical to logical */
EXPORT_SYMBOL(__cpu_number_map);
@ -218,8 +218,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *tidle)
/*
* Trust is futile. We should really have timeouts ...
*/
while (!cpumask_test_cpu(cpu, &cpu_callin_map))
while (!cpumask_test_cpu(cpu, &cpu_callin_map)) {
udelay(100);
schedule();
}
synchronise_count_master(cpu);
return 0;


@ -269,7 +269,6 @@ static void __show_regs(const struct pt_regs *regs)
*/
printk("epc : %0*lx %pS\n", field, regs->cp0_epc,
(void *) regs->cp0_epc);
printk(" %s\n", print_tainted());
printk("ra : %0*lx %pS\n", field, regs->regs[31],
(void *) regs->regs[31]);


@ -2389,7 +2389,6 @@ enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
{
unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
enum emulation_result er = EMULATE_DONE;
unsigned long curr_pc;
if (run->mmio.len > sizeof(*gpr)) {
kvm_err("Bad MMIO length: %d", run->mmio.len);
@ -2397,11 +2396,6 @@ enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
goto done;
}
/*
* Update PC and hold onto current PC in case there is
* an error and we want to rollback the PC
*/
curr_pc = vcpu->arch.pc;
er = update_pc(vcpu, vcpu->arch.pending_load_cause);
if (er == EMULATE_FAIL)
return er;


@ -889,7 +889,7 @@ static inline void cop1_cfc(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
break;
case FPCREG_RID:
value = current_cpu_data.fpu_id;
value = boot_cpu_data.fpu_id;
break;
default:
@ -921,7 +921,7 @@ static inline void cop1_ctc(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
(void *)xcp->cp0_epc, MIPSInst_RT(ir), value);
/* Preserve read-only bits. */
mask = current_cpu_data.fpu_msk31;
mask = boot_cpu_data.fpu_msk31;
fcr31 = (value & ~mask) | (fcr31 & mask);
break;


@ -495,7 +495,7 @@ static void r4k_tlb_configure(void)
if (cpu_has_rixi) {
/*
* Enable the no read, no exec bits, and enable large virtual
* Enable the no read, no exec bits, and enable large physical
* address.
*/
#ifdef CONFIG_64BIT


@ -130,9 +130,9 @@ struct platform_device ip32_rtc_device = {
.resource = ip32_rtc_resources,
};
static int __init sgio2_rtc_devinit(void)
static __init int sgio2_rtc_devinit(void)
{
return platform_device_register(&ip32_rtc_device);
}
device_initcall(sgio2_cmos_devinit);
device_initcall(sgio2_rtc_devinit);


@ -348,6 +348,10 @@ struct pt_regs; /* forward declaration... */
#define ELF_HWCAP 0
#define STACK_RND_MASK (is_32bit_task() ? \
0x7ff >> (PAGE_SHIFT - 12) : \
0x3ffff >> (PAGE_SHIFT - 12))
struct mm_struct;
extern unsigned long arch_randomize_brk(struct mm_struct *);
#define arch_randomize_brk arch_randomize_brk


@ -181,9 +181,12 @@ int dump_task_fpu (struct task_struct *tsk, elf_fpregset_t *r)
return 1;
}
/*
* Copy architecture-specific thread state
*/
int
copy_thread(unsigned long clone_flags, unsigned long usp,
unsigned long arg, struct task_struct *p)
unsigned long kthread_arg, struct task_struct *p)
{
struct pt_regs *cregs = &(p->thread.regs);
void *stack = task_stack_page(p);
@ -195,11 +198,10 @@ copy_thread(unsigned long clone_flags, unsigned long usp,
extern void * const child_return;
if (unlikely(p->flags & PF_KTHREAD)) {
/* kernel thread */
memset(cregs, 0, sizeof(struct pt_regs));
if (!usp) /* idle thread */
return 0;
/* kernel thread */
/* Must exit via ret_from_kernel_thread in order
* to call schedule_tail()
*/
@ -215,7 +217,7 @@ copy_thread(unsigned long clone_flags, unsigned long usp,
#else
cregs->gr[26] = usp;
#endif
cregs->gr[25] = arg;
cregs->gr[25] = kthread_arg;
} else {
/* user thread */
/* usp must be word aligned. This also prevents users from


@ -77,6 +77,9 @@ static unsigned long mmap_upper_limit(void)
if (stack_base > STACK_SIZE_MAX)
stack_base = STACK_SIZE_MAX;
/* Add space for stack randomization. */
stack_base += (STACK_RND_MASK << PAGE_SHIFT);
return PAGE_ALIGN(STACK_TOP - stack_base);
}


@ -1134,7 +1134,7 @@ static __initconst const u64 slm_hw_cache_extra_regs
[ C(LL ) ] = {
[ C(OP_READ) ] = {
[ C(RESULT_ACCESS) ] = SLM_DMND_READ|SLM_LLC_ACCESS,
[ C(RESULT_MISS) ] = SLM_DMND_READ|SLM_LLC_MISS,
[ C(RESULT_MISS) ] = 0,
},
[ C(OP_WRITE) ] = {
[ C(RESULT_ACCESS) ] = SLM_DMND_WRITE|SLM_LLC_ACCESS,
@ -1184,8 +1184,7 @@ static __initconst const u64 slm_hw_cache_event_ids
[ C(OP_READ) ] = {
/* OFFCORE_RESPONSE.ANY_DATA.LOCAL_CACHE */
[ C(RESULT_ACCESS) ] = 0x01b7,
/* OFFCORE_RESPONSE.ANY_DATA.ANY_LLC_MISS */
[ C(RESULT_MISS) ] = 0x01b7,
[ C(RESULT_MISS) ] = 0,
},
[ C(OP_WRITE) ] = {
/* OFFCORE_RESPONSE.ANY_RFO.LOCAL_CACHE */
@ -1217,7 +1216,7 @@ static __initconst const u64 slm_hw_cache_event_ids
[ C(ITLB) ] = {
[ C(OP_READ) ] = {
[ C(RESULT_ACCESS) ] = 0x00c0, /* INST_RETIRED.ANY_P */
[ C(RESULT_MISS) ] = 0x0282, /* ITLB.MISSES */
[ C(RESULT_MISS) ] = 0x40205, /* PAGE_WALKS.I_SIDE_WALKS */
},
[ C(OP_WRITE) ] = {
[ C(RESULT_ACCESS) ] = -1,


@ -722,6 +722,7 @@ static int __init rapl_pmu_init(void)
break;
case 60: /* Haswell */
case 69: /* Haswell-Celeron */
case 61: /* Broadwell */
rapl_cntr_mask = RAPL_IDX_HSW;
rapl_pmu_events_group.attrs = rapl_events_hsw_attr;
break;


@ -559,6 +559,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
if (is_ereg(dst_reg))
EMIT1(0x41);
EMIT3(0xC1, add_1reg(0xC8, dst_reg), 8);
/* emit 'movzwl eax, ax' */
if (is_ereg(dst_reg))
EMIT3(0x45, 0x0F, 0xB7);
else
EMIT2(0x0F, 0xB7);
EMIT1(add_2reg(0xC0, dst_reg, dst_reg));
break;
case 32:
/* emit 'bswap eax' to swap lower 4 bytes */
@ -577,6 +584,27 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
break;
case BPF_ALU | BPF_END | BPF_FROM_LE:
switch (imm32) {
case 16:
/* emit 'movzwl eax, ax' to zero extend 16-bit
* into 64 bit
*/
if (is_ereg(dst_reg))
EMIT3(0x45, 0x0F, 0xB7);
else
EMIT2(0x0F, 0xB7);
EMIT1(add_2reg(0xC0, dst_reg, dst_reg));
break;
case 32:
/* emit 'mov eax, eax' to clear upper 32-bits */
if (is_ereg(dst_reg))
EMIT1(0x45);
EMIT2(0x89, add_2reg(0xC0, dst_reg, dst_reg));
break;
case 64:
/* nop */
break;
}
break;
/* ST: *(u8*)(dst_reg + off) = imm */


@ -51,7 +51,7 @@ VDSO_LDFLAGS_vdso.lds = -m64 -Wl,-soname=linux-vdso.so.1 \
$(obj)/vdso64.so.dbg: $(src)/vdso.lds $(vobjs) FORCE
$(call if_changed,vdso)
HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi
HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi -I$(srctree)/arch/x86/include/uapi
hostprogs-y += vdso2c
quiet_cmd_vdso2c = VDSO2C $@


@ -102,19 +102,12 @@ const struct acpi_predefined_names acpi_gbl_pre_defined_names[] = {
{"_SB_", ACPI_TYPE_DEVICE, NULL},
{"_SI_", ACPI_TYPE_LOCAL_SCOPE, NULL},
{"_TZ_", ACPI_TYPE_DEVICE, NULL},
/*
* March, 2015:
* The _REV object is in the process of being deprecated, because
* other ACPI implementations permanently return 2. Thus, it
* has little or no value. Return 2 for compatibility with
* other ACPI implementations.
*/
{"_REV", ACPI_TYPE_INTEGER, ACPI_CAST_PTR(char, 2)},
{"_REV", ACPI_TYPE_INTEGER, (char *)ACPI_CA_SUPPORT_LEVEL},
{"_OS_", ACPI_TYPE_STRING, ACPI_OS_NAME},
{"_GL_", ACPI_TYPE_MUTEX, ACPI_CAST_PTR(char, 1)},
{"_GL_", ACPI_TYPE_MUTEX, (char *)1},
#if !defined (ACPI_NO_METHOD_EXECUTION) || defined (ACPI_CONSTANT_EVAL_ONLY)
{"_OSI", ACPI_TYPE_METHOD, ACPI_CAST_PTR(char, 1)},
{"_OSI", ACPI_TYPE_METHOD, (char *)1},
#endif
/* Table terminator */


@ -182,7 +182,7 @@ static void __init acpi_request_region (struct acpi_generic_address *gas,
request_mem_region(addr, length, desc);
}
static int __init acpi_reserve_resources(void)
static void __init acpi_reserve_resources(void)
{
acpi_request_region(&acpi_gbl_FADT.xpm1a_event_block, acpi_gbl_FADT.pm1_event_length,
"ACPI PM1a_EVT_BLK");
@ -211,10 +211,7 @@ static int __init acpi_reserve_resources(void)
if (!(acpi_gbl_FADT.gpe1_block_length & 0x1))
acpi_request_region(&acpi_gbl_FADT.xgpe1_block,
acpi_gbl_FADT.gpe1_block_length, "ACPI GPE1_BLK");
return 0;
}
device_initcall(acpi_reserve_resources);
void acpi_os_printf(const char *fmt, ...)
{
@ -1845,6 +1842,7 @@ acpi_status __init acpi_os_initialize(void)
acpi_status __init acpi_os_initialize1(void)
{
acpi_reserve_resources();
kacpid_wq = alloc_workqueue("kacpid", 0, 1);
kacpi_notify_wq = alloc_workqueue("kacpi_notify", 0, 1);
kacpi_hotplug_wq = alloc_ordered_workqueue("kacpi_hotplug", 0);


@ -270,6 +270,7 @@ config ATA_PIIX
config SATA_DWC
tristate "DesignWare Cores SATA support"
depends on 460EX
select DW_DMAC
help
This option enables support for the on-chip SATA controller of the
AppliedMicro processor 460EX.
@ -729,15 +730,6 @@ config PATA_SC1200
If unsure, say N.
config PATA_SCC
tristate "Toshiba's Cell Reference Set IDE support"
depends on PCI && PPC_CELLEB
help
This option enables support for the built-in IDE controller on
Toshiba Cell Reference Board.
If unsure, say N.
config PATA_SCH
tristate "Intel SCH PATA support"
depends on PCI


@ -75,7 +75,6 @@ obj-$(CONFIG_PATA_PDC_OLD) += pata_pdc202xx_old.o
obj-$(CONFIG_PATA_RADISYS) += pata_radisys.o
obj-$(CONFIG_PATA_RDC) += pata_rdc.o
obj-$(CONFIG_PATA_SC1200) += pata_sc1200.o
obj-$(CONFIG_PATA_SCC) += pata_scc.o
obj-$(CONFIG_PATA_SCH) += pata_sch.o
obj-$(CONFIG_PATA_SERVERWORKS) += pata_serverworks.o
obj-$(CONFIG_PATA_SIL680) += pata_sil680.o


@ -66,6 +66,7 @@ enum board_ids {
board_ahci_yes_fbs,
/* board IDs for specific chipsets in alphabetical order */
board_ahci_avn,
board_ahci_mcp65,
board_ahci_mcp77,
board_ahci_mcp89,
@ -84,6 +85,8 @@ enum board_ids {
static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline);
static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline);
static void ahci_mcp89_apple_enable(struct pci_dev *pdev);
static bool is_mcp89_apple(struct pci_dev *pdev);
static int ahci_p5wdh_hardreset(struct ata_link *link, unsigned int *class,
@ -107,6 +110,11 @@ static struct ata_port_operations ahci_p5wdh_ops = {
.hardreset = ahci_p5wdh_hardreset,
};
static struct ata_port_operations ahci_avn_ops = {
.inherits = &ahci_ops,
.hardreset = ahci_avn_hardreset,
};
static const struct ata_port_info ahci_port_info[] = {
/* by features */
[board_ahci] = {
@ -151,6 +159,12 @@ static const struct ata_port_info ahci_port_info[] = {
.port_ops = &ahci_ops,
},
/* by chipsets */
[board_ahci_avn] = {
.flags = AHCI_FLAG_COMMON,
.pio_mask = ATA_PIO4,
.udma_mask = ATA_UDMA6,
.port_ops = &ahci_avn_ops,
},
[board_ahci_mcp65] = {
AHCI_HFLAGS (AHCI_HFLAG_NO_FPDMA_AA | AHCI_HFLAG_NO_PMP |
AHCI_HFLAG_YES_NCQ),
@ -290,14 +304,14 @@ static const struct pci_device_id ahci_pci_tbl[] = {
{ PCI_VDEVICE(INTEL, 0x1f27), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f2e), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f2f), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f32), board_ahci }, /* Avoton AHCI */
{ PCI_VDEVICE(INTEL, 0x1f33), board_ahci }, /* Avoton AHCI */
{ PCI_VDEVICE(INTEL, 0x1f34), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f35), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f36), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f37), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f3e), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f3f), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f32), board_ahci_avn }, /* Avoton AHCI */
{ PCI_VDEVICE(INTEL, 0x1f33), board_ahci_avn }, /* Avoton AHCI */
{ PCI_VDEVICE(INTEL, 0x1f34), board_ahci_avn }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f35), board_ahci_avn }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f36), board_ahci_avn }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f37), board_ahci_avn }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f3e), board_ahci_avn }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f3f), board_ahci_avn }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Wellsburg RAID */
{ PCI_VDEVICE(INTEL, 0x2827), board_ahci }, /* Wellsburg RAID */
{ PCI_VDEVICE(INTEL, 0x8d02), board_ahci }, /* Wellsburg AHCI */
@ -670,6 +684,79 @@ static int ahci_p5wdh_hardreset(struct ata_link *link, unsigned int *class,
return rc;
}
/*
* ahci_avn_hardreset - attempt more aggressive recovery of Avoton ports.
*
* It has been observed with some SSDs that the timing of events in the
* link synchronization phase can leave the port in a state that can not
* be recovered by a SATA-hard-reset alone. The failing signature is
* SStatus.DET stuck at 1 ("Device presence detected but Phy
* communication not established"). It was found that unloading and
* reloading the driver when this problem occurs allows the drive
* connection to be recovered (DET advanced to 0x3). The critical
* component of reloading the driver is that the port state machines are
* reset by bouncing "port enable" in the AHCI PCS configuration
* register. So, reproduce that effect by bouncing a port whenever we
* see DET==1 after a reset.
*/
static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline)
{
const unsigned long *timing = sata_ehc_deb_timing(&link->eh_context);
struct ata_port *ap = link->ap;
struct ahci_port_priv *pp = ap->private_data;
struct ahci_host_priv *hpriv = ap->host->private_data;
u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG;
unsigned long tmo = deadline - jiffies;
struct ata_taskfile tf;
bool online;
int rc, i;
DPRINTK("ENTER\n");
ahci_stop_engine(ap);
for (i = 0; i < 2; i++) {
u16 val;
u32 sstatus;
int port = ap->port_no;
struct ata_host *host = ap->host;
struct pci_dev *pdev = to_pci_dev(host->dev);
/* clear D2H reception area to properly wait for D2H FIS */
ata_tf_init(link->device, &tf);
tf.command = ATA_BUSY;
ata_tf_to_fis(&tf, 0, 0, d2h_fis);
rc = sata_link_hardreset(link, timing, deadline, &online,
ahci_check_ready);
if (sata_scr_read(link, SCR_STATUS, &sstatus) != 0 ||
(sstatus & 0xf) != 1)
break;
ata_link_printk(link, KERN_INFO, "avn bounce port%d\n",
port);
pci_read_config_word(pdev, 0x92, &val);
val &= ~(1 << port);
pci_write_config_word(pdev, 0x92, val);
ata_msleep(ap, 1000);
val |= 1 << port;
pci_write_config_word(pdev, 0x92, val);
deadline += tmo;
}
hpriv->start_engine(ap);
if (online)
*class = ahci_dev_classify(ap);
DPRINTK("EXIT, rc=%d, class=%u\n", rc, *class);
return rc;
}
#ifdef CONFIG_PM
static int ahci_pci_device_suspend(struct pci_dev *pdev, pm_message_t mesg)
{


@ -37,7 +37,6 @@ struct st_ahci_drv_data {
struct reset_control *pwr;
struct reset_control *sw_rst;
struct reset_control *pwr_rst;
struct ahci_host_priv *hpriv;
};
static void st_ahci_configure_oob(void __iomem *mmio)
@ -55,9 +54,10 @@ static void st_ahci_configure_oob(void __iomem *mmio)
writel(new_val, mmio + ST_AHCI_OOBR);
}
static int st_ahci_deassert_resets(struct device *dev)
static int st_ahci_deassert_resets(struct ahci_host_priv *hpriv,
struct device *dev)
{
struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);
struct st_ahci_drv_data *drv_data = hpriv->plat_data;
int err;
if (drv_data->pwr) {
@ -90,8 +90,8 @@ static int st_ahci_deassert_resets(struct device *dev)
static void st_ahci_host_stop(struct ata_host *host)
{
struct ahci_host_priv *hpriv = host->private_data;
struct st_ahci_drv_data *drv_data = hpriv->plat_data;
struct device *dev = host->dev;
struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);
int err;
if (drv_data->pwr) {
@ -103,29 +103,30 @@ static void st_ahci_host_stop(struct ata_host *host)
ahci_platform_disable_resources(hpriv);
}
static int st_ahci_probe_resets(struct platform_device *pdev)
static int st_ahci_probe_resets(struct ahci_host_priv *hpriv,
struct device *dev)
{
struct st_ahci_drv_data *drv_data = platform_get_drvdata(pdev);
struct st_ahci_drv_data *drv_data = hpriv->plat_data;
drv_data->pwr = devm_reset_control_get(&pdev->dev, "pwr-dwn");
drv_data->pwr = devm_reset_control_get(dev, "pwr-dwn");
if (IS_ERR(drv_data->pwr)) {
dev_info(&pdev->dev, "power reset control not defined\n");
dev_info(dev, "power reset control not defined\n");
drv_data->pwr = NULL;
}
drv_data->sw_rst = devm_reset_control_get(&pdev->dev, "sw-rst");
drv_data->sw_rst = devm_reset_control_get(dev, "sw-rst");
if (IS_ERR(drv_data->sw_rst)) {
dev_info(&pdev->dev, "soft reset control not defined\n");
dev_info(dev, "soft reset control not defined\n");
drv_data->sw_rst = NULL;
}
drv_data->pwr_rst = devm_reset_control_get(&pdev->dev, "pwr-rst");
drv_data->pwr_rst = devm_reset_control_get(dev, "pwr-rst");
if (IS_ERR(drv_data->pwr_rst)) {
dev_dbg(&pdev->dev, "power soft reset control not defined\n");
dev_dbg(dev, "power soft reset control not defined\n");
drv_data->pwr_rst = NULL;
}
return st_ahci_deassert_resets(&pdev->dev);
return st_ahci_deassert_resets(hpriv, dev);
}
static struct ata_port_operations st_ahci_port_ops = {
@ -154,15 +155,12 @@ static int st_ahci_probe(struct platform_device *pdev)
if (!drv_data)
return -ENOMEM;
platform_set_drvdata(pdev, drv_data);
hpriv = ahci_platform_get_resources(pdev);
if (IS_ERR(hpriv))
return PTR_ERR(hpriv);
hpriv->plat_data = drv_data;
drv_data->hpriv = hpriv;
err = st_ahci_probe_resets(pdev);
err = st_ahci_probe_resets(hpriv, &pdev->dev);
if (err)
return err;
@ -170,7 +168,7 @@ static int st_ahci_probe(struct platform_device *pdev)
if (err)
return err;
st_ahci_configure_oob(drv_data->hpriv->mmio);
st_ahci_configure_oob(hpriv->mmio);
err = ahci_platform_init_host(pdev, hpriv, &st_ahci_port_info,
&ahci_platform_sht);
@ -185,8 +183,9 @@ static int st_ahci_probe(struct platform_device *pdev)
#ifdef CONFIG_PM_SLEEP
static int st_ahci_suspend(struct device *dev)
{
struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);
struct ahci_host_priv *hpriv = drv_data->hpriv;
struct ata_host *host = dev_get_drvdata(dev);
struct ahci_host_priv *hpriv = host->private_data;
struct st_ahci_drv_data *drv_data = hpriv->plat_data;
int err;
err = ahci_platform_suspend_host(dev);
@ -208,21 +207,21 @@ static int st_ahci_suspend(struct device *dev)
static int st_ahci_resume(struct device *dev)
{
struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev);
struct ahci_host_priv *hpriv = drv_data->hpriv;
struct ata_host *host = dev_get_drvdata(dev);
struct ahci_host_priv *hpriv = host->private_data;
int err;
err = ahci_platform_enable_resources(hpriv);
if (err)
return err;
err = st_ahci_deassert_resets(dev);
err = st_ahci_deassert_resets(hpriv, dev);
if (err) {
ahci_platform_disable_resources(hpriv);
return err;
}
st_ahci_configure_oob(drv_data->hpriv->mmio);
st_ahci_configure_oob(hpriv->mmio);
return ahci_platform_resume_host(dev);
}


@ -1707,8 +1707,7 @@ static void ahci_handle_port_interrupt(struct ata_port *ap,
if (unlikely(resetting))
status &= ~PORT_IRQ_BAD_PMP;
/* if LPM is enabled, PHYRDY doesn't mean anything */
if (ap->link.lpm_policy > ATA_LPM_MAX_POWER) {
if (sata_lpm_ignore_phy_events(&ap->link)) {
status &= ~PORT_IRQ_PHYRDY;
ahci_scr_write(&ap->link, SCR_ERROR, SERR_PHYRDY_CHG);
}


@ -4235,7 +4235,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
ATA_HORKAGE_ZERO_AFTER_TRIM, },
{ "Crucial_CT*MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
ATA_HORKAGE_ZERO_AFTER_TRIM, },
{ "Samsung SSD 850 PRO*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
{ "Samsung SSD 8*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
ATA_HORKAGE_ZERO_AFTER_TRIM, },
/*
@ -6752,6 +6752,38 @@ u32 ata_wait_register(struct ata_port *ap, void __iomem *reg, u32 mask, u32 val,
return tmp;
}
/**
* sata_lpm_ignore_phy_events - test if PHY event should be ignored
* @link: Link receiving the event
*
* Test whether the received PHY event has to be ignored or not.
*
* LOCKING:
* None:
*
* RETURNS:
* True if the event has to be ignored.
*/
bool sata_lpm_ignore_phy_events(struct ata_link *link)
{
unsigned long lpm_timeout = link->last_lpm_change +
msecs_to_jiffies(ATA_TMOUT_SPURIOUS_PHY);
/* if LPM is enabled, PHYRDY doesn't mean anything */
if (link->lpm_policy > ATA_LPM_MAX_POWER)
return true;
/* ignore the first PHY event after the LPM policy changed
* as it might be spurious
*/
if ((link->flags & ATA_LFLAG_CHANGED) &&
time_before(jiffies, lpm_timeout))
return true;
return false;
}
EXPORT_SYMBOL_GPL(sata_lpm_ignore_phy_events);
/*
* Dummy port_ops
*/


@ -3597,6 +3597,9 @@ static int ata_eh_set_lpm(struct ata_link *link, enum ata_lpm_policy policy,
}
}
link->last_lpm_change = jiffies;
link->flags |= ATA_LFLAG_CHANGED;
return 0;
fail:

File diff suppressed because it is too large


@ -227,7 +227,6 @@ static void bt3c_receive(struct bt3c_info *info)
iobase = info->p_dev->resource[0]->start;
avail = bt3c_read(iobase, 0x7006);
//printk("bt3c_cs: receiving %d bytes\n", avail);
bt3c_address(iobase, 0x7480);
while (size < avail) {
@ -250,7 +249,6 @@ static void bt3c_receive(struct bt3c_info *info)
bt_cb(info->rx_skb)->pkt_type = inb(iobase + DATA_L);
inb(iobase + DATA_H);
//printk("bt3c: PACKET_TYPE=%02x\n", bt_cb(info->rx_skb)->pkt_type);
switch (bt_cb(info->rx_skb)->pkt_type) {
@ -364,7 +362,6 @@ static irqreturn_t bt3c_interrupt(int irq, void *dev_inst)
if (stat & 0x0001)
bt3c_receive(info);
if (stat & 0x0002) {
//BT_ERR("Ack (stat=0x%04x)", stat);
clear_bit(XMIT_SENDING, &(info->tx_state));
bt3c_write_wakeup(info);
}


@ -95,6 +95,78 @@ int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
}
EXPORT_SYMBOL_GPL(btbcm_set_bdaddr);
int btbcm_patchram(struct hci_dev *hdev, const char *firmware)
{
const struct hci_command_hdr *cmd;
const struct firmware *fw;
const u8 *fw_ptr;
size_t fw_size;
struct sk_buff *skb;
u16 opcode;
int err;
err = request_firmware(&fw, firmware, &hdev->dev);
if (err < 0) {
BT_INFO("%s: BCM: Patch %s not found", hdev->name, firmware);
return err;
}
/* Start Download */
skb = __hci_cmd_sync(hdev, 0xfc2e, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
BT_ERR("%s: BCM: Download Minidrv command failed (%d)",
hdev->name, err);
goto done;
}
kfree_skb(skb);
/* 50 msec delay after Download Minidrv completes */
msleep(50);
fw_ptr = fw->data;
fw_size = fw->size;
while (fw_size >= sizeof(*cmd)) {
const u8 *cmd_param;
cmd = (struct hci_command_hdr *)fw_ptr;
fw_ptr += sizeof(*cmd);
fw_size -= sizeof(*cmd);
if (fw_size < cmd->plen) {
BT_ERR("%s: BCM: Patch %s is corrupted", hdev->name,
firmware);
err = -EINVAL;
goto done;
}
cmd_param = fw_ptr;
fw_ptr += cmd->plen;
fw_size -= cmd->plen;
opcode = le16_to_cpu(cmd->opcode);
skb = __hci_cmd_sync(hdev, opcode, cmd->plen, cmd_param,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
BT_ERR("%s: BCM: Patch command %04x failed (%d)",
hdev->name, opcode, err);
goto done;
}
kfree_skb(skb);
}
/* 250 msec delay after Launch Ram completes */
msleep(250);
done:
release_firmware(fw);
return err;
}
EXPORT_SYMBOL(btbcm_patchram);
static int btbcm_reset(struct hci_dev *hdev)
{
struct sk_buff *skb;
@ -198,12 +270,8 @@ static const struct {
int btbcm_setup_patchram(struct hci_dev *hdev)
{
const struct hci_command_hdr *cmd;
const struct firmware *fw;
const u8 *fw_ptr;
size_t fw_size;
char fw_name[64];
u16 opcode, subver, rev, pid, vid;
u16 subver, rev, pid, vid;
const char *hw_name = NULL;
struct sk_buff *skb;
struct hci_rp_read_local_version *ver;
@ -273,74 +341,19 @@ int btbcm_setup_patchram(struct hci_dev *hdev)
hw_name ? : "BCM", (subver & 0x7000) >> 13,
(subver & 0x1f00) >> 8, (subver & 0x00ff), rev & 0x0fff);
err = request_firmware(&fw, fw_name, &hdev->dev);
if (err < 0) {
BT_INFO("%s: BCM: patch %s not found", hdev->name, fw_name);
err = btbcm_patchram(hdev, fw_name);
if (err == -ENOENT)
return 0;
}
/* Start Download */
skb = __hci_cmd_sync(hdev, 0xfc2e, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
BT_ERR("%s: BCM: Download Minidrv command failed (%d)",
hdev->name, err);
goto reset;
}
kfree_skb(skb);
/* 50 msec delay after Download Minidrv completes */
msleep(50);
fw_ptr = fw->data;
fw_size = fw->size;
while (fw_size >= sizeof(*cmd)) {
const u8 *cmd_param;
cmd = (struct hci_command_hdr *)fw_ptr;
fw_ptr += sizeof(*cmd);
fw_size -= sizeof(*cmd);
if (fw_size < cmd->plen) {
BT_ERR("%s: BCM: patch %s is corrupted", hdev->name,
fw_name);
err = -EINVAL;
goto reset;
}
cmd_param = fw_ptr;
fw_ptr += cmd->plen;
fw_size -= cmd->plen;
opcode = le16_to_cpu(cmd->opcode);
skb = __hci_cmd_sync(hdev, opcode, cmd->plen, cmd_param,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
BT_ERR("%s: BCM: patch command %04x failed (%d)",
hdev->name, opcode, err);
goto reset;
}
kfree_skb(skb);
}
/* 250 msec delay after Launch Ram completes */
msleep(250);
reset:
/* Reset */
err = btbcm_reset(hdev);
if (err)
goto done;
return err;
/* Read Local Version Info */
skb = btbcm_read_local_version(hdev);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
goto done;
}
if (IS_ERR(skb))
return PTR_ERR(skb);
ver = (struct hci_rp_read_local_version *)skb->data;
rev = le16_to_cpu(ver->hci_rev);
@ -355,10 +368,7 @@ reset:
set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
done:
release_firmware(fw);
return err;
return 0;
}
EXPORT_SYMBOL_GPL(btbcm_setup_patchram);


@ -25,6 +25,7 @@
int btbcm_check_bdaddr(struct hci_dev *hdev);
int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr);
int btbcm_patchram(struct hci_dev *hdev, const char *firmware);
int btbcm_setup_patchram(struct hci_dev *hdev);
int btbcm_setup_apple(struct hci_dev *hdev);
@ -41,6 +42,11 @@ static inline int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
return -EOPNOTSUPP;
}
static inline int btbcm_patchram(struct hci_dev *hdev, const char *firmware)
{
return -EOPNOTSUPP;
}
static inline int btbcm_setup_patchram(struct hci_dev *hdev)
{
return 0;


@ -24,6 +24,7 @@
#include <linux/module.h>
#include <linux/usb.h>
#include <linux/firmware.h>
#include <asm/unaligned.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
@ -57,6 +58,7 @@ static struct usb_driver btusb_driver;
#define BTUSB_AMP 0x4000
#define BTUSB_QCA_ROME 0x8000
#define BTUSB_BCM_APPLE 0x10000
#define BTUSB_REALTEK 0x20000
static const struct usb_device_id btusb_table[] = {
/* Generic Bluetooth USB device */
@ -288,6 +290,28 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_VENDOR_AND_INTERFACE_INFO(0x8087, 0xe0, 0x01, 0x01),
.driver_info = BTUSB_IGNORE },
/* Realtek Bluetooth devices */
{ USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01),
.driver_info = BTUSB_REALTEK },
/* Additional Realtek 8723AE Bluetooth devices */
{ USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
{ USB_DEVICE(0x13d3, 0x3394), .driver_info = BTUSB_REALTEK },
/* Additional Realtek 8723BE Bluetooth devices */
{ USB_DEVICE(0x0489, 0xe085), .driver_info = BTUSB_REALTEK },
{ USB_DEVICE(0x0489, 0xe08b), .driver_info = BTUSB_REALTEK },
{ USB_DEVICE(0x13d3, 0x3410), .driver_info = BTUSB_REALTEK },
{ USB_DEVICE(0x13d3, 0x3416), .driver_info = BTUSB_REALTEK },
{ USB_DEVICE(0x13d3, 0x3459), .driver_info = BTUSB_REALTEK },
/* Additional Realtek 8821AE Bluetooth devices */
{ USB_DEVICE(0x0b05, 0x17dc), .driver_info = BTUSB_REALTEK },
{ USB_DEVICE(0x13d3, 0x3414), .driver_info = BTUSB_REALTEK },
{ USB_DEVICE(0x13d3, 0x3458), .driver_info = BTUSB_REALTEK },
{ USB_DEVICE(0x13d3, 0x3461), .driver_info = BTUSB_REALTEK },
{ USB_DEVICE(0x13d3, 0x3462), .driver_info = BTUSB_REALTEK },
{ } /* Terminating entry */
};
@ -892,7 +916,7 @@ static int btusb_open(struct hci_dev *hdev)
*/
if (data->setup_on_usb) {
err = data->setup_on_usb(hdev);
if (err <0)
if (err < 0)
return err;
}
@ -1345,6 +1369,378 @@ static int btusb_setup_csr(struct hci_dev *hdev)
return ret;
}
#define RTL_FRAG_LEN 252
struct rtl_download_cmd {
__u8 index;
__u8 data[RTL_FRAG_LEN];
} __packed;
struct rtl_download_response {
__u8 status;
__u8 index;
} __packed;
struct rtl_rom_version_evt {
__u8 status;
__u8 version;
} __packed;
struct rtl_epatch_header {
__u8 signature[8];
__le32 fw_version;
__le16 num_patches;
} __packed;
#define RTL_EPATCH_SIGNATURE "Realtech"
#define RTL_ROM_LMP_3499 0x3499
#define RTL_ROM_LMP_8723A 0x1200
#define RTL_ROM_LMP_8723B 0x8723
#define RTL_ROM_LMP_8821A 0x8821
#define RTL_ROM_LMP_8761A 0x8761
static int rtl_read_rom_version(struct hci_dev *hdev, u8 *version)
{
struct rtl_rom_version_evt *rom_version;
struct sk_buff *skb;
int ret;
/* Read RTL ROM version command */
skb = __hci_cmd_sync(hdev, 0xfc6d, 0, NULL, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: Read ROM version failed (%ld)",
hdev->name, PTR_ERR(skb));
return PTR_ERR(skb);
}
if (skb->len != sizeof(*rom_version)) {
BT_ERR("%s: RTL version event length mismatch", hdev->name);
kfree_skb(skb);
return -EIO;
}
rom_version = (struct rtl_rom_version_evt *)skb->data;
BT_INFO("%s: rom_version status=%x version=%x",
hdev->name, rom_version->status, rom_version->version);
ret = rom_version->status;
if (ret == 0)
*version = rom_version->version;
kfree_skb(skb);
return ret;
}
static int rtl8723b_parse_firmware(struct hci_dev *hdev, u16 lmp_subver,
const struct firmware *fw,
unsigned char **_buf)
{
const u8 extension_sig[] = { 0x51, 0x04, 0xfd, 0x77 };
struct rtl_epatch_header *epatch_info;
unsigned char *buf;
int i, ret, len;
size_t min_size;
u8 opcode, length, data, rom_version = 0;
int project_id = -1;
const unsigned char *fwptr, *chip_id_base;
const unsigned char *patch_length_base, *patch_offset_base;
u32 patch_offset = 0;
u16 patch_length, num_patches;
const u16 project_id_to_lmp_subver[] = {
RTL_ROM_LMP_8723A,
RTL_ROM_LMP_8723B,
RTL_ROM_LMP_8821A,
RTL_ROM_LMP_8761A
};
ret = rtl_read_rom_version(hdev, &rom_version);
if (ret)
return -bt_to_errno(ret);
min_size = sizeof(struct rtl_epatch_header) + sizeof(extension_sig) + 3;
if (fw->size < min_size)
return -EINVAL;
fwptr = fw->data + fw->size - sizeof(extension_sig);
if (memcmp(fwptr, extension_sig, sizeof(extension_sig)) != 0) {
BT_ERR("%s: extension section signature mismatch", hdev->name);
return -EINVAL;
}
/* Loop from the end of the firmware parsing instructions, until
* we find an instruction that identifies the "project ID" for the
* hardware supported by this firmware file.
* Once we have that, we double-check that the project_id is suitable
* for the hardware we are working with.
*/
while (fwptr >= fw->data + (sizeof(struct rtl_epatch_header) + 3)) {
opcode = *--fwptr;
length = *--fwptr;
data = *--fwptr;
BT_DBG("check op=%x len=%x data=%x", opcode, length, data);
if (opcode == 0xff) /* EOF */
break;
if (length == 0) {
BT_ERR("%s: found instruction with length 0",
hdev->name);
return -EINVAL;
}
if (opcode == 0 && length == 1) {
project_id = data;
break;
}
fwptr -= length;
}
if (project_id < 0) {
BT_ERR("%s: failed to find version instruction", hdev->name);
return -EINVAL;
}
if (project_id >= ARRAY_SIZE(project_id_to_lmp_subver)) {
BT_ERR("%s: unknown project id %d", hdev->name, project_id);
return -EINVAL;
}
if (lmp_subver != project_id_to_lmp_subver[project_id]) {
BT_ERR("%s: firmware is for %x but this is a %x", hdev->name,
project_id_to_lmp_subver[project_id], lmp_subver);
return -EINVAL;
}
epatch_info = (struct rtl_epatch_header *)fw->data;
if (memcmp(epatch_info->signature, RTL_EPATCH_SIGNATURE, 8) != 0) {
BT_ERR("%s: bad EPATCH signature", hdev->name);
return -EINVAL;
}
num_patches = le16_to_cpu(epatch_info->num_patches);
BT_DBG("fw_version=%x, num_patches=%d",
le32_to_cpu(epatch_info->fw_version), num_patches);
/* After the rtl_epatch_header there is a funky patch metadata section.
* Assuming 2 patches, the layout is:
* ChipID1 ChipID2 PatchLength1 PatchLength2 PatchOffset1 PatchOffset2
*
* Find the right patch for this chip.
*/
min_size += 8 * num_patches;
if (fw->size < min_size)
return -EINVAL;
chip_id_base = fw->data + sizeof(struct rtl_epatch_header);
patch_length_base = chip_id_base + (sizeof(u16) * num_patches);
patch_offset_base = patch_length_base + (sizeof(u16) * num_patches);
for (i = 0; i < num_patches; i++) {
u16 chip_id = get_unaligned_le16(chip_id_base +
(i * sizeof(u16)));
if (chip_id == rom_version + 1) {
patch_length = get_unaligned_le16(patch_length_base +
(i * sizeof(u16)));
patch_offset = get_unaligned_le32(patch_offset_base +
(i * sizeof(u32)));
break;
}
}
if (!patch_offset) {
BT_ERR("%s: didn't find patch for chip id %d",
hdev->name, rom_version);
return -EINVAL;
}
BT_DBG("length=%x offset=%x index %d", patch_length, patch_offset, i);
min_size = patch_offset + patch_length;
if (fw->size < min_size)
return -EINVAL;
/* Copy the firmware into a new buffer and write the version at
* the end.
*/
len = patch_length;
buf = kmemdup(fw->data + patch_offset, patch_length, GFP_KERNEL);
if (!buf)
return -ENOMEM;
memcpy(buf + patch_length - 4, &epatch_info->fw_version, 4);
*_buf = buf;
return len;
}
static int rtl_download_firmware(struct hci_dev *hdev,
const unsigned char *data, int fw_len)
{
struct rtl_download_cmd *dl_cmd;
int frag_num = fw_len / RTL_FRAG_LEN + 1;
int frag_len = RTL_FRAG_LEN;
int ret = 0;
int i;
dl_cmd = kmalloc(sizeof(struct rtl_download_cmd), GFP_KERNEL);
if (!dl_cmd)
return -ENOMEM;
for (i = 0; i < frag_num; i++) {
struct rtl_download_response *dl_resp;
struct sk_buff *skb;
BT_DBG("download fw (%d/%d)", i, frag_num);
dl_cmd->index = i;
if (i == (frag_num - 1)) {
dl_cmd->index |= 0x80; /* data end */
frag_len = fw_len % RTL_FRAG_LEN;
}
memcpy(dl_cmd->data, data, frag_len);
/* Send download command */
skb = __hci_cmd_sync(hdev, 0xfc20, frag_len + 1, dl_cmd,
HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
BT_ERR("%s: download fw command failed (%ld)",
hdev->name, PTR_ERR(skb));
ret = -PTR_ERR(skb);
goto out;
}
if (skb->len != sizeof(*dl_resp)) {
BT_ERR("%s: download fw event length mismatch",
hdev->name);
kfree_skb(skb);
ret = -EIO;
goto out;
}
dl_resp = (struct rtl_download_response *)skb->data;
if (dl_resp->status != 0) {
kfree_skb(skb);
ret = bt_to_errno(dl_resp->status);
goto out;
}
kfree_skb(skb);
data += RTL_FRAG_LEN;
}
out:
kfree(dl_cmd);
return ret;
}
static int btusb_setup_rtl8723a(struct hci_dev *hdev)
{
struct btusb_data *data = dev_get_drvdata(&hdev->dev);
struct usb_device *udev = interface_to_usbdev(data->intf);
const struct firmware *fw;
int ret;
BT_INFO("%s: rtl: loading rtl_bt/rtl8723a_fw.bin", hdev->name);
ret = request_firmware(&fw, "rtl_bt/rtl8723a_fw.bin", &udev->dev);
if (ret < 0) {
BT_ERR("%s: Failed to load rtl_bt/rtl8723a_fw.bin", hdev->name);
return ret;
}
if (fw->size < 8) {
ret = -EINVAL;
goto out;
}
/* Check that the firmware doesn't have the epatch signature
* (which is only for RTL8723B and newer).
*/
if (!memcmp(fw->data, RTL_EPATCH_SIGNATURE, 8)) {
BT_ERR("%s: unexpected EPATCH signature!", hdev->name);
ret = -EINVAL;
goto out;
}
ret = rtl_download_firmware(hdev, fw->data, fw->size);
out:
release_firmware(fw);
return ret;
}
static int btusb_setup_rtl8723b(struct hci_dev *hdev, u16 lmp_subver,
const char *fw_name)
{
struct btusb_data *data = dev_get_drvdata(&hdev->dev);
struct usb_device *udev = interface_to_usbdev(data->intf);
unsigned char *fw_data = NULL;
const struct firmware *fw;
int ret;
BT_INFO("%s: rtl: loading %s", hdev->name, fw_name);
ret = request_firmware(&fw, fw_name, &udev->dev);
if (ret < 0) {
BT_ERR("%s: Failed to load %s", hdev->name, fw_name);
return ret;
}
ret = rtl8723b_parse_firmware(hdev, lmp_subver, fw, &fw_data);
if (ret < 0)
goto out;
ret = rtl_download_firmware(hdev, fw_data, ret);
kfree(fw_data);
if (ret < 0)
goto out;
out:
release_firmware(fw);
return ret;
}
static int btusb_setup_realtek(struct hci_dev *hdev)
{
struct sk_buff *skb;
struct hci_rp_read_local_version *resp;
u16 lmp_subver;
skb = btusb_read_local_version(hdev);
if (IS_ERR(skb))
return -PTR_ERR(skb);
resp = (struct hci_rp_read_local_version *)skb->data;
BT_INFO("%s: rtl: examining hci_ver=%02x hci_rev=%04x lmp_ver=%02x "
"lmp_subver=%04x", hdev->name, resp->hci_ver, resp->hci_rev,
resp->lmp_ver, resp->lmp_subver);
lmp_subver = le16_to_cpu(resp->lmp_subver);
kfree_skb(skb);
/* Match a set of subver values that correspond to stock firmware,
* which is not compatible with standard btusb.
* If matched, upload an alternative firmware that does conform to
* standard btusb. Once that firmware is uploaded, the subver changes
* to a different value.
*/
switch (lmp_subver) {
case RTL_ROM_LMP_8723A:
case RTL_ROM_LMP_3499:
return btusb_setup_rtl8723a(hdev);
case RTL_ROM_LMP_8723B:
return btusb_setup_rtl8723b(hdev, lmp_subver,
"rtl_bt/rtl8723b_fw.bin");
case RTL_ROM_LMP_8821A:
return btusb_setup_rtl8723b(hdev, lmp_subver,
"rtl_bt/rtl8821a_fw.bin");
case RTL_ROM_LMP_8761A:
return btusb_setup_rtl8723b(hdev, lmp_subver,
"rtl_bt/rtl8761a_fw.bin");
default:
BT_INFO("rtl: assuming no firmware upload needed.");
return 0;
}
}
static const struct firmware *btusb_setup_intel_get_fw(struct hci_dev *hdev,
struct intel_version *ver)
{
@ -2776,6 +3172,9 @@ static int btusb_probe(struct usb_interface *intf,
hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
}
if (id->driver_info & BTUSB_REALTEK)
hdev->setup = btusb_setup_realtek;
if (id->driver_info & BTUSB_AMP) {
/* AMP controllers do not support SCO packets */
data->isoc = NULL;


@ -95,7 +95,6 @@ static void ath_hci_uart_work(struct work_struct *work)
hci_uart_tx_wakeup(hu);
}
/* Initialize protocol */
static int ath_open(struct hci_uart *hu)
{
struct ath_struct *ath;
@ -116,19 +115,6 @@ static int ath_open(struct hci_uart *hu)
return 0;
}
/* Flush protocol data */
static int ath_flush(struct hci_uart *hu)
{
struct ath_struct *ath = hu->priv;
BT_DBG("hu %p", hu);
skb_queue_purge(&ath->txq);
return 0;
}
/* Close protocol */
static int ath_close(struct hci_uart *hu)
{
struct ath_struct *ath = hu->priv;
@ -147,9 +133,73 @@ static int ath_close(struct hci_uart *hu)
return 0;
}
static int ath_flush(struct hci_uart *hu)
{
struct ath_struct *ath = hu->priv;
BT_DBG("hu %p", hu);
skb_queue_purge(&ath->txq);
return 0;
}
static int ath_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
{
struct sk_buff *skb;
u8 buf[10];
int err;
buf[0] = 0x01;
buf[1] = 0x01;
buf[2] = 0x00;
buf[3] = sizeof(bdaddr_t);
memcpy(buf + 4, bdaddr, sizeof(bdaddr_t));
skb = __hci_cmd_sync(hdev, 0xfc0b, sizeof(buf), buf, HCI_INIT_TIMEOUT);
if (IS_ERR(skb)) {
err = PTR_ERR(skb);
BT_ERR("%s: Change address command failed (%d)",
hdev->name, err);
return err;
}
kfree_skb(skb);
return 0;
}
static int ath_setup(struct hci_uart *hu)
{
BT_DBG("hu %p", hu);
hu->hdev->set_bdaddr = ath_set_bdaddr;
return 0;
}
static const struct h4_recv_pkt ath_recv_pkts[] = {
{ H4_RECV_ACL, .recv = hci_recv_frame },
{ H4_RECV_SCO, .recv = hci_recv_frame },
{ H4_RECV_EVENT, .recv = hci_recv_frame },
};
static int ath_recv(struct hci_uart *hu, const void *data, int count)
{
struct ath_struct *ath = hu->priv;
ath->rx_skb = h4_recv_buf(hu->hdev, ath->rx_skb, data, count,
ath_recv_pkts, ARRAY_SIZE(ath_recv_pkts));
if (IS_ERR(ath->rx_skb)) {
int err = PTR_ERR(ath->rx_skb);
BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
return err;
}
return count;
}
#define HCI_OP_ATH_SLEEP 0xFC04
/* Enqueue frame for transmission */
static int ath_enqueue(struct hci_uart *hu, struct sk_buff *skb)
{
struct ath_struct *ath = hu->priv;
@ -159,8 +209,7 @@ static int ath_enqueue(struct hci_uart *hu, struct sk_buff *skb)
return 0;
}
/*
* Update power management enable flag with parameters of
/* Update power management enable flag with parameters of
* HCI sleep enable vendor specific HCI command.
*/
if (bt_cb(skb)->pkt_type == HCI_COMMAND_PKT) {
@ -190,37 +239,16 @@ static struct sk_buff *ath_dequeue(struct hci_uart *hu)
return skb_dequeue(&ath->txq);
}
static const struct h4_recv_pkt ath_recv_pkts[] = {
{ H4_RECV_ACL, .recv = hci_recv_frame },
{ H4_RECV_SCO, .recv = hci_recv_frame },
{ H4_RECV_EVENT, .recv = hci_recv_frame },
};
/* Recv data */
static int ath_recv(struct hci_uart *hu, const void *data, int count)
{
struct ath_struct *ath = hu->priv;
ath->rx_skb = h4_recv_buf(hu->hdev, ath->rx_skb, data, count,
ath_recv_pkts, ARRAY_SIZE(ath_recv_pkts));
if (IS_ERR(ath->rx_skb)) {
int err = PTR_ERR(ath->rx_skb);
BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err);
return err;
}
return count;
}
static const struct hci_uart_proto athp = {
.id = HCI_UART_ATH3K,
.name = "ATH3K",
.open = ath_open,
.close = ath_close,
.flush = ath_flush,
.setup = ath_setup,
.recv = ath_recv,
.enqueue = ath_enqueue,
.dequeue = ath_dequeue,
.flush = ath_flush,
};
int __init ath_init(void)


@ -119,6 +119,18 @@ static int usb_extcon_probe(struct platform_device *pdev)
return PTR_ERR(info->id_gpiod);
}
info->edev = devm_extcon_dev_allocate(dev, usb_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(dev, "failed to allocate extcon device\n");
return -ENOMEM;
}
ret = devm_extcon_dev_register(dev, info->edev);
if (ret < 0) {
dev_err(dev, "failed to register extcon device\n");
return ret;
}
ret = gpiod_set_debounce(info->id_gpiod,
USB_GPIO_DEBOUNCE_MS * 1000);
if (ret < 0)
@ -142,18 +154,6 @@ static int usb_extcon_probe(struct platform_device *pdev)
return ret;
}
info->edev = devm_extcon_dev_allocate(dev, usb_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(dev, "failed to allocate extcon device\n");
return -ENOMEM;
}
ret = devm_extcon_dev_register(dev, info->edev);
if (ret < 0) {
dev_err(dev, "failed to register extcon device\n");
return ret;
}
platform_set_drvdata(pdev, info);
device_init_wakeup(dev, 1);


@ -499,19 +499,19 @@ static int __init dmi_present(const u8 *buf)
buf += 16;
if (memcmp(buf, "_DMI_", 5) == 0 && dmi_checksum(buf, 15)) {
if (smbios_ver)
dmi_ver = smbios_ver;
else
dmi_ver = (buf[14] & 0xF0) << 4 | (buf[14] & 0x0F);
dmi_num = get_unaligned_le16(buf + 12);
dmi_len = get_unaligned_le16(buf + 6);
dmi_base = get_unaligned_le32(buf + 8);
if (dmi_walk_early(dmi_decode) == 0) {
if (smbios_ver) {
dmi_ver = smbios_ver;
pr_info("SMBIOS %d.%d%s present.\n",
dmi_ver >> 8, dmi_ver & 0xFF,
(dmi_ver < 0x0300) ? "" : ".x");
pr_info("SMBIOS %d.%d present.\n",
dmi_ver >> 8, dmi_ver & 0xFF);
} else {
dmi_ver = (buf[14] & 0xF0) << 4 |
(buf[14] & 0x0F);
pr_info("Legacy DMI %d.%d present.\n",
dmi_ver >> 8, dmi_ver & 0xFF);
}


@ -699,6 +699,16 @@ static int i915_drm_resume(struct drm_device *dev)
intel_init_pch_refclk(dev);
drm_mode_config_reset(dev);
/*
* Interrupts have to be enabled before any batches are run. If not the
* GPU will hang. i915_gem_init_hw() will initiate batches to
* update/restore the context.
*
* Modeset enabling in intel_modeset_init_hw() also needs working
* interrupts.
*/
intel_runtime_pm_enable_interrupts(dev_priv);
mutex_lock(&dev->struct_mutex);
if (i915_gem_init_hw(dev)) {
DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n");
@ -706,9 +716,6 @@ static int i915_drm_resume(struct drm_device *dev)
}
mutex_unlock(&dev->struct_mutex);
/* We need working interrupts for modeset enabling ... */
intel_runtime_pm_enable_interrupts(dev_priv);
intel_modeset_init_hw(dev);
spin_lock_irq(&dev_priv->irq_lock);


@ -5822,7 +5822,7 @@ static int cik_pcie_gart_enable(struct radeon_device *rdev)
L2_CACHE_BIGK_FRAGMENT_SIZE(4));
/* setup context0 */
WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1);
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12);
WREG32(VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR,
(u32)(rdev->dummy_page.addr >> 12));
@ -5837,7 +5837,7 @@ static int cik_pcie_gart_enable(struct radeon_device *rdev)
/* restore context1-15 */
/* set vm size, must be a multiple of 4 */
WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1);
for (i = 1; i < 16; i++) {
if (i < 8)
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),


@ -2485,7 +2485,7 @@ static int evergreen_pcie_gart_enable(struct radeon_device *rdev)
WREG32(MC_VM_MB_L1_TLB2_CNTL, tmp);
WREG32(MC_VM_MB_L1_TLB3_CNTL, tmp);
WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1);
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12);
WREG32(VM_CONTEXT0_CNTL, ENABLE_CONTEXT | PAGE_TABLE_DEPTH(0) |
RANGE_PROTECTION_FAULT_ENABLE_DEFAULT);


@ -1282,7 +1282,7 @@ static int cayman_pcie_gart_enable(struct radeon_device *rdev)
L2_CACHE_BIGK_FRAGMENT_SIZE(6));
/* setup context0 */
WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1);
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12);
WREG32(VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR,
(u32)(rdev->dummy_page.addr >> 12));
@ -1301,7 +1301,8 @@ static int cayman_pcie_gart_enable(struct radeon_device *rdev)
*/
for (i = 1; i < 8; i++) {
WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR + (i << 2), 0);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2), rdev->vm_manager.max_pfn);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2),
rdev->vm_manager.max_pfn - 1);
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
rdev->vm_manager.saved_table_addr[i]);
}


@ -1112,7 +1112,7 @@ static int r600_pcie_gart_enable(struct radeon_device *rdev)
WREG32(MC_VM_L1_TLB_MCB_RD_SEM_CNTL, tmp | ENABLE_SEMAPHORE_MODE);
WREG32(MC_VM_L1_TLB_MCB_WR_SEM_CNTL, tmp | ENABLE_SEMAPHORE_MODE);
WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1);
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12);
WREG32(VM_CONTEXT0_CNTL, ENABLE_CONTEXT | PAGE_TABLE_DEPTH(0) |
RANGE_PROTECTION_FAULT_ENABLE_DEFAULT);


@ -666,6 +666,9 @@ radeon_dp_mst_probe(struct radeon_connector *radeon_connector)
int ret;
u8 msg[1];
if (!radeon_mst)
return 0;
if (dig_connector->dpcd[DP_DPCD_REV] < 0x12)
return 0;


@ -921,7 +921,7 @@ static int rv770_pcie_gart_enable(struct radeon_device *rdev)
WREG32(MC_VM_MB_L1_TLB2_CNTL, tmp);
WREG32(MC_VM_MB_L1_TLB3_CNTL, tmp);
WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1);
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12);
WREG32(VM_CONTEXT0_CNTL, ENABLE_CONTEXT | PAGE_TABLE_DEPTH(0) |
RANGE_PROTECTION_FAULT_ENABLE_DEFAULT);


@ -4303,7 +4303,7 @@ static int si_pcie_gart_enable(struct radeon_device *rdev)
L2_CACHE_BIGK_FRAGMENT_SIZE(4));
/* setup context0 */
WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12);
WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1);
WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12);
WREG32(VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR,
(u32)(rdev->dummy_page.addr >> 12));
@ -4318,7 +4318,7 @@ static int si_pcie_gart_enable(struct radeon_device *rdev)
/* empty context1-15 */
/* set vm size, must be a multiple of 4 */
WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1);
/* Assign the pt base to something valid for now; the pts used for
* the VMs are determined by the application and setup and assigned
* on the fly in the vm part of radeon_gart.c


@ -643,15 +643,6 @@ config BLK_DEV_TC86C001
help
This driver adds support for Toshiba TC86C001 GOKU-S chip.
config BLK_DEV_CELLEB
tristate "Toshiba's Cell Reference Set IDE support"
depends on PPC_CELLEB
select BLK_DEV_IDEDMA_PCI
help
This driver provides support for the on-board IDE controller on
Toshiba Cell Reference Board.
If unsure, say Y.
endif
# TODO: BLK_DEV_IDEDMA_PCI -> BLK_DEV_IDEDMA_SFF


@ -38,7 +38,6 @@ obj-$(CONFIG_BLK_DEV_AEC62XX) += aec62xx.o
obj-$(CONFIG_BLK_DEV_ALI15X3) += alim15x3.o
obj-$(CONFIG_BLK_DEV_AMD74XX) += amd74xx.o
obj-$(CONFIG_BLK_DEV_ATIIXP) += atiixp.o
obj-$(CONFIG_BLK_DEV_CELLEB) += scc_pata.o
obj-$(CONFIG_BLK_DEV_CMD64X) += cmd64x.o
obj-$(CONFIG_BLK_DEV_CS5520) += cs5520.o
obj-$(CONFIG_BLK_DEV_CS5530) += cs5530.o


@ -1,887 +0,0 @@
/*
* Support for IDE interfaces on Celleb platform
*
* (C) Copyright 2006 TOSHIBA CORPORATION
*
* This code is based on drivers/ide/pci/siimage.c:
* Copyright (C) 2001-2002 Andre Hedrick <andre@linux-ide.org>
* Copyright (C) 2003 Red Hat
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <linux/types.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/delay.h>
#include <linux/ide.h>
#include <linux/init.h>
#define PCI_DEVICE_ID_TOSHIBA_SCC_ATA 0x01b4
#define SCC_PATA_NAME "scc IDE"
#define TDVHSEL_MASTER 0x00000001
#define TDVHSEL_SLAVE 0x00000004
#define MODE_JCUSFEN 0x00000080
#define CCKCTRL_ATARESET 0x00040000
#define CCKCTRL_BUFCNT 0x00020000
#define CCKCTRL_CRST 0x00010000
#define CCKCTRL_OCLKEN 0x00000100
#define CCKCTRL_ATACLKOEN 0x00000002
#define CCKCTRL_LCLKEN 0x00000001
#define QCHCD_IOS_SS 0x00000001
#define QCHSD_STPDIAG 0x00020000
#define INTMASK_MSK 0xD1000012
#define INTSTS_SERROR 0x80000000
#define INTSTS_PRERR 0x40000000
#define INTSTS_RERR 0x10000000
#define INTSTS_ICERR 0x01000000
#define INTSTS_BMSINT 0x00000010
#define INTSTS_BMHE 0x00000008
#define INTSTS_IOIRQS 0x00000004
#define INTSTS_INTRQ 0x00000002
#define INTSTS_ACTEINT 0x00000001
#define ECMODE_VALUE 0x01
static struct scc_ports {
unsigned long ctl, dma;
struct ide_host *host; /* for removing port from system */
} scc_ports[MAX_HWIFS];
/* PIO transfer mode table */
/* JCHST */
static unsigned long JCHSTtbl[2][7] = {
{0x0E, 0x05, 0x02, 0x03, 0x02, 0x00, 0x00}, /* 100MHz */
{0x13, 0x07, 0x04, 0x04, 0x03, 0x00, 0x00} /* 133MHz */
};
/* JCHHT */
static unsigned long JCHHTtbl[2][7] = {
{0x0E, 0x02, 0x02, 0x02, 0x02, 0x00, 0x00}, /* 100MHz */
{0x13, 0x03, 0x03, 0x03, 0x03, 0x00, 0x00} /* 133MHz */
};
/* JCHCT */
static unsigned long JCHCTtbl[2][7] = {
{0x1D, 0x1D, 0x1C, 0x0B, 0x06, 0x00, 0x00}, /* 100MHz */
{0x27, 0x26, 0x26, 0x0E, 0x09, 0x00, 0x00} /* 133MHz */
};
/* DMA transfer mode table */
/* JCHDCTM/JCHDCTS */
static unsigned long JCHDCTxtbl[2][7] = {
{0x0A, 0x06, 0x04, 0x03, 0x01, 0x00, 0x00}, /* 100MHz */
{0x0E, 0x09, 0x06, 0x04, 0x02, 0x01, 0x00} /* 133MHz */
};
/* JCSTWTM/JCSTWTS */
static unsigned long JCSTWTxtbl[2][7] = {
{0x06, 0x04, 0x03, 0x02, 0x02, 0x02, 0x00}, /* 100MHz */
{0x09, 0x06, 0x04, 0x02, 0x02, 0x02, 0x02} /* 133MHz */
};
/* JCTSS */
static unsigned long JCTSStbl[2][7] = {
{0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x00}, /* 100MHz */
{0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05} /* 133MHz */
};
/* JCENVT */
static unsigned long JCENVTtbl[2][7] = {
{0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x00}, /* 100MHz */
{0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02} /* 133MHz */
};
/* JCACTSELS/JCACTSELM */
static unsigned long JCACTSELtbl[2][7] = {
{0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00}, /* 100MHz */
{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01} /* 133MHz */
};
static u8 scc_ide_inb(unsigned long port)
{
u32 data = in_be32((void*)port);
return (u8)data;
}
static void scc_exec_command(ide_hwif_t *hwif, u8 cmd)
{
out_be32((void *)hwif->io_ports.command_addr, cmd);
eieio();
in_be32((void *)(hwif->dma_base + 0x01c));
eieio();
}
static u8 scc_read_status(ide_hwif_t *hwif)
{
return (u8)in_be32((void *)hwif->io_ports.status_addr);
}
static u8 scc_read_altstatus(ide_hwif_t *hwif)
{
return (u8)in_be32((void *)hwif->io_ports.ctl_addr);
}
static u8 scc_dma_sff_read_status(ide_hwif_t *hwif)
{
return (u8)in_be32((void *)(hwif->dma_base + 4));
}
static void scc_write_devctl(ide_hwif_t *hwif, u8 ctl)
{
out_be32((void *)hwif->io_ports.ctl_addr, ctl);
eieio();
in_be32((void *)(hwif->dma_base + 0x01c));
eieio();
}
static void scc_ide_insw(unsigned long port, void *addr, u32 count)
{
u16 *ptr = (u16 *)addr;
while (count--) {
*ptr++ = le16_to_cpu(in_be32((void*)port));
}
}
static void scc_ide_insl(unsigned long port, void *addr, u32 count)
{
u16 *ptr = (u16 *)addr;
while (count--) {
*ptr++ = le16_to_cpu(in_be32((void*)port));
*ptr++ = le16_to_cpu(in_be32((void*)port));
}
}
static void scc_ide_outb(u8 addr, unsigned long port)
{
out_be32((void*)port, addr);
}
static void
scc_ide_outsw(unsigned long port, void *addr, u32 count)
{
u16 *ptr = (u16 *)addr;
while (count--) {
out_be32((void*)port, cpu_to_le16(*ptr++));
}
}
static void
scc_ide_outsl(unsigned long port, void *addr, u32 count)
{
u16 *ptr = (u16 *)addr;
while (count--) {
out_be32((void*)port, cpu_to_le16(*ptr++));
out_be32((void*)port, cpu_to_le16(*ptr++));
}
}
/**
* scc_set_pio_mode - set host controller for PIO mode
* @hwif: port
* @drive: drive
*
* Load the timing settings for this device mode into the
* controller.
*/
static void scc_set_pio_mode(ide_hwif_t *hwif, ide_drive_t *drive)
{
struct scc_ports *ports = ide_get_hwifdata(hwif);
unsigned long ctl_base = ports->ctl;
unsigned long cckctrl_port = ctl_base + 0xff0;
unsigned long piosht_port = ctl_base + 0x000;
unsigned long pioct_port = ctl_base + 0x004;
unsigned long reg;
int offset;
const u8 pio = drive->pio_mode - XFER_PIO_0;
reg = in_be32((void __iomem *)cckctrl_port);
if (reg & CCKCTRL_ATACLKOEN) {
offset = 1; /* 133MHz */
} else {
offset = 0; /* 100MHz */
}
reg = JCHSTtbl[offset][pio] << 16 | JCHHTtbl[offset][pio];
out_be32((void __iomem *)piosht_port, reg);
reg = JCHCTtbl[offset][pio];
out_be32((void __iomem *)pioct_port, reg);
}
/**
* scc_set_dma_mode - set host controller for DMA mode
* @hwif: port
* @drive: drive
*
* Load the timing settings for this device mode into the
* controller.
*/
static void scc_set_dma_mode(ide_hwif_t *hwif, ide_drive_t *drive)
{
struct scc_ports *ports = ide_get_hwifdata(hwif);
unsigned long ctl_base = ports->ctl;
unsigned long cckctrl_port = ctl_base + 0xff0;
unsigned long mdmact_port = ctl_base + 0x008;
unsigned long mcrcst_port = ctl_base + 0x00c;
unsigned long sdmact_port = ctl_base + 0x010;
unsigned long scrcst_port = ctl_base + 0x014;
unsigned long udenvt_port = ctl_base + 0x018;
unsigned long tdvhsel_port = ctl_base + 0x020;
int is_slave = drive->dn & 1;
int offset, idx;
unsigned long reg;
unsigned long jcactsel;
const u8 speed = drive->dma_mode;
reg = in_be32((void __iomem *)cckctrl_port);
if (reg & CCKCTRL_ATACLKOEN) {
offset = 1; /* 133MHz */
} else {
offset = 0; /* 100MHz */
}
idx = speed - XFER_UDMA_0;
jcactsel = JCACTSELtbl[offset][idx];
if (is_slave) {
out_be32((void __iomem *)sdmact_port, JCHDCTxtbl[offset][idx]);
out_be32((void __iomem *)scrcst_port, JCSTWTxtbl[offset][idx]);
jcactsel = jcactsel << 2;
out_be32((void __iomem *)tdvhsel_port, (in_be32((void __iomem *)tdvhsel_port) & ~TDVHSEL_SLAVE) | jcactsel);
} else {
out_be32((void __iomem *)mdmact_port, JCHDCTxtbl[offset][idx]);
out_be32((void __iomem *)mcrcst_port, JCSTWTxtbl[offset][idx]);
out_be32((void __iomem *)tdvhsel_port, (in_be32((void __iomem *)tdvhsel_port) & ~TDVHSEL_MASTER) | jcactsel);
}
reg = JCTSStbl[offset][idx] << 16 | JCENVTtbl[offset][idx];
out_be32((void __iomem *)udenvt_port, reg);
}
static void scc_dma_host_set(ide_drive_t *drive, int on)
{
ide_hwif_t *hwif = drive->hwif;
u8 unit = drive->dn & 1;
u8 dma_stat = scc_dma_sff_read_status(hwif);
if (on)
dma_stat |= (1 << (5 + unit));
else
dma_stat &= ~(1 << (5 + unit));
scc_ide_outb(dma_stat, hwif->dma_base + 4);
}
/**
* scc_dma_setup - begin a DMA phase
* @drive: target device
* @cmd: command
*
* Build an IDE DMA PRD (IDE speak for scatter gather table)
* and then set up the DMA transfer registers.
*
* Returns 0 on success. If a PIO fallback is required then 1
* is returned.
*/
static int scc_dma_setup(ide_drive_t *drive, struct ide_cmd *cmd)
{
ide_hwif_t *hwif = drive->hwif;
u32 rw = (cmd->tf_flags & IDE_TFLAG_WRITE) ? 0 : ATA_DMA_WR;
u8 dma_stat;
/* fall back to pio! */
if (ide_build_dmatable(drive, cmd) == 0)
return 1;
/* PRD table */
out_be32((void __iomem *)(hwif->dma_base + 8), hwif->dmatable_dma);
/* specify r/w */
out_be32((void __iomem *)hwif->dma_base, rw);
/* read DMA status for INTR & ERROR flags */
dma_stat = scc_dma_sff_read_status(hwif);
/* clear INTR & ERROR flags */
out_be32((void __iomem *)(hwif->dma_base + 4), dma_stat | 6);
return 0;
}
static void scc_dma_start(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u8 dma_cmd = scc_ide_inb(hwif->dma_base);
/* start DMA */
scc_ide_outb(dma_cmd | 1, hwif->dma_base);
}
static int __scc_dma_end(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u8 dma_stat, dma_cmd;
/* get DMA command mode */
dma_cmd = scc_ide_inb(hwif->dma_base);
/* stop DMA */
scc_ide_outb(dma_cmd & ~1, hwif->dma_base);
/* get DMA status */
dma_stat = scc_dma_sff_read_status(hwif);
/* clear the INTR & ERROR bits */
scc_ide_outb(dma_stat | 6, hwif->dma_base + 4);
/* verify good DMA status */
return (dma_stat & 7) != 4 ? (0x10 | dma_stat) : 0;
}
/**
* scc_dma_end - Stop DMA
* @drive: IDE drive
*
* Check and clear INT Status register.
* Then call __scc_dma_end().
*/
static int scc_dma_end(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
void __iomem *dma_base = (void __iomem *)hwif->dma_base;
unsigned long intsts_port = hwif->dma_base + 0x014;
u32 reg;
int dma_stat, data_loss = 0;
static int retry = 0;
/* errata A308 workaround: Step5 (check data loss) */
/* Skip the check for non-disk devices; they are limited to UDMA4 */
if (!(in_be32((void __iomem *)hwif->io_ports.ctl_addr)
& ATA_ERR) &&
drive->media == ide_disk && drive->current_speed > XFER_UDMA_4) {
reg = in_be32((void __iomem *)intsts_port);
if (!(reg & INTSTS_ACTEINT)) {
printk(KERN_WARNING "%s: operation failed (transfer data loss)\n",
drive->name);
data_loss = 1;
if (retry++) {
struct request *rq = hwif->rq;
ide_drive_t *drive;
int i;
/* ERROR_RESET and drive->crc_count are needed
* to reduce DMA transfer mode in retry process.
*/
if (rq)
rq->errors |= ERROR_RESET;
ide_port_for_each_dev(i, drive, hwif)
drive->crc_count++;
}
}
}
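/* Drain every pending interrupt-status condition before finishing: each
 * branch below acknowledges its status bit (and stops the bus master where
 * the errata require it), and the loop repeats until no handled bit is set.
 */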
while (1) {
reg = in_be32((void __iomem *)intsts_port);
if (reg & INTSTS_SERROR) {
printk(KERN_WARNING "%s: SERROR\n", SCC_PATA_NAME);
out_be32((void __iomem *)intsts_port, INTSTS_SERROR|INTSTS_BMSINT);
out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS);
continue;
}
if (reg & INTSTS_PRERR) {
u32 maea0, maec0;
unsigned long ctl_base = hwif->config_data;
maea0 = in_be32((void __iomem *)(ctl_base + 0xF50));
maec0 = in_be32((void __iomem *)(ctl_base + 0xF54));
printk(KERN_WARNING "%s: PRERR [addr:%x cmd:%x]\n", SCC_PATA_NAME, maea0, maec0);
out_be32((void __iomem *)intsts_port, INTSTS_PRERR|INTSTS_BMSINT);
out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS);
continue;
}
if (reg & INTSTS_RERR) {
printk(KERN_WARNING "%s: Response Error\n", SCC_PATA_NAME);
out_be32((void __iomem *)intsts_port, INTSTS_RERR|INTSTS_BMSINT);
out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS);
continue;
}
if (reg & INTSTS_ICERR) {
out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS);
printk(KERN_WARNING "%s: Illegal Configuration\n", SCC_PATA_NAME);
out_be32((void __iomem *)intsts_port, INTSTS_ICERR|INTSTS_BMSINT);
continue;
}
if (reg & INTSTS_BMSINT) {
printk(KERN_WARNING "%s: Internal Bus Error\n", SCC_PATA_NAME);
out_be32((void __iomem *)intsts_port, INTSTS_BMSINT);
ide_do_reset(drive);
continue;
}
if (reg & INTSTS_BMHE) {
out_be32((void __iomem *)intsts_port, INTSTS_BMHE);
continue;
}
if (reg & INTSTS_ACTEINT) {
out_be32((void __iomem *)intsts_port, INTSTS_ACTEINT);
continue;
}
if (reg & INTSTS_IOIRQS) {
out_be32((void __iomem *)intsts_port, INTSTS_IOIRQS);
continue;
}
break;
}
dma_stat = __scc_dma_end(drive);
if (data_loss)
dma_stat |= 2; /* emulate DMA error (to retry command) */
return dma_stat;
}
/* returns 1 if dma irq issued, 0 otherwise */
static int scc_dma_test_irq(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u32 int_stat = in_be32((void __iomem *)hwif->dma_base + 0x014);
/* SCC errata A252,A308 workaround: Step4 */
if ((in_be32((void __iomem *)hwif->io_ports.ctl_addr)
& ATA_ERR) &&
(int_stat & INTSTS_INTRQ))
return 1;
/* SCC errata A308 workaround: Step5 (polling IOIRQS) */
if (int_stat & INTSTS_IOIRQS)
return 1;
return 0;
}
static u8 scc_udma_filter(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
u8 mask = hwif->ultra_mask;
/* errata A308 workaround: limit non ide_disk drive to UDMA4 */
if ((drive->media != ide_disk) && (mask & 0xE0)) {
printk(KERN_INFO "%s: limit %s to UDMA4\n",
SCC_PATA_NAME, drive->name);
mask = ATA_UDMA4;
}
return mask;
}
/**
* setup_mmio_scc - map CTRL/BMID region
* @dev: PCI device we are configuring
* @name: device name
*
*/
static int setup_mmio_scc (struct pci_dev *dev, const char *name)
{
void __iomem *ctl_addr;
void __iomem *dma_addr;
int i, ret;
for (i = 0; i < MAX_HWIFS; i++) {
if (scc_ports[i].ctl == 0)
break;
}
if (i >= MAX_HWIFS)
return -ENOMEM;
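/* Reserve BARs 0 and 1 ((1 << 2) - 1 == 0x3): BAR0 is mapped below as the
 * control block (ctl_addr), BAR1 as the bus-master DMA block (dma_addr).
 */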
ret = pci_request_selected_regions(dev, (1 << 2) - 1, name);
if (ret < 0) {
printk(KERN_ERR "%s: can't reserve resources\n", name);
return ret;
}
ctl_addr = pci_ioremap_bar(dev, 0);
if (!ctl_addr)
goto fail_0;
dma_addr = pci_ioremap_bar(dev, 1);
if (!dma_addr)
goto fail_1;
pci_set_master(dev);
scc_ports[i].ctl = (unsigned long)ctl_addr;
scc_ports[i].dma = (unsigned long)dma_addr;
pci_set_drvdata(dev, (void *) &scc_ports[i]);
return 1;
fail_1:
iounmap(ctl_addr);
fail_0:
return -ENOMEM;
}
static int scc_ide_setup_pci_device(struct pci_dev *dev,
const struct ide_port_info *d)
{
struct scc_ports *ports = pci_get_drvdata(dev);
struct ide_host *host;
struct ide_hw hw, *hws[] = { &hw };
int i, rc;
memset(&hw, 0, sizeof(hw));
for (i = 0; i <= 8; i++)
hw.io_ports_array[i] = ports->dma + 0x20 + i * 4;
hw.irq = dev->irq;
hw.dev = &dev->dev;
rc = ide_host_add(d, hws, 1, &host);
if (rc)
return rc;
ports->host = host;
return 0;
}
/**
* init_setup_scc - set up an SCC PATA Controller
* @dev: PCI device
* @d: IDE port info
*
* Perform the initial set up for this device.
*/
static int init_setup_scc(struct pci_dev *dev, const struct ide_port_info *d)
{
unsigned long ctl_base;
unsigned long dma_base;
unsigned long cckctrl_port;
unsigned long intmask_port;
unsigned long mode_port;
unsigned long ecmode_port;
u32 reg = 0;
struct scc_ports *ports;
int rc;
rc = pci_enable_device(dev);
if (rc)
goto end;
rc = setup_mmio_scc(dev, d->name);
if (rc < 0)
goto end;
ports = pci_get_drvdata(dev);
ctl_base = ports->ctl;
dma_base = ports->dma;
cckctrl_port = ctl_base + 0xff0;
intmask_port = dma_base + 0x010;
mode_port = ctl_base + 0x024;
ecmode_port = ctl_base + 0xf00;
/* controller initialization */
reg = 0;
out_be32((void*)cckctrl_port, reg);
reg |= CCKCTRL_ATACLKOEN;
out_be32((void*)cckctrl_port, reg);
reg |= CCKCTRL_LCLKEN | CCKCTRL_OCLKEN;
out_be32((void*)cckctrl_port, reg);
reg |= CCKCTRL_CRST;
out_be32((void*)cckctrl_port, reg);
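/* Poll until CCKCTRL_CRST reads back as set; only then raise
 * CCKCTRL_ATARESET and program the ECMODE/MODE/INTMASK registers.
 */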
for (;;) {
reg = in_be32((void*)cckctrl_port);
if (reg & CCKCTRL_CRST)
break;
udelay(5000);
}
reg |= CCKCTRL_ATARESET;
out_be32((void*)cckctrl_port, reg);
out_be32((void*)ecmode_port, ECMODE_VALUE);
out_be32((void*)mode_port, MODE_JCUSFEN);
out_be32((void*)intmask_port, INTMASK_MSK);
rc = scc_ide_setup_pci_device(dev, d);
end:
return rc;
}
static void scc_tf_load(ide_drive_t *drive, struct ide_taskfile *tf, u8 valid)
{
struct ide_io_ports *io_ports = &drive->hwif->io_ports;
if (valid & IDE_VALID_FEATURE)
scc_ide_outb(tf->feature, io_ports->feature_addr);
if (valid & IDE_VALID_NSECT)
scc_ide_outb(tf->nsect, io_ports->nsect_addr);
if (valid & IDE_VALID_LBAL)
scc_ide_outb(tf->lbal, io_ports->lbal_addr);
if (valid & IDE_VALID_LBAM)
scc_ide_outb(tf->lbam, io_ports->lbam_addr);
if (valid & IDE_VALID_LBAH)
scc_ide_outb(tf->lbah, io_ports->lbah_addr);
if (valid & IDE_VALID_DEVICE)
scc_ide_outb(tf->device, io_ports->device_addr);
}
static void scc_tf_read(ide_drive_t *drive, struct ide_taskfile *tf, u8 valid)
{
struct ide_io_ports *io_ports = &drive->hwif->io_ports;
if (valid & IDE_VALID_ERROR)
tf->error = scc_ide_inb(io_ports->feature_addr);
if (valid & IDE_VALID_NSECT)
tf->nsect = scc_ide_inb(io_ports->nsect_addr);
if (valid & IDE_VALID_LBAL)
tf->lbal = scc_ide_inb(io_ports->lbal_addr);
if (valid & IDE_VALID_LBAM)
tf->lbam = scc_ide_inb(io_ports->lbam_addr);
if (valid & IDE_VALID_LBAH)
tf->lbah = scc_ide_inb(io_ports->lbah_addr);
if (valid & IDE_VALID_DEVICE)
tf->device = scc_ide_inb(io_ports->device_addr);
}
static void scc_input_data(ide_drive_t *drive, struct ide_cmd *cmd,
void *buf, unsigned int len)
{
unsigned long data_addr = drive->hwif->io_ports.data_addr;
len++;
if (drive->io_32bit) {
scc_ide_insl(data_addr, buf, len / 4);
if ((len & 3) >= 2)
scc_ide_insw(data_addr, (u8 *)buf + (len & ~3), 1);
} else
scc_ide_insw(data_addr, buf, len / 2);
}
static void scc_output_data(ide_drive_t *drive, struct ide_cmd *cmd,
void *buf, unsigned int len)
{
unsigned long data_addr = drive->hwif->io_ports.data_addr;
len++;
if (drive->io_32bit) {
scc_ide_outsl(data_addr, buf, len / 4);
if ((len & 3) >= 2)
scc_ide_outsw(data_addr, (u8 *)buf + (len & ~3), 1);
} else
scc_ide_outsw(data_addr, buf, len / 2);
}
/**
* init_mmio_iops_scc - set up the iops for MMIO
* @hwif: interface to set up
*
*/
static void init_mmio_iops_scc(ide_hwif_t *hwif)
{
struct pci_dev *dev = to_pci_dev(hwif->dev);
struct scc_ports *ports = pci_get_drvdata(dev);
unsigned long dma_base = ports->dma;
ide_set_hwifdata(hwif, ports);
hwif->dma_base = dma_base;
hwif->config_data = ports->ctl;
}
/**
* init_iops_scc - set up iops
* @hwif: interface to set up
*
* Do the basic setup for the SCC hardware interface
* and then do the MMIO setup.
*/
static void init_iops_scc(ide_hwif_t *hwif)
{
struct pci_dev *dev = to_pci_dev(hwif->dev);
hwif->hwif_data = NULL;
if (pci_get_drvdata(dev) == NULL)
return;
init_mmio_iops_scc(hwif);
}
static int scc_init_dma(ide_hwif_t *hwif, const struct ide_port_info *d)
{
return ide_allocate_dma_engine(hwif);
}
static u8 scc_cable_detect(ide_hwif_t *hwif)
{
return ATA_CBL_PATA80;
}
/**
* init_hwif_scc - set up hwif
* @hwif: interface to set up
*
* We do the basic set up of the interface structure. The SCC
* requires several custom handlers so we override the default
* ide DMA handlers appropriately.
*/
static void init_hwif_scc(ide_hwif_t *hwif)
{
/* PTERADD */
out_be32((void __iomem *)(hwif->dma_base + 0x018), hwif->dmatable_dma);
if (in_be32((void __iomem *)(hwif->config_data + 0xff0)) & CCKCTRL_ATACLKOEN)
hwif->ultra_mask = ATA_UDMA6; /* 133MHz */
else
hwif->ultra_mask = ATA_UDMA5; /* 100MHz */
}
static const struct ide_tp_ops scc_tp_ops = {
.exec_command = scc_exec_command,
.read_status = scc_read_status,
.read_altstatus = scc_read_altstatus,
.write_devctl = scc_write_devctl,
.dev_select = ide_dev_select,
.tf_load = scc_tf_load,
.tf_read = scc_tf_read,
.input_data = scc_input_data,
.output_data = scc_output_data,
};
static const struct ide_port_ops scc_port_ops = {
.set_pio_mode = scc_set_pio_mode,
.set_dma_mode = scc_set_dma_mode,
.udma_filter = scc_udma_filter,
.cable_detect = scc_cable_detect,
};
static const struct ide_dma_ops scc_dma_ops = {
.dma_host_set = scc_dma_host_set,
.dma_setup = scc_dma_setup,
.dma_start = scc_dma_start,
.dma_end = scc_dma_end,
.dma_test_irq = scc_dma_test_irq,
.dma_lost_irq = ide_dma_lost_irq,
.dma_timer_expiry = ide_dma_sff_timer_expiry,
.dma_sff_read_status = scc_dma_sff_read_status,
};
static const struct ide_port_info scc_chipset = {
.name = "sccIDE",
.init_iops = init_iops_scc,
.init_dma = scc_init_dma,
.init_hwif = init_hwif_scc,
.tp_ops = &scc_tp_ops,
.port_ops = &scc_port_ops,
.dma_ops = &scc_dma_ops,
.host_flags = IDE_HFLAG_SINGLE,
.irq_flags = IRQF_SHARED,
.pio_mask = ATA_PIO4,
.chipset = ide_pci,
};
/**
* scc_init_one - pci layer discovery entry
* @dev: PCI device
* @id: ident table entry
*
* Called by the PCI code when it finds an SCC PATA controller.
* We then use the IDE PCI generic helper to do most of the work.
*/
static int scc_init_one(struct pci_dev *dev, const struct pci_device_id *id)
{
return init_setup_scc(dev, &scc_chipset);
}
/**
* scc_remove - pci layer remove entry
* @dev: PCI device
*
* Called by the PCI code when it removes an SCC PATA controller.
*/
static void scc_remove(struct pci_dev *dev)
{
struct scc_ports *ports = pci_get_drvdata(dev);
struct ide_host *host = ports->host;
ide_host_remove(host);
iounmap((void*)ports->dma);
iounmap((void*)ports->ctl);
pci_release_selected_regions(dev, (1 << 2) - 1);
memset(ports, 0, sizeof(*ports));
}
static const struct pci_device_id scc_pci_tbl[] = {
{ PCI_VDEVICE(TOSHIBA_2, PCI_DEVICE_ID_TOSHIBA_SCC_ATA), 0 },
{ 0, },
};
MODULE_DEVICE_TABLE(pci, scc_pci_tbl);
static struct pci_driver scc_pci_driver = {
.name = "SCC IDE",
.id_table = scc_pci_tbl,
.probe = scc_init_one,
.remove = scc_remove,
};
static int __init scc_ide_init(void)
{
return ide_pci_register_driver(&scc_pci_driver);
}
static void __exit scc_ide_exit(void)
{
pci_unregister_driver(&scc_pci_driver);
}
module_init(scc_ide_init);
module_exit(scc_ide_exit);
MODULE_DESCRIPTION("PCI driver module for Toshiba SCC IDE");
MODULE_LICENSE("GPL");


@ -389,7 +389,12 @@ int mma9551_read_config_words(struct i2c_client *client, u8 app_id,
{
int ret, i;
int len_words = len / sizeof(u16);
__be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS];
__be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS / 2];
if (len_words > ARRAY_SIZE(be_buf)) {
dev_err(&client->dev, "Invalid buffer size %d\n", len);
return -EINVAL;
}
ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_CONFIG,
reg, NULL, 0, (u8 *) be_buf, len);
@ -424,7 +429,12 @@ int mma9551_read_status_words(struct i2c_client *client, u8 app_id,
{
int ret, i;
int len_words = len / sizeof(u16);
__be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS];
__be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS / 2];
if (len_words > ARRAY_SIZE(be_buf)) {
dev_err(&client->dev, "Invalid buffer size %d\n", len);
return -EINVAL;
}
ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_STATUS,
reg, NULL, 0, (u8 *) be_buf, len);
@ -459,7 +469,12 @@ int mma9551_write_config_words(struct i2c_client *client, u8 app_id,
{
int i;
int len_words = len / sizeof(u16);
__be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS];
__be16 be_buf[(MMA9551_MAX_MAILBOX_DATA_REGS - 1) / 2];
if (len_words > ARRAY_SIZE(be_buf)) {
dev_err(&client->dev, "Invalid buffer size %d\n", len);
return -EINVAL;
}
for (i = 0; i < len_words; i++)
be_buf[i] = cpu_to_be16(buf[i]);
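The three mma9551 hunks above add the same guard: the mailbox payload goes through a fixed on-stack __be16 array, so the requested word count has to be checked against the array size before anything is transferred. Below is a minimal standalone sketch of that pattern; the MAX_WORDS bound and the check_and_copy() helper are illustrative names, not part of the driver.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
#define MAX_WORDS 16                    /* illustrative bound, not the driver's value */

/* Reject requests that would overrun the fixed buffer, then copy. */
static int check_and_copy(const uint16_t *src, size_t len_bytes)
{
        uint16_t buf[MAX_WORDS];
        size_t len_words = len_bytes / sizeof(uint16_t);

        if (len_words > ARRAY_SIZE(buf)) {
                fprintf(stderr, "invalid buffer size %zu\n", len_bytes);
                return -1;              /* the driver returns -EINVAL here */
        }
        memcpy(buf, src, len_words * sizeof(uint16_t));
        return 0;
}

int main(void)
{
        uint16_t words[32] = { 0 };

        printf("small request: %d\n", check_and_copy(words, 8));   /* accepted */
        printf("oversized:     %d\n", check_and_copy(words, 64));  /* rejected */
        return 0;
}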


@ -54,6 +54,7 @@
#define MMA9553_MASK_CONF_STEPCOALESCE GENMASK(7, 0)
#define MMA9553_REG_CONF_ACTTHD 0x0E
#define MMA9553_MAX_ACTTHD GENMASK(15, 0)
/* Pedometer status registers (R-only) */
#define MMA9553_REG_STATUS 0x00
@ -316,22 +317,19 @@ static int mma9553_set_config(struct mma9553_data *data, u16 reg,
static int mma9553_read_activity_stepcnt(struct mma9553_data *data,
u8 *activity, u16 *stepcnt)
{
u32 status_stepcnt;
u16 status;
u16 buf[2];
int ret;
ret = mma9551_read_status_words(data->client, MMA9551_APPID_PEDOMETER,
MMA9553_REG_STATUS, sizeof(u32),
(u16 *) &status_stepcnt);
MMA9553_REG_STATUS, sizeof(u32), buf);
if (ret < 0) {
dev_err(&data->client->dev,
"error reading status and stepcnt\n");
return ret;
}
status = status_stepcnt & MMA9553_MASK_CONF_WORD;
*activity = mma9553_get_bits(status, MMA9553_MASK_STATUS_ACTIVITY);
*stepcnt = status_stepcnt >> 16;
*activity = mma9553_get_bits(buf[0], MMA9553_MASK_STATUS_ACTIVITY);
*stepcnt = buf[1];
return 0;
}
@ -872,6 +870,9 @@ static int mma9553_write_event_value(struct iio_dev *indio_dev,
case IIO_EV_INFO_PERIOD:
switch (chan->type) {
case IIO_ACTIVITY:
if (val < 0 || val > MMA9553_ACTIVITY_THD_TO_SEC(
MMA9553_MAX_ACTTHD))
return -EINVAL;
mutex_lock(&data->mutex);
ret = mma9553_set_config(data, MMA9553_REG_CONF_ACTTHD,
&data->conf.actthd,
@ -971,7 +972,8 @@ static const struct iio_chan_spec_ext_info mma9553_ext_info[] = {
.modified = 1, \
.channel2 = _chan2, \
.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED), \
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_CALIBHEIGHT), \
.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_CALIBHEIGHT) | \
BIT(IIO_CHAN_INFO_ENABLE), \
.event_spec = mma9553_activity_events, \
.num_event_specs = ARRAY_SIZE(mma9553_activity_events), \
.ext_info = mma9553_ext_info, \


@ -546,6 +546,7 @@ int st_accel_common_probe(struct iio_dev *indio_dev)
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &accel_info;
mutex_init(&adata->tb.buf_lock);
st_sensors_power_enable(indio_dev);


@ -53,39 +53,42 @@ static const struct iio_chan_spec const axp288_adc_channels[] = {
.channel = 0,
.address = AXP288_TS_ADC_H,
.datasheet_name = "TS_PIN",
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
}, {
.indexed = 1,
.type = IIO_TEMP,
.channel = 1,
.address = AXP288_PMIC_ADC_H,
.datasheet_name = "PMIC_TEMP",
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
}, {
.indexed = 1,
.type = IIO_TEMP,
.channel = 2,
.address = AXP288_GP_ADC_H,
.datasheet_name = "GPADC",
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
}, {
.indexed = 1,
.type = IIO_CURRENT,
.channel = 3,
.address = AXP20X_BATT_CHRG_I_H,
.datasheet_name = "BATT_CHG_I",
.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
}, {
.indexed = 1,
.type = IIO_CURRENT,
.channel = 4,
.address = AXP20X_BATT_DISCHRG_I_H,
.datasheet_name = "BATT_DISCHRG_I",
.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
}, {
.indexed = 1,
.type = IIO_VOLTAGE,
.channel = 5,
.address = AXP20X_BATT_V_H,
.datasheet_name = "BATT_V",
.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
},
};
@ -151,9 +154,6 @@ static int axp288_adc_read_raw(struct iio_dev *indio_dev,
chan->address))
dev_err(&indio_dev->dev, "TS pin restore\n");
break;
case IIO_CHAN_INFO_PROCESSED:
ret = axp288_adc_read_channel(val, chan->address, info->regmap);
break;
default:
ret = -EINVAL;
}


@ -35,8 +35,9 @@
#define CC10001_ADC_EOC_SET BIT(0)
#define CC10001_ADC_CHSEL_SAMPLED 0x0c
#define CC10001_ADC_POWER_UP 0x10
#define CC10001_ADC_POWER_UP_SET BIT(0)
#define CC10001_ADC_POWER_DOWN 0x10
#define CC10001_ADC_POWER_DOWN_SET BIT(0)
#define CC10001_ADC_DEBUG 0x14
#define CC10001_ADC_DATA_COUNT 0x20
@ -62,7 +63,6 @@ struct cc10001_adc_device {
u16 *buf;
struct mutex lock;
unsigned long channel_map;
unsigned int start_delay_ns;
unsigned int eoc_delay_ns;
};
@ -79,6 +79,18 @@ static inline u32 cc10001_adc_read_reg(struct cc10001_adc_device *adc_dev,
return readl(adc_dev->reg_base + reg);
}
static void cc10001_adc_power_up(struct cc10001_adc_device *adc_dev)
{
cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_DOWN, 0);
ndelay(adc_dev->start_delay_ns);
}
static void cc10001_adc_power_down(struct cc10001_adc_device *adc_dev)
{
cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_DOWN,
CC10001_ADC_POWER_DOWN_SET);
}
static void cc10001_adc_start(struct cc10001_adc_device *adc_dev,
unsigned int channel)
{
@ -88,6 +100,7 @@ static void cc10001_adc_start(struct cc10001_adc_device *adc_dev,
val = (channel & CC10001_ADC_CH_MASK) | CC10001_ADC_MODE_SINGLE_CONV;
cc10001_adc_write_reg(adc_dev, CC10001_ADC_CONFIG, val);
udelay(1);
val = cc10001_adc_read_reg(adc_dev, CC10001_ADC_CONFIG);
val = val | CC10001_ADC_START_CONV;
cc10001_adc_write_reg(adc_dev, CC10001_ADC_CONFIG, val);
@ -129,6 +142,7 @@ static irqreturn_t cc10001_adc_trigger_h(int irq, void *p)
struct iio_dev *indio_dev;
unsigned int delay_ns;
unsigned int channel;
unsigned int scan_idx;
bool sample_invalid;
u16 *data;
int i;
@ -139,20 +153,17 @@ static irqreturn_t cc10001_adc_trigger_h(int irq, void *p)
mutex_lock(&adc_dev->lock);
cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP,
CC10001_ADC_POWER_UP_SET);
/* Wait for 8 (6+2) clock cycles before activating START */
ndelay(adc_dev->start_delay_ns);
cc10001_adc_power_up(adc_dev);
/* Calculate delay step for eoc and sampled data */
delay_ns = adc_dev->eoc_delay_ns / CC10001_MAX_POLL_COUNT;
i = 0;
sample_invalid = false;
for_each_set_bit(channel, indio_dev->active_scan_mask,
for_each_set_bit(scan_idx, indio_dev->active_scan_mask,
indio_dev->masklength) {
channel = indio_dev->channels[scan_idx].channel;
cc10001_adc_start(adc_dev, channel);
data[i] = cc10001_adc_poll_done(indio_dev, channel, delay_ns);
@ -166,7 +177,7 @@ static irqreturn_t cc10001_adc_trigger_h(int irq, void *p)
}
done:
cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 0);
cc10001_adc_power_down(adc_dev);
mutex_unlock(&adc_dev->lock);
@ -185,11 +196,7 @@ static u16 cc10001_adc_read_raw_voltage(struct iio_dev *indio_dev,
unsigned int delay_ns;
u16 val;
cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP,
CC10001_ADC_POWER_UP_SET);
/* Wait for 8 (6+2) clock cycles before activating START */
ndelay(adc_dev->start_delay_ns);
cc10001_adc_power_up(adc_dev);
/* Calculate delay step for eoc and sampled data */
delay_ns = adc_dev->eoc_delay_ns / CC10001_MAX_POLL_COUNT;
@ -198,7 +205,7 @@ static u16 cc10001_adc_read_raw_voltage(struct iio_dev *indio_dev,
val = cc10001_adc_poll_done(indio_dev, chan->channel, delay_ns);
cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 0);
cc10001_adc_power_down(adc_dev);
return val;
}
@ -224,7 +231,7 @@ static int cc10001_adc_read_raw(struct iio_dev *indio_dev,
case IIO_CHAN_INFO_SCALE:
ret = regulator_get_voltage(adc_dev->reg);
if (ret)
if (ret < 0)
return ret;
*val = ret / 1000;
@ -255,22 +262,22 @@ static const struct iio_info cc10001_adc_info = {
.update_scan_mode = &cc10001_update_scan_mode,
};
static int cc10001_adc_channel_init(struct iio_dev *indio_dev)
static int cc10001_adc_channel_init(struct iio_dev *indio_dev,
unsigned long channel_map)
{
struct cc10001_adc_device *adc_dev = iio_priv(indio_dev);
struct iio_chan_spec *chan_array, *timestamp;
unsigned int bit, idx = 0;
indio_dev->num_channels = bitmap_weight(&adc_dev->channel_map,
CC10001_ADC_NUM_CHANNELS);
indio_dev->num_channels = bitmap_weight(&channel_map,
CC10001_ADC_NUM_CHANNELS) + 1;
chan_array = devm_kcalloc(&indio_dev->dev, indio_dev->num_channels + 1,
chan_array = devm_kcalloc(&indio_dev->dev, indio_dev->num_channels,
sizeof(struct iio_chan_spec),
GFP_KERNEL);
if (!chan_array)
return -ENOMEM;
for_each_set_bit(bit, &adc_dev->channel_map, CC10001_ADC_NUM_CHANNELS) {
for_each_set_bit(bit, &channel_map, CC10001_ADC_NUM_CHANNELS) {
struct iio_chan_spec *chan = &chan_array[idx];
chan->type = IIO_VOLTAGE;
@ -305,6 +312,7 @@ static int cc10001_adc_probe(struct platform_device *pdev)
unsigned long adc_clk_rate;
struct resource *res;
struct iio_dev *indio_dev;
unsigned long channel_map;
int ret;
indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*adc_dev));
@ -313,9 +321,9 @@ static int cc10001_adc_probe(struct platform_device *pdev)
adc_dev = iio_priv(indio_dev);
adc_dev->channel_map = GENMASK(CC10001_ADC_NUM_CHANNELS - 1, 0);
channel_map = GENMASK(CC10001_ADC_NUM_CHANNELS - 1, 0);
if (!of_property_read_u32(node, "adc-reserved-channels", &ret))
adc_dev->channel_map &= ~ret;
channel_map &= ~ret;
adc_dev->reg = devm_regulator_get(&pdev->dev, "vref");
if (IS_ERR(adc_dev->reg))
@ -361,7 +369,7 @@ static int cc10001_adc_probe(struct platform_device *pdev)
adc_dev->start_delay_ns = adc_dev->eoc_delay_ns * CC10001_WAIT_CYCLES;
/* Setup the ADC channels available on the device */
ret = cc10001_adc_channel_init(indio_dev);
ret = cc10001_adc_channel_init(indio_dev, channel_map);
if (ret < 0)
goto err_disable_clk;
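Among the cc10001 changes above, the trigger handler stops treating a bit position in active_scan_mask as a hardware channel number: once reserved channels are dropped, scan index and channel number diverge, so the code now reads the number from indio_dev->channels[scan_idx].channel. A standalone sketch of that distinction follows; the hw_channel table and mask value are made up for illustration.

#include <stdio.h>

/* Channel descriptors left after dropping reserved hardware channels 2 and 5:
 * the scan index (array position) no longer equals the hardware channel number.
 */
static const int hw_channel[] = { 0, 1, 3, 4, 6, 7 };

int main(void)
{
        unsigned long active_scan_mask = 0x2A;  /* scan indices 1, 3 and 5 enabled */

        for (unsigned int scan_idx = 0;
             scan_idx < sizeof(hw_channel) / sizeof(hw_channel[0]); scan_idx++) {
                if (!(active_scan_mask & (1UL << scan_idx)))
                        continue;
                /* The old code would have started a conversion on scan_idx;
                 * the fix starts it on the descriptor's channel number. */
                printf("scan index %u -> hardware channel %d\n",
                       scan_idx, hw_channel[scan_idx]);
        }
        return 0;
}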


@ -60,12 +60,12 @@ struct mcp320x {
struct spi_message msg;
struct spi_transfer transfer[2];
u8 tx_buf;
u8 rx_buf[2];
struct regulator *reg;
struct mutex lock;
const struct mcp320x_chip_info *chip_info;
u8 tx_buf ____cacheline_aligned;
u8 rx_buf[2];
};
static int mcp320x_channel_to_tx_data(int device_index,
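The mcp320x struct change above moves the SPI transfer buffers to the end of the structure and tags the first of them ____cacheline_aligned, so the memory handed to the SPI core never shares a cache line with the other fields. A rough standalone illustration of that layout idea, assuming a 64-byte cache line (the kernel macro resolves to the real line size for the architecture):

#include <stdio.h>
#include <stddef.h>

#define CACHELINE 64    /* assumed cache-line size for this illustration */

struct fake_dev {
        int   cfg;
        void *lock;
        /* DMA-visible buffers start on their own cache line, and nothing
         * else in the struct follows them. */
        unsigned char tx_buf __attribute__((aligned(CACHELINE)));
        unsigned char rx_buf[2];
};

int main(void)
{
        printf("offsetof(tx_buf) = %zu (a multiple of %d)\n",
               offsetof(struct fake_dev, tx_buf), CACHELINE);
        printf("sizeof(struct)   = %zu\n", sizeof(struct fake_dev));
        return 0;
}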


@ -18,6 +18,7 @@
#include <linux/iio/iio.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/math64.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
@ -471,11 +472,11 @@ static s32 vadc_calibrate(struct vadc_priv *vadc,
const struct vadc_channel_prop *prop, u16 adc_code)
{
const struct vadc_prescale_ratio *prescale;
s32 voltage;
s64 voltage;
voltage = adc_code - vadc->graph[prop->calibration].gnd;
voltage *= vadc->graph[prop->calibration].dx;
voltage = voltage / vadc->graph[prop->calibration].dy;
voltage = div64_s64(voltage, vadc->graph[prop->calibration].dy);
if (prop->calibration == VADC_CALIB_ABSOLUTE)
voltage += vadc->graph[prop->calibration].dx;
@ -487,7 +488,7 @@ static s32 vadc_calibrate(struct vadc_priv *vadc,
voltage = voltage * prescale->den;
return voltage / prescale->num;
return div64_s64(voltage, prescale->num);
}
static int vadc_decimation_from_dt(u32 value)
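The vadc_calibrate() change above widens the intermediate voltage to 64 bits and divides with div64_s64(), since (adc_code - gnd) * dx can overflow an s32 before the division scales it back down. A worked example with made-up but plausible magnitudes:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
        /* Illustrative calibration numbers, not taken from the driver. */
        int64_t adc_code = 50000, gnd = 1000;
        int64_t dx = 625000, dy = 40000;

        int64_t scaled = (adc_code - gnd) * dx;  /* 49000 * 625000 */

        printf("intermediate = %" PRId64 " (INT32_MAX is %" PRId32 ")\n",
               scaled, INT32_MAX);
        printf("after / dy   = %" PRId64 "\n", scaled / dy);
        return 0;
}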


@ -856,6 +856,7 @@ static int xadc_read_raw(struct iio_dev *indio_dev,
switch (chan->address) {
case XADC_REG_VCCINT:
case XADC_REG_VCCAUX:
case XADC_REG_VREFP:
case XADC_REG_VCCBRAM:
case XADC_REG_VCCPINT:
case XADC_REG_VCCPAUX:
@ -996,7 +997,7 @@ static const struct iio_event_spec xadc_voltage_events[] = {
.num_event_specs = (_alarm) ? ARRAY_SIZE(xadc_voltage_events) : 0, \
.scan_index = (_scan_index), \
.scan_type = { \
.sign = 'u', \
.sign = ((_addr) == XADC_REG_VREFN) ? 's' : 'u', \
.realbits = 12, \
.storagebits = 16, \
.shift = 4, \
@ -1008,7 +1009,7 @@ static const struct iio_event_spec xadc_voltage_events[] = {
static const struct iio_chan_spec xadc_channels[] = {
XADC_CHAN_TEMP(0, 8, XADC_REG_TEMP),
XADC_CHAN_VOLTAGE(0, 9, XADC_REG_VCCINT, "vccint", true),
XADC_CHAN_VOLTAGE(1, 10, XADC_REG_VCCINT, "vccaux", true),
XADC_CHAN_VOLTAGE(1, 10, XADC_REG_VCCAUX, "vccaux", true),
XADC_CHAN_VOLTAGE(2, 14, XADC_REG_VCCBRAM, "vccbram", true),
XADC_CHAN_VOLTAGE(3, 5, XADC_REG_VCCPINT, "vccpint", true),
XADC_CHAN_VOLTAGE(4, 6, XADC_REG_VCCPAUX, "vccpaux", true),