USB / Thunderbolt patches for 5.11-rc1


Merge tag 'usb-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB / Thunderbolt updates from Greg KH:
 "Here is the big USB and thunderbolt pull request for 5.11-rc1.

  Nothing major in here, just the grind of constant development to
  support new hardware and fix old issues:

   - thunderbolt updates for new USB4 hardware

   - cdns3 major driver updates

   - lots of typec updates and additions as more hardware is available

   - usb serial driver updates and fixes

   - other tiny USB driver updates

  All have been in linux-next with no reported issues"

* tag 'usb-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (172 commits)
  usb: phy: convert comma to semicolon
  usb: ucsi: convert comma to semicolon
  usb: typec: tcpm: convert comma to semicolon
  usb: typec: tcpm: Update vbus_vsafe0v on init
  usb: typec: tcpci: Enable bleed discharge when auto discharge is enabled
  usb: typec: Add class for plug alt mode device
  USB: typec: tcpci: Add Bleed discharge to POWER_CONTROL definition
  USB: typec: tcpm: Add a 30ms room for tPSSourceOn in PR_SWAP
  USB: typec: tcpm: Fix PR_SWAP error handling
  USB: typec: tcpm: Hard Reset after not receiving a Request
  USB: gadget: f_fs: remove likely/unlikely
  usb: gadget: f_fs: Re-use SS descriptors for SuperSpeedPlus
  USB: gadget: f_midi: setup SuperSpeed Plus descriptors
  USB: gadget: f_acm: add support for SuperSpeed Plus
  USB: gadget: f_rndis: fix bitrate for SuperSpeed and above
  usb: typec: intel_pmc_mux: Configure cable generation value for USB4
  MAINTAINERS: Add myself as a reviewer for CADENCE USB3 DRD IP DRIVER
  usb: chipidea: ci_hdrc_imx: Use of_device_get_match_data()
  usb: chipidea: usbmisc_imx: Use of_device_get_match_data()
  usb: cdns3: fix NULL pointer dereference on no platform data
  ...
Linus Torvalds 2020-12-15 13:54:56 -08:00
commit 0cee54c890
158 changed files with 4301 additions and 4711 deletions

View File

@ -1,3 +1,31 @@
What: /sys/bus/thunderbolt/devices/<xdomain>/rx_speed
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the XDomain RX speed per lane.
All RX lanes run at the same speed.
What: /sys/bus/thunderbolt/devices/<xdomain>/rx_lanes
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the number of RX lanes the XDomain
is using simultaneously through its upstream port.
What: /sys/bus/thunderbolt/devices/<xdomain>/tx_speed
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the XDomain TX speed per lane.
All TX lanes run at the same speed.
What: /sys/bus/thunderbolt/devices/<xdomain>/tx_lanes
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the number of TX lanes the XDomain
is using simultaneously through its upstream port.
What: /sys/bus/thunderbolt/devices/.../domainX/boot_acl
Date: Jun 2018
KernelVersion: 4.17
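
For illustration only (not part of the patch), a minimal userspace sketch in C that dumps the four new XDomain attributes; the "0-1" device name is a made-up placeholder and the attribute values are printed exactly as the kernel reports them:

#include <stdio.h>

static void show_attr(const char *xdomain, const char *attr)
{
	char path[256], value[64];
	FILE *f;

	/* e.g. /sys/bus/thunderbolt/devices/0-1/rx_speed (path is hypothetical) */
	snprintf(path, sizeof(path),
		 "/sys/bus/thunderbolt/devices/%s/%s", xdomain, attr);
	f = fopen(path, "r");
	if (!f || !fgets(value, sizeof(value), f))
		printf("%s: <unavailable>\n", attr);
	else
		printf("%s: %s", attr, value);	/* value keeps its newline */
	if (f)
		fclose(f);
}

int main(void)
{
	/* "0-1" is a placeholder XDomain device name */
	const char *attrs[] = { "rx_speed", "rx_lanes", "tx_speed", "tx_lanes" };
	size_t i;

	for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
		show_attr("0-1", attrs[i]);
	return 0;
}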

View File

@ -139,6 +139,49 @@ Description:
Shows if the partner supports USB Power Delivery communication:
Valid values: yes, no
What: /sys/class/typec/<port>-partner/number_of_alternate_modes
Date: November 2020
Contact: Prashant Malani <pmalani@chromium.org>
Description:
Shows the number of alternate modes which are advertised by the partner
during Power Delivery discovery. This file remains hidden until a value
greater than or equal to 0 is set by Type C port driver.
What: /sys/class/typec/<port>-partner/type
Date: December 2020
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description: USB Power Delivery Specification defines a set of product types
for the partner devices. This file will show the product type of
the partner if it is known. Dual-role capable partners will have
both UFP and DFP product types defined, but only one that
matches the current role will be active at a time. If the
product type of the partner is not visible to the device driver,
this file will not exist.
When the partner product type is detected, or changed with a role
swap, a uevent that contains PRODUCT_TYPE=<product type> (for example
PRODUCT_TYPE=hub) is also raised.
Valid values:

UFP / device role
======================  ==========================
undefined               -
hub                     PDUSB Hub
peripheral              PDUSB Peripheral
psd                     Power Bank
ama                     Alternate Mode Adapter
======================  ==========================

DFP / host role
======================  ==========================
undefined               -
hub                     PDUSB Hub
host                    PDUSB Host
power_brick             Power Brick
amc                     Alternate Mode Controller
======================  ==========================
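
For illustration only (not part of the patch), a small C sketch that reads the partner product type and alternate-mode count, treating a missing "type" file as "not known" since the attribute only exists when the product type is visible to the driver; "port0" is a placeholder port name:

#include <stdio.h>
#include <string.h>

/* Returns 0 on success, -1 if the attribute is missing or unreadable */
static int read_attr(const char *path, char *buf, int len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	buf[strcspn(buf, "\n")] = '\0';
	fclose(f);
	return 0;
}

int main(void)
{
	char buf[64];

	/* "type" exists only when the product type is known to the driver */
	if (read_attr("/sys/class/typec/port0-partner/type", buf, sizeof(buf)))
		printf("partner product type not known\n");
	else
		printf("partner product type: %s\n", buf);

	if (!read_attr("/sys/class/typec/port0-partner/number_of_alternate_modes",
		       buf, sizeof(buf)))
		printf("alternate modes advertised: %s\n", buf);
	return 0;
}
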
What: /sys/class/typec/<port>-partner/identity/
Date: April 2017
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
@ -151,31 +194,6 @@ Description:
directory exists, it will have an attribute file for every VDO
in Discover Identity command result.
What: /sys/class/typec/<port>-partner/identity/id_header
Date: April 2017
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
ID Header VDO part of Discover Identity command result. The
value will show 0 until Discover Identity command result becomes
available. The value can be polled.
What: /sys/class/typec/<port>-partner/identity/cert_stat
Date: April 2017
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
Cert Stat VDO part of Discover Identity command result. The
value will show 0 until Discover Identity command result becomes
available. The value can be polled.
What: /sys/class/typec/<port>-partner/identity/product
Date: April 2017
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
Product VDO part of Discover Identity command result. The value
will show 0 until Discover Identity command result becomes
available. The value can be polled.
USB Type-C cable devices (eg. /sys/class/typec/port0-cable/)
Note: Electronically Marked Cables will have a device also for one cable plug
@ -187,9 +205,21 @@ described in USB Type-C and USB Power Delivery specifications.
What: /sys/class/typec/<port>-cable/type
Date: April 2017
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
Shows if the cable is active.
Valid values: active, passive
Description: USB Power Delivery Specification defines a set of product types
for the cables. This file will show the product type of the
cable if it is known. If the product type of the cable is not
visible to the device driver, this file will not exist.
When the cable product type is detected, a uevent is also raised
with PRODUCT_TYPE showing the product type of the cable.
Valid values:

======================  ==========================
undefined               -
active                  Active Cable
passive                 Passive Cable
======================  ==========================
What: /sys/class/typec/<port>-cable/plug_type
Date: April 2017
@ -202,17 +232,37 @@ Description:
- type-c
- captive
What: /sys/class/typec/<port>-cable/identity/
What: /sys/class/typec/<port>-<plug>/number_of_alternate_modes
Date: November 2020
Contact: Prashant Malani <pmalani@chromium.org>
Description:
Shows the number of alternate modes which are advertised by the plug
associated with a particular cable during Power Delivery discovery.
This file remains hidden until a value greater than or equal to 0
is set by Type C port driver.
USB Type-C partner/cable Power Delivery Identity objects
NOTE: The following attributes apply to both partner
(e.g. /sys/class/typec/port0-partner/) and
cable (e.g. /sys/class/typec/port0-cable/) devices. Consequently, the example file
paths below are prefixed with "/sys/class/typec/<port>-{partner|cable}/" to
reflect this.
What: /sys/class/typec/<port>-{partner|cable}/identity/
Date: April 2017
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
This directory appears only if the port device driver is capable
of showing the result of Discover Identity USB power delivery
command. That will not always be possible even when USB power
delivery is supported. If the directory exists, it will have an
attribute for every VDO returned by Discover Identity command.
delivery is supported, for example when USB power delivery
communication for the port is mostly handled in firmware. If the
directory exists, it will have an attribute file for every VDO
in Discover Identity command result.
What: /sys/class/typec/<port>-cable/identity/id_header
What: /sys/class/typec/<port>-{partner|cable}/identity/id_header
Date: April 2017
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
@ -220,7 +270,7 @@ Description:
value will show 0 until Discover Identity command result becomes
available. The value can be polled.
What: /sys/class/typec/<port>-cable/identity/cert_stat
What: /sys/class/typec/<port>-{partner|cable}/identity/cert_stat
Date: April 2017
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
@ -228,7 +278,7 @@ Description:
value will show 0 until Discover Identity command result becomes
available. The value can be polled.
What: /sys/class/typec/<port>-cable/identity/product
What: /sys/class/typec/<port>-{partner|cable}/identity/product
Date: April 2017
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
@ -236,6 +286,30 @@ Description:
will show 0 until Discover Identity command result becomes
available. The value can be polled.
What: /sys/class/typec/<port>-{partner|cable}/identity/product_type_vdo1
Date: October 2020
Contact: Prashant Malani <pmalani@chromium.org>
Description:
1st Product Type VDO of Discover Identity command result.
The value will show 0 until Discover Identity command result becomes
available and a valid Product Type VDO is returned.
What: /sys/class/typec/<port>-{partner|cable}/identity/product_type_vdo2
Date: October 2020
Contact: Prashant Malani <pmalani@chromium.org>
Description:
2nd Product Type VDO of Discover Identity command result.
The value will show 0 until Discover Identity command result becomes
available and a valid Product Type VDO is returned.
What: /sys/class/typec/<port>-{partner|cable}/identity/product_type_vdo3
Date: October 2020
Contact: Prashant Malani <pmalani@chromium.org>
Description:
3rd Product Type VDO of Discover Identity command result.
The value will show 0 until Discover Identity command result becomes
available and a valid Product Type VDO is returned.
USB Type-C port alternate mode devices.

View File

@ -5665,6 +5665,7 @@
device);
j = NO_REPORT_LUNS (don't use report luns
command, uas only);
k = NO_SAME (do not use WRITE_SAME, uas only)
l = NOT_LOCKABLE (don't try to lock and
unlock ejectable media, not on uas);
m = MAX_SECTORS_64 (don't transfer more

View File

@ -147,6 +147,25 @@ properties:
required:
- port@0
new-source-frs-typec-current:
description: Initial current capability of the new source when vSafe5V
is applied during PD3.0 Fast Role Swap. "Table 6-14 Fixed Supply PDO - Sink"
of "USB Power Delivery Specification Revision 3.0, Version 1.2" lists the
available power levels and "6.4.1.3.1.6 Fast Role Swap USB Type-C Current"
describes the field in detail. The sink PDO from the current source
(i.e. the transmitter of the FRS signal) reflects the current source's power
requirement during the fast role swap. The current sink (i.e. the receiver of
the FRS signal), a.k.a. the new source, should check whether it can satisfy
the current source's (soon-to-be new sink's) requirement before enabling FRS
signal reception. This property refers to the maximum current capability that
the current sink can satisfy. During FRS, VBUS voltage is at 5V and the
partners are in an implicit contract, hence the power level is only a function
of the current capability.
"1" refers to the default USB power level as described by "Table 6-14 Fixed Supply PDO - Sink".
"2" refers to 1.5A@5V.
"3" refers to 3.0A@5V.
$ref: /schemas/types.yaml#/definitions/uint32
enum: [1, 2, 3]
required:
- compatible

View File

@ -0,0 +1,70 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/brcm,usb-pinmap.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Broadcom USB pin map Controller Device Tree Bindings
maintainers:
- Al Cooper <alcooperx@gmail.com>
properties:
compatible:
items:
- const: brcm,usb-pinmap
reg:
maxItems: 1
interrupts:
maxItems: 1
description: Interrupt for signals mirrored to out-gpios.
in-gpios:
description: Array of one or two GPIO pins used for input signals.
brcm,in-functions:
$ref: /schemas/types.yaml#/definitions/string-array
description: Array of input signal names, one per gpio in in-gpios.
brcm,in-masks:
$ref: /schemas/types.yaml#/definitions/uint32-array
description: Array of enable and mask pairs, one per gpio in-gpios.
out-gpios:
description: Array of one GPIO pin used for output signals.
brcm,out-functions:
$ref: /schemas/types.yaml#/definitions/string-array
description: Array of output signal names, one per gpio in out-gpios.
brcm,out-masks:
$ref: /schemas/types.yaml#/definitions/uint32-array
description: Array of enable, value, changed and clear masks, one
per gpio in out-gpios.
required:
- compatible
- reg
additionalProperties: false
dependencies:
in-gpios: [ interrupts ]
examples:
- |
usb_pinmap: usb-pinmap@22000d0 {
compatible = "brcm,usb-pinmap";
reg = <0x22000d0 0x4>;
in-gpios = <&gpio 18 0>, <&gpio 19 0>;
brcm,in-functions = "VBUS", "PWRFLT";
brcm,in-masks = <0x8000 0x40000 0x10000 0x80000>;
out-gpios = <&gpio 20 0>;
brcm,out-functions = "PWRON";
brcm,out-masks = <0x20000 0x800000 0x400000 0x200000>;
interrupts = <0x0 0xb2 0x4>;
};
...

View File

@ -26,16 +26,21 @@ properties:
- const: dev
interrupts:
minItems: 3
items:
- description: OTG/DRD controller interrupt
- description: XHCI host controller interrupt
- description: Device controller interrupt
- description: Interrupt used to wake up the core, e.g. when usbcmd.rs is
cleared by the xhci core; this interrupt is optional
interrupt-names:
minItems: 3
items:
- const: host
- const: peripheral
- const: otg
- const: wakeup
dr_mode:
enum: [host, otg, peripheral]

View File

@ -0,0 +1,75 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/usb/maxim,max33359.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: Maxim TCPCI Type-C PD controller DT bindings
maintainers:
- Badhri Jagan Sridharan <badhri@google.com>
description: Maxim TCPCI Type-C PD controller
properties:
compatible:
enum:
- maxim,max33359
reg:
maxItems: 1
interrupts:
maxItems: 1
connector:
type: object
$ref: ../connector/usb-connector.yaml#
description:
Properties for usb c connector.
required:
- compatible
- reg
- interrupts
- connector
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/usb/pd.h>
i2c0 {
#address-cells = <1>;
#size-cells = <0>;
maxtcpc@25 {
compatible = "maxim,max33359";
reg = <0x25>;
interrupt-parent = <&gpa8>;
interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
connector {
compatible = "usb-c-connector";
label = "USB-C";
data-role = "dual";
power-role = "dual";
try-power-role = "sink";
self-powered;
op-sink-microwatt = <2600000>;
new-source-frs-typec-current = <FRS_5V_1P5A>;
source-pdos = <PDO_FIXED(5000, 900,
PDO_FIXED_SUSPEND |
PDO_FIXED_USB_COMM |
PDO_FIXED_DATA_SWAP |
PDO_FIXED_DUAL_ROLE)>;
sink-pdos = <PDO_FIXED(5000, 3000,
PDO_FIXED_USB_COMM |
PDO_FIXED_DATA_SWAP |
PDO_FIXED_DUAL_ROLE)
PDO_FIXED(9000, 2000, 0)>;
};
};
};
...

View File

@ -3583,6 +3583,14 @@ S: Maintained
F: Documentation/devicetree/bindings/usb/brcm,bcm7445-ehci.yaml
F: drivers/usb/host/ehci-brcm.*
BROADCOM BRCMSTB USB PIN MAP DRIVER
M: Al Cooper <alcooperx@gmail.com>
L: linux-usb@vger.kernel.org
L: bcm-kernel-feedback-list@broadcom.com
S: Maintained
F: Documentation/devicetree/bindings/usb/brcm,usb-pinmap.yaml
F: drivers/usb/misc/brcmstb-usb-pinmap.c
BROADCOM BRCMSTB USB2 and USB3 PHY DRIVER
M: Al Cooper <alcooperx@gmail.com>
L: linux-kernel@vger.kernel.org
@ -3868,6 +3876,7 @@ CADENCE USB3 DRD IP DRIVER
M: Peter Chen <peter.chen@nxp.com>
M: Pawel Laszczak <pawell@cadence.com>
M: Roger Quadros <rogerq@ti.com>
R: Aswath Govindraju <a-govindraju@ti.com>
L: linux-usb@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb.git
@ -17476,6 +17485,12 @@ W: http://thinkwiki.org/wiki/Ibm-acpi
T: git git://repo.or.cz/linux-2.6/linux-acpi-2.6/ibm-acpi-2.6.git
F: drivers/platform/x86/thinkpad_acpi.c
THUNDERBOLT DMA TRAFFIC TEST DRIVER
M: Isaac Hazan <isaac.hazan@intel.com>
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/thunderbolt/dma_test.c
THUNDERBOLT DRIVER
M: Andreas Noever <andreas.noever@gmail.com>
M: Michael Jamet <michael.jamet@intel.com>

View File

@ -89,7 +89,6 @@ CONFIG_USB_SERIAL_KEYSPAN=m
CONFIG_USB_SERIAL_MCT_U232=m
CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_EXT2_FS=m
CONFIG_EXT3_FS=m

View File

@ -191,7 +191,6 @@ CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_SAFE=m
CONFIG_USB_SERIAL_TI=m
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m

View File

@ -574,7 +574,6 @@ CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_SAFE=m
CONFIG_USB_SERIAL_TI=m
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m

View File

@ -185,7 +185,6 @@ CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_SAFE=m
CONFIG_USB_SERIAL_TI=m
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m

View File

@ -16,6 +16,7 @@
* Copyright (C) 2004 Nokia Corporation by Imre Deak <imre.deak@nokia.com>
*/
#include <linux/gpio.h>
#include <linux/gpio/machine.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
@ -46,6 +47,9 @@
#include "common.h"
#include "board-h2.h"
/* The first 16 SoC GPIO lines are on this GPIO chip */
#define OMAP_GPIO_LABEL "gpio-0-15"
/* At OMAP1610 Innovator the Ethernet is directly connected to CS1 */
#define OMAP1610_ETHR_START 0x04000300
@ -334,7 +338,19 @@ static struct i2c_board_info __initdata h2_i2c_board_info[] = {
I2C_BOARD_INFO("tps65010", 0x48),
.platform_data = &tps_board,
}, {
I2C_BOARD_INFO("isp1301_omap", 0x2d),
.type = "isp1301_omap",
.addr = 0x2d,
.dev_name = "isp1301",
},
};
static struct gpiod_lookup_table isp1301_gpiod_table = {
.dev_id = "isp1301",
.table = {
/* Active low since the irq triggers on falling edge */
GPIO_LOOKUP(OMAP_GPIO_LABEL, 2,
NULL, GPIO_ACTIVE_LOW),
{ },
},
};
@ -406,8 +422,10 @@ static void __init h2_init(void)
h2_smc91x_resources[1].end = gpio_to_irq(0);
platform_add_devices(h2_devices, ARRAY_SIZE(h2_devices));
omap_serial_init();
/* ISP1301 IRQ wired at M14 */
omap_cfg_reg(M14_1510_GPIO2);
h2_i2c_board_info[0].irq = gpio_to_irq(58);
h2_i2c_board_info[1].irq = gpio_to_irq(2);
omap_register_i2c_bus(1, 100, h2_i2c_board_info,
ARRAY_SIZE(h2_i2c_board_info));
omap1_usb_init(&h2_usb_config);

View File

@ -563,7 +563,6 @@ CONFIG_USB_SERIAL_SAFE=m
CONFIG_USB_SERIAL_SIERRAWIRELESS=m
CONFIG_USB_SERIAL_TI=m
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OPTION=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_EMI62=m

View File

@ -311,7 +311,6 @@ CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_SAFE=m
CONFIG_USB_SERIAL_SAFE_PADDED=y
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m

View File

@ -194,7 +194,6 @@ CONFIG_USB_SERIAL_SAFE=m
CONFIG_USB_SERIAL_SAFE_PADDED=y
CONFIG_USB_SERIAL_TI=m
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_APPLEDISPLAY=m
CONFIG_EXT2_FS=y

View File

@ -911,7 +911,6 @@ CONFIG_USB_SERIAL_SAFE_PADDED=y
CONFIG_USB_SERIAL_SIERRAWIRELESS=m
CONFIG_USB_SERIAL_TI=m
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OPTION=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_SERIAL_DEBUG=m

View File

@ -866,7 +866,7 @@ static int tbnet_open(struct net_device *dev)
eof_mask = BIT(TBIP_PDF_FRAME_END);
ring = tb_ring_alloc_rx(xd->tb->nhi, -1, TBNET_RING_SIZE,
RING_FLAG_FRAME, sof_mask, eof_mask,
RING_FLAG_FRAME, 0, sof_mask, eof_mask,
tbnet_start_poll, net);
if (!ring) {
netdev_err(dev, "failed to allocate Rx ring\n");

View File

@ -438,8 +438,7 @@ static int cros_typec_enable_tbt(struct cros_typec_data *typec,
if (pd_ctrl->control_flags & USB_PD_CTRL_ACTIVE_LINK_UNIDIR)
data.cable_mode |= TBT_CABLE_LINK_TRAINING;
if (pd_ctrl->cable_gen)
data.cable_mode |= TBT_CABLE_ROUNDED;
data.cable_mode |= TBT_SET_CABLE_ROUNDED(pd_ctrl->cable_gen);
/* Enter Mode VDO */
data.enter_vdo = TBT_SET_CABLE_SPEED(pd_ctrl->cable_speed);

View File

@ -31,4 +31,17 @@ config USB4_KUNIT_TEST
bool "KUnit tests"
depends on KUNIT=y
config USB4_DMA_TEST
tristate "DMA traffic test driver"
depends on DEBUG_FS
help
This allows sending and receiving DMA traffic through loopback
connection. Loopback connection can be done by either special
dongle that has TX/RX lines crossed, or by simply connecting a
cable back to the host. Only enable this if you know what you
are doing. Normal users and distro kernels should say N here.
To compile this driver as a module, choose M here. The module will be
called thunderbolt_dma_test.
endif # USB4

View File

@ -7,3 +7,6 @@ thunderbolt-objs += nvm.o retimer.o quirks.o
thunderbolt-${CONFIG_ACPI} += acpi.o
thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o
thunderbolt-${CONFIG_USB4_KUNIT_TEST} += test.o
thunderbolt_dma_test-${CONFIG_USB4_DMA_TEST} += dma_test.o
obj-$(CONFIG_USB4_DMA_TEST) += thunderbolt_dma_test.o

View File

@ -628,8 +628,8 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)
if (!ctl->tx)
goto err;
ctl->rx = tb_ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND, 0xffff,
0xffff, NULL, NULL);
ctl->rx = tb_ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND, 0, 0xffff,
0xffff, NULL, NULL);
if (!ctl->rx)
goto err;
@ -962,6 +962,9 @@ static int tb_cfg_get_error(struct tb_ctl *ctl, enum tb_cfg_space space,
if (res->tb_error == TB_CFG_ERROR_LOCK)
return -EACCES;
else if (res->tb_error == TB_CFG_ERROR_PORT_NOT_CONNECTED)
return -ENOTCONN;
return -EIO;
}

View File

@ -691,6 +691,30 @@ void tb_switch_debugfs_remove(struct tb_switch *sw)
debugfs_remove_recursive(sw->debugfs_dir);
}
/**
* tb_service_debugfs_init() - Add debugfs directory for service
* @svc: Thunderbolt service pointer
*
* Adds debugfs directory for service.
*/
void tb_service_debugfs_init(struct tb_service *svc)
{
svc->debugfs_dir = debugfs_create_dir(dev_name(&svc->dev),
tb_debugfs_root);
}
/**
* tb_service_debugfs_remove() - Remove service debugfs directory
* @svc: Thunderbolt service pointer
*
* Removes the previously created debugfs directory for @svc.
*/
void tb_service_debugfs_remove(struct tb_service *svc)
{
debugfs_remove_recursive(svc->debugfs_dir);
svc->debugfs_dir = NULL;
}
void tb_debugfs_init(void)
{
tb_debugfs_root = debugfs_create_dir("thunderbolt", NULL);

View File

@ -0,0 +1,736 @@
// SPDX-License-Identifier: GPL-2.0
/*
* DMA traffic test driver
*
* Copyright (C) 2020, Intel Corporation
* Authors: Isaac Hazan <isaac.hazan@intel.com>
* Mika Westerberg <mika.westerberg@linux.intel.com>
*/
#include <linux/acpi.h>
#include <linux/completion.h>
#include <linux/debugfs.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <linux/thunderbolt.h>
#define DMA_TEST_HOPID 8
#define DMA_TEST_TX_RING_SIZE 64
#define DMA_TEST_RX_RING_SIZE 256
#define DMA_TEST_FRAME_SIZE SZ_4K
#define DMA_TEST_DATA_PATTERN 0x0123456789abcdefLL
#define DMA_TEST_MAX_PACKETS 1000
enum dma_test_frame_pdf {
DMA_TEST_PDF_FRAME_START = 1,
DMA_TEST_PDF_FRAME_END,
};
struct dma_test_frame {
struct dma_test *dma_test;
void *data;
struct ring_frame frame;
};
enum dma_test_test_error {
DMA_TEST_NO_ERROR,
DMA_TEST_INTERRUPTED,
DMA_TEST_BUFFER_ERROR,
DMA_TEST_DMA_ERROR,
DMA_TEST_CONFIG_ERROR,
DMA_TEST_SPEED_ERROR,
DMA_TEST_WIDTH_ERROR,
DMA_TEST_BONDING_ERROR,
DMA_TEST_PACKET_ERROR,
};
static const char * const dma_test_error_names[] = {
[DMA_TEST_NO_ERROR] = "no errors",
[DMA_TEST_INTERRUPTED] = "interrupted by signal",
[DMA_TEST_BUFFER_ERROR] = "no memory for packet buffers",
[DMA_TEST_DMA_ERROR] = "DMA ring setup failed",
[DMA_TEST_CONFIG_ERROR] = "configuration is not valid",
[DMA_TEST_SPEED_ERROR] = "unexpected link speed",
[DMA_TEST_WIDTH_ERROR] = "unexpected link width",
[DMA_TEST_BONDING_ERROR] = "lane bonding configuration error",
[DMA_TEST_PACKET_ERROR] = "packet check failed",
};
enum dma_test_result {
DMA_TEST_NOT_RUN,
DMA_TEST_SUCCESS,
DMA_TEST_FAIL,
};
static const char * const dma_test_result_names[] = {
[DMA_TEST_NOT_RUN] = "not run",
[DMA_TEST_SUCCESS] = "success",
[DMA_TEST_FAIL] = "failed",
};
/**
* struct dma_test - DMA test device driver private data
* @svc: XDomain service the driver is bound to
* @xd: XDomain the service belongs to
* @rx_ring: Software ring holding RX frames
* @tx_ring: Software ring holding TX frames
* @packets_to_send: Number of packets to send
* @packets_to_receive: Number of packets to receive
* @packets_sent: Actual number of packets sent
* @packets_received: Actual number of packets received
* @link_speed: Expected link speed (Gb/s), %0 to use whatever is negotiated
* @link_width: Expected link width (lanes), %0 to use whatever is negotiated
* @crc_errors: Number of CRC errors during the test run
* @buffer_overflow_errors: Number of buffer overflow errors during the test
* run
* @result: Result of the last run
* @error_code: Error code of the last run
* @complete: Used to wait for the Rx to complete
* @lock: Lock serializing access to this structure
* @debugfs_dir: dentry of this dma_test
*/
struct dma_test {
const struct tb_service *svc;
struct tb_xdomain *xd;
struct tb_ring *rx_ring;
struct tb_ring *tx_ring;
unsigned int packets_to_send;
unsigned int packets_to_receive;
unsigned int packets_sent;
unsigned int packets_received;
unsigned int link_speed;
unsigned int link_width;
unsigned int crc_errors;
unsigned int buffer_overflow_errors;
enum dma_test_result result;
enum dma_test_test_error error_code;
struct completion complete;
struct mutex lock;
struct dentry *debugfs_dir;
};
/* DMA test property directory UUID: 3188cd10-6523-4a5a-a682-fdca07a248d8 */
static const uuid_t dma_test_dir_uuid =
UUID_INIT(0x3188cd10, 0x6523, 0x4a5a,
0xa6, 0x82, 0xfd, 0xca, 0x07, 0xa2, 0x48, 0xd8);
static struct tb_property_dir *dma_test_dir;
static void *dma_test_pattern;
static void dma_test_free_rings(struct dma_test *dt)
{
if (dt->rx_ring) {
tb_ring_free(dt->rx_ring);
dt->rx_ring = NULL;
}
if (dt->tx_ring) {
tb_ring_free(dt->tx_ring);
dt->tx_ring = NULL;
}
}
static int dma_test_start_rings(struct dma_test *dt)
{
unsigned int flags = RING_FLAG_FRAME;
struct tb_xdomain *xd = dt->xd;
int ret, e2e_tx_hop = 0;
struct tb_ring *ring;
/*
* If we are both sender and receiver (traffic goes over a
* special loopback dongle) enable E2E flow control. This avoids
* losing packets.
*/
if (dt->packets_to_send && dt->packets_to_receive)
flags |= RING_FLAG_E2E;
if (dt->packets_to_send) {
ring = tb_ring_alloc_tx(xd->tb->nhi, -1, DMA_TEST_TX_RING_SIZE,
flags);
if (!ring)
return -ENOMEM;
dt->tx_ring = ring;
e2e_tx_hop = ring->hop;
}
if (dt->packets_to_receive) {
u16 sof_mask, eof_mask;
sof_mask = BIT(DMA_TEST_PDF_FRAME_START);
eof_mask = BIT(DMA_TEST_PDF_FRAME_END);
ring = tb_ring_alloc_rx(xd->tb->nhi, -1, DMA_TEST_RX_RING_SIZE,
flags, e2e_tx_hop, sof_mask, eof_mask,
NULL, NULL);
if (!ring) {
dma_test_free_rings(dt);
return -ENOMEM;
}
dt->rx_ring = ring;
}
ret = tb_xdomain_enable_paths(dt->xd, DMA_TEST_HOPID,
dt->tx_ring ? dt->tx_ring->hop : 0,
DMA_TEST_HOPID,
dt->rx_ring ? dt->rx_ring->hop : 0);
if (ret) {
dma_test_free_rings(dt);
return ret;
}
if (dt->tx_ring)
tb_ring_start(dt->tx_ring);
if (dt->rx_ring)
tb_ring_start(dt->rx_ring);
return 0;
}
static void dma_test_stop_rings(struct dma_test *dt)
{
if (dt->rx_ring)
tb_ring_stop(dt->rx_ring);
if (dt->tx_ring)
tb_ring_stop(dt->tx_ring);
if (tb_xdomain_disable_paths(dt->xd))
dev_warn(&dt->svc->dev, "failed to disable DMA paths\n");
dma_test_free_rings(dt);
}
static void dma_test_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
bool canceled)
{
struct dma_test_frame *tf = container_of(frame, typeof(*tf), frame);
struct dma_test *dt = tf->dma_test;
struct device *dma_dev = tb_ring_dma_device(dt->rx_ring);
dma_unmap_single(dma_dev, tf->frame.buffer_phy, DMA_TEST_FRAME_SIZE,
DMA_FROM_DEVICE);
kfree(tf->data);
if (canceled) {
kfree(tf);
return;
}
dt->packets_received++;
dev_dbg(&dt->svc->dev, "packet %u/%u received\n", dt->packets_received,
dt->packets_to_receive);
if (tf->frame.flags & RING_DESC_CRC_ERROR)
dt->crc_errors++;
if (tf->frame.flags & RING_DESC_BUFFER_OVERRUN)
dt->buffer_overflow_errors++;
kfree(tf);
if (dt->packets_received == dt->packets_to_receive)
complete(&dt->complete);
}
static int dma_test_submit_rx(struct dma_test *dt, size_t npackets)
{
struct device *dma_dev = tb_ring_dma_device(dt->rx_ring);
int i;
for (i = 0; i < npackets; i++) {
struct dma_test_frame *tf;
dma_addr_t dma_addr;
tf = kzalloc(sizeof(*tf), GFP_KERNEL);
if (!tf)
return -ENOMEM;
tf->data = kzalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
if (!tf->data) {
kfree(tf);
return -ENOMEM;
}
dma_addr = dma_map_single(dma_dev, tf->data, DMA_TEST_FRAME_SIZE,
DMA_FROM_DEVICE);
if (dma_mapping_error(dma_dev, dma_addr)) {
kfree(tf->data);
kfree(tf);
return -ENOMEM;
}
tf->frame.buffer_phy = dma_addr;
tf->frame.callback = dma_test_rx_callback;
tf->dma_test = dt;
INIT_LIST_HEAD(&tf->frame.list);
tb_ring_rx(dt->rx_ring, &tf->frame);
}
return 0;
}
static void dma_test_tx_callback(struct tb_ring *ring, struct ring_frame *frame,
bool canceled)
{
struct dma_test_frame *tf = container_of(frame, typeof(*tf), frame);
struct dma_test *dt = tf->dma_test;
struct device *dma_dev = tb_ring_dma_device(dt->tx_ring);
dma_unmap_single(dma_dev, tf->frame.buffer_phy, DMA_TEST_FRAME_SIZE,
DMA_TO_DEVICE);
kfree(tf->data);
kfree(tf);
}
static int dma_test_submit_tx(struct dma_test *dt, size_t npackets)
{
struct device *dma_dev = tb_ring_dma_device(dt->tx_ring);
int i;
for (i = 0; i < npackets; i++) {
struct dma_test_frame *tf;
dma_addr_t dma_addr;
tf = kzalloc(sizeof(*tf), GFP_KERNEL);
if (!tf)
return -ENOMEM;
tf->frame.size = 0; /* means 4096 */
tf->dma_test = dt;
tf->data = kzalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
if (!tf->data) {
kfree(tf);
return -ENOMEM;
}
memcpy(tf->data, dma_test_pattern, DMA_TEST_FRAME_SIZE);
dma_addr = dma_map_single(dma_dev, tf->data, DMA_TEST_FRAME_SIZE,
DMA_TO_DEVICE);
if (dma_mapping_error(dma_dev, dma_addr)) {
kfree(tf->data);
kfree(tf);
return -ENOMEM;
}
tf->frame.buffer_phy = dma_addr;
tf->frame.callback = dma_test_tx_callback;
tf->frame.sof = DMA_TEST_PDF_FRAME_START;
tf->frame.eof = DMA_TEST_PDF_FRAME_END;
INIT_LIST_HEAD(&tf->frame.list);
dt->packets_sent++;
dev_dbg(&dt->svc->dev, "packet %u/%u sent\n", dt->packets_sent,
dt->packets_to_send);
tb_ring_tx(dt->tx_ring, &tf->frame);
}
return 0;
}
#define DMA_TEST_DEBUGFS_ATTR(__fops, __get, __validate, __set) \
static int __fops ## _show(void *data, u64 *val) \
{ \
struct tb_service *svc = data; \
struct dma_test *dt = tb_service_get_drvdata(svc); \
int ret; \
\
ret = mutex_lock_interruptible(&dt->lock); \
if (ret) \
return ret; \
__get(dt, val); \
mutex_unlock(&dt->lock); \
return 0; \
} \
static int __fops ## _store(void *data, u64 val) \
{ \
struct tb_service *svc = data; \
struct dma_test *dt = tb_service_get_drvdata(svc); \
int ret; \
\
ret = __validate(val); \
if (ret) \
return ret; \
ret = mutex_lock_interruptible(&dt->lock); \
if (ret) \
return ret; \
__set(dt, val); \
mutex_unlock(&dt->lock); \
return 0; \
} \
DEFINE_DEBUGFS_ATTRIBUTE(__fops ## _fops, __fops ## _show, \
__fops ## _store, "%llu\n")
static void lanes_get(const struct dma_test *dt, u64 *val)
{
*val = dt->link_width;
}
static int lanes_validate(u64 val)
{
return val > 2 ? -EINVAL : 0;
}
static void lanes_set(struct dma_test *dt, u64 val)
{
dt->link_width = val;
}
DMA_TEST_DEBUGFS_ATTR(lanes, lanes_get, lanes_validate, lanes_set);
static void speed_get(const struct dma_test *dt, u64 *val)
{
*val = dt->link_speed;
}
static int speed_validate(u64 val)
{
switch (val) {
case 20:
case 10:
case 0:
return 0;
default:
return -EINVAL;
}
}
static void speed_set(struct dma_test *dt, u64 val)
{
dt->link_speed = val;
}
DMA_TEST_DEBUGFS_ATTR(speed, speed_get, speed_validate, speed_set);
static void packets_to_receive_get(const struct dma_test *dt, u64 *val)
{
*val = dt->packets_to_receive;
}
static int packets_to_receive_validate(u64 val)
{
return val > DMA_TEST_MAX_PACKETS ? -EINVAL : 0;
}
static void packets_to_receive_set(struct dma_test *dt, u64 val)
{
dt->packets_to_receive = val;
}
DMA_TEST_DEBUGFS_ATTR(packets_to_receive, packets_to_receive_get,
packets_to_receive_validate, packets_to_receive_set);
static void packets_to_send_get(const struct dma_test *dt, u64 *val)
{
*val = dt->packets_to_send;
}
static int packets_to_send_validate(u64 val)
{
return val > DMA_TEST_MAX_PACKETS ? -EINVAL : 0;
}
static void packets_to_send_set(struct dma_test *dt, u64 val)
{
dt->packets_to_send = val;
}
DMA_TEST_DEBUGFS_ATTR(packets_to_send, packets_to_send_get,
packets_to_send_validate, packets_to_send_set);
static int dma_test_set_bonding(struct dma_test *dt)
{
switch (dt->link_width) {
case 2:
return tb_xdomain_lane_bonding_enable(dt->xd);
case 1:
tb_xdomain_lane_bonding_disable(dt->xd);
fallthrough;
default:
return 0;
}
}
static bool dma_test_validate_config(struct dma_test *dt)
{
if (!dt->packets_to_send && !dt->packets_to_receive)
return false;
if (dt->packets_to_send && dt->packets_to_receive &&
dt->packets_to_send != dt->packets_to_receive)
return false;
return true;
}
static void dma_test_check_errors(struct dma_test *dt, int ret)
{
if (!dt->error_code) {
if (dt->link_speed && dt->xd->link_speed != dt->link_speed) {
dt->error_code = DMA_TEST_SPEED_ERROR;
} else if (dt->link_width &&
dt->xd->link_width != dt->link_width) {
dt->error_code = DMA_TEST_WIDTH_ERROR;
} else if (dt->packets_to_send != dt->packets_sent ||
dt->packets_to_receive != dt->packets_received ||
dt->crc_errors || dt->buffer_overflow_errors) {
dt->error_code = DMA_TEST_PACKET_ERROR;
} else {
return;
}
}
dt->result = DMA_TEST_FAIL;
}
static int test_store(void *data, u64 val)
{
struct tb_service *svc = data;
struct dma_test *dt = tb_service_get_drvdata(svc);
int ret;
if (val != 1)
return -EINVAL;
ret = mutex_lock_interruptible(&dt->lock);
if (ret)
return ret;
dt->packets_sent = 0;
dt->packets_received = 0;
dt->crc_errors = 0;
dt->buffer_overflow_errors = 0;
dt->result = DMA_TEST_SUCCESS;
dt->error_code = DMA_TEST_NO_ERROR;
dev_dbg(&svc->dev, "DMA test starting\n");
if (dt->link_speed)
dev_dbg(&svc->dev, "link_speed: %u Gb/s\n", dt->link_speed);
if (dt->link_width)
dev_dbg(&svc->dev, "link_width: %u\n", dt->link_width);
dev_dbg(&svc->dev, "packets_to_send: %u\n", dt->packets_to_send);
dev_dbg(&svc->dev, "packets_to_receive: %u\n", dt->packets_to_receive);
if (!dma_test_validate_config(dt)) {
dev_err(&svc->dev, "invalid test configuration\n");
dt->error_code = DMA_TEST_CONFIG_ERROR;
goto out_unlock;
}
ret = dma_test_set_bonding(dt);
if (ret) {
dev_err(&svc->dev, "failed to set lanes\n");
dt->error_code = DMA_TEST_BONDING_ERROR;
goto out_unlock;
}
ret = dma_test_start_rings(dt);
if (ret) {
dev_err(&svc->dev, "failed to enable DMA rings\n");
dt->error_code = DMA_TEST_DMA_ERROR;
goto out_unlock;
}
if (dt->packets_to_receive) {
reinit_completion(&dt->complete);
ret = dma_test_submit_rx(dt, dt->packets_to_receive);
if (ret) {
dev_err(&svc->dev, "failed to submit receive buffers\n");
dt->error_code = DMA_TEST_BUFFER_ERROR;
goto out_stop;
}
}
if (dt->packets_to_send) {
ret = dma_test_submit_tx(dt, dt->packets_to_send);
if (ret) {
dev_err(&svc->dev, "failed to submit transmit buffers\n");
dt->error_code = DMA_TEST_BUFFER_ERROR;
goto out_stop;
}
}
if (dt->packets_to_receive) {
ret = wait_for_completion_interruptible(&dt->complete);
if (ret) {
dt->error_code = DMA_TEST_INTERRUPTED;
goto out_stop;
}
}
out_stop:
dma_test_stop_rings(dt);
out_unlock:
dma_test_check_errors(dt, ret);
mutex_unlock(&dt->lock);
dev_dbg(&svc->dev, "DMA test %s\n", dma_test_result_names[dt->result]);
return ret;
}
DEFINE_DEBUGFS_ATTRIBUTE(test_fops, NULL, test_store, "%llu\n");
static int status_show(struct seq_file *s, void *not_used)
{
struct tb_service *svc = s->private;
struct dma_test *dt = tb_service_get_drvdata(svc);
int ret;
ret = mutex_lock_interruptible(&dt->lock);
if (ret)
return ret;
seq_printf(s, "result: %s\n", dma_test_result_names[dt->result]);
if (dt->result == DMA_TEST_NOT_RUN)
goto out_unlock;
seq_printf(s, "packets received: %u\n", dt->packets_received);
seq_printf(s, "packets sent: %u\n", dt->packets_sent);
seq_printf(s, "CRC errors: %u\n", dt->crc_errors);
seq_printf(s, "buffer overflow errors: %u\n",
dt->buffer_overflow_errors);
seq_printf(s, "error: %s\n", dma_test_error_names[dt->error_code]);
out_unlock:
mutex_unlock(&dt->lock);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(status);
static void dma_test_debugfs_init(struct tb_service *svc)
{
struct dma_test *dt = tb_service_get_drvdata(svc);
dt->debugfs_dir = debugfs_create_dir("dma_test", svc->debugfs_dir);
debugfs_create_file("lanes", 0600, dt->debugfs_dir, svc, &lanes_fops);
debugfs_create_file("speed", 0600, dt->debugfs_dir, svc, &speed_fops);
debugfs_create_file("packets_to_receive", 0600, dt->debugfs_dir, svc,
&packets_to_receive_fops);
debugfs_create_file("packets_to_send", 0600, dt->debugfs_dir, svc,
&packets_to_send_fops);
debugfs_create_file("status", 0400, dt->debugfs_dir, svc, &status_fops);
debugfs_create_file("test", 0200, dt->debugfs_dir, svc, &test_fops);
}
static int dma_test_probe(struct tb_service *svc, const struct tb_service_id *id)
{
struct tb_xdomain *xd = tb_service_parent(svc);
struct dma_test *dt;
dt = devm_kzalloc(&svc->dev, sizeof(*dt), GFP_KERNEL);
if (!dt)
return -ENOMEM;
dt->svc = svc;
dt->xd = xd;
mutex_init(&dt->lock);
init_completion(&dt->complete);
tb_service_set_drvdata(svc, dt);
dma_test_debugfs_init(svc);
return 0;
}
static void dma_test_remove(struct tb_service *svc)
{
struct dma_test *dt = tb_service_get_drvdata(svc);
mutex_lock(&dt->lock);
debugfs_remove_recursive(dt->debugfs_dir);
mutex_unlock(&dt->lock);
}
static int __maybe_unused dma_test_suspend(struct device *dev)
{
/*
* No need to do anything special here. If userspace is writing
* to the test attribute when suspend started, it comes out from
* wait_for_completion_interruptible() with -ERESTARTSYS and the
* DMA test fails tearing down the rings. Once userspace is
* thawed the kernel restarts the write syscall effectively
* re-running the test.
*/
return 0;
}
static int __maybe_unused dma_test_resume(struct device *dev)
{
return 0;
}
static const struct dev_pm_ops dma_test_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(dma_test_suspend, dma_test_resume)
};
static const struct tb_service_id dma_test_ids[] = {
{ TB_SERVICE("dma_test", 1) },
{ },
};
MODULE_DEVICE_TABLE(tbsvc, dma_test_ids);
static struct tb_service_driver dma_test_driver = {
.driver = {
.owner = THIS_MODULE,
.name = "thunderbolt_dma_test",
.pm = &dma_test_pm_ops,
},
.probe = dma_test_probe,
.remove = dma_test_remove,
.id_table = dma_test_ids,
};
static int __init dma_test_init(void)
{
u64 data_value = DMA_TEST_DATA_PATTERN;
int i, ret;
dma_test_pattern = kmalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
if (!dma_test_pattern)
return -ENOMEM;
for (i = 0; i < DMA_TEST_FRAME_SIZE / sizeof(data_value); i++)
((u32 *)dma_test_pattern)[i] = data_value++;
dma_test_dir = tb_property_create_dir(&dma_test_dir_uuid);
if (!dma_test_dir) {
ret = -ENOMEM;
goto err_free_pattern;
}
tb_property_add_immediate(dma_test_dir, "prtcid", 1);
tb_property_add_immediate(dma_test_dir, "prtcvers", 1);
tb_property_add_immediate(dma_test_dir, "prtcrevs", 0);
tb_property_add_immediate(dma_test_dir, "prtcstns", 0);
ret = tb_register_property_dir("dma_test", dma_test_dir);
if (ret)
goto err_free_dir;
ret = tb_register_service_driver(&dma_test_driver);
if (ret)
goto err_unregister_dir;
return 0;
err_unregister_dir:
tb_unregister_property_dir("dma_test", dma_test_dir);
err_free_dir:
tb_property_free_dir(dma_test_dir);
err_free_pattern:
kfree(dma_test_pattern);
return ret;
}
module_init(dma_test_init);
static void __exit dma_test_exit(void)
{
tb_unregister_service_driver(&dma_test_driver);
tb_unregister_property_dir("dma_test", dma_test_dir);
tb_property_free_dir(dma_test_dir);
kfree(dma_test_pattern);
}
module_exit(dma_test_exit);
MODULE_AUTHOR("Isaac Hazan <isaac.hazan@intel.com>");
MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
MODULE_DESCRIPTION("DMA traffic test driver");
MODULE_LICENSE("GPL v2");
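
For illustration only (not part of the patch), a userspace C sketch that drives the debugfs files created by dma_test_debugfs_init() for a loopback run; the service directory name under /sys/kernel/debug/thunderbolt/ ("0-1.1" below) is a placeholder that depends on the actual tb_service device name:

#include <stdio.h>

#define DMA_TEST_DIR "/sys/kernel/debug/thunderbolt/0-1.1/dma_test"

static int write_attr(const char *attr, const char *value)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), DMA_TEST_DIR "/%s", attr);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs(value, f);
	return fclose(f);
}

int main(void)
{
	char line[128];
	FILE *f;

	/* Loopback run: send and receive the same number of packets */
	if (write_attr("packets_to_send", "1000") ||
	    write_attr("packets_to_receive", "1000"))
		return 1;

	/* Writing 1 to "test" starts the run and blocks until it finishes */
	if (write_attr("test", "1"))
		return 1;

	/* "status" reports the result, packet counts and error counters */
	f = fopen(DMA_TEST_DIR "/status", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}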

View File

@ -48,6 +48,18 @@ static bool start_icm;
module_param(start_icm, bool, 0444);
MODULE_PARM_DESC(start_icm, "start ICM firmware if it is not running (default: false)");
/**
* struct usb4_switch_nvm_auth - Holds USB4 NVM_AUTH status
* @reply: Reply from ICM firmware is placed here
* @request: Request that is sent to ICM firmware
* @icm: Pointer to ICM private data
*/
struct usb4_switch_nvm_auth {
struct icm_usb4_switch_op_response reply;
struct icm_usb4_switch_op request;
struct icm *icm;
};
/**
* struct icm - Internal connection manager private data
* @request_lock: Makes sure only one message is send to ICM at time
@ -61,6 +73,8 @@ MODULE_PARM_DESC(start_icm, "start ICM firmware if it is not running (default: f
* @max_boot_acl: Maximum number of preboot ACL entries (%0 if not supported)
* @rpm: Does the controller support runtime PM (RTD3)
* @can_upgrade_nvm: Can the NVM firmware be upgraded on this controller
* @proto_version: Firmware protocol version
* @last_nvm_auth: Last USB4 router NVM_AUTH result (or %NULL if not set)
* @veto: Is RTD3 veto in effect
* @is_supported: Checks if we can support ICM on this controller
* @cio_reset: Trigger CIO reset
@ -79,11 +93,13 @@ struct icm {
struct mutex request_lock;
struct delayed_work rescan_work;
struct pci_dev *upstream_port;
size_t max_boot_acl;
int vnd_cap;
bool safe_mode;
size_t max_boot_acl;
bool rpm;
bool can_upgrade_nvm;
u8 proto_version;
struct usb4_switch_nvm_auth *last_nvm_auth;
bool veto;
bool (*is_supported)(struct tb *tb);
int (*cio_reset)(struct tb *tb);
@ -92,7 +108,7 @@ struct icm {
void (*save_devices)(struct tb *tb);
int (*driver_ready)(struct tb *tb,
enum tb_security_level *security_level,
size_t *nboot_acl, bool *rpm);
u8 *proto_version, size_t *nboot_acl, bool *rpm);
void (*set_uuid)(struct tb *tb);
void (*device_connected)(struct tb *tb,
const struct icm_pkg_header *hdr);
@ -437,7 +453,7 @@ static void icm_fr_save_devices(struct tb *tb)
static int
icm_fr_driver_ready(struct tb *tb, enum tb_security_level *security_level,
size_t *nboot_acl, bool *rpm)
u8 *proto_version, size_t *nboot_acl, bool *rpm)
{
struct icm_fr_pkg_driver_ready_response reply;
struct icm_pkg_driver_ready request = {
@ -870,7 +886,13 @@ icm_fr_device_disconnected(struct tb *tb, const struct icm_pkg_header *hdr)
return;
}
pm_runtime_get_sync(sw->dev.parent);
remove_switch(sw);
pm_runtime_mark_last_busy(sw->dev.parent);
pm_runtime_put_autosuspend(sw->dev.parent);
tb_switch_put(sw);
}
@ -986,7 +1008,7 @@ static int icm_tr_cio_reset(struct tb *tb)
static int
icm_tr_driver_ready(struct tb *tb, enum tb_security_level *security_level,
size_t *nboot_acl, bool *rpm)
u8 *proto_version, size_t *nboot_acl, bool *rpm)
{
struct icm_tr_pkg_driver_ready_response reply;
struct icm_pkg_driver_ready request = {
@ -1002,6 +1024,9 @@ icm_tr_driver_ready(struct tb *tb, enum tb_security_level *security_level,
if (security_level)
*security_level = reply.info & ICM_TR_INFO_SLEVEL_MASK;
if (proto_version)
*proto_version = (reply.info & ICM_TR_INFO_PROTO_VERSION_MASK) >>
ICM_TR_INFO_PROTO_VERSION_SHIFT;
if (nboot_acl)
*nboot_acl = (reply.info & ICM_TR_INFO_BOOT_ACL_MASK) >>
ICM_TR_INFO_BOOT_ACL_SHIFT;
@ -1280,8 +1305,13 @@ icm_tr_device_disconnected(struct tb *tb, const struct icm_pkg_header *hdr)
tb_warn(tb, "no switch exists at %llx, ignoring\n", route);
return;
}
pm_runtime_get_sync(sw->dev.parent);
remove_switch(sw);
pm_runtime_mark_last_busy(sw->dev.parent);
pm_runtime_put_autosuspend(sw->dev.parent);
tb_switch_put(sw);
}
@ -1450,7 +1480,7 @@ static int icm_ar_get_mode(struct tb *tb)
static int
icm_ar_driver_ready(struct tb *tb, enum tb_security_level *security_level,
size_t *nboot_acl, bool *rpm)
u8 *proto_version, size_t *nboot_acl, bool *rpm)
{
struct icm_ar_pkg_driver_ready_response reply;
struct icm_pkg_driver_ready request = {
@ -1580,7 +1610,7 @@ static int icm_ar_set_boot_acl(struct tb *tb, const uuid_t *uuids,
static int
icm_icl_driver_ready(struct tb *tb, enum tb_security_level *security_level,
size_t *nboot_acl, bool *rpm)
u8 *proto_version, size_t *nboot_acl, bool *rpm)
{
struct icm_tr_pkg_driver_ready_response reply;
struct icm_pkg_driver_ready request = {
@ -1594,6 +1624,10 @@ icm_icl_driver_ready(struct tb *tb, enum tb_security_level *security_level,
if (ret)
return ret;
if (proto_version)
*proto_version = (reply.info & ICM_TR_INFO_PROTO_VERSION_MASK) >>
ICM_TR_INFO_PROTO_VERSION_SHIFT;
/* Ice Lake always supports RTD3 */
if (rpm)
*rpm = true;
@ -1702,13 +1736,14 @@ static void icm_handle_event(struct tb *tb, enum tb_cfg_pkg_type type,
static int
__icm_driver_ready(struct tb *tb, enum tb_security_level *security_level,
size_t *nboot_acl, bool *rpm)
u8 *proto_version, size_t *nboot_acl, bool *rpm)
{
struct icm *icm = tb_priv(tb);
unsigned int retries = 50;
int ret;
ret = icm->driver_ready(tb, security_level, nboot_acl, rpm);
ret = icm->driver_ready(tb, security_level, proto_version, nboot_acl,
rpm);
if (ret) {
tb_err(tb, "failed to send driver ready to ICM\n");
return ret;
@ -1918,8 +1953,8 @@ static int icm_driver_ready(struct tb *tb)
return 0;
}
ret = __icm_driver_ready(tb, &tb->security_level, &tb->nboot_acl,
&icm->rpm);
ret = __icm_driver_ready(tb, &tb->security_level, &icm->proto_version,
&tb->nboot_acl, &icm->rpm);
if (ret)
return ret;
@ -1930,6 +1965,9 @@ static int icm_driver_ready(struct tb *tb)
if (tb->nboot_acl > icm->max_boot_acl)
tb->nboot_acl = 0;
if (icm->proto_version >= 3)
tb_dbg(tb, "USB4 proxy operations supported\n");
return 0;
}
@ -2045,7 +2083,7 @@ static void icm_complete(struct tb *tb)
* Now all existing children should be resumed, start events
* from ICM to get updated status.
*/
__icm_driver_ready(tb, NULL, NULL, NULL);
__icm_driver_ready(tb, NULL, NULL, NULL, NULL);
/*
* We do not get notifications of devices that have been
@ -2124,6 +2162,8 @@ static void icm_stop(struct tb *tb)
tb_switch_remove(tb->root_switch);
tb->root_switch = NULL;
nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DRV_UNLOADS, 0);
kfree(icm->last_nvm_auth);
icm->last_nvm_auth = NULL;
}
static int icm_disconnect_pcie_paths(struct tb *tb)
@ -2131,6 +2171,165 @@ static int icm_disconnect_pcie_paths(struct tb *tb)
return nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DISCONNECT_PCIE_PATHS, 0);
}
static void icm_usb4_switch_nvm_auth_complete(void *data)
{
struct usb4_switch_nvm_auth *auth = data;
struct icm *icm = auth->icm;
struct tb *tb = icm_to_tb(icm);
tb_dbg(tb, "NVM_AUTH response for %llx flags %#x status %#x\n",
get_route(auth->reply.route_hi, auth->reply.route_lo),
auth->reply.hdr.flags, auth->reply.status);
mutex_lock(&tb->lock);
if (WARN_ON(icm->last_nvm_auth))
kfree(icm->last_nvm_auth);
icm->last_nvm_auth = auth;
mutex_unlock(&tb->lock);
}
static int icm_usb4_switch_nvm_authenticate(struct tb *tb, u64 route)
{
struct usb4_switch_nvm_auth *auth;
struct icm *icm = tb_priv(tb);
struct tb_cfg_request *req;
int ret;
auth = kzalloc(sizeof(*auth), GFP_KERNEL);
if (!auth)
return -ENOMEM;
auth->icm = icm;
auth->request.hdr.code = ICM_USB4_SWITCH_OP;
auth->request.route_hi = upper_32_bits(route);
auth->request.route_lo = lower_32_bits(route);
auth->request.opcode = USB4_SWITCH_OP_NVM_AUTH;
req = tb_cfg_request_alloc();
if (!req) {
ret = -ENOMEM;
goto err_free_auth;
}
req->match = icm_match;
req->copy = icm_copy;
req->request = &auth->request;
req->request_size = sizeof(auth->request);
req->request_type = TB_CFG_PKG_ICM_CMD;
req->response = &auth->reply;
req->npackets = 1;
req->response_size = sizeof(auth->reply);
req->response_type = TB_CFG_PKG_ICM_RESP;
tb_dbg(tb, "NVM_AUTH request for %llx\n", route);
mutex_lock(&icm->request_lock);
ret = tb_cfg_request(tb->ctl, req, icm_usb4_switch_nvm_auth_complete,
auth);
mutex_unlock(&icm->request_lock);
tb_cfg_request_put(req);
if (ret)
goto err_free_auth;
return 0;
err_free_auth:
kfree(auth);
return ret;
}
static int icm_usb4_switch_op(struct tb_switch *sw, u16 opcode, u32 *metadata,
u8 *status, const void *tx_data, size_t tx_data_len,
void *rx_data, size_t rx_data_len)
{
struct icm_usb4_switch_op_response reply;
struct icm_usb4_switch_op request;
struct tb *tb = sw->tb;
struct icm *icm = tb_priv(tb);
u64 route = tb_route(sw);
int ret;
/*
* USB4 router operation proxy is supported in firmware if the
* protocol version is 3 or higher.
*/
if (icm->proto_version < 3)
return -EOPNOTSUPP;
/*
* NVM_AUTH is a special USB4 proxy operation that does not
* return immediately so handle it separately.
*/
if (opcode == USB4_SWITCH_OP_NVM_AUTH)
return icm_usb4_switch_nvm_authenticate(tb, route);
memset(&request, 0, sizeof(request));
request.hdr.code = ICM_USB4_SWITCH_OP;
request.route_hi = upper_32_bits(route);
request.route_lo = lower_32_bits(route);
request.opcode = opcode;
if (metadata)
request.metadata = *metadata;
if (tx_data_len) {
request.data_len_valid |= ICM_USB4_SWITCH_DATA_VALID;
if (tx_data_len < ARRAY_SIZE(request.data))
request.data_len_valid |=
tx_data_len & ICM_USB4_SWITCH_DATA_LEN_MASK;
memcpy(request.data, tx_data, tx_data_len * sizeof(u32));
}
memset(&reply, 0, sizeof(reply));
ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
1, ICM_TIMEOUT);
if (ret)
return ret;
if (reply.hdr.flags & ICM_FLAGS_ERROR)
return -EIO;
if (status)
*status = reply.status;
if (metadata)
*metadata = reply.metadata;
if (rx_data_len)
memcpy(rx_data, reply.data, rx_data_len * sizeof(u32));
return 0;
}
static int icm_usb4_switch_nvm_authenticate_status(struct tb_switch *sw,
u32 *status)
{
struct usb4_switch_nvm_auth *auth;
struct tb *tb = sw->tb;
struct icm *icm = tb_priv(tb);
int ret = 0;
if (icm->proto_version < 3)
return -EOPNOTSUPP;
auth = icm->last_nvm_auth;
icm->last_nvm_auth = NULL;
if (auth && auth->reply.route_hi == sw->config.route_hi &&
auth->reply.route_lo == sw->config.route_lo) {
tb_dbg(tb, "NVM_AUTH found for %llx flags 0x%#x status %#x\n",
tb_route(sw), auth->reply.hdr.flags, auth->reply.status);
if (auth->reply.hdr.flags & ICM_FLAGS_ERROR)
ret = -EIO;
else
*status = auth->reply.status;
} else {
*status = 0;
}
kfree(auth);
return ret;
}
/* Falcon Ridge */
static const struct tb_cm_ops icm_fr_ops = {
.driver_ready = icm_driver_ready,
@ -2189,6 +2388,9 @@ static const struct tb_cm_ops icm_tr_ops = {
.disconnect_pcie_paths = icm_disconnect_pcie_paths,
.approve_xdomain_paths = icm_tr_approve_xdomain_paths,
.disconnect_xdomain_paths = icm_tr_disconnect_xdomain_paths,
.usb4_switch_op = icm_usb4_switch_op,
.usb4_switch_nvm_authenticate_status =
icm_usb4_switch_nvm_authenticate_status,
};
/* Ice Lake */
@ -2202,6 +2404,9 @@ static const struct tb_cm_ops icm_icl_ops = {
.handle_event = icm_handle_event,
.approve_xdomain_paths = icm_tr_approve_xdomain_paths,
.disconnect_xdomain_paths = icm_tr_disconnect_xdomain_paths,
.usb4_switch_op = icm_usb4_switch_op,
.usb4_switch_nvm_authenticate_status =
icm_usb4_switch_nvm_authenticate_status,
};
struct tb *icm_probe(struct tb_nhi *nhi)
@ -2300,6 +2505,17 @@ struct tb *icm_probe(struct tb_nhi *nhi)
icm->rtd3_veto = icm_icl_rtd3_veto;
tb->cm_ops = &icm_icl_ops;
break;
case PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI:
icm->is_supported = icm_tgl_is_supported;
icm->get_mode = icm_ar_get_mode;
icm->driver_ready = icm_tr_driver_ready;
icm->device_connected = icm_tr_device_connected;
icm->device_disconnected = icm_tr_device_disconnected;
icm->xdomain_connected = icm_tr_xdomain_connected;
icm->xdomain_disconnected = icm_tr_xdomain_disconnected;
tb->cm_ops = &icm_tr_ops;
break;
}
if (!icm->is_supported || !icm->is_supported(tb)) {
@ -2308,5 +2524,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
return NULL;
}
tb_dbg(tb, "using firmware connection manager\n");
return tb;
}

View File

@ -494,7 +494,7 @@ err_unlock:
static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
bool transmit, unsigned int flags,
u16 sof_mask, u16 eof_mask,
int e2e_tx_hop, u16 sof_mask, u16 eof_mask,
void (*start_poll)(void *),
void *poll_data)
{
@ -517,6 +517,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
ring->is_tx = transmit;
ring->size = size;
ring->flags = flags;
ring->e2e_tx_hop = e2e_tx_hop;
ring->sof_mask = sof_mask;
ring->eof_mask = eof_mask;
ring->head = 0;
@ -561,7 +562,7 @@ err_free_ring:
struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,
unsigned int flags)
{
return tb_ring_alloc(nhi, hop, size, true, flags, 0, 0, NULL, NULL);
return tb_ring_alloc(nhi, hop, size, true, flags, 0, 0, 0, NULL, NULL);
}
EXPORT_SYMBOL_GPL(tb_ring_alloc_tx);
@ -571,6 +572,7 @@ EXPORT_SYMBOL_GPL(tb_ring_alloc_tx);
* @hop: HopID (ring) to allocate. Pass %-1 for automatic allocation.
* @size: Number of entries in the ring
* @flags: Flags for the ring
* @e2e_tx_hop: Transmit HopID when E2E is enabled in @flags
* @sof_mask: Mask of PDF values that start a frame
* @eof_mask: Mask of PDF values that end a frame
* @start_poll: If not %NULL the ring will call this function when an
@ -579,10 +581,11 @@ EXPORT_SYMBOL_GPL(tb_ring_alloc_tx);
* @poll_data: Optional data passed to @start_poll
*/
struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,
unsigned int flags, u16 sof_mask, u16 eof_mask,
unsigned int flags, int e2e_tx_hop,
u16 sof_mask, u16 eof_mask,
void (*start_poll)(void *), void *poll_data)
{
return tb_ring_alloc(nhi, hop, size, false, flags, sof_mask, eof_mask,
return tb_ring_alloc(nhi, hop, size, false, flags, e2e_tx_hop, sof_mask, eof_mask,
start_poll, poll_data);
}
EXPORT_SYMBOL_GPL(tb_ring_alloc_rx);
@ -629,6 +632,31 @@ void tb_ring_start(struct tb_ring *ring)
ring_iowrite32options(ring, sof_eof_mask, 4);
ring_iowrite32options(ring, flags, 0);
}
/*
* Now that the ring valid bit is set we can configure E2E if
* enabled for the ring.
*/
if (ring->flags & RING_FLAG_E2E) {
if (!ring->is_tx) {
u32 hop;
hop = ring->e2e_tx_hop << REG_RX_OPTIONS_E2E_HOP_SHIFT;
hop &= REG_RX_OPTIONS_E2E_HOP_MASK;
flags |= hop;
dev_dbg(&ring->nhi->pdev->dev,
"enabling E2E for %s %d with TX HopID %d\n",
RING_TYPE(ring), ring->hop, ring->e2e_tx_hop);
} else {
dev_dbg(&ring->nhi->pdev->dev, "enabling E2E for %s %d\n",
RING_TYPE(ring), ring->hop);
}
flags |= RING_FLAG_E2E_FLOW_CONTROL;
ring_iowrite32options(ring, flags, 0);
}
ring_interrupt_active(ring, true);
ring->running = true;
err:

View File

@ -55,6 +55,7 @@ extern const struct tb_nhi_ops icl_nhi_ops;
* need for the PCI quirk anymore as we will use ICM also on Apple
* hardware.
*/
#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI 0x1137
#define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_NHI 0x157d
#define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_BRIDGE 0x157e
#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI 0x15bf

View File

@ -406,10 +406,17 @@ static int __tb_path_deactivate_hop(struct tb_port *port, int hop_index,
if (!hop.pending) {
if (clear_fc) {
/* Clear flow control */
hop.ingress_fc = 0;
/*
* Clear flow control. Protocol adapters
* IFC and ISE bits are vendor defined
* in the USB4 spec so we clear them
* only for pre-USB4 adapters.
*/
if (!tb_switch_is_usb4(port->sw)) {
hop.ingress_fc = 0;
hop.ingress_shared_buffer = 0;
}
hop.egress_fc = 0;
hop.ingress_shared_buffer = 0;
hop.egress_shared_buffer = 0;
return tb_port_write(port, &hop, TB_CFG_HOPS,
@ -447,7 +454,7 @@ void tb_path_deactivate(struct tb_path *path)
return;
}
tb_dbg(path->tb,
"deactivating %s path from %llx:%x to %llx:%x\n",
"deactivating %s path from %llx:%u to %llx:%u\n",
path->name, tb_route(path->hops[0].in_port->sw),
path->hops[0].in_port->port,
tb_route(path->hops[path->path_length - 1].out_port->sw),
@ -475,7 +482,7 @@ int tb_path_activate(struct tb_path *path)
}
tb_dbg(path->tb,
"activating %s path from %llx:%x to %llx:%x\n",
"activating %s path from %llx:%u to %llx:%u\n",
path->name, tb_route(path->hops[0].in_port->sw),
path->hops[0].in_port->port,
tb_route(path->hops[path->path_length - 1].out_port->sw),

View File

@ -503,12 +503,13 @@ static void tb_dump_port(struct tb *tb, struct tb_regs_port_header *port)
/**
* tb_port_state() - get connectedness state of a port
* @port: the port to check
*
* The port must have a TB_CAP_PHY (i.e. it should be a real port).
*
* Return: Returns an enum tb_port_state on success or an error code on failure.
*/
static int tb_port_state(struct tb_port *port)
int tb_port_state(struct tb_port *port)
{
struct tb_cap_phy phy;
int res;
@ -932,7 +933,14 @@ int tb_port_get_link_speed(struct tb_port *port)
return speed == LANE_ADP_CS_1_CURRENT_SPEED_GEN3 ? 20 : 10;
}
static int tb_port_get_link_width(struct tb_port *port)
/**
* tb_port_get_link_width() - Get current link width
* @port: Port to check (USB4 or CIO)
*
* Returns link width. Return values can be 1 (Single-Lane), 2 (Dual-Lane)
* or negative errno in case of failure.
*/
int tb_port_get_link_width(struct tb_port *port)
{
u32 val;
int ret;
@ -1001,7 +1009,16 @@ static int tb_port_set_link_width(struct tb_port *port, unsigned int width)
port->cap_phy + LANE_ADP_CS_1, 1);
}
static int tb_port_lane_bonding_enable(struct tb_port *port)
/**
* tb_port_lane_bonding_enable() - Enable bonding on port
* @port: port to enable
*
* Enable bonding by setting the link width of the port and the
* other port in case of dual link port.
*
* Return: %0 in case of success and negative errno in case of error
*/
int tb_port_lane_bonding_enable(struct tb_port *port)
{
int ret;
@ -1031,7 +1048,15 @@ static int tb_port_lane_bonding_enable(struct tb_port *port)
return 0;
}
static void tb_port_lane_bonding_disable(struct tb_port *port)
/**
* tb_port_lane_bonding_disable() - Disable bonding on port
* @port: port to disable
*
* Disable bonding by setting the link width of the port and the
* other port in case of dual link port.
*
*/
void tb_port_lane_bonding_disable(struct tb_port *port)
{
port->dual_link_port->bonded = false;
port->bonded = false;
@ -2135,6 +2160,7 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
fallthrough;
case 3:
case 4:
ret = tb_switch_set_uuid(sw);
if (ret)
return ret;
@ -2150,6 +2176,22 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
break;
}
if (sw->no_nvm_upgrade)
return 0;
if (tb_switch_is_usb4(sw)) {
ret = usb4_switch_nvm_authenticate_status(sw, &status);
if (ret)
return ret;
if (status) {
tb_sw_info(sw, "switch flash authentication failed\n");
nvm_set_auth_status(sw, status);
}
return 0;
}
/* Root switch DMA port requires running firmware */
if (!tb_route(sw) && !tb_switch_is_icm(sw))
return 0;
@ -2158,9 +2200,6 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
if (!sw->dma_port)
return 0;
if (sw->no_nvm_upgrade)
return 0;
/*
* If there is status already set then authentication failed
* when the dma_port_flash_update_auth() returned. Power cycling

View File

@ -1534,5 +1534,7 @@ struct tb *tb_probe(struct tb_nhi *nhi)
INIT_LIST_HEAD(&tcm->dp_resources);
INIT_DELAYED_WORK(&tcm->remove_work, tb_remove_work);
tb_dbg(tb, "using software connection manager\n");
return tb;
}

View File

@ -367,6 +367,14 @@ struct tb_path {
* @disconnect_pcie_paths: Disconnects PCIe paths before NVM update
* @approve_xdomain_paths: Approve (establish) XDomain DMA paths
* @disconnect_xdomain_paths: Disconnect XDomain DMA paths
* @usb4_switch_op: Optional proxy for USB4 router operations. If set
* this will be called whenever USB4 router operation is
* performed. If this returns %-EOPNOTSUPP then the
* native USB4 router operation is called.
* @usb4_switch_nvm_authenticate_status: Optional callback the CM
implementation can use to return the
status of the USB4 NVM_AUTH router
operation.
*/
struct tb_cm_ops {
int (*driver_ready)(struct tb *tb);
@ -393,6 +401,11 @@ struct tb_cm_ops {
int (*disconnect_pcie_paths)(struct tb *tb);
int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd);
int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd);
int (*usb4_switch_op)(struct tb_switch *sw, u16 opcode, u32 *metadata,
u8 *status, const void *tx_data, size_t tx_data_len,
void *rx_data, size_t rx_data_len);
int (*usb4_switch_nvm_authenticate_status)(struct tb_switch *sw,
u32 *status);
};
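
For illustration, a hedged sketch of how a connection manager might hook the new proxy; everything named example_* is hypothetical. A proxy that returns -EOPNOTSUPP simply falls back to the native register-based operation, as the kernel-doc above states.

static int example_cm_usb4_switch_op(struct tb_switch *sw, u16 opcode,
				     u32 *metadata, u8 *status,
				     const void *tx_data, size_t tx_data_len,
				     void *rx_data, size_t rx_data_len)
{
	/* Proxy only NVM authentication in this sketch; everything else
	 * drops back to the native USB4 router operation.
	 */
	if (opcode != USB4_SWITCH_OP_NVM_AUTH)
		return -EOPNOTSUPP;

	/* ...forward the operation to firmware and fill @status here... */
	return 0;
}

static const struct tb_cm_ops example_cm_ops = {
	/* other callbacks omitted */
	.usb4_switch_op = example_cm_usb4_switch_op,
};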
static inline void *tb_priv(struct tb *tb)
@ -864,6 +877,10 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
(p) = tb_next_port_on_path((src), (dst), (p)))
int tb_port_get_link_speed(struct tb_port *port);
int tb_port_get_link_width(struct tb_port *port);
int tb_port_state(struct tb_port *port);
int tb_port_lane_bonding_enable(struct tb_port *port);
void tb_port_lane_bonding_disable(struct tb_port *port);
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
@ -970,6 +987,7 @@ int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
const void *buf, size_t size);
int usb4_switch_nvm_authenticate(struct tb_switch *sw);
int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status);
bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in);
int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
@ -1025,11 +1043,15 @@ void tb_debugfs_init(void);
void tb_debugfs_exit(void);
void tb_switch_debugfs_init(struct tb_switch *sw);
void tb_switch_debugfs_remove(struct tb_switch *sw);
void tb_service_debugfs_init(struct tb_service *svc);
void tb_service_debugfs_remove(struct tb_service *svc);
#else
static inline void tb_debugfs_init(void) { }
static inline void tb_debugfs_exit(void) { }
static inline void tb_switch_debugfs_init(struct tb_switch *sw) { }
static inline void tb_switch_debugfs_remove(struct tb_switch *sw) { }
static inline void tb_service_debugfs_init(struct tb_service *svc) { }
static inline void tb_service_debugfs_remove(struct tb_service *svc) { }
#endif
#ifdef CONFIG_USB4_KUNIT_TEST

View File

@ -106,6 +106,7 @@ enum icm_pkg_code {
ICM_APPROVE_XDOMAIN = 0x10,
ICM_DISCONNECT_XDOMAIN = 0x11,
ICM_PREBOOT_ACL = 0x18,
ICM_USB4_SWITCH_OP = 0x20,
};
enum icm_event_code {
@ -343,6 +344,8 @@ struct icm_tr_pkg_driver_ready_response {
#define ICM_TR_FLAGS_RTD3 BIT(6)
#define ICM_TR_INFO_SLEVEL_MASK GENMASK(2, 0)
#define ICM_TR_INFO_PROTO_VERSION_MASK GENMASK(6, 4)
#define ICM_TR_INFO_PROTO_VERSION_SHIFT 4
#define ICM_TR_INFO_BOOT_ACL_SHIFT 7
#define ICM_TR_INFO_BOOT_ACL_MASK GENMASK(12, 7)
@ -478,6 +481,31 @@ struct icm_icl_event_rtd3_veto {
u32 veto_reason;
};
/* USB4 ICM messages */
struct icm_usb4_switch_op {
struct icm_pkg_header hdr;
u32 route_hi;
u32 route_lo;
u32 metadata;
u16 opcode;
u16 data_len_valid;
u32 data[16];
};
#define ICM_USB4_SWITCH_DATA_LEN_MASK GENMASK(3, 0)
#define ICM_USB4_SWITCH_DATA_VALID BIT(4)
struct icm_usb4_switch_op_response {
struct icm_pkg_header hdr;
u32 route_hi;
u32 route_lo;
u32 metadata;
u16 opcode;
u16 status;
u32 data[16];
};
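
As a rough, hedged sketch (the in-tree icm.c code may differ), a proxy request could be packed using the structures and masks above; splitting the route into hi/lo words and the length/valid encoding are assumptions based on those definitions.

static void example_fill_usb4_op(struct icm_usb4_switch_op *req, u64 route,
				 u16 opcode, u32 metadata,
				 const u32 *tx, size_t tx_dwords)
{
	memset(req, 0, sizeof(*req));
	req->hdr.code = ICM_USB4_SWITCH_OP;
	req->route_hi = upper_32_bits(route);
	req->route_lo = lower_32_bits(route);
	req->opcode = opcode;
	req->metadata = metadata;
	if (tx_dwords) {
		/* Encode the dword count and mark the payload valid. */
		req->data_len_valid = (tx_dwords & ICM_USB4_SWITCH_DATA_LEN_MASK) |
				      ICM_USB4_SWITCH_DATA_VALID;
		memcpy(req->data, tx, tx_dwords * sizeof(u32));
	}
}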
/* XDomain messages */
struct tb_xdomain_header {

View File

@ -211,11 +211,25 @@ struct tb_regs_switch_header {
#define ROUTER_CS_9 0x09
#define ROUTER_CS_25 0x19
#define ROUTER_CS_26 0x1a
#define ROUTER_CS_26_OPCODE_MASK GENMASK(15, 0)
#define ROUTER_CS_26_STATUS_MASK GENMASK(29, 24)
#define ROUTER_CS_26_STATUS_SHIFT 24
#define ROUTER_CS_26_ONS BIT(30)
#define ROUTER_CS_26_OV BIT(31)
/* USB4 router operations opcodes */
enum usb4_switch_op {
USB4_SWITCH_OP_QUERY_DP_RESOURCE = 0x10,
USB4_SWITCH_OP_ALLOC_DP_RESOURCE = 0x11,
USB4_SWITCH_OP_DEALLOC_DP_RESOURCE = 0x12,
USB4_SWITCH_OP_NVM_WRITE = 0x20,
USB4_SWITCH_OP_NVM_AUTH = 0x21,
USB4_SWITCH_OP_NVM_READ = 0x22,
USB4_SWITCH_OP_NVM_SET_OFFSET = 0x23,
USB4_SWITCH_OP_DROM_READ = 0x24,
USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25,
};
/* Router TMU configuration */
#define TMU_RTR_CS_0 0x00
#define TMU_RTR_CS_0_TD BIT(27)

View File

@ -34,9 +34,6 @@
#define TB_DP_AUX_PATH_OUT 1
#define TB_DP_AUX_PATH_IN 2
#define TB_DMA_PATH_OUT 0
#define TB_DMA_PATH_IN 1
static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" };
#define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \
@ -829,10 +826,10 @@ static void tb_dma_init_path(struct tb_path *path, unsigned int isb,
* @nhi: Host controller port
* @dst: Destination null port which the other domain is connected to
* @transmit_ring: NHI ring number used to send packets towards the
* other domain
* other domain. Set to %0 if TX path is not needed.
* @transmit_path: HopID used for transmitting packets
* @receive_ring: NHI ring number used to receive packets from the
* other domain
* other domain. Set to %0 if RX path is not needed.
* @receive_path: HopID used for receiving packets
*
* Return: Returns a tb_tunnel on success or NULL on failure.
@ -843,10 +840,19 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
int receive_path)
{
struct tb_tunnel *tunnel;
size_t npaths = 0, i = 0;
struct tb_path *path;
u32 credits;
tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_DMA);
if (receive_ring)
npaths++;
if (transmit_ring)
npaths++;
if (WARN_ON(!npaths))
return NULL;
tunnel = tb_tunnel_alloc(tb, npaths, TB_TUNNEL_DMA);
if (!tunnel)
return NULL;
@ -856,22 +862,28 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
credits = tb_dma_credits(nhi);
path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0, "DMA RX");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
if (receive_ring) {
path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0,
"DMA RX");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL,
credits);
tunnel->paths[i++] = path;
}
tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL,
credits);
tunnel->paths[TB_DMA_PATH_IN] = path;
path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0, "DMA TX");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
if (transmit_ring) {
path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0,
"DMA TX");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits);
tunnel->paths[i++] = path;
}
tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits);
tunnel->paths[TB_DMA_PATH_OUT] = path;
return tunnel;
}
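
A hedged usage sketch of the reworked allocator: either direction can now be skipped by passing 0 for its ring number. The parameter order (tb, nhi, dst, transmit_ring, transmit_path, receive_ring, receive_path) is assumed from the kernel-doc above.

static struct tb_tunnel *example_rx_only_tunnel(struct tb *tb,
						struct tb_port *nhi_port,
						struct tb_port *dst_port,
						int receive_ring,
						int receive_path)
{
	/* transmit_ring/transmit_path of 0 mean "no TX path is created". */
	return tb_tunnel_alloc_dma(tb, nhi_port, dst_port, 0, 0,
				   receive_ring, receive_path);
}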

View File

@ -16,18 +16,6 @@
#define USB4_DATA_DWORDS 16
#define USB4_DATA_RETRIES 3
enum usb4_switch_op {
USB4_SWITCH_OP_QUERY_DP_RESOURCE = 0x10,
USB4_SWITCH_OP_ALLOC_DP_RESOURCE = 0x11,
USB4_SWITCH_OP_DEALLOC_DP_RESOURCE = 0x12,
USB4_SWITCH_OP_NVM_WRITE = 0x20,
USB4_SWITCH_OP_NVM_AUTH = 0x21,
USB4_SWITCH_OP_NVM_READ = 0x22,
USB4_SWITCH_OP_NVM_SET_OFFSET = 0x23,
USB4_SWITCH_OP_DROM_READ = 0x24,
USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25,
};
enum usb4_sb_target {
USB4_SB_TARGET_ROUTER,
USB4_SB_TARGET_PARTNER,
@ -74,34 +62,6 @@ static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
return -ETIMEDOUT;
}
static int usb4_switch_op_read_data(struct tb_switch *sw, void *data,
size_t dwords)
{
if (dwords > USB4_DATA_DWORDS)
return -EINVAL;
return tb_sw_read(sw, data, TB_CFG_SWITCH, ROUTER_CS_9, dwords);
}
static int usb4_switch_op_write_data(struct tb_switch *sw, const void *data,
size_t dwords)
{
if (dwords > USB4_DATA_DWORDS)
return -EINVAL;
return tb_sw_write(sw, data, TB_CFG_SWITCH, ROUTER_CS_9, dwords);
}
static int usb4_switch_op_read_metadata(struct tb_switch *sw, u32 *metadata)
{
return tb_sw_read(sw, metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1);
}
static int usb4_switch_op_write_metadata(struct tb_switch *sw, u32 metadata)
{
return tb_sw_write(sw, &metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1);
}
static int usb4_do_read_data(u16 address, void *buf, size_t size,
read_block_fn read_block, void *read_block_data)
{
@ -171,11 +131,26 @@ static int usb4_do_write_data(unsigned int address, const void *buf, size_t size
return 0;
}
static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode,
u32 *metadata, u8 *status,
const void *tx_data, size_t tx_dwords,
void *rx_data, size_t rx_dwords)
{
u32 val;
int ret;
if (metadata) {
ret = tb_sw_write(sw, metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1);
if (ret)
return ret;
}
if (tx_dwords) {
ret = tb_sw_write(sw, tx_data, TB_CFG_SWITCH, ROUTER_CS_9,
tx_dwords);
if (ret)
return ret;
}
val = opcode | ROUTER_CS_26_OV;
ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_26, 1);
if (ret)
@ -192,10 +167,73 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
if (val & ROUTER_CS_26_ONS)
return -EOPNOTSUPP;
*status = (val & ROUTER_CS_26_STATUS_MASK) >> ROUTER_CS_26_STATUS_SHIFT;
if (status)
*status = (val & ROUTER_CS_26_STATUS_MASK) >>
ROUTER_CS_26_STATUS_SHIFT;
if (metadata) {
ret = tb_sw_read(sw, metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1);
if (ret)
return ret;
}
if (rx_dwords) {
ret = tb_sw_read(sw, rx_data, TB_CFG_SWITCH, ROUTER_CS_9,
rx_dwords);
if (ret)
return ret;
}
return 0;
}
static int __usb4_switch_op(struct tb_switch *sw, u16 opcode, u32 *metadata,
u8 *status, const void *tx_data, size_t tx_dwords,
void *rx_data, size_t rx_dwords)
{
const struct tb_cm_ops *cm_ops = sw->tb->cm_ops;
if (tx_dwords > USB4_DATA_DWORDS || rx_dwords > USB4_DATA_DWORDS)
return -EINVAL;
/*
* If the connection manager implementation provides USB4 router
* operation proxy callback, call it here instead of running the
* operation natively.
*/
if (cm_ops->usb4_switch_op) {
int ret;
ret = cm_ops->usb4_switch_op(sw, opcode, metadata, status,
tx_data, tx_dwords, rx_data,
rx_dwords);
if (ret != -EOPNOTSUPP)
return ret;
/*
* If the proxy was not supported then run the native
* router operation instead.
*/
}
return usb4_native_switch_op(sw, opcode, metadata, status, tx_data,
tx_dwords, rx_data, rx_dwords);
}
static inline int usb4_switch_op(struct tb_switch *sw, u16 opcode,
u32 *metadata, u8 *status)
{
return __usb4_switch_op(sw, opcode, metadata, status, NULL, 0, NULL, 0);
}
static inline int usb4_switch_op_data(struct tb_switch *sw, u16 opcode,
u32 *metadata, u8 *status,
const void *tx_data, size_t tx_dwords,
void *rx_data, size_t rx_dwords)
{
return __usb4_switch_op(sw, opcode, metadata, status, tx_data,
tx_dwords, rx_data, rx_dwords);
}
static void usb4_switch_check_wakes(struct tb_switch *sw)
{
struct tb_port *port;
@ -348,18 +386,12 @@ static int usb4_switch_drom_read_block(void *data,
metadata |= (dwaddress << USB4_DROM_ADDRESS_SHIFT) &
USB4_DROM_ADDRESS_MASK;
ret = usb4_switch_op_write_metadata(sw, metadata);
ret = usb4_switch_op_data(sw, USB4_SWITCH_OP_DROM_READ, &metadata,
&status, NULL, 0, buf, dwords);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_DROM_READ, &status);
if (ret)
return ret;
if (status)
return -EIO;
return usb4_switch_op_read_data(sw, buf, dwords);
return status ? -EIO : 0;
}
/**
@ -512,17 +544,14 @@ int usb4_switch_nvm_sector_size(struct tb_switch *sw)
u8 status;
int ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_SECTOR_SIZE, &status);
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_SECTOR_SIZE, &metadata,
&status);
if (ret)
return ret;
if (status)
return status == 0x2 ? -EOPNOTSUPP : -EIO;
ret = usb4_switch_op_read_metadata(sw, &metadata);
if (ret)
return ret;
return metadata & USB4_NVM_SECTOR_SIZE_MASK;
}
@ -539,18 +568,12 @@ static int usb4_switch_nvm_read_block(void *data,
metadata |= (dwaddress << USB4_NVM_READ_OFFSET_SHIFT) &
USB4_NVM_READ_OFFSET_MASK;
ret = usb4_switch_op_write_metadata(sw, metadata);
ret = usb4_switch_op_data(sw, USB4_SWITCH_OP_NVM_READ, &metadata,
&status, NULL, 0, buf, dwords);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_READ, &status);
if (ret)
return ret;
if (status)
return -EIO;
return usb4_switch_op_read_data(sw, buf, dwords);
return status ? -EIO : 0;
}
/**
@ -581,11 +604,8 @@ static int usb4_switch_nvm_set_offset(struct tb_switch *sw,
metadata = (dwaddress << USB4_NVM_SET_OFFSET_SHIFT) &
USB4_NVM_SET_OFFSET_MASK;
ret = usb4_switch_op_write_metadata(sw, metadata);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_SET_OFFSET, &status);
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_SET_OFFSET, &metadata,
&status);
if (ret)
return ret;
@ -599,11 +619,8 @@ static int usb4_switch_nvm_write_next_block(void *data, const void *buf,
u8 status;
int ret;
ret = usb4_switch_op_write_data(sw, buf, dwords);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_WRITE, &status);
ret = usb4_switch_op_data(sw, USB4_SWITCH_OP_NVM_WRITE, NULL, &status,
buf, dwords, NULL, 0);
if (ret)
return ret;
@ -638,32 +655,78 @@ int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
* @sw: USB4 router
*
* After the new NVM has been written via usb4_switch_nvm_write(), this
* function triggers NVM authentication process. If the authentication
* is successful the router is power cycled and the new NVM starts
* function triggers the NVM authentication process. The router gets power
* cycled and, if the authentication is successful, the new NVM starts
* running. In case of failure returns negative errno.
*
* The caller should call usb4_switch_nvm_authenticate_status() to read
* the status of the authentication after power cycle. It should be the
* first router operation to avoid the status being lost.
*/
int usb4_switch_nvm_authenticate(struct tb_switch *sw)
{
u8 status = 0;
int ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_AUTH, &status);
ret = usb4_switch_op(sw, USB4_SWITCH_OP_NVM_AUTH, NULL, NULL);
switch (ret) {
/*
* The router is power cycled once NVM_AUTH is started so it is
* expected to get any of the following errors back.
*/
case -EACCES:
case -ENOTCONN:
case -ETIMEDOUT:
return 0;
default:
return ret;
}
}
/**
* usb4_switch_nvm_authenticate_status() - Read status of last NVM authenticate
* @sw: USB4 router
* @status: Status code of the operation
*
* The function checks if there is status available from the last NVM
* authenticate router operation. If there is status then %0 is returned
* and the status code is placed in @status. Returns negative errno in case
* of failure.
*
* Must be called before any other router operation.
*/
int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status)
{
const struct tb_cm_ops *cm_ops = sw->tb->cm_ops;
u16 opcode;
u32 val;
int ret;
if (cm_ops->usb4_switch_nvm_authenticate_status) {
ret = cm_ops->usb4_switch_nvm_authenticate_status(sw, status);
if (ret != -EOPNOTSUPP)
return ret;
}
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_26, 1);
if (ret)
return ret;
switch (status) {
case 0x0:
tb_sw_dbg(sw, "NVM authentication successful\n");
return 0;
case 0x1:
return -EINVAL;
case 0x2:
return -EAGAIN;
case 0x3:
return -EOPNOTSUPP;
default:
return -EIO;
/* Check that the opcode is correct */
opcode = val & ROUTER_CS_26_OPCODE_MASK;
if (opcode == USB4_SWITCH_OP_NVM_AUTH) {
if (val & ROUTER_CS_26_OV)
return -EBUSY;
if (val & ROUTER_CS_26_ONS)
return -EOPNOTSUPP;
*status = (val & ROUTER_CS_26_STATUS_MASK) >>
ROUTER_CS_26_STATUS_SHIFT;
} else {
*status = 0;
}
return 0;
}
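
Conceptually the two calls bracket the router power cycle; in this series the status read actually runs from tb_switch_add_dma_port() once the router re-enumerates. The helper below is only a hedged sketch of that contract, with error handling trimmed.

static int example_nvm_update_finish(struct tb_switch *sw)
{
	u32 status;
	int ret;

	/* Trigger authentication; the router power cycles after this. */
	ret = usb4_switch_nvm_authenticate(sw);
	if (ret)
		return ret;

	/* Once the router is back, read the result before issuing any
	 * other router operation so the status is not lost.
	 */
	ret = usb4_switch_nvm_authenticate_status(sw, &status);
	if (ret)
		return ret;

	return status ? -EIO : 0;
}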
/**
@ -677,14 +740,12 @@ int usb4_switch_nvm_authenticate(struct tb_switch *sw)
*/
bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
u32 metadata = in->port;
u8 status;
int ret;
ret = usb4_switch_op_write_metadata(sw, in->port);
if (ret)
return false;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_QUERY_DP_RESOURCE, &status);
ret = usb4_switch_op(sw, USB4_SWITCH_OP_QUERY_DP_RESOURCE, &metadata,
&status);
/*
* If DP resource allocation is not supported assume it is
* always available.
@ -709,14 +770,12 @@ bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
*/
int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
u32 metadata = in->port;
u8 status;
int ret;
ret = usb4_switch_op_write_metadata(sw, in->port);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_ALLOC_DP_RESOURCE, &status);
ret = usb4_switch_op(sw, USB4_SWITCH_OP_ALLOC_DP_RESOURCE, &metadata,
&status);
if (ret == -EOPNOTSUPP)
return 0;
else if (ret)
@ -734,14 +793,12 @@ int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
*/
int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
u32 metadata = in->port;
u8 status;
int ret;
ret = usb4_switch_op_write_metadata(sw, in->port);
if (ret)
return ret;
ret = usb4_switch_op(sw, USB4_SWITCH_OP_DEALLOC_DP_RESOURCE, &status);
ret = usb4_switch_op(sw, USB4_SWITCH_OP_DEALLOC_DP_RESOURCE, &metadata,
&status);
if (ret == -EOPNOTSUPP)
return 0;
else if (ret)

View File

@ -8,6 +8,7 @@
*/
#include <linux/device.h>
#include <linux/delay.h>
#include <linux/kmod.h>
#include <linux/module.h>
#include <linux/pm_runtime.h>
@ -21,6 +22,7 @@
#define XDOMAIN_UUID_RETRIES 10
#define XDOMAIN_PROPERTIES_RETRIES 60
#define XDOMAIN_PROPERTIES_CHANGED_RETRIES 10
#define XDOMAIN_BONDING_WAIT 100 /* ms */
struct xdomain_request_work {
struct work_struct work;
@ -587,8 +589,6 @@ static void tb_xdp_handle_request(struct work_struct *work)
break;
case PROPERTIES_CHANGED_REQUEST: {
const struct tb_xdp_properties_changed *xchg =
(const struct tb_xdp_properties_changed *)pkg;
struct tb_xdomain *xd;
ret = tb_xdp_properties_changed_response(ctl, route, sequence);
@ -598,10 +598,12 @@ static void tb_xdp_handle_request(struct work_struct *work)
* the xdomain related to this connection as well in
* case there is a change in services it offers.
*/
xd = tb_xdomain_find_by_uuid_locked(tb, &xchg->src_uuid);
xd = tb_xdomain_find_by_route_locked(tb, route);
if (xd) {
queue_delayed_work(tb->wq, &xd->get_properties_work,
msecs_to_jiffies(50));
if (device_is_registered(&xd->dev)) {
queue_delayed_work(tb->wq, &xd->get_properties_work,
msecs_to_jiffies(50));
}
tb_xdomain_put(xd);
}
@ -777,6 +779,7 @@ static void tb_service_release(struct device *dev)
struct tb_service *svc = container_of(dev, struct tb_service, dev);
struct tb_xdomain *xd = tb_service_parent(svc);
tb_service_debugfs_remove(svc);
ida_simple_remove(&xd->service_ids, svc->id);
kfree(svc->key);
kfree(svc);
@ -891,6 +894,8 @@ static void enumerate_services(struct tb_xdomain *xd)
svc->dev.parent = &xd->dev;
dev_set_name(&svc->dev, "%s.%d", dev_name(&xd->dev), svc->id);
tb_service_debugfs_init(svc);
if (device_register(&svc->dev)) {
put_device(&svc->dev);
break;
@ -943,6 +948,43 @@ static void tb_xdomain_restore_paths(struct tb_xdomain *xd)
}
}
static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
{
return tb_to_switch(xd->dev.parent);
}
static int tb_xdomain_update_link_attributes(struct tb_xdomain *xd)
{
bool change = false;
struct tb_port *port;
int ret;
port = tb_port_at(xd->route, tb_xdomain_parent(xd));
ret = tb_port_get_link_speed(port);
if (ret < 0)
return ret;
if (xd->link_speed != ret)
change = true;
xd->link_speed = ret;
ret = tb_port_get_link_width(port);
if (ret < 0)
return ret;
if (xd->link_width != ret)
change = true;
xd->link_width = ret;
if (change)
kobject_uevent(&xd->dev.kobj, KOBJ_CHANGE);
return 0;
}
static void tb_xdomain_get_uuid(struct work_struct *work)
{
struct tb_xdomain *xd = container_of(work, typeof(*xd),
@ -962,10 +1004,8 @@ static void tb_xdomain_get_uuid(struct work_struct *work)
return;
}
if (uuid_equal(&uuid, xd->local_uuid)) {
if (uuid_equal(&uuid, xd->local_uuid))
dev_dbg(&xd->dev, "intra-domain loop detected\n");
return;
}
/*
* If the UUID is different, there is another domain connected
@ -1056,6 +1096,8 @@ static void tb_xdomain_get_properties(struct work_struct *work)
xd->properties = dir;
xd->property_block_gen = gen;
tb_xdomain_update_link_attributes(xd);
tb_xdomain_restore_paths(xd);
mutex_unlock(&xd->lock);
@ -1162,9 +1204,35 @@ static ssize_t unique_id_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RO(unique_id);
static ssize_t speed_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev);
return sprintf(buf, "%u.0 Gb/s\n", xd->link_speed);
}
static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL);
static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL);
static ssize_t lanes_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev);
return sprintf(buf, "%u\n", xd->link_width);
}
static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL);
static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL);
static struct attribute *xdomain_attrs[] = {
&dev_attr_device.attr,
&dev_attr_device_name.attr,
&dev_attr_rx_lanes.attr,
&dev_attr_rx_speed.attr,
&dev_attr_tx_lanes.attr,
&dev_attr_tx_speed.attr,
&dev_attr_unique_id.attr,
&dev_attr_vendor.attr,
&dev_attr_vendor_name.attr,
@ -1381,6 +1449,70 @@ void tb_xdomain_remove(struct tb_xdomain *xd)
device_unregister(&xd->dev);
}
/**
* tb_xdomain_lane_bonding_enable() - Enable lane bonding on XDomain
* @xd: XDomain connection
*
* Lane bonding is disabled by default for XDomains. This function tries
* to enable bonding by first enabling the port and waiting for the CL0
* state.
*
* Return: %0 in case of success and negative errno in case of error.
*/
int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
{
struct tb_port *port;
int ret;
port = tb_port_at(xd->route, tb_xdomain_parent(xd));
if (!port->dual_link_port)
return -ENODEV;
ret = tb_port_enable(port->dual_link_port);
if (ret)
return ret;
ret = tb_wait_for_port(port->dual_link_port, true);
if (ret < 0)
return ret;
if (!ret)
return -ENOTCONN;
ret = tb_port_lane_bonding_enable(port);
if (ret) {
tb_port_warn(port, "failed to enable lane bonding\n");
return ret;
}
tb_xdomain_update_link_attributes(xd);
dev_dbg(&xd->dev, "lane bonding enabled\n");
return 0;
}
EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_enable);
/**
* tb_xdomain_lane_bonding_disable() - Disable lane bonding
* @xd: XDomain connection
*
* Lane bonding is disabled by default for XDomains. If bonding has been
* enabled, this function can be used to disable it.
*/
void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
{
struct tb_port *port;
port = tb_port_at(xd->route, tb_xdomain_parent(xd));
if (port->dual_link_port) {
tb_port_lane_bonding_disable(port);
tb_port_disable(port->dual_link_port);
tb_xdomain_update_link_attributes(xd);
dev_dbg(&xd->dev, "lane bonding disabled\n");
}
}
EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_disable);
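
A hypothetical XDomain service driver might use the new pair as below; bonding is best effort since the link keeps working on a single lane if it fails.

static void example_service_start(struct tb_xdomain *xd)
{
	/* Best effort: fall back to a single lane on failure. */
	if (tb_xdomain_lane_bonding_enable(xd))
		dev_info(&xd->dev, "running on a single lane\n");
}

static void example_service_stop(struct tb_xdomain *xd)
{
	tb_xdomain_lane_bonding_disable(xd);
}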
/**
* tb_xdomain_enable_paths() - Enable DMA paths for XDomain connection
* @xd: XDomain connection

View File

@ -30,7 +30,6 @@ obj-$(CONFIG_USB_ISP1362_HCD) += host/
obj-$(CONFIG_USB_U132_HCD) += host/
obj-$(CONFIG_USB_R8A66597_HCD) += host/
obj-$(CONFIG_USB_HWA_HCD) += host/
obj-$(CONFIG_USB_IMX21_HCD) += host/
obj-$(CONFIG_USB_FSL_USB2) += host/
obj-$(CONFIG_USB_FOTG210_HCD) += host/
obj-$(CONFIG_USB_MAX3421_HCD) += host/

View File

@ -810,9 +810,6 @@ static int cxacru_atm_start(struct usbatm_data *usbatm_instance,
mutex_unlock(&instance->poll_state_serialize);
mutex_unlock(&instance->adsl_state_serialize);
printk(KERN_INFO "%s%d: %s %pM\n", atm_dev->type, atm_dev->number,
usbatm_instance->description, atm_dev->esi);
if (start_polling)
cxacru_poll_status(&instance->poll_work.work);
return 0;
@ -852,15 +849,15 @@ static void cxacru_poll_status(struct work_struct *work)
switch (instance->adsl_status) {
case 0:
atm_printk(KERN_INFO, usbatm, "ADSL state: running\n");
atm_info(usbatm, "ADSL state: running\n");
break;
case 1:
atm_printk(KERN_INFO, usbatm, "ADSL state: stopped\n");
atm_info(usbatm, "ADSL state: stopped\n");
break;
default:
atm_printk(KERN_INFO, usbatm, "Unknown adsl status %02x\n", instance->adsl_status);
atm_info(usbatm, "Unknown adsl status %02x\n", instance->adsl_status);
break;
}
}

View File

@ -249,7 +249,7 @@ static void usbatm_complete(struct urb *urb)
/* vdbg("%s: urb 0x%p, status %d, actual_length %d",
__func__, urb, status, urb->actual_length); */
/* usually in_interrupt(), but not always */
/* Can be invoked from task context, protect against interrupts */
spin_lock_irqsave(&channel->lock, flags);
/* must add to the back when receiving; doesn't matter when sending */
@ -1278,7 +1278,7 @@ EXPORT_SYMBOL_GPL(usbatm_usb_disconnect);
static int __init usbatm_usb_init(void)
{
if (sizeof(struct usbatm_control) > sizeof_field(struct sk_buff, cb)) {
printk(KERN_ERR "%s unusable with this kernel!\n", usbatm_driver_name);
pr_err("%s unusable with this kernel!\n", usbatm_driver_name);
return -EIO;
}

View File

@ -179,7 +179,7 @@ static int __init xusbatm_init(void)
num_vendor != num_product ||
num_vendor != num_rx_endpoint ||
num_vendor != num_tx_endpoint) {
printk(KERN_WARNING "xusbatm: malformed module parameters\n");
pr_warn("xusbatm: malformed module parameters\n");
return -EINVAL;
}

View File

@ -151,6 +151,7 @@ static int cdns_imx_platform_suspend(struct device *dev,
bool suspend, bool wakeup);
static struct cdns3_platform_data cdns_imx_pdata = {
.platform_suspend = cdns_imx_platform_suspend,
.quirks = CDNS3_DEFAULT_PM_RUNTIME_ALLOW,
};
static const struct of_dev_auxdata cdns_imx_auxdata[] = {
@ -206,7 +207,6 @@ static int cdns_imx_probe(struct platform_device *pdev)
device_set_wakeup_capable(dev, true);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
pm_runtime_forbid(dev);
return ret;
err:

View File

@ -465,11 +465,8 @@ static int cdns3_probe(struct platform_device *pdev)
cdns->xhci_res[1] = *res;
cdns->dev_irq = platform_get_irq_byname(pdev, "peripheral");
if (cdns->dev_irq == -EPROBE_DEFER)
return cdns->dev_irq;
if (cdns->dev_irq < 0)
dev_err(dev, "couldn't get peripheral irq\n");
return cdns->dev_irq;
regs = devm_platform_ioremap_resource_byname(pdev, "dev");
if (IS_ERR(regs))
@ -477,14 +474,9 @@ static int cdns3_probe(struct platform_device *pdev)
cdns->dev_regs = regs;
cdns->otg_irq = platform_get_irq_byname(pdev, "otg");
if (cdns->otg_irq == -EPROBE_DEFER)
if (cdns->otg_irq < 0)
return cdns->otg_irq;
if (cdns->otg_irq < 0) {
dev_err(dev, "couldn't get otg irq\n");
return cdns->otg_irq;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "otg");
if (!res) {
dev_err(dev, "couldn't get otg resource\n");
@ -569,7 +561,8 @@ static int cdns3_probe(struct platform_device *pdev)
device_set_wakeup_capable(dev, true);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
pm_runtime_forbid(dev);
if (!(cdns->pdata && (cdns->pdata->quirks & CDNS3_DEFAULT_PM_RUNTIME_ALLOW)))
pm_runtime_forbid(dev);
/*
* The controller needs less time between bus and controller suspend,

View File

@ -42,6 +42,8 @@ struct cdns3_role_driver {
struct cdns3_platform_data {
int (*platform_suspend)(struct device *dev,
bool suspend, bool wakeup);
unsigned long quirks;
#define CDNS3_DEFAULT_PM_RUNTIME_ALLOW BIT(0)
};
/**
@ -73,6 +75,7 @@ struct cdns3_platform_data {
* @wakeup_pending: wakeup interrupt pending
* @pdata: platform data from glue layer
* @lock: spinlock structure
* @xhci_plat_data: xhci private data structure pointer
*/
struct cdns3 {
struct device *dev;
@ -106,6 +109,7 @@ struct cdns3 {
bool wakeup_pending;
struct cdns3_platform_data *pdata;
spinlock_t lock;
struct xhci_plat_priv *xhci_plat_data;
};
int cdns3_hw_role_switch(struct cdns3 *cdns);

View File

@ -13,7 +13,6 @@
#ifdef CONFIG_USB_CDNS3_GADGET
int cdns3_gadget_init(struct cdns3 *cdns);
void cdns3_gadget_exit(struct cdns3 *cdns);
#else
static inline int cdns3_gadget_init(struct cdns3 *cdns)
@ -21,8 +20,6 @@ static inline int cdns3_gadget_init(struct cdns3 *cdns)
return -ENXIO;
}
static inline void cdns3_gadget_exit(struct cdns3 *cdns) { }
#endif
#endif /* __LINUX_CDNS3_GADGET_EXPORT */

View File

@ -3084,7 +3084,7 @@ static void cdns3_gadget_release(struct device *dev)
kfree(priv_dev);
}
void cdns3_gadget_exit(struct cdns3 *cdns)
static void cdns3_gadget_exit(struct cdns3 *cdns)
{
struct cdns3_device *priv_dev;

View File

@ -9,9 +9,11 @@
#ifndef __LINUX_CDNS3_HOST_EXPORT
#define __LINUX_CDNS3_HOST_EXPORT
struct usb_hcd;
#ifdef CONFIG_USB_CDNS3_HOST
int cdns3_host_init(struct cdns3 *cdns);
int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd);
#else
@ -21,6 +23,10 @@ static inline int cdns3_host_init(struct cdns3 *cdns)
}
static inline void cdns3_host_exit(struct cdns3 *cdns) { }
static inline int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd)
{
return 0;
}
#endif /* CONFIG_USB_CDNS3_HOST */

View File

@ -14,6 +14,19 @@
#include "drd.h"
#include "host-export.h"
#include <linux/usb/hcd.h>
#include "../host/xhci.h"
#include "../host/xhci-plat.h"
#define XECP_PORT_CAP_REG 0x8000
#define XECP_AUX_CTRL_REG1 0x8120
#define CFG_RXDET_P3_EN BIT(15)
#define LPM_2_STB_SWITCH_EN BIT(25)
static const struct xhci_plat_priv xhci_plat_cdns3_xhci = {
.quirks = XHCI_SKIP_PHY_INIT | XHCI_AVOID_BEI,
.suspend_quirk = xhci_cdns3_suspend_quirk,
};
static int __cdns3_host_init(struct cdns3 *cdns)
{
@ -39,10 +52,25 @@ static int __cdns3_host_init(struct cdns3 *cdns)
goto err1;
}
cdns->xhci_plat_data = kmemdup(&xhci_plat_cdns3_xhci,
sizeof(struct xhci_plat_priv), GFP_KERNEL);
if (!cdns->xhci_plat_data) {
ret = -ENOMEM;
goto err1;
}
if (cdns->pdata && (cdns->pdata->quirks & CDNS3_DEFAULT_PM_RUNTIME_ALLOW))
cdns->xhci_plat_data->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
ret = platform_device_add_data(xhci, cdns->xhci_plat_data,
sizeof(struct xhci_plat_priv));
if (ret)
goto free_memory;
ret = platform_device_add(xhci);
if (ret) {
dev_err(cdns->dev, "failed to register xHCI device\n");
goto err1;
goto free_memory;
}
/* Glue needs to access xHCI region register for Power management */
@ -51,13 +79,43 @@ static int __cdns3_host_init(struct cdns3 *cdns)
cdns->xhci_regs = hcd->regs;
return 0;
free_memory:
kfree(cdns->xhci_plat_data);
err1:
platform_device_put(xhci);
return ret;
}
int xhci_cdns3_suspend_quirk(struct usb_hcd *hcd)
{
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
u32 value;
if (pm_runtime_status_suspended(hcd->self.controller))
return 0;
/* set usbcmd.EU3S */
value = readl(&xhci->op_regs->command);
value |= CMD_PM_INDEX;
writel(value, &xhci->op_regs->command);
if (hcd->regs) {
value = readl(hcd->regs + XECP_AUX_CTRL_REG1);
value |= CFG_RXDET_P3_EN;
writel(value, hcd->regs + XECP_AUX_CTRL_REG1);
value = readl(hcd->regs + XECP_PORT_CAP_REG);
value |= LPM_2_STB_SWITCH_EN;
writel(value, hcd->regs + XECP_PORT_CAP_REG);
}
return 0;
}
static void cdns3_host_exit(struct cdns3 *cdns)
{
kfree(cdns->xhci_plat_data);
platform_device_unregister(cdns->host_dev);
cdns->host_dev = NULL;
cdns3_drd_host_off(cdns);

View File

@ -1,8 +1,11 @@
# SPDX-License-Identifier: GPL-2.0
# define_trace.h needs to know how to find our header
CFLAGS_trace.o := -I$(src)
obj-$(CONFIG_USB_CHIPIDEA) += ci_hdrc.o
ci_hdrc-y := core.o otg.o debug.o ulpi.o
ci_hdrc-$(CONFIG_USB_CHIPIDEA_UDC) += udc.o
ci_hdrc-$(CONFIG_USB_CHIPIDEA_UDC) += udc.o trace.o
ci_hdrc-$(CONFIG_USB_CHIPIDEA_HOST) += host.o
ci_hdrc-$(CONFIG_USB_OTG_FSM) += otg_fsm.o

View File

@ -57,7 +57,8 @@ static const struct ci_hdrc_imx_platform_flag imx6sx_usb_data = {
static const struct ci_hdrc_imx_platform_flag imx6ul_usb_data = {
.flags = CI_HDRC_SUPPORTS_RUNTIME_PM |
CI_HDRC_TURN_VBUS_EARLY_ON,
CI_HDRC_TURN_VBUS_EARLY_ON |
CI_HDRC_DISABLE_DEVICE_STREAMING,
};
static const struct ci_hdrc_imx_platform_flag imx7d_usb_data = {
@ -319,16 +320,11 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
.notify_event = ci_hdrc_imx_notify_event,
};
int ret;
const struct of_device_id *of_id;
const struct ci_hdrc_imx_platform_flag *imx_platform_flag;
struct device_node *np = pdev->dev.of_node;
struct device *dev = &pdev->dev;
of_id = of_match_device(ci_hdrc_imx_dt_ids, dev);
if (!of_id)
return -ENODEV;
imx_platform_flag = of_id->data;
imx_platform_flag = of_device_get_match_data(&pdev->dev);
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)

View File

@ -0,0 +1,23 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Chipidea Device Mode Trace Support
*
* Copyright (C) 2020 NXP
*
* Author: Peter Chen <peter.chen@nxp.com>
*/
#define CREATE_TRACE_POINTS
#include "trace.h"
void ci_log(struct ci_hdrc *ci, const char *fmt, ...)
{
struct va_format vaf;
va_list args;
va_start(args, fmt);
vaf.fmt = fmt;
vaf.va = &args;
trace_ci_log(ci, &vaf);
va_end(args);
}
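
A hypothetical call site for the new helper: ci_log() feeds the ci_log tracepoint with a printf-style message, so it can stand in for ad hoc dev_dbg() output during device-mode debugging.

static void example_report_setup(struct ci_hdrc *ci, u8 bRequest)
{
	ci_log(ci, "setup packet received, bRequest 0x%02x", bRequest);
}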

View File

@ -0,0 +1,92 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Trace support header file for device mode
*
* Copyright (C) 2020 NXP
*
* Author: Peter Chen <peter.chen@nxp.com>
*/
#undef TRACE_SYSTEM
#define TRACE_SYSTEM chipidea
#if !defined(__LINUX_CHIPIDEA_TRACE) || defined(TRACE_HEADER_MULTI_READ)
#define __LINUX_CHIPIDEA_TRACE
#include <linux/types.h>
#include <linux/tracepoint.h>
#include <linux/usb/chipidea.h>
#include "ci.h"
#include "udc.h"
#define CHIPIDEA_MSG_MAX 500
void ci_log(struct ci_hdrc *ci, const char *fmt, ...);
TRACE_EVENT(ci_log,
TP_PROTO(struct ci_hdrc *ci, struct va_format *vaf),
TP_ARGS(ci, vaf),
TP_STRUCT__entry(
__string(name, dev_name(ci->dev))
__dynamic_array(char, msg, CHIPIDEA_MSG_MAX)
),
TP_fast_assign(
__assign_str(name, dev_name(ci->dev));
vsnprintf(__get_str(msg), CHIPIDEA_MSG_MAX, vaf->fmt, *vaf->va);
),
TP_printk("%s: %s", __get_str(name), __get_str(msg))
);
DECLARE_EVENT_CLASS(ci_log_trb,
TP_PROTO(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq, struct td_node *td),
TP_ARGS(hwep, hwreq, td),
TP_STRUCT__entry(
__string(name, hwep->name)
__field(struct td_node *, td)
__field(struct usb_request *, req)
__field(dma_addr_t, dma)
__field(s32, td_remaining_size)
__field(u32, next)
__field(u32, token)
__field(u32, type)
),
TP_fast_assign(
__assign_str(name, hwep->name);
__entry->req = &hwreq->req;
__entry->td = td;
__entry->dma = td->dma;
__entry->td_remaining_size = td->td_remaining_size;
__entry->next = le32_to_cpu(td->ptr->next);
__entry->token = le32_to_cpu(td->ptr->token);
__entry->type = usb_endpoint_type(hwep->ep.desc);
),
TP_printk("%s: req: %p, td: %p, td_dma_address: %pad, remaining_size: %d, "
"next: %x, total bytes: %d, status: %lx",
__get_str(name), __entry->req, __entry->td, &__entry->dma,
__entry->td_remaining_size, __entry->next,
(int)((__entry->token & TD_TOTAL_BYTES) >> __ffs(TD_TOTAL_BYTES)),
__entry->token & TD_STATUS
)
);
DEFINE_EVENT(ci_log_trb, ci_prepare_td,
TP_PROTO(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq, struct td_node *td),
TP_ARGS(hwep, hwreq, td)
);
DEFINE_EVENT(ci_log_trb, ci_complete_td,
TP_PROTO(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq, struct td_node *td),
TP_ARGS(hwep, hwreq, td)
);
#endif /* __LINUX_CHIPIDEA_TRACE */
/* this part must be outside header guard */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE trace
#include <trace/define_trace.h>

View File

@ -26,6 +26,7 @@
#include "bits.h"
#include "otg.h"
#include "otg_fsm.h"
#include "trace.h"
/* control endpoint description */
static const struct usb_endpoint_descriptor
@ -569,14 +570,18 @@ static int _hardware_enqueue(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq)
if (ret)
return ret;
firstnode = list_first_entry(&hwreq->tds, struct td_node, td);
lastnode = list_entry(hwreq->tds.prev,
struct td_node, td);
lastnode->ptr->next = cpu_to_le32(TD_TERMINATE);
if (!hwreq->req.no_interrupt)
lastnode->ptr->token |= cpu_to_le32(TD_IOC);
list_for_each_entry_safe(firstnode, lastnode, &hwreq->tds, td)
trace_ci_prepare_td(hwep, hwreq, firstnode);
firstnode = list_first_entry(&hwreq->tds, struct td_node, td);
wmb();
hwreq->req.actual = 0;
@ -671,6 +676,7 @@ static int _hardware_dequeue(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq)
list_for_each_entry_safe(node, tmpnode, &hwreq->tds, td) {
tmptoken = le32_to_cpu(node->ptr->token);
trace_ci_complete_td(hwep, hwreq, node);
if ((TD_STATUS_ACTIVE & tmptoken) != 0) {
int n = hw_ep_bit(hwep->num, hwep->dir);

View File

@ -1134,11 +1134,6 @@ MODULE_DEVICE_TABLE(of, usbmisc_imx_dt_ids);
static int usbmisc_imx_probe(struct platform_device *pdev)
{
struct imx_usbmisc *data;
const struct of_device_id *of_id;
of_id = of_match_device(usbmisc_imx_dt_ids, &pdev->dev);
if (!of_id)
return -ENODEV;
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
@ -1150,7 +1145,7 @@ static int usbmisc_imx_probe(struct platform_device *pdev)
if (IS_ERR(data->base))
return PTR_ERR(data->base);
data->ops = (const struct usbmisc_ops *)of_id->data;
data->ops = of_device_get_match_data(&pdev->dev);
platform_set_drvdata(pdev, data);
return 0;

View File

@ -118,7 +118,7 @@ static struct attribute *ulpi_dev_attrs[] = {
NULL
};
static struct attribute_group ulpi_dev_attr_group = {
static const struct attribute_group ulpi_dev_attr_group = {
.attrs = ulpi_dev_attrs,
};

View File

@ -51,7 +51,8 @@ void __init usb_init_pool_max(void)
/**
* hcd_buffer_create - initialize buffer pools
* @hcd: the bus whose buffer pools are to be initialized
* Context: !in_interrupt()
*
* Context: task context, might sleep
*
* Call this as part of initializing a host controller that uses the dma
* memory allocators. It initializes some pools of dma-coherent memory that
@ -88,7 +89,8 @@ int hcd_buffer_create(struct usb_hcd *hcd)
/**
* hcd_buffer_destroy - deallocate buffer pools
* @hcd: the bus whose buffer pools are to be destroyed
* Context: !in_interrupt()
*
* Context: task context, might sleep
*
* This frees the buffer pools created by hcd_buffer_create().
*/

View File

@ -1076,6 +1076,7 @@ int usb_get_bos_descriptor(struct usb_device *dev)
case USB_PTM_CAP_TYPE:
dev->bos->ptm_cap =
(struct usb_ptm_cap_descriptor *)buffer;
break;
default:
break;
}

View File

@ -153,7 +153,7 @@ static struct attribute *ep_dev_attrs[] = {
&dev_attr_direction.attr,
NULL,
};
static struct attribute_group ep_dev_attr_grp = {
static const struct attribute_group ep_dev_attr_grp = {
.attrs = ep_dev_attrs,
};
static const struct attribute_group *ep_dev_groups[] = {

View File

@ -160,7 +160,8 @@ static void ehci_wait_for_companions(struct pci_dev *pdev, struct usb_hcd *hcd,
* @dev: USB Host Controller being probed
* @id: pci hotplug id connecting controller to HCD framework
* @driver: USB HC driver handle
* Context: !in_interrupt()
*
* Context: task context, might sleep
*
* Allocates basic PCI resources for this USB host controller, and
* then invokes the start() method for the HCD associated with it
@ -304,7 +305,8 @@ EXPORT_SYMBOL_GPL(usb_hcd_pci_probe);
/**
* usb_hcd_pci_remove - shutdown processing for PCI-based HCDs
* @dev: USB Host Controller being removed
* Context: !in_interrupt()
*
* Context: task context, might sleep
*
* Reverses the effect of usb_hcd_pci_probe(), first invoking
* the HCD's stop() method. It is always called from a thread

View File

@ -747,8 +747,7 @@ error:
* driver requests it; otherwise the driver is responsible for
* calling usb_hcd_poll_rh_status() when an event occurs.
*
* Completions are called in_interrupt(), but they may or may not
* be in_irq().
* Completion handler may not sleep. See usb_hcd_giveback_urb() for details.
*/
void usb_hcd_poll_rh_status(struct usb_hcd *hcd)
{
@ -904,7 +903,8 @@ static void usb_bus_init (struct usb_bus *bus)
/**
* usb_register_bus - registers the USB host controller with the usb core
* @bus: pointer to the bus to register
* Context: !in_interrupt()
*
* Context: task context, might sleep.
*
* Assigns a bus number, and links the controller into usbcore data
* structures so that it can be seen by scanning the bus list.
@ -939,7 +939,8 @@ error_find_busnum:
/**
* usb_deregister_bus - deregisters the USB host controller
* @bus: pointer to the bus to deregister
* Context: !in_interrupt()
*
* Context: task context, might sleep.
*
* Recycles the bus number, and unlinks the controller from usbcore data
* structures so that it won't be seen by scanning the bus list.
@ -1646,9 +1647,16 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
/* pass ownership to the completion handler */
urb->status = status;
kcov_remote_start_usb((u64)urb->dev->bus->busnum);
/*
* This function can be called in task context inside another remote
* coverage collection section, but KCOV doesn't support that kind of
* recursion yet. Only collect coverage in softirq context for now.
*/
if (in_serving_softirq())
kcov_remote_start_usb((u64)urb->dev->bus->busnum);
urb->complete(urb);
kcov_remote_stop();
if (in_serving_softirq())
kcov_remote_stop();
usb_anchor_resume_wakeups(anchor);
atomic_dec(&urb->use_count);
@ -1691,7 +1699,11 @@ static void usb_giveback_urb_bh(struct tasklet_struct *t)
* @hcd: host controller returning the URB
* @urb: urb being returned to the USB device driver.
* @status: completion status code for the URB.
* Context: in_interrupt()
*
* Context: atomic. The completion callback is invoked in caller's context.
* For HCDs with HCD_BH flag set, the completion callback is invoked in tasklet
* context (except for URBs submitted to the root hub which always complete in
* caller's context).
*
* This hands the URB from HCD to its USB device driver, using its
* completion function. The HCD has freed all per-urb resources
@ -2268,7 +2280,7 @@ EXPORT_SYMBOL_GPL(usb_hcd_resume_root_hub);
* usb_bus_start_enum - start immediate enumeration (for OTG)
* @bus: the bus (must use hcd framework)
* @port_num: 1-based number of port; usually bus->otg_port
* Context: in_interrupt()
* Context: atomic
*
* Starts enumeration, with an immediate reset followed later by
* hub_wq identifying and possibly configuring the device.
@ -2474,7 +2486,8 @@ EXPORT_SYMBOL_GPL(__usb_create_hcd);
* @bus_name: value to store in hcd->self.bus_name
* @primary_hcd: a pointer to the usb_hcd structure that is sharing the
* PCI device. Only allocate certain resources for the primary HCD
* Context: !in_interrupt()
*
* Context: task context, might sleep.
*
* Allocate a struct usb_hcd, with extra space at the end for the
* HC driver's private data. Initialize the generic members of the
@ -2496,7 +2509,8 @@ EXPORT_SYMBOL_GPL(usb_create_shared_hcd);
* @driver: HC driver that will use this hcd
* @dev: device for this HC, stored in hcd->self.controller
* @bus_name: value to store in hcd->self.bus_name
* Context: !in_interrupt()
*
* Context: task context, might sleep.
*
* Allocate a struct usb_hcd, with extra space at the end for the
* HC driver's private data. Initialize the generic members of the
@ -2830,7 +2844,8 @@ EXPORT_SYMBOL_GPL(usb_add_hcd);
/**
* usb_remove_hcd - shutdown processing for generic HCDs
* @hcd: the usb_hcd structure to remove
* Context: !in_interrupt()
*
* Context: task context, might sleep.
*
* Disconnects the root hub, then reverses the effects of usb_add_hcd(),
* invoking the HCD's stop() method.

View File

@ -2171,7 +2171,8 @@ static void hub_disconnect_children(struct usb_device *udev)
/**
* usb_disconnect - disconnect a device (usbcore-internal)
* @pdev: pointer to device being disconnected
* Context: !in_interrupt ()
*
* Context: task context, might sleep
*
* Something got disconnected. Get rid of it and all of its children.
*

View File

@ -119,7 +119,7 @@ static int usb_internal_control_msg(struct usb_device *usb_dev,
* @timeout: time in msecs to wait for the message to complete before timing
* out (if 0 the wait is forever)
*
* Context: !in_interrupt ()
* Context: task context, might sleep.
*
* This function sends a simple control message to a specified endpoint and
* waits for the message to complete, or timeout.
@ -204,9 +204,6 @@ int usb_control_msg_send(struct usb_device *dev, __u8 endpoint, __u8 request,
int ret;
u8 *data = NULL;
if (usb_pipe_type_check(dev, pipe))
return -EINVAL;
if (size) {
data = kmemdup(driver_data, size, memflags);
if (!data)
@ -219,9 +216,8 @@ int usb_control_msg_send(struct usb_device *dev, __u8 endpoint, __u8 request,
if (ret < 0)
return ret;
if (ret == size)
return 0;
return -EINVAL;
return 0;
}
EXPORT_SYMBOL_GPL(usb_control_msg_send);
@ -273,7 +269,7 @@ int usb_control_msg_recv(struct usb_device *dev, __u8 endpoint, __u8 request,
int ret;
u8 *data;
if (!size || !driver_data || usb_pipe_type_check(dev, pipe))
if (!size || !driver_data)
return -EINVAL;
data = kmalloc(size, memflags);
@ -290,7 +286,7 @@ int usb_control_msg_recv(struct usb_device *dev, __u8 endpoint, __u8 request,
memcpy(driver_data, data, size);
ret = 0;
} else {
ret = -EINVAL;
ret = -EREMOTEIO;
}
exit:
@ -310,7 +306,7 @@ EXPORT_SYMBOL_GPL(usb_control_msg_recv);
* @timeout: time in msecs to wait for the message to complete before
* timing out (if 0 the wait is forever)
*
* Context: !in_interrupt ()
* Context: task context, might sleep.
*
* This function sends a simple interrupt message to a specified endpoint and
* waits for the message to complete, or timeout.
@ -343,7 +339,7 @@ EXPORT_SYMBOL_GPL(usb_interrupt_msg);
* @timeout: time in msecs to wait for the message to complete before
* timing out (if 0 the wait is forever)
*
* Context: !in_interrupt ()
* Context: task context, might sleep.
*
* This function sends a simple bulk message to a specified endpoint
* and waits for the message to complete, or timeout.
@ -610,7 +606,8 @@ EXPORT_SYMBOL_GPL(usb_sg_init);
* usb_sg_wait - synchronously execute scatter/gather request
* @io: request block handle, as initialized with usb_sg_init().
* some fields become accessible when this call returns.
* Context: !in_interrupt ()
*
* Context: task context, might sleep.
*
* This function blocks until the specified I/O operation completes. It
* leverages the grouping of the related I/O requests to get good transfer
@ -764,7 +761,8 @@ EXPORT_SYMBOL_GPL(usb_sg_cancel);
* @index: the number of the descriptor
* @buf: where to put the descriptor
* @size: how big is "buf"?
* Context: !in_interrupt ()
*
* Context: task context, might sleep.
*
* Gets a USB descriptor. Convenience functions exist to simplify
* getting some types of descriptors. Use
@ -812,7 +810,8 @@ EXPORT_SYMBOL_GPL(usb_get_descriptor);
* @index: the number of the descriptor
* @buf: where to put the string
* @size: how big is "buf"?
* Context: !in_interrupt ()
*
* Context: task context, might sleep.
*
* Retrieves a string, encoded using UTF-16LE (Unicode, 16 bits per character,
* in little-endian byte order).
@ -947,7 +946,8 @@ static int usb_get_langid(struct usb_device *dev, unsigned char *tbuf)
* @index: the number of the descriptor
* @buf: where to put the string
* @size: how big is "buf"?
* Context: !in_interrupt ()
*
* Context: task context, might sleep.
*
* This converts the UTF-16LE encoded strings returned by devices, from
* usb_get_string_descriptor(), to null-terminated UTF-8 encoded ones
@ -1036,7 +1036,8 @@ char *usb_cache_string(struct usb_device *udev, int index)
* usb_get_device_descriptor - (re)reads the device descriptor (usbcore)
* @dev: the device whose device descriptor is being updated
* @size: how much of the descriptor to read
* Context: !in_interrupt ()
*
* Context: task context, might sleep.
*
* Updates the copy of the device descriptor stored in the device structure,
* which dedicates space for this purpose.
@ -1071,7 +1072,7 @@ int usb_get_device_descriptor(struct usb_device *dev, unsigned int size)
/*
* usb_set_isoch_delay - informs the device of the packet transmit delay
* @dev: the device whose delay is to be informed
* Context: !in_interrupt()
* Context: task context, might sleep
*
* Since this is an optional request, we don't bother if it fails.
*/
@ -1100,7 +1101,8 @@ int usb_set_isoch_delay(struct usb_device *dev)
* @type: USB_STATUS_TYPE_*; for standard or PTM status types
* @target: zero (for device), else interface or endpoint number
* @data: pointer to two bytes of bitmap data
* Context: !in_interrupt ()
*
* Context: task context, might sleep.
*
* Returns device, interface, or endpoint status. Normally only of
* interest to see if the device is self powered, or has enabled the
@ -1177,7 +1179,8 @@ EXPORT_SYMBOL_GPL(usb_get_status);
* usb_clear_halt - tells device to clear endpoint halt/stall condition
* @dev: device whose endpoint is halted
* @pipe: endpoint "pipe" being cleared
* Context: !in_interrupt ()
*
* Context: task context, might sleep.
*
* This is used to clear halt conditions for bulk and interrupt endpoints,
* as reported by URB completion status. Endpoints that are halted are
@ -1481,7 +1484,8 @@ void usb_enable_interface(struct usb_device *dev,
* @dev: the device whose interface is being updated
* @interface: the interface being updated
* @alternate: the setting being chosen.
* Context: !in_interrupt ()
*
* Context: task context, might sleep.
*
* This is used to enable data transfers on interfaces that may not
* be enabled by default. Not all devices support such configurability.
@ -1902,7 +1906,8 @@ static void __usb_queue_reset_device(struct work_struct *ws)
* usb_set_configuration - Makes a particular device setting be current
* @dev: the device whose configuration is being updated
* @configuration: the configuration being chosen.
* Context: !in_interrupt(), caller owns the device lock
*
* Context: task context, might sleep. Caller holds device lock.
*
* This is used to enable non-default device modes. Not all devices
* use this kind of configurability; many devices only have one

View File

@ -155,7 +155,7 @@ static struct attribute *port_dev_attrs[] = {
NULL,
};
static struct attribute_group port_dev_attr_grp = {
static const struct attribute_group port_dev_attr_grp = {
.attrs = port_dev_attrs,
};
@ -169,7 +169,7 @@ static struct attribute *port_dev_usb3_attrs[] = {
NULL,
};
static struct attribute_group port_dev_usb3_attr_grp = {
static const struct attribute_group port_dev_usb3_attr_grp = {
.attrs = port_dev_usb3_attrs,
};

View File

@ -342,6 +342,9 @@ static const struct usb_device_id usb_quirk_list[] = {
{ USB_DEVICE(0x06a3, 0x0006), .driver_info =
USB_QUIRK_CONFIG_INTF_STRINGS },
/* Agfa SNAPSCAN 1212U */
{ USB_DEVICE(0x06bd, 0x0001), .driver_info = USB_QUIRK_RESET_RESUME },
/* Guillemot Webcam Hercules Dualpix Exchange (2nd ID) */
{ USB_DEVICE(0x06f8, 0x0804), .driver_info = USB_QUIRK_RESET_RESUME },

View File

@ -641,7 +641,7 @@ static struct attribute *usb2_hardware_lpm_attr[] = {
&dev_attr_usb2_lpm_besl.attr,
NULL,
};
static struct attribute_group usb2_hardware_lpm_attr_group = {
static const struct attribute_group usb2_hardware_lpm_attr_group = {
.name = power_group_name,
.attrs = usb2_hardware_lpm_attr,
};
@ -651,7 +651,7 @@ static struct attribute *usb3_hardware_lpm_attr[] = {
&dev_attr_usb3_hardware_lpm_u2.attr,
NULL,
};
static struct attribute_group usb3_hardware_lpm_attr_group = {
static const struct attribute_group usb3_hardware_lpm_attr_group = {
.name = power_group_name,
.attrs = usb3_hardware_lpm_attr,
};
@ -663,7 +663,7 @@ static struct attribute *power_attrs[] = {
&dev_attr_active_duration.attr,
NULL,
};
static struct attribute_group power_attr_group = {
static const struct attribute_group power_attr_group = {
.name = power_group_name,
.attrs = power_attrs,
};
@ -832,7 +832,7 @@ static struct attribute *dev_attrs[] = {
#endif
NULL,
};
static struct attribute_group dev_attr_grp = {
static const struct attribute_group dev_attr_grp = {
.attrs = dev_attrs,
};
@ -865,7 +865,7 @@ static umode_t dev_string_attrs_are_visible(struct kobject *kobj,
return a->mode;
}
static struct attribute_group dev_string_attr_grp = {
static const struct attribute_group dev_string_attr_grp = {
.attrs = dev_string_attrs,
.is_visible = dev_string_attrs_are_visible,
};
@ -1222,7 +1222,7 @@ static struct attribute *intf_attrs[] = {
&dev_attr_interface_authorized.attr,
NULL,
};
static struct attribute_group intf_attr_grp = {
static const struct attribute_group intf_attr_grp = {
.attrs = intf_attrs,
};
@ -1246,7 +1246,7 @@ static umode_t intf_assoc_attrs_are_visible(struct kobject *kobj,
return a->mode;
}
static struct attribute_group intf_assoc_attr_grp = {
static const struct attribute_group intf_assoc_attr_grp = {
.attrs = intf_assoc_attrs,
.is_visible = intf_assoc_attrs_are_visible,
};

View File

@ -28,7 +28,6 @@
#include <linux/string.h>
#include <linux/bitops.h>
#include <linux/slab.h>
#include <linux/interrupt.h> /* for in_interrupt() */
#include <linux/kmod.h>
#include <linux/init.h>
#include <linux/spinlock.h>
@ -561,7 +560,8 @@ static bool usb_dev_authorized(struct usb_device *dev, struct usb_hcd *hcd)
* @parent: hub to which device is connected; null to allocate a root hub
* @bus: bus used to access the device
* @port1: one-based index of port; ignored for root hubs
* Context: !in_interrupt()
*
* Context: task context, might sleep.
*
* Only hub drivers (including virtual root hub drivers for host
* controllers) should ever call this.

View File

@ -686,7 +686,7 @@ acm_bind(struct usb_configuration *c, struct usb_function *f)
acm_ss_out_desc.bEndpointAddress = acm_fs_out_desc.bEndpointAddress;
status = usb_assign_descriptors(f, acm_fs_function, acm_hs_function,
acm_ss_function, NULL);
acm_ss_function, acm_ss_function);
if (status)
goto fail;
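For context: usb_assign_descriptors() takes one descriptor table per speed, and the hunk above fills the previously NULL SuperSpeed Plus slot by reusing the SuperSpeed table. The prototype below is reproduced from memory and should be treated as an assumption, not as part of this diff:

/* From include/linux/usb/composite.h (paraphrased): */
int usb_assign_descriptors(struct usb_function *f,
			   struct usb_descriptor_header **fs,
			   struct usb_descriptor_header **hs,
			   struct usb_descriptor_header **ss,
			   struct usb_descriptor_header **ssp);

/* For a simple function like ACM the SuperSpeed descriptors are also valid
 * at SuperSpeed Plus, so passing the same table in both slots avoids
 * duplicating them.
 */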

View File

@ -296,11 +296,11 @@ static int __ffs_ep0_queue_wait(struct ffs_data *ffs, char *data, size_t len)
reinit_completion(&ffs->ep0req_completion);
ret = usb_ep_queue(ffs->gadget->ep0, req, GFP_ATOMIC);
if (unlikely(ret < 0))
if (ret < 0)
return ret;
ret = wait_for_completion_interruptible(&ffs->ep0req_completion);
if (unlikely(ret)) {
if (ret) {
usb_ep_dequeue(ffs->gadget->ep0, req);
return -EINTR;
}
@ -337,7 +337,7 @@ static ssize_t ffs_ep0_write(struct file *file, const char __user *buf,
/* Acquire mutex */
ret = ffs_mutex_lock(&ffs->mutex, file->f_flags & O_NONBLOCK);
if (unlikely(ret < 0))
if (ret < 0)
return ret;
/* Check state */
@ -345,7 +345,7 @@ static ssize_t ffs_ep0_write(struct file *file, const char __user *buf,
case FFS_READ_DESCRIPTORS:
case FFS_READ_STRINGS:
/* Copy data */
if (unlikely(len < 16)) {
if (len < 16) {
ret = -EINVAL;
break;
}
@ -360,7 +360,7 @@ static ssize_t ffs_ep0_write(struct file *file, const char __user *buf,
if (ffs->state == FFS_READ_DESCRIPTORS) {
pr_info("read descriptors\n");
ret = __ffs_data_got_descs(ffs, data, len);
if (unlikely(ret < 0))
if (ret < 0)
break;
ffs->state = FFS_READ_STRINGS;
@ -368,11 +368,11 @@ static ssize_t ffs_ep0_write(struct file *file, const char __user *buf,
} else {
pr_info("read strings\n");
ret = __ffs_data_got_strings(ffs, data, len);
if (unlikely(ret < 0))
if (ret < 0)
break;
ret = ffs_epfiles_create(ffs);
if (unlikely(ret)) {
if (ret) {
ffs->state = FFS_CLOSING;
break;
}
@ -381,7 +381,7 @@ static ssize_t ffs_ep0_write(struct file *file, const char __user *buf,
mutex_unlock(&ffs->mutex);
ret = ffs_ready(ffs);
if (unlikely(ret < 0)) {
if (ret < 0) {
ffs->state = FFS_CLOSING;
return ret;
}
@ -495,7 +495,7 @@ static ssize_t __ffs_ep0_read_events(struct ffs_data *ffs, char __user *buf,
spin_unlock_irq(&ffs->ev.waitq.lock);
mutex_unlock(&ffs->mutex);
return unlikely(copy_to_user(buf, events, size)) ? -EFAULT : size;
return copy_to_user(buf, events, size) ? -EFAULT : size;
}
static ssize_t ffs_ep0_read(struct file *file, char __user *buf,
@ -514,7 +514,7 @@ static ssize_t ffs_ep0_read(struct file *file, char __user *buf,
/* Acquire mutex */
ret = ffs_mutex_lock(&ffs->mutex, file->f_flags & O_NONBLOCK);
if (unlikely(ret < 0))
if (ret < 0)
return ret;
/* Check state */
@ -536,7 +536,7 @@ static ssize_t ffs_ep0_read(struct file *file, char __user *buf,
case FFS_NO_SETUP:
n = len / sizeof(struct usb_functionfs_event);
if (unlikely(!n)) {
if (!n) {
ret = -EINVAL;
break;
}
@ -567,9 +567,9 @@ static ssize_t ffs_ep0_read(struct file *file, char __user *buf,
spin_unlock_irq(&ffs->ev.waitq.lock);
if (likely(len)) {
if (len) {
data = kmalloc(len, GFP_KERNEL);
if (unlikely(!data)) {
if (!data) {
ret = -ENOMEM;
goto done_mutex;
}
@ -586,7 +586,7 @@ static ssize_t ffs_ep0_read(struct file *file, char __user *buf,
/* unlocks spinlock */
ret = __ffs_ep0_queue_wait(ffs, data, len);
if (likely(ret > 0) && unlikely(copy_to_user(buf, data, len)))
if ((ret > 0) && (copy_to_user(buf, data, len)))
ret = -EFAULT;
goto done_mutex;
@ -608,7 +608,7 @@ static int ffs_ep0_open(struct inode *inode, struct file *file)
ENTER();
if (unlikely(ffs->state == FFS_CLOSING))
if (ffs->state == FFS_CLOSING)
return -EBUSY;
file->private_data = ffs;
@ -657,7 +657,7 @@ static __poll_t ffs_ep0_poll(struct file *file, poll_table *wait)
poll_wait(file, &ffs->ev.waitq, wait);
ret = ffs_mutex_lock(&ffs->mutex, file->f_flags & O_NONBLOCK);
if (unlikely(ret < 0))
if (ret < 0)
return mask;
switch (ffs->state) {
@ -678,6 +678,8 @@ static __poll_t ffs_ep0_poll(struct file *file, poll_table *wait)
mask |= (EPOLLIN | EPOLLOUT);
break;
}
break;
case FFS_CLOSING:
break;
case FFS_DEACTIVATED:
@ -706,7 +708,7 @@ static const struct file_operations ffs_ep0_operations = {
static void ffs_epfile_io_complete(struct usb_ep *_ep, struct usb_request *req)
{
ENTER();
if (likely(req->context)) {
if (req->context) {
struct ffs_ep *ep = _ep->driver_data;
ep->status = req->status ? req->status : req->actual;
complete(req->context);
@ -716,10 +718,10 @@ static void ffs_epfile_io_complete(struct usb_ep *_ep, struct usb_request *req)
static ssize_t ffs_copy_to_iter(void *data, int data_len, struct iov_iter *iter)
{
ssize_t ret = copy_to_iter(data, data_len, iter);
if (likely(ret == data_len))
if (ret == data_len)
return ret;
if (unlikely(iov_iter_count(iter)))
if (iov_iter_count(iter))
return -EFAULT;
/*
@ -885,7 +887,7 @@ static ssize_t __ffs_epfile_read_buffered(struct ffs_epfile *epfile,
return ret;
}
if (unlikely(iov_iter_count(iter))) {
if (iov_iter_count(iter)) {
ret = -EFAULT;
} else {
buf->length -= ret;
@ -906,10 +908,10 @@ static ssize_t __ffs_epfile_read_data(struct ffs_epfile *epfile,
struct ffs_buffer *buf;
ssize_t ret = copy_to_iter(data, data_len, iter);
if (likely(data_len == ret))
if (data_len == ret)
return ret;
if (unlikely(iov_iter_count(iter)))
if (iov_iter_count(iter))
return -EFAULT;
/* See ffs_copy_to_iter for more context. */
@ -930,7 +932,7 @@ static ssize_t __ffs_epfile_read_data(struct ffs_epfile *epfile,
* in struct ffs_epfile for full read_buffer pointer synchronisation
* story.
*/
if (unlikely(cmpxchg(&epfile->read_buffer, NULL, buf)))
if (cmpxchg(&epfile->read_buffer, NULL, buf))
kfree(buf);
return ret;
@ -968,7 +970,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
/* We will be using request and read_buffer */
ret = ffs_mutex_lock(&epfile->mutex, file->f_flags & O_NONBLOCK);
if (unlikely(ret))
if (ret)
goto error;
/* Allocate & copy */
@ -1013,7 +1015,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
spin_unlock_irq(&epfile->ffs->eps_lock);
data = ffs_alloc_buffer(io_data, data_len);
if (unlikely(!data)) {
if (!data) {
ret = -ENOMEM;
goto error_mutex;
}
@ -1033,7 +1035,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
ret = usb_ep_set_halt(ep->ep);
if (!ret)
ret = -EBADMSG;
} else if (unlikely(data_len == -EINVAL)) {
} else if (data_len == -EINVAL) {
/*
* Sanity Check: even though data_len can't be used
* uninitialized at the time I write this comment, some
@ -1068,12 +1070,12 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
req->complete = ffs_epfile_io_complete;
ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC);
if (unlikely(ret < 0))
if (ret < 0)
goto error_lock;
spin_unlock_irq(&epfile->ffs->eps_lock);
if (unlikely(wait_for_completion_interruptible(&done))) {
if (wait_for_completion_interruptible(&done)) {
/*
* To avoid race condition with ffs_epfile_io_complete,
* dequeue the request first then check
@ -1115,7 +1117,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
req->complete = ffs_epfile_async_io_complete;
ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC);
if (unlikely(ret)) {
if (ret) {
io_data->req = NULL;
usb_ep_free_request(ep->ep, req);
goto error_lock;
@ -1166,7 +1168,7 @@ static int ffs_aio_cancel(struct kiocb *kiocb)
spin_lock_irqsave(&epfile->ffs->eps_lock, flags);
if (likely(io_data && io_data->ep && io_data->req))
if (io_data && io_data->ep && io_data->req)
value = usb_ep_dequeue(io_data->ep, io_data->req);
else
value = -EINVAL;
@ -1185,7 +1187,7 @@ static ssize_t ffs_epfile_write_iter(struct kiocb *kiocb, struct iov_iter *from)
if (!is_sync_kiocb(kiocb)) {
p = kzalloc(sizeof(io_data), GFP_KERNEL);
if (unlikely(!p))
if (!p)
return -ENOMEM;
p->aio = true;
} else {
@ -1222,7 +1224,7 @@ static ssize_t ffs_epfile_read_iter(struct kiocb *kiocb, struct iov_iter *to)
if (!is_sync_kiocb(kiocb)) {
p = kzalloc(sizeof(io_data), GFP_KERNEL);
if (unlikely(!p))
if (!p)
return -ENOMEM;
p->aio = true;
} else {
@ -1328,6 +1330,7 @@ static long ffs_epfile_ioctl(struct file *file, unsigned code,
switch (epfile->ffs->gadget->speed) {
case USB_SPEED_SUPER:
case USB_SPEED_SUPER_PLUS:
desc_idx = 2;
break;
case USB_SPEED_HIGH:
@ -1385,7 +1388,7 @@ ffs_sb_make_inode(struct super_block *sb, void *data,
inode = new_inode(sb);
if (likely(inode)) {
if (inode) {
struct timespec64 ts = current_time(inode);
inode->i_ino = get_next_ino();
@ -1417,11 +1420,11 @@ static struct dentry *ffs_sb_create_file(struct super_block *sb,
ENTER();
dentry = d_alloc_name(sb->s_root, name);
if (unlikely(!dentry))
if (!dentry)
return NULL;
inode = ffs_sb_make_inode(sb, data, fops, NULL, &ffs->file_perms);
if (unlikely(!inode)) {
if (!inode) {
dput(dentry);
return NULL;
}
@ -1468,12 +1471,11 @@ static int ffs_sb_fill(struct super_block *sb, struct fs_context *fc)
&simple_dir_inode_operations,
&data->perms);
sb->s_root = d_make_root(inode);
if (unlikely(!sb->s_root))
if (!sb->s_root)
return -ENOMEM;
/* EP0 file */
if (unlikely(!ffs_sb_create_file(sb, "ep0", ffs,
&ffs_ep0_operations)))
if (!ffs_sb_create_file(sb, "ep0", ffs, &ffs_ep0_operations))
return -ENOMEM;
return 0;
@ -1561,13 +1563,13 @@ static int ffs_fs_get_tree(struct fs_context *fc)
return invalf(fc, "No source specified");
ffs = ffs_data_new(fc->source);
if (unlikely(!ffs))
if (!ffs)
return -ENOMEM;
ffs->file_perms = ctx->perms;
ffs->no_disconnect = ctx->no_disconnect;
ffs->dev_name = kstrdup(fc->source, GFP_KERNEL);
if (unlikely(!ffs->dev_name)) {
if (!ffs->dev_name) {
ffs_data_put(ffs);
return -ENOMEM;
}
@ -1653,7 +1655,7 @@ static int functionfs_init(void)
ENTER();
ret = register_filesystem(&ffs_fs_type);
if (likely(!ret))
if (!ret)
pr_info("file system registered\n");
else
pr_err("failed registering file system (%d)\n", ret);
@ -1698,7 +1700,7 @@ static void ffs_data_put(struct ffs_data *ffs)
{
ENTER();
if (unlikely(refcount_dec_and_test(&ffs->ref))) {
if (refcount_dec_and_test(&ffs->ref)) {
pr_info("%s(): freeing\n", __func__);
ffs_data_clear(ffs);
BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
@ -1740,7 +1742,7 @@ static void ffs_data_closed(struct ffs_data *ffs)
static struct ffs_data *ffs_data_new(const char *dev_name)
{
struct ffs_data *ffs = kzalloc(sizeof *ffs, GFP_KERNEL);
if (unlikely(!ffs))
if (!ffs)
return NULL;
ENTER();
@ -1830,11 +1832,11 @@ static int functionfs_bind(struct ffs_data *ffs, struct usb_composite_dev *cdev)
return -EBADFD;
first_id = usb_string_ids_n(cdev, ffs->strings_count);
if (unlikely(first_id < 0))
if (first_id < 0)
return first_id;
ffs->ep0req = usb_ep_alloc_request(cdev->gadget->ep0, GFP_KERNEL);
if (unlikely(!ffs->ep0req))
if (!ffs->ep0req)
return -ENOMEM;
ffs->ep0req->complete = ffs_ep0_complete;
ffs->ep0req->context = ffs;
@ -1890,7 +1892,7 @@ static int ffs_epfiles_create(struct ffs_data *ffs)
epfile->dentry = ffs_sb_create_file(ffs->sb, epfile->name,
epfile,
&ffs_epfile_operations);
if (unlikely(!epfile->dentry)) {
if (!epfile->dentry) {
ffs_epfiles_destroy(epfiles, i - 1);
return -ENOMEM;
}
@ -1928,7 +1930,7 @@ static void ffs_func_eps_disable(struct ffs_function *func)
spin_lock_irqsave(&func->ffs->eps_lock, flags);
while (count--) {
/* pending requests get nuked */
if (likely(ep->ep))
if (ep->ep)
usb_ep_disable(ep->ep);
++ep;
@ -1962,7 +1964,7 @@ static int ffs_func_eps_enable(struct ffs_function *func)
}
ret = usb_ep_enable(ep->ep);
if (likely(!ret)) {
if (!ret) {
epfile->ep = ep;
epfile->in = usb_endpoint_dir_in(ep->ep->desc);
epfile->isoc = usb_endpoint_xfer_isoc(ep->ep->desc);
@ -2035,12 +2037,12 @@ static int __must_check ffs_do_single_desc(char *data, unsigned len,
#define __entity_check_ENDPOINT(val) ((val) & USB_ENDPOINT_NUMBER_MASK)
#define __entity(type, val) do { \
pr_vdebug("entity " #type "(%02x)\n", (val)); \
if (unlikely(!__entity_check_ ##type(val))) { \
if (!__entity_check_ ##type(val)) { \
pr_vdebug("invalid entity's value\n"); \
return -EINVAL; \
} \
ret = entity(FFS_ ##type, &val, _ds, priv); \
if (unlikely(ret < 0)) { \
if (ret < 0) { \
pr_debug("entity " #type "(%02x); ret = %d\n", \
(val), ret); \
return ret; \
@ -2165,7 +2167,7 @@ static int __must_check ffs_do_descs(unsigned count, char *data, unsigned len,
/* Record "descriptor" entity */
ret = entity(FFS_DESCRIPTOR, (u8 *)num, (void *)data, priv);
if (unlikely(ret < 0)) {
if (ret < 0) {
pr_debug("entity DESCRIPTOR(%02lx); ret = %d\n",
num, ret);
return ret;
@ -2176,7 +2178,7 @@ static int __must_check ffs_do_descs(unsigned count, char *data, unsigned len,
ret = ffs_do_single_desc(data, len, entity, priv,
&current_class);
if (unlikely(ret < 0)) {
if (ret < 0) {
pr_debug("%s returns %d\n", __func__, ret);
return ret;
}
@ -2282,7 +2284,7 @@ static int __must_check ffs_do_single_os_desc(char *data, unsigned len,
/* loop over all ext compat/ext prop descriptors */
while (feature_count--) {
ret = entity(type, h, data, len, priv);
if (unlikely(ret < 0)) {
if (ret < 0) {
pr_debug("bad OS descriptor, type: %d\n", type);
return ret;
}
@ -2322,7 +2324,7 @@ static int __must_check ffs_do_os_descs(unsigned count,
return -EINVAL;
ret = __ffs_do_os_desc_header(&type, desc);
if (unlikely(ret < 0)) {
if (ret < 0) {
pr_debug("entity OS_DESCRIPTOR(%02lx); ret = %d\n",
num, ret);
return ret;
@ -2343,7 +2345,7 @@ static int __must_check ffs_do_os_descs(unsigned count,
*/
ret = ffs_do_single_os_desc(data, len, type,
feature_count, entity, priv, desc);
if (unlikely(ret < 0)) {
if (ret < 0) {
pr_debug("%s returns %d\n", __func__, ret);
return ret;
}
@ -2575,20 +2577,20 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
ENTER();
if (unlikely(len < 16 ||
get_unaligned_le32(data) != FUNCTIONFS_STRINGS_MAGIC ||
get_unaligned_le32(data + 4) != len))
if (len < 16 ||
get_unaligned_le32(data) != FUNCTIONFS_STRINGS_MAGIC ||
get_unaligned_le32(data + 4) != len)
goto error;
str_count = get_unaligned_le32(data + 8);
lang_count = get_unaligned_le32(data + 12);
/* if one is zero the other must be zero */
if (unlikely(!str_count != !lang_count))
if (!str_count != !lang_count)
goto error;
/* Do we have at least as many strings as descriptors need? */
needed_count = ffs->strings_count;
if (unlikely(str_count < needed_count))
if (str_count < needed_count)
goto error;
/*
@ -2612,7 +2614,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
char *vlabuf = kmalloc(vla_group_size(d), GFP_KERNEL);
if (unlikely(!vlabuf)) {
if (!vlabuf) {
kfree(_data);
return -ENOMEM;
}
@ -2639,7 +2641,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
do { /* lang_count > 0 so we can use do-while */
unsigned needed = needed_count;
if (unlikely(len < 3))
if (len < 3)
goto error_free;
t->language = get_unaligned_le16(data);
t->strings = s;
@ -2652,7 +2654,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
do { /* str_count > 0 so we can use do-while */
size_t length = strnlen(data, len);
if (unlikely(length == len))
if (length == len)
goto error_free;
/*
@ -2660,7 +2662,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
* if that's the case we simply ignore the
* rest
*/
if (likely(needed)) {
if (needed) {
/*
* s->id will be set while adding
* function to configuration so for
@ -2682,7 +2684,7 @@ static int __ffs_data_got_strings(struct ffs_data *ffs,
} while (--lang_count);
/* Some garbage left? */
if (unlikely(len))
if (len)
goto error_free;
/* Done! */
@ -2829,7 +2831,7 @@ static int __ffs_func_bind_do_descs(enum ffs_entity_type type, u8 *valuep,
ffs_ep = func->eps + idx;
if (unlikely(ffs_ep->descs[ep_desc_id])) {
if (ffs_ep->descs[ep_desc_id]) {
pr_err("two %sspeed descriptors for EP %d\n",
speed_names[ep_desc_id],
ds->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
@ -2860,12 +2862,12 @@ static int __ffs_func_bind_do_descs(enum ffs_entity_type type, u8 *valuep,
wMaxPacketSize = ds->wMaxPacketSize;
pr_vdebug("autoconfig\n");
ep = usb_ep_autoconfig(func->gadget, ds);
if (unlikely(!ep))
if (!ep)
return -ENOTSUPP;
ep->driver_data = func->eps + idx;
req = usb_ep_alloc_request(ep, GFP_KERNEL);
if (unlikely(!req))
if (!req)
return -ENOMEM;
ffs_ep->ep = ep;
@ -2907,7 +2909,7 @@ static int __ffs_func_bind_do_nums(enum ffs_entity_type type, u8 *valuep,
idx = *valuep;
if (func->interfaces_nums[idx] < 0) {
int id = usb_interface_id(func->conf, &func->function);
if (unlikely(id < 0))
if (id < 0)
return id;
func->interfaces_nums[idx] = id;
}
@ -2928,7 +2930,7 @@ static int __ffs_func_bind_do_nums(enum ffs_entity_type type, u8 *valuep,
return 0;
idx = (*valuep & USB_ENDPOINT_NUMBER_MASK) - 1;
if (unlikely(!func->eps[idx].ep))
if (!func->eps[idx].ep)
return -EINVAL;
{
@ -3111,12 +3113,12 @@ static int _ffs_func_bind(struct usb_configuration *c,
ENTER();
/* Has descriptors only for speeds gadget does not support */
if (unlikely(!(full | high | super)))
if (!(full | high | super))
return -ENOTSUPP;
/* Allocate a single chunk, less management later on */
vlabuf = kzalloc(vla_group_size(d), GFP_KERNEL);
if (unlikely(!vlabuf))
if (!vlabuf)
return -ENOMEM;
ffs->ms_os_descs_ext_prop_avail = vla_ptr(vlabuf, d, ext_prop);
@ -3145,13 +3147,13 @@ static int _ffs_func_bind(struct usb_configuration *c,
* endpoints first, so that later we can rewrite the endpoint
* numbers without worrying that it may be described later on.
*/
if (likely(full)) {
if (full) {
func->function.fs_descriptors = vla_ptr(vlabuf, d, fs_descs);
fs_len = ffs_do_descs(ffs->fs_descs_count,
vla_ptr(vlabuf, d, raw_descs),
d_raw_descs__sz,
__ffs_func_bind_do_descs, func);
if (unlikely(fs_len < 0)) {
if (fs_len < 0) {
ret = fs_len;
goto error;
}
@ -3159,13 +3161,13 @@ static int _ffs_func_bind(struct usb_configuration *c,
fs_len = 0;
}
if (likely(high)) {
if (high) {
func->function.hs_descriptors = vla_ptr(vlabuf, d, hs_descs);
hs_len = ffs_do_descs(ffs->hs_descs_count,
vla_ptr(vlabuf, d, raw_descs) + fs_len,
d_raw_descs__sz - fs_len,
__ffs_func_bind_do_descs, func);
if (unlikely(hs_len < 0)) {
if (hs_len < 0) {
ret = hs_len;
goto error;
}
@ -3173,13 +3175,14 @@ static int _ffs_func_bind(struct usb_configuration *c,
hs_len = 0;
}
if (likely(super)) {
func->function.ss_descriptors = vla_ptr(vlabuf, d, ss_descs);
if (super) {
func->function.ss_descriptors = func->function.ssp_descriptors =
vla_ptr(vlabuf, d, ss_descs);
ss_len = ffs_do_descs(ffs->ss_descs_count,
vla_ptr(vlabuf, d, raw_descs) + fs_len + hs_len,
d_raw_descs__sz - fs_len - hs_len,
__ffs_func_bind_do_descs, func);
if (unlikely(ss_len < 0)) {
if (ss_len < 0) {
ret = ss_len;
goto error;
}
@ -3197,7 +3200,7 @@ static int _ffs_func_bind(struct usb_configuration *c,
(super ? ffs->ss_descs_count : 0),
vla_ptr(vlabuf, d, raw_descs), d_raw_descs__sz,
__ffs_func_bind_do_nums, func);
if (unlikely(ret < 0))
if (ret < 0)
goto error;
func->function.os_desc_table = vla_ptr(vlabuf, d, os_desc_table);
@ -3218,7 +3221,7 @@ static int _ffs_func_bind(struct usb_configuration *c,
d_raw_descs__sz - fs_len - hs_len -
ss_len,
__ffs_func_bind_do_os_desc, func);
if (unlikely(ret < 0))
if (ret < 0)
goto error;
}
func->function.os_desc_n =
@ -3269,7 +3272,7 @@ static int ffs_func_set_alt(struct usb_function *f,
if (alt != (unsigned)-1) {
intf = ffs_func_revmap_intf(func, interface);
if (unlikely(intf < 0))
if (intf < 0)
return intf;
}
@ -3294,7 +3297,7 @@ static int ffs_func_set_alt(struct usb_function *f,
ffs->func = func;
ret = ffs_func_eps_enable(func);
if (likely(ret >= 0))
if (ret >= 0)
ffs_event_add(ffs, FUNCTIONFS_ENABLE);
return ret;
}
@ -3336,13 +3339,13 @@ static int ffs_func_setup(struct usb_function *f,
switch (creq->bRequestType & USB_RECIP_MASK) {
case USB_RECIP_INTERFACE:
ret = ffs_func_revmap_intf(func, le16_to_cpu(creq->wIndex));
if (unlikely(ret < 0))
if (ret < 0)
return ret;
break;
case USB_RECIP_ENDPOINT:
ret = ffs_func_revmap_ep(func, le16_to_cpu(creq->wIndex));
if (unlikely(ret < 0))
if (ret < 0)
return ret;
if (func->ffs->user_flags & FUNCTIONFS_VIRTUAL_ADDR)
ret = func->ffs->eps_addrmap[ret];
@ -3584,6 +3587,7 @@ static void ffs_func_unbind(struct usb_configuration *c,
func->function.fs_descriptors = NULL;
func->function.hs_descriptors = NULL;
func->function.ss_descriptors = NULL;
func->function.ssp_descriptors = NULL;
func->interfaces_nums = NULL;
ffs_event_add(ffs, FUNCTIONFS_UNBIND);
@ -3596,7 +3600,7 @@ static struct usb_function *ffs_alloc(struct usb_function_instance *fi)
ENTER();
func = kzalloc(sizeof(*func), GFP_KERNEL);
if (unlikely(!func))
if (!func)
return ERR_PTR(-ENOMEM);
func->function.name = "Function FS Gadget";
@ -3811,7 +3815,7 @@ done:
static int ffs_mutex_lock(struct mutex *mutex, unsigned nonblock)
{
return nonblock
? likely(mutex_trylock(mutex)) ? 0 : -EAGAIN
? mutex_trylock(mutex) ? 0 : -EAGAIN
: mutex_lock_interruptible(mutex);
}
@ -3819,14 +3823,14 @@ static char *ffs_prepare_buffer(const char __user *buf, size_t len)
{
char *data;
if (unlikely(!len))
if (!len)
return NULL;
data = kmalloc(len, GFP_KERNEL);
if (unlikely(!data))
if (!data)
return ERR_PTR(-ENOMEM);
if (unlikely(copy_from_user(data, buf, len))) {
if (copy_from_user(data, buf, len)) {
kfree(data);
return ERR_PTR(-EFAULT);
}
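Most of the churn in the hunks above is the removal of likely()/unlikely() branch-prediction hints. For reference, a sketch of what those hints are, reproduced from memory (treat the exact definitions as an assumption, not part of this diff):

/* Roughly how <linux/compiler.h> defines them: */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Before: if (unlikely(ret < 0)) return ret;
 * After:  if (ret < 0) return ret;
 * On paths that are not performance critical the hint buys nothing, so the
 * plain form is preferred for readability.
 */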

View File

@ -274,7 +274,7 @@ static void loopback_complete(struct usb_ep *ep, struct usb_request *req)
default:
ERROR(cdev, "%s loop complete --> %d, %d/%d\n", ep->name,
status, req->actual, req->length);
/* FALLTHROUGH */
fallthrough;
/* NOTE: since this driver doesn't maintain an explicit record
* of requests it submitted (just maintains qlen count), we

View File

@ -1048,6 +1048,12 @@ static int f_midi_bind(struct usb_configuration *c, struct usb_function *f)
f->ss_descriptors = usb_copy_descriptors(midi_function);
if (!f->ss_descriptors)
goto fail_f_midi;
if (gadget_is_superspeed_plus(c->cdev->gadget)) {
f->ssp_descriptors = usb_copy_descriptors(midi_function);
if (!f->ssp_descriptors)
goto fail_f_midi;
}
}
kfree(midi_function);
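The helper that guards the new block is sketched below, reproduced from memory from include/linux/usb/gadget.h; treat the exact definition as an assumption rather than part of this diff:

static inline int gadget_is_superspeed_plus(struct usb_gadget *g)
{
	return g->max_speed >= USB_SPEED_SUPER_PLUS;
}

/* Only gadgets whose controller advertises SuperSpeed Plus get the extra
 * ssp_descriptors copy; everyone else skips the allocation.
 */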

View File

@ -87,8 +87,10 @@ static inline struct f_rndis *func_to_rndis(struct usb_function *f)
/* peak (theoretical) bulk transfer rate in bits-per-second */
static unsigned int bitrate(struct usb_gadget *g)
{
if (gadget_is_superspeed(g) && g->speed >= USB_SPEED_SUPER_PLUS)
return 4250000000U;
if (gadget_is_superspeed(g) && g->speed == USB_SPEED_SUPER)
return 13 * 1024 * 8 * 1000 * 8;
return 3750000000U;
else if (gadget_is_dualspeed(g) && g->speed == USB_SPEED_HIGH)
return 13 * 512 * 8 * 1000 * 8;
else

View File

@ -559,6 +559,7 @@ static void source_sink_complete(struct usb_ep *ep, struct usb_request *req)
#if 1
DBG(cdev, "%s complete --> %d, %d/%d\n", ep->name,
status, req->actual, req->length);
break;
#endif
case -EREMOTEIO: /* short read */
break;

View File

@ -897,8 +897,6 @@ EXPORT_SYMBOL_GPL(usb_gadget_unmap_request);
* @ep: the endpoint to be used with the request
* @req: the request being given back
*
* Context: in_interrupt()
*
* This is called by device controller drivers in order to return the
* completed request back to the gadget layer.
*/

View File

@ -553,6 +553,7 @@ static int dummy_enable(struct usb_ep *_ep,
/* we'll fake any legal size */
break;
/* save a return statement */
fallthrough;
default:
goto done;
}
@ -595,6 +596,7 @@ static int dummy_enable(struct usb_ep *_ep,
if (max <= 1023)
break;
/* save a return statement */
fallthrough;
default:
goto done;
}
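These conversions (here and in f_loopback and ehci-hcd above) replace fall-through comments with the fallthrough pseudo-keyword. A sketch of how it is defined, reproduced from memory from include/linux/compiler_attributes.h (treat as an assumption, not part of this diff):

#if __has_attribute(__fallthrough__)
# define fallthrough	__attribute__((__fallthrough__))
#else
# define fallthrough	do {} while (0)	/* fallthrough */
#endif

/* Unlike a comment, the attribute survives preprocessing, so
 * -Wimplicit-fallthrough can distinguish deliberate fall-through from a
 * forgotten break.
 */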
@ -1754,8 +1756,10 @@ static int handle_control_request(struct dummy_hcd *dum_hcd, struct urb *urb,
return ret_val;
}
/* drive both sides of the transfers; looks like irq handlers to
* both drivers except the callbacks aren't in_irq().
/*
* Drive both sides of the transfers; looks like irq handlers to both
* drivers except that the callbacks are invoked from soft interrupt
* context.
*/
static void dummy_timer(struct timer_list *t)
{
@ -2734,7 +2738,7 @@ static int __init init(void)
{
int retval = -ENOMEM;
int i;
struct dummy *dum[MAX_NUM_UDC];
struct dummy *dum[MAX_NUM_UDC] = {};
if (usb_disabled())
return -ENODEV;

View File

@ -304,7 +304,7 @@ static struct pxa_ep *find_pxa_ep(struct pxa_udc *udc,
* update_pxa_ep_matches - update pxa_ep cached values in all udc_usb_ep
* @udc: pxa udc
*
* Context: in_interrupt()
* Context: interrupt handler
*
* Updates all pxa_ep fields in udc_usb_ep structures, if this field was
previously set up (and is not NULL). The update is necessary if a
@ -859,7 +859,7 @@ static int write_packet(struct pxa_ep *ep, struct pxa27x_request *req,
* @ep: pxa physical endpoint
* @req: usb request
*
* Context: callable when in_interrupt()
* Context: interrupt handler
*
* Unload as many packets as possible from the fifo we use for usb OUT
* transfers and put them into the request. Caller should have made sure
@ -997,7 +997,7 @@ static int read_ep0_fifo(struct pxa_ep *ep, struct pxa27x_request *req)
* @ep: control endpoint
* @req: request
*
* Context: callable when in_interrupt()
* Context: interrupt handler
*
* Sends a request (or a part of the request) to the control endpoint (ep0 in).
* If the request doesn't fit, the remaining part will be sent from irq.
@ -1036,8 +1036,8 @@ static int write_ep0_fifo(struct pxa_ep *ep, struct pxa27x_request *req)
* @_req: usb request
* @gfp_flags: flags
*
* Context: normally called when !in_interrupt, but callable when in_interrupt()
* in the special case of ep0 setup :
* Context: thread context or from the interrupt handler in the
* special case of ep0 setup :
* (irq->handle_ep0_ctrl_req->gadget_setup->pxa_ep_queue)
*
* Returns 0 if succeeded, error otherwise
@ -1512,7 +1512,8 @@ static int should_disable_udc(struct pxa_udc *udc)
* pxa_udc_pullup - Offer manual D+ pullup control
* @_gadget: usb gadget using the control
* @is_active: 0 if disconnect, else connect D+ pullup resistor
* Context: !in_interrupt()
*
* Context: task context, might sleep
*
* Returns 0 if OK, -EOPNOTSUPP if udc driver doesn't handle D+ pullup
*/
@ -1560,7 +1561,7 @@ static int pxa_udc_vbus_session(struct usb_gadget *_gadget, int is_active)
* @_gadget: usb gadget
* @mA: current drawn
*
* Context: !in_interrupt()
* Context: task context, might sleep
*
* Called after a configuration was chosen by a USB host, to inform how much
* current can be drawn by the device from VBus line.
@ -1886,7 +1887,7 @@ stall:
* @fifo_irq: 1 if triggered by fifo service type irq
* @opc_irq: 1 if triggered by output packet complete type irq
*
* Context : when in_interrupt() or with ep->lock held
* Context : interrupt handler
*
* Tries to transfer all pending request data into the endpoint and/or
* transfer all pending data in the endpoint into usb requests.
@ -2011,7 +2012,7 @@ static void handle_ep0(struct pxa_udc *udc, int fifo_irq, int opc_irq)
* Tries to transfer all pending request data into the endpoint and/or
* transfer all pending data in the endpoint into usb requests.
*
* Is always called when in_interrupt() and with ep->lock released.
* Is always called from the interrupt handler. ep->lock must not be held.
*/
static void handle_ep(struct pxa_ep *ep)
{

View File

@ -213,13 +213,6 @@ config USB_EHCI_FSL
help
Variation of ARC USB block used in some Freescale chips.
config USB_EHCI_MXC
tristate "Support for Freescale i.MX on-chip EHCI USB controller"
depends on ARCH_MXC || COMPILE_TEST
select USB_EHCI_ROOT_HUB_TT
help
Variation of ARC USB block used in some Freescale chips.
config USB_EHCI_HCD_NPCM7XX
tristate "Support for Nuvoton NPCM7XX on-chip EHCI USB controller"
depends on (USB_EHCI_HCD && ARCH_NPCM7XX) || COMPILE_TEST
@ -741,16 +734,6 @@ config USB_RENESAS_USBHS_HCD
To compile this driver as a module, choose M here: the
module will be called renesas-usbhs.
config USB_IMX21_HCD
tristate "i.MX21 HCD support"
depends on ARM && ARCH_MXC
help
This driver enables support for the on-chip USB host in the
i.MX21 processor.
To compile this driver as a module, choose M here: the
module will be called "imx21-hcd".
config USB_HCD_BCMA
tristate "BCMA usb host driver"
depends on BCMA

View File

@ -40,7 +40,6 @@ obj-$(CONFIG_USB_PCI) += pci-quirks.o
obj-$(CONFIG_USB_EHCI_HCD) += ehci-hcd.o
obj-$(CONFIG_USB_EHCI_PCI) += ehci-pci.o
obj-$(CONFIG_USB_EHCI_HCD_PLATFORM) += ehci-platform.o
obj-$(CONFIG_USB_EHCI_MXC) += ehci-mxc.o
obj-$(CONFIG_USB_EHCI_HCD_NPCM7XX) += ehci-npcm7xx.o
obj-$(CONFIG_USB_EHCI_HCD_OMAP) += ehci-omap.o
obj-$(CONFIG_USB_EHCI_HCD_ORION) += ehci-orion.o
@ -81,7 +80,6 @@ obj-$(CONFIG_USB_SL811_HCD) += sl811-hcd.o
obj-$(CONFIG_USB_SL811_CS) += sl811_cs.o
obj-$(CONFIG_USB_U132_HCD) += u132-hcd.o
obj-$(CONFIG_USB_R8A66597_HCD) += r8a66597-hcd.o
obj-$(CONFIG_USB_IMX21_HCD) += imx21-hcd.o
obj-$(CONFIG_USB_FSL_USB2) += fsl-mph-dr-of.o
obj-$(CONFIG_USB_EHCI_FSL) += fsl-mph-dr-of.o
obj-$(CONFIG_USB_EHCI_FSL) += ehci-fsl.o

View File

@ -39,10 +39,10 @@ static struct hc_driver __read_mostly fsl_ehci_hc_driver;
/*
* fsl_ehci_drv_probe - initialize FSL-based HCDs
* @pdev: USB Host Controller being probed
* Context: !in_interrupt()
*
* Context: task context, might sleep
*
* Allocates basic resources for this USB host controller.
*
*/
static int fsl_ehci_drv_probe(struct platform_device *pdev)
{
@ -684,12 +684,11 @@ static const struct ehci_driver_overrides ehci_fsl_overrides __initconst = {
/**
* fsl_ehci_drv_remove - shutdown processing for FSL-based HCDs
* @pdev: USB Host Controller being removed
* Context: !in_interrupt()
*
* Context: task context, might sleep
*
* Reverses the effect of usb_hcd_fsl_probe().
*
*/
static int fsl_ehci_drv_remove(struct platform_device *pdev)
{
struct fsl_usb2_platform_data *pdata = dev_get_platdata(&pdev->dev);

View File

@ -867,7 +867,7 @@ static int ehci_urb_enqueue (
*/
if (urb->transfer_buffer_length > (16 * 1024))
return -EMSGSIZE;
/* FALLTHROUGH */
fallthrough;
/* case PIPE_BULK: */
default:
if (!qh_urb_transaction (ehci, urb, &qtd_list, mem_flags))

View File

@ -1,213 +0,0 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright (c) 2008 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix
* Copyright (c) 2009 Daniel Mack <daniel@caiaq.de>
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/io.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/usb/otg.h>
#include <linux/usb/ulpi.h>
#include <linux/slab.h>
#include <linux/usb.h>
#include <linux/usb/hcd.h>
#include <linux/platform_data/usb-ehci-mxc.h>
#include "ehci.h"
#define DRIVER_DESC "Freescale On-Chip EHCI Host driver"
static const char hcd_name[] = "ehci-mxc";
#define ULPI_VIEWPORT_OFFSET 0x170
struct ehci_mxc_priv {
struct clk *usbclk, *ahbclk, *phyclk;
};
static struct hc_driver __read_mostly ehci_mxc_hc_driver;
static const struct ehci_driver_overrides ehci_mxc_overrides __initconst = {
.extra_priv_size = sizeof(struct ehci_mxc_priv),
};
static int ehci_mxc_drv_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct mxc_usbh_platform_data *pdata = dev_get_platdata(dev);
struct usb_hcd *hcd;
struct resource *res;
int irq, ret;
struct ehci_mxc_priv *priv;
struct ehci_hcd *ehci;
if (!pdata) {
dev_err(dev, "No platform data given, bailing out.\n");
return -EINVAL;
}
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
hcd = usb_create_hcd(&ehci_mxc_hc_driver, dev, dev_name(dev));
if (!hcd)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
hcd->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(hcd->regs)) {
ret = PTR_ERR(hcd->regs);
goto err_alloc;
}
hcd->rsrc_start = res->start;
hcd->rsrc_len = resource_size(res);
hcd->has_tt = 1;
ehci = hcd_to_ehci(hcd);
priv = (struct ehci_mxc_priv *) ehci->priv;
/* enable clocks */
priv->usbclk = devm_clk_get(dev, "ipg");
if (IS_ERR(priv->usbclk)) {
ret = PTR_ERR(priv->usbclk);
goto err_alloc;
}
clk_prepare_enable(priv->usbclk);
priv->ahbclk = devm_clk_get(dev, "ahb");
if (IS_ERR(priv->ahbclk)) {
ret = PTR_ERR(priv->ahbclk);
goto err_clk_ahb;
}
clk_prepare_enable(priv->ahbclk);
/* "dr" device has its own clock on i.MX51 */
priv->phyclk = devm_clk_get(dev, "phy");
if (IS_ERR(priv->phyclk))
priv->phyclk = NULL;
if (priv->phyclk)
clk_prepare_enable(priv->phyclk);
/* call platform specific init function */
if (pdata->init) {
ret = pdata->init(pdev);
if (ret) {
dev_err(dev, "platform init failed\n");
goto err_init;
}
/* platforms need some time to settle changed IO settings */
mdelay(10);
}
/* EHCI registers start at offset 0x100 */
ehci->caps = hcd->regs + 0x100;
ehci->regs = hcd->regs + 0x100 +
HC_LENGTH(ehci, ehci_readl(ehci, &ehci->caps->hc_capbase));
/* set up the PORTSCx register */
ehci_writel(ehci, pdata->portsc, &ehci->regs->port_status[0]);
/* is this really needed? */
msleep(10);
/* Initialize the transceiver */
if (pdata->otg) {
pdata->otg->io_priv = hcd->regs + ULPI_VIEWPORT_OFFSET;
ret = usb_phy_init(pdata->otg);
if (ret) {
dev_err(dev, "unable to init transceiver, probably missing\n");
ret = -ENODEV;
goto err_add;
}
ret = otg_set_vbus(pdata->otg->otg, 1);
if (ret) {
dev_err(dev, "unable to enable vbus on transceiver\n");
goto err_add;
}
}
platform_set_drvdata(pdev, hcd);
ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
if (ret)
goto err_add;
device_wakeup_enable(hcd->self.controller);
return 0;
err_add:
if (pdata && pdata->exit)
pdata->exit(pdev);
err_init:
if (priv->phyclk)
clk_disable_unprepare(priv->phyclk);
clk_disable_unprepare(priv->ahbclk);
err_clk_ahb:
clk_disable_unprepare(priv->usbclk);
err_alloc:
usb_put_hcd(hcd);
return ret;
}
static int ehci_mxc_drv_remove(struct platform_device *pdev)
{
struct mxc_usbh_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct usb_hcd *hcd = platform_get_drvdata(pdev);
struct ehci_hcd *ehci = hcd_to_ehci(hcd);
struct ehci_mxc_priv *priv = (struct ehci_mxc_priv *) ehci->priv;
usb_remove_hcd(hcd);
if (pdata && pdata->exit)
pdata->exit(pdev);
if (pdata && pdata->otg)
usb_phy_shutdown(pdata->otg);
clk_disable_unprepare(priv->usbclk);
clk_disable_unprepare(priv->ahbclk);
if (priv->phyclk)
clk_disable_unprepare(priv->phyclk);
usb_put_hcd(hcd);
return 0;
}
MODULE_ALIAS("platform:mxc-ehci");
static struct platform_driver ehci_mxc_driver = {
.probe = ehci_mxc_drv_probe,
.remove = ehci_mxc_drv_remove,
.shutdown = usb_hcd_platform_shutdown,
.driver = {
.name = "mxc-ehci",
},
};
static int __init ehci_mxc_init(void)
{
if (usb_disabled())
return -ENODEV;
pr_info("%s: " DRIVER_DESC "\n", hcd_name);
ehci_init_driver(&ehci_mxc_hc_driver, &ehci_mxc_overrides);
return platform_driver_register(&ehci_mxc_driver);
}
module_init(ehci_mxc_init);
static void __exit ehci_mxc_cleanup(void)
{
platform_driver_unregister(&ehci_mxc_driver);
}
module_exit(ehci_mxc_cleanup);
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_AUTHOR("Sascha Hauer");
MODULE_LICENSE("GPL");

View File

@ -220,6 +220,7 @@ static int ehci_hcd_omap_probe(struct platform_device *pdev)
err_pm_runtime:
pm_runtime_put_sync(dev);
pm_runtime_disable(dev);
err_phy:
for (i = 0; i < omap->nports; i++) {

View File

@ -147,12 +147,14 @@ err1:
/**
* usb_hcd_msp_probe - initialize PMC MSP-based HCDs
* Context: !in_interrupt()
* @driver: Pointer to hc driver instance
* @dev: USB controller to probe
*
* Context: task context, might sleep
*
* Allocates basic resources for this USB host controller, and
* then invokes the start() method for the HCD associated with it
* through the hotplug entry's driver_data.
*
*/
int usb_hcd_msp_probe(const struct hc_driver *driver,
struct platform_device *dev)
@ -223,8 +225,9 @@ err1:
/**
* usb_hcd_msp_remove - shutdown processing for PMC MSP-based HCDs
* @dev: USB Host Controller being removed
* Context: !in_interrupt()
* @hcd: USB Host Controller being removed
*
* Context: task context, might sleep
*
* Reverses the effect of usb_hcd_msp_probe(), first invoking
* the HCD's stop() method. It is always called from a thread
@ -233,7 +236,7 @@ err1:
* may be called without controller electrically present
* may be called with controller, bus, and devices active
*/
void usb_hcd_msp_remove(struct usb_hcd *hcd, struct platform_device *dev)
static void usb_hcd_msp_remove(struct usb_hcd *hcd)
{
usb_remove_hcd(hcd);
iounmap(hcd->regs);
@ -306,7 +309,7 @@ static int ehci_hcd_msp_drv_remove(struct platform_device *pdev)
{
struct usb_hcd *hcd = platform_get_drvdata(pdev);
usb_hcd_msp_remove(hcd, pdev);
usb_hcd_msp_remove(hcd);
/* free TWI GPIO USB_HOST_DEV pin */
gpio_free(MSP_PIN_USB0_HOST_DEV);

View File

@ -244,6 +244,12 @@ static void reserve_release_intr_bandwidth(struct ehci_hcd *ehci,
/* FS/LS bus bandwidth */
if (tt_usecs) {
/*
* find_tt() will not return any error here as we have
* already called find_tt() before calling this function
* and checked for any error return. The previous call
* would have created the data structure.
*/
tt = find_tt(qh->ps.udev);
if (sign > 0)
list_add_tail(&qh->ps.ps_list, &tt->ps_list);
@ -1337,6 +1343,12 @@ static void reserve_release_iso_bandwidth(struct ehci_hcd *ehci,
}
}
/*
* find_tt() will not return any error here as we have
* already called find_tt() before calling this function
* and checked for any error return. The previous call
* would have created the data structure.
*/
tt = find_tt(stream->ps.udev);
if (sign > 0)
list_add_tail(&stream->ps.ps_list, &tt->ps_list);

View File

@ -1951,7 +1951,7 @@ static int fotg210_mem_init(struct fotg210_hcd *fotg210, gfp_t flags)
goto fail;
/* Hardware periodic table */
fotg210->periodic = (__le32 *)
fotg210->periodic =
dma_alloc_coherent(fotg210_to_hcd(fotg210)->self.controller,
fotg210->periodic_size * sizeof(__le32),
&fotg210->periodic_dma, 0);
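The cast is dropped because dma_alloc_coherent() already returns void *, which converts implicitly to any object pointer type in C. Prototype reproduced from memory (treat as an assumption):

/* From <linux/dma-mapping.h> (paraphrased): */
void *dma_alloc_coherent(struct device *dev, size_t size,
			 dma_addr_t *dma_handle, gfp_t gfp);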
@ -5276,7 +5276,7 @@ static int fotg210_urb_enqueue(struct usb_hcd *hcd, struct urb *urb,
*/
if (urb->transfer_buffer_length > (16 * 1024))
return -EMSGSIZE;
/* FALLTHROUGH */
fallthrough;
/* case PIPE_BULK: */
default:
if (!qh_urb_transaction(fotg210, urb, &qtd_list, mem_flags))

View File

@ -1,439 +0,0 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright (c) 2009 by Martin Fuzzey
*/
/* this file is part of imx21-hcd.c */
#ifdef CONFIG_DYNAMIC_DEBUG
#define DEBUG
#endif
#ifndef DEBUG
static inline void create_debug_files(struct imx21 *imx21) { }
static inline void remove_debug_files(struct imx21 *imx21) { }
static inline void debug_urb_submitted(struct imx21 *imx21, struct urb *urb) {}
static inline void debug_urb_completed(struct imx21 *imx21, struct urb *urb,
int status) {}
static inline void debug_urb_unlinked(struct imx21 *imx21, struct urb *urb) {}
static inline void debug_urb_queued_for_etd(struct imx21 *imx21,
struct urb *urb) {}
static inline void debug_urb_queued_for_dmem(struct imx21 *imx21,
struct urb *urb) {}
static inline void debug_etd_allocated(struct imx21 *imx21) {}
static inline void debug_etd_freed(struct imx21 *imx21) {}
static inline void debug_dmem_allocated(struct imx21 *imx21, int size) {}
static inline void debug_dmem_freed(struct imx21 *imx21, int size) {}
static inline void debug_isoc_submitted(struct imx21 *imx21,
int frame, struct td *td) {}
static inline void debug_isoc_completed(struct imx21 *imx21,
int frame, struct td *td, int cc, int len) {}
#else
#include <linux/debugfs.h>
#include <linux/seq_file.h>
static const char *dir_labels[] = {
"TD 0",
"OUT",
"IN",
"TD 1"
};
static const char *speed_labels[] = {
"Full",
"Low"
};
static const char *format_labels[] = {
"Control",
"ISO",
"Bulk",
"Interrupt"
};
static inline struct debug_stats *stats_for_urb(struct imx21 *imx21,
struct urb *urb)
{
return usb_pipeisoc(urb->pipe) ?
&imx21->isoc_stats : &imx21->nonisoc_stats;
}
static void debug_urb_submitted(struct imx21 *imx21, struct urb *urb)
{
stats_for_urb(imx21, urb)->submitted++;
}
static void debug_urb_completed(struct imx21 *imx21, struct urb *urb, int st)
{
if (st)
stats_for_urb(imx21, urb)->completed_failed++;
else
stats_for_urb(imx21, urb)->completed_ok++;
}
static void debug_urb_unlinked(struct imx21 *imx21, struct urb *urb)
{
stats_for_urb(imx21, urb)->unlinked++;
}
static void debug_urb_queued_for_etd(struct imx21 *imx21, struct urb *urb)
{
stats_for_urb(imx21, urb)->queue_etd++;
}
static void debug_urb_queued_for_dmem(struct imx21 *imx21, struct urb *urb)
{
stats_for_urb(imx21, urb)->queue_dmem++;
}
static inline void debug_etd_allocated(struct imx21 *imx21)
{
imx21->etd_usage.maximum = max(
++(imx21->etd_usage.value),
imx21->etd_usage.maximum);
}
static inline void debug_etd_freed(struct imx21 *imx21)
{
imx21->etd_usage.value--;
}
static inline void debug_dmem_allocated(struct imx21 *imx21, int size)
{
imx21->dmem_usage.value += size;
imx21->dmem_usage.maximum = max(
imx21->dmem_usage.value,
imx21->dmem_usage.maximum);
}
static inline void debug_dmem_freed(struct imx21 *imx21, int size)
{
imx21->dmem_usage.value -= size;
}
static void debug_isoc_submitted(struct imx21 *imx21,
int frame, struct td *td)
{
struct debug_isoc_trace *trace = &imx21->isoc_trace[
imx21->isoc_trace_index++];
imx21->isoc_trace_index %= ARRAY_SIZE(imx21->isoc_trace);
trace->schedule_frame = td->frame;
trace->submit_frame = frame;
trace->request_len = td->len;
trace->td = td;
}
static inline void debug_isoc_completed(struct imx21 *imx21,
int frame, struct td *td, int cc, int len)
{
struct debug_isoc_trace *trace, *trace_failed;
int i;
int found = 0;
trace = imx21->isoc_trace;
for (i = 0; i < ARRAY_SIZE(imx21->isoc_trace); i++, trace++) {
if (trace->td == td) {
trace->done_frame = frame;
trace->done_len = len;
trace->cc = cc;
trace->td = NULL;
found = 1;
break;
}
}
if (found && cc) {
trace_failed = &imx21->isoc_trace_failed[
imx21->isoc_trace_index_failed++];
imx21->isoc_trace_index_failed %= ARRAY_SIZE(
imx21->isoc_trace_failed);
*trace_failed = *trace;
}
}
static char *format_ep(struct usb_host_endpoint *ep, char *buf, int bufsize)
{
if (ep)
snprintf(buf, bufsize, "ep_%02x (type:%02X kaddr:%p)",
ep->desc.bEndpointAddress,
usb_endpoint_type(&ep->desc),
ep);
else
snprintf(buf, bufsize, "none");
return buf;
}
static char *format_etd_dword0(u32 value, char *buf, int bufsize)
{
snprintf(buf, bufsize,
"addr=%d ep=%d dir=%s speed=%s format=%s halted=%d",
value & 0x7F,
(value >> DW0_ENDPNT) & 0x0F,
dir_labels[(value >> DW0_DIRECT) & 0x03],
speed_labels[(value >> DW0_SPEED) & 0x01],
format_labels[(value >> DW0_FORMAT) & 0x03],
(value >> DW0_HALTED) & 0x01);
return buf;
}
static int debug_status_show(struct seq_file *s, void *v)
{
struct imx21 *imx21 = s->private;
int etds_allocated = 0;
int etds_sw_busy = 0;
int etds_hw_busy = 0;
int dmem_blocks = 0;
int queued_for_etd = 0;
int queued_for_dmem = 0;
unsigned int dmem_bytes = 0;
int i;
struct etd_priv *etd;
u32 etd_enable_mask;
unsigned long flags;
struct imx21_dmem_area *dmem;
struct ep_priv *ep_priv;
spin_lock_irqsave(&imx21->lock, flags);
etd_enable_mask = readl(imx21->regs + USBH_ETDENSET);
for (i = 0, etd = imx21->etd; i < USB_NUM_ETD; i++, etd++) {
if (etd->alloc)
etds_allocated++;
if (etd->urb)
etds_sw_busy++;
if (etd_enable_mask & (1<<i))
etds_hw_busy++;
}
list_for_each_entry(dmem, &imx21->dmem_list, list) {
dmem_bytes += dmem->size;
dmem_blocks++;
}
list_for_each_entry(ep_priv, &imx21->queue_for_etd, queue)
queued_for_etd++;
list_for_each_entry(etd, &imx21->queue_for_dmem, queue)
queued_for_dmem++;
spin_unlock_irqrestore(&imx21->lock, flags);
seq_printf(s,
"Frame: %d\n"
"ETDs allocated: %d/%d (max=%d)\n"
"ETDs in use sw: %d\n"
"ETDs in use hw: %d\n"
"DMEM allocated: %d/%d (max=%d)\n"
"DMEM blocks: %d\n"
"Queued waiting for ETD: %d\n"
"Queued waiting for DMEM: %d\n",
readl(imx21->regs + USBH_FRMNUB) & 0xFFFF,
etds_allocated, USB_NUM_ETD, imx21->etd_usage.maximum,
etds_sw_busy,
etds_hw_busy,
dmem_bytes, DMEM_SIZE, imx21->dmem_usage.maximum,
dmem_blocks,
queued_for_etd,
queued_for_dmem);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(debug_status);
static int debug_dmem_show(struct seq_file *s, void *v)
{
struct imx21 *imx21 = s->private;
struct imx21_dmem_area *dmem;
unsigned long flags;
char ep_text[40];
spin_lock_irqsave(&imx21->lock, flags);
list_for_each_entry(dmem, &imx21->dmem_list, list)
seq_printf(s,
"%04X: size=0x%X "
"ep=%s\n",
dmem->offset, dmem->size,
format_ep(dmem->ep, ep_text, sizeof(ep_text)));
spin_unlock_irqrestore(&imx21->lock, flags);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(debug_dmem);
static int debug_etd_show(struct seq_file *s, void *v)
{
struct imx21 *imx21 = s->private;
struct etd_priv *etd;
char buf[60];
u32 dword;
int i, j;
unsigned long flags;
spin_lock_irqsave(&imx21->lock, flags);
for (i = 0, etd = imx21->etd; i < USB_NUM_ETD; i++, etd++) {
int state = -1;
struct urb_priv *urb_priv;
if (etd->urb) {
urb_priv = etd->urb->hcpriv;
if (urb_priv)
state = urb_priv->state;
}
seq_printf(s,
"etd_num: %d\n"
"ep: %s\n"
"alloc: %d\n"
"len: %d\n"
"busy sw: %d\n"
"busy hw: %d\n"
"urb state: %d\n"
"current urb: %p\n",
i,
format_ep(etd->ep, buf, sizeof(buf)),
etd->alloc,
etd->len,
etd->urb != NULL,
(readl(imx21->regs + USBH_ETDENSET) & (1 << i)) > 0,
state,
etd->urb);
for (j = 0; j < 4; j++) {
dword = etd_readl(imx21, i, j);
switch (j) {
case 0:
format_etd_dword0(dword, buf, sizeof(buf));
break;
case 2:
snprintf(buf, sizeof(buf),
"cc=0X%02X", dword >> DW2_COMPCODE);
break;
default:
*buf = 0;
break;
}
seq_printf(s,
"dword %d: submitted=%08X cur=%08X [%s]\n",
j,
etd->submitted_dwords[j],
dword,
buf);
}
seq_printf(s, "\n");
}
spin_unlock_irqrestore(&imx21->lock, flags);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(debug_etd);
static void debug_statistics_show_one(struct seq_file *s,
const char *name, struct debug_stats *stats)
{
seq_printf(s, "%s:\n"
"submitted URBs: %lu\n"
"completed OK: %lu\n"
"completed failed: %lu\n"
"unlinked: %lu\n"
"queued for ETD: %lu\n"
"queued for DMEM: %lu\n\n",
name,
stats->submitted,
stats->completed_ok,
stats->completed_failed,
stats->unlinked,
stats->queue_etd,
stats->queue_dmem);
}
static int debug_statistics_show(struct seq_file *s, void *v)
{
struct imx21 *imx21 = s->private;
unsigned long flags;
spin_lock_irqsave(&imx21->lock, flags);
debug_statistics_show_one(s, "nonisoc", &imx21->nonisoc_stats);
debug_statistics_show_one(s, "isoc", &imx21->isoc_stats);
seq_printf(s, "unblock kludge triggers: %lu\n", imx21->debug_unblocks);
spin_unlock_irqrestore(&imx21->lock, flags);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(debug_statistics);
static void debug_isoc_show_one(struct seq_file *s,
const char *name, int index, struct debug_isoc_trace *trace)
{
seq_printf(s, "%s %d:\n"
"cc=0X%02X\n"
"scheduled frame %d (%d)\n"
"submitted frame %d (%d)\n"
"completed frame %d (%d)\n"
"requested length=%d\n"
"completed length=%d\n\n",
name, index,
trace->cc,
trace->schedule_frame, trace->schedule_frame & 0xFFFF,
trace->submit_frame, trace->submit_frame & 0xFFFF,
trace->done_frame, trace->done_frame & 0xFFFF,
trace->request_len,
trace->done_len);
}
static int debug_isoc_show(struct seq_file *s, void *v)
{
struct imx21 *imx21 = s->private;
struct debug_isoc_trace *trace;
unsigned long flags;
int i;
spin_lock_irqsave(&imx21->lock, flags);
trace = imx21->isoc_trace_failed;
for (i = 0; i < ARRAY_SIZE(imx21->isoc_trace_failed); i++, trace++)
debug_isoc_show_one(s, "isoc failed", i, trace);
trace = imx21->isoc_trace;
for (i = 0; i < ARRAY_SIZE(imx21->isoc_trace); i++, trace++)
debug_isoc_show_one(s, "isoc", i, trace);
spin_unlock_irqrestore(&imx21->lock, flags);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(debug_isoc);
static void create_debug_files(struct imx21 *imx21)
{
struct dentry *root;
root = debugfs_create_dir(dev_name(imx21->dev), usb_debug_root);
imx21->debug_root = root;
debugfs_create_file("status", S_IRUGO, root, imx21, &debug_status_fops);
debugfs_create_file("dmem", S_IRUGO, root, imx21, &debug_dmem_fops);
debugfs_create_file("etd", S_IRUGO, root, imx21, &debug_etd_fops);
debugfs_create_file("statistics", S_IRUGO, root, imx21,
&debug_statistics_fops);
debugfs_create_file("isoc", S_IRUGO, root, imx21, &debug_isoc_fops);
}
static void remove_debug_files(struct imx21 *imx21)
{
debugfs_remove_recursive(imx21->debug_root);
}
#endif

File diff suppressed because it is too large

View File

@ -1,431 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Macros and prototypes for i.MX21
*
* Copyright (C) 2006 Loping Dog Embedded Systems
* Copyright (C) 2009 Martin Fuzzey
* Originally written by Jay Monkman <jtm@lopingdog.com>
* Ported to 2.6.30, debugged and enhanced by Martin Fuzzey
*/
#ifndef __LINUX_IMX21_HCD_H__
#define __LINUX_IMX21_HCD_H__
#ifdef CONFIG_DYNAMIC_DEBUG
#define DEBUG
#endif
#include <linux/platform_data/usb-mx2.h>
#define NUM_ISO_ETDS 2
#define USB_NUM_ETD 32
#define DMEM_SIZE 4096
/* Register definitions */
#define USBOTG_HWMODE 0x00
#define USBOTG_HWMODE_ANASDBEN (1 << 14)
#define USBOTG_HWMODE_OTGXCVR_SHIFT 6
#define USBOTG_HWMODE_OTGXCVR_MASK (3 << 6)
#define USBOTG_HWMODE_OTGXCVR_TD_RD (0 << 6)
#define USBOTG_HWMODE_OTGXCVR_TS_RD (2 << 6)
#define USBOTG_HWMODE_OTGXCVR_TD_RS (1 << 6)
#define USBOTG_HWMODE_OTGXCVR_TS_RS (3 << 6)
#define USBOTG_HWMODE_HOSTXCVR_SHIFT 4
#define USBOTG_HWMODE_HOSTXCVR_MASK (3 << 4)
#define USBOTG_HWMODE_HOSTXCVR_TD_RD (0 << 4)
#define USBOTG_HWMODE_HOSTXCVR_TS_RD (2 << 4)
#define USBOTG_HWMODE_HOSTXCVR_TD_RS (1 << 4)
#define USBOTG_HWMODE_HOSTXCVR_TS_RS (3 << 4)
#define USBOTG_HWMODE_CRECFG_MASK (3 << 0)
#define USBOTG_HWMODE_CRECFG_HOST (1 << 0)
#define USBOTG_HWMODE_CRECFG_FUNC (2 << 0)
#define USBOTG_HWMODE_CRECFG_HNP (3 << 0)
#define USBOTG_CINT_STAT 0x04
#define USBOTG_CINT_STEN 0x08
#define USBOTG_ASHNPINT (1 << 5)
#define USBOTG_ASFCINT (1 << 4)
#define USBOTG_ASHCINT (1 << 3)
#define USBOTG_SHNPINT (1 << 2)
#define USBOTG_FCINT (1 << 1)
#define USBOTG_HCINT (1 << 0)
#define USBOTG_CLK_CTRL 0x0c
#define USBOTG_CLK_CTRL_FUNC (1 << 2)
#define USBOTG_CLK_CTRL_HST (1 << 1)
#define USBOTG_CLK_CTRL_MAIN (1 << 0)
#define USBOTG_RST_CTRL 0x10
#define USBOTG_RST_RSTI2C (1 << 15)
#define USBOTG_RST_RSTCTRL (1 << 5)
#define USBOTG_RST_RSTFC (1 << 4)
#define USBOTG_RST_RSTFSKE (1 << 3)
#define USBOTG_RST_RSTRH (1 << 2)
#define USBOTG_RST_RSTHSIE (1 << 1)
#define USBOTG_RST_RSTHC (1 << 0)
#define USBOTG_FRM_INTVL 0x14
#define USBOTG_FRM_REMAIN 0x18
#define USBOTG_HNP_CSR 0x1c
#define USBOTG_HNP_ISR 0x2c
#define USBOTG_HNP_IEN 0x30
#define USBOTG_I2C_TXCVR_REG(x) (0x100 + (x))
#define USBOTG_I2C_XCVR_DEVAD 0x118
#define USBOTG_I2C_SEQ_OP_REG 0x119
#define USBOTG_I2C_SEQ_RD_STARTAD 0x11a
#define USBOTG_I2C_OP_CTRL_REG 0x11b
#define USBOTG_I2C_SCLK_TO_SCK_HPER 0x11e
#define USBOTG_I2C_MASTER_INT_REG 0x11f
#define USBH_HOST_CTRL 0x80
#define USBH_HOST_CTRL_HCRESET (1 << 31)
#define USBH_HOST_CTRL_SCHDOVR(x) ((x) << 16)
#define USBH_HOST_CTRL_RMTWUEN (1 << 4)
#define USBH_HOST_CTRL_HCUSBSTE_RESET (0 << 2)
#define USBH_HOST_CTRL_HCUSBSTE_RESUME (1 << 2)
#define USBH_HOST_CTRL_HCUSBSTE_OPERATIONAL (2 << 2)
#define USBH_HOST_CTRL_HCUSBSTE_SUSPEND (3 << 2)
#define USBH_HOST_CTRL_CTLBLKSR_1 (0 << 0)
#define USBH_HOST_CTRL_CTLBLKSR_2 (1 << 0)
#define USBH_HOST_CTRL_CTLBLKSR_3 (2 << 0)
#define USBH_HOST_CTRL_CTLBLKSR_4 (3 << 0)
#define USBH_SYSISR 0x88
#define USBH_SYSISR_PSCINT (1 << 6)
#define USBH_SYSISR_FMOFINT (1 << 5)
#define USBH_SYSISR_HERRINT (1 << 4)
#define USBH_SYSISR_RESDETINT (1 << 3)
#define USBH_SYSISR_SOFINT (1 << 2)
#define USBH_SYSISR_DONEINT (1 << 1)
#define USBH_SYSISR_SORINT (1 << 0)
#define USBH_SYSIEN 0x8c
#define USBH_SYSIEN_PSCINT (1 << 6)
#define USBH_SYSIEN_FMOFINT (1 << 5)
#define USBH_SYSIEN_HERRINT (1 << 4)
#define USBH_SYSIEN_RESDETINT (1 << 3)
#define USBH_SYSIEN_SOFINT (1 << 2)
#define USBH_SYSIEN_DONEINT (1 << 1)
#define USBH_SYSIEN_SORINT (1 << 0)
#define USBH_XBUFSTAT 0x98
#define USBH_YBUFSTAT 0x9c
#define USBH_XYINTEN 0xa0
#define USBH_XFILLSTAT 0xa8
#define USBH_YFILLSTAT 0xac
#define USBH_ETDENSET 0xc0
#define USBH_ETDENCLR 0xc4
#define USBH_IMMEDINT 0xcc
#define USBH_ETDDONESTAT 0xd0
#define USBH_ETDDONEEN 0xd4
#define USBH_FRMNUB 0xe0
#define USBH_LSTHRESH 0xe4
#define USBH_ROOTHUBA 0xe8
#define USBH_ROOTHUBA_PWRTOGOOD_MASK (0xff)
#define USBH_ROOTHUBA_PWRTOGOOD_SHIFT (24)
#define USBH_ROOTHUBA_NOOVRCURP (1 << 12)
#define USBH_ROOTHUBA_OVRCURPM (1 << 11)
#define USBH_ROOTHUBA_DEVTYPE (1 << 10)
#define USBH_ROOTHUBA_PWRSWTMD (1 << 9)
#define USBH_ROOTHUBA_NOPWRSWT (1 << 8)
#define USBH_ROOTHUBA_NDNSTMPRT_MASK (0xff)
#define USBH_ROOTHUBB 0xec
#define USBH_ROOTHUBB_PRTPWRCM(x) (1 << ((x) + 16))
#define USBH_ROOTHUBB_DEVREMOVE(x) (1 << (x))
#define USBH_ROOTSTAT 0xf0
#define USBH_ROOTSTAT_CLRRMTWUE (1 << 31)
#define USBH_ROOTSTAT_OVRCURCHG (1 << 17)
#define USBH_ROOTSTAT_DEVCONWUE (1 << 15)
#define USBH_ROOTSTAT_OVRCURI (1 << 1)
#define USBH_ROOTSTAT_LOCPWRS (1 << 0)
#define USBH_PORTSTAT(x) (0xf4 + ((x) * 4))
#define USBH_PORTSTAT_PRTRSTSC (1 << 20)
#define USBH_PORTSTAT_OVRCURIC (1 << 19)
#define USBH_PORTSTAT_PRTSTATSC (1 << 18)
#define USBH_PORTSTAT_PRTENBLSC (1 << 17)
#define USBH_PORTSTAT_CONNECTSC (1 << 16)
#define USBH_PORTSTAT_LSDEVCON (1 << 9)
#define USBH_PORTSTAT_PRTPWRST (1 << 8)
#define USBH_PORTSTAT_PRTRSTST (1 << 4)
#define USBH_PORTSTAT_PRTOVRCURI (1 << 3)
#define USBH_PORTSTAT_PRTSUSPST (1 << 2)
#define USBH_PORTSTAT_PRTENABST (1 << 1)
#define USBH_PORTSTAT_CURCONST (1 << 0)
#define USB_DMAREV 0x800
#define USB_DMAINTSTAT 0x804
#define USB_DMAINTSTAT_EPERR (1 << 1)
#define USB_DMAINTSTAT_ETDERR (1 << 0)
#define USB_DMAINTEN 0x808
#define USB_DMAINTEN_EPERRINTEN (1 << 1)
#define USB_DMAINTEN_ETDERRINTEN (1 << 0)
#define USB_ETDDMAERSTAT 0x80c
#define USB_EPDMAERSTAT 0x810
#define USB_ETDDMAEN 0x820
#define USB_EPDMAEN 0x824
#define USB_ETDDMAXTEN 0x828
#define USB_EPDMAXTEN 0x82c
#define USB_ETDDMAENXYT 0x830
#define USB_EPDMAENXYT 0x834
#define USB_ETDDMABST4EN 0x838
#define USB_EPDMABST4EN 0x83c
#define USB_MISCCONTROL 0x840
#define USB_MISCCONTROL_ISOPREVFRM (1 << 3)
#define USB_MISCCONTROL_SKPRTRY (1 << 2)
#define USB_MISCCONTROL_ARBMODE (1 << 1)
#define USB_MISCCONTROL_FILTCC (1 << 0)
#define USB_ETDDMACHANLCLR 0x848
#define USB_EPDMACHANLCLR 0x84c
#define USB_ETDSMSA(x) (0x900 + ((x) * 4))
#define USB_EPSMSA(x) (0x980 + ((x) * 4))
#define USB_ETDDMABUFPTR(x) (0xa00 + ((x) * 4))
#define USB_EPDMABUFPTR(x) (0xa80 + ((x) * 4))
#define USB_ETD_DWORD(x, w) (0x200 + ((x) * 16) + ((w) * 4))
#define DW0_ADDRESS 0
#define DW0_ENDPNT 7
#define DW0_DIRECT 11
#define DW0_SPEED 13
#define DW0_FORMAT 14
#define DW0_MAXPKTSIZ 16
#define DW0_HALTED 27
#define DW0_TOGCRY 28
#define DW0_SNDNAK 30
#define DW1_XBUFSRTAD 0
#define DW1_YBUFSRTAD 16
#define DW2_RTRYDELAY 0
#define DW2_POLINTERV 0
#define DW2_STARTFRM 0
#define DW2_RELPOLPOS 8
#define DW2_DIRPID 16
#define DW2_BUFROUND 18
#define DW2_DELAYINT 19
#define DW2_DATATOG 22
#define DW2_ERRORCNT 24
#define DW2_COMPCODE 28
#define DW3_TOTBYECNT 0
#define DW3_PKTLEN0 0
#define DW3_COMPCODE0 12
#define DW3_PKTLEN1 16
#define DW3_BUFSIZE 21
#define DW3_COMPCODE1 28
#define USBCTRL 0x600
#define USBCTRL_I2C_WU_INT_STAT (1 << 27)
#define USBCTRL_OTG_WU_INT_STAT (1 << 26)
#define USBCTRL_HOST_WU_INT_STAT (1 << 25)
#define USBCTRL_FNT_WU_INT_STAT (1 << 24)
#define USBCTRL_I2C_WU_INT_EN (1 << 19)
#define USBCTRL_OTG_WU_INT_EN (1 << 18)
#define USBCTRL_HOST_WU_INT_EN (1 << 17)
#define USBCTRL_FNT_WU_INT_EN (1 << 16)
#define USBCTRL_OTC_RCV_RXDP (1 << 13)
#define USBCTRL_HOST1_BYP_TLL (1 << 12)
#define USBCTRL_OTG_BYP_VAL(x) ((x) << 10)
#define USBCTRL_HOST1_BYP_VAL(x) ((x) << 8)
#define USBCTRL_OTG_PWR_MASK (1 << 6)
#define USBCTRL_HOST1_PWR_MASK (1 << 5)
#define USBCTRL_HOST2_PWR_MASK (1 << 4)
#define USBCTRL_USB_BYP (1 << 2)
#define USBCTRL_HOST1_TXEN_OE (1 << 1)
#define USBOTG_DMEM 0x1000
/* Values in TD blocks */
#define TD_DIR_SETUP 0
#define TD_DIR_OUT 1
#define TD_DIR_IN 2
#define TD_FORMAT_CONTROL 0
#define TD_FORMAT_ISO 1
#define TD_FORMAT_BULK 2
#define TD_FORMAT_INT 3
#define TD_TOGGLE_CARRY 0
#define TD_TOGGLE_DATA0 2
#define TD_TOGGLE_DATA1 3
/* control transfer states */
#define US_CTRL_SETUP 2
#define US_CTRL_DATA 1
#define US_CTRL_ACK 0
/* bulk transfer main state and 0-length packet */
#define US_BULK 1
#define US_BULK0 0
/*ETD format description*/
#define IMX_FMT_CTRL 0x0
#define IMX_FMT_ISO 0x1
#define IMX_FMT_BULK 0x2
#define IMX_FMT_INT 0x3
static char fmt_urb_to_etd[4] = {
/*PIPE_ISOCHRONOUS*/ IMX_FMT_ISO,
/*PIPE_INTERRUPT*/ IMX_FMT_INT,
/*PIPE_CONTROL*/ IMX_FMT_CTRL,
/*PIPE_BULK*/ IMX_FMT_BULK
};
/* condition (error) CC codes and mapping (OHCI like) */
#define TD_CC_NOERROR 0x00
#define TD_CC_CRC 0x01
#define TD_CC_BITSTUFFING 0x02
#define TD_CC_DATATOGGLEM 0x03
#define TD_CC_STALL 0x04
#define TD_DEVNOTRESP 0x05
#define TD_PIDCHECKFAIL 0x06
/*#define TD_UNEXPECTEDPID 0x07 - reserved, not active on MX2*/
#define TD_DATAOVERRUN 0x08
#define TD_DATAUNDERRUN 0x09
#define TD_BUFFEROVERRUN 0x0C
#define TD_BUFFERUNDERRUN 0x0D
#define TD_SCHEDULEOVERRUN 0x0E
#define TD_NOTACCESSED 0x0F
static const int cc_to_error[16] = {
/* No Error */ 0,
/* CRC Error */ -EILSEQ,
/* Bit Stuff */ -EPROTO,
/* Data Togg */ -EILSEQ,
/* Stall */ -EPIPE,
/* DevNotResp */ -ETIMEDOUT,
/* PIDCheck */ -EPROTO,
/* UnExpPID */ -EPROTO,
/* DataOver */ -EOVERFLOW,
/* DataUnder */ -EREMOTEIO,
/* (for hw) */ -EIO,
/* (for hw) */ -EIO,
/* BufferOver */ -ECOMM,
/* BuffUnder */ -ENOSR,
/* (for HCD) */ -ENOSPC,
/* (for HCD) */ -EALREADY
};
/* HCD data associated with a usb core URB */
struct urb_priv {
struct urb *urb;
struct usb_host_endpoint *ep;
int active;
int state;
struct td *isoc_td;
int isoc_remaining;
int isoc_status;
};
/* HCD data associated with a usb core endpoint */
struct ep_priv {
struct usb_host_endpoint *ep;
struct list_head td_list;
struct list_head queue;
int etd[NUM_ISO_ETDS];
int waiting_etd;
};
/* isoc packet */
struct td {
struct list_head list;
struct urb *urb;
struct usb_host_endpoint *ep;
dma_addr_t dma_handle;
void *cpu_buffer;
int len;
int frame;
int isoc_index;
};
/* HCD data associated with a hardware ETD */
struct etd_priv {
struct usb_host_endpoint *ep;
struct urb *urb;
struct td *td;
struct list_head queue;
dma_addr_t dma_handle;
void *cpu_buffer;
void *bounce_buffer;
int alloc;
int len;
int dmem_size;
int dmem_offset;
int active_count;
#ifdef DEBUG
int activated_frame;
int disactivated_frame;
int last_int_frame;
int last_req_frame;
u32 submitted_dwords[4];
#endif
};
/* Hardware data memory info */
struct imx21_dmem_area {
struct usb_host_endpoint *ep;
unsigned int offset;
unsigned int size;
struct list_head list;
};
#ifdef DEBUG
struct debug_usage_stats {
unsigned int value;
unsigned int maximum;
};
struct debug_stats {
unsigned long submitted;
unsigned long completed_ok;
unsigned long completed_failed;
unsigned long unlinked;
unsigned long queue_etd;
unsigned long queue_dmem;
};
struct debug_isoc_trace {
int schedule_frame;
int submit_frame;
int request_len;
int done_frame;
int done_len;
int cc;
struct td *td;
};
#endif
/* HCD data structure */
struct imx21 {
spinlock_t lock;
struct device *dev;
struct usb_hcd *hcd;
struct mx21_usbh_platform_data *pdata;
struct list_head dmem_list;
struct list_head queue_for_etd; /* eps queued due to etd shortage */
struct list_head queue_for_dmem; /* etds queued due to dmem shortage */
struct etd_priv etd[USB_NUM_ETD];
struct clk *clk;
void __iomem *regs;
#ifdef DEBUG
struct dentry *debug_root;
struct debug_stats nonisoc_stats;
struct debug_stats isoc_stats;
struct debug_usage_stats etd_usage;
struct debug_usage_stats dmem_usage;
struct debug_isoc_trace isoc_trace[20];
struct debug_isoc_trace isoc_trace_failed[20];
unsigned long debug_unblocks;
int isoc_trace_index;
int isoc_trace_index_failed;
#endif
};
#endif


@ -1447,6 +1447,7 @@ static int isp116x_bus_resume(struct usb_hcd *hcd)
val &= ~HCCONTROL_HCFS;
val |= HCCONTROL_USB_RESUME;
isp116x_write_reg32(isp116x, HCCONTROL, val);
break;
case HCCONTROL_USB_RESUME:
break;
case HCCONTROL_USB_OPER:
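Several hunks in this pull, this one included, add a missing break (or drop an unreachable one) so that one case no longer falls silently into the next. A generic sketch of the pattern, not the driver's actual code and with a hypothetical helper:

switch (val & HCCONTROL_HCFS) {
case HCCONTROL_USB_SUSPEND:
        start_bus_resume(hcd);  /* hypothetical helper */
        break;                  /* the newly added break */
case HCCONTROL_USB_RESUME:
        break;                  /* resume already under way */
default:
        return -EINVAL;
}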


@ -793,60 +793,6 @@ static void isp1362_write_fifo(struct isp1362_hcd *isp1362_hcd, void *buf, u16 l
ISP1362_REG_NO(ISP1362_REG_##r), isp1362_read_reg16(d, r)); \
}
static void __attribute__((__unused__)) isp1362_show_regs(struct isp1362_hcd *isp1362_hcd)
{
isp1362_show_reg(isp1362_hcd, HCREVISION);
isp1362_show_reg(isp1362_hcd, HCCONTROL);
isp1362_show_reg(isp1362_hcd, HCCMDSTAT);
isp1362_show_reg(isp1362_hcd, HCINTSTAT);
isp1362_show_reg(isp1362_hcd, HCINTENB);
isp1362_show_reg(isp1362_hcd, HCFMINTVL);
isp1362_show_reg(isp1362_hcd, HCFMREM);
isp1362_show_reg(isp1362_hcd, HCFMNUM);
isp1362_show_reg(isp1362_hcd, HCLSTHRESH);
isp1362_show_reg(isp1362_hcd, HCRHDESCA);
isp1362_show_reg(isp1362_hcd, HCRHDESCB);
isp1362_show_reg(isp1362_hcd, HCRHSTATUS);
isp1362_show_reg(isp1362_hcd, HCRHPORT1);
isp1362_show_reg(isp1362_hcd, HCRHPORT2);
isp1362_show_reg(isp1362_hcd, HCHWCFG);
isp1362_show_reg(isp1362_hcd, HCDMACFG);
isp1362_show_reg(isp1362_hcd, HCXFERCTR);
isp1362_show_reg(isp1362_hcd, HCuPINT);
if (in_interrupt())
DBG(0, "%-12s[%02x]: %04x\n", "HCuPINTENB",
ISP1362_REG_NO(ISP1362_REG_HCuPINTENB), isp1362_hcd->irqenb);
else
isp1362_show_reg(isp1362_hcd, HCuPINTENB);
isp1362_show_reg(isp1362_hcd, HCCHIPID);
isp1362_show_reg(isp1362_hcd, HCSCRATCH);
isp1362_show_reg(isp1362_hcd, HCBUFSTAT);
isp1362_show_reg(isp1362_hcd, HCDIRADDR);
/* Access would advance fifo
* isp1362_show_reg(isp1362_hcd, HCDIRDATA);
*/
isp1362_show_reg(isp1362_hcd, HCISTLBUFSZ);
isp1362_show_reg(isp1362_hcd, HCISTLRATE);
isp1362_show_reg(isp1362_hcd, HCINTLBUFSZ);
isp1362_show_reg(isp1362_hcd, HCINTLBLKSZ);
isp1362_show_reg(isp1362_hcd, HCINTLDONE);
isp1362_show_reg(isp1362_hcd, HCINTLSKIP);
isp1362_show_reg(isp1362_hcd, HCINTLLAST);
isp1362_show_reg(isp1362_hcd, HCINTLCURR);
isp1362_show_reg(isp1362_hcd, HCATLBUFSZ);
isp1362_show_reg(isp1362_hcd, HCATLBLKSZ);
/* only valid after ATL_DONE interrupt
* isp1362_show_reg(isp1362_hcd, HCATLDONE);
*/
isp1362_show_reg(isp1362_hcd, HCATLSKIP);
isp1362_show_reg(isp1362_hcd, HCATLLAST);
isp1362_show_reg(isp1362_hcd, HCATLCURR);
isp1362_show_reg(isp1362_hcd, HCATLDTC);
isp1362_show_reg(isp1362_hcd, HCATLDTCTO);
}
static void isp1362_write_diraddr(struct isp1362_hcd *isp1362_hcd, u16 offset, u16 len)
{
len = (len + 1) & ~1;


@ -1537,6 +1537,7 @@ max3421_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
__func__, urb->interval);
return -EINVAL;
}
break;
default:
break;
}
@ -1847,7 +1848,7 @@ max3421_probe(struct spi_device *spi)
struct max3421_hcd *max3421_hcd;
struct usb_hcd *hcd = NULL;
struct max3421_hcd_platform_data *pdata = NULL;
int retval = -ENOMEM;
int retval;
if (spi_setup(spi) < 0) {
dev_err(&spi->dev, "Unable to setup SPI bus");
@ -1889,6 +1890,7 @@ max3421_probe(struct spi_device *spi)
goto error;
}
retval = -ENOMEM;
hcd = usb_create_hcd(&max3421_hcd_desc, &spi->dev,
dev_name(&spi->dev));
if (!hcd) {


@ -155,7 +155,10 @@ static struct regmap *at91_dt_syscon_sfr(void)
/*
* usb_hcd_at91_probe - initialize AT91-based HCDs
* Context: !in_interrupt()
* @driver: Pointer to hc driver instance
* @pdev: USB controller to probe
*
* Context: task context, might sleep
*
* Allocates basic resources for this USB host controller, and
* then invokes the start() method for the HCD associated with it
@ -246,12 +249,14 @@ static int usb_hcd_at91_probe(const struct hc_driver *driver,
/*
* usb_hcd_at91_remove - shutdown processing for AT91-based HCDs
* Context: !in_interrupt()
* @hcd: USB controller to remove
* @pdev: Platform device required for cleanup
*
* Context: task context, might sleep
*
* Reverses the effect of usb_hcd_at91_probe(), first invoking
* the HCD's stop() method. It is always called from a thread
* context, "rmmod" or something similar.
*
*/
static void usb_hcd_at91_remove(struct usb_hcd *hcd,
struct platform_device *pdev)
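This hunk and the OMAP, PXA27x and S3C2410 ones below all move the kernel-doc comments to the same shape: each parameter gets an @name line and the execution context is stated with a Context: line instead of the stale !in_interrupt() note. Roughly, with hypothetical names:

/**
 * usb_hcd_foo_probe - initialize a foo-based HCD
 * @driver: pointer to the hc_driver instance
 * @pdev: USB controller to probe
 *
 * Context: task context, might sleep
 *
 * Allocates basic resources for the host controller and starts it.
 */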


@ -171,7 +171,7 @@ static int ohci_urb_enqueue (
/* 1 TD for setup, 1 for ACK, plus ... */
size = 2;
/* FALLTHROUGH */
fallthrough;
// case PIPE_INTERRUPT:
// case PIPE_BULK:
default:
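The old /* FALLTHROUGH */ comment becomes the fallthrough; pseudo-keyword, which expands to the compiler's fallthrough attribute and keeps -Wimplicit-fallthrough quiet. A minimal sketch with hypothetical variables:

switch (type) {
case PIPE_CONTROL:
        size = 2;       /* setup + status TDs */
        fallthrough;    /* data TDs are counted below as well */
default:
        size += len / 4096;
        break;
}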


@ -692,6 +692,7 @@ int ohci_hub_control(
case C_HUB_OVER_CURRENT:
ohci_writel (ohci, RH_HS_OCIC,
&ohci->regs->roothub.status);
break;
case C_HUB_LOCAL_POWER:
break;
default:


@ -285,7 +285,9 @@ static int ohci_omap_reset(struct usb_hcd *hcd)
/**
* ohci_hcd_omap_probe - initialize OMAP-based HCDs
* Context: !in_interrupt()
* @pdev: USB controller to probe
*
* Context: task context, might sleep
*
* Allocates basic resources for this USB host controller, and
* then invokes the start() method for the HCD associated with it
@ -399,8 +401,9 @@ err_put_hcd:
/**
* ohci_hcd_omap_remove - shutdown processing for OMAP-based HCDs
* @dev: USB Host Controller being removed
* Context: !in_interrupt()
* @pdev: USB Host Controller being removed
*
* Context: task context, might sleep
*
* Reverses the effect of ohci_hcd_omap_probe(), first invoking
* the HCD's stop() method. It is always called from a thread


@ -410,12 +410,13 @@ static int ohci_pxa_of_init(struct platform_device *pdev)
/**
* ohci_hcd_pxa27x_probe - initialize pxa27x-based HCDs
* Context: !in_interrupt()
* @pdev: USB Host controller to probe
*
* Context: task context, might sleep
*
* Allocates basic resources for this USB host controller, and
* then invokes the start() method for the HCD associated with it
* through the hotplug entry's driver_data.
*
*/
static int ohci_hcd_pxa27x_probe(struct platform_device *pdev)
{
@ -509,13 +510,13 @@ static int ohci_hcd_pxa27x_probe(struct platform_device *pdev)
/**
* ohci_hcd_pxa27x_remove - shutdown processing for pxa27x-based HCDs
* @dev: USB Host Controller being removed
* Context: !in_interrupt()
* @pdev: USB Host Controller being removed
*
* Context: task context, might sleep
*
* Reverses the effect of ohci_hcd_pxa27x_probe(), first invoking
* the HCD's stop() method. It is always called from a thread
* context, normally "rmmod", "apmd", or something similar.
*
*/
static int ohci_hcd_pxa27x_remove(struct platform_device *pdev)
{


@ -324,14 +324,13 @@ static void s3c2410_hcd_oc(struct s3c2410_hcd_info *info, int port_oc)
/*
* ohci_hcd_s3c2410_remove - shutdown processing for HCD
* @dev: USB Host Controller being removed
* Context: !in_interrupt()
*
* Context: task context, might sleep
*
* Reverses the effect of ohci_hcd_3c2410_probe(), first invoking
* the HCD's stop() method. It is always called from a thread
* context, normally "rmmod", "apmd", or something similar.
*
*/
*/
static int
ohci_hcd_s3c2410_remove(struct platform_device *dev)
{
@ -345,12 +344,13 @@ ohci_hcd_s3c2410_remove(struct platform_device *dev)
/*
* ohci_hcd_s3c2410_probe - initialize S3C2410-based HCDs
* Context: !in_interrupt()
* @dev: USB Host Controller to be probed
*
* Context: task context, might sleep
*
* Allocates basic resources for this USB host controller, and
* then invokes the start() method for the HCD associated with it
* through the hotplug entry's driver_data.
*
*/
static int ohci_hcd_s3c2410_probe(struct platform_device *dev)
{


@ -1365,6 +1365,7 @@ __acquires(oxu->lock)
switch (urb->status) {
case -EINPROGRESS: /* success */
urb->status = 0;
break;
default: /* fault */
break;
case -EREMOTEIO: /* fault or normal */
@ -4151,8 +4152,10 @@ static struct usb_hcd *oxu_create(struct platform_device *pdev,
oxu->is_otg = otg;
ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
if (ret < 0)
if (ret < 0) {
usb_put_hcd(hcd);
return ERR_PTR(ret);
}
device_wakeup_enable(hcd->self.controller);
return hcd;


@ -208,13 +208,13 @@ struct u132 {
#define ftdi_read_pcimem(pdev, member, data) usb_ftdi_elan_read_pcimem(pdev, \
offsetof(struct ohci_regs, member), 0, data);
#define ftdi_write_pcimem(pdev, member, data) usb_ftdi_elan_write_pcimem(pdev, \
offsetof(struct ohci_regs, member), 0, data);
offsetof(struct ohci_regs, member), 0, data)
#define u132_read_pcimem(u132, member, data) \
usb_ftdi_elan_read_pcimem(u132->platform_dev, offsetof(struct \
ohci_regs, member), 0, data);
ohci_regs, member), 0, data)
#define u132_write_pcimem(u132, member, data) \
usb_ftdi_elan_write_pcimem(u132->platform_dev, offsetof(struct \
ohci_regs, member), 0, data);
ohci_regs, member), 0, data)
static inline struct u132 *udev_to_u132(struct u132_udev *udev)
{
u8 udev_number = udev->udev_number;
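The semicolons dropped from these macros matter because call sites already terminate the statement themselves; with the extra ';' baked into the expansion, wrapping a call in if/else breaks. A small illustrative sketch, surrounding code hypothetical:

/* With the old trailing ';' inside the macro this expands to
 * "...;; else", and the stray empty statement orphans the else,
 * producing a compile error. */
if (retval == 0)
        ftdi_write_pcimem(pdev, control, 0);
else
        dev_err(&pdev->dev, "pcimem write failed\n");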


@ -1712,6 +1712,10 @@ retry:
hcd->state = HC_STATE_SUSPENDED;
bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
spin_unlock_irqrestore(&xhci->lock, flags);
if (bus_state->bus_suspended)
usleep_range(5000, 10000);
return 0;
}


@ -1144,7 +1144,6 @@ int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *ud
case USB_SPEED_WIRELESS:
xhci_dbg(xhci, "FIXME xHCI doesn't support wireless speeds\n");
return -EINVAL;
break;
default:
/* Speed was set earlier, this shouldn't happen. */
return -EINVAL;
@ -2110,7 +2109,7 @@ static void xhci_set_hc_event_deq(struct xhci_hcd *xhci)
deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
xhci->event_ring->dequeue);
if (deq == 0 && !in_interrupt())
if (!deq)
xhci_warn(xhci, "WARN something wrong with SW event ring "
"dequeue ptr.\n");
/* Update HC event ring dequeue pointer */


@ -47,6 +47,7 @@
#define PCI_DEVICE_ID_INTEL_DNV_XHCI 0x19d0
#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI 0x15b5
#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_XHCI 0x15b6
#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_XHCI 0x15c1
#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_XHCI 0x15db
#define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_XHCI 0x15d4
#define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_XHCI 0x15e9
@ -55,6 +56,7 @@
#define PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI 0x8a13
#define PCI_DEVICE_ID_INTEL_CML_XHCI 0xa3af
#define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI 0x9a13
#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138
#define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9
#define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba
@ -232,13 +234,15 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
(pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI))
pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI))
xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
if (pdev->vendor == PCI_VENDOR_ID_ETRON &&

Some files were not shown because too many files have changed in this diff Show More