usb: changes for v4.18 merge window

A total of 98 non-merge commits, the biggest part being in dwc3 this
 time around with a large refactoring of dwc3's transfer handling code.
 
 We also have a new driver for Aspeed virtual hub controller.
 
 Apart from that, just a list of miscellaneous fixes all over the place.
 -----BEGIN PGP SIGNATURE-----
 
 iQJRBAABCgA7FiEElLzh7wn96CXwjh2IzL64meEamQYFAlsCgCYdHGZlbGlwZS5i
 YWxiaUBsaW51eC5pbnRlbC5jb20ACgkQzL64meEamQb+thAAuL3kE7y5dGOp91cw
 Eiif9fcNdiQVz/ItyBqnaaUlztYrT3C/K0gZcgf63671rWkiYx3I+NihT9B/Za0e
 7zhauY6olddghKr9GRAeMf7sbrAnRGg6FyTm5P76f3MJsQF17hio05XJcJZ8cecd
 QNLyOJLAFJKMnczgNHLj2PP3v+lxucCi4ryJDYu7KxQcjfbtIdx0WMoSCIo1D9MX
 qJ/6HjLxlgOWoGpEVfmwNlsh6boI9liBsunzMOtt9HQ3pu9HO08fy3x1NAaxr2Cl
 VJsbyTDRmjUFDq4pl9uFt0F8GoNLEvQU30kogyxtJ/F9pEiLseX5+UP+uEHEsz4Q
 kIHdFUSsydZj4gGfupbfGmtzfQETV+9yM6dL/TTe6yvpAG25Az7NW498Sv3gUKrE
 qPHNcrumJugNiAG4cWiIu+K5VJoX6M/+0c7HgcFxOo/O3WpD0nJKj7WpQD/T0XV7
 ErehJywEjf4TpQOM2/SuRrjNgjTD5l88HhsEazkT95lfZkvtmLHcLMXVZbCVGjFV
 RAXZMgHKTqg4RCgDUdzrsaKF5l1W0PX3j60b3no3bAD2YG4HNEWOu2PjDC+EGaCi
 TCpQjLcEu9ynRgnOuRcugNupENCLc7u3IkMAIt7E7maktnKWGK0q9fzxpwnt9XqF
 YOM6Jj6YZRV2TtKRdv9MVz9LzHk=
 =b56m
 -----END PGP SIGNATURE-----

Merge tag 'usb-for-v4.18' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-next

Greg Kroah-Hartman 2018-05-24 17:46:53 +02:00
commit 109e37a673
58 changed files with 5512 additions and 833 deletions

--- a/Documentation/devicetree/bindings/usb/dwc3.txt
+++ b/Documentation/devicetree/bindings/usb/dwc3.txt

@@ -7,6 +7,26 @@ Required properties:
 - compatible: must be "snps,dwc3"
 - reg : Address and length of the register set for the device
 - interrupts: Interrupts used by the dwc3 controller.
+ - clock-names: should contain "ref", "bus_early", "suspend"
+ - clocks: list of phandle and clock specifier pairs corresponding to
+   entries in the clock-names property.
+
+Exception for clocks:
+  clocks are optional if the parent node (i.e. glue-layer) is compatible to
+  one of the following:
+    "amlogic,meson-axg-dwc3"
+    "amlogic,meson-gxl-dwc3"
+    "cavium,octeon-7130-usb-uctl"
+    "qcom,dwc3"
+    "samsung,exynos5250-dwusb3"
+    "samsung,exynos7-dwusb3"
+    "sprd,sc9860-dwc3"
+    "st,stih407-dwc3"
+    "ti,am437x-dwc3"
+    "ti,dwc3"
+    "ti,keystone-dwc3"
+    "rockchip,rk3399-dwc3"
+    "xlnx,zynqmp-dwc3"

 Optional properties:
 - usb-phy : array of phandle for the PHY device. The first element
@@ -15,6 +35,7 @@ Optional properties:
 - phys: from the *Generic PHY* bindings
 - phy-names: from the *Generic PHY* bindings; supported names are "usb2-phy"
   or "usb3-phy".
+ - resets: a single pair of phandle and reset specifier
 - snps,usb3_lpm_capable: determines if platform is USB3 LPM capable
 - snps,disable_scramble_quirk: true when SW should disable data scrambling.
   Only really useful for FPGA builds.
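Taken together, a glue-less dwc3 node using the new clock and reset properties would look roughly like the sketch below; the phandles, clock indices, and addresses are invented for illustration, not taken from a real SoC.

```dts
usb@fe800000 {
	compatible = "snps,dwc3";
	reg = <0xfe800000 0x100000>;
	interrupts = <0 72 4>;
	/* The three clocks named by the binding; &clkc and the
	 * indices are placeholders, not from a real SoC. */
	clocks = <&clkc 12>, <&clkc 13>, <&clkc 14>;
	clock-names = "ref", "bus_early", "suspend";
	/* Optional single reset line, per the new "resets" property. */
	resets = <&reset 5>;
	phys = <&usb2_phy>, <&usb3_phy>;
	phy-names = "usb2-phy", "usb3-phy";
};
```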

--- a/Documentation/devicetree/bindings/usb/qcom,dwc3.txt
+++ b/Documentation/devicetree/bindings/usb/qcom,dwc3.txt

@@ -1,54 +1,95 @@
 Qualcomm SuperSpeed DWC3 USB SoC controller

 Required properties:
-- compatible:		should contain "qcom,dwc3"
+- compatible:		Compatible list, contains
+			"qcom,dwc3"
+			"qcom,msm8996-dwc3" for msm8996 SOC.
+			"qcom,sdm845-dwc3" for sdm845 SOC.
+- reg:			Offset and length of register set for QSCRATCH wrapper
+- power-domains:	specifies a phandle to PM domain provider node
 - clocks:		A list of phandle + clock-specifier pairs for the
 			clocks listed in clock-names
 - clock-names:		Should contain the following:
   "core"		Master/Core clock, has to be >= 125 MHz for SS
 			operation and >= 60 MHz for HS operation
+  "mock_utmi"		Mock utmi clock needed for ITP/SOF generation in
+			host mode. Its frequency should be 19.2 MHz.
+  "sleep"		Sleep clock, used for wakeup when USB3 core goes
+			into low power mode (U3).

 Optional clocks:
-  "iface"		System bus AXI clock.  Not present on all platforms
-  "sleep"		Sleep clock, used when USB3 core goes into low
-			power mode (U3).
+  "iface"		System bus AXI clock.
+			Not present on "qcom,msm8996-dwc3" compatible.
+  "cfg_noc"		System Config NOC clock.
+			Not present on "qcom,msm8996-dwc3" compatible.
+
+- assigned-clocks:	Should be:
+			MOCK_UTMI_CLK
+			MASTER_CLK
+- assigned-clock-rates:	Should be:
+			19.2 MHz (19200000) for MOCK_UTMI_CLK
+			>= 125 MHz (125000000) for MASTER_CLK in SS mode
+			>= 60 MHz (60000000) for MASTER_CLK in HS mode
+
+Optional properties:
+- resets:		Phandle to reset control that resets core and wrapper.
+- interrupts:		specifies interrupts from controller wrapper used
+			to wake up from low power/suspend state. Must contain
+			one or more entries for the interrupt-names property
+- interrupt-names:	Must include the following entries:
+			- "hs_phy_irq": The interrupt that is asserted when a
+			   wakeup event is received on USB2 bus
+			- "ss_phy_irq": The interrupt that is asserted when a
+			   wakeup event is received on USB3 bus
+			- "dm_hs_phy_irq" and "dp_hs_phy_irq": Separate
+			   interrupts for any wakeup event on DM and DP lines
+- qcom,select-utmi-as-pipe-clk: if present, disable USB3 pipe_clk requirement.
+			Used when dwc3 operates without SSPHY and only
+			HS/FS/LS modes are supported.

 Required child node:
 A child node must exist to represent the core DWC3 IP block. The name of
 the node is not important. The content of the node is defined in dwc3.txt.

 Phy documentation is provided in the following places:
-Documentation/devicetree/bindings/phy/qcom-dwc3-usb-phy.txt
+Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt   - USB3 QMP PHY
+Documentation/devicetree/bindings/phy/qcom-qusb2-phy.txt - USB2 QUSB2 PHY

 Example device nodes:

 	hs_phy: phy@100f8800 {
-		compatible = "qcom,dwc3-hs-usb-phy";
-		reg = <0x100f8800 0x30>;
-		clocks = <&gcc USB30_0_UTMI_CLK>;
-		clock-names = "ref";
-		#phy-cells = <0>;
+		compatible = "qcom,qusb2-v2-phy";
+		...
 	};

 	ss_phy: phy@100f8830 {
-		compatible = "qcom,dwc3-ss-usb-phy";
-		reg = <0x100f8830 0x30>;
-		clocks = <&gcc USB30_0_MASTER_CLK>;
-		clock-names = "ref";
-		#phy-cells = <0>;
+		compatible = "qcom,qmp-v3-usb3-phy";
+		...
 	};

-	usb3_0: usb30@0 {
+	usb3_0: usb30@a6f8800 {
 		compatible = "qcom,dwc3";
+		reg = <0xa6f8800 0x400>;
 		#address-cells = <1>;
 		#size-cells = <1>;
-		clocks = <&gcc USB30_0_MASTER_CLK>;
-		clock-names = "core";
 		ranges;
+
+		interrupts = <0 131 0>, <0 486 0>, <0 488 0>, <0 489 0>;
+		interrupt-names = "hs_phy_irq", "ss_phy_irq",
+				"dm_hs_phy_irq", "dp_hs_phy_irq";
+
+		clocks = <&gcc GCC_USB30_PRIM_MASTER_CLK>,
+			<&gcc GCC_USB30_PRIM_MOCK_UTMI_CLK>,
+			<&gcc GCC_USB30_PRIM_SLEEP_CLK>;
+		clock-names = "core", "mock_utmi", "sleep";
+
+		assigned-clocks = <&gcc GCC_USB30_PRIM_MOCK_UTMI_CLK>,
+				  <&gcc GCC_USB30_PRIM_MASTER_CLK>;
+		assigned-clock-rates = <19200000>, <133000000>;
+
+		resets = <&gcc GCC_USB30_PRIM_BCR>;
+		reset-names = "core_reset";
+		power-domains = <&gcc USB30_PRIM_GDSC>;
+		qcom,select-utmi-as-pipe-clk;
+
 		dwc3@10000000 {
 			compatible = "snps,dwc3";

--- a/Documentation/driver-api/usb/dwc3.rst
+++ b/Documentation/driver-api/usb/dwc3.rst

@@ -674,9 +674,8 @@ operations, both of which can be traced. Format is::
 	__entry->flags & DWC3_EP_ENABLED ? 'E' : 'e',
 	__entry->flags & DWC3_EP_STALL ? 'S' : 's',
 	__entry->flags & DWC3_EP_WEDGE ? 'W' : 'w',
-	__entry->flags & DWC3_EP_BUSY ? 'B' : 'b',
+	__entry->flags & DWC3_EP_TRANSFER_STARTED ? 'B' : 'b',
 	__entry->flags & DWC3_EP_PENDING_REQUEST ? 'P' : 'p',
-	__entry->flags & DWC3_EP_MISSED_ISOC ? 'M' : 'm',
 	__entry->flags & DWC3_EP_END_TRANSFER_PENDING ? 'E' : 'e',
 	__entry->direction ? '<' : '>'
 )

--- a/drivers/usb/dwc2/core.c
+++ b/drivers/usb/dwc2/core.c

@@ -419,6 +419,8 @@ static void dwc2_wait_for_mode(struct dwc2_hsotg *hsotg,
 /**
  * dwc2_iddig_filter_enabled() - Returns true if the IDDIG debounce
  * filter is enabled.
+ *
+ * @hsotg: Programming view of DWC_otg controller
  */
 static bool dwc2_iddig_filter_enabled(struct dwc2_hsotg *hsotg)
 {
@@ -564,6 +566,9 @@ int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait)
  * If a force is done, it requires a IDDIG debounce filter delay if
  * the filter is configured and enabled. We poll the current mode of
  * the controller to account for this delay.
+ *
+ * @hsotg: Programming view of DWC_otg controller
+ * @host: Host mode flag
  */
 void dwc2_force_mode(struct dwc2_hsotg *hsotg, bool host)
 {
@@ -610,6 +615,8 @@ void dwc2_force_mode(struct dwc2_hsotg *hsotg, bool host)
  * or not because the value of the connector ID status is affected by
  * the force mode. We only need to call this once during probe if
  * dr_mode == OTG.
+ *
+ * @hsotg: Programming view of DWC_otg controller
  */
 static void dwc2_clear_force_mode(struct dwc2_hsotg *hsotg)
 {

--- a/drivers/usb/dwc2/core.h
+++ b/drivers/usb/dwc2/core.h

@@ -164,12 +164,11 @@ struct dwc2_hsotg_req;
  *       and has yet to be completed (maybe due to data move, or simply
  *       awaiting an ack from the core all the data has been completed).
  * @debugfs: File entry for debugfs file for this endpoint.
- * @lock: State lock to protect contents of endpoint.
  * @dir_in: Set to true if this endpoint is of the IN direction, which
  *          means that it is sending data to the Host.
  * @index: The index for the endpoint registers.
  * @mc: Multi Count - number of transactions per microframe
- * @interval - Interval for periodic endpoints, in frames or microframes.
+ * @interval: Interval for periodic endpoints, in frames or microframes.
  * @name: The name array passed to the USB core.
  * @halted: Set if the endpoint has been halted.
  * @periodic: Set if this is a periodic ep, such as Interrupt
@@ -178,10 +177,11 @@ struct dwc2_hsotg_req;
  * @desc_list_dma: The DMA address of descriptor chain currently in use.
  * @desc_list: Pointer to descriptor DMA chain head currently in use.
  * @desc_count: Count of entries within the DMA descriptor chain of EP.
- * @isoc_chain_num: Number of ISOC chain currently in use - either 0 or 1.
  * @next_desc: index of next free descriptor in the ISOC chain under SW control.
+ * @compl_desc: index of next descriptor to be completed by xFerComplete
  * @total_data: The total number of data bytes done.
  * @fifo_size: The size of the FIFO (for periodic IN endpoints)
+ * @fifo_index: For Dedicated FIFO operation, only FIFO0 can be used for EP0.
  * @fifo_load: The amount of data loaded into the FIFO (periodic IN)
  * @last_load: The offset of data for the last start of request.
  * @size_loaded: The last loaded size for DxEPTSIZE for periodic IN
@@ -231,8 +231,8 @@ struct dwc2_hsotg_ep {
 	struct dwc2_dma_desc *desc_list;
 	u8 desc_count;

-	unsigned char isoc_chain_num;
 	unsigned int next_desc;
+	unsigned int compl_desc;

 	char name[10];
 };
@@ -380,6 +380,12 @@ enum dwc2_ep0_state {
  *			is FS.
  *			 0 - No (default)
  *			 1 - Yes
+ * @ipg_isoc_en:	Indicates whether IPG support is enabled or disabled.
+ *			 0 - Disable (default)
+ *			 1 - Enable
+ * @acg_enable:		For enabling Active Clock Gating in the controller
+ *			 0 - No
+ *			 1 - Yes
  * @ulpi_fs_ls:		Make ULPI phy operate in FS/LS mode only
  *			 0 - No (default)
  *			 1 - Yes
@@ -511,6 +517,7 @@ struct dwc2_core_params {
 	bool hird_threshold_en;
 	u8 hird_threshold;
 	bool activate_stm_fs_transceiver;
+	bool ipg_isoc_en;
 	u16 max_packet_count;
 	u32 max_transfer_size;
 	u32 ahbcfg;
@@ -548,7 +555,7 @@ struct dwc2_core_params {
  *
  * The values that are not in dwc2_core_params are documented below.
  *
- * @op_mode		Mode of Operation
+ * @op_mode:		Mode of Operation
  *			 0 - HNP- and SRP-Capable OTG (Host & Device)
  *			 1 - SRP-Capable OTG (Host & Device)
  *			 2 - Non-HNP and Non-SRP Capable OTG (Host & Device)
@@ -556,43 +563,102 @@ struct dwc2_core_params {
  *			 4 - Non-OTG Device
  *			 5 - SRP-Capable Host
  *			 6 - Non-OTG Host
- * @arch		Architecture
+ * @arch:		Architecture
  *			 0 - Slave only
  *			 1 - External DMA
  *			 2 - Internal DMA
- * @power_optimized	Are power optimizations enabled?
- * @num_dev_ep		Number of device endpoints available
- * @num_dev_in_eps	Number of device IN endpoints available
- * @num_dev_perio_in_ep	Number of device periodic IN endpoints
- *			available
- * @dev_token_q_depth	Device Mode IN Token Sequence Learning Queue
+ * @ipg_isoc_en:	This feature indicates that the controller supports
+ *			the worst-case scenario of Rx followed by Rx
+ *			Interpacket Gap (IPG) (32 bitTimes) as per the utmi
+ *			specification for any token following ISOC OUT token.
+ *			 0 - Don't support
+ *			 1 - Support
+ * @power_optimized:	Are power optimizations enabled?
+ * @num_dev_ep:		Number of device endpoints available
+ * @num_dev_in_eps:	Number of device IN endpoints available
+ * @num_dev_perio_in_ep: Number of device periodic IN endpoints
+ *			available
+ * @dev_token_q_depth:	Device Mode IN Token Sequence Learning Queue
  *			Depth
  *			 0 to 30
- * @host_perio_tx_q_depth
+ * @host_perio_tx_q_depth:
  *			Host Mode Periodic Request Queue Depth
  *			 2, 4 or 8
- * @nperio_tx_q_depth
+ * @nperio_tx_q_depth:
  *			Non-Periodic Request Queue Depth
  *			 2, 4 or 8
- * @hs_phy_type		High-speed PHY interface type
+ * @hs_phy_type:	High-speed PHY interface type
  *			 0 - High-speed interface not supported
  *			 1 - UTMI+
  *			 2 - ULPI
  *			 3 - UTMI+ and ULPI
- * @fs_phy_type		Full-speed PHY interface type
+ * @fs_phy_type:	Full-speed PHY interface type
  *			 0 - Full speed interface not supported
  *			 1 - Dedicated full speed interface
  *			 2 - FS pins shared with UTMI+ pins
  *			 3 - FS pins shared with ULPI pins
  * @total_fifo_size:	Total internal RAM for FIFOs (bytes)
- * @hibernation		Is hibernation enabled?
- * @utmi_phy_data_width	UTMI+ PHY data width
+ * @hibernation:	Is hibernation enabled?
+ * @utmi_phy_data_width: UTMI+ PHY data width
  *			 0 - 8 bits
  *			 1 - 16 bits
  *			 2 - 8 or 16 bits
  * @snpsid:		Value from SNPSID register
  * @dev_ep_dirs:	Direction of device endpoints (GHWCFG1)
- * @g_tx_fifo_size[]	Power-on values of TxFIFO sizes
+ * @g_tx_fifo_size:	Power-on values of TxFIFO sizes
+ * @dma_desc_enable:	When DMA mode is enabled, specifies whether to use
+ *			address DMA mode or descriptor DMA mode for accessing
+ *			the data FIFOs. The driver will automatically detect the
+ *			value for this if none is specified.
+ *			 0 - Address DMA
+ *			 1 - Descriptor DMA (default, if available)
+ * @enable_dynamic_fifo: 0 - Use coreConsultant-specified FIFO size parameters
+ *			 1 - Allow dynamic FIFO sizing (default, if available)
+ * @en_multiple_tx_fifo: Specifies whether dedicated per-endpoint transmit FIFOs
+ *			are enabled for non-periodic IN endpoints in device
+ *			mode.
+ * @host_nperio_tx_fifo_size: Number of 4-byte words in the non-periodic Tx FIFO
+ *			in host mode when dynamic FIFO sizing is enabled
+ *			 16 to 32768
+ *			Actual maximum value is autodetected and also
+ *			the default.
+ * @host_perio_tx_fifo_size: Number of 4-byte words in the periodic Tx FIFO in
+ *			host mode when dynamic FIFO sizing is enabled
+ *			 16 to 32768
+ *			Actual maximum value is autodetected and also
+ *			the default.
+ * @max_transfer_size:	The maximum transfer size supported, in bytes
+ *			 2047 to 65,535
+ *			Actual maximum value is autodetected and also
+ *			the default.
+ * @max_packet_count:	The maximum number of packets in a transfer
+ *			 15 to 511
+ *			Actual maximum value is autodetected and also
+ *			the default.
+ * @host_channels:	The number of host channel registers to use
+ *			 1 to 16
+ *			Actual maximum value is autodetected and also
+ *			the default.
+ * @dev_nperio_tx_fifo_size: Number of 4-byte words in the non-periodic Tx FIFO
+ *			in device mode when dynamic FIFO sizing is enabled
+ *			 16 to 32768
+ *			Actual maximum value is autodetected and also
+ *			the default.
+ * @i2c_enable:		Specifies whether to use the I2C interface for a full
+ *			speed PHY. This parameter is only applicable if phy_type
+ *			is FS.
+ *			 0 - No (default)
+ *			 1 - Yes
+ * @acg_enable:		For enabling Active Clock Gating in the controller
+ *			 0 - Disable
+ *			 1 - Enable
+ * @lpm_mode:		For enabling Link Power Management in the controller
+ *			 0 - Disable
+ *			 1 - Enable
+ * @rx_fifo_size:	Number of 4-byte words in the Rx FIFO when dynamic
+ *			FIFO sizing is enabled
+ *			 16 to 32768
+ *			Actual maximum value is autodetected and also
+ *			the default.
  */
 struct dwc2_hw_params {
 	unsigned op_mode:3;
@@ -622,6 +688,7 @@ struct dwc2_hw_params {
 	unsigned hibernation:1;
 	unsigned utmi_phy_data_width:2;
 	unsigned lpm_mode:1;
+	unsigned ipg_isoc_en:1;
 	u32 snpsid;
 	u32 dev_ep_dirs;
 	u32 g_tx_fifo_size[MAX_EPS_CHANNELS];
@@ -642,7 +709,11 @@ struct dwc2_hw_params {
  * @gi2cctl:		Backup of GI2CCTL register
  * @glpmcfg:		Backup of GLPMCFG register
  * @gdfifocfg:		Backup of GDFIFOCFG register
+ * @pcgcctl:		Backup of PCGCCTL register
+ * @pcgcctl1:		Backup of PCGCCTL1 register
+ * @dtxfsiz:		Backup of DTXFSIZ registers for each endpoint
  * @gpwrdn:		Backup of GPWRDN register
+ * @valid:		True if register values were backed up.
  */
 struct dwc2_gregs_backup {
 	u32 gotgctl;
@@ -675,6 +746,7 @@ struct dwc2_gregs_backup {
  * @doeptsiz:		Backup of DOEPTSIZ register
  * @doepdma:		Backup of DOEPDMA register
  * @dtxfsiz:		Backup of DTXFSIZ registers for each endpoint
+ * @valid:		True if register values were backed up.
  */
 struct dwc2_dregs_backup {
 	u32 dcfg;
@@ -698,9 +770,10 @@ struct dwc2_dregs_backup {
  * @hcfg:		Backup of HCFG register
  * @haintmsk:		Backup of HAINTMSK register
  * @hcintmsk:		Backup of HCINTMSK register
- * @hptr0:		Backup of HPTR0 register
+ * @hprt0:		Backup of HPRT0 register
  * @hfir:		Backup of HFIR register
  * @hptxfsiz:		Backup of HPTXFSIZ register
+ * @valid:		True if register values were backed up.
  */
 struct dwc2_hregs_backup {
 	u32 hcfg;
@@ -800,7 +873,7 @@ struct dwc2_hregs_backup {
  * @regs:		Pointer to controller regs
  * @hw_params:		Parameters that were autodetected from the
  *			hardware registers
- * @core_params:	Parameters that define how the core should be configured
+ * @params:		Parameters that define how the core should be configured
  * @op_state:		The operational State, during transitions (a_host=>
  *			a_peripheral and b_device=>b_host) this may not match
  *			the core, but allows the software to determine
@@ -809,10 +882,13 @@ struct dwc2_hregs_backup {
  *			- USB_DR_MODE_PERIPHERAL
  *			- USB_DR_MODE_HOST
  *			- USB_DR_MODE_OTG
- * @hcd_enabled		Host mode sub-driver initialization indicator.
- * @gadget_enabled	Peripheral mode sub-driver initialization indicator.
- * @ll_hw_enabled	Status of low-level hardware resources.
+ * @hcd_enabled:	Host mode sub-driver initialization indicator.
+ * @gadget_enabled:	Peripheral mode sub-driver initialization indicator.
+ * @ll_hw_enabled:	Status of low-level hardware resources.
  * @hibernated:		True if core is hibernated
+ * @frame_number:	Frame number read from the core. For both device
+ *			and host modes. The value ranges from 0
+ *			to HFNUM_MAX_FRNUM.
  * @phy:		The otg phy transceiver structure for phy control.
  * @uphy:		The otg phy transceiver structure for old USB phy
  *			control.
@@ -832,13 +908,25 @@ struct dwc2_hregs_backup {
  *			interrupt
  * @wkp_timer:		Timer object for handling Wakeup Detected interrupt
  * @lx_state:		Lx state of connected device
- * @gregs_backup:	Backup of global registers during suspend
- * @dregs_backup:	Backup of device registers during suspend
- * @hregs_backup:	Backup of host registers during suspend
+ * @gr_backup:		Backup of global registers during suspend
+ * @dr_backup:		Backup of device registers during suspend
+ * @hr_backup:		Backup of host registers during suspend
  *
  * These are for host mode:
  *
  * @flags:		Flags for handling root port state changes
+ * @flags.d32:		Contain all root port flags
+ * @flags.b:		Separate root port flags from each other
+ * @flags.b.port_connect_status_change: True if root port connect status
+ *			changed
+ * @flags.b.port_connect_status: True if device connected to root port
+ * @flags.b.port_reset_change: True if root port reset status changed
+ * @flags.b.port_enable_change: True if root port enable status changed
+ * @flags.b.port_suspend_change: True if root port suspend status changed
+ * @flags.b.port_over_current_change: True if root port over current state
+ *			changed.
+ * @flags.b.port_l1_change: True if root port l1 status changed
+ * @flags.b.reserved: Reserved bits of root port register
  * @non_periodic_sched_inactive: Inactive QHs in the non-periodic schedule.
  *			Transfers associated with these QHs are not currently
  *			assigned to a host channel.
@@ -847,6 +935,9 @@ struct dwc2_hregs_backup {
  *			assigned to a host channel.
  * @non_periodic_qh_ptr: Pointer to next QH to process in the active
  *			non-periodic schedule
+ * @non_periodic_sched_waiting: Waiting QHs in the non-periodic schedule.
+ *			Transfers associated with these QHs are not currently
+ *			assigned to a host channel.
  * @periodic_sched_inactive: Inactive QHs in the periodic schedule. This is a
  *			list of QHs for periodic transfers that are _not_
  *			scheduled for the next frame. Each QH in the list has an
@@ -886,8 +977,6 @@ struct dwc2_hregs_backup {
  * @hs_periodic_bitmap: Bitmap used by the microframe scheduler any time the
  *			host is in high speed mode; low speed schedules are
  *			stored elsewhere since we need one per TT.
- * @frame_number:	Frame number read from the core at SOF. The value ranges
- *			from 0 to HFNUM_MAX_FRNUM.
  * @periodic_qh_count:	Count of periodic QHs, if using several eps. Used for
  *			SOF enable/disable.
  * @free_hc_list:	Free host channels in the controller. This is a list of
@@ -898,8 +987,8 @@ struct dwc2_hregs_backup {
  *			host channel is available for non-periodic transactions.
  * @non_periodic_channels: Number of host channels assigned to non-periodic
  *			transfers
- * @available_host_channels Number of host channels available for the microframe
- *			scheduler to use
+ * @available_host_channels: Number of host channels available for the
+ *			microframe scheduler to use
  * @hc_ptr_array:	Array of pointers to the host channel descriptors.
  *			Allows accessing a host channel descriptor given the
  *			host channel number. This is useful in interrupt
@@ -922,9 +1011,6 @@ struct dwc2_hregs_backup {
  * @dedicated_fifos:	Set if the hardware has dedicated IN-EP fifos.
  * @num_of_eps:		Number of available EPs (excluding EP0)
  * @debug_root:		Root directory for debugfs.
- * @debug_file:		Main status file for debugfs.
- * @debug_testmode:	Testmode status file for debugfs.
- * @debug_fifo:		FIFO status file for debugfs.
  * @ep0_reply:		Request used for ep0 reply.
  * @ep0_buff:		Buffer for EP0 reply data, if needed.
  * @ctrl_buff:		Buffer for EP0 control requests.
@@ -939,7 +1025,37 @@ struct dwc2_hregs_backup {
  * @ctrl_in_desc:	EP0 IN data phase desc chain pointer
  * @ctrl_out_desc_dma:	EP0 OUT data phase desc chain DMA address
  * @ctrl_out_desc:	EP0 OUT data phase desc chain pointer
- * @eps:		The endpoints being supplied to the gadget framework
+ * @irq:		Interrupt request line number
+ * @clk:		Pointer to otg clock
+ * @reset:		Pointer to dwc2 reset controller
+ * @reset_ecc:		Pointer to dwc2 optional reset controller in Stratix10.
+ * @regset:		A pointer to a struct debugfs_regset32, which contains
+ *			a pointer to an array of register definitions, the
+ *			array size and the base address where the register bank
+ *			is to be found.
+ * @bus_suspended:	True if bus is suspended
+ * @last_frame_num:	Number of last frame. Range from 0 to 32768
+ * @frame_num_array:	Used only if CONFIG_USB_DWC2_TRACK_MISSED_SOFS is
+ *			defined, for missed SOFs tracking. Array holds those
+ *			frame numbers which are not equal to last_frame_num + 1
+ * @last_frame_num_array: Used only if CONFIG_USB_DWC2_TRACK_MISSED_SOFS is
+ *			defined, for missed SOFs tracking.
+ *			If current_frame_number != last_frame_num + 1
+ *			then last_frame_num is added to this array
+ * @frame_num_idx:	Actual size of frame_num_array and last_frame_num_array
+ * @dumped_frame_num_array:	1 - if missed SOFs frame numbers dumped
+ *				0 - if missed SOFs frame numbers not dumped
+ * @fifo_mem:		Total internal RAM for FIFOs (bytes)
+ * @fifo_map:		Each bit intended for a concrete FIFO. If that bit is
+ *			set, then that FIFO is used
+ * @gadget:		Represents a usb slave device
+ * @connected:		Used in slave mode. True if device is connected to
+ *			the host
+ * @eps_in:		The IN endpoints being supplied to the gadget framework
+ * @eps_out:		The OUT endpoints being supplied to the gadget framework
+ * @new_connection:	Used in host mode. True if there is a newly connected
+ *			device
+ * @enabled:		Indicates the enabling state of controller
+ *
  */
 struct dwc2_hsotg {
 	struct device *dev;
@@ -954,6 +1070,7 @@ struct dwc2_hsotg {
 	unsigned int gadget_enabled:1;
 	unsigned int ll_hw_enabled:1;
 	unsigned int hibernated:1;
+	u16 frame_number;

 	struct phy *phy;
 	struct usb_phy *uphy;
@@ -1029,7 +1146,6 @@ struct dwc2_hsotg {
 	u16 periodic_usecs;
 	unsigned long hs_periodic_bitmap[
 		DIV_ROUND_UP(DWC2_HS_SCHEDULE_US, BITS_PER_LONG)];
-	u16 frame_number;
 	u16 periodic_qh_count;
 	bool bus_suspended;
 	bool new_connection;

--- a/drivers/usb/dwc2/core_intr.c
+++ b/drivers/usb/dwc2/core_intr.c

@@ -778,6 +778,14 @@ irqreturn_t dwc2_handle_common_intr(int irq, void *dev)
 		goto out;
 	}

+	/* Reading current frame number value in device or host modes. */
+	if (dwc2_is_device_mode(hsotg))
+		hsotg->frame_number = (dwc2_readl(hsotg->regs + DSTS)
+				       & DSTS_SOFFN_MASK) >> DSTS_SOFFN_SHIFT;
+	else
+		hsotg->frame_number = (dwc2_readl(hsotg->regs + HFNUM)
+				       & HFNUM_FRNUM_MASK) >> HFNUM_FRNUM_SHIFT;
+
 	gintsts = dwc2_read_common_intr(hsotg);
 	if (gintsts & ~GINTSTS_PRTINT)
 		retval = IRQ_HANDLED;


@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * debug.h - Designware USB2 DRD controller debug header
  *
  * Copyright (C) 2015 Intel Corporation


@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * debugfs.c - Designware USB2 DRD controller debugfs
  *
  * Copyright (C) 2015 Intel Corporation
@@ -16,12 +16,13 @@
 #if IS_ENABLED(CONFIG_USB_DWC2_PERIPHERAL) || \
 	IS_ENABLED(CONFIG_USB_DWC2_DUAL_ROLE)
 /**
- * testmode_write - debugfs: change usb test mode
- * @seq: The seq file to write to.
- * @v: Unused parameter.
- *
- * This debugfs entry modify the current usb test mode.
+ * testmode_write() - change usb test mode state.
+ * @file: The file to write to.
+ * @ubuf: The buffer where user wrote.
+ * @count: The ubuf size.
+ * @ppos: Unused parameter.
  */
 static ssize_t testmode_write(struct file *file, const char __user *ubuf, size_t
 			      count, loff_t *ppos)
@@ -55,9 +56,9 @@ static ssize_t testmode_write(struct file *file, const char __user *ubuf, size_t
 }
 /**
- * testmode_show - debugfs: show usb test mode state
- * @seq: The seq file to write to.
- * @v: Unused parameter.
+ * testmode_show() - debugfs: show usb test mode state
+ * @s: The seq file to write to.
+ * @unused: Unused parameter.
  *
  * This debugfs entry shows which usb test mode is currently enabled.
  */
@@ -368,7 +369,7 @@ static const struct debugfs_reg32 dwc2_regs[] = {
 	dump_register(GINTSTS),
 	dump_register(GINTMSK),
 	dump_register(GRXSTSR),
-	dump_register(GRXSTSP),
+	/* Omit GRXSTSP */
 	dump_register(GRXFSIZ),
 	dump_register(GNPTXFSIZ),
 	dump_register(GNPTXSTS),
@@ -710,6 +711,7 @@ static int params_show(struct seq_file *seq, void *v)
 	print_param(seq, p, phy_ulpi_ddr);
 	print_param(seq, p, phy_ulpi_ext_vbus);
 	print_param(seq, p, i2c_enable);
+	print_param(seq, p, ipg_isoc_en);
 	print_param(seq, p, ulpi_fs_ls);
 	print_param(seq, p, host_support_fs_ls_low_power);
 	print_param(seq, p, host_ls_low_power_phy_clk);


@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * Copyright (c) 2011 Samsung Electronics Co., Ltd.
  * http://www.samsung.com
  *
@@ -107,7 +107,6 @@ static inline bool using_desc_dma(struct dwc2_hsotg *hsotg)
 /**
  * dwc2_gadget_incr_frame_num - Increments the targeted frame number.
  * @hs_ep: The endpoint
- * @increment: The value to increment by
  *
  * This function will also check if the frame number overruns DSTS_SOFFN_LIMIT.
  * If an overrun occurs it will wrap the value and set the frame_overrun flag.
@@ -190,6 +189,8 @@ static void dwc2_hsotg_ctrl_epint(struct dwc2_hsotg *hsotg,
 /**
  * dwc2_hsotg_tx_fifo_count - return count of TX FIFOs in device mode
+ *
+ * @hsotg: Programming view of the DWC_otg controller
  */
 int dwc2_hsotg_tx_fifo_count(struct dwc2_hsotg *hsotg)
 {
@@ -204,6 +205,8 @@ int dwc2_hsotg_tx_fifo_count(struct dwc2_hsotg *hsotg)
 /**
  * dwc2_hsotg_tx_fifo_total_depth - return total FIFO depth available for
  * device mode TX FIFOs
+ *
+ * @hsotg: Programming view of the DWC_otg controller
  */
 int dwc2_hsotg_tx_fifo_total_depth(struct dwc2_hsotg *hsotg)
 {
@@ -227,6 +230,8 @@ int dwc2_hsotg_tx_fifo_total_depth(struct dwc2_hsotg *hsotg)
 /**
  * dwc2_hsotg_tx_fifo_average_depth - returns average depth of device mode
  * TX FIFOs
+ *
+ * @hsotg: Programming view of the DWC_otg controller
  */
 int dwc2_hsotg_tx_fifo_average_depth(struct dwc2_hsotg *hsotg)
 {
@@ -327,6 +332,7 @@ static void dwc2_hsotg_init_fifo(struct dwc2_hsotg *hsotg)
 }
 /**
+ * dwc2_hsotg_ep_alloc_request - allocate a USB request structure
  * @ep: USB endpoint to allocate request for.
  * @flags: Allocation flags
  *
@@ -793,9 +799,7 @@ static void dwc2_gadget_config_nonisoc_xfer_ddma(struct dwc2_hsotg_ep *hs_ep,
  * @dma_buff: usb requests dma buffer.
  * @len: usb request transfer length.
  *
- * Finds out index of first free entry either in the bottom or up half of
- * descriptor chain depend on which is under SW control and not processed
- * by HW. Then fills that descriptor with the data of the arrived usb request,
+ * Fills next free descriptor with the data of the arrived usb request,
  * frame info, sets Last and IOC bits increments next_desc. If filled
  * descriptor is not the first one, removes L bit from the previous descriptor
  * status.
@@ -810,34 +814,17 @@ static int dwc2_gadget_fill_isoc_desc(struct dwc2_hsotg_ep *hs_ep,
 	u32 mask = 0;
 
 	maxsize = dwc2_gadget_get_desc_params(hs_ep, &mask);
-	if (len > maxsize) {
-		dev_err(hsotg->dev, "wrong len %d\n", len);
-		return -EINVAL;
-	}
-
-	/*
-	 * If SW has already filled half of chain, then return and wait for
-	 * the other chain to be processed by HW.
-	 */
-	if (hs_ep->next_desc == MAX_DMA_DESC_NUM_GENERIC / 2)
-		return -EBUSY;
-
-	/* Increment frame number by interval for IN */
-	if (hs_ep->dir_in)
-		dwc2_gadget_incr_frame_num(hs_ep);
-
-	index = (MAX_DMA_DESC_NUM_GENERIC / 2) * hs_ep->isoc_chain_num +
-		hs_ep->next_desc;
-
-	/* Sanity check of calculated index */
-	if ((hs_ep->isoc_chain_num && index > MAX_DMA_DESC_NUM_GENERIC) ||
-	    (!hs_ep->isoc_chain_num && index > MAX_DMA_DESC_NUM_GENERIC / 2)) {
-		dev_err(hsotg->dev, "wrong index %d for iso chain\n", index);
-		return -EINVAL;
-	}
 
+	index = hs_ep->next_desc;
 	desc = &hs_ep->desc_list[index];
 
+	/* Check if descriptor chain full */
+	if ((desc->status >> DEV_DMA_BUFF_STS_SHIFT) ==
+	    DEV_DMA_BUFF_STS_HREADY) {
+		dev_dbg(hsotg->dev, "%s: desc chain full\n", __func__);
+		return 1;
+	}
+
 	/* Clear L bit of previous desc if more than one entries in the chain */
 	if (hs_ep->next_desc)
 		hs_ep->desc_list[index - 1].status &= ~DEV_DMA_L;
@@ -865,8 +852,14 @@ static int dwc2_gadget_fill_isoc_desc(struct dwc2_hsotg_ep *hs_ep,
 	desc->status &= ~DEV_DMA_BUFF_STS_MASK;
 	desc->status |= (DEV_DMA_BUFF_STS_HREADY << DEV_DMA_BUFF_STS_SHIFT);
 
+	/* Increment frame number by interval for IN */
+	if (hs_ep->dir_in)
+		dwc2_gadget_incr_frame_num(hs_ep);
+
 	/* Update index of last configured entry in the chain */
 	hs_ep->next_desc++;
+	if (hs_ep->next_desc >= MAX_DMA_DESC_NUM_GENERIC)
+		hs_ep->next_desc = 0;
 
 	return 0;
 }
@@ -875,11 +868,8 @@ static int dwc2_gadget_fill_isoc_desc(struct dwc2_hsotg_ep *hs_ep,
  * dwc2_gadget_start_isoc_ddma - start isochronous transfer in DDMA
  * @hs_ep: The isochronous endpoint.
  *
- * Prepare first descriptor chain for isochronous endpoints. Afterwards
+ * Prepare descriptor chain for isochronous endpoints. Afterwards
  * write DMA address to HW and enable the endpoint.
- *
- * Switch between descriptor chains via isoc_chain_num to give SW opportunity
- * to prepare second descriptor chain while first one is being processed by HW.
  */
 static void dwc2_gadget_start_isoc_ddma(struct dwc2_hsotg_ep *hs_ep)
 {
@@ -887,24 +877,34 @@ static void dwc2_gadget_start_isoc_ddma(struct dwc2_hsotg_ep *hs_ep)
 	struct dwc2_hsotg_req *hs_req, *treq;
 	int index = hs_ep->index;
 	int ret;
+	int i;
 	u32 dma_reg;
 	u32 depctl;
 	u32 ctrl;
+	struct dwc2_dma_desc *desc;
 
 	if (list_empty(&hs_ep->queue)) {
 		dev_dbg(hsotg->dev, "%s: No requests in queue\n", __func__);
 		return;
 	}
 
+	/* Initialize descriptor chain by Host Busy status */
+	for (i = 0; i < MAX_DMA_DESC_NUM_GENERIC; i++) {
+		desc = &hs_ep->desc_list[i];
+		desc->status = 0;
+		desc->status |= (DEV_DMA_BUFF_STS_HBUSY
+				 << DEV_DMA_BUFF_STS_SHIFT);
+	}
+
+	hs_ep->next_desc = 0;
 	list_for_each_entry_safe(hs_req, treq, &hs_ep->queue, queue) {
 		ret = dwc2_gadget_fill_isoc_desc(hs_ep, hs_req->req.dma,
 						 hs_req->req.length);
-		if (ret) {
-			dev_dbg(hsotg->dev, "%s: desc chain full\n", __func__);
+		if (ret)
 			break;
-		}
 	}
 
+	hs_ep->compl_desc = 0;
 	depctl = hs_ep->dir_in ? DIEPCTL(index) : DOEPCTL(index);
 	dma_reg = hs_ep->dir_in ? DIEPDMA(index) : DOEPDMA(index);
@@ -914,10 +914,6 @@ static void dwc2_gadget_start_isoc_ddma(struct dwc2_hsotg_ep *hs_ep)
 	ctrl = dwc2_readl(hsotg->regs + depctl);
 	ctrl |= DXEPCTL_EPENA | DXEPCTL_CNAK;
 	dwc2_writel(ctrl, hsotg->regs + depctl);
-
-	/* Switch ISOC descriptor chain number being processed by SW*/
-	hs_ep->isoc_chain_num = (hs_ep->isoc_chain_num ^ 1) & 0x1;
-	hs_ep->next_desc = 0;
 }
 /**
@@ -1235,7 +1231,7 @@ static bool dwc2_gadget_target_frame_elapsed(struct dwc2_hsotg_ep *hs_ep)
 {
 	struct dwc2_hsotg *hsotg = hs_ep->parent;
 	u32 target_frame = hs_ep->target_frame;
-	u32 current_frame = dwc2_hsotg_read_frameno(hsotg);
+	u32 current_frame = hsotg->frame_number;
 	bool frame_overrun = hs_ep->frame_overrun;
 
 	if (!frame_overrun && current_frame >= target_frame)
@@ -1291,6 +1287,9 @@ static int dwc2_hsotg_ep_queue(struct usb_ep *ep, struct usb_request *req,
 	struct dwc2_hsotg *hs = hs_ep->parent;
 	bool first;
 	int ret;
+	u32 maxsize = 0;
+	u32 mask = 0;
 
 	dev_dbg(hs->dev, "%s: req %p: %d@%p, noi=%d, zero=%d, snok=%d\n",
 		ep->name, req, req->length, req->buf, req->no_interrupt,
@@ -1308,6 +1307,24 @@ static int dwc2_hsotg_ep_queue(struct usb_ep *ep, struct usb_request *req,
 	req->actual = 0;
 	req->status = -EINPROGRESS;
 
+	/* In DDMA mode for ISOC's don't queue request if length greater
+	 * than descriptor limits.
+	 */
+	if (using_desc_dma(hs) && hs_ep->isochronous) {
+		maxsize = dwc2_gadget_get_desc_params(hs_ep, &mask);
+		if (hs_ep->dir_in && req->length > maxsize) {
+			dev_err(hs->dev, "wrong length %d (maxsize=%d)\n",
+				req->length, maxsize);
+			return -EINVAL;
+		}
+
+		if (!hs_ep->dir_in && req->length > hs_ep->ep.maxpacket) {
+			dev_err(hs->dev, "ISOC OUT: wrong length %d (mps=%d)\n",
+				req->length, hs_ep->ep.maxpacket);
+			return -EINVAL;
+		}
+	}
+
 	ret = dwc2_hsotg_handle_unaligned_buf_start(hs, hs_ep, hs_req);
 	if (ret)
 		return ret;
@@ -1330,17 +1347,15 @@ static int dwc2_hsotg_ep_queue(struct usb_ep *ep, struct usb_request *req,
 	/*
 	 * Handle DDMA isochronous transfers separately - just add new entry
-	 * to the half of descriptor chain that is not processed by HW.
+	 * to the descriptor chain.
 	 * Transfer will be started once SW gets either one of NAK or
 	 * OutTknEpDis interrupts.
 	 */
-	if (using_desc_dma(hs) && hs_ep->isochronous &&
-	    hs_ep->target_frame != TARGET_FRAME_INITIAL) {
-		ret = dwc2_gadget_fill_isoc_desc(hs_ep, hs_req->req.dma,
-						 hs_req->req.length);
-		if (ret)
-			dev_dbg(hs->dev, "%s: ISO desc chain full\n", __func__);
+	if (using_desc_dma(hs) && hs_ep->isochronous) {
+		if (hs_ep->target_frame != TARGET_FRAME_INITIAL) {
+			dwc2_gadget_fill_isoc_desc(hs_ep, hs_req->req.dma,
+						   hs_req->req.length);
+		}
 		return 0;
 	}
@@ -1350,8 +1365,15 @@ static int dwc2_hsotg_ep_queue(struct usb_ep *ep, struct usb_request *req,
 		return 0;
 	}
 
-	while (dwc2_gadget_target_frame_elapsed(hs_ep))
+	/* Update current frame number value. */
+	hs->frame_number = dwc2_hsotg_read_frameno(hs);
+	while (dwc2_gadget_target_frame_elapsed(hs_ep)) {
 		dwc2_gadget_incr_frame_num(hs_ep);
+		/* Update current frame number value once more as it
+		 * changes here.
+		 */
+		hs->frame_number = dwc2_hsotg_read_frameno(hs);
+	}
 
 	if (hs_ep->target_frame != TARGET_FRAME_INITIAL)
 		dwc2_hsotg_start_req(hs, hs_ep, hs_req, false);
@@ -2011,108 +2033,75 @@ static void dwc2_hsotg_complete_request(struct dwc2_hsotg *hsotg,
  * @hs_ep: The endpoint the request was on.
  *
  * Get first request from the ep queue, determine descriptor on which complete
- * happened. SW based on isoc_chain_num discovers which half of the descriptor
- * chain is currently in use by HW, adjusts dma_address and calculates index
- * of completed descriptor based on the value of DEPDMA register. Update actual
- * length of request, giveback to gadget.
+ * happened. SW discovers which descriptor currently in use by HW, adjusts
+ * dma_address and calculates index of completed descriptor based on the value
+ * of DEPDMA register. Update actual length of request, giveback to gadget.
  */
 static void dwc2_gadget_complete_isoc_request_ddma(struct dwc2_hsotg_ep *hs_ep)
 {
 	struct dwc2_hsotg *hsotg = hs_ep->parent;
 	struct dwc2_hsotg_req *hs_req;
 	struct usb_request *ureq;
-	int index;
-	dma_addr_t dma_addr;
-	u32 dma_reg;
-	u32 depdma;
 	u32 desc_sts;
 	u32 mask;
 
-	hs_req = get_ep_head(hs_ep);
-	if (!hs_req) {
-		dev_warn(hsotg->dev, "%s: ISOC EP queue empty\n", __func__);
-		return;
-	}
-	ureq = &hs_req->req;
-	dma_addr = hs_ep->desc_list_dma;
-
-	/*
-	 * If lower half of descriptor chain is currently use by SW,
-	 * that means higher half is being processed by HW, so shift
-	 * DMA address to higher half of descriptor chain.
-	 */
-	if (!hs_ep->isoc_chain_num)
-		dma_addr += sizeof(struct dwc2_dma_desc) *
-			    (MAX_DMA_DESC_NUM_GENERIC / 2);
-
-	dma_reg = hs_ep->dir_in ? DIEPDMA(hs_ep->index) : DOEPDMA(hs_ep->index);
-	depdma = dwc2_readl(hsotg->regs + dma_reg);
-
-	index = (depdma - dma_addr) / sizeof(struct dwc2_dma_desc) - 1;
-	desc_sts = hs_ep->desc_list[index].status;
-
-	mask = hs_ep->dir_in ? DEV_DMA_ISOC_TX_NBYTES_MASK :
-	       DEV_DMA_ISOC_RX_NBYTES_MASK;
-	ureq->actual = ureq->length -
-		       ((desc_sts & mask) >> DEV_DMA_ISOC_NBYTES_SHIFT);
-
-	/* Adjust actual length for ISOC Out if length is not align of 4 */
-	if (!hs_ep->dir_in && ureq->length & 0x3)
-		ureq->actual += 4 - (ureq->length & 0x3);
-
-	dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, 0);
+	desc_sts = hs_ep->desc_list[hs_ep->compl_desc].status;
+
+	/* Process only descriptors with buffer status set to DMA done */
+	while ((desc_sts & DEV_DMA_BUFF_STS_MASK) >>
+	       DEV_DMA_BUFF_STS_SHIFT == DEV_DMA_BUFF_STS_DMADONE) {
+		hs_req = get_ep_head(hs_ep);
+		if (!hs_req) {
+			dev_warn(hsotg->dev, "%s: ISOC EP queue empty\n", __func__);
+			return;
+		}
+		ureq = &hs_req->req;
+
+		/* Check completion status */
+		if ((desc_sts & DEV_DMA_STS_MASK) >> DEV_DMA_STS_SHIFT ==
+		    DEV_DMA_STS_SUCC) {
+			mask = hs_ep->dir_in ? DEV_DMA_ISOC_TX_NBYTES_MASK :
+			       DEV_DMA_ISOC_RX_NBYTES_MASK;
+			ureq->actual = ureq->length - ((desc_sts & mask) >>
+				       DEV_DMA_ISOC_NBYTES_SHIFT);
+
+			/* Adjust actual len for ISOC Out if len is
+			 * not align of 4
+			 */
+			if (!hs_ep->dir_in && ureq->length & 0x3)
+				ureq->actual += 4 - (ureq->length & 0x3);
+		}
+
+		dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, 0);
+
+		hs_ep->compl_desc++;
+		if (hs_ep->compl_desc > (MAX_DMA_DESC_NUM_GENERIC - 1))
+			hs_ep->compl_desc = 0;
+		desc_sts = hs_ep->desc_list[hs_ep->compl_desc].status;
+	}
 }
 /*
- * dwc2_gadget_start_next_isoc_ddma - start next isoc request, if any.
- * @hs_ep: The isochronous endpoint to be re-enabled.
+ * dwc2_gadget_handle_isoc_bna - handle BNA interrupt for ISOC.
+ * @hs_ep: The isochronous endpoint.
  *
- * If ep has been disabled due to last descriptor servicing (IN endpoint) or
- * BNA (OUT endpoint) check the status of other half of descriptor chain that
- * was under SW control till HW was busy and restart the endpoint if needed.
+ * If EP ISOC OUT then need to flush RX FIFO to remove source of BNA
+ * interrupt. Reset target frame and next_desc to allow to start
+ * ISOC's on NAK interrupt for IN direction or on OUTTKNEPDIS
+ * interrupt for OUT direction.
  */
-static void dwc2_gadget_start_next_isoc_ddma(struct dwc2_hsotg_ep *hs_ep)
+static void dwc2_gadget_handle_isoc_bna(struct dwc2_hsotg_ep *hs_ep)
 {
 	struct dwc2_hsotg *hsotg = hs_ep->parent;
-	u32 depctl;
-	u32 dma_reg;
-	u32 ctrl;
-	u32 dma_addr = hs_ep->desc_list_dma;
-	unsigned char index = hs_ep->index;
-
-	dma_reg = hs_ep->dir_in ? DIEPDMA(index) : DOEPDMA(index);
-	depctl = hs_ep->dir_in ? DIEPCTL(index) : DOEPCTL(index);
-
-	ctrl = dwc2_readl(hsotg->regs + depctl);
-
-	/*
-	 * EP was disabled if HW has processed last descriptor or BNA was set.
-	 * So restart ep if SW has prepared new descriptor chain in ep_queue
-	 * routine while HW was busy.
-	 */
-	if (!(ctrl & DXEPCTL_EPENA)) {
-		if (!hs_ep->next_desc) {
-			dev_dbg(hsotg->dev, "%s: No more ISOC requests\n",
-				__func__);
-			return;
-		}
-
-		dma_addr += sizeof(struct dwc2_dma_desc) *
-			    (MAX_DMA_DESC_NUM_GENERIC / 2) *
-			    hs_ep->isoc_chain_num;
-		dwc2_writel(dma_addr, hsotg->regs + dma_reg);
-
-		ctrl |= DXEPCTL_EPENA | DXEPCTL_CNAK;
-		dwc2_writel(ctrl, hsotg->regs + depctl);
-
-		/* Switch ISOC descriptor chain number being processed by SW*/
-		hs_ep->isoc_chain_num = (hs_ep->isoc_chain_num ^ 1) & 0x1;
-		hs_ep->next_desc = 0;
-
-		dev_dbg(hsotg->dev, "%s: Restarted isochronous endpoint\n",
-			__func__);
-	}
+
+	if (!hs_ep->dir_in)
+		dwc2_flush_rx_fifo(hsotg);
+	dwc2_hsotg_complete_request(hsotg, hs_ep, get_ep_head(hs_ep), 0);
+
+	hs_ep->target_frame = TARGET_FRAME_INITIAL;
+	hs_ep->next_desc = 0;
+	hs_ep->compl_desc = 0;
 }
 /**
@@ -2441,6 +2430,7 @@ static u32 dwc2_hsotg_ep0_mps(unsigned int mps)
  * @ep: The index number of the endpoint
  * @mps: The maximum packet size in bytes
  * @mc: The multicount value
+ * @dir_in: True if direction is in.
  *
  * Configure the maximum packet size for the given endpoint, updating
  * the hardware control registers to reflect this.
@@ -2731,6 +2721,8 @@ static void dwc2_gadget_handle_ep_disabled(struct dwc2_hsotg_ep *hs_ep)
 		dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req,
 					    -ENODATA);
 		dwc2_gadget_incr_frame_num(hs_ep);
+		/* Update current frame number value. */
+		hsotg->frame_number = dwc2_hsotg_read_frameno(hsotg);
 	} while (dwc2_gadget_target_frame_elapsed(hs_ep));
 
 	dwc2_gadget_start_next_request(hs_ep);
@@ -2738,7 +2730,7 @@ static void dwc2_gadget_handle_ep_disabled(struct dwc2_hsotg_ep *hs_ep)
 /**
  * dwc2_gadget_handle_out_token_ep_disabled - handle DXEPINT_OUTTKNEPDIS
- * @hs_ep: The endpoint on which interrupt is asserted.
+ * @ep: The endpoint on which interrupt is asserted.
  *
  * This is starting point for ISOC-OUT transfer, synchronization done with
  * first out token received from host while corresponding EP is disabled.
@@ -2763,7 +2755,7 @@ static void dwc2_gadget_handle_out_token_ep_disabled(struct dwc2_hsotg_ep *ep)
 	 */
 	tmp = dwc2_hsotg_read_frameno(hsotg);
-	dwc2_hsotg_complete_request(hsotg, ep, get_ep_head(ep), -ENODATA);
+	dwc2_hsotg_complete_request(hsotg, ep, get_ep_head(ep), 0);
 
 	if (using_desc_dma(hsotg)) {
 		if (ep->target_frame == TARGET_FRAME_INITIAL) {
@@ -2816,18 +2808,25 @@ static void dwc2_gadget_handle_nak(struct dwc2_hsotg_ep *hs_ep)
 {
 	struct dwc2_hsotg *hsotg = hs_ep->parent;
 	int dir_in = hs_ep->dir_in;
+	u32 tmp;
 
 	if (!dir_in || !hs_ep->isochronous)
 		return;
 
 	if (hs_ep->target_frame == TARGET_FRAME_INITIAL) {
-		hs_ep->target_frame = dwc2_hsotg_read_frameno(hsotg);
+
+		tmp = dwc2_hsotg_read_frameno(hsotg);
 		if (using_desc_dma(hsotg)) {
+			dwc2_hsotg_complete_request(hsotg, hs_ep,
+						    get_ep_head(hs_ep), 0);
+
+			hs_ep->target_frame = tmp;
+			dwc2_gadget_incr_frame_num(hs_ep);
 			dwc2_gadget_start_isoc_ddma(hs_ep);
 			return;
 		}
+
+		hs_ep->target_frame = tmp;
 		if (hs_ep->interval > 1) {
 			u32 ctrl = dwc2_readl(hsotg->regs +
 					      DIEPCTL(hs_ep->index));
@@ -2843,7 +2842,8 @@ static void dwc2_gadget_handle_nak(struct dwc2_hsotg_ep *hs_ep)
 			get_ep_head(hs_ep), 0);
 	}
 
-	dwc2_gadget_incr_frame_num(hs_ep);
+	if (!using_desc_dma(hsotg))
+		dwc2_gadget_incr_frame_num(hs_ep);
 }
 /**
@@ -2901,9 +2901,9 @@ static void dwc2_hsotg_epint(struct dwc2_hsotg *hsotg, unsigned int idx,
 	/* In DDMA handle isochronous requests separately */
 	if (using_desc_dma(hsotg) && hs_ep->isochronous) {
-		dwc2_gadget_complete_isoc_request_ddma(hs_ep);
-
-		/* Try to start next isoc request */
-		dwc2_gadget_start_next_isoc_ddma(hs_ep);
+		/* XferCompl set along with BNA */
+		if (!(ints & DXEPINT_BNAINTR))
+			dwc2_gadget_complete_isoc_request_ddma(hs_ep);
 	} else if (dir_in) {
 		/*
 		 * We get OutDone from the FIFO, so we only
@@ -2978,15 +2978,8 @@ static void dwc2_hsotg_epint(struct dwc2_hsotg *hsotg, unsigned int idx,
 	if (ints & DXEPINT_BNAINTR) {
 		dev_dbg(hsotg->dev, "%s: BNA interrupt\n", __func__);
-
-		/*
-		 * Try to start next isoc request, if any.
-		 * Sometimes the endpoint remains enabled after BNA interrupt
-		 * assertion, which is not expected, hence we can enter here
-		 * couple of times.
-		 */
 		if (hs_ep->isochronous)
-			dwc2_gadget_start_next_isoc_ddma(hs_ep);
+			dwc2_gadget_handle_isoc_bna(hs_ep);
 	}
 
 	if (dir_in && !hs_ep->isochronous) {
@@ -3197,6 +3190,7 @@ static void dwc2_hsotg_irq_fifoempty(struct dwc2_hsotg *hsotg, bool periodic)
 /**
  * dwc2_hsotg_core_init - issue softreset to the core
  * @hsotg: The device state
+ * @is_usb_reset: Usb resetting flag
  *
  * Issue a soft reset to the core, and await the core finishing it.
@@ -3259,6 +3253,9 @@ void dwc2_hsotg_core_init_disconnected(struct dwc2_hsotg *hsotg,
 		dcfg |= DCFG_DEVSPD_HS;
 	}
 
+	if (hsotg->params.ipg_isoc_en)
+		dcfg |= DCFG_IPG_ISOC_SUPPORDED;
+
 	dwc2_writel(dcfg, hsotg->regs + DCFG);
 
 	/* Clear any pending OTG interrupts */
@@ -3320,8 +3317,10 @@ void dwc2_hsotg_core_init_disconnected(struct dwc2_hsotg *hsotg,
 		    hsotg->regs + DOEPMSK);
 
 	/* Enable BNA interrupt for DDMA */
-	if (using_desc_dma(hsotg))
+	if (using_desc_dma(hsotg)) {
 		dwc2_set_bit(hsotg->regs + DOEPMSK, DOEPMSK_BNAMSK);
+		dwc2_set_bit(hsotg->regs + DIEPMSK, DIEPMSK_BNAININTRMSK);
+	}
 
 	dwc2_writel(0, hsotg->regs + DAINTMSK);
@@ -3427,7 +3426,7 @@ static void dwc2_gadget_handle_incomplete_isoc_in(struct dwc2_hsotg *hsotg)
 	daintmsk = dwc2_readl(hsotg->regs + DAINTMSK);
 
-	for (idx = 1; idx <= hsotg->num_of_eps; idx++) {
+	for (idx = 1; idx < hsotg->num_of_eps; idx++) {
 		hs_ep = hsotg->eps_in[idx];
 		/* Proceed only unmasked ISOC EPs */
 		if (!hs_ep->isochronous || (BIT(idx) & ~daintmsk))
@@ -3473,7 +3472,7 @@ static void dwc2_gadget_handle_incomplete_isoc_out(struct dwc2_hsotg *hsotg)
 	daintmsk = dwc2_readl(hsotg->regs + DAINTMSK);
 	daintmsk >>= DAINT_OUTEP_SHIFT;
 
-	for (idx = 1; idx <= hsotg->num_of_eps; idx++) {
+	for (idx = 1; idx < hsotg->num_of_eps; idx++) {
 		hs_ep = hsotg->eps_out[idx];
 		/* Proceed only unmasked ISOC EPs */
 		if (!hs_ep->isochronous || (BIT(idx) & ~daintmsk))
@@ -3647,7 +3646,7 @@ irq_retry:
 		dwc2_writel(gintmsk, hsotg->regs + GINTMSK);
 
 		dev_dbg(hsotg->dev, "GOUTNakEff triggered\n");
-		for (idx = 1; idx <= hsotg->num_of_eps; idx++) {
+		for (idx = 1; idx < hsotg->num_of_eps; idx++) {
 			hs_ep = hsotg->eps_out[idx];
 			/* Proceed only unmasked ISOC EPs */
 			if (!hs_ep->isochronous || (BIT(idx) & ~daintmsk))
@@ -3789,6 +3788,7 @@ static int dwc2_hsotg_ep_enable(struct usb_ep *ep,
 	unsigned int dir_in;
 	unsigned int i, val, size;
 	int ret = 0;
+	unsigned char ep_type;
 
 	dev_dbg(hsotg->dev,
 		"%s: ep %s: a 0x%02x, attr 0x%02x, mps 0x%04x, intr %d\n",
@@ -3807,9 +3807,26 @@ static int dwc2_hsotg_ep_enable(struct usb_ep *ep,
 		return -EINVAL;
 	}
 
+	ep_type = desc->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK;
 	mps = usb_endpoint_maxp(desc);
 	mc = usb_endpoint_maxp_mult(desc);
 
+	/* ISOC IN in DDMA supported bInterval up to 10 */
+	if (using_desc_dma(hsotg) && ep_type == USB_ENDPOINT_XFER_ISOC &&
+	    dir_in && desc->bInterval > 10) {
+		dev_err(hsotg->dev,
+			"%s: ISOC IN, DDMA: bInterval>10 not supported!\n", __func__);
+		return -EINVAL;
+	}
+
+	/* High bandwidth ISOC OUT in DDMA not supported */
+	if (using_desc_dma(hsotg) && ep_type == USB_ENDPOINT_XFER_ISOC &&
+	    !dir_in && mc > 1) {
+		dev_err(hsotg->dev,
+			"%s: ISOC OUT, DDMA: HB not supported!\n", __func__);
+		return -EINVAL;
+	}
+
 	/* note, we handle this here instead of dwc2_hsotg_set_ep_maxpacket */
 	epctrl_reg = dir_in ? DIEPCTL(index) : DOEPCTL(index);
@@ -3850,15 +3867,15 @@ static int dwc2_hsotg_ep_enable(struct usb_ep *ep,
 	hs_ep->halted = 0;
 	hs_ep->interval = desc->bInterval;
 
-	switch (desc->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) {
+	switch (ep_type) {
 	case USB_ENDPOINT_XFER_ISOC:
 		epctrl |= DXEPCTL_EPTYPE_ISO;
 		epctrl |= DXEPCTL_SETEVENFR;
 		hs_ep->isochronous = 1;
 		hs_ep->interval = 1 << (desc->bInterval - 1);
 		hs_ep->target_frame = TARGET_FRAME_INITIAL;
-		hs_ep->isoc_chain_num = 0;
 		hs_ep->next_desc = 0;
+		hs_ep->compl_desc = 0;
 		if (dir_in) {
 			hs_ep->periodic = 1;
 			mask = dwc2_readl(hsotg->regs + DIEPMSK);
@ -4301,7 +4318,6 @@ err:
/** /**
* dwc2_hsotg_udc_stop - stop the udc * dwc2_hsotg_udc_stop - stop the udc
* @gadget: The usb gadget state * @gadget: The usb gadget state
* @driver: The usb gadget driver
* *
* Stop udc hw block and stay tunned for future transmissions * Stop udc hw block and stay tunned for future transmissions
*/ */
@ -4453,6 +4469,7 @@ static const struct usb_gadget_ops dwc2_hsotg_gadget_ops = {
* @hsotg: The device state. * @hsotg: The device state.
* @hs_ep: The endpoint to be initialised. * @hs_ep: The endpoint to be initialised.
* @epnum: The endpoint number * @epnum: The endpoint number
* @dir_in: True if direction is in.
* *
* Initialise the given endpoint (as part of the probe and device state * Initialise the given endpoint (as part of the probe and device state
* creation) to give to the gadget driver. Setup the endpoint name, any * creation) to give to the gadget driver. Setup the endpoint name, any
@ -4526,7 +4543,7 @@ static void dwc2_hsotg_initep(struct dwc2_hsotg *hsotg,
/** /**
* dwc2_hsotg_hw_cfg - read HW configuration registers * dwc2_hsotg_hw_cfg - read HW configuration registers
* @param: The device state * @hsotg: Programming view of the DWC_otg controller
* *
* Read the USB core HW configuration registers * Read the USB core HW configuration registers
*/ */
@ -4582,7 +4599,8 @@ static int dwc2_hsotg_hw_cfg(struct dwc2_hsotg *hsotg)
/** /**
* dwc2_hsotg_dump - dump state of the udc * dwc2_hsotg_dump - dump state of the udc
* @param: The device state * @hsotg: Programming view of the DWC_otg controller
*
*/ */
static void dwc2_hsotg_dump(struct dwc2_hsotg *hsotg) static void dwc2_hsotg_dump(struct dwc2_hsotg *hsotg)
{ {
@ -4633,7 +4651,8 @@ static void dwc2_hsotg_dump(struct dwc2_hsotg *hsotg)
/** /**
* dwc2_gadget_init - init function for gadget * dwc2_gadget_init - init function for gadget
* @dwc2: The data structure for the DWC2 driver. * @hsotg: Programming view of the DWC_otg controller
*
*/ */
int dwc2_gadget_init(struct dwc2_hsotg *hsotg) int dwc2_gadget_init(struct dwc2_hsotg *hsotg)
{ {
@ -4730,7 +4749,8 @@ int dwc2_gadget_init(struct dwc2_hsotg *hsotg)
/** /**
* dwc2_hsotg_remove - remove function for hsotg driver * dwc2_hsotg_remove - remove function for hsotg driver
* @pdev: The platform information for the driver * @hsotg: Programming view of the DWC_otg controller
*
*/ */
int dwc2_hsotg_remove(struct dwc2_hsotg *hsotg) int dwc2_hsotg_remove(struct dwc2_hsotg *hsotg)
{ {
@ -5011,7 +5031,7 @@ int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg)
* *
* @hsotg: Programming view of the DWC_otg controller * @hsotg: Programming view of the DWC_otg controller
* @rem_wakeup: indicates whether resume is initiated by Device or Host. * @rem_wakeup: indicates whether resume is initiated by Device or Host.
* @param reset: indicates whether resume is initiated by Reset. * @reset: indicates whether resume is initiated by Reset.
* *
* Return non-zero if failed to exit from hibernation. * Return non-zero if failed to exit from hibernation.
*/ */
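The isochronous setup in the first hunk derives the endpoint interval from the descriptor's bInterval field: for periodic endpoints, bInterval is an exponent and the interval is 2^(bInterval - 1) (micro)frames. A minimal user-space model of that one line (the helper name and range check are ours, not the driver's):

```c
#include <assert.h>

/*
 * Models "hs_ep->interval = 1 << (desc->bInterval - 1)": bInterval is
 * an exponent with valid range 1..16, so the interval is a power of two.
 */
static unsigned int isoc_interval(unsigned int b_interval)
{
	if (b_interval < 1 || b_interval > 16)
		return 0;	/* invalid descriptor value */
	return 1u << (b_interval - 1);
}
```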


@@ -597,7 +597,7 @@ u32 dwc2_calc_frame_interval(struct dwc2_hsotg *hsotg)
  * dwc2_read_packet() - Reads a packet from the Rx FIFO into the destination
  * buffer
  *
- * @core_if: Programming view of DWC_otg controller
+ * @hsotg: Programming view of DWC_otg controller
  * @dest: Destination buffer for the packet
  * @bytes: Number of bytes to copy to the destination
  */
@@ -4087,7 +4087,6 @@ static struct dwc2_hsotg *dwc2_hcd_to_hsotg(struct usb_hcd *hcd)
  * then the refcount for the structure will go to 0 and we'll free it.
  *
  * @hsotg: The HCD state structure for the DWC OTG controller.
- * @qh: The QH structure.
  * @context: The priv pointer from a struct dwc2_hcd_urb.
  * @mem_flags: Flags for allocating memory.
  * @ttport: We'll return this device's port number here. That's used to


@@ -80,7 +80,7 @@ struct dwc2_qh;
  * @xfer_count: Number of bytes transferred so far
  * @start_pkt_count: Packet count at start of transfer
  * @xfer_started: True if the transfer has been started
- * @ping: True if a PING request should be issued on this channel
+ * @do_ping: True if a PING request should be issued on this channel
  * @error_state: True if the error count for this transaction is non-zero
  * @halt_on_queue: True if this channel should be halted the next time a
  *                 request is queued for the channel. This is necessary in
@@ -102,7 +102,7 @@ struct dwc2_qh;
  * @schinfo: Scheduling micro-frame bitmap
  * @ntd: Number of transfer descriptors for the transfer
  * @halt_status: Reason for halting the host channel
- * @hcint Contents of the HCINT register when the interrupt came
+ * @hcint: Contents of the HCINT register when the interrupt came
  * @qh: QH for the transfer being processed by this channel
  * @hc_list_entry: For linking to list of host channels
  * @desc_list_addr: Current QH's descriptor list DMA address
@@ -237,7 +237,7 @@ struct dwc2_tt {
 /**
  * struct dwc2_hs_transfer_time - Info about a transfer on the high speed bus.
  *
- * @start_schedule_usecs: The start time on the main bus schedule. Note that
+ * @start_schedule_us: The start time on the main bus schedule. Note that
  *                        the main bus schedule is tightly packed and this
  *                        time should be interpreted as tightly packed (so
  *                        uFrame 0 starts at 0 us, uFrame 1 starts at 100 us
@@ -301,7 +301,6 @@ struct dwc2_hs_transfer_time {
  *                   "struct dwc2_tt". Not used if this device is high
  *                   speed. Note that this is in "schedule slice" which
  *                   is tightly packed.
- * @ls_duration_us: Duration on the low speed bus schedule.
  * @ntd: Actual number of transfer descriptors in a list
  * @qtd_list: List of QTDs for this QH
  * @channel: Host channel currently processing transfers for this QH
@@ -315,7 +314,7 @@ struct dwc2_hs_transfer_time {
  *                      descriptor
  * @unreserve_timer: Timer for releasing periodic reservation.
  * @wait_timer: Timer used to wait before re-queuing.
- * @dwc2_tt: Pointer to our tt info (or NULL if no tt).
+ * @dwc_tt: Pointer to our tt info (or NULL if no tt).
  * @ttport: Port number within our tt.
  * @tt_buffer_dirty True if clear_tt_buffer_complete is pending
  * @unreserve_pending: True if we planned to unreserve but haven't yet.
@@ -325,6 +324,7 @@ struct dwc2_hs_transfer_time {
  *                    periodic transfers and is ignored for periodic ones.
  * @wait_timer_cancel: Set to true to cancel the wait_timer.
  *
+ * @tt_buffer_dirty: True if EP's TT buffer is not clean.
 * A Queue Head (QH) holds the static characteristics of an endpoint and
 * maintains a list of transfers (QTDs) for that endpoint. A QH structure may
 * be entered in either the non-periodic or periodic schedule.
@@ -400,6 +400,10 @@ struct dwc2_qh {
  * @urb: URB for this transfer
  * @qh: Queue head for this QTD
  * @qtd_list_entry: For linking to the QH's list of QTDs
+ * @isoc_td_first: Index of first activated isochronous transfer
+ *                 descriptor in Descriptor DMA mode
+ * @isoc_td_last: Index of last activated isochronous transfer
+ *                descriptor in Descriptor DMA mode
  *
  * A Queue Transfer Descriptor (QTD) holds the state of a bulk, control,
  * interrupt, or isochronous transfer. A single QTD is created for each URB


@@ -332,6 +332,7 @@ static void dwc2_release_channel_ddma(struct dwc2_hsotg *hsotg,
  *
  * @hsotg: The HCD state structure for the DWC OTG controller
  * @qh: The QH to init
+ * @mem_flags: Indicates the type of memory allocation
  *
  * Return: 0 if successful, negative error code otherwise
  *


@@ -478,6 +478,12 @@ static u32 dwc2_get_actual_xfer_length(struct dwc2_hsotg *hsotg,
  * of the URB based on the number of bytes transferred via the host channel.
  * Sets the URB status if the data transfer is finished.
  *
+ * @hsotg: Programming view of the DWC_otg controller
+ * @chan: Programming view of host channel
+ * @chnum: Channel number
+ * @urb: Processing URB
+ * @qtd: Queue transfer descriptor
+ *
  * Return: 1 if the data transfer specified by the URB is completely finished,
  *         0 otherwise
  */
@@ -566,6 +572,12 @@ void dwc2_hcd_save_data_toggle(struct dwc2_hsotg *hsotg,
  * halt_status. Completes the Isochronous URB if all the URB frames have been
  * completed.
  *
+ * @hsotg: Programming view of the DWC_otg controller
+ * @chan: Programming view of host channel
+ * @chnum: Channel number
+ * @halt_status: Reason for halting a host channel
+ * @qtd: Queue transfer descriptor
+ *
  * Return: DWC2_HC_XFER_COMPLETE if there are more frames remaining to be
  *         transferred in the URB. Otherwise return DWC2_HC_XFER_URB_COMPLETE.
  */


@@ -679,6 +679,7 @@ static int dwc2_hs_pmap_schedule(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh,
  *
  * @hsotg: The HCD state structure for the DWC OTG controller.
  * @qh: QH for the periodic transfer.
+ * @index: Transfer index
  */
 static void dwc2_hs_pmap_unschedule(struct dwc2_hsotg *hsotg,
 				    struct dwc2_qh *qh, int index)
@@ -1276,7 +1277,7 @@ static void dwc2_do_unreserve(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh)
  * release the reservation. This worker is called after the appropriate
  * delay.
  *
- * @work: Pointer to a qh unreserve_work.
+ * @t: Address to a qh unreserve_work.
  */
 static void dwc2_unreserve_timer_fn(struct timer_list *t)
 {
@@ -1631,7 +1632,7 @@ static void dwc2_qh_init(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh,
  * @hsotg:        The HCD state structure for the DWC OTG controller
  * @urb:          Holds the information about the device/endpoint needed
  *                to initialize the QH
- * @atomic_alloc: Flag to do atomic allocation if needed
+ * @mem_flags:    Flags for allocating memory.
  *
  * Return: Pointer to the newly allocated QH, or NULL on error
  */


@@ -311,6 +311,7 @@
 #define GHWCFG4_UTMI_PHY_DATA_WIDTH_MASK	(0x3 << 14)
 #define GHWCFG4_UTMI_PHY_DATA_WIDTH_SHIFT	14
 #define GHWCFG4_ACG_SUPPORTED			BIT(12)
+#define GHWCFG4_IPG_ISOC_SUPPORTED		BIT(11)
 #define GHWCFG4_UTMI_PHY_DATA_WIDTH_8		0
 #define GHWCFG4_UTMI_PHY_DATA_WIDTH_16		1
 #define GHWCFG4_UTMI_PHY_DATA_WIDTH_8_OR_16	2
@@ -424,6 +425,7 @@
 #define DCFG_EPMISCNT_SHIFT		18
 #define DCFG_EPMISCNT_LIMIT		0x1f
 #define DCFG_EPMISCNT(_x)		((_x) << 18)
+#define DCFG_IPG_ISOC_SUPPORDED		BIT(17)
 #define DCFG_PERFRINT_MASK		(0x3 << 11)
 #define DCFG_PERFRINT_SHIFT		11
 #define DCFG_PERFRINT_LIMIT		0x3


@@ -70,6 +70,7 @@ static void dwc2_set_his_params(struct dwc2_hsotg *hsotg)
 			GAHBCFG_HBSTLEN_SHIFT;
 	p->uframe_sched = false;
 	p->change_speed_quirk = true;
+	p->power_down = false;
 }
 
 static void dwc2_set_rk_params(struct dwc2_hsotg *hsotg)
@@ -269,6 +270,9 @@ static void dwc2_set_param_power_down(struct dwc2_hsotg *hsotg)
 /**
  * dwc2_set_default_params() - Set all core parameters to their
  * auto-detected default values.
+ *
+ * @hsotg: Programming view of the DWC_otg controller
+ *
  */
 static void dwc2_set_default_params(struct dwc2_hsotg *hsotg)
 {
@@ -298,6 +302,7 @@ static void dwc2_set_default_params(struct dwc2_hsotg *hsotg)
 	p->besl = true;
 	p->hird_threshold_en = true;
 	p->hird_threshold = 4;
+	p->ipg_isoc_en = false;
 	p->max_packet_count = hw->max_packet_count;
 	p->max_transfer_size = hw->max_transfer_size;
 	p->ahbcfg = GAHBCFG_HBSTLEN_INCR << GAHBCFG_HBSTLEN_SHIFT;
@@ -338,6 +343,8 @@ static void dwc2_set_default_params(struct dwc2_hsotg *hsotg)
 /**
  * dwc2_get_device_properties() - Read in device properties.
  *
+ * @hsotg: Programming view of the DWC_otg controller
+ *
  * Read in the device properties and adjust core parameters if needed.
  */
 static void dwc2_get_device_properties(struct dwc2_hsotg *hsotg)
@@ -549,7 +556,7 @@ static void dwc2_check_param_tx_fifo_sizes(struct dwc2_hsotg *hsotg)
 }
 
 #define CHECK_RANGE(_param, _min, _max, _def) do {			\
-		if ((hsotg->params._param) < (_min) ||			\
+		if ((int)(hsotg->params._param) < (_min) ||		\
 		    (hsotg->params._param) > (_max)) {			\
 			dev_warn(hsotg->dev, "%s: Invalid parameter %s=%d\n", \
 				 __func__, #_param, hsotg->params._param); \
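The `(int)` cast added to CHECK_RANGE guards against the usual arithmetic conversions: if the checked field is unsigned and a bound is negative, the bound is converted to a huge unsigned value and the `< _min` check misfires. A stand-alone illustration (the field and helpers are ours, for demonstration only):

```c
#include <assert.h>

/* A parameter field modeled as unsigned, bounds as plain ints. */
static unsigned int v = 5;

/* Without a cast, min is converted to unsigned: 5 < (unsigned)-1 is true. */
static int below_min_unsigned(int min) { return v < min; }

/* With the (int) cast the comparison is signed, as CHECK_RANGE intends. */
static int below_min_signed(int min)   { return (int)v < min; }
```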
@@ -579,6 +586,7 @@ static void dwc2_check_params(struct dwc2_hsotg *hsotg)
 	CHECK_BOOL(enable_dynamic_fifo, hw->enable_dynamic_fifo);
 	CHECK_BOOL(en_multiple_tx_fifo, hw->en_multiple_tx_fifo);
 	CHECK_BOOL(i2c_enable, hw->i2c_enable);
+	CHECK_BOOL(ipg_isoc_en, hw->ipg_isoc_en);
 	CHECK_BOOL(acg_enable, hw->acg_enable);
 	CHECK_BOOL(reload_ctl, (hsotg->hw_params.snpsid > DWC2_CORE_REV_2_92a));
 	CHECK_BOOL(lpm, (hsotg->hw_params.snpsid >= DWC2_CORE_REV_2_80a));
@@ -688,6 +696,9 @@ static void dwc2_get_dev_hwparams(struct dwc2_hsotg *hsotg)
 /**
  * During device initialization, read various hardware configuration
  * registers and interpret the contents.
+ *
+ * @hsotg: Programming view of the DWC_otg controller
+ *
  */
 int dwc2_get_hwparams(struct dwc2_hsotg *hsotg)
 {
@@ -772,6 +783,7 @@ int dwc2_get_hwparams(struct dwc2_hsotg *hsotg)
 	hw->utmi_phy_data_width = (hwcfg4 & GHWCFG4_UTMI_PHY_DATA_WIDTH_MASK) >>
 				  GHWCFG4_UTMI_PHY_DATA_WIDTH_SHIFT;
 	hw->acg_enable = !!(hwcfg4 & GHWCFG4_ACG_SUPPORTED);
+	hw->ipg_isoc_en = !!(hwcfg4 & GHWCFG4_IPG_ISOC_SUPPORTED);
 
 	/* fifo sizes */
 	hw->rx_fifo_size = (grxfsiz & GRXFSIZ_DEPTH_MASK) >>


@@ -77,6 +77,12 @@ static int dwc2_pci_quirks(struct pci_dev *pdev, struct platform_device *dwc2)
 	return 0;
 }
 
+/**
+ * dwc2_pci_probe() - Provides the cleanup entry points for the DWC_otg PCI
+ * driver
+ *
+ * @pci: The programming view of DWC_otg PCI
+ */
 static void dwc2_pci_remove(struct pci_dev *pci)
 {
 	struct dwc2_pci_glue *glue = pci_get_drvdata(pci);


@@ -106,4 +106,16 @@ config USB_DWC3_ST
 	  inside (i.e. STiH407).
 	  Say 'Y' or 'M' if you have one such device.
 
+config USB_DWC3_QCOM
+	tristate "Qualcomm Platform"
+	depends on ARCH_QCOM || COMPILE_TEST
+	depends on OF
+	default USB_DWC3
+	help
+	  Some Qualcomm SoCs use DesignWare Core IP for USB2/3
+	  functionality.
+	  This driver also handles Qscratch wrapper which is needed
+	  for peripheral mode support.
+	  Say 'Y' or 'M' if you have one such device.
+
 endif


@@ -48,3 +48,4 @@ obj-$(CONFIG_USB_DWC3_PCI)		+= dwc3-pci.o
 obj-$(CONFIG_USB_DWC3_KEYSTONE)		+= dwc3-keystone.o
 obj-$(CONFIG_USB_DWC3_OF_SIMPLE)	+= dwc3-of-simple.o
 obj-$(CONFIG_USB_DWC3_ST)		+= dwc3-st.o
+obj-$(CONFIG_USB_DWC3_QCOM)		+= dwc3-qcom.o


@@ -8,6 +8,7 @@
  *		Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  */
 
+#include <linux/clk.h>
 #include <linux/version.h>
 #include <linux/module.h>
 #include <linux/kernel.h>
@@ -24,6 +25,7 @@
 #include <linux/of.h>
 #include <linux/acpi.h>
 #include <linux/pinctrl/consumer.h>
+#include <linux/reset.h>
 
 #include <linux/usb/ch9.h>
 #include <linux/usb/gadget.h>
@@ -266,6 +268,12 @@ done:
 	return 0;
 }
 
+static const struct clk_bulk_data dwc3_core_clks[] = {
+	{ .id = "ref" },
+	{ .id = "bus_early" },
+	{ .id = "suspend" },
+};
+
 /*
  * dwc3_frame_length_adjustment - Adjusts frame length if required
  * @dwc3: Pointer to our controller context structure
@@ -667,6 +675,9 @@ static void dwc3_core_exit(struct dwc3 *dwc)
 	usb_phy_set_suspend(dwc->usb3_phy, 1);
 	phy_power_off(dwc->usb2_generic_phy);
 	phy_power_off(dwc->usb3_generic_phy);
+	clk_bulk_disable(dwc->num_clks, dwc->clks);
+	clk_bulk_unprepare(dwc->num_clks, dwc->clks);
+	reset_control_assert(dwc->reset);
 }
 
 static bool dwc3_core_is_valid(struct dwc3 *dwc)
@@ -1245,7 +1256,7 @@ static void dwc3_check_params(struct dwc3 *dwc)
 static int dwc3_probe(struct platform_device *pdev)
 {
 	struct device		*dev = &pdev->dev;
-	struct resource		*res;
+	struct resource		*res, dwc_res;
 	struct dwc3		*dwc;
 
 	int			ret;
@@ -1256,6 +1267,12 @@ static int dwc3_probe(struct platform_device *pdev)
 	if (!dwc)
 		return -ENOMEM;
 
+	dwc->clks = devm_kmemdup(dev, dwc3_core_clks, sizeof(dwc3_core_clks),
+				 GFP_KERNEL);
+	if (!dwc->clks)
+		return -ENOMEM;
+
+	dwc->num_clks = ARRAY_SIZE(dwc3_core_clks);
 	dwc->dev = dev;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
@@ -1270,23 +1287,48 @@ static int dwc3_probe(struct platform_device *pdev)
 	dwc->xhci_resources[0].flags = res->flags;
 	dwc->xhci_resources[0].name = res->name;
 
-	res->start += DWC3_GLOBALS_REGS_START;
-
 	/*
 	 * Request memory region but exclude xHCI regs,
 	 * since it will be requested by the xhci-plat driver.
 	 */
-	regs = devm_ioremap_resource(dev, res);
-	if (IS_ERR(regs)) {
-		ret = PTR_ERR(regs);
-		goto err0;
-	}
+	dwc_res = *res;
+	dwc_res.start += DWC3_GLOBALS_REGS_START;
+
+	regs = devm_ioremap_resource(dev, &dwc_res);
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
 
 	dwc->regs	= regs;
-	dwc->regs_size	= resource_size(res);
+	dwc->regs_size	= resource_size(&dwc_res);
 
 	dwc3_get_properties(dwc);
 
+	dwc->reset = devm_reset_control_get_optional_shared(dev, NULL);
+	if (IS_ERR(dwc->reset))
+		return PTR_ERR(dwc->reset);
+
+	ret = clk_bulk_get(dev, dwc->num_clks, dwc->clks);
+	if (ret == -EPROBE_DEFER)
+		return ret;
+	/*
+	 * Clocks are optional, but new DT platforms should support all clocks
+	 * as required by the DT-binding.
+	 */
+	if (ret)
+		dwc->num_clks = 0;
+
+	ret = reset_control_deassert(dwc->reset);
+	if (ret)
+		goto put_clks;
+
+	ret = clk_bulk_prepare(dwc->num_clks, dwc->clks);
+	if (ret)
+		goto assert_reset;
+
+	ret = clk_bulk_enable(dwc->num_clks, dwc->clks);
+	if (ret)
+		goto unprepare_clks;
+
 	platform_set_drvdata(pdev, dwc);
 	dwc3_cache_hwparams(dwc);
@@ -1350,13 +1392,13 @@ err1:
 	pm_runtime_put_sync(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
 
-err0:
-	/*
-	 * restore res->start back to its original value so that, in case the
-	 * probe is deferred, we don't end up getting error in request the
-	 * memory region the next time probe is called.
-	 */
-	res->start -= DWC3_GLOBALS_REGS_START;
+	clk_bulk_disable(dwc->num_clks, dwc->clks);
+unprepare_clks:
+	clk_bulk_unprepare(dwc->num_clks, dwc->clks);
+assert_reset:
+	reset_control_assert(dwc->reset);
+put_clks:
+	clk_bulk_put(dwc->num_clks, dwc->clks);
 
 	return ret;
 }
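The reworked probe error path above follows the kernel's usual reverse-order goto unwind: each acquisition gets a label, and a failure jumps to the label that releases everything acquired before it. A user-space model of the pattern (resource names and the failure injection are illustrative, not the driver's):

```c
#include <assert.h>
#include <string.h>

static char order[64];	/* records cleanup steps for inspection */

static void record(const char *step)
{
	strcat(order, step);
}

/* Mimics the probe flow: deassert reset, prepare clocks, enable clocks. */
static int probe(int fail_at)	/* 0 = success, N = Nth step fails */
{
	order[0] = '\0';

	if (fail_at == 1)
		goto err_none;			/* reset deassert failed */
	if (fail_at == 2)
		goto err_assert_reset;		/* clk prepare failed */
	if (fail_at == 3)
		goto err_unprepare_clks;	/* clk enable failed */
	return 0;

	/* Labels fall through, releasing in reverse acquisition order. */
err_unprepare_clks:
	record("unprepare ");
err_assert_reset:
	record("assert-reset ");
err_none:
	return -1;
}
```

The fall-through between labels is the whole point of the idiom: a later failure reuses all earlier cleanup steps without duplicating them.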
@@ -1364,15 +1406,8 @@ err0:
 static int dwc3_remove(struct platform_device *pdev)
 {
 	struct dwc3	*dwc = platform_get_drvdata(pdev);
-	struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 
 	pm_runtime_get_sync(&pdev->dev);
-	/*
-	 * restore res->start back to its original value so that, in case the
-	 * probe is deferred, we don't end up getting error in request the
-	 * memory region the next time probe is called.
-	 */
-	res->start -= DWC3_GLOBALS_REGS_START;
 
 	dwc3_debugfs_exit(dwc);
 	dwc3_core_exit_mode(dwc);
@@ -1386,14 +1421,48 @@ static int dwc3_remove(struct platform_device *pdev)
 	dwc3_free_event_buffers(dwc);
 	dwc3_free_scratch_buffers(dwc);
+	clk_bulk_put(dwc->num_clks, dwc->clks);
 
 	return 0;
 }
 
 #ifdef CONFIG_PM
+static int dwc3_core_init_for_resume(struct dwc3 *dwc)
+{
+	int ret;
+
+	ret = reset_control_deassert(dwc->reset);
+	if (ret)
+		return ret;
+
+	ret = clk_bulk_prepare(dwc->num_clks, dwc->clks);
+	if (ret)
+		goto assert_reset;
+
+	ret = clk_bulk_enable(dwc->num_clks, dwc->clks);
+	if (ret)
+		goto unprepare_clks;
+
+	ret = dwc3_core_init(dwc);
+	if (ret)
+		goto disable_clks;
+
+	return 0;
+
+disable_clks:
+	clk_bulk_disable(dwc->num_clks, dwc->clks);
+unprepare_clks:
+	clk_bulk_unprepare(dwc->num_clks, dwc->clks);
+assert_reset:
+	reset_control_assert(dwc->reset);
+
+	return ret;
+}
+
 static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
 {
 	unsigned long	flags;
+	u32 reg;
 
 	switch (dwc->current_dr_role) {
 	case DWC3_GCTL_PRTCAP_DEVICE:
@@ -1403,9 +1472,25 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
 		dwc3_core_exit(dwc);
 		break;
 	case DWC3_GCTL_PRTCAP_HOST:
-		/* do nothing during host runtime_suspend */
-		if (!PMSG_IS_AUTO(msg))
+		if (!PMSG_IS_AUTO(msg)) {
 			dwc3_core_exit(dwc);
+			break;
+		}
+
+		/* Let controller to suspend HSPHY before PHY driver suspends */
+		if (dwc->dis_u2_susphy_quirk ||
+		    dwc->dis_enblslpm_quirk) {
+			reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+			reg |=  DWC3_GUSB2PHYCFG_ENBLSLPM |
+				DWC3_GUSB2PHYCFG_SUSPHY;
+			dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+
+			/* Give some time for USB2 PHY to suspend */
+			usleep_range(5000, 6000);
+		}
+
+		phy_pm_runtime_put_sync(dwc->usb2_generic_phy);
+		phy_pm_runtime_put_sync(dwc->usb3_generic_phy);
 		break;
 	case DWC3_GCTL_PRTCAP_OTG:
 		/* do nothing during runtime_suspend */
@@ -1433,10 +1518,11 @@ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
 {
 	unsigned long	flags;
 	int		ret;
+	u32		reg;
 
 	switch (dwc->current_dr_role) {
 	case DWC3_GCTL_PRTCAP_DEVICE:
-		ret = dwc3_core_init(dwc);
+		ret = dwc3_core_init_for_resume(dwc);
 		if (ret)
 			return ret;
 
@@ -1446,13 +1532,25 @@ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
 		spin_unlock_irqrestore(&dwc->lock, flags);
 		break;
 	case DWC3_GCTL_PRTCAP_HOST:
-		/* nothing to do on host runtime_resume */
 		if (!PMSG_IS_AUTO(msg)) {
-			ret = dwc3_core_init(dwc);
+			ret = dwc3_core_init_for_resume(dwc);
 			if (ret)
 				return ret;
 			dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST);
+			break;
 		}
+
+		/* Restore GUSB2PHYCFG bits that were modified in suspend */
+		reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+		if (dwc->dis_u2_susphy_quirk)
+			reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
+		if (dwc->dis_enblslpm_quirk)
+			reg &= ~DWC3_GUSB2PHYCFG_ENBLSLPM;
+		dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
+
+		phy_pm_runtime_get_sync(dwc->usb2_generic_phy);
+		phy_pm_runtime_get_sync(dwc->usb3_generic_phy);
 		break;
 	case DWC3_GCTL_PRTCAP_OTG:
 		/* nothing to do on runtime_resume */


@@ -639,8 +639,6 @@ struct dwc3_event_buffer {
  * @resource_index: Resource transfer index
  * @frame_number: set to the frame number we want this transfer to start (ISOC)
  * @interval: the interval on which the ISOC transfer is started
- * @allocated_requests: number of requests allocated
- * @queued_requests: number of requests queued for transfer
  * @name: a human readable name e.g. ep1out-bulk
  * @direction: true for TX, false for RX
  * @stream_capable: true when streams are enabled
@@ -664,11 +662,9 @@ struct dwc3_ep {
 #define DWC3_EP_ENABLED		BIT(0)
 #define DWC3_EP_STALL		BIT(1)
 #define DWC3_EP_WEDGE		BIT(2)
-#define DWC3_EP_BUSY		BIT(4)
+#define DWC3_EP_TRANSFER_STARTED BIT(3)
 #define DWC3_EP_PENDING_REQUEST	BIT(5)
-#define DWC3_EP_MISSED_ISOC	BIT(6)
 #define DWC3_EP_END_TRANSFER_PENDING	BIT(7)
-#define DWC3_EP_TRANSFER_STARTED BIT(8)
 
 	/* This last one is specific to EP0 */
 #define DWC3_EP0_DIR_IN		BIT(31)
@@ -688,8 +684,6 @@ struct dwc3_ep {
 	u8			number;
 	u8			type;
 	u8			resource_index;
-	u32			allocated_requests;
-	u32			queued_requests;
 	u32			frame_number;
 	u32			interval;
 
@@ -832,7 +826,9 @@ struct dwc3_hwparams {
  * @list: a list_head used for request queueing
  * @dep: struct dwc3_ep owning this request
  * @sg: pointer to first incomplete sg
+ * @start_sg: pointer to the sg which should be queued next
  * @num_pending_sgs: counter to pending sgs
+ * @num_queued_sgs: counter to the number of sgs which already got queued
  * @remaining: amount of data remaining
  * @epnum: endpoint number to which this request refers
  * @trb: pointer to struct dwc3_trb
@@ -848,8 +844,10 @@ struct dwc3_request {
 	struct list_head	list;
 	struct dwc3_ep		*dep;
 	struct scatterlist	*sg;
+	struct scatterlist	*start_sg;
 
 	unsigned		num_pending_sgs;
+	unsigned int		num_queued_sgs;
 	unsigned		remaining;
 	u8			epnum;
 	struct dwc3_trb		*trb;
@@ -891,6 +889,9 @@ struct dwc3_scratchpad_array {
  * @eps: endpoint array
  * @gadget: device side representation of the peripheral controller
  * @gadget_driver: pointer to the gadget driver
+ * @clks: array of clocks
+ * @num_clks: number of clocks
+ * @reset: reset control
  * @regs: base address for our registers
  * @regs_size: address space size
  * @fladj: frame length adjustment
@@ -1013,6 +1014,11 @@ struct dwc3 {
 	struct usb_gadget	gadget;
 	struct usb_gadget_driver *gadget_driver;
 
+	struct clk_bulk_data	*clks;
+	int			num_clks;
+
+	struct reset_control	*reset;
+
 	struct usb_phy		*usb2_phy;
 	struct usb_phy		*usb3_phy;
@@ -1197,11 +1203,12 @@ struct dwc3_event_depevt {
 /* Within XferNotReady */
 #define DEPEVT_STATUS_TRANSFER_ACTIVE	BIT(3)
 
-/* Within XferComplete */
+/* Within XferComplete or XferInProgress */
 #define DEPEVT_STATUS_BUSERR	BIT(0)
 #define DEPEVT_STATUS_SHORT	BIT(1)
 #define DEPEVT_STATUS_IOC	BIT(2)
-#define DEPEVT_STATUS_LST	BIT(3)
+#define DEPEVT_STATUS_LST	BIT(3) /* XferComplete */
+#define DEPEVT_STATUS_MISSED_ISOC	BIT(3) /* XferInProgress */
 
 /* Stream event only */
 #define DEPEVT_STREAMEVT_FOUND		1
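Note that DEPEVT_STATUS_LST and DEPEVT_STATUS_MISSED_ISOC now share BIT(3); which meaning applies depends on whether the event is XferComplete or XferInProgress. A sketch of the disambiguation (the enum codes are placeholders, not the hardware event values):

```c
#include <assert.h>

#define DEPEVT_STATUS_LST		(1u << 3)	/* XferComplete */
#define DEPEVT_STATUS_MISSED_ISOC	(1u << 3)	/* XferInProgress */

enum depevt { XFERCOMPLETE, XFERINPROGRESS };	/* placeholder codes */

/* BIT(3) is "last TRB" on XferComplete, "missed isoc" on XferInProgress. */
static const char *decode_bit3(enum depevt ev, unsigned int status)
{
	if (!(status & (1u << 3)))
		return "none";
	return ev == XFERCOMPLETE ? "LST" : "MISSED_ISOC";
}
```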


@@ -475,21 +475,37 @@ dwc3_ep_event_string(char *str, const struct dwc3_event_depevt *event,
 	if (ret < 0)
 		return "UNKNOWN";
 
+	status = event->status;
+
 	switch (event->endpoint_event) {
 	case DWC3_DEPEVT_XFERCOMPLETE:
-		strcat(str, "Transfer Complete");
+		len = strlen(str);
+		sprintf(str + len, "Transfer Complete (%c%c%c)",
+				status & DEPEVT_STATUS_SHORT ? 'S' : 's',
+				status & DEPEVT_STATUS_IOC ? 'I' : 'i',
+				status & DEPEVT_STATUS_LST ? 'L' : 'l');
+
 		len = strlen(str);
+
 		if (epnum <= 1)
 			sprintf(str + len, " [%s]", dwc3_ep0_state_string(ep0state));
 		break;
 	case DWC3_DEPEVT_XFERINPROGRESS:
-		strcat(str, "Transfer In-Progress");
+		len = strlen(str);
+
+		sprintf(str + len, "Transfer In Progress [%d] (%c%c%c)",
+				event->parameters,
+				status & DEPEVT_STATUS_SHORT ? 'S' : 's',
+				status & DEPEVT_STATUS_IOC ? 'I' : 'i',
+				status & DEPEVT_STATUS_LST ? 'M' : 'm');
 		break;
 	case DWC3_DEPEVT_XFERNOTREADY:
-		strcat(str, "Transfer Not Ready");
-		status = event->status & DEPEVT_STATUS_TRANSFER_ACTIVE;
-		strcat(str, status ? " (Active)" : " (Not Active)");
+		len = strlen(str);
+
+		sprintf(str + len, "Transfer Not Ready [%d]%s",
+				event->parameters,
+				status & DEPEVT_STATUS_TRANSFER_ACTIVE ?
+				" (Active)" : " (Not Active)");
 
 		/* Control Endpoints */
 		if (epnum <= 1) {


@@ -8,6 +8,7 @@
  */
 
 #include <linux/extcon.h>
+#include <linux/of_graph.h>
 #include <linux/platform_device.h>
 
 #include "debug.h"
@@ -439,17 +440,38 @@ static int dwc3_drd_notifier(struct notifier_block *nb,
 	return NOTIFY_DONE;
 }
 
+static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc)
+{
+	struct device *dev = dwc->dev;
+	struct device_node *np_phy, *np_conn;
+	struct extcon_dev *edev;
+
+	if (of_property_read_bool(dev->of_node, "extcon"))
+		return extcon_get_edev_by_phandle(dwc->dev, 0);
+
+	np_phy = of_parse_phandle(dev->of_node, "phys", 0);
+	np_conn = of_graph_get_remote_node(np_phy, -1, -1);
+
+	if (np_conn)
+		edev = extcon_find_edev_by_node(np_conn);
+	else
+		edev = NULL;
+
+	of_node_put(np_conn);
+	of_node_put(np_phy);
+
+	return edev;
+}
+
 int dwc3_drd_init(struct dwc3 *dwc)
 {
 	int ret, irq;
 
-	if (dwc->dev->of_node &&
-	    of_property_read_bool(dwc->dev->of_node, "extcon")) {
-		dwc->edev = extcon_get_edev_by_phandle(dwc->dev, 0);
-		if (IS_ERR(dwc->edev))
-			return PTR_ERR(dwc->edev);
+	dwc->edev = dwc3_get_extcon(dwc);
+	if (IS_ERR(dwc->edev))
+		return PTR_ERR(dwc->edev);
 
+	if (dwc->edev) {
 		dwc->edev_nb.notifier_call = dwc3_drd_notifier;
 		ret = extcon_register_notifier(dwc->edev, EXTCON_USB_HOST,
 					       &dwc->edev_nb);


@@ -208,13 +208,13 @@ static const struct dev_pm_ops dwc3_of_simple_dev_pm_ops = {
 };
 
 static const struct of_device_id of_dwc3_simple_match[] = {
-	{ .compatible = "qcom,dwc3" },
 	{ .compatible = "rockchip,rk3399-dwc3" },
 	{ .compatible = "xlnx,zynqmp-dwc3" },
 	{ .compatible = "cavium,octeon-7130-usb-uctl" },
 	{ .compatible = "sprd,sc9860-dwc3" },
 	{ .compatible = "amlogic,meson-axg-dwc3" },
 	{ .compatible = "amlogic,meson-gxl-dwc3" },
+	{ .compatible = "allwinner,sun50i-h6-dwc3" },
 	{ /* Sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, of_dwc3_simple_match);


@@ -0,0 +1,620 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
*
* Inspired by dwc3-of-simple.c
*/
#define DEBUG
#include <linux/io.h>
#include <linux/of.h>
#include <linux/clk.h>
#include <linux/irq.h>
#include <linux/clk-provider.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/extcon.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/phy/phy.h>
#include <linux/usb/of.h>
#include <linux/reset.h>
#include <linux/iopoll.h>
#include "core.h"
/* USB QSCRATCH Hardware registers */
#define QSCRATCH_HS_PHY_CTRL 0x10
#define UTMI_OTG_VBUS_VALID BIT(20)
#define SW_SESSVLD_SEL BIT(28)
#define QSCRATCH_SS_PHY_CTRL 0x30
#define LANE0_PWR_PRESENT BIT(24)
#define QSCRATCH_GENERAL_CFG 0x08
#define PIPE_UTMI_CLK_SEL BIT(0)
#define PIPE3_PHYSTATUS_SW BIT(3)
#define PIPE_UTMI_CLK_DIS BIT(8)
#define PWR_EVNT_IRQ_STAT_REG 0x58
#define PWR_EVNT_LPM_IN_L2_MASK BIT(4)
#define PWR_EVNT_LPM_OUT_L2_MASK BIT(5)
struct dwc3_qcom {
struct device *dev;
void __iomem *qscratch_base;
struct platform_device *dwc3;
struct clk **clks;
int num_clocks;
struct reset_control *resets;
int hs_phy_irq;
int dp_hs_phy_irq;
int dm_hs_phy_irq;
int ss_phy_irq;
struct extcon_dev *edev;
struct extcon_dev *host_edev;
struct notifier_block vbus_nb;
struct notifier_block host_nb;
enum usb_dr_mode mode;
bool is_suspended;
bool pm_suspended;
};
static inline void dwc3_qcom_setbits(void __iomem *base, u32 offset, u32 val)
{
u32 reg;
reg = readl(base + offset);
reg |= val;
writel(reg, base + offset);
/* ensure that above write is through */
readl(base + offset);
}
static inline void dwc3_qcom_clrbits(void __iomem *base, u32 offset, u32 val)
{
u32 reg;
reg = readl(base + offset);
reg &= ~val;
writel(reg, base + offset);
/* ensure that above write is through */
readl(base + offset);
}
static void dwc3_qcom_vbus_overrride_enable(struct dwc3_qcom *qcom, bool enable)
{
if (enable) {
dwc3_qcom_setbits(qcom->qscratch_base, QSCRATCH_SS_PHY_CTRL,
LANE0_PWR_PRESENT);
dwc3_qcom_setbits(qcom->qscratch_base, QSCRATCH_HS_PHY_CTRL,
UTMI_OTG_VBUS_VALID | SW_SESSVLD_SEL);
} else {
dwc3_qcom_clrbits(qcom->qscratch_base, QSCRATCH_SS_PHY_CTRL,
LANE0_PWR_PRESENT);
dwc3_qcom_clrbits(qcom->qscratch_base, QSCRATCH_HS_PHY_CTRL,
UTMI_OTG_VBUS_VALID | SW_SESSVLD_SEL);
}
}
static int dwc3_qcom_vbus_notifier(struct notifier_block *nb,
unsigned long event, void *ptr)
{
struct dwc3_qcom *qcom = container_of(nb, struct dwc3_qcom, vbus_nb);
/* enable vbus override for device mode */
dwc3_qcom_vbus_overrride_enable(qcom, event);
qcom->mode = event ? USB_DR_MODE_PERIPHERAL : USB_DR_MODE_HOST;
return NOTIFY_DONE;
}
static int dwc3_qcom_host_notifier(struct notifier_block *nb,
unsigned long event, void *ptr)
{
struct dwc3_qcom *qcom = container_of(nb, struct dwc3_qcom, host_nb);
/* disable vbus override in host mode */
dwc3_qcom_vbus_overrride_enable(qcom, !event);
qcom->mode = event ? USB_DR_MODE_HOST : USB_DR_MODE_PERIPHERAL;
return NOTIFY_DONE;
}
static int dwc3_qcom_register_extcon(struct dwc3_qcom *qcom)
{
struct device *dev = qcom->dev;
struct extcon_dev *host_edev;
int ret;
if (!of_property_read_bool(dev->of_node, "extcon"))
return 0;
qcom->edev = extcon_get_edev_by_phandle(dev, 0);
if (IS_ERR(qcom->edev))
return PTR_ERR(qcom->edev);
qcom->vbus_nb.notifier_call = dwc3_qcom_vbus_notifier;
qcom->host_edev = extcon_get_edev_by_phandle(dev, 1);
if (IS_ERR(qcom->host_edev))
qcom->host_edev = NULL;
ret = devm_extcon_register_notifier(dev, qcom->edev, EXTCON_USB,
&qcom->vbus_nb);
if (ret < 0) {
dev_err(dev, "VBUS notifier register failed\n");
return ret;
}
if (qcom->host_edev)
host_edev = qcom->host_edev;
else
host_edev = qcom->edev;
qcom->host_nb.notifier_call = dwc3_qcom_host_notifier;
ret = devm_extcon_register_notifier(dev, host_edev, EXTCON_USB_HOST,
&qcom->host_nb);
if (ret < 0) {
dev_err(dev, "Host notifier register failed\n");
return ret;
}
/* Update initial VBUS override based on extcon state */
if (extcon_get_state(qcom->edev, EXTCON_USB) ||
!extcon_get_state(host_edev, EXTCON_USB_HOST))
dwc3_qcom_vbus_notifier(&qcom->vbus_nb, true, qcom->edev);
else
dwc3_qcom_vbus_notifier(&qcom->vbus_nb, false, qcom->edev);
return 0;
}
static void dwc3_qcom_disable_interrupts(struct dwc3_qcom *qcom)
{
if (qcom->hs_phy_irq) {
disable_irq_wake(qcom->hs_phy_irq);
disable_irq_nosync(qcom->hs_phy_irq);
}
if (qcom->dp_hs_phy_irq) {
disable_irq_wake(qcom->dp_hs_phy_irq);
disable_irq_nosync(qcom->dp_hs_phy_irq);
}
if (qcom->dm_hs_phy_irq) {
disable_irq_wake(qcom->dm_hs_phy_irq);
disable_irq_nosync(qcom->dm_hs_phy_irq);
}
if (qcom->ss_phy_irq) {
disable_irq_wake(qcom->ss_phy_irq);
disable_irq_nosync(qcom->ss_phy_irq);
}
}
static void dwc3_qcom_enable_interrupts(struct dwc3_qcom *qcom)
{
if (qcom->hs_phy_irq) {
enable_irq(qcom->hs_phy_irq);
enable_irq_wake(qcom->hs_phy_irq);
}
if (qcom->dp_hs_phy_irq) {
enable_irq(qcom->dp_hs_phy_irq);
enable_irq_wake(qcom->dp_hs_phy_irq);
}
if (qcom->dm_hs_phy_irq) {
enable_irq(qcom->dm_hs_phy_irq);
enable_irq_wake(qcom->dm_hs_phy_irq);
}
if (qcom->ss_phy_irq) {
enable_irq(qcom->ss_phy_irq);
enable_irq_wake(qcom->ss_phy_irq);
}
}
static int dwc3_qcom_suspend(struct dwc3_qcom *qcom)
{
u32 val;
int i;
if (qcom->is_suspended)
return 0;
val = readl(qcom->qscratch_base + PWR_EVNT_IRQ_STAT_REG);
if (!(val & PWR_EVNT_LPM_IN_L2_MASK))
dev_err(qcom->dev, "HS-PHY not in L2\n");
for (i = qcom->num_clocks - 1; i >= 0; i--)
clk_disable_unprepare(qcom->clks[i]);
qcom->is_suspended = true;
dwc3_qcom_enable_interrupts(qcom);
return 0;
}
static int dwc3_qcom_resume(struct dwc3_qcom *qcom)
{
int ret;
int i;
if (!qcom->is_suspended)
return 0;
dwc3_qcom_disable_interrupts(qcom);
for (i = 0; i < qcom->num_clocks; i++) {
ret = clk_prepare_enable(qcom->clks[i]);
if (ret < 0) {
while (--i >= 0)
clk_disable_unprepare(qcom->clks[i]);
return ret;
}
}
/* Clear existing events from PHY related to L2 in/out */
dwc3_qcom_setbits(qcom->qscratch_base, PWR_EVNT_IRQ_STAT_REG,
PWR_EVNT_LPM_IN_L2_MASK | PWR_EVNT_LPM_OUT_L2_MASK);
qcom->is_suspended = false;
return 0;
}
static irqreturn_t qcom_dwc3_resume_irq(int irq, void *data)
{
struct dwc3_qcom *qcom = data;
struct dwc3 *dwc = platform_get_drvdata(qcom->dwc3);
/* If pm_suspended then let pm_resume take care of resuming h/w */
if (qcom->pm_suspended)
return IRQ_HANDLED;
if (dwc->xhci)
pm_runtime_resume(&dwc->xhci->dev);
return IRQ_HANDLED;
}
static void dwc3_qcom_select_utmi_clk(struct dwc3_qcom *qcom)
{
/* Configure dwc3 to use the UTMI clock, as the PIPE clock is not present */
dwc3_qcom_setbits(qcom->qscratch_base, QSCRATCH_GENERAL_CFG,
PIPE_UTMI_CLK_DIS);
usleep_range(100, 1000);
dwc3_qcom_setbits(qcom->qscratch_base, QSCRATCH_GENERAL_CFG,
PIPE_UTMI_CLK_SEL | PIPE3_PHYSTATUS_SW);
usleep_range(100, 1000);
dwc3_qcom_clrbits(qcom->qscratch_base, QSCRATCH_GENERAL_CFG,
PIPE_UTMI_CLK_DIS);
}
static int dwc3_qcom_setup_irq(struct platform_device *pdev)
{
struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
int irq, ret;
irq = platform_get_irq_byname(pdev, "hs_phy_irq");
if (irq > 0) {
/* Keep wakeup interrupts disabled until suspend */
irq_set_status_flags(irq, IRQ_NOAUTOEN);
ret = devm_request_threaded_irq(qcom->dev, irq, NULL,
qcom_dwc3_resume_irq,
IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
"qcom_dwc3 HS", qcom);
if (ret) {
dev_err(qcom->dev, "hs_phy_irq failed: %d\n", ret);
return ret;
}
qcom->hs_phy_irq = irq;
}
irq = platform_get_irq_byname(pdev, "dp_hs_phy_irq");
if (irq > 0) {
irq_set_status_flags(irq, IRQ_NOAUTOEN);
ret = devm_request_threaded_irq(qcom->dev, irq, NULL,
qcom_dwc3_resume_irq,
IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
"qcom_dwc3 DP_HS", qcom);
if (ret) {
dev_err(qcom->dev, "dp_hs_phy_irq failed: %d\n", ret);
return ret;
}
qcom->dp_hs_phy_irq = irq;
}
irq = platform_get_irq_byname(pdev, "dm_hs_phy_irq");
if (irq > 0) {
irq_set_status_flags(irq, IRQ_NOAUTOEN);
ret = devm_request_threaded_irq(qcom->dev, irq, NULL,
qcom_dwc3_resume_irq,
IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
"qcom_dwc3 DM_HS", qcom);
if (ret) {
dev_err(qcom->dev, "dm_hs_phy_irq failed: %d\n", ret);
return ret;
}
qcom->dm_hs_phy_irq = irq;
}
irq = platform_get_irq_byname(pdev, "ss_phy_irq");
if (irq > 0) {
irq_set_status_flags(irq, IRQ_NOAUTOEN);
ret = devm_request_threaded_irq(qcom->dev, irq, NULL,
qcom_dwc3_resume_irq,
IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
"qcom_dwc3 SS", qcom);
if (ret) {
dev_err(qcom->dev, "ss_phy_irq failed: %d\n", ret);
return ret;
}
qcom->ss_phy_irq = irq;
}
return 0;
}
static int dwc3_qcom_clk_init(struct dwc3_qcom *qcom, int count)
{
struct device *dev = qcom->dev;
struct device_node *np = dev->of_node;
int i;
qcom->num_clocks = count;
if (!count)
return 0;
qcom->clks = devm_kcalloc(dev, qcom->num_clocks,
sizeof(struct clk *), GFP_KERNEL);
if (!qcom->clks)
return -ENOMEM;
for (i = 0; i < qcom->num_clocks; i++) {
struct clk *clk;
int ret;
clk = of_clk_get(np, i);
if (IS_ERR(clk)) {
while (--i >= 0)
clk_put(qcom->clks[i]);
return PTR_ERR(clk);
}
ret = clk_prepare_enable(clk);
if (ret < 0) {
while (--i >= 0) {
clk_disable_unprepare(qcom->clks[i]);
clk_put(qcom->clks[i]);
}
clk_put(clk);
return ret;
}
qcom->clks[i] = clk;
}
return 0;
}
static int dwc3_qcom_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node, *dwc3_np;
struct device *dev = &pdev->dev;
struct dwc3_qcom *qcom;
struct resource *res;
int ret, i;
bool ignore_pipe_clk;
qcom = devm_kzalloc(&pdev->dev, sizeof(*qcom), GFP_KERNEL);
if (!qcom)
return -ENOMEM;
platform_set_drvdata(pdev, qcom);
qcom->dev = &pdev->dev;
qcom->resets = devm_reset_control_array_get_optional_exclusive(dev);
if (IS_ERR(qcom->resets)) {
ret = PTR_ERR(qcom->resets);
dev_err(&pdev->dev, "failed to get resets, err=%d\n", ret);
return ret;
}
ret = reset_control_assert(qcom->resets);
if (ret) {
dev_err(&pdev->dev, "failed to assert resets, err=%d\n", ret);
return ret;
}
usleep_range(10, 1000);
ret = reset_control_deassert(qcom->resets);
if (ret) {
dev_err(&pdev->dev, "failed to deassert resets, err=%d\n", ret);
goto reset_assert;
}
ret = dwc3_qcom_clk_init(qcom, of_count_phandle_with_args(np,
"clocks", "#clock-cells"));
if (ret) {
dev_err(dev, "failed to get clocks\n");
goto reset_assert;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
qcom->qscratch_base = devm_ioremap_resource(dev, res);
if (IS_ERR(qcom->qscratch_base)) {
ret = PTR_ERR(qcom->qscratch_base);
dev_err(dev, "failed to map qscratch, err=%d\n", ret);
goto clk_disable;
}
ret = dwc3_qcom_setup_irq(pdev);
if (ret)
goto clk_disable;
dwc3_np = of_get_child_by_name(np, "dwc3");
if (!dwc3_np) {
dev_err(dev, "failed to find dwc3 core child\n");
ret = -ENODEV;
goto clk_disable;
}
/*
* Disable pipe_clk requirement if specified. Used when dwc3
* operates without SSPHY and only HS/FS/LS modes are supported.
*/
ignore_pipe_clk = device_property_read_bool(dev,
"qcom,select-utmi-as-pipe-clk");
if (ignore_pipe_clk)
dwc3_qcom_select_utmi_clk(qcom);
ret = of_platform_populate(np, NULL, NULL, dev);
if (ret) {
dev_err(dev, "failed to register dwc3 core - %d\n", ret);
goto clk_disable;
}
qcom->dwc3 = of_find_device_by_node(dwc3_np);
if (!qcom->dwc3) {
dev_err(&pdev->dev, "failed to get dwc3 platform device\n");
ret = -ENODEV;
goto depopulate;
}
qcom->mode = usb_get_dr_mode(&qcom->dwc3->dev);
/* enable vbus override for device mode */
if (qcom->mode == USB_DR_MODE_PERIPHERAL)
dwc3_qcom_vbus_overrride_enable(qcom, true);
/* register extcon to override sw_vbus on Vbus change later */
ret = dwc3_qcom_register_extcon(qcom);
if (ret)
goto depopulate;
device_init_wakeup(&pdev->dev, 1);
qcom->is_suspended = false;
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
pm_runtime_forbid(dev);
return 0;
depopulate:
of_platform_depopulate(&pdev->dev);
clk_disable:
for (i = qcom->num_clocks - 1; i >= 0; i--) {
clk_disable_unprepare(qcom->clks[i]);
clk_put(qcom->clks[i]);
}
reset_assert:
reset_control_assert(qcom->resets);
return ret;
}
static int dwc3_qcom_remove(struct platform_device *pdev)
{
struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
int i;
of_platform_depopulate(dev);
for (i = qcom->num_clocks - 1; i >= 0; i--) {
clk_disable_unprepare(qcom->clks[i]);
clk_put(qcom->clks[i]);
}
qcom->num_clocks = 0;
reset_control_assert(qcom->resets);
pm_runtime_allow(dev);
pm_runtime_disable(dev);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int dwc3_qcom_pm_suspend(struct device *dev)
{
struct dwc3_qcom *qcom = dev_get_drvdata(dev);
int ret = 0;
ret = dwc3_qcom_suspend(qcom);
if (!ret)
qcom->pm_suspended = true;
return ret;
}
static int dwc3_qcom_pm_resume(struct device *dev)
{
struct dwc3_qcom *qcom = dev_get_drvdata(dev);
int ret;
ret = dwc3_qcom_resume(qcom);
if (!ret)
qcom->pm_suspended = false;
return ret;
}
#endif
#ifdef CONFIG_PM
static int dwc3_qcom_runtime_suspend(struct device *dev)
{
struct dwc3_qcom *qcom = dev_get_drvdata(dev);
return dwc3_qcom_suspend(qcom);
}
static int dwc3_qcom_runtime_resume(struct device *dev)
{
struct dwc3_qcom *qcom = dev_get_drvdata(dev);
return dwc3_qcom_resume(qcom);
}
#endif
static const struct dev_pm_ops dwc3_qcom_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(dwc3_qcom_pm_suspend, dwc3_qcom_pm_resume)
SET_RUNTIME_PM_OPS(dwc3_qcom_runtime_suspend, dwc3_qcom_runtime_resume,
NULL)
};
static const struct of_device_id dwc3_qcom_of_match[] = {
{ .compatible = "qcom,dwc3" },
{ .compatible = "qcom,msm8996-dwc3" },
{ .compatible = "qcom,sdm845-dwc3" },
{ }
};
MODULE_DEVICE_TABLE(of, dwc3_qcom_of_match);
static struct platform_driver dwc3_qcom_driver = {
.probe = dwc3_qcom_probe,
.remove = dwc3_qcom_remove,
.driver = {
.name = "dwc3-qcom",
.pm = &dwc3_qcom_dev_pm_ops,
.of_match_table = dwc3_qcom_of_match,
},
};
module_platform_driver(dwc3_qcom_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("DesignWare DWC3 QCOM Glue Driver");


@@ -66,7 +66,7 @@ static int dwc3_ep0_start_trans(struct dwc3_ep *dep)
 	struct dwc3		*dwc;
 	int			ret;
 
-	if (dep->flags & DWC3_EP_BUSY)
+	if (dep->flags & DWC3_EP_TRANSFER_STARTED)
 		return 0;
 
 	dwc = dep->dwc;
@@ -79,8 +79,6 @@ static int dwc3_ep0_start_trans(struct dwc3_ep *dep)
 	if (ret < 0)
 		return ret;
 
-	dep->flags |= DWC3_EP_BUSY;
-	dep->resource_index = dwc3_gadget_ep_get_transfer_index(dep);
 	dwc->ep0_next_event = DWC3_EP0_COMPLETE;
 
 	return 0;
@@ -913,7 +911,7 @@ static void dwc3_ep0_xfer_complete(struct dwc3 *dwc,
 {
 	struct dwc3_ep		*dep = dwc->eps[event->endpoint_number];
 
-	dep->flags &= ~DWC3_EP_BUSY;
+	dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
 	dep->resource_index = 0;
 	dwc->setup_packet_pending = false;

File diff suppressed because it is too large.


@@ -98,13 +98,12 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol);
  * Caller should take care of locking. Returns the transfer resource
  * index for a given endpoint.
  */
-static inline u32 dwc3_gadget_ep_get_transfer_index(struct dwc3_ep *dep)
+static inline void dwc3_gadget_ep_get_transfer_index(struct dwc3_ep *dep)
 {
 	u32			res_id;
 
 	res_id = dwc3_readl(dep->regs, DWC3_DEPCMD);
-
-	return DWC3_DEPCMD_GET_RSC_IDX(res_id);
+	dep->resource_index = DWC3_DEPCMD_GET_RSC_IDX(res_id);
 }
 
 #endif /* __DRIVERS_USB_DWC3_GADGET_H */


@@ -230,17 +230,14 @@ DECLARE_EVENT_CLASS(dwc3_log_trb,
 	TP_fast_assign(
 		__assign_str(name, dep->name);
 		__entry->trb = trb;
-		__entry->allocated = dep->allocated_requests;
-		__entry->queued = dep->queued_requests;
 		__entry->bpl = trb->bpl;
 		__entry->bph = trb->bph;
 		__entry->size = trb->size;
 		__entry->ctrl = trb->ctrl;
 		__entry->type = usb_endpoint_type(dep->endpoint.desc);
 	),
-	TP_printk("%s: %d/%d trb %p buf %08x%08x size %s%d ctrl %08x (%c%c%c%c:%c%c:%s)",
-		__get_str(name), __entry->queued, __entry->allocated,
-		__entry->trb, __entry->bph, __entry->bpl,
+	TP_printk("%s: trb %p buf %08x%08x size %s%d ctrl %08x (%c%c%c%c:%c%c:%s)",
+		__get_str(name), __entry->trb, __entry->bph, __entry->bpl,
 		({char *s;
 		int pcm = ((__entry->size >> 24) & 3) + 1;
 		switch (__entry->type) {
@@ -306,7 +303,7 @@ DECLARE_EVENT_CLASS(dwc3_log_ep,
 		__entry->trb_enqueue = dep->trb_enqueue;
 		__entry->trb_dequeue = dep->trb_dequeue;
 	),
-	TP_printk("%s: mps %d/%d streams %d burst %d ring %d/%d flags %c:%c%c%c%c%c:%c:%c",
+	TP_printk("%s: mps %d/%d streams %d burst %d ring %d/%d flags %c:%c%c%c%c:%c:%c",
 		__get_str(name), __entry->maxpacket,
 		__entry->maxpacket_limit, __entry->max_streams,
 		__entry->maxburst, __entry->trb_enqueue,
@@ -314,9 +311,8 @@ DECLARE_EVENT_CLASS(dwc3_log_ep,
 		__entry->flags & DWC3_EP_ENABLED ? 'E' : 'e',
 		__entry->flags & DWC3_EP_STALL ? 'S' : 's',
 		__entry->flags & DWC3_EP_WEDGE ? 'W' : 'w',
-		__entry->flags & DWC3_EP_BUSY ? 'B' : 'b',
+		__entry->flags & DWC3_EP_TRANSFER_STARTED ? 'B' : 'b',
 		__entry->flags & DWC3_EP_PENDING_REQUEST ? 'P' : 'p',
-		__entry->flags & DWC3_EP_MISSED_ISOC ? 'M' : 'm',
 		__entry->flags & DWC3_EP_END_TRANSFER_PENDING ? 'E' : 'e',
 		__entry->direction ? '<' : '>'
 	)


@@ -1601,7 +1601,7 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
 				cdev->gadget->ep0->maxpacket;
 		if (gadget_is_superspeed(gadget)) {
 			if (gadget->speed >= USB_SPEED_SUPER) {
-				cdev->desc.bcdUSB = cpu_to_le16(0x0310);
+				cdev->desc.bcdUSB = cpu_to_le16(0x0320);
 				cdev->desc.bMaxPacketSize0 = 9;
 			} else {
 				cdev->desc.bcdUSB = cpu_to_le16(0x0210);


@@ -705,6 +705,8 @@ ecm_bind(struct usb_configuration *c, struct usb_function *f)
 		ecm_opts->bound = true;
 	}
 
+	ecm_string_defs[1].s = ecm->ethaddr;
+
 	us = usb_gstrings_attach(cdev, ecm_strings,
 				 ARRAY_SIZE(ecm_string_defs));
 	if (IS_ERR(us))
@@ -928,7 +930,6 @@ static struct usb_function *ecm_alloc(struct usb_function_instance *fi)
 		mutex_unlock(&opts->lock);
 		return ERR_PTR(-EINVAL);
 	}
-	ecm_string_defs[1].s = ecm->ethaddr;
 
 	ecm->port.ioport = netdev_priv(opts->net);
 	mutex_unlock(&opts->lock);


@@ -1266,6 +1266,14 @@ static long ffs_epfile_ioctl(struct file *file, unsigned code,
 	return ret;
 }
 
+#ifdef CONFIG_COMPAT
+static long ffs_epfile_compat_ioctl(struct file *file, unsigned code,
+		unsigned long value)
+{
+	return ffs_epfile_ioctl(file, code, value);
+}
+#endif
+
 static const struct file_operations ffs_epfile_operations = {
 	.llseek =	no_llseek,
 
@@ -1274,6 +1282,9 @@ static const struct file_operations ffs_epfile_operations = {
 	.read_iter =	ffs_epfile_read_iter,
 	.release =	ffs_epfile_release,
 	.unlocked_ioctl =	ffs_epfile_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl = ffs_epfile_compat_ioctl,
+#endif
 };


@@ -109,6 +109,7 @@ static inline struct f_midi *func_to_midi(struct usb_function *f)
 
 static void f_midi_transmit(struct f_midi *midi);
 static void f_midi_rmidi_free(struct snd_rawmidi *rmidi);
+static void f_midi_free_inst(struct usb_function_instance *f);
 
 DECLARE_UAC_AC_HEADER_DESCRIPTOR(1);
 DECLARE_USB_MIDI_OUT_JACK_DESCRIPTOR(1);
@@ -1102,7 +1103,7 @@ static ssize_t f_midi_opts_##name##_store(struct config_item *item,	\
 	u32 num;							\
 									\
 	mutex_lock(&opts->lock);					\
-	if (opts->refcnt) {						\
+	if (opts->refcnt > 1) {						\
 		ret = -EBUSY;						\
 		goto end;						\
 	}								\
@@ -1157,7 +1158,7 @@ static ssize_t f_midi_opts_id_store(struct config_item *item,
 	char *c;
 
 	mutex_lock(&opts->lock);
-	if (opts->refcnt) {
+	if (opts->refcnt > 1) {
 		ret = -EBUSY;
 		goto end;
 	}
@@ -1198,13 +1199,21 @@ static const struct config_item_type midi_func_type = {
 static void f_midi_free_inst(struct usb_function_instance *f)
 {
 	struct f_midi_opts *opts;
+	bool free = false;
 
 	opts = container_of(f, struct f_midi_opts, func_inst);
 
-	if (opts->id_allocated)
-		kfree(opts->id);
-
-	kfree(opts);
+	mutex_lock(&opts->lock);
+	if (!--opts->refcnt) {
+		free = true;
+	}
+	mutex_unlock(&opts->lock);
+
+	if (free) {
+		if (opts->id_allocated)
+			kfree(opts->id);
+		kfree(opts);
+	}
 }
 
 static struct usb_function_instance *f_midi_alloc_inst(void)
@@ -1223,6 +1232,7 @@ static struct usb_function_instance *f_midi_alloc_inst(void)
 	opts->qlen = 32;
 	opts->in_ports = 1;
 	opts->out_ports = 1;
+	opts->refcnt = 1;
 
 	config_group_init_type_name(&opts->func_inst.group, "",
 				    &midi_func_type);
@@ -1234,6 +1244,7 @@ static void f_midi_free(struct usb_function *f)
 {
 	struct f_midi *midi;
 	struct f_midi_opts *opts;
+	bool free = false;
 
 	midi = func_to_midi(f);
 	opts = container_of(f->fi, struct f_midi_opts, func_inst);
@@ -1242,9 +1253,12 @@ static void f_midi_free(struct usb_function *f)
 		kfree(midi->id);
 		kfifo_free(&midi->in_req_fifo);
 		kfree(midi);
-		--opts->refcnt;
+		free = true;
 	}
 	mutex_unlock(&opts->lock);
+
+	if (free)
+		f_midi_free_inst(&opts->func_inst);
 }
 
 static void f_midi_rmidi_free(struct snd_rawmidi *rmidi)
static void f_midi_rmidi_free(struct snd_rawmidi *rmidi) static void f_midi_rmidi_free(struct snd_rawmidi *rmidi)


@@ -851,6 +851,9 @@ int rndis_msg_parser(struct rndis_params *params, u8 *buf)
 		 */
 		pr_warn("%s: unknown RNDIS message 0x%08X len %d\n",
 			__func__, MsgType, MsgLength);
+		/* Garbled message can be huge, so limit what we display */
+		if (MsgLength > 16)
+			MsgLength = 16;
 		print_hex_dump_bytes(__func__, DUMP_PREFIX_OFFSET,
 				     buf, MsgLength);
 		break;


@@ -844,6 +844,10 @@ struct net_device *gether_setup_name_default(const char *netname)
 	net->ethtool_ops = &ops;
 	SET_NETDEV_DEVTYPE(net, &gadget_type);
 
+	/* MTU range: 14 - 15412 */
+	net->min_mtu = ETH_HLEN;
+	net->max_mtu = GETHER_MAX_ETH_FRAME_LEN;
+
 	return net;
 }
 EXPORT_SYMBOL_GPL(gether_setup_name_default);


@@ -438,6 +438,8 @@ config USB_GADGET_XILINX
 	  dynamically linked module called "udc-xilinx" and force all
 	  gadget drivers to also be dynamically linked.
 
+source "drivers/usb/gadget/udc/aspeed-vhub/Kconfig"
+
 #
 # LAST -- dummy/emulated controller
 #


@@ -39,4 +39,5 @@ obj-$(CONFIG_USB_MV_U3D)	+= mv_u3d_core.o
 obj-$(CONFIG_USB_GR_UDC)	+= gr_udc.o
 obj-$(CONFIG_USB_GADGET_XILINX)	+= udc-xilinx.o
 obj-$(CONFIG_USB_SNP_UDC_PLAT)	+= snps_udc_plat.o
+obj-$(CONFIG_USB_ASPEED_VHUB)	+= aspeed-vhub/
 obj-$(CONFIG_USB_BDC_UDC)	+= bdc/


@@ -0,0 +1,7 @@
# SPDX-License-Identifier: GPL-2.0+
config USB_ASPEED_VHUB
tristate "Aspeed vHub UDC driver"
depends on ARCH_ASPEED || COMPILE_TEST
help
USB peripheral controller for the Aspeed AST2500 family of
SoCs supporting the "vHub" functionality and USB 2.0.


@@ -0,0 +1,4 @@
# SPDX-License-Identifier: GPL-2.0+
obj-$(CONFIG_USB_ASPEED_VHUB) += aspeed-vhub.o
aspeed-vhub-y := core.o ep0.o epn.o dev.o hub.o


@@ -0,0 +1,425 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* aspeed-vhub -- Driver for Aspeed SoC "vHub" USB gadget
*
* core.c - Top level support
*
* Copyright 2017 IBM Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/interrupt.h>
#include <linux/proc_fs.h>
#include <linux/prefetch.h>
#include <linux/clk.h>
#include <linux/usb/gadget.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/regmap.h>
#include <linux/dma-mapping.h>
#include "vhub.h"
void ast_vhub_done(struct ast_vhub_ep *ep, struct ast_vhub_req *req,
int status)
{
bool internal = req->internal;
EPVDBG(ep, "completing request @%p, status %d\n", req, status);
list_del_init(&req->queue);
if (req->req.status == -EINPROGRESS)
req->req.status = status;
if (req->req.dma) {
if (!WARN_ON(!ep->dev))
usb_gadget_unmap_request(&ep->dev->gadget,
&req->req, ep->epn.is_in);
req->req.dma = 0;
}
/*
* If this isn't an internal EP0 request, call the core
* to call the gadget completion.
*/
if (!internal) {
spin_unlock(&ep->vhub->lock);
usb_gadget_giveback_request(&ep->ep, &req->req);
spin_lock(&ep->vhub->lock);
}
}
void ast_vhub_nuke(struct ast_vhub_ep *ep, int status)
{
struct ast_vhub_req *req;
EPDBG(ep, "Nuking\n");
/* Beware, lock will be dropped & re-acquired by done() */
while (!list_empty(&ep->queue)) {
req = list_first_entry(&ep->queue, struct ast_vhub_req, queue);
ast_vhub_done(ep, req, status);
}
}
struct usb_request *ast_vhub_alloc_request(struct usb_ep *u_ep,
gfp_t gfp_flags)
{
struct ast_vhub_req *req;
req = kzalloc(sizeof(*req), gfp_flags);
if (!req)
return NULL;
return &req->req;
}
void ast_vhub_free_request(struct usb_ep *u_ep, struct usb_request *u_req)
{
struct ast_vhub_req *req = to_ast_req(u_req);
kfree(req);
}
static irqreturn_t ast_vhub_irq(int irq, void *data)
{
struct ast_vhub *vhub = data;
irqreturn_t iret = IRQ_NONE;
u32 istat;
/* Stale interrupt while tearing down */
if (!vhub->ep0_bufs)
return IRQ_NONE;
spin_lock(&vhub->lock);
/* Read and ACK interrupts */
istat = readl(vhub->regs + AST_VHUB_ISR);
if (!istat)
goto bail;
writel(istat, vhub->regs + AST_VHUB_ISR);
iret = IRQ_HANDLED;
UDCVDBG(vhub, "irq status=%08x, ep_acks=%08x ep_nacks=%08x\n",
istat,
readl(vhub->regs + AST_VHUB_EP_ACK_ISR),
readl(vhub->regs + AST_VHUB_EP_NACK_ISR));
/* Handle generic EPs first */
if (istat & VHUB_IRQ_EP_POOL_ACK_STALL) {
u32 i, ep_acks = readl(vhub->regs + AST_VHUB_EP_ACK_ISR);
writel(ep_acks, vhub->regs + AST_VHUB_EP_ACK_ISR);
for (i = 0; ep_acks && i < AST_VHUB_NUM_GEN_EPs; i++) {
u32 mask = VHUB_EP_IRQ(i);
if (ep_acks & mask) {
ast_vhub_epn_ack_irq(&vhub->epns[i]);
ep_acks &= ~mask;
}
}
}
/* Handle device interrupts */
if (istat & (VHUB_IRQ_DEVICE1 |
VHUB_IRQ_DEVICE2 |
VHUB_IRQ_DEVICE3 |
VHUB_IRQ_DEVICE4 |
VHUB_IRQ_DEVICE5)) {
if (istat & VHUB_IRQ_DEVICE1)
ast_vhub_dev_irq(&vhub->ports[0].dev);
if (istat & VHUB_IRQ_DEVICE2)
ast_vhub_dev_irq(&vhub->ports[1].dev);
if (istat & VHUB_IRQ_DEVICE3)
ast_vhub_dev_irq(&vhub->ports[2].dev);
if (istat & VHUB_IRQ_DEVICE4)
ast_vhub_dev_irq(&vhub->ports[3].dev);
if (istat & VHUB_IRQ_DEVICE5)
ast_vhub_dev_irq(&vhub->ports[4].dev);
}
/* Handle top-level vHub EP0 interrupts */
if (istat & (VHUB_IRQ_HUB_EP0_OUT_ACK_STALL |
VHUB_IRQ_HUB_EP0_IN_ACK_STALL |
VHUB_IRQ_HUB_EP0_SETUP)) {
if (istat & VHUB_IRQ_HUB_EP0_IN_ACK_STALL)
ast_vhub_ep0_handle_ack(&vhub->ep0, true);
if (istat & VHUB_IRQ_HUB_EP0_OUT_ACK_STALL)
ast_vhub_ep0_handle_ack(&vhub->ep0, false);
if (istat & VHUB_IRQ_HUB_EP0_SETUP)
ast_vhub_ep0_handle_setup(&vhub->ep0);
}
/* Various top level bus events */
if (istat & (VHUB_IRQ_BUS_RESUME |
VHUB_IRQ_BUS_SUSPEND |
VHUB_IRQ_BUS_RESET)) {
if (istat & VHUB_IRQ_BUS_RESUME)
ast_vhub_hub_resume(vhub);
if (istat & VHUB_IRQ_BUS_SUSPEND)
ast_vhub_hub_suspend(vhub);
if (istat & VHUB_IRQ_BUS_RESET)
ast_vhub_hub_reset(vhub);
}
bail:
spin_unlock(&vhub->lock);
return iret;
}
void ast_vhub_init_hw(struct ast_vhub *vhub)
{
u32 ctrl;
	UDCDBG(vhub, "(Re)Starting HW ...\n");
/* Enable PHY */
ctrl = VHUB_CTRL_PHY_CLK |
VHUB_CTRL_PHY_RESET_DIS;
/*
* We do *NOT* set the VHUB_CTRL_CLK_STOP_SUSPEND bit
* to stop the logic clock during suspend because
* it causes the registers to become inaccessible and
	 * we haven't yet figured out a good way to bring the
	 * controller back to life to issue a wakeup.
*/
/*
* Set some ISO & split control bits according to Aspeed
* recommendation
*
* VHUB_CTRL_ISO_RSP_CTRL: When set tells the HW to respond
	 * with a 0-byte data packet to ISO IN endpoints when no data
* is available.
*
* VHUB_CTRL_SPLIT_IN: This makes a SOF complete a split IN
* transaction.
*/
ctrl |= VHUB_CTRL_ISO_RSP_CTRL | VHUB_CTRL_SPLIT_IN;
writel(ctrl, vhub->regs + AST_VHUB_CTRL);
udelay(1);
/* Set descriptor ring size */
if (AST_VHUB_DESCS_COUNT == 256) {
ctrl |= VHUB_CTRL_LONG_DESC;
writel(ctrl, vhub->regs + AST_VHUB_CTRL);
} else {
BUILD_BUG_ON(AST_VHUB_DESCS_COUNT != 32);
}
/* Reset all devices */
writel(VHUB_SW_RESET_ALL, vhub->regs + AST_VHUB_SW_RESET);
udelay(1);
writel(0, vhub->regs + AST_VHUB_SW_RESET);
/* Disable and cleanup EP ACK/NACK interrupts */
writel(0, vhub->regs + AST_VHUB_EP_ACK_IER);
writel(0, vhub->regs + AST_VHUB_EP_NACK_IER);
writel(VHUB_EP_IRQ_ALL, vhub->regs + AST_VHUB_EP_ACK_ISR);
writel(VHUB_EP_IRQ_ALL, vhub->regs + AST_VHUB_EP_NACK_ISR);
/* Default settings for EP0, enable HW hub EP1 */
writel(0, vhub->regs + AST_VHUB_EP0_CTRL);
writel(VHUB_EP1_CTRL_RESET_TOGGLE |
VHUB_EP1_CTRL_ENABLE,
vhub->regs + AST_VHUB_EP1_CTRL);
writel(0, vhub->regs + AST_VHUB_EP1_STS_CHG);
/* Configure EP0 DMA buffer */
writel(vhub->ep0.buf_dma, vhub->regs + AST_VHUB_EP0_DATA);
/* Clear address */
writel(0, vhub->regs + AST_VHUB_CONF);
/* Pullup hub (activate on host) */
if (vhub->force_usb1)
ctrl |= VHUB_CTRL_FULL_SPEED_ONLY;
ctrl |= VHUB_CTRL_UPSTREAM_CONNECT;
writel(ctrl, vhub->regs + AST_VHUB_CTRL);
/* Enable some interrupts */
writel(VHUB_IRQ_HUB_EP0_IN_ACK_STALL |
VHUB_IRQ_HUB_EP0_OUT_ACK_STALL |
VHUB_IRQ_HUB_EP0_SETUP |
VHUB_IRQ_EP_POOL_ACK_STALL |
VHUB_IRQ_BUS_RESUME |
VHUB_IRQ_BUS_SUSPEND |
VHUB_IRQ_BUS_RESET,
vhub->regs + AST_VHUB_IER);
}
static int ast_vhub_remove(struct platform_device *pdev)
{
struct ast_vhub *vhub = platform_get_drvdata(pdev);
unsigned long flags;
int i;
if (!vhub || !vhub->regs)
return 0;
/* Remove devices */
for (i = 0; i < AST_VHUB_NUM_PORTS; i++)
ast_vhub_del_dev(&vhub->ports[i].dev);
spin_lock_irqsave(&vhub->lock, flags);
/* Mask & ack all interrupts */
writel(0, vhub->regs + AST_VHUB_IER);
writel(VHUB_IRQ_ACK_ALL, vhub->regs + AST_VHUB_ISR);
/* Pull device, leave PHY enabled */
writel(VHUB_CTRL_PHY_CLK |
VHUB_CTRL_PHY_RESET_DIS,
vhub->regs + AST_VHUB_CTRL);
if (vhub->clk)
clk_disable_unprepare(vhub->clk);
spin_unlock_irqrestore(&vhub->lock, flags);
if (vhub->ep0_bufs)
dma_free_coherent(&pdev->dev,
AST_VHUB_EP0_MAX_PACKET *
(AST_VHUB_NUM_PORTS + 1),
vhub->ep0_bufs,
vhub->ep0_bufs_dma);
vhub->ep0_bufs = NULL;
return 0;
}
static int ast_vhub_probe(struct platform_device *pdev)
{
enum usb_device_speed max_speed;
struct ast_vhub *vhub;
struct resource *res;
int i, rc = 0;
vhub = devm_kzalloc(&pdev->dev, sizeof(*vhub), GFP_KERNEL);
if (!vhub)
return -ENOMEM;
spin_lock_init(&vhub->lock);
vhub->pdev = pdev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
vhub->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(vhub->regs)) {
dev_err(&pdev->dev, "Failed to map resources\n");
return PTR_ERR(vhub->regs);
}
UDCDBG(vhub, "vHub@%pR mapped @%p\n", res, vhub->regs);
platform_set_drvdata(pdev, vhub);
vhub->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(vhub->clk)) {
rc = PTR_ERR(vhub->clk);
goto err;
}
rc = clk_prepare_enable(vhub->clk);
if (rc) {
		dev_err(&pdev->dev, "Couldn't enable clock (%d)\n", rc);
goto err;
}
/* Check if we need to limit the HW to USB1 */
max_speed = usb_get_maximum_speed(&pdev->dev);
if (max_speed != USB_SPEED_UNKNOWN && max_speed < USB_SPEED_HIGH)
vhub->force_usb1 = true;
/* Mask & ack all interrupts before installing the handler */
writel(0, vhub->regs + AST_VHUB_IER);
writel(VHUB_IRQ_ACK_ALL, vhub->regs + AST_VHUB_ISR);
/* Find interrupt and install handler */
vhub->irq = platform_get_irq(pdev, 0);
if (vhub->irq < 0) {
dev_err(&pdev->dev, "Failed to get interrupt\n");
rc = vhub->irq;
goto err;
}
rc = devm_request_irq(&pdev->dev, vhub->irq, ast_vhub_irq, 0,
KBUILD_MODNAME, vhub);
if (rc) {
dev_err(&pdev->dev, "Failed to request interrupt\n");
goto err;
}
/*
* Allocate DMA buffers for all EP0s in one chunk,
* one per port and one for the vHub itself
*/
vhub->ep0_bufs = dma_alloc_coherent(&pdev->dev,
AST_VHUB_EP0_MAX_PACKET *
(AST_VHUB_NUM_PORTS + 1),
&vhub->ep0_bufs_dma, GFP_KERNEL);
if (!vhub->ep0_bufs) {
dev_err(&pdev->dev, "Failed to allocate EP0 DMA buffers\n");
rc = -ENOMEM;
goto err;
}
UDCVDBG(vhub, "EP0 DMA buffers @%p (DMA 0x%08x)\n",
vhub->ep0_bufs, (u32)vhub->ep0_bufs_dma);
/* Init vHub EP0 */
ast_vhub_init_ep0(vhub, &vhub->ep0, NULL);
/* Init devices */
for (i = 0; i < AST_VHUB_NUM_PORTS && rc == 0; i++)
rc = ast_vhub_init_dev(vhub, i);
if (rc)
goto err;
/* Init hub emulation */
ast_vhub_init_hub(vhub);
/* Initialize HW */
ast_vhub_init_hw(vhub);
dev_info(&pdev->dev, "Initialized virtual hub in USB%d mode\n",
vhub->force_usb1 ? 1 : 2);
return 0;
err:
ast_vhub_remove(pdev);
return rc;
}
static const struct of_device_id ast_vhub_dt_ids[] = {
{
.compatible = "aspeed,ast2400-usb-vhub",
},
{
.compatible = "aspeed,ast2500-usb-vhub",
},
{ }
};
MODULE_DEVICE_TABLE(of, ast_vhub_dt_ids);
static struct platform_driver ast_vhub_driver = {
.probe = ast_vhub_probe,
.remove = ast_vhub_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = ast_vhub_dt_ids,
},
};
module_platform_driver(ast_vhub_driver);
MODULE_DESCRIPTION("Aspeed vHub udc driver");
MODULE_AUTHOR("Benjamin Herrenschmidt <benh@kernel.crashing.org>");
MODULE_LICENSE("GPL");
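The `VHUB_IRQ_EP_POOL_ACK_STALL` branch of `ast_vhub_irq()` above walks the acknowledged-endpoint bitmask, dispatching one handler per set bit and clearing bits as it goes so the loop can stop early. A self-contained sketch of that pattern (not driver code; `NUM_GEN_EPS` and `handle_ep()` are local stand-ins for `AST_VHUB_NUM_GEN_EPs` and `ast_vhub_epn_ack_irq()`):

```c
#include <stdint.h>
#include <assert.h>

#define NUM_GEN_EPS 15	/* stand-in for AST_VHUB_NUM_GEN_EPs */

static uint32_t handled_mask;	/* records which EP handlers ran */

/* Hypothetical stand-in for ast_vhub_epn_ack_irq() */
static void handle_ep(unsigned int idx)
{
	handled_mask |= 1u << idx;
}

/* Dispatch one handler call per set bit; clearing each consumed bit
 * lets the loop terminate as soon as ep_acks reaches zero. */
static void dispatch_ep_acks(uint32_t ep_acks)
{
	unsigned int i;

	for (i = 0; ep_acks && i < NUM_GEN_EPS; i++) {
		uint32_t mask = 1u << i;

		if (ep_acks & mask) {
			handle_ep(i);
			ep_acks &= ~mask;
		}
	}
}
```

The early-exit condition matters in an interrupt handler: most interrupts ack a single endpoint, so the loop usually runs only until that one bit is cleared.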

// SPDX-License-Identifier: GPL-2.0+
/*
* aspeed-vhub -- Driver for Aspeed SoC "vHub" USB gadget
*
* dev.c - Individual device/gadget management (ie, a port = a gadget)
*
* Copyright 2017 IBM Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/interrupt.h>
#include <linux/proc_fs.h>
#include <linux/prefetch.h>
#include <linux/clk.h>
#include <linux/usb/gadget.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/regmap.h>
#include <linux/dma-mapping.h>
#include <linux/usb.h>
#include <linux/usb/hcd.h>
#include "vhub.h"
void ast_vhub_dev_irq(struct ast_vhub_dev *d)
{
u32 istat = readl(d->regs + AST_VHUB_DEV_ISR);
writel(istat, d->regs + AST_VHUB_DEV_ISR);
if (istat & VHUV_DEV_IRQ_EP0_IN_ACK_STALL)
ast_vhub_ep0_handle_ack(&d->ep0, true);
if (istat & VHUV_DEV_IRQ_EP0_OUT_ACK_STALL)
ast_vhub_ep0_handle_ack(&d->ep0, false);
if (istat & VHUV_DEV_IRQ_EP0_SETUP)
ast_vhub_ep0_handle_setup(&d->ep0);
}
static void ast_vhub_dev_enable(struct ast_vhub_dev *d)
{
u32 reg, hmsk;
if (d->enabled)
return;
/* Enable device and its EP0 interrupts */
reg = VHUB_DEV_EN_ENABLE_PORT |
VHUB_DEV_EN_EP0_IN_ACK_IRQEN |
VHUB_DEV_EN_EP0_OUT_ACK_IRQEN |
VHUB_DEV_EN_EP0_SETUP_IRQEN;
if (d->gadget.speed == USB_SPEED_HIGH)
reg |= VHUB_DEV_EN_SPEED_SEL_HIGH;
writel(reg, d->regs + AST_VHUB_DEV_EN_CTRL);
/* Enable device interrupt in the hub as well */
hmsk = VHUB_IRQ_DEVICE1 << d->index;
reg = readl(d->vhub->regs + AST_VHUB_IER);
reg |= hmsk;
writel(reg, d->vhub->regs + AST_VHUB_IER);
/* Set EP0 DMA buffer address */
writel(d->ep0.buf_dma, d->regs + AST_VHUB_DEV_EP0_DATA);
d->enabled = true;
}
static void ast_vhub_dev_disable(struct ast_vhub_dev *d)
{
u32 reg, hmsk;
if (!d->enabled)
return;
/* Disable device interrupt in the hub */
hmsk = VHUB_IRQ_DEVICE1 << d->index;
reg = readl(d->vhub->regs + AST_VHUB_IER);
reg &= ~hmsk;
writel(reg, d->vhub->regs + AST_VHUB_IER);
/* Then disable device */
writel(0, d->regs + AST_VHUB_DEV_EN_CTRL);
d->gadget.speed = USB_SPEED_UNKNOWN;
d->enabled = false;
d->suspended = false;
}
static int ast_vhub_dev_feature(struct ast_vhub_dev *d,
u16 wIndex, u16 wValue,
bool is_set)
{
DDBG(d, "%s_FEATURE(dev val=%02x)\n",
is_set ? "SET" : "CLEAR", wValue);
if (wValue != USB_DEVICE_REMOTE_WAKEUP)
return std_req_driver;
d->wakeup_en = is_set;
return std_req_complete;
}
static int ast_vhub_ep_feature(struct ast_vhub_dev *d,
u16 wIndex, u16 wValue, bool is_set)
{
struct ast_vhub_ep *ep;
int ep_num;
ep_num = wIndex & USB_ENDPOINT_NUMBER_MASK;
DDBG(d, "%s_FEATURE(ep%d val=%02x)\n",
is_set ? "SET" : "CLEAR", ep_num, wValue);
if (ep_num == 0)
return std_req_complete;
if (ep_num >= AST_VHUB_NUM_GEN_EPs || !d->epns[ep_num - 1])
return std_req_stall;
if (wValue != USB_ENDPOINT_HALT)
return std_req_driver;
ep = d->epns[ep_num - 1];
if (WARN_ON(!ep))
return std_req_stall;
if (!ep->epn.enabled || !ep->ep.desc || ep->epn.is_iso ||
ep->epn.is_in != !!(wIndex & USB_DIR_IN))
return std_req_stall;
DDBG(d, "%s stall on EP %d\n",
is_set ? "setting" : "clearing", ep_num);
ep->epn.stalled = is_set;
ast_vhub_update_epn_stall(ep);
return std_req_complete;
}
static int ast_vhub_dev_status(struct ast_vhub_dev *d,
u16 wIndex, u16 wValue)
{
u8 st0;
DDBG(d, "GET_STATUS(dev)\n");
st0 = d->gadget.is_selfpowered << USB_DEVICE_SELF_POWERED;
if (d->wakeup_en)
st0 |= 1 << USB_DEVICE_REMOTE_WAKEUP;
return ast_vhub_simple_reply(&d->ep0, st0, 0);
}
static int ast_vhub_ep_status(struct ast_vhub_dev *d,
u16 wIndex, u16 wValue)
{
int ep_num = wIndex & USB_ENDPOINT_NUMBER_MASK;
struct ast_vhub_ep *ep;
u8 st0 = 0;
DDBG(d, "GET_STATUS(ep%d)\n", ep_num);
if (ep_num >= AST_VHUB_NUM_GEN_EPs)
return std_req_stall;
if (ep_num != 0) {
ep = d->epns[ep_num - 1];
if (!ep)
return std_req_stall;
if (!ep->epn.enabled || !ep->ep.desc || ep->epn.is_iso ||
ep->epn.is_in != !!(wIndex & USB_DIR_IN))
return std_req_stall;
if (ep->epn.stalled)
st0 |= 1 << USB_ENDPOINT_HALT;
}
return ast_vhub_simple_reply(&d->ep0, st0, 0);
}
static void ast_vhub_dev_set_address(struct ast_vhub_dev *d, u8 addr)
{
u32 reg;
DDBG(d, "SET_ADDRESS: Got address %x\n", addr);
reg = readl(d->regs + AST_VHUB_DEV_EN_CTRL);
reg &= ~VHUB_DEV_EN_ADDR_MASK;
reg |= VHUB_DEV_EN_SET_ADDR(addr);
writel(reg, d->regs + AST_VHUB_DEV_EN_CTRL);
}
int ast_vhub_std_dev_request(struct ast_vhub_ep *ep,
struct usb_ctrlrequest *crq)
{
struct ast_vhub_dev *d = ep->dev;
u16 wValue, wIndex;
/* No driver, we shouldn't be enabled ... */
if (!d->driver || !d->enabled || d->suspended) {
EPDBG(ep,
		      "Device in wrong state driver=%p enabled=%d"
" suspended=%d\n",
d->driver, d->enabled, d->suspended);
return std_req_stall;
}
/* First packet, grab speed */
if (d->gadget.speed == USB_SPEED_UNKNOWN) {
d->gadget.speed = ep->vhub->speed;
if (d->gadget.speed > d->driver->max_speed)
d->gadget.speed = d->driver->max_speed;
		DDBG(d, "first packet, captured speed %d\n",
d->gadget.speed);
}
wValue = le16_to_cpu(crq->wValue);
wIndex = le16_to_cpu(crq->wIndex);
switch ((crq->bRequestType << 8) | crq->bRequest) {
/* SET_ADDRESS */
case DeviceOutRequest | USB_REQ_SET_ADDRESS:
ast_vhub_dev_set_address(d, wValue);
return std_req_complete;
/* GET_STATUS */
case DeviceRequest | USB_REQ_GET_STATUS:
return ast_vhub_dev_status(d, wIndex, wValue);
case InterfaceRequest | USB_REQ_GET_STATUS:
return ast_vhub_simple_reply(ep, 0, 0);
case EndpointRequest | USB_REQ_GET_STATUS:
return ast_vhub_ep_status(d, wIndex, wValue);
/* SET/CLEAR_FEATURE */
case DeviceOutRequest | USB_REQ_SET_FEATURE:
return ast_vhub_dev_feature(d, wIndex, wValue, true);
case DeviceOutRequest | USB_REQ_CLEAR_FEATURE:
return ast_vhub_dev_feature(d, wIndex, wValue, false);
case EndpointOutRequest | USB_REQ_SET_FEATURE:
return ast_vhub_ep_feature(d, wIndex, wValue, true);
case EndpointOutRequest | USB_REQ_CLEAR_FEATURE:
return ast_vhub_ep_feature(d, wIndex, wValue, false);
}
return std_req_driver;
}
static int ast_vhub_udc_wakeup(struct usb_gadget *gadget)
{
struct ast_vhub_dev *d = to_ast_dev(gadget);
unsigned long flags;
int rc = -EINVAL;
spin_lock_irqsave(&d->vhub->lock, flags);
if (!d->wakeup_en)
goto err;
DDBG(d, "Device initiated wakeup\n");
/* Wakeup the host */
ast_vhub_hub_wake_all(d->vhub);
rc = 0;
err:
spin_unlock_irqrestore(&d->vhub->lock, flags);
return rc;
}
static int ast_vhub_udc_get_frame(struct usb_gadget *gadget)
{
struct ast_vhub_dev *d = to_ast_dev(gadget);
return (readl(d->vhub->regs + AST_VHUB_USBSTS) >> 16) & 0x7ff;
}
static void ast_vhub_dev_nuke(struct ast_vhub_dev *d)
{
unsigned int i;
for (i = 0; i < AST_VHUB_NUM_GEN_EPs; i++) {
if (!d->epns[i])
continue;
ast_vhub_nuke(d->epns[i], -ESHUTDOWN);
}
}
static int ast_vhub_udc_pullup(struct usb_gadget *gadget, int on)
{
struct ast_vhub_dev *d = to_ast_dev(gadget);
unsigned long flags;
spin_lock_irqsave(&d->vhub->lock, flags);
DDBG(d, "pullup(%d)\n", on);
/* Mark disconnected in the hub */
ast_vhub_device_connect(d->vhub, d->index, on);
/*
* If enabled, nuke all requests if any (there shouldn't be)
* and disable the port. This will clear the address too.
*/
if (d->enabled) {
ast_vhub_dev_nuke(d);
ast_vhub_dev_disable(d);
}
spin_unlock_irqrestore(&d->vhub->lock, flags);
return 0;
}
static int ast_vhub_udc_start(struct usb_gadget *gadget,
struct usb_gadget_driver *driver)
{
struct ast_vhub_dev *d = to_ast_dev(gadget);
unsigned long flags;
spin_lock_irqsave(&d->vhub->lock, flags);
DDBG(d, "start\n");
/* We don't do much more until the hub enables us */
d->driver = driver;
d->gadget.is_selfpowered = 1;
spin_unlock_irqrestore(&d->vhub->lock, flags);
return 0;
}
static struct usb_ep *ast_vhub_udc_match_ep(struct usb_gadget *gadget,
struct usb_endpoint_descriptor *desc,
struct usb_ss_ep_comp_descriptor *ss)
{
struct ast_vhub_dev *d = to_ast_dev(gadget);
struct ast_vhub_ep *ep;
struct usb_ep *u_ep;
unsigned int max, addr, i;
DDBG(d, "Match EP type %d\n", usb_endpoint_type(desc));
/*
* First we need to look for an existing unclaimed EP as another
* configuration may have already associated a bunch of EPs with
* this gadget. This duplicates the code in usb_ep_autoconfig_ss()
* unfortunately.
*/
list_for_each_entry(u_ep, &gadget->ep_list, ep_list) {
if (usb_gadget_ep_match_desc(gadget, u_ep, desc, ss)) {
DDBG(d, " -> using existing EP%d\n",
to_ast_ep(u_ep)->d_idx);
return u_ep;
}
}
/*
* We didn't find one, we need to grab one from the pool.
*
* First let's do some sanity checking
*/
	switch (usb_endpoint_type(desc)) {
case USB_ENDPOINT_XFER_CONTROL:
/* Only EP0 can be a control endpoint */
return NULL;
case USB_ENDPOINT_XFER_ISOC:
/* ISO: limit 1023 bytes full speed, 1024 high/super speed */
if (gadget_is_dualspeed(gadget))
max = 1024;
else
max = 1023;
break;
case USB_ENDPOINT_XFER_BULK:
if (gadget_is_dualspeed(gadget))
max = 512;
else
max = 64;
break;
case USB_ENDPOINT_XFER_INT:
if (gadget_is_dualspeed(gadget))
max = 1024;
else
max = 64;
break;
}
if (usb_endpoint_maxp(desc) > max)
return NULL;
/*
* Find a free EP address for that device. We can't
* let the generic code assign these as it would
* create overlapping numbers for IN and OUT which
* we don't support, so also create a suitable name
* that will allow the generic code to use our
* assigned address.
*/
for (i = 0; i < AST_VHUB_NUM_GEN_EPs; i++)
if (d->epns[i] == NULL)
break;
if (i >= AST_VHUB_NUM_GEN_EPs)
return NULL;
addr = i + 1;
/*
* Now grab an EP from the shared pool and associate
* it with our device
*/
ep = ast_vhub_alloc_epn(d, addr);
if (!ep)
return NULL;
DDBG(d, "Allocated epn#%d for port EP%d\n",
ep->epn.g_idx, addr);
return &ep->ep;
}
static int ast_vhub_udc_stop(struct usb_gadget *gadget)
{
struct ast_vhub_dev *d = to_ast_dev(gadget);
unsigned long flags;
spin_lock_irqsave(&d->vhub->lock, flags);
DDBG(d, "stop\n");
d->driver = NULL;
d->gadget.speed = USB_SPEED_UNKNOWN;
ast_vhub_dev_nuke(d);
if (d->enabled)
ast_vhub_dev_disable(d);
spin_unlock_irqrestore(&d->vhub->lock, flags);
return 0;
}
static struct usb_gadget_ops ast_vhub_udc_ops = {
.get_frame = ast_vhub_udc_get_frame,
.wakeup = ast_vhub_udc_wakeup,
.pullup = ast_vhub_udc_pullup,
.udc_start = ast_vhub_udc_start,
.udc_stop = ast_vhub_udc_stop,
.match_ep = ast_vhub_udc_match_ep,
};
void ast_vhub_dev_suspend(struct ast_vhub_dev *d)
{
d->suspended = true;
if (d->driver) {
spin_unlock(&d->vhub->lock);
d->driver->suspend(&d->gadget);
spin_lock(&d->vhub->lock);
}
}
void ast_vhub_dev_resume(struct ast_vhub_dev *d)
{
d->suspended = false;
if (d->driver) {
spin_unlock(&d->vhub->lock);
d->driver->resume(&d->gadget);
spin_lock(&d->vhub->lock);
}
}
void ast_vhub_dev_reset(struct ast_vhub_dev *d)
{
/*
* If speed is not set, we enable the port. If it is,
* send reset to the gadget and reset "speed".
*
* Speed is an indication that we have got the first
* setup packet to the device.
*/
if (d->gadget.speed == USB_SPEED_UNKNOWN && !d->enabled) {
DDBG(d, "Reset at unknown speed of disabled device, enabling...\n");
ast_vhub_dev_enable(d);
d->suspended = false;
}
if (d->gadget.speed != USB_SPEED_UNKNOWN && d->driver) {
unsigned int i;
DDBG(d, "Reset at known speed of bound device, resetting...\n");
spin_unlock(&d->vhub->lock);
d->driver->reset(&d->gadget);
spin_lock(&d->vhub->lock);
/*
* Disable/re-enable HW, this will clear the address
* and speed setting.
*/
ast_vhub_dev_disable(d);
ast_vhub_dev_enable(d);
/* Clear stall on all EPs */
for (i = 0; i < AST_VHUB_NUM_GEN_EPs; i++) {
struct ast_vhub_ep *ep = d->epns[i];
if (ep && ep->epn.stalled) {
ep->epn.stalled = false;
ast_vhub_update_epn_stall(ep);
}
}
/* Additional cleanups */
d->wakeup_en = false;
d->suspended = false;
}
}
void ast_vhub_del_dev(struct ast_vhub_dev *d)
{
unsigned long flags;
spin_lock_irqsave(&d->vhub->lock, flags);
if (!d->registered) {
spin_unlock_irqrestore(&d->vhub->lock, flags);
return;
}
d->registered = false;
spin_unlock_irqrestore(&d->vhub->lock, flags);
usb_del_gadget_udc(&d->gadget);
device_unregister(d->port_dev);
}
static void ast_vhub_dev_release(struct device *dev)
{
kfree(dev);
}
int ast_vhub_init_dev(struct ast_vhub *vhub, unsigned int idx)
{
struct ast_vhub_dev *d = &vhub->ports[idx].dev;
struct device *parent = &vhub->pdev->dev;
int rc;
d->vhub = vhub;
d->index = idx;
	d->name = devm_kasprintf(parent, GFP_KERNEL, "port%d", idx + 1);
d->regs = vhub->regs + 0x100 + 0x10 * idx;
ast_vhub_init_ep0(vhub, &d->ep0, d);
/*
* The UDC core really needs us to have separate and uniquely
* named "parent" devices for each port so we create a sub device
* here for that purpose
*/
d->port_dev = kzalloc(sizeof(struct device), GFP_KERNEL);
if (!d->port_dev)
return -ENOMEM;
device_initialize(d->port_dev);
d->port_dev->release = ast_vhub_dev_release;
d->port_dev->parent = parent;
dev_set_name(d->port_dev, "%s:p%d", dev_name(parent), idx + 1);
rc = device_add(d->port_dev);
if (rc)
goto fail_add;
/* Populate gadget */
INIT_LIST_HEAD(&d->gadget.ep_list);
d->gadget.ops = &ast_vhub_udc_ops;
d->gadget.ep0 = &d->ep0.ep;
d->gadget.name = KBUILD_MODNAME;
if (vhub->force_usb1)
d->gadget.max_speed = USB_SPEED_FULL;
else
d->gadget.max_speed = USB_SPEED_HIGH;
d->gadget.speed = USB_SPEED_UNKNOWN;
d->gadget.dev.of_node = vhub->pdev->dev.of_node;
rc = usb_add_gadget_udc(d->port_dev, &d->gadget);
if (rc != 0)
goto fail_udc;
d->registered = true;
return 0;
fail_udc:
device_del(d->port_dev);
fail_add:
put_device(d->port_dev);
return rc;
}
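`ast_vhub_std_dev_request()` above dispatches on a single key built as `(crq->bRequestType << 8) | crq->bRequest`, which is what makes the `DeviceOutRequest | USB_REQ_SET_ADDRESS` case labels work (the ch9 `DeviceOutRequest` family of macros is the same `bmRequestType << 8` encoding). A minimal standalone illustration of that key construction, with the relevant ch9 constant values inlined:

```c
#include <stdint.h>
#include <assert.h>

/* ch9 constant values from the USB 2.0 spec */
#define USB_DIR_OUT		0x00
#define USB_TYPE_STANDARD	0x00
#define USB_RECIP_DEVICE	0x00
#define USB_REQ_SET_ADDRESS	0x05

/* Matches how include/linux/usb/ch9.h builds its request keys */
#define DeviceOutRequest \
	((USB_DIR_OUT | USB_TYPE_STANDARD | USB_RECIP_DEVICE) << 8)

/* Same key construction as the switch in ast_vhub_std_dev_request() */
static uint16_t request_key(uint8_t bRequestType, uint8_t bRequest)
{
	return (uint16_t)((bRequestType << 8) | bRequest);
}
```

Folding type and request into one 16-bit key lets the driver handle the whole standard-request matrix in a single flat `switch` instead of nested tests on `bRequestType` and `bRequest`.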

// SPDX-License-Identifier: GPL-2.0+
/*
* aspeed-vhub -- Driver for Aspeed SoC "vHub" USB gadget
*
* ep0.c - Endpoint 0 handling
*
* Copyright 2017 IBM Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/interrupt.h>
#include <linux/proc_fs.h>
#include <linux/prefetch.h>
#include <linux/clk.h>
#include <linux/usb/gadget.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/regmap.h>
#include <linux/dma-mapping.h>
#include "vhub.h"
int ast_vhub_reply(struct ast_vhub_ep *ep, char *ptr, int len)
{
struct usb_request *req = &ep->ep0.req.req;
int rc;
if (WARN_ON(ep->d_idx != 0))
return std_req_stall;
if (WARN_ON(!ep->ep0.dir_in))
return std_req_stall;
if (WARN_ON(len > AST_VHUB_EP0_MAX_PACKET))
return std_req_stall;
if (WARN_ON(req->status == -EINPROGRESS))
return std_req_stall;
req->buf = ptr;
req->length = len;
req->complete = NULL;
req->zero = true;
/*
* Call internal queue directly after dropping the lock. This is
* safe to do as the reply is always the last thing done when
* processing a SETUP packet, usually as a tail call
*/
spin_unlock(&ep->vhub->lock);
if (ep->ep.ops->queue(&ep->ep, req, GFP_ATOMIC))
rc = std_req_stall;
else
rc = std_req_data;
spin_lock(&ep->vhub->lock);
return rc;
}
int __ast_vhub_simple_reply(struct ast_vhub_ep *ep, int len, ...)
{
u8 *buffer = ep->buf;
unsigned int i;
va_list args;
va_start(args, len);
/* Copy data directly into EP buffer */
for (i = 0; i < len; i++)
buffer[i] = va_arg(args, int);
va_end(args);
/* req->buf NULL means data is already there */
return ast_vhub_reply(ep, NULL, len);
}
void ast_vhub_ep0_handle_setup(struct ast_vhub_ep *ep)
{
struct usb_ctrlrequest crq;
enum std_req_rc std_req_rc;
int rc = -ENODEV;
if (WARN_ON(ep->d_idx != 0))
return;
/*
* Grab the setup packet from the chip and byteswap
* interesting fields
*/
memcpy_fromio(&crq, ep->ep0.setup, sizeof(crq));
EPDBG(ep, "SETUP packet %02x/%02x/%04x/%04x/%04x [%s] st=%d\n",
crq.bRequestType, crq.bRequest,
le16_to_cpu(crq.wValue),
le16_to_cpu(crq.wIndex),
le16_to_cpu(crq.wLength),
(crq.bRequestType & USB_DIR_IN) ? "in" : "out",
ep->ep0.state);
/* Check our state, cancel pending requests if needed */
if (ep->ep0.state != ep0_state_token) {
EPDBG(ep, "wrong state\n");
ast_vhub_nuke(ep, 0);
goto stall;
}
/* Calculate next state for EP0 */
ep->ep0.state = ep0_state_data;
ep->ep0.dir_in = !!(crq.bRequestType & USB_DIR_IN);
/* If this is the vHub, we handle requests differently */
std_req_rc = std_req_driver;
if (ep->dev == NULL) {
if ((crq.bRequestType & USB_TYPE_MASK) == USB_TYPE_STANDARD)
std_req_rc = ast_vhub_std_hub_request(ep, &crq);
else if ((crq.bRequestType & USB_TYPE_MASK) == USB_TYPE_CLASS)
std_req_rc = ast_vhub_class_hub_request(ep, &crq);
else
std_req_rc = std_req_stall;
} else if ((crq.bRequestType & USB_TYPE_MASK) == USB_TYPE_STANDARD)
std_req_rc = ast_vhub_std_dev_request(ep, &crq);
/* Act upon result */
	switch (std_req_rc) {
case std_req_complete:
goto complete;
case std_req_stall:
goto stall;
case std_req_driver:
break;
case std_req_data:
return;
}
/* Pass request up to the gadget driver */
if (WARN_ON(!ep->dev))
goto stall;
if (ep->dev->driver) {
EPDBG(ep, "forwarding to gadget...\n");
spin_unlock(&ep->vhub->lock);
rc = ep->dev->driver->setup(&ep->dev->gadget, &crq);
spin_lock(&ep->vhub->lock);
EPDBG(ep, "driver returned %d\n", rc);
} else {
EPDBG(ep, "no gadget for request !\n");
}
if (rc >= 0)
return;
stall:
EPDBG(ep, "stalling\n");
writel(VHUB_EP0_CTRL_STALL, ep->ep0.ctlstat);
ep->ep0.state = ep0_state_status;
ep->ep0.dir_in = false;
return;
complete:
EPVDBG(ep, "sending [in] status with no data\n");
writel(VHUB_EP0_TX_BUFF_RDY, ep->ep0.ctlstat);
ep->ep0.state = ep0_state_status;
ep->ep0.dir_in = false;
}
static void ast_vhub_ep0_do_send(struct ast_vhub_ep *ep,
struct ast_vhub_req *req)
{
unsigned int chunk;
u32 reg;
/* If this is a 0-length request, it's the gadget trying to
* send a status on our behalf. We take it from here.
*/
if (req->req.length == 0)
req->last_desc = 1;
/* Are we done ? Complete request, otherwise wait for next interrupt */
if (req->last_desc >= 0) {
EPVDBG(ep, "complete send %d/%d\n",
req->req.actual, req->req.length);
ep->ep0.state = ep0_state_status;
writel(VHUB_EP0_RX_BUFF_RDY, ep->ep0.ctlstat);
ast_vhub_done(ep, req, 0);
return;
}
/*
* Next chunk cropped to max packet size. Also check if this
* is the last packet
*/
chunk = req->req.length - req->req.actual;
if (chunk > ep->ep.maxpacket)
chunk = ep->ep.maxpacket;
else if ((chunk < ep->ep.maxpacket) || !req->req.zero)
req->last_desc = 1;
EPVDBG(ep, "send chunk=%d last=%d, req->act=%d mp=%d\n",
chunk, req->last_desc, req->req.actual, ep->ep.maxpacket);
/*
* Copy data if any (internal requests already have data
* in the EP buffer)
*/
if (chunk && req->req.buf)
memcpy(ep->buf, req->req.buf + req->req.actual, chunk);
/* Remember chunk size and trigger send */
reg = VHUB_EP0_SET_TX_LEN(chunk);
writel(reg, ep->ep0.ctlstat);
writel(reg | VHUB_EP0_TX_BUFF_RDY, ep->ep0.ctlstat);
req->req.actual += chunk;
}
static void ast_vhub_ep0_rx_prime(struct ast_vhub_ep *ep)
{
EPVDBG(ep, "rx prime\n");
/* Prime endpoint for receiving data */
	writel(VHUB_EP0_RX_BUFF_RDY, ep->ep0.ctlstat);
}
static void ast_vhub_ep0_do_receive(struct ast_vhub_ep *ep, struct ast_vhub_req *req,
unsigned int len)
{
unsigned int remain;
int rc = 0;
/* We are receiving... grab request */
remain = req->req.length - req->req.actual;
EPVDBG(ep, "receive got=%d remain=%d\n", len, remain);
/* Are we getting more than asked ? */
if (len > remain) {
EPDBG(ep, "receiving too much (ovf: %d) !\n",
len - remain);
len = remain;
rc = -EOVERFLOW;
}
if (len && req->req.buf)
memcpy(req->req.buf + req->req.actual, ep->buf, len);
req->req.actual += len;
/* Done ? */
if (len < ep->ep.maxpacket || len == remain) {
ep->ep0.state = ep0_state_status;
writel(VHUB_EP0_TX_BUFF_RDY, ep->ep0.ctlstat);
ast_vhub_done(ep, req, rc);
} else
ast_vhub_ep0_rx_prime(ep);
}
void ast_vhub_ep0_handle_ack(struct ast_vhub_ep *ep, bool in_ack)
{
struct ast_vhub_req *req;
struct ast_vhub *vhub = ep->vhub;
struct device *dev = &vhub->pdev->dev;
bool stall = false;
u32 stat;
/* Read EP0 status */
stat = readl(ep->ep0.ctlstat);
/* Grab current request if any */
req = list_first_entry_or_null(&ep->queue, struct ast_vhub_req, queue);
EPVDBG(ep, "ACK status=%08x,state=%d is_in=%d in_ack=%d req=%p\n",
stat, ep->ep0.state, ep->ep0.dir_in, in_ack, req);
	switch (ep->ep0.state) {
case ep0_state_token:
/* There should be no request queued in that state... */
if (req) {
dev_warn(dev, "request present while in TOKEN state\n");
ast_vhub_nuke(ep, -EINVAL);
}
dev_warn(dev, "ack while in TOKEN state\n");
stall = true;
break;
case ep0_state_data:
/* Check the state bits corresponding to our direction */
if ((ep->ep0.dir_in && (stat & VHUB_EP0_TX_BUFF_RDY)) ||
(!ep->ep0.dir_in && (stat & VHUB_EP0_RX_BUFF_RDY)) ||
(ep->ep0.dir_in != in_ack)) {
			dev_warn(dev, "irq state mismatch\n");
stall = true;
break;
}
/*
* We are in data phase and there's no request, something is
* wrong, stall
*/
if (!req) {
dev_warn(dev, "data phase, no request\n");
stall = true;
break;
}
/* We have a request, handle data transfers */
if (ep->ep0.dir_in)
ast_vhub_ep0_do_send(ep, req);
else
ast_vhub_ep0_do_receive(ep, req, VHUB_EP0_RX_LEN(stat));
return;
case ep0_state_status:
/* Nuke stale requests */
if (req) {
dev_warn(dev, "request present while in STATUS state\n");
ast_vhub_nuke(ep, -EINVAL);
}
/*
* If the status phase completes with the wrong ack, stall
* the endpoint just in case, to abort whatever the host
* was doing.
*/
if (ep->ep0.dir_in == in_ack) {
dev_warn(dev, "status direction mismatch\n");
stall = true;
}
}
/* Reset to token state */
ep->ep0.state = ep0_state_token;
if (stall)
writel(VHUB_EP0_CTRL_STALL, ep->ep0.ctlstat);
}
static int ast_vhub_ep0_queue(struct usb_ep *u_ep, struct usb_request *u_req,
gfp_t gfp_flags)
{
struct ast_vhub_req *req = to_ast_req(u_req);
struct ast_vhub_ep *ep = to_ast_ep(u_ep);
struct ast_vhub *vhub = ep->vhub;
struct device *dev = &vhub->pdev->dev;
unsigned long flags;
	/* Paranoid checks */
if (!u_req || (!u_req->complete && !req->internal)) {
dev_warn(dev, "Bogus EP0 request ! u_req=%p\n", u_req);
if (u_req) {
dev_warn(dev, "complete=%p internal=%d\n",
u_req->complete, req->internal);
}
return -EINVAL;
}
/* Not endpoint 0 ? */
if (WARN_ON(ep->d_idx != 0))
return -EINVAL;
/* Disabled device */
if (ep->dev && (!ep->dev->enabled || ep->dev->suspended))
return -ESHUTDOWN;
/* Data, no buffer and not internal ? */
if (u_req->length && !u_req->buf && !req->internal) {
dev_warn(dev, "Request with no buffer !\n");
return -EINVAL;
}
EPVDBG(ep, "enqueue req @%p\n", req);
EPVDBG(ep, " l=%d zero=%d noshort=%d is_in=%d\n",
u_req->length, u_req->zero,
u_req->short_not_ok, ep->ep0.dir_in);
/* Initialize request progress fields */
u_req->status = -EINPROGRESS;
u_req->actual = 0;
req->last_desc = -1;
req->active = false;
spin_lock_irqsave(&vhub->lock, flags);
/* EP0 can only support a single request at a time */
if (!list_empty(&ep->queue) || ep->ep0.state == ep0_state_token) {
dev_warn(dev, "EP0: Request in wrong state\n");
spin_unlock_irqrestore(&vhub->lock, flags);
return -EBUSY;
}
/* Add request to list and kick processing if empty */
list_add_tail(&req->queue, &ep->queue);
if (ep->ep0.dir_in) {
/* IN request, send data */
ast_vhub_ep0_do_send(ep, req);
} else if (u_req->length == 0) {
/* 0-len request, send completion as rx */
EPVDBG(ep, "0-length rx completion\n");
ep->ep0.state = ep0_state_status;
writel(VHUB_EP0_TX_BUFF_RDY, ep->ep0.ctlstat);
ast_vhub_done(ep, req, 0);
} else {
/* OUT request, start receiver */
ast_vhub_ep0_rx_prime(ep);
}
spin_unlock_irqrestore(&vhub->lock, flags);
return 0;
}
static int ast_vhub_ep0_dequeue(struct usb_ep *u_ep, struct usb_request *u_req)
{
struct ast_vhub_ep *ep = to_ast_ep(u_ep);
struct ast_vhub *vhub = ep->vhub;
struct ast_vhub_req *req;
unsigned long flags;
int rc = -EINVAL;
spin_lock_irqsave(&vhub->lock, flags);
/* Only one request can be in the queue */
req = list_first_entry_or_null(&ep->queue, struct ast_vhub_req, queue);
/* Is it ours ? */
if (req && u_req == &req->req) {
EPVDBG(ep, "dequeue req @%p\n", req);
/*
* We don't have to deal with "active" as all
* DMAs go to the EP buffers, not the request.
*/
ast_vhub_done(ep, req, -ECONNRESET);
/* We do stall the EP to clean things up in HW */
writel(VHUB_EP0_CTRL_STALL, ep->ep0.ctlstat);
ep->ep0.state = ep0_state_status;
ep->ep0.dir_in = false;
rc = 0;
}
spin_unlock_irqrestore(&vhub->lock, flags);
return rc;
}
static const struct usb_ep_ops ast_vhub_ep0_ops = {
.queue = ast_vhub_ep0_queue,
.dequeue = ast_vhub_ep0_dequeue,
.alloc_request = ast_vhub_alloc_request,
.free_request = ast_vhub_free_request,
};
void ast_vhub_init_ep0(struct ast_vhub *vhub, struct ast_vhub_ep *ep,
struct ast_vhub_dev *dev)
{
memset(ep, 0, sizeof(*ep));
INIT_LIST_HEAD(&ep->ep.ep_list);
INIT_LIST_HEAD(&ep->queue);
ep->ep.ops = &ast_vhub_ep0_ops;
ep->ep.name = "ep0";
ep->ep.caps.type_control = true;
usb_ep_set_maxpacket_limit(&ep->ep, AST_VHUB_EP0_MAX_PACKET);
ep->d_idx = 0;
ep->dev = dev;
ep->vhub = vhub;
ep->ep0.state = ep0_state_token;
INIT_LIST_HEAD(&ep->ep0.req.queue);
ep->ep0.req.internal = true;
/* Small difference between vHub and devices */
if (dev) {
ep->ep0.ctlstat = dev->regs + AST_VHUB_DEV_EP0_CTRL;
ep->ep0.setup = vhub->regs +
AST_VHUB_SETUP0 + 8 * (dev->index + 1);
ep->buf = vhub->ep0_bufs +
AST_VHUB_EP0_MAX_PACKET * (dev->index + 1);
ep->buf_dma = vhub->ep0_bufs_dma +
AST_VHUB_EP0_MAX_PACKET * (dev->index + 1);
} else {
ep->ep0.ctlstat = vhub->regs + AST_VHUB_EP0_CTRL;
ep->ep0.setup = vhub->regs + AST_VHUB_SETUP0;
ep->buf = vhub->ep0_bufs;
ep->buf_dma = vhub->ep0_bufs_dma;
}
}

// SPDX-License-Identifier: GPL-2.0+
/*
* aspeed-vhub -- Driver for Aspeed SoC "vHub" USB gadget
*
* epn.c - Generic endpoints management
*
* Copyright 2017 IBM Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/interrupt.h>
#include <linux/proc_fs.h>
#include <linux/prefetch.h>
#include <linux/clk.h>
#include <linux/usb/gadget.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/regmap.h>
#include <linux/dma-mapping.h>
#include "vhub.h"
#define EXTRA_CHECKS
#ifdef EXTRA_CHECKS
#define CHECK(ep, expr, fmt...) \
do { \
if (!(expr)) EPDBG(ep, "CHECK:" fmt); \
} while(0)
#else
#define CHECK(ep, expr, fmt...) do { } while(0)
#endif
static void ast_vhub_epn_kick(struct ast_vhub_ep *ep, struct ast_vhub_req *req)
{
unsigned int act = req->req.actual;
unsigned int len = req->req.length;
unsigned int chunk;
/* There should be no DMA ongoing */
WARN_ON(req->active);
/* Calculate next chunk size */
chunk = len - act;
if (chunk > ep->ep.maxpacket)
chunk = ep->ep.maxpacket;
else if ((chunk < ep->ep.maxpacket) || !req->req.zero)
req->last_desc = 1;
EPVDBG(ep, "kick req %p act=%d/%d chunk=%d last=%d\n",
req, act, len, chunk, req->last_desc);
/* If DMA unavailable, use the staging EP buffer */
if (!req->req.dma) {
/* For IN transfers, copy data over first */
if (ep->epn.is_in)
memcpy(ep->buf, req->req.buf + act, chunk);
writel(ep->buf_dma, ep->epn.regs + AST_VHUB_EP_DESC_BASE);
} else
writel(req->req.dma + act, ep->epn.regs + AST_VHUB_EP_DESC_BASE);
/* Start DMA */
req->active = true;
writel(VHUB_EP_DMA_SET_TX_SIZE(chunk),
ep->epn.regs + AST_VHUB_EP_DESC_STATUS);
writel(VHUB_EP_DMA_SET_TX_SIZE(chunk) | VHUB_EP_DMA_SINGLE_KICK,
ep->epn.regs + AST_VHUB_EP_DESC_STATUS);
}
static void ast_vhub_epn_handle_ack(struct ast_vhub_ep *ep)
{
struct ast_vhub_req *req;
unsigned int len;
u32 stat;
/* Read EP status */
stat = readl(ep->epn.regs + AST_VHUB_EP_DESC_STATUS);
/* Grab current request if any */
req = list_first_entry_or_null(&ep->queue, struct ast_vhub_req, queue);
EPVDBG(ep, "ACK status=%08x is_in=%d, req=%p (active=%d)\n",
stat, ep->epn.is_in, req, req ? req->active : 0);
/* In absence of a request, bail out, must have been dequeued */
if (!req)
return;
/*
* Request not active, move on to processing queue, active request
* was probably dequeued
*/
if (!req->active)
goto next_chunk;
/* Check if HW has moved on */
if (VHUB_EP_DMA_RPTR(stat) != 0) {
EPDBG(ep, "DMA read pointer not 0 !\n");
return;
}
/* No current DMA ongoing */
req->active = false;
/* Grab length out of HW */
len = VHUB_EP_DMA_TX_SIZE(stat);
/* If not using DMA, copy data out if needed */
if (!req->req.dma && !ep->epn.is_in && len)
memcpy(req->req.buf + req->req.actual, ep->buf, len);
/* Adjust size */
req->req.actual += len;
/* Check for short packet */
if (len < ep->ep.maxpacket)
req->last_desc = 1;
/* That's it ? complete the request and pick a new one */
if (req->last_desc >= 0) {
ast_vhub_done(ep, req, 0);
req = list_first_entry_or_null(&ep->queue, struct ast_vhub_req,
queue);
/*
* Due to lock dropping inside "done" the next request could
* already be active, so check for that and bail if needed.
*/
if (!req || req->active)
return;
}
next_chunk:
ast_vhub_epn_kick(ep, req);
}
static inline unsigned int ast_vhub_count_free_descs(struct ast_vhub_ep *ep)
{
/*
* d_next == d_last means descriptor list empty to HW,
* thus we can only have AST_VHUB_DESCS_COUNT-1 descriptors
* in the list
*/
return (ep->epn.d_last + AST_VHUB_DESCS_COUNT - ep->epn.d_next - 1) &
(AST_VHUB_DESCS_COUNT - 1);
}
static void ast_vhub_epn_kick_desc(struct ast_vhub_ep *ep,
struct ast_vhub_req *req)
{
unsigned int act = req->act_count;
unsigned int len = req->req.length;
unsigned int chunk;
/* Mark request active if not already */
req->active = true;
/* If the request was already completely written, do nothing */
if (req->last_desc >= 0)
return;
EPVDBG(ep, "kick act=%d/%d chunk_max=%d free_descs=%d\n",
act, len, ep->epn.chunk_max, ast_vhub_count_free_descs(ep));
/* While we can create descriptors */
while (ast_vhub_count_free_descs(ep) && req->last_desc < 0) {
struct ast_vhub_desc *desc;
unsigned int d_num;
/* Grab next free descriptor */
d_num = ep->epn.d_next;
desc = &ep->epn.descs[d_num];
ep->epn.d_next = (d_num + 1) & (AST_VHUB_DESCS_COUNT - 1);
/* Calculate next chunk size */
chunk = len - act;
if (chunk <= ep->epn.chunk_max) {
/*
* Is this the last packet ? Because of having up to 8
* packets in a descriptor we can't just compare "chunk"
* with ep.maxpacket. We have to see if it's a multiple
* of it to know if we have to send a zero packet.
* Sadly that involves a modulo which is a bit expensive
* but probably still better than not doing it.
*/
if (!chunk || !req->req.zero || (chunk % ep->ep.maxpacket) != 0)
req->last_desc = d_num;
} else {
chunk = ep->epn.chunk_max;
}
EPVDBG(ep, " chunk: act=%d/%d chunk=%d last=%d desc=%d free=%d\n",
act, len, chunk, req->last_desc, d_num,
ast_vhub_count_free_descs(ep));
/* Populate descriptor */
desc->w0 = cpu_to_le32(req->req.dma + act);
/* Interrupt if end of request or no more descriptors */
/*
* TODO: Be smarter about it, if we don't have enough
* descriptors request an interrupt before queue empty
* or so in order to be able to populate more before
* the HW runs out. This isn't a problem at the moment
* as we use 256 descriptors and only put at most one
* request in the ring.
*/
desc->w1 = cpu_to_le32(VHUB_DSC1_IN_SET_LEN(chunk));
if (req->last_desc >= 0 || !ast_vhub_count_free_descs(ep))
desc->w1 |= cpu_to_le32(VHUB_DSC1_IN_INTERRUPT);
/* Account packet */
req->act_count = act = act + chunk;
}
/* Tell HW about new descriptors */
writel(VHUB_EP_DMA_SET_CPU_WPTR(ep->epn.d_next),
ep->epn.regs + AST_VHUB_EP_DESC_STATUS);
EPVDBG(ep, "HW kicked, d_next=%d dstat=%08x\n",
ep->epn.d_next, readl(ep->epn.regs + AST_VHUB_EP_DESC_STATUS));
}
static void ast_vhub_epn_handle_ack_desc(struct ast_vhub_ep *ep)
{
struct ast_vhub_req *req;
unsigned int len, d_last;
u32 stat, stat1;
/* Read EP status, workaround HW race */
do {
stat = readl(ep->epn.regs + AST_VHUB_EP_DESC_STATUS);
stat1 = readl(ep->epn.regs + AST_VHUB_EP_DESC_STATUS);
} while(stat != stat1);
/* Extract RPTR */
d_last = VHUB_EP_DMA_RPTR(stat);
/* Grab current request if any */
req = list_first_entry_or_null(&ep->queue, struct ast_vhub_req, queue);
EPVDBG(ep, "ACK status=%08x is_in=%d ep->d_last=%d..%d\n",
stat, ep->epn.is_in, ep->epn.d_last, d_last);
/* Check all completed descriptors */
while (ep->epn.d_last != d_last) {
struct ast_vhub_desc *desc;
unsigned int d_num;
bool is_last_desc;
/* Grab next completed descriptor */
d_num = ep->epn.d_last;
desc = &ep->epn.descs[d_num];
ep->epn.d_last = (d_num + 1) & (AST_VHUB_DESCS_COUNT - 1);
/* Grab len out of descriptor */
len = VHUB_DSC1_IN_LEN(le32_to_cpu(desc->w1));
EPVDBG(ep, " desc %d len=%d req=%p (act=%d)\n",
d_num, len, req, req ? req->active : 0);
/* If no active request pending, move on */
if (!req || !req->active)
continue;
/* Adjust size */
req->req.actual += len;
/* Is that the last chunk ? */
is_last_desc = req->last_desc == d_num;
CHECK(ep, is_last_desc == (len < ep->ep.maxpacket ||
(req->req.actual >= req->req.length &&
!req->req.zero)),
"Last packet discrepancy: last_desc=%d len=%d r.act=%d "
"r.len=%d r.zero=%d mp=%d\n",
is_last_desc, len, req->req.actual, req->req.length,
req->req.zero, ep->ep.maxpacket);
if (is_last_desc) {
/*
* Because we can only have one request at a time
* in our descriptor list in this implementation,
* d_last and ep->d_last should now be equal
*/
CHECK(ep, d_last == ep->epn.d_last,
"DMA read ptr mismatch %d vs %d\n",
d_last, ep->epn.d_last);
/* Note: done will drop and re-acquire the lock */
ast_vhub_done(ep, req, 0);
req = list_first_entry_or_null(&ep->queue,
struct ast_vhub_req,
queue);
break;
}
}
/* More work ? */
if (req)
ast_vhub_epn_kick_desc(ep, req);
}
void ast_vhub_epn_ack_irq(struct ast_vhub_ep *ep)
{
if (ep->epn.desc_mode)
ast_vhub_epn_handle_ack_desc(ep);
else
ast_vhub_epn_handle_ack(ep);
}
static int ast_vhub_epn_queue(struct usb_ep *u_ep, struct usb_request *u_req,
gfp_t gfp_flags)
{
struct ast_vhub_req *req = to_ast_req(u_req);
struct ast_vhub_ep *ep = to_ast_ep(u_ep);
struct ast_vhub *vhub = ep->vhub;
unsigned long flags;
bool empty;
int rc;
/* Paranoid checks */
if (!u_req || !u_req->complete || !u_req->buf) {
dev_warn(&vhub->pdev->dev, "Bogus EPn request ! u_req=%p\n", u_req);
if (u_req) {
dev_warn(&vhub->pdev->dev, "complete=%p internal=%d\n",
u_req->complete, req->internal);
}
return -EINVAL;
}
/* Endpoint enabled ? */
if (!ep->epn.enabled || !u_ep->desc || !ep->dev || !ep->d_idx ||
!ep->dev->enabled || ep->dev->suspended) {
EPDBG(ep, "Enqueuing request on wrong or disabled EP\n");
return -ESHUTDOWN;
}
/* Map request for DMA if possible. For now, the rule for DMA is
* that:
*
* * For single stage mode (no descriptors):
*
* - The buffer is aligned to a 8 bytes boundary (HW requirement)
* - For an OUT endpoint, the request size is a multiple of the EP
* packet size (otherwise the controller will DMA past the end
* of the buffer if the host sends too long a packet).
*
* * For descriptor mode (tx only for now), always.
*
* We could relax the latter by making the decision to use the bounce
* buffer based on the size of a given *segment* of the request rather
* than the whole request.
*/
if (ep->epn.desc_mode ||
((((unsigned long)u_req->buf & 7) == 0) &&
(ep->epn.is_in || !(u_req->length & (u_ep->maxpacket - 1))))) {
rc = usb_gadget_map_request(&ep->dev->gadget, u_req,
ep->epn.is_in);
if (rc) {
dev_warn(&vhub->pdev->dev,
"Request mapping failure %d\n", rc);
return rc;
}
} else
u_req->dma = 0;
EPVDBG(ep, "enqueue req @%p\n", req);
EPVDBG(ep, " l=%d dma=0x%x zero=%d noshort=%d noirq=%d is_in=%d\n",
u_req->length, (u32)u_req->dma, u_req->zero,
u_req->short_not_ok, u_req->no_interrupt,
ep->epn.is_in);
/* Initialize request progress fields */
u_req->status = -EINPROGRESS;
u_req->actual = 0;
req->act_count = 0;
req->active = false;
req->last_desc = -1;
spin_lock_irqsave(&vhub->lock, flags);
empty = list_empty(&ep->queue);
/* Add request to list and kick processing if empty */
list_add_tail(&req->queue, &ep->queue);
if (empty) {
if (ep->epn.desc_mode)
ast_vhub_epn_kick_desc(ep, req);
else
ast_vhub_epn_kick(ep, req);
}
spin_unlock_irqrestore(&vhub->lock, flags);
return 0;
}
static void ast_vhub_stop_active_req(struct ast_vhub_ep *ep,
bool restart_ep)
{
u32 state, reg, loops;
/* Stop DMA activity */
writel(0, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
/* Wait for it to complete */
for (loops = 0; loops < 1000; loops++) {
state = readl(ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
state = VHUB_EP_DMA_PROC_STATUS(state);
if (state == EP_DMA_PROC_RX_IDLE ||
state == EP_DMA_PROC_TX_IDLE)
break;
udelay(1);
}
if (loops >= 1000)
dev_warn(&ep->vhub->pdev->dev, "Timeout waiting for DMA\n");
/* If we don't have to restart the endpoint, that's it */
if (!restart_ep)
return;
/* Restart the endpoint */
if (ep->epn.desc_mode) {
/*
* Take out descriptors by resetting the DMA read
* pointer to be equal to the CPU write pointer.
*
* Note: If we ever support creating descriptors for
* requests that aren't the head of the queue, we
* may have to do something more complex here,
* especially if the request being taken out is
* not the current head descriptors.
*/
reg = VHUB_EP_DMA_SET_RPTR(ep->epn.d_next) |
VHUB_EP_DMA_SET_CPU_WPTR(ep->epn.d_next);
writel(reg, ep->epn.regs + AST_VHUB_EP_DESC_STATUS);
/* Then turn it back on */
writel(ep->epn.dma_conf,
ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
} else {
/* Single mode: just turn it back on */
writel(ep->epn.dma_conf,
ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
}
}
static int ast_vhub_epn_dequeue(struct usb_ep *u_ep, struct usb_request *u_req)
{
struct ast_vhub_ep *ep = to_ast_ep(u_ep);
struct ast_vhub *vhub = ep->vhub;
struct ast_vhub_req *req;
unsigned long flags;
int rc = -EINVAL;
spin_lock_irqsave(&vhub->lock, flags);
/* Make sure it's actually queued on this endpoint */
list_for_each_entry(req, &ep->queue, queue) {
if (&req->req == u_req)
break;
}
if (&req->req == u_req) {
EPVDBG(ep, "dequeue req @%p active=%d\n",
req, req->active);
if (req->active)
ast_vhub_stop_active_req(ep, true);
ast_vhub_done(ep, req, -ECONNRESET);
rc = 0;
}
spin_unlock_irqrestore(&vhub->lock, flags);
return rc;
}
void ast_vhub_update_epn_stall(struct ast_vhub_ep *ep)
{
u32 reg;
if (WARN_ON(ep->d_idx == 0))
return;
reg = readl(ep->epn.regs + AST_VHUB_EP_CONFIG);
if (ep->epn.stalled || ep->epn.wedged)
reg |= VHUB_EP_CFG_STALL_CTRL;
else
reg &= ~VHUB_EP_CFG_STALL_CTRL;
writel(reg, ep->epn.regs + AST_VHUB_EP_CONFIG);
if (!ep->epn.stalled && !ep->epn.wedged)
writel(VHUB_EP_TOGGLE_SET_EPNUM(ep->epn.g_idx),
ep->vhub->regs + AST_VHUB_EP_TOGGLE);
}
static int ast_vhub_set_halt_and_wedge(struct usb_ep *u_ep, bool halt,
bool wedge)
{
struct ast_vhub_ep *ep = to_ast_ep(u_ep);
struct ast_vhub *vhub = ep->vhub;
unsigned long flags;
EPDBG(ep, "Set halt (%d) & wedge (%d)\n", halt, wedge);
if (!u_ep || !u_ep->desc)
return -EINVAL;
if (ep->d_idx == 0)
return 0;
if (ep->epn.is_iso)
return -EOPNOTSUPP;
spin_lock_irqsave(&vhub->lock, flags);
/* Fail with still-busy IN endpoints */
if (halt && ep->epn.is_in && !list_empty(&ep->queue)) {
spin_unlock_irqrestore(&vhub->lock, flags);
return -EAGAIN;
}
ep->epn.stalled = halt;
ep->epn.wedged = wedge;
ast_vhub_update_epn_stall(ep);
spin_unlock_irqrestore(&vhub->lock, flags);
return 0;
}
static int ast_vhub_epn_set_halt(struct usb_ep *u_ep, int value)
{
return ast_vhub_set_halt_and_wedge(u_ep, value != 0, false);
}
static int ast_vhub_epn_set_wedge(struct usb_ep *u_ep)
{
return ast_vhub_set_halt_and_wedge(u_ep, true, true);
}
static int ast_vhub_epn_disable(struct usb_ep *u_ep)
{
struct ast_vhub_ep *ep = to_ast_ep(u_ep);
struct ast_vhub *vhub = ep->vhub;
unsigned long flags;
u32 imask, ep_ier;
EPDBG(ep, "Disabling !\n");
spin_lock_irqsave(&vhub->lock, flags);
ep->epn.enabled = false;
/* Stop active DMA if any */
ast_vhub_stop_active_req(ep, false);
/* Disable endpoint */
writel(0, ep->epn.regs + AST_VHUB_EP_CONFIG);
/* Disable ACK interrupt */
imask = VHUB_EP_IRQ(ep->epn.g_idx);
ep_ier = readl(vhub->regs + AST_VHUB_EP_ACK_IER);
ep_ier &= ~imask;
writel(ep_ier, vhub->regs + AST_VHUB_EP_ACK_IER);
writel(imask, vhub->regs + AST_VHUB_EP_ACK_ISR);
/* Nuke all pending requests */
ast_vhub_nuke(ep, -ESHUTDOWN);
/* No more descriptor associated with request */
ep->ep.desc = NULL;
spin_unlock_irqrestore(&vhub->lock, flags);
return 0;
}
static int ast_vhub_epn_enable(struct usb_ep *u_ep,
const struct usb_endpoint_descriptor *desc)
{
static const char *ep_type_string[] __maybe_unused = { "ctrl",
"isoc",
"bulk",
"intr" };
struct ast_vhub_ep *ep = to_ast_ep(u_ep);
struct ast_vhub_dev *dev;
struct ast_vhub *vhub;
u16 maxpacket, type;
unsigned long flags;
u32 ep_conf, ep_ier, imask;
/* Check arguments */
if (!u_ep || !desc)
return -EINVAL;
maxpacket = usb_endpoint_maxp(desc);
if (!ep->d_idx || !ep->dev ||
desc->bDescriptorType != USB_DT_ENDPOINT ||
maxpacket == 0 || maxpacket > ep->ep.maxpacket) {
EPDBG(ep, "Invalid EP enable,d_idx=%d,dev=%p,type=%d,mp=%d/%d\n",
ep->d_idx, ep->dev, desc->bDescriptorType,
maxpacket, ep->ep.maxpacket);
return -EINVAL;
}
if (ep->d_idx != usb_endpoint_num(desc)) {
EPDBG(ep, "EP number mismatch !\n");
return -EINVAL;
}
if (ep->epn.enabled) {
EPDBG(ep, "Already enabled\n");
return -EBUSY;
}
dev = ep->dev;
vhub = ep->vhub;
/* Check device state */
if (!dev->driver) {
EPDBG(ep, "Bogus device state: driver=%p speed=%d\n",
dev->driver, dev->gadget.speed);
return -ESHUTDOWN;
}
/* Grab some info from the descriptor */
ep->epn.is_in = usb_endpoint_dir_in(desc);
ep->ep.maxpacket = maxpacket;
type = usb_endpoint_type(desc);
ep->epn.d_next = ep->epn.d_last = 0;
ep->epn.is_iso = false;
ep->epn.stalled = false;
ep->epn.wedged = false;
EPDBG(ep, "Enabling [%s] %s num %d maxpacket=%d\n",
ep->epn.is_in ? "in" : "out", ep_type_string[type],
usb_endpoint_num(desc), maxpacket);
/* Can we use DMA descriptor mode ? */
ep->epn.desc_mode = ep->epn.descs && ep->epn.is_in;
if (ep->epn.desc_mode)
memset(ep->epn.descs, 0, 8 * AST_VHUB_DESCS_COUNT);
/*
* Large send function can send up to 8 packets from
* one descriptor with a limit of 4095 bytes.
*/
ep->epn.chunk_max = ep->ep.maxpacket;
if (ep->epn.is_in) {
ep->epn.chunk_max <<= 3;
while (ep->epn.chunk_max > 4095)
ep->epn.chunk_max -= ep->ep.maxpacket;
}
switch (type) {
case USB_ENDPOINT_XFER_CONTROL:
EPDBG(ep, "Only one control endpoint\n");
return -EINVAL;
case USB_ENDPOINT_XFER_INT:
ep_conf = VHUB_EP_CFG_SET_TYPE(EP_TYPE_INT);
break;
case USB_ENDPOINT_XFER_BULK:
ep_conf = VHUB_EP_CFG_SET_TYPE(EP_TYPE_BULK);
break;
case USB_ENDPOINT_XFER_ISOC:
ep_conf = VHUB_EP_CFG_SET_TYPE(EP_TYPE_ISO);
ep->epn.is_iso = true;
break;
default:
return -EINVAL;
}
/* Encode the rest of the EP config register */
if (maxpacket < 1024)
ep_conf |= VHUB_EP_CFG_SET_MAX_PKT(maxpacket);
if (!ep->epn.is_in)
ep_conf |= VHUB_EP_CFG_DIR_OUT;
ep_conf |= VHUB_EP_CFG_SET_EP_NUM(usb_endpoint_num(desc));
ep_conf |= VHUB_EP_CFG_ENABLE;
ep_conf |= VHUB_EP_CFG_SET_DEV(dev->index + 1);
EPVDBG(ep, "config=%08x\n", ep_conf);
spin_lock_irqsave(&vhub->lock, flags);
/* Disable HW and reset DMA */
writel(0, ep->epn.regs + AST_VHUB_EP_CONFIG);
writel(VHUB_EP_DMA_CTRL_RESET,
ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
/* Configure and enable */
writel(ep_conf, ep->epn.regs + AST_VHUB_EP_CONFIG);
if (ep->epn.desc_mode) {
/* Clear DMA status, including the DMA read ptr */
writel(0, ep->epn.regs + AST_VHUB_EP_DESC_STATUS);
/* Set descriptor base */
writel(ep->epn.descs_dma,
ep->epn.regs + AST_VHUB_EP_DESC_BASE);
/* Set base DMA config value */
ep->epn.dma_conf = VHUB_EP_DMA_DESC_MODE;
if (ep->epn.is_in)
ep->epn.dma_conf |= VHUB_EP_DMA_IN_LONG_MODE;
/* First reset and disable all operations */
writel(ep->epn.dma_conf | VHUB_EP_DMA_CTRL_RESET,
ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
/* Enable descriptor mode */
writel(ep->epn.dma_conf,
ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
} else {
/* Set base DMA config value */
ep->epn.dma_conf = VHUB_EP_DMA_SINGLE_STAGE;
/* Reset and switch to single stage mode */
writel(ep->epn.dma_conf | VHUB_EP_DMA_CTRL_RESET,
ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
writel(ep->epn.dma_conf,
ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
writel(0, ep->epn.regs + AST_VHUB_EP_DESC_STATUS);
}
/* Cleanup data toggle just in case */
writel(VHUB_EP_TOGGLE_SET_EPNUM(ep->epn.g_idx),
vhub->regs + AST_VHUB_EP_TOGGLE);
/* Cleanup and enable ACK interrupt */
imask = VHUB_EP_IRQ(ep->epn.g_idx);
writel(imask, vhub->regs + AST_VHUB_EP_ACK_ISR);
ep_ier = readl(vhub->regs + AST_VHUB_EP_ACK_IER);
ep_ier |= imask;
writel(ep_ier, vhub->regs + AST_VHUB_EP_ACK_IER);
/* Woot, we are online ! */
ep->epn.enabled = true;
spin_unlock_irqrestore(&vhub->lock, flags);
return 0;
}
static void ast_vhub_epn_dispose(struct usb_ep *u_ep)
{
struct ast_vhub_ep *ep = to_ast_ep(u_ep);
if (WARN_ON(!ep->dev || !ep->d_idx))
return;
EPDBG(ep, "Releasing endpoint\n");
/* Take it out of the EP list */
list_del_init(&ep->ep.ep_list);
/* Mark the address free in the device */
ep->dev->epns[ep->d_idx - 1] = NULL;
/* Free name & DMA buffers */
kfree(ep->ep.name);
ep->ep.name = NULL;
dma_free_coherent(&ep->vhub->pdev->dev,
AST_VHUB_EPn_MAX_PACKET +
8 * AST_VHUB_DESCS_COUNT,
ep->buf, ep->buf_dma);
ep->buf = NULL;
ep->epn.descs = NULL;
/* Mark free */
ep->dev = NULL;
}
static const struct usb_ep_ops ast_vhub_epn_ops = {
.enable = ast_vhub_epn_enable,
.disable = ast_vhub_epn_disable,
.dispose = ast_vhub_epn_dispose,
.queue = ast_vhub_epn_queue,
.dequeue = ast_vhub_epn_dequeue,
.set_halt = ast_vhub_epn_set_halt,
.set_wedge = ast_vhub_epn_set_wedge,
.alloc_request = ast_vhub_alloc_request,
.free_request = ast_vhub_free_request,
};
struct ast_vhub_ep *ast_vhub_alloc_epn(struct ast_vhub_dev *d, u8 addr)
{
struct ast_vhub *vhub = d->vhub;
struct ast_vhub_ep *ep;
unsigned long flags;
int i;
/* Find a free one (no device) */
spin_lock_irqsave(&vhub->lock, flags);
for (i = 0; i < AST_VHUB_NUM_GEN_EPs; i++)
if (vhub->epns[i].dev == NULL)
break;
if (i >= AST_VHUB_NUM_GEN_EPs) {
spin_unlock_irqrestore(&vhub->lock, flags);
return NULL;
}
/* Set it up */
ep = &vhub->epns[i];
ep->dev = d;
spin_unlock_irqrestore(&vhub->lock, flags);
DDBG(d, "Allocating gen EP %d for addr %d\n", i, addr);
INIT_LIST_HEAD(&ep->queue);
ep->d_idx = addr;
ep->vhub = vhub;
ep->ep.ops = &ast_vhub_epn_ops;
ep->ep.name = kasprintf(GFP_KERNEL, "ep%d", addr);
d->epns[addr-1] = ep;
ep->epn.g_idx = i;
ep->epn.regs = vhub->regs + 0x200 + (i * 0x10);
ep->buf = dma_alloc_coherent(&vhub->pdev->dev,
AST_VHUB_EPn_MAX_PACKET +
8 * AST_VHUB_DESCS_COUNT,
&ep->buf_dma, GFP_KERNEL);
if (!ep->buf) {
kfree(ep->ep.name);
ep->ep.name = NULL;
return NULL;
}
ep->epn.descs = ep->buf + AST_VHUB_EPn_MAX_PACKET;
ep->epn.descs_dma = ep->buf_dma + AST_VHUB_EPn_MAX_PACKET;
usb_ep_set_maxpacket_limit(&ep->ep, AST_VHUB_EPn_MAX_PACKET);
list_add_tail(&ep->ep.ep_list, &d->gadget.ep_list);
ep->ep.caps.type_iso = true;
ep->ep.caps.type_bulk = true;
ep->ep.caps.type_int = true;
ep->ep.caps.dir_in = true;
ep->ep.caps.dir_out = true;
return ep;
}

// SPDX-License-Identifier: GPL-2.0+
/*
* aspeed-vhub -- Driver for Aspeed SoC "vHub" USB gadget
*
* hub.c - virtual hub handling
*
* Copyright 2017 IBM Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/interrupt.h>
#include <linux/proc_fs.h>
#include <linux/prefetch.h>
#include <linux/clk.h>
#include <linux/usb/gadget.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/regmap.h>
#include <linux/dma-mapping.h>
#include <linux/bcd.h>
#include <linux/version.h>
#include <linux/usb.h>
#include <linux/usb/hcd.h>
#include "vhub.h"
/* usb 2.0 hub device descriptor
*
* A few things we may want to improve here:
*
* - We may need to indicate TT support
* - We may need a device qualifier descriptor
* as devices can pretend to be usb1 or 2
* - Make vid/did overridable
* - make it look like usb1 if usb1 mode forced
*/
#define KERNEL_REL bin2bcd(((LINUX_VERSION_CODE >> 16) & 0x0ff))
#define KERNEL_VER bin2bcd(((LINUX_VERSION_CODE >> 8) & 0x0ff))
enum {
AST_VHUB_STR_MANUF = 3,
AST_VHUB_STR_PRODUCT = 2,
AST_VHUB_STR_SERIAL = 1,
};
static const struct usb_device_descriptor ast_vhub_dev_desc = {
.bLength = USB_DT_DEVICE_SIZE,
.bDescriptorType = USB_DT_DEVICE,
.bcdUSB = cpu_to_le16(0x0200),
.bDeviceClass = USB_CLASS_HUB,
.bDeviceSubClass = 0,
.bDeviceProtocol = 1,
.bMaxPacketSize0 = 64,
.idVendor = cpu_to_le16(0x1d6b),
.idProduct = cpu_to_le16(0x0107),
.bcdDevice = cpu_to_le16(0x0100),
.iManufacturer = AST_VHUB_STR_MANUF,
.iProduct = AST_VHUB_STR_PRODUCT,
.iSerialNumber = AST_VHUB_STR_SERIAL,
.bNumConfigurations = 1,
};
/* Patches to the above when forcing USB1 mode */
static void ast_vhub_patch_dev_desc_usb1(struct usb_device_descriptor *desc)
{
desc->bcdUSB = cpu_to_le16(0x0100);
desc->bDeviceProtocol = 0;
}
/*
* Configuration descriptor: same comments as above
* regarding handling USB1 mode.
*/
/*
* We don't use sizeof() as Linux definition of
* struct usb_endpoint_descriptor contains 2
* extra bytes
*/
#define AST_VHUB_CONF_DESC_SIZE (USB_DT_CONFIG_SIZE + \
USB_DT_INTERFACE_SIZE + \
USB_DT_ENDPOINT_SIZE)
static const struct ast_vhub_full_cdesc {
struct usb_config_descriptor cfg;
struct usb_interface_descriptor intf;
struct usb_endpoint_descriptor ep;
} __attribute__ ((packed)) ast_vhub_conf_desc = {
.cfg = {
.bLength = USB_DT_CONFIG_SIZE,
.bDescriptorType = USB_DT_CONFIG,
.wTotalLength = cpu_to_le16(AST_VHUB_CONF_DESC_SIZE),
.bNumInterfaces = 1,
.bConfigurationValue = 1,
.iConfiguration = 0,
.bmAttributes = USB_CONFIG_ATT_ONE |
USB_CONFIG_ATT_SELFPOWER |
USB_CONFIG_ATT_WAKEUP,
.bMaxPower = 0,
},
.intf = {
.bLength = USB_DT_INTERFACE_SIZE,
.bDescriptorType = USB_DT_INTERFACE,
.bInterfaceNumber = 0,
.bAlternateSetting = 0,
.bNumEndpoints = 1,
.bInterfaceClass = USB_CLASS_HUB,
.bInterfaceSubClass = 0,
.bInterfaceProtocol = 0,
.iInterface = 0,
},
.ep = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
.bEndpointAddress = 0x81,
.bmAttributes = USB_ENDPOINT_XFER_INT,
.wMaxPacketSize = cpu_to_le16(1),
.bInterval = 0x0c,
},
};
#define AST_VHUB_HUB_DESC_SIZE (USB_DT_HUB_NONVAR_SIZE + 2)
static const struct usb_hub_descriptor ast_vhub_hub_desc = {
.bDescLength = AST_VHUB_HUB_DESC_SIZE,
.bDescriptorType = USB_DT_HUB,
.bNbrPorts = AST_VHUB_NUM_PORTS,
.wHubCharacteristics = cpu_to_le16(HUB_CHAR_NO_LPSM),
.bPwrOn2PwrGood = 10,
.bHubContrCurrent = 0,
.u.hs.DeviceRemovable[0] = 0,
.u.hs.DeviceRemovable[1] = 0xff,
};
/*
* These strings converted to UTF-16 must be smaller than
* our EP0 buffer.
*/
static const struct usb_string ast_vhub_str_array[] = {
{
.id = AST_VHUB_STR_SERIAL,
.s = "00000000"
},
{
.id = AST_VHUB_STR_PRODUCT,
.s = "USB Virtual Hub"
},
{
.id = AST_VHUB_STR_MANUF,
.s = "Aspeed"
},
{ }
};
static const struct usb_gadget_strings ast_vhub_strings = {
.language = 0x0409,
.strings = (struct usb_string *)ast_vhub_str_array
};
static int ast_vhub_hub_dev_status(struct ast_vhub_ep *ep,
u16 wIndex, u16 wValue)
{
u8 st0;
EPDBG(ep, "GET_STATUS(dev)\n");
/*
* Mark it as self-powered, I doubt the BMC is powered off
* the USB bus ...
*/
st0 = 1 << USB_DEVICE_SELF_POWERED;
/*
* Need to double check how remote wakeup actually works
* on that chip and what triggers it.
*/
if (ep->vhub->wakeup_en)
st0 |= 1 << USB_DEVICE_REMOTE_WAKEUP;
return ast_vhub_simple_reply(ep, st0, 0);
}
static int ast_vhub_hub_ep_status(struct ast_vhub_ep *ep,
u16 wIndex, u16 wValue)
{
int ep_num;
u8 st0 = 0;
ep_num = wIndex & USB_ENDPOINT_NUMBER_MASK;
EPDBG(ep, "GET_STATUS(ep%d)\n", ep_num);
/* On the hub we have only EP 0 and 1 */
if (ep_num == 1) {
if (ep->vhub->ep1_stalled)
st0 |= 1 << USB_ENDPOINT_HALT;
} else if (ep_num != 0)
return std_req_stall;
return ast_vhub_simple_reply(ep, st0, 0);
}
static int ast_vhub_hub_dev_feature(struct ast_vhub_ep *ep,
u16 wIndex, u16 wValue,
bool is_set)
{
EPDBG(ep, "%s_FEATURE(dev val=%02x)\n",
is_set ? "SET" : "CLEAR", wValue);
if (wValue != USB_DEVICE_REMOTE_WAKEUP)
return std_req_stall;
ep->vhub->wakeup_en = is_set;
EPDBG(ep, "Hub remote wakeup %s\n",
is_set ? "enabled" : "disabled");
return std_req_complete;
}
static int ast_vhub_hub_ep_feature(struct ast_vhub_ep *ep,
u16 wIndex, u16 wValue,
bool is_set)
{
int ep_num;
u32 reg;
ep_num = wIndex & USB_ENDPOINT_NUMBER_MASK;
EPDBG(ep, "%s_FEATURE(ep%d val=%02x)\n",
is_set ? "SET" : "CLEAR", ep_num, wValue);
if (ep_num > 1)
return std_req_stall;
if (wValue != USB_ENDPOINT_HALT)
return std_req_stall;
if (ep_num == 0)
return std_req_complete;
EPDBG(ep, "%s stall on EP 1\n",
is_set ? "setting" : "clearing");
ep->vhub->ep1_stalled = is_set;
reg = readl(ep->vhub->regs + AST_VHUB_EP1_CTRL);
if (is_set) {
reg |= VHUB_EP1_CTRL_STALL;
} else {
reg &= ~VHUB_EP1_CTRL_STALL;
reg |= VHUB_EP1_CTRL_RESET_TOGGLE;
}
writel(reg, ep->vhub->regs + AST_VHUB_EP1_CTRL);
return std_req_complete;
}
static int ast_vhub_rep_desc(struct ast_vhub_ep *ep,
u8 desc_type, u16 len)
{
size_t dsize;
EPDBG(ep, "GET_DESCRIPTOR(type:%d)\n", desc_type);
/*
* Copy first to EP buffer and send from there, so
* we can do some in-place patching if needed. We know
* the EP buffer is big enough but ensure that doesn't
* change. We do that now rather than later after we
* have checked sizes etc... to avoid a gcc bug where
* it thinks len is constant and barfs about read
* overflows in memcpy.
*/
switch (desc_type) {
case USB_DT_DEVICE:
dsize = USB_DT_DEVICE_SIZE;
memcpy(ep->buf, &ast_vhub_dev_desc, dsize);
BUILD_BUG_ON(dsize > sizeof(ast_vhub_dev_desc));
BUILD_BUG_ON(USB_DT_DEVICE_SIZE >= AST_VHUB_EP0_MAX_PACKET);
break;
case USB_DT_CONFIG:
dsize = AST_VHUB_CONF_DESC_SIZE;
memcpy(ep->buf, &ast_vhub_conf_desc, dsize);
BUILD_BUG_ON(dsize > sizeof(ast_vhub_conf_desc));
BUILD_BUG_ON(AST_VHUB_CONF_DESC_SIZE >= AST_VHUB_EP0_MAX_PACKET);
break;
case USB_DT_HUB:
dsize = AST_VHUB_HUB_DESC_SIZE;
memcpy(ep->buf, &ast_vhub_hub_desc, dsize);
BUILD_BUG_ON(dsize > sizeof(ast_vhub_hub_desc));
BUILD_BUG_ON(AST_VHUB_HUB_DESC_SIZE >= AST_VHUB_EP0_MAX_PACKET);
break;
default:
return std_req_stall;
}
/* Crop requested length */
if (len > dsize)
len = dsize;
/* Patch it if forcing USB1 */
if (desc_type == USB_DT_DEVICE && ep->vhub->force_usb1)
ast_vhub_patch_dev_desc_usb1(ep->buf);
/* Shoot it from the EP buffer */
return ast_vhub_reply(ep, NULL, len);
}
static int ast_vhub_rep_string(struct ast_vhub_ep *ep,
u8 string_id, u16 lang_id,
u16 len)
{
int rc = usb_gadget_get_string(&ast_vhub_strings, string_id, ep->buf);
/*
* This should never happen unless we put too big strings in
* the array above
*/
BUG_ON(rc >= AST_VHUB_EP0_MAX_PACKET);
if (rc < 0)
return std_req_stall;
/* Shoot it from the EP buffer */
return ast_vhub_reply(ep, NULL, min_t(u16, rc, len));
}
enum std_req_rc ast_vhub_std_hub_request(struct ast_vhub_ep *ep,
struct usb_ctrlrequest *crq)
{
struct ast_vhub *vhub = ep->vhub;
u16 wValue, wIndex, wLength;
wValue = le16_to_cpu(crq->wValue);
wIndex = le16_to_cpu(crq->wIndex);
wLength = le16_to_cpu(crq->wLength);
/* First packet, grab speed */
if (vhub->speed == USB_SPEED_UNKNOWN) {
u32 ustat = readl(vhub->regs + AST_VHUB_USBSTS);
if (ustat & VHUB_USBSTS_HISPEED)
vhub->speed = USB_SPEED_HIGH;
else
vhub->speed = USB_SPEED_FULL;
UDCDBG(vhub, "USB status=%08x speed=%s\n", ustat,
vhub->speed == USB_SPEED_HIGH ? "high" : "full");
}
switch ((crq->bRequestType << 8) | crq->bRequest) {
/* SET_ADDRESS */
case DeviceOutRequest | USB_REQ_SET_ADDRESS:
EPDBG(ep, "SET_ADDRESS: Got address %x\n", wValue);
writel(wValue, vhub->regs + AST_VHUB_CONF);
return std_req_complete;
/* GET_STATUS */
case DeviceRequest | USB_REQ_GET_STATUS:
return ast_vhub_hub_dev_status(ep, wIndex, wValue);
case InterfaceRequest | USB_REQ_GET_STATUS:
return ast_vhub_simple_reply(ep, 0, 0);
case EndpointRequest | USB_REQ_GET_STATUS:
return ast_vhub_hub_ep_status(ep, wIndex, wValue);
/* SET/CLEAR_FEATURE */
case DeviceOutRequest | USB_REQ_SET_FEATURE:
return ast_vhub_hub_dev_feature(ep, wIndex, wValue, true);
case DeviceOutRequest | USB_REQ_CLEAR_FEATURE:
return ast_vhub_hub_dev_feature(ep, wIndex, wValue, false);
case EndpointOutRequest | USB_REQ_SET_FEATURE:
return ast_vhub_hub_ep_feature(ep, wIndex, wValue, true);
case EndpointOutRequest | USB_REQ_CLEAR_FEATURE:
return ast_vhub_hub_ep_feature(ep, wIndex, wValue, false);
/* GET/SET_CONFIGURATION */
case DeviceRequest | USB_REQ_GET_CONFIGURATION:
return ast_vhub_simple_reply(ep, 1);
case DeviceOutRequest | USB_REQ_SET_CONFIGURATION:
if (wValue != 1)
return std_req_stall;
return std_req_complete;
/* GET_DESCRIPTOR */
case DeviceRequest | USB_REQ_GET_DESCRIPTOR:
switch (wValue >> 8) {
case USB_DT_DEVICE:
case USB_DT_CONFIG:
return ast_vhub_rep_desc(ep, wValue >> 8,
wLength);
case USB_DT_STRING:
return ast_vhub_rep_string(ep, wValue & 0xff,
wIndex, wLength);
}
return std_req_stall;
/* GET/SET_INTERFACE */
case DeviceRequest | USB_REQ_GET_INTERFACE:
return ast_vhub_simple_reply(ep, 0);
case DeviceOutRequest | USB_REQ_SET_INTERFACE:
if (wValue != 0 || wIndex != 0)
return std_req_stall;
return std_req_complete;
}
return std_req_stall;
}
static void ast_vhub_update_hub_ep1(struct ast_vhub *vhub,
unsigned int port)
{
/* Update HW EP1 response */
u32 reg = readl(vhub->regs + AST_VHUB_EP1_STS_CHG);
u32 pmask = (1 << (port + 1));
if (vhub->ports[port].change)
reg |= pmask;
else
reg &= ~pmask;
writel(reg, vhub->regs + AST_VHUB_EP1_STS_CHG);
}
static void ast_vhub_change_port_stat(struct ast_vhub *vhub,
unsigned int port,
u16 clr_flags,
u16 set_flags,
bool set_c)
{
struct ast_vhub_port *p = &vhub->ports[port];
u16 prev;
/* Update port status */
prev = p->status;
p->status = (prev & ~clr_flags) | set_flags;
DDBG(&p->dev, "port %d status %04x -> %04x (C=%d)\n",
port + 1, prev, p->status, set_c);
/* Update change bits if needed */
if (set_c) {
u16 chg = p->status ^ prev;
/* Only these are relevant for change */
chg &= USB_PORT_STAT_C_CONNECTION |
USB_PORT_STAT_C_ENABLE |
USB_PORT_STAT_C_SUSPEND |
USB_PORT_STAT_C_OVERCURRENT |
USB_PORT_STAT_C_RESET |
USB_PORT_STAT_C_L1;
p->change |= chg;
ast_vhub_update_hub_ep1(vhub, port);
}
}
static void ast_vhub_send_host_wakeup(struct ast_vhub *vhub)
{
u32 reg = readl(vhub->regs + AST_VHUB_CTRL);
UDCDBG(vhub, "Waking up host !\n");
reg |= VHUB_CTRL_MANUAL_REMOTE_WAKEUP;
writel(reg, vhub->regs + AST_VHUB_CTRL);
}
void ast_vhub_device_connect(struct ast_vhub *vhub,
unsigned int port, bool on)
{
if (on)
ast_vhub_change_port_stat(vhub, port, 0,
USB_PORT_STAT_CONNECTION, true);
else
ast_vhub_change_port_stat(vhub, port,
USB_PORT_STAT_CONNECTION |
USB_PORT_STAT_ENABLE,
0, true);
/*
* If the hub is set to wake up the host on connection events
* then send a wakeup.
*/
if (vhub->wakeup_en)
ast_vhub_send_host_wakeup(vhub);
}
static void ast_vhub_wake_work(struct work_struct *work)
{
struct ast_vhub *vhub = container_of(work,
struct ast_vhub,
wake_work);
unsigned long flags;
unsigned int i;
/*
* Wake all sleeping ports. If a port is suspended by
* the host suspend (without explicit state suspend),
* we let the normal host wake path deal with it later.
*/
spin_lock_irqsave(&vhub->lock, flags);
for (i = 0; i < AST_VHUB_NUM_PORTS; i++) {
struct ast_vhub_port *p = &vhub->ports[i];
if (!(p->status & USB_PORT_STAT_SUSPEND))
continue;
ast_vhub_change_port_stat(vhub, i,
USB_PORT_STAT_SUSPEND,
0, true);
ast_vhub_dev_resume(&p->dev);
}
ast_vhub_send_host_wakeup(vhub);
spin_unlock_irqrestore(&vhub->lock, flags);
}
void ast_vhub_hub_wake_all(struct ast_vhub *vhub)
{
/*
* A device is trying to wake the world. Because this
* can recurse into the device, we break the call chain
* using a work queue.
*/
schedule_work(&vhub->wake_work);
}
static void ast_vhub_port_reset(struct ast_vhub *vhub, u8 port)
{
struct ast_vhub_port *p = &vhub->ports[port];
u16 set, clr, speed;
/* First mark disabled */
ast_vhub_change_port_stat(vhub, port,
USB_PORT_STAT_ENABLE |
USB_PORT_STAT_SUSPEND,
USB_PORT_STAT_RESET,
false);
if (!p->dev.driver)
return;
/*
* This will either "start" the port or reset the
* device if already started...
*/
ast_vhub_dev_reset(&p->dev);
/* Grab the right speed */
speed = p->dev.driver->max_speed;
if (speed == USB_SPEED_UNKNOWN || speed > vhub->speed)
speed = vhub->speed;
switch (speed) {
case USB_SPEED_LOW:
set = USB_PORT_STAT_LOW_SPEED;
clr = USB_PORT_STAT_HIGH_SPEED;
break;
case USB_SPEED_FULL:
set = 0;
clr = USB_PORT_STAT_LOW_SPEED |
USB_PORT_STAT_HIGH_SPEED;
break;
case USB_SPEED_HIGH:
set = USB_PORT_STAT_HIGH_SPEED;
clr = USB_PORT_STAT_LOW_SPEED;
break;
default:
UDCDBG(vhub, "Unsupported speed %d when"
" connecting device\n",
speed);
return;
}
clr |= USB_PORT_STAT_RESET;
set |= USB_PORT_STAT_ENABLE;
/* This should ideally be delayed ... */
ast_vhub_change_port_stat(vhub, port, clr, set, true);
}
static enum std_req_rc ast_vhub_set_port_feature(struct ast_vhub_ep *ep,
u8 port, u16 feat)
{
struct ast_vhub *vhub = ep->vhub;
struct ast_vhub_port *p;
if (port == 0 || port > AST_VHUB_NUM_PORTS)
return std_req_stall;
port--;
p = &vhub->ports[port];
switch(feat) {
case USB_PORT_FEAT_SUSPEND:
if (!(p->status & USB_PORT_STAT_ENABLE))
return std_req_complete;
ast_vhub_change_port_stat(vhub, port,
0, USB_PORT_STAT_SUSPEND,
false);
ast_vhub_dev_suspend(&p->dev);
return std_req_complete;
case USB_PORT_FEAT_RESET:
EPDBG(ep, "Port reset !\n");
ast_vhub_port_reset(vhub, port);
return std_req_complete;
case USB_PORT_FEAT_POWER:
/*
* On power-on, if there's a connected device, mark the
* connection flag as changed; some hosts will otherwise
* fail to detect it.
*/
if (p->status & USB_PORT_STAT_CONNECTION) {
p->change |= USB_PORT_STAT_C_CONNECTION;
ast_vhub_update_hub_ep1(vhub, port);
}
return std_req_complete;
case USB_PORT_FEAT_TEST:
case USB_PORT_FEAT_INDICATOR:
/* We don't do anything with these */
return std_req_complete;
}
return std_req_stall;
}
static enum std_req_rc ast_vhub_clr_port_feature(struct ast_vhub_ep *ep,
u8 port, u16 feat)
{
struct ast_vhub *vhub = ep->vhub;
struct ast_vhub_port *p;
if (port == 0 || port > AST_VHUB_NUM_PORTS)
return std_req_stall;
port--;
p = &vhub->ports[port];
switch(feat) {
case USB_PORT_FEAT_ENABLE:
ast_vhub_change_port_stat(vhub, port,
USB_PORT_STAT_ENABLE |
USB_PORT_STAT_SUSPEND, 0,
false);
ast_vhub_dev_suspend(&p->dev);
return std_req_complete;
case USB_PORT_FEAT_SUSPEND:
if (!(p->status & USB_PORT_STAT_SUSPEND))
return std_req_complete;
ast_vhub_change_port_stat(vhub, port,
USB_PORT_STAT_SUSPEND, 0,
false);
ast_vhub_dev_resume(&p->dev);
return std_req_complete;
case USB_PORT_FEAT_POWER:
/* We don't do power control */
return std_req_complete;
case USB_PORT_FEAT_INDICATOR:
/* We don't have indicators */
return std_req_complete;
case USB_PORT_FEAT_C_CONNECTION:
case USB_PORT_FEAT_C_ENABLE:
case USB_PORT_FEAT_C_SUSPEND:
case USB_PORT_FEAT_C_OVER_CURRENT:
case USB_PORT_FEAT_C_RESET:
/* Clear state-change feature */
p->change &= ~(1u << (feat - 16));
ast_vhub_update_hub_ep1(vhub, port);
return std_req_complete;
}
return std_req_stall;
}
static enum std_req_rc ast_vhub_get_port_stat(struct ast_vhub_ep *ep,
u8 port)
{
struct ast_vhub *vhub = ep->vhub;
u16 stat, chg;
if (port == 0 || port > AST_VHUB_NUM_PORTS)
return std_req_stall;
port--;
stat = vhub->ports[port].status;
chg = vhub->ports[port].change;
/* We always have power */
stat |= USB_PORT_STAT_POWER;
EPDBG(ep, " port status=%04x change=%04x\n", stat, chg);
return ast_vhub_simple_reply(ep,
stat & 0xff,
stat >> 8,
chg & 0xff,
chg >> 8);
}
enum std_req_rc ast_vhub_class_hub_request(struct ast_vhub_ep *ep,
struct usb_ctrlrequest *crq)
{
u16 wValue, wIndex, wLength;
wValue = le16_to_cpu(crq->wValue);
wIndex = le16_to_cpu(crq->wIndex);
wLength = le16_to_cpu(crq->wLength);
switch ((crq->bRequestType << 8) | crq->bRequest) {
case GetHubStatus:
EPDBG(ep, "GetHubStatus\n");
return ast_vhub_simple_reply(ep, 0, 0, 0, 0);
case GetPortStatus:
EPDBG(ep, "GetPortStatus(%d)\n", wIndex & 0xff);
return ast_vhub_get_port_stat(ep, wIndex & 0xf);
case GetHubDescriptor:
if (wValue != (USB_DT_HUB << 8))
return std_req_stall;
EPDBG(ep, "GetHubDescriptor(%d)\n", wIndex & 0xff);
return ast_vhub_rep_desc(ep, USB_DT_HUB, wLength);
case SetHubFeature:
case ClearHubFeature:
EPDBG(ep, "Get/SetHubFeature(%d)\n", wValue);
/* No feature, just complete the request */
if (wValue == C_HUB_LOCAL_POWER ||
wValue == C_HUB_OVER_CURRENT)
return std_req_complete;
return std_req_stall;
case SetPortFeature:
EPDBG(ep, "SetPortFeature(%d,%d)\n", wIndex & 0xf, wValue);
return ast_vhub_set_port_feature(ep, wIndex & 0xf, wValue);
case ClearPortFeature:
EPDBG(ep, "ClearPortFeature(%d,%d)\n", wIndex & 0xf, wValue);
return ast_vhub_clr_port_feature(ep, wIndex & 0xf, wValue);
default:
EPDBG(ep, "Unknown class request\n");
}
return std_req_stall;
}
void ast_vhub_hub_suspend(struct ast_vhub *vhub)
{
unsigned int i;
UDCDBG(vhub, "USB bus suspend\n");
if (vhub->suspended)
return;
vhub->suspended = true;
/*
* Forward to unsuspended ports without changing
* their connection status.
*/
for (i = 0; i < AST_VHUB_NUM_PORTS; i++) {
struct ast_vhub_port *p = &vhub->ports[i];
if (!(p->status & USB_PORT_STAT_SUSPEND))
ast_vhub_dev_suspend(&p->dev);
}
}
void ast_vhub_hub_resume(struct ast_vhub *vhub)
{
unsigned int i;
UDCDBG(vhub, "USB bus resume\n");
if (!vhub->suspended)
return;
vhub->suspended = false;
/*
* Forward to unsuspended ports without changing
* their connection status.
*/
for (i = 0; i < AST_VHUB_NUM_PORTS; i++) {
struct ast_vhub_port *p = &vhub->ports[i];
if (!(p->status & USB_PORT_STAT_SUSPEND))
ast_vhub_dev_resume(&p->dev);
}
}
void ast_vhub_hub_reset(struct ast_vhub *vhub)
{
unsigned int i;
UDCDBG(vhub, "USB bus reset\n");
/*
* Is the speed known? If not, we don't care: we aren't
* initialized yet and ports haven't been enabled.
*/
if (vhub->speed == USB_SPEED_UNKNOWN)
return;
/* We aren't suspended anymore obviously */
vhub->suspended = false;
/* No speed set */
vhub->speed = USB_SPEED_UNKNOWN;
/* Wakeup not enabled anymore */
vhub->wakeup_en = false;
/*
* Clear all port status, disable gadgets and "suspend"
* them. They will be woken up by a port reset.
*/
for (i = 0; i < AST_VHUB_NUM_PORTS; i++) {
struct ast_vhub_port *p = &vhub->ports[i];
/* Only keep the connected flag */
p->status &= USB_PORT_STAT_CONNECTION;
p->change = 0;
/* Suspend the gadget if any */
ast_vhub_dev_suspend(&p->dev);
}
/* Cleanup HW */
writel(0, vhub->regs + AST_VHUB_CONF);
writel(0, vhub->regs + AST_VHUB_EP0_CTRL);
writel(VHUB_EP1_CTRL_RESET_TOGGLE |
VHUB_EP1_CTRL_ENABLE,
vhub->regs + AST_VHUB_EP1_CTRL);
writel(0, vhub->regs + AST_VHUB_EP1_STS_CHG);
}
void ast_vhub_init_hub(struct ast_vhub *vhub)
{
vhub->speed = USB_SPEED_UNKNOWN;
INIT_WORK(&vhub->wake_work, ast_vhub_wake_work);
}

@@ -0,0 +1,514 @@
/* SPDX-License-Identifier: GPL-2.0+ */
#ifndef __ASPEED_VHUB_H
#define __ASPEED_VHUB_H
/*****************************
* *
* VHUB register definitions *
* *
*****************************/
#define AST_VHUB_CTRL 0x00 /* Root Function Control & Status Register */
#define AST_VHUB_CONF 0x04 /* Root Configuration Setting Register */
#define AST_VHUB_IER 0x08 /* Interrupt Ctrl Register */
#define AST_VHUB_ISR 0x0C /* Interrupt Status Register */
#define AST_VHUB_EP_ACK_IER 0x10 /* Programmable Endpoint Pool ACK Interrupt Enable Register */
#define AST_VHUB_EP_NACK_IER 0x14 /* Programmable Endpoint Pool NACK Interrupt Enable Register */
#define AST_VHUB_EP_ACK_ISR 0x18 /* Programmable Endpoint Pool ACK Interrupt Status Register */
#define AST_VHUB_EP_NACK_ISR 0x1C /* Programmable Endpoint Pool NACK Interrupt Status Register */
#define AST_VHUB_SW_RESET 0x20 /* Device Controller Soft Reset Enable Register */
#define AST_VHUB_USBSTS 0x24 /* USB Status Register */
#define AST_VHUB_EP_TOGGLE 0x28 /* Programmable Endpoint Pool Data Toggle Value Set */
#define AST_VHUB_ISO_FAIL_ACC 0x2C /* Isochronous Transaction Fail Accumulator */
#define AST_VHUB_EP0_CTRL 0x30 /* Endpoint 0 Control/Status Register */
#define AST_VHUB_EP0_DATA 0x34 /* Base Address of Endpoint 0 IN/OUT Data Buffer Register */
#define AST_VHUB_EP1_CTRL 0x38 /* Endpoint 1 Control/Status Register */
#define AST_VHUB_EP1_STS_CHG 0x3C /* Endpoint 1 Status Change Bitmap Data */
#define AST_VHUB_SETUP0 0x80 /* Root Device Setup Data Buffer0 */
#define AST_VHUB_SETUP1 0x84 /* Root Device Setup Data Buffer1 */
/* Main control reg */
#define VHUB_CTRL_PHY_CLK (1 << 31)
#define VHUB_CTRL_PHY_LOOP_TEST (1 << 25)
#define VHUB_CTRL_DN_PWN (1 << 24)
#define VHUB_CTRL_DP_PWN (1 << 23)
#define VHUB_CTRL_LONG_DESC (1 << 18)
#define VHUB_CTRL_ISO_RSP_CTRL (1 << 17)
#define VHUB_CTRL_SPLIT_IN (1 << 16)
#define VHUB_CTRL_LOOP_T_RESULT (1 << 15)
#define VHUB_CTRL_LOOP_T_STS (1 << 14)
#define VHUB_CTRL_PHY_BIST_RESULT (1 << 13)
#define VHUB_CTRL_PHY_BIST_CTRL (1 << 12)
#define VHUB_CTRL_PHY_RESET_DIS (1 << 11)
#define VHUB_CTRL_SET_TEST_MODE(x) ((x) << 8)
#define VHUB_CTRL_MANUAL_REMOTE_WAKEUP (1 << 4)
#define VHUB_CTRL_AUTO_REMOTE_WAKEUP (1 << 3)
#define VHUB_CTRL_CLK_STOP_SUSPEND (1 << 2)
#define VHUB_CTRL_FULL_SPEED_ONLY (1 << 1)
#define VHUB_CTRL_UPSTREAM_CONNECT (1 << 0)
/* IER & ISR */
#define VHUB_IRQ_USB_CMD_DEADLOCK (1 << 18)
#define VHUB_IRQ_EP_POOL_NAK (1 << 17)
#define VHUB_IRQ_EP_POOL_ACK_STALL (1 << 16)
#define VHUB_IRQ_DEVICE5 (1 << 13)
#define VHUB_IRQ_DEVICE4 (1 << 12)
#define VHUB_IRQ_DEVICE3 (1 << 11)
#define VHUB_IRQ_DEVICE2 (1 << 10)
#define VHUB_IRQ_DEVICE1 (1 << 9)
#define VHUB_IRQ_BUS_RESUME (1 << 8)
#define VHUB_IRQ_BUS_SUSPEND (1 << 7)
#define VHUB_IRQ_BUS_RESET (1 << 6)
#define VHUB_IRQ_HUB_EP1_IN_DATA_ACK (1 << 5)
#define VHUB_IRQ_HUB_EP0_IN_DATA_NAK (1 << 4)
#define VHUB_IRQ_HUB_EP0_IN_ACK_STALL (1 << 3)
#define VHUB_IRQ_HUB_EP0_OUT_NAK (1 << 2)
#define VHUB_IRQ_HUB_EP0_OUT_ACK_STALL (1 << 1)
#define VHUB_IRQ_HUB_EP0_SETUP (1 << 0)
#define VHUB_IRQ_ACK_ALL 0x1ff
/* SW reset reg */
#define VHUB_SW_RESET_EP_POOL (1 << 9)
#define VHUB_SW_RESET_DMA_CONTROLLER (1 << 8)
#define VHUB_SW_RESET_DEVICE5 (1 << 5)
#define VHUB_SW_RESET_DEVICE4 (1 << 4)
#define VHUB_SW_RESET_DEVICE3 (1 << 3)
#define VHUB_SW_RESET_DEVICE2 (1 << 2)
#define VHUB_SW_RESET_DEVICE1 (1 << 1)
#define VHUB_SW_RESET_ROOT_HUB (1 << 0)
#define VHUB_SW_RESET_ALL (VHUB_SW_RESET_EP_POOL | \
VHUB_SW_RESET_DMA_CONTROLLER | \
VHUB_SW_RESET_DEVICE5 | \
VHUB_SW_RESET_DEVICE4 | \
VHUB_SW_RESET_DEVICE3 | \
VHUB_SW_RESET_DEVICE2 | \
VHUB_SW_RESET_DEVICE1 | \
VHUB_SW_RESET_ROOT_HUB)
/* EP ACK/NACK IRQ masks */
#define VHUB_EP_IRQ(n) (1 << (n))
#define VHUB_EP_IRQ_ALL 0x7fff /* 15 EPs */
/* USB status reg */
#define VHUB_USBSTS_HISPEED (1 << 27)
/* EP toggle */
#define VHUB_EP_TOGGLE_VALUE (1 << 8)
#define VHUB_EP_TOGGLE_SET_EPNUM(x) ((x) & 0x1f)
/* HUB EP0 control */
#define VHUB_EP0_CTRL_STALL (1 << 0)
#define VHUB_EP0_TX_BUFF_RDY (1 << 1)
#define VHUB_EP0_RX_BUFF_RDY (1 << 2)
#define VHUB_EP0_RX_LEN(x) (((x) >> 16) & 0x7f)
#define VHUB_EP0_SET_TX_LEN(x) (((x) & 0x7f) << 8)
/* HUB EP1 control */
#define VHUB_EP1_CTRL_RESET_TOGGLE (1 << 2)
#define VHUB_EP1_CTRL_STALL (1 << 1)
#define VHUB_EP1_CTRL_ENABLE (1 << 0)
/***********************************
* *
* per-device register definitions *
* *
***********************************/
#define AST_VHUB_DEV_EN_CTRL 0x00
#define AST_VHUB_DEV_ISR 0x04
#define AST_VHUB_DEV_EP0_CTRL 0x08
#define AST_VHUB_DEV_EP0_DATA 0x0c
/* Device enable control */
#define VHUB_DEV_EN_SET_ADDR(x) ((x) << 8)
#define VHUB_DEV_EN_ADDR_MASK ((0xff) << 8)
#define VHUB_DEV_EN_EP0_NAK_IRQEN (1 << 6)
#define VHUB_DEV_EN_EP0_IN_ACK_IRQEN (1 << 5)
#define VHUB_DEV_EN_EP0_OUT_NAK_IRQEN (1 << 4)
#define VHUB_DEV_EN_EP0_OUT_ACK_IRQEN (1 << 3)
#define VHUB_DEV_EN_EP0_SETUP_IRQEN (1 << 2)
#define VHUB_DEV_EN_SPEED_SEL_HIGH (1 << 1)
#define VHUB_DEV_EN_ENABLE_PORT (1 << 0)
/* Interrupt status */
#define VHUV_DEV_IRQ_EP0_IN_DATA_NACK (1 << 4)
#define VHUV_DEV_IRQ_EP0_IN_ACK_STALL (1 << 3)
#define VHUV_DEV_IRQ_EP0_OUT_DATA_NACK (1 << 2)
#define VHUV_DEV_IRQ_EP0_OUT_ACK_STALL (1 << 1)
#define VHUV_DEV_IRQ_EP0_SETUP (1 << 0)
/* Control bits.
*
* Note: The driver relies on the bulk of those bits
* matching corresponding vHub EP0 control bits
*/
#define VHUB_DEV_EP0_CTRL_STALL VHUB_EP0_CTRL_STALL
#define VHUB_DEV_EP0_TX_BUFF_RDY VHUB_EP0_TX_BUFF_RDY
#define VHUB_DEV_EP0_RX_BUFF_RDY VHUB_EP0_RX_BUFF_RDY
#define VHUB_DEV_EP0_RX_LEN(x) VHUB_EP0_RX_LEN(x)
#define VHUB_DEV_EP0_SET_TX_LEN(x) VHUB_EP0_SET_TX_LEN(x)
/*************************************
* *
* per-endpoint register definitions *
* *
*************************************/
#define AST_VHUB_EP_CONFIG 0x00
#define AST_VHUB_EP_DMA_CTLSTAT 0x04
#define AST_VHUB_EP_DESC_BASE 0x08
#define AST_VHUB_EP_DESC_STATUS 0x0C
/* EP config reg */
#define VHUB_EP_CFG_SET_MAX_PKT(x) (((x) & 0x3ff) << 16)
#define VHUB_EP_CFG_AUTO_DATA_DISABLE (1 << 13)
#define VHUB_EP_CFG_STALL_CTRL (1 << 12)
#define VHUB_EP_CFG_SET_EP_NUM(x) (((x) & 0xf) << 8)
#define VHUB_EP_CFG_SET_TYPE(x) ((x) << 5)
#define EP_TYPE_OFF 0
#define EP_TYPE_BULK 1
#define EP_TYPE_INT 2
#define EP_TYPE_ISO 3
#define VHUB_EP_CFG_DIR_OUT (1 << 4)
#define VHUB_EP_CFG_SET_DEV(x) ((x) << 1)
#define VHUB_EP_CFG_ENABLE (1 << 0)
/* EP DMA control */
#define VHUB_EP_DMA_PROC_STATUS(x) (((x) >> 4) & 0xf)
#define EP_DMA_PROC_RX_IDLE 0
#define EP_DMA_PROC_TX_IDLE 8
#define VHUB_EP_DMA_IN_LONG_MODE (1 << 3)
#define VHUB_EP_DMA_OUT_CONTIG_MODE (1 << 3)
#define VHUB_EP_DMA_CTRL_RESET (1 << 2)
#define VHUB_EP_DMA_SINGLE_STAGE (1 << 1)
#define VHUB_EP_DMA_DESC_MODE (1 << 0)
/* EP DMA status */
#define VHUB_EP_DMA_SET_TX_SIZE(x) ((x) << 16)
#define VHUB_EP_DMA_TX_SIZE(x) (((x) >> 16) & 0x7ff)
#define VHUB_EP_DMA_RPTR(x) (((x) >> 8) & 0xff)
#define VHUB_EP_DMA_SET_RPTR(x) (((x) & 0xff) << 8)
#define VHUB_EP_DMA_SET_CPU_WPTR(x) (x)
#define VHUB_EP_DMA_SINGLE_KICK (1 << 0) /* WPTR = 1 for single mode */
/*******************************
* *
* DMA descriptors definitions *
* *
*******************************/
/* Desc W1 IN */
#define VHUB_DSC1_IN_INTERRUPT (1 << 31)
#define VHUB_DSC1_IN_SPID_DATA0 (0 << 14)
#define VHUB_DSC1_IN_SPID_DATA2 (1 << 14)
#define VHUB_DSC1_IN_SPID_DATA1 (2 << 14)
#define VHUB_DSC1_IN_SPID_MDATA (3 << 14)
#define VHUB_DSC1_IN_SET_LEN(x) ((x) & 0xfff)
#define VHUB_DSC1_IN_LEN(x) ((x) & 0xfff)
/****************************************
* *
* Data structures and misc definitions *
* *
****************************************/
#define AST_VHUB_NUM_GEN_EPs 15 /* Generic non-0 EPs */
#define AST_VHUB_NUM_PORTS 5 /* vHub ports */
#define AST_VHUB_EP0_MAX_PACKET 64 /* EP0's max packet size */
#define AST_VHUB_EPn_MAX_PACKET 1024 /* Generic EPs max packet size */
#define AST_VHUB_DESCS_COUNT 256 /* Use 256 descriptor mode (valid
* values are 256 and 32)
*/
struct ast_vhub;
struct ast_vhub_dev;
/*
* DMA descriptor (generic EPs only, currently only used
* for IN endpoints)
*/
struct ast_vhub_desc {
__le32 w0;
__le32 w1;
};
/* A transfer request, either core-originated or internal */
struct ast_vhub_req {
struct usb_request req;
struct list_head queue;
/* Actual count written to descriptors (desc mode only) */
unsigned int act_count;
/*
* Desc number of the final packet or -1. For non-desc
* mode (or ep0), any >= 0 value means "last packet"
*/
int last_desc;
/* Request active (pending DMAs) */
bool active : 1;
/* Internal request (don't call back core) */
bool internal : 1;
};
#define to_ast_req(__ureq) container_of(__ureq, struct ast_vhub_req, req)
/* Current state of an EP0 */
enum ep0_state {
ep0_state_token,
ep0_state_data,
ep0_state_status,
};
/*
* An endpoint, either generic, ep0, actual gadget EP
* or internal use vhub EP0. vhub EP1 doesn't have an
* associated structure as it's mostly HW managed.
*/
struct ast_vhub_ep {
struct usb_ep ep;
/* Request queue */
struct list_head queue;
/* EP index in the device, 0 means this is an EP0 */
unsigned int d_idx;
/* Dev pointer or NULL for vHub EP0 */
struct ast_vhub_dev *dev;
/* vHub itself */
struct ast_vhub *vhub;
/*
* DMA buffer for EP0, fallback DMA buffer for misaligned
* OUT transfers for generic EPs
*/
void *buf;
dma_addr_t buf_dma;
/* The rest depends on the EP type */
union {
/* EP0 (either device or vhub) */
struct {
/*
* EP0 registers are "similar" for
* vHub and devices but located in
* different places.
*/
void __iomem *ctlstat;
void __iomem *setup;
/* Current state & direction */
enum ep0_state state;
bool dir_in;
/* Internal use request */
struct ast_vhub_req req;
} ep0;
/* Generic endpoint (aka EPn) */
struct {
/* Registers */
void __iomem *regs;
/* Index in global pool (0..14) */
unsigned int g_idx;
/* DMA Descriptors */
struct ast_vhub_desc *descs;
dma_addr_t descs_dma;
unsigned int d_next;
unsigned int d_last;
unsigned int dma_conf;
/* Max chunk size for IN EPs */
unsigned int chunk_max;
/* State flags */
bool is_in : 1;
bool is_iso : 1;
bool stalled : 1;
bool wedged : 1;
bool enabled : 1;
bool desc_mode : 1;
} epn;
};
};
#define to_ast_ep(__uep) container_of(__uep, struct ast_vhub_ep, ep)
/* A device attached to a vHub port */
struct ast_vhub_dev {
struct ast_vhub *vhub;
void __iomem *regs;
/* Device index (0...4) and name string */
unsigned int index;
const char *name;
/* sysfs enclosure for the gadget gunk */
struct device *port_dev;
/* Link to gadget core */
struct usb_gadget gadget;
struct usb_gadget_driver *driver;
bool registered : 1;
bool wakeup_en : 1;
bool suspended : 1;
bool enabled : 1;
/* Endpoint structures */
struct ast_vhub_ep ep0;
struct ast_vhub_ep *epns[AST_VHUB_NUM_GEN_EPs];
};
#define to_ast_dev(__g) container_of(__g, struct ast_vhub_dev, gadget)
/* Per vhub port state info structure */
struct ast_vhub_port {
/* Port status & status change registers */
u16 status;
u16 change;
/* Associated device slot */
struct ast_vhub_dev dev;
};
/* Global vhub structure */
struct ast_vhub {
struct platform_device *pdev;
void __iomem *regs;
int irq;
spinlock_t lock;
struct work_struct wake_work;
struct clk *clk;
/* EP0 DMA buffers allocated in one chunk */
void *ep0_bufs;
dma_addr_t ep0_bufs_dma;
/* EP0 of the vhub itself */
struct ast_vhub_ep ep0;
/* State of vhub ep1 */
bool ep1_stalled : 1;
/* Per-port info */
struct ast_vhub_port ports[AST_VHUB_NUM_PORTS];
/* Generic EP data structures */
struct ast_vhub_ep epns[AST_VHUB_NUM_GEN_EPs];
/* Upstream bus is suspended? */
bool suspended : 1;
/* Hub itself can signal remote wakeup */
bool wakeup_en : 1;
/* Force full speed only */
bool force_usb1 : 1;
/* Upstream bus speed captured at bus reset */
unsigned int speed;
};
/* Standard request handlers result codes */
enum std_req_rc {
std_req_stall = -1, /* Stall requested */
std_req_complete = 0, /* Request completed with no data */
std_req_data = 1, /* Request completed with data */
std_req_driver = 2, /* Pass to driver pls */
};
#ifdef CONFIG_USB_GADGET_VERBOSE
#define UDCVDBG(u, fmt...) dev_dbg(&(u)->pdev->dev, fmt)
#define EPVDBG(ep, fmt, ...) do { \
dev_dbg(&(ep)->vhub->pdev->dev, \
"%s:EP%d " fmt, \
(ep)->dev ? (ep)->dev->name : "hub", \
(ep)->d_idx, ##__VA_ARGS__); \
} while(0)
#define DVDBG(d, fmt, ...) do { \
dev_dbg(&(d)->vhub->pdev->dev, \
"%s " fmt, (d)->name, \
##__VA_ARGS__); \
} while(0)
#else
#define UDCVDBG(u, fmt...) do { } while(0)
#define EPVDBG(ep, fmt, ...) do { } while(0)
#define DVDBG(d, fmt, ...) do { } while(0)
#endif
#ifdef CONFIG_USB_GADGET_DEBUG
#define UDCDBG(u, fmt...) dev_dbg(&(u)->pdev->dev, fmt)
#define EPDBG(ep, fmt, ...) do { \
dev_dbg(&(ep)->vhub->pdev->dev, \
"%s:EP%d " fmt, \
(ep)->dev ? (ep)->dev->name : "hub", \
(ep)->d_idx, ##__VA_ARGS__); \
} while(0)
#define DDBG(d, fmt, ...) do { \
dev_dbg(&(d)->vhub->pdev->dev, \
"%s " fmt, (d)->name, \
##__VA_ARGS__); \
} while(0)
#else
#define UDCDBG(u, fmt...) do { } while(0)
#define EPDBG(ep, fmt, ...) do { } while(0)
#define DDBG(d, fmt, ...) do { } while(0)
#endif
/* core.c */
void ast_vhub_done(struct ast_vhub_ep *ep, struct ast_vhub_req *req,
int status);
void ast_vhub_nuke(struct ast_vhub_ep *ep, int status);
struct usb_request *ast_vhub_alloc_request(struct usb_ep *u_ep,
gfp_t gfp_flags);
void ast_vhub_free_request(struct usb_ep *u_ep, struct usb_request *u_req);
void ast_vhub_init_hw(struct ast_vhub *vhub);
/* ep0.c */
void ast_vhub_ep0_handle_ack(struct ast_vhub_ep *ep, bool in_ack);
void ast_vhub_ep0_handle_setup(struct ast_vhub_ep *ep);
void ast_vhub_init_ep0(struct ast_vhub *vhub, struct ast_vhub_ep *ep,
struct ast_vhub_dev *dev);
int ast_vhub_reply(struct ast_vhub_ep *ep, char *ptr, int len);
int __ast_vhub_simple_reply(struct ast_vhub_ep *ep, int len, ...);
#define ast_vhub_simple_reply(udc, ...) \
__ast_vhub_simple_reply((udc), \
sizeof((u8[]) { __VA_ARGS__ })/sizeof(u8), \
__VA_ARGS__)
/* hub.c */
void ast_vhub_init_hub(struct ast_vhub *vhub);
enum std_req_rc ast_vhub_std_hub_request(struct ast_vhub_ep *ep,
struct usb_ctrlrequest *crq);
enum std_req_rc ast_vhub_class_hub_request(struct ast_vhub_ep *ep,
struct usb_ctrlrequest *crq);
void ast_vhub_device_connect(struct ast_vhub *vhub, unsigned int port,
bool on);
void ast_vhub_hub_suspend(struct ast_vhub *vhub);
void ast_vhub_hub_resume(struct ast_vhub *vhub);
void ast_vhub_hub_reset(struct ast_vhub *vhub);
void ast_vhub_hub_wake_all(struct ast_vhub *vhub);
/* dev.c */
int ast_vhub_init_dev(struct ast_vhub *vhub, unsigned int idx);
void ast_vhub_del_dev(struct ast_vhub_dev *d);
void ast_vhub_dev_irq(struct ast_vhub_dev *d);
int ast_vhub_std_dev_request(struct ast_vhub_ep *ep,
struct usb_ctrlrequest *crq);
/* epn.c */
void ast_vhub_epn_ack_irq(struct ast_vhub_ep *ep);
void ast_vhub_update_epn_stall(struct ast_vhub_ep *ep);
struct ast_vhub_ep *ast_vhub_alloc_epn(struct ast_vhub_dev *d, u8 addr);
void ast_vhub_dev_suspend(struct ast_vhub_dev *d);
void ast_vhub_dev_resume(struct ast_vhub_dev *d);
void ast_vhub_dev_reset(struct ast_vhub_dev *d);
#endif /* __ASPEED_VHUB_H */


@@ -20,7 +20,6 @@
 #include <linux/ctype.h>
 #include <linux/usb/ch9.h>
 #include <linux/usb/gadget.h>
-#include <linux/usb/atmel_usba_udc.h>
 #include <linux/delay.h>
 #include <linux/of.h>
 #include <linux/irq.h>
@@ -417,7 +416,7 @@ static inline void usba_int_enb_set(struct usba_udc *udc, u32 val)
 static int vbus_is_present(struct usba_udc *udc)
 {
 	if (udc->vbus_pin)
-		return gpiod_get_value(udc->vbus_pin) ^ udc->vbus_pin_inverted;
+		return gpiod_get_value(udc->vbus_pin);
 
 	/* No Vbus detection: Assume always present */
 	return 1;
@@ -2076,7 +2075,6 @@ static struct usba_ep * atmel_udc_of_init(struct platform_device *pdev,
 	udc->vbus_pin = devm_gpiod_get_optional(&pdev->dev, "atmel,vbus",
 						GPIOD_IN);
-	udc->vbus_pin_inverted = gpiod_is_active_low(udc->vbus_pin);
 
 	if (fifo_mode == 0) {
 		pp = NULL;
@@ -2279,15 +2277,15 @@ static int usba_udc_probe(struct platform_device *pdev)
 	if (udc->vbus_pin) {
 		irq_set_status_flags(gpiod_to_irq(udc->vbus_pin), IRQ_NOAUTOEN);
 		ret = devm_request_threaded_irq(&pdev->dev,
-				gpiod_to_irq(udc->vbus_pin), NULL,
-				usba_vbus_irq_thread, USBA_VBUS_IRQFLAGS,
-				"atmel_usba_udc", udc);
+						gpiod_to_irq(udc->vbus_pin), NULL,
+						usba_vbus_irq_thread, USBA_VBUS_IRQFLAGS,
+						"atmel_usba_udc", udc);
 		if (ret) {
 			udc->vbus_pin = NULL;
 			dev_warn(&udc->pdev->dev,
 				 "failed to request vbus irq; "
 				 "assuming always on\n");
 		}
 	}
 
 	ret = usb_add_gadget_udc(&pdev->dev, &udc->gadget);


@@ -326,7 +326,6 @@ struct usba_udc {
 	const struct usba_udc_errata *errata;
 	int irq;
 	struct gpio_desc *vbus_pin;
-	int vbus_pin_inverted;
 	int num_ep;
 	int configured_ep;
 	struct usba_fifo_cfg *fifo_cfg;


@@ -244,6 +244,12 @@ EXPORT_SYMBOL_GPL(usb_ep_free_request);
  * Returns zero, or a negative error code. Endpoints that are not enabled
  * report errors; errors will also be
  * reported when the usb peripheral is disconnected.
+ *
+ * If and only if @req is successfully queued (the return value is zero),
+ * @req->complete() will be called exactly once, when the Gadget core and
+ * UDC are finished with the request. When the completion function is called,
+ * control of the request is returned to the device driver which submitted it.
+ * The completion handler may then immediately free or reuse @req.
  */
 int usb_ep_queue(struct usb_ep *ep,
 		 struct usb_request *req, gfp_t gfp_flags)


@@ -253,6 +253,7 @@ static int dr_controller_setup(struct fsl_udc *udc)
 		portctrl |= PORTSCX_PTW_16BIT;
 		/* fall through */
 	case FSL_USB2_PHY_UTMI:
+	case FSL_USB2_PHY_UTMI_DUAL:
 		if (udc->pdata->have_sysif_regs) {
 			if (udc->pdata->controller_ver) {
 				/* controller version 1.6 or above */


@@ -333,6 +333,7 @@ struct renesas_usb3 {
 	struct extcon_dev *extcon;
 	struct work_struct extcon_work;
 	struct phy *phy;
+	struct dentry *dentry;
 
 	struct renesas_usb3_ep *usb3_ep;
 	int num_usb3_eps;
@@ -622,6 +623,13 @@ static void usb3_disconnect(struct renesas_usb3 *usb3)
 	usb3_usb2_pullup(usb3, 0);
 	usb3_clear_bit(usb3, USB30_CON_B3_CONNECT, USB3_USB30_CON);
 	usb3_reset_epc(usb3);
+	usb3_disable_irq_1(usb3, USB_INT_1_B2_RSUM | USB_INT_1_B3_PLLWKUP |
+			   USB_INT_1_B3_LUPSUCS | USB_INT_1_B3_DISABLE |
+			   USB_INT_1_SPEED | USB_INT_1_B3_WRMRST |
+			   USB_INT_1_B3_HOTRST | USB_INT_1_B2_SPND |
+			   USB_INT_1_B2_L1SPND | USB_INT_1_B2_USBRST);
+	usb3_clear_bit(usb3, USB_COM_CON_SPD_MODE, USB3_USB_COM_CON);
+	usb3_init_epc_registers(usb3);
 
 	if (usb3->driver)
 		usb3->driver->disconnect(&usb3->gadget);
@@ -2393,8 +2401,12 @@ static void renesas_usb3_debugfs_init(struct renesas_usb3 *usb3,
 	file = debugfs_create_file("b_device", 0644, root, usb3,
 				   &renesas_usb3_b_device_fops);
-	if (!file)
+	if (!file) {
 		dev_info(dev, "%s: Can't create debugfs mode\n", __func__);
+		debugfs_remove_recursive(root);
+	} else {
+		usb3->dentry = root;
+	}
 }
 
 /*------- platform_driver ------------------------------------------------*/
@@ -2402,14 +2414,13 @@ static int renesas_usb3_remove(struct platform_device *pdev)
 {
 	struct renesas_usb3 *usb3 = platform_get_drvdata(pdev);
 
+	debugfs_remove_recursive(usb3->dentry);
 	device_remove_file(&pdev->dev, &dev_attr_role);
 	usb_del_gadget_udc(&usb3->gadget);
 	renesas_usb3_dma_free_prd(usb3, &pdev->dev);
 
 	__renesas_usb3_ep_free_request(usb3->ep0_req);
-	if (usb3->phy)
-		phy_put(usb3->phy);
 	pm_runtime_disable(&pdev->dev);
 
 	return 0;
@@ -2628,6 +2639,17 @@ static int renesas_usb3_probe(struct platform_device *pdev)
 	if (ret < 0)
 		goto err_alloc_prd;
 
+	/*
+	 * This is optional. So, if this driver cannot get a phy,
+	 * this driver will not handle a phy anymore.
+	 */
+	usb3->phy = devm_phy_optional_get(&pdev->dev, "usb");
+	if (IS_ERR(usb3->phy)) {
+		ret = PTR_ERR(usb3->phy);
+		goto err_add_udc;
+	}
+
+	pm_runtime_enable(&pdev->dev);
 	ret = usb_add_gadget_udc(&pdev->dev, &usb3->gadget);
 	if (ret < 0)
 		goto err_add_udc;
@@ -2636,20 +2658,11 @@ static int renesas_usb3_probe(struct platform_device *pdev)
 	if (ret < 0)
 		goto err_dev_create;
 
-	/*
-	 * This is an optional. So, if this driver cannot get a phy,
-	 * this driver will not handle a phy anymore.
-	 */
-	usb3->phy = devm_phy_get(&pdev->dev, "usb");
-	if (IS_ERR(usb3->phy))
-		usb3->phy = NULL;
-
 	usb3->workaround_for_vbus = priv->workaround_for_vbus;
 
 	renesas_usb3_debugfs_init(usb3, &pdev->dev);
 	dev_info(&pdev->dev, "probed%s\n", usb3->phy ? " with phy" : "");
-	pm_runtime_enable(usb3_to_dev(usb3));
 
 	return 0;


@ -33,7 +33,7 @@
* characters (which are also widely used in C strings). * characters (which are also widely used in C strings).
*/ */
int int
usb_gadget_get_string (struct usb_gadget_strings *table, int id, u8 *buf) usb_gadget_get_string (const struct usb_gadget_strings *table, int id, u8 *buf)
{ {
struct usb_string *s; struct usb_string *s;
int len; int len;


@@ -2,7 +2,7 @@
 config USB_MTU3
 	tristate "MediaTek USB3 Dual Role controller"
-	depends on EXTCON && (USB || USB_GADGET)
+	depends on USB || USB_GADGET
 	depends on ARCH_MEDIATEK || COMPILE_TEST
 	select USB_XHCI_MTK if USB_SUPPORT && USB_XHCI_HCD
 	help
@@ -40,6 +40,7 @@ config USB_MTU3_GADGET
 config USB_MTU3_DUAL_ROLE
 	bool "Dual Role mode"
 	depends on ((USB=y || USB=USB_MTU3) && (USB_GADGET=y || USB_GADGET=USB_MTU3))
+	depends on (EXTCON=y || EXTCON=USB_MTU3)
 	help
 	  This is the default mode of working of MTU3 controller where
 	  both host and gadget features are enabled.


@ -197,9 +197,6 @@ struct mtu3_gpd_ring {
* @edev: external connector used to detect vbus and iddig changes * @edev: external connector used to detect vbus and iddig changes
* @vbus_nb: notifier for vbus detection * @vbus_nb: notifier for vbus detection
* @vbus_nb: notifier for iddig(idpin) detection * @vbus_nb: notifier for iddig(idpin) detection
* @extcon_reg_dwork: delay work for extcon notifier register, waiting for
* xHCI driver initialization, it's necessary for system bootup
* as device.
* @is_u3_drd: whether port0 supports usb3.0 dual-role device or not * @is_u3_drd: whether port0 supports usb3.0 dual-role device or not
* @manual_drd_enabled: it's true when supports dual-role device by debugfs * @manual_drd_enabled: it's true when supports dual-role device by debugfs
* to switch host/device modes depending on user input. * to switch host/device modes depending on user input.
@ -209,7 +206,6 @@ struct otg_switch_mtk {
struct extcon_dev *edev; struct extcon_dev *edev;
struct notifier_block vbus_nb; struct notifier_block vbus_nb;
struct notifier_block id_nb; struct notifier_block id_nb;
struct delayed_work extcon_reg_dwork;
bool is_u3_drd; bool is_u3_drd;
bool manual_drd_enabled; bool manual_drd_enabled;
}; };


@ -238,15 +238,6 @@ static int ssusb_extcon_register(struct otg_switch_mtk *otg_sx)
return 0; return 0;
} }
static void extcon_register_dwork(struct work_struct *work)
{
struct delayed_work *dwork = to_delayed_work(work);
struct otg_switch_mtk *otg_sx =
container_of(dwork, struct otg_switch_mtk, extcon_reg_dwork);
ssusb_extcon_register(otg_sx);
}
/* /*
* We provide an interface via debugfs to switch between host and device modes * We provide an interface via debugfs to switch between host and device modes
* depending on user input. * depending on user input.
@ -407,18 +398,10 @@ int ssusb_otg_switch_init(struct ssusb_mtk *ssusb)
{ {
struct otg_switch_mtk *otg_sx = &ssusb->otg_switch; struct otg_switch_mtk *otg_sx = &ssusb->otg_switch;
if (otg_sx->manual_drd_enabled) { if (otg_sx->manual_drd_enabled)
ssusb_debugfs_init(ssusb); ssusb_debugfs_init(ssusb);
} else { else
INIT_DELAYED_WORK(&otg_sx->extcon_reg_dwork, ssusb_extcon_register(otg_sx);
extcon_register_dwork);
/*
* It is enough to delay 1s for waiting for
* host initialization
*/
schedule_delayed_work(&otg_sx->extcon_reg_dwork, HZ);
}
return 0; return 0;
} }
@ -429,6 +412,4 @@ void ssusb_otg_switch_exit(struct ssusb_mtk *ssusb)
if (otg_sx->manual_drd_enabled) if (otg_sx->manual_drd_enabled)
ssusb_debugfs_exit(ssusb); ssusb_debugfs_exit(ssusb);
else
cancel_delayed_work(&otg_sx->extcon_reg_dwork);
} }


@@ -660,14 +660,10 @@ int mtu3_gadget_setup(struct mtu3 *mtu)
 	mtu3_gadget_init_eps(mtu);
 
 	ret = usb_add_gadget_udc(mtu->dev, &mtu->g);
-	if (ret) {
+	if (ret)
 		dev_err(mtu->dev, "failed to register udc\n");
-		return ret;
-	}
-
-	usb_gadget_set_state(&mtu->g, USB_STATE_NOTATTACHED);
 
-	return 0;
+	return ret;
 }
 
 void mtu3_gadget_cleanup(struct mtu3 *mtu)


@@ -7,6 +7,7 @@
  * Author: Chunfeng.Yun <chunfeng.yun@mediatek.com>
  */
 
+#include <linux/iopoll.h>
 #include <linux/usb/composite.h>
 
 #include "mtu3.h"
@@ -263,6 +264,7 @@ static int handle_test_mode(struct mtu3 *mtu, struct usb_ctrlrequest *setup)
 {
 	void __iomem *mbase = mtu->mac_base;
 	int handled = 1;
+	u32 value;
 
 	switch (le16_to_cpu(setup->wIndex) >> 8) {
 	case TEST_J:
@@ -292,6 +294,14 @@ static int handle_test_mode(struct mtu3 *mtu, struct usb_ctrlrequest *setup)
 	if (mtu->test_mode_nr == TEST_PACKET_MODE)
 		ep0_load_test_packet(mtu);
 
+	/* send status before entering test mode. */
+	value = mtu3_readl(mbase, U3D_EP0CSR) & EP0_W1C_BITS;
+	mtu3_writel(mbase, U3D_EP0CSR, value | EP0_SETUPPKTRDY | EP0_DATAEND);
+
+	/* wait for ACK status sent by host */
+	readl_poll_timeout(mbase + U3D_EP0CSR, value,
+			   !(value & EP0_DATAEND), 100, 5000);
+
 	mtu3_writel(mbase, U3D_USB2_TEST_MODE, mtu->test_mode_nr);
 
 	mtu->ep0_state = MU3D_EP0_STATE_SETUP;
@@ -546,7 +556,7 @@ static void ep0_tx_state(struct mtu3 *mtu)
 	struct usb_request *req;
 	u32 csr;
 	u8 *src;
-	u8 count;
+	u32 count;
 	u32 maxp;
 
 	dev_dbg(mtu->dev, "%s\n", __func__);


@ -1,24 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Platform data definitions for Atmel USBA gadget driver.
*/
#ifndef __LINUX_USB_USBA_H
#define __LINUX_USB_USBA_H
struct usba_ep_data {
char *name;
int index;
int fifo_size;
int nr_banks;
int can_dma;
int can_isoc;
};
struct usba_platform_data {
int vbus_pin;
int vbus_pin_inverted;
int num_ep;
struct usba_ep_data ep[0];
};
#endif /* __LINUX_USB_USBA_H */


@ -763,7 +763,7 @@ struct usb_gadget_string_container {
}; };
/* put descriptor for string with that id into buf (buflen >= 256) */ /* put descriptor for string with that id into buf (buflen >= 256) */
int usb_gadget_get_string(struct usb_gadget_strings *table, int id, u8 *buf); int usb_gadget_get_string(const struct usb_gadget_strings *table, int id, u8 *buf);
/*-------------------------------------------------------------------------*/ /*-------------------------------------------------------------------------*/