In order to make the DW AHB DMA slave configuration method - dwc_config() -
more coherent, let's simplify the source and destination channel max-burst
calculation procedure:
1. Create a max-burst verification method, just as has been done for
the memory and peripheral address widths. Thus the dwc_config() method
turns into a sequence of verification method calls.
2. Since both the generic DW AHB DMA and Intel iDMA 32-bit engines support
power-of-2 bursts only, the max-burst values specified by the client
driver can be rounded to a power of 2 right in the max-burst
verification method.
3. Since the encoded max-burst value is only required at the CTL_LO fields
calculation stage, the encode_maxburst() callback can be easily dropped
from the dw_dma structure, while the encoding procedure is performed
right in the CTL_LO register value calculation.
Thus the update provides the following benefits: the internal DMA-slave
config structure contains only the real DMA-transfer config values, which
are encoded into the DMA-controller register fields only when required, at
the buffer-mapping stage; the redundant encode_maxburst() callback is
dropped, simplifying the internal HW-abstraction API; dwc_config() becomes
more readable, executing the verification functions one by one.
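For illustration only, a minimal sketch of what such a max-burst
verification helper could look like; the helper name and the upper bound
below are placeholders, not the actual patch (the real limit comes from the
controller configuration), and it assumes <linux/log2.h>, <linux/minmax.h>
and the driver's internal regs.h:

  /* Placeholder upper bound, for illustration only */
  #define DWC_MAX_BURST_SKETCH	256U

  static int dwc_verify_maxburst(struct dma_chan *chan)
  {
  	struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
  	u32 smax = dwc->dma_sconfig.src_maxburst;
  	u32 dmax = dwc->dma_sconfig.dst_maxburst;

  	/* Keep only real, directly usable burst lengths in the slave config */
  	smax = clamp(smax, 1U, DWC_MAX_BURST_SKETCH);
  	dmax = clamp(dmax, 1U, DWC_MAX_BURST_SKETCH);

  	/* Both engines support power-of-2 bursts only: round down */
  	dwc->dma_sconfig.src_maxburst = rounddown_pow_of_two(smax);
  	dwc->dma_sconfig.dst_maxburst = rounddown_pow_of_two(dmax);

  	return 0;
  }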
Signed-off-by: Serge Semin <fancer.lancer@gmail.com>
Acked-by: Andy Shevchenko <andy@kernel.org>
Link: https://lore.kernel.org/r/20240802075100.6475-6-fancer.lancer@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
As a preparatory change before dropping the encode_maxburst() callbacks,
let's move dw_dma_encode_maxburst() and idma32_encode_maxburst() so that
they are defined above the dw_dma_prepare_ctllo() and idma32_prepare_ctllo()
methods respectively. That's required since the former methods will be
called from the latter ones directly.
Signed-off-by: Serge Semin <fancer.lancer@gmail.com>
Acked-by: Andy Shevchenko <andy@kernel.org>
Link: https://lore.kernel.org/r/20240802075100.6475-5-fancer.lancer@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Currently the CTL LO fields are calculated on a platform-specific basis.
It's implemented by means of the prepare_ctllo() callbacks, which use the
ternary operator within the local-variables init block at the beginning of
the function scope. The resulting code is relatively hard to comprehend and
isn't optimal, since it implies four conditional statements being executed
and two additional local variables being defined. Let's simplify the DW AHB
DMA prepare_ctllo() method by unrolling the ternary operators into a normal
if-else statement, dropping the redundant master-interface ID variables and
initializing the local variables based on a single DMA-transfer direction
check. Thus the method will look much more readable, since the fields'
content can now be easily inferred right from the respective if-else branch.
Provide the same update in the Intel iDMA 32-bit core driver for the sake of
driver code unification.
Note that besides the effects described above, this update is basically a
preparation for dropping the max-burst encoding callback. That will require
calling the burst-fields calculation methods right in the prepare_ctllo()
callbacks, which would have made those functions even more complex had they
been left in their original state.
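A minimal sketch of the intended if-else shape, assuming the CTL_LO field
macros from the driver's regs.h; the master IDs are hard-coded here and the
maxburst encoding step is glossed over, so treat this as an illustration
rather than the exact in-tree code:

  /* Assumes <linux/dmaengine.h> and the driver's internal regs.h */
  static u32 dw_dma_prepare_ctllo(struct dw_dma_chan *dwc)
  {
  	struct dma_slave_config *sconfig = &dwc->dma_sconfig;
  	u8 smsize = 0, dmsize = 0;
  	u8 sms, dms;

  	/* A single direction check initializes all direction-dependent fields */
  	if (dwc->direction == DMA_MEM_TO_DEV) {
  		sms = 0;	/* memory master */
  		dms = 1;	/* peripheral master */
  		dmsize = sconfig->dst_maxburst;
  	} else if (dwc->direction == DMA_DEV_TO_MEM) {
  		sms = 1;	/* peripheral master */
  		dms = 0;	/* memory master */
  		smsize = sconfig->src_maxburst;
  	} else {
  		sms = 0;	/* mem-to-mem: memory master on both sides */
  		dms = 0;
  	}

  	return DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN |
  	       DWC_CTLL_DST_MSIZE(dmsize) | DWC_CTLL_SRC_MSIZE(smsize) |
  	       DWC_CTLL_DMS(dms) | DWC_CTLL_SMS(sms);
  }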
Signed-off-by: Serge Semin <fancer.lancer@gmail.com>
Acked-by: Andy Shevchenko <andy@kernel.org>
Link: https://lore.kernel.org/r/20240802075100.6475-4-fancer.lancer@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
According to the DW DMA controller Databook 2.18b (page 40 "3.5 Memory
Peripherals"), memory peripherals don't have a handshaking interface
connected to the controller, and therefore they can never be a flow
controller. Since CTLx.SRC_MSIZE and CTLx.DEST_MSIZE are properties valid
only for peripherals with a handshaking interface, we can freely zero these
fields out if a memory peripheral is selected as the source or the
destination of the DMA transfers.
Note that, according to the databook, the length of burst transfers to
memory is always equal to the number of data items available in the channel
FIFO or the number of data items required to complete the block transfer,
whichever is smaller; the length of burst transfers from memory is always
equal to the space available in the channel FIFO or the number of data
items required to complete the block transfer, whichever is smaller.
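A hedged sketch of the idea; the helper name dwc_burst_fields() is made up
for illustration, while the DWC_CTLL_*_MSIZE macros are the driver's own,
and the maxburst values are shown un-encoded for brevity:

  /* Assumes <linux/dmaengine.h> and the driver's internal regs.h */
  static u32 dwc_burst_fields(struct dma_slave_config *sconfig,
  			    enum dma_transfer_direction direction)
  {
  	/*
  	 * The memory side of a slave transfer has no handshaking interface,
  	 * so its CTLx.*_MSIZE field carries no meaning and stays zero; only
  	 * the peripheral side gets a real burst length.
  	 */
  	switch (direction) {
  	case DMA_MEM_TO_DEV:
  		return DWC_CTLL_DST_MSIZE(sconfig->dst_maxburst);
  	case DMA_DEV_TO_MEM:
  		return DWC_CTLL_SRC_MSIZE(sconfig->src_maxburst);
  	default:
  		return 0;
  	}
  }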
Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Link: https://lore.kernel.org/r/20200731200826.9292-5-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The CFGx.FIFO_MODE field controls the DMA controller's "FIFO readiness"
criterion. In other words, it determines when to start pushing data out of
a DW DMAC channel FIFO to a destination peripheral or from a source
peripheral into the DW DMAC channel FIFO. Currently the FIFO mode is set to
one for all DW DMAC channels. It means they are tuned to flush data out of
the FIFO (to a memory peripheral or by accepting the burst transaction
requests) when the FIFO is at least half-full (except at the end of the
block transfer, when the FIFO-flush mode is activated) and are configured
to fetch data into the FIFO when it's at least half-empty.
Such a configuration is a good choice when there is no slave device involved
in the DMA transfers. In that case the number of bursts per block is smaller
than with CFGx.FIFO_MODE = 0 and, hence, the bus utilization improves.
But the latency of DMA transfers may increase when CFGx.FIFO_MODE = 1,
since the DW DMAC will wait for the channel FIFO contents to be either
half-full or half-empty depending on whether it is the destination or the
source transfer. Such latencies might be dangerous if the DMA transfers
are expected to be performed from/to a slave device. Since peripheral
devices normally keep data in internal FIFOs, any latency at some
critical moment may cause one to overflow and consequently lose
data. This especially concerns the case when either the peripheral device
is relatively fast or the DW DMAC engine is relatively slow with respect to
the incoming data rate.
In order to solve the problems which might be caused by the latencies
described above, let's enable the FIFO half-full/half-empty "FIFO
readiness" criterion only for DMA transfers with no slave device involved.
Thanks to commit 99ba8b9b0d ("dmaengine: dw: Initialize channel
before each transfer") we can freely do that in the generic
dw_dma_initialize_chan() method.
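A simplified sketch of how that could look in dw_dma_initialize_chan(); the
DWC_CFGH_FIFO_MODE and DWC_CFGL_CH_PRIOR() definitions are the driver's own,
the handshaking-interface setup is elided, and the exact in-tree code may
differ:

  static void dw_dma_initialize_chan(struct dw_dma_chan *dwc)
  {
  	/*
  	 * Enable the half-full/half-empty FIFO readiness criterion
  	 * (CFGx.FIFO_MODE = 1) only when no slave device is involved,
  	 * i.e. for mem-to-mem transfers; slave transfers keep
  	 * FIFO_MODE = 0 to minimize the latency towards the peripheral.
  	 */
  	u32 cfghi = is_slave_direction(dwc->direction) ? 0 : DWC_CFGH_FIFO_MODE;
  	u32 cfglo = DWC_CFGL_CH_PRIOR(dwc->priority);

  	/* ... per-direction handshaking-interface setup elided ... */

  	channel_writel(dwc, CFG_LO, cfglo);
  	channel_writel(dwc, CFG_HI, cfghi);
  }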
Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200731200826.9292-3-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Intel iDMA 32-bit doesn't have a concept of bus masters and thus there is
no need to set up any kind of masters in the CTL_LO register.
Moreover, the burst size for memory-to-memory transfers is not what it
says; we need a corrected list of possible sizes. Note that the size of
8 items, each of up to 4 bytes, is chosen because of the maximum of 1/2
FIFO, which is 64 bytes on Intel Merrifield.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
For Intel iDMA 32-bit the channel can be drained on suspend.
We need to reset the bit on resume to restore the status quo.
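A hedged sketch of the resume path; the CFG_LO bit names follow the
driver's regs.h, but treat this as an illustration rather than the exact
in-tree code:

  static void idma32_resume_chan(struct dw_dma_chan *dwc, bool drain)
  {
  	u32 cfglo = channel_readl(dwc, CFG_LO);

  	/* Drop the drain request set on suspend along with the suspend bit */
  	if (drain)
  		cfglo &= ~IDMA32C_CFGL_CH_DRAIN;

  	channel_writel(dwc, CFG_LO, cfglo & ~DWC_CFGL_CH_SUSP);
  }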
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Here is a rather big refactoring that should have been done
in the first place, when Intel iDMA 32-bit support appeared.
It splits out the operations which differ between the Synopsys DesignWare
and Intel iDMA 32-bit controllers.
No functional change intended.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>