DMA mapping updates for 5.1

- add debugfs support for dumping dma-debug information (Corentin Labbe)
- Kconfig cleanups (Andy Shevchenko and me)
- debugfs cleanups (Greg Kroah-Hartman)
- improve dma_map_resource and use it in the media code
- arch_setup_dma_ops / arch_teardown_dma_ops cleanups
- various small cleanups and improvements for the per-device coherent
  allocator
- make the DMA mask an upper bound and don't fail "too large" DMA masks
  in the remaining two architectures - this will allow big driver
  cleanups in the following merge windows
 -----BEGIN PGP SIGNATURE-----
 
 iQI/BAABCgApFiEEgdbnc3r/njty3Iq9D55TZVIEUYMFAlyCKUgLHGhjaEBsc3Qu
 ZGUACgkQD55TZVIEUYP1vA//WNK5cxQVGZZsmsmkcNe3sCaJCZD4MpVpq/D+l87t
 3j1C1qmduOPyI1m061niYk7j4B4DeyeLs+XOeUsl5Yz+FqVvDICuNHXXJQSUr3Ao
 JbMfBis8Ne65Eyz0xxBltCWM7WiE6fdo7AGoR4Bzj3+f4xGOOazkRy4R6r67bU6x
 v3R5dTvfbSlvvKhn+j8ksAEYb+WPUmr6Z2dnlF0mShnOCpZVy0wd0M1gtEFKrVHx
 zKz9/va4/7yEcpdVqNtSDlHIsSZcFE3ZfTRWq6ZtBoRN+gNwrI0YylY7HtCfJWZG
 IxMiuQ+8SHGE8+NI2d56bs4MsHbqPBRSuadJNuZaTzdxs6FDTEnlCDeXwGF1cHf2
 qhVMfn17V4TZNT4NAd2wHa60cjTMoqraWeS06/b2tyXTF0uxyWj0BCjaHNJa+Ayc
 KCulq1n2LmTDiOGnZJT7Oui6PO5etOHAmvgMQumBNkzQJbPGvuiYGgsciYAMSmuy
 NccIrghQzR9BlG6U1srzTiGQJnpm38x1hWphtU6gQPwz5iKt3FBAfEWCic8U81QE
 JKSwoYv/5ChO+sy9880t/FLO8hn/7L55IOdZEfGkQ22gFzf3W5f9v2jFQc8XN2BO
 Fc6EjWERrmTzUi0f1Ooj3VPRtWuZq86KqlKByy6iZ5eXwxpGE1M0HZVoHYCW+aDd
 MYc=
 =nAMI
 -----END PGP SIGNATURE-----

Merge tag 'dma-mapping-5.1' of git://git.infradead.org/users/hch/dma-mapping

Pull DMA mapping updates from Christoph Hellwig:

 - add debugfs support for dumping dma-debug information (Corentin
   Labbe)

 - Kconfig cleanups (Andy Shevchenko and me)

 - debugfs cleanups (Greg Kroah-Hartman)

 - improve dma_map_resource and use it in the media code

 - arch_setup_dma_ops / arch_teardown_dma_ops cleanups

 - various small cleanups and improvements for the per-device coherent
   allocator

 - make the DMA mask an upper bound and don't fail "too large" DMA masks
   in the remaining two architectures - this will allow big driver
   cleanups in the following merge windows (a sketch of the resulting
   driver-side pattern follows below)
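
A minimal sketch of what this last item means for a driver, assuming a
hypothetical PCI device (the "mydev" names are illustrative and not taken
from the series): with the mask treated as an upper bound, a 64-bit capable
driver can simply ask for the widest mask it supports and no longer needs a
32-bit fallback path:

    /* hypothetical probe fragment, not part of the series */
    static int mydev_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        int ret;

        /* request the widest mask the hardware can drive; the core now
         * caps it to what the platform/IOMMU can actually address */
        ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
        if (ret) {
            dev_warn(&pdev->dev, "mydev: no suitable DMA available\n");
            return ret;
        }

        /* ... rest of probe ... */
        return 0;
    }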

* tag 'dma-mapping-5.1' of git://git.infradead.org/users/hch/dma-mapping: (21 commits)
  Documentation/DMA-API-HOWTO: update dma_mask sections
  sparc64/pci_sun4v: allow large DMA masks
  sparc64/iommu: allow large DMA masks
  sparc64: refactor the ali DMA quirk
  ccio: allow large DMA masks
  dma-mapping: remove the DMA_MEMORY_EXCLUSIVE flag
  dma-mapping: remove dma_mark_declared_memory_occupied
  dma-mapping: move CONFIG_DMA_CMA to kernel/dma/Kconfig
  dma-mapping: improve selection of dma_declare_coherent availability
  dma-mapping: remove an incorrect __iommem annotation
  of: select OF_RESERVED_MEM automatically
  device.h: dma_mem is only needed for HAVE_GENERIC_DMA_COHERENT
  mfd/sm501: depend on HAS_DMA
  dma-mapping: add a kconfig symbol for arch_teardown_dma_ops availability
  dma-mapping: add a kconfig symbol for arch_setup_dma_ops availability
  dma-mapping: move debug configuration options to kernel/dma
  dma-debug: add dumping facility via debugfs
  dma: debug: no need to check return value of debugfs_create functions
  videobuf2: replace a layering violation with dma_map_resource
  dma-mapping: don't BUG when calling dma_map_resource on RAM
  ...
commit b7a7d1c1ec (Linus Torvalds, 2019-03-10 11:54:48 -07:00)
47 changed files with 341 additions and 542 deletions


@@ -146,114 +146,75 @@ What about block I/O and networking buffers?  The block I/O and
 networking subsystems make sure that the buffers they use are valid
 for you to DMA from/to.
 
-DMA addressing limitations
-==========================
+DMA addressing capabilities
+===========================
 
-Does your device have any DMA addressing limitations?  For example, is
-your device only capable of driving the low order 24-bits of address?
-If so, you need to inform the kernel of this fact.
-
-By default, the kernel assumes that your device can address the full
-32-bits.  For a 64-bit capable device, this needs to be increased.
-And for a device with limitations, as discussed in the previous
-paragraph, it needs to be decreased.
+By default, the kernel assumes that your device can address 32-bits of DMA
+addressing.  For a 64-bit capable device, this needs to be increased, and for
+a device with limitations, it needs to be decreased.
 
-Special note about PCI: PCI-X specification requires PCI-X devices to
-support 64-bit addressing (DAC) for all transactions.  And at least
-one platform (SGI SN2) requires 64-bit consistent allocations to
-operate correctly when the IO bus is in PCI-X mode.
+Special note about PCI: PCI-X specification requires PCI-X devices to support
+64-bit addressing (DAC) for all transactions.  And at least one platform (SGI
+SN2) requires 64-bit consistent allocations to operate correctly when the IO
+bus is in PCI-X mode.
 
-For correct operation, you must interrogate the kernel in your device
-probe routine to see if the DMA controller on the machine can properly
-support the DMA addressing limitation your device has.  It is good
-style to do this even if your device holds the default setting,
-because this shows that you did think about these issues wrt. your
-device.
+For correct operation, you must set the DMA mask to inform the kernel about
+your devices DMA addressing capabilities.
 
-The query is performed via a call to dma_set_mask_and_coherent()::
+This is performed via a call to dma_set_mask_and_coherent()::
 
 	int dma_set_mask_and_coherent(struct device *dev, u64 mask);
 
-which will query the mask for both streaming and coherent APIs together.
-If you have some special requirements, then the following two separate
-queries can be used instead:
+which will set the mask for both streaming and coherent APIs together.  If you
+have some special requirements, then the following two separate calls can be
+used instead:
 
-	The query for streaming mappings is performed via a call to
+	The setup for streaming mappings is performed via a call to
 	dma_set_mask()::
 
 		int dma_set_mask(struct device *dev, u64 mask);
 
-	The query for consistent allocations is performed via a call
+	The setup for consistent allocations is performed via a call
 	to dma_set_coherent_mask()::
 
 		int dma_set_coherent_mask(struct device *dev, u64 mask);
 
-Here, dev is a pointer to the device struct of your device, and mask
-is a bit mask describing which bits of an address your device
-supports.  It returns zero if your card can perform DMA properly on
-the machine given the address mask you provided.  In general, the
-device struct of your device is embedded in the bus-specific device
-struct of your device.  For example, &pdev->dev is a pointer to the
-device struct of a PCI device (pdev is a pointer to the PCI device
-struct of your device).
+Here, dev is a pointer to the device struct of your device, and mask is a bit
+mask describing which bits of an address your device supports.  Often the
+device struct of your device is embedded in the bus-specific device struct of
+your device.  For example, &pdev->dev is a pointer to the device struct of a
+PCI device (pdev is a pointer to the PCI device struct of your device).
 
-If it returns non-zero, your device cannot perform DMA properly on
-this platform, and attempting to do so will result in undefined
-behavior.  You must either use a different mask, or not use DMA.
+These calls usually return zero to indicated your device can perform DMA
+properly on the machine given the address mask you provided, but they might
+return an error if the mask is too small to be supportable on the given
+system.  If it returns non-zero, your device cannot perform DMA properly on
+this platform, and attempting to do so will result in undefined behavior.
+You must not use DMA on this device unless the dma_set_mask family of
+functions has returned success.
 
-This means that in the failure case, you have three options:
+This means that in the failure case, you have two options:
 
-1) Use another DMA mask, if possible (see below).
-2) Use some non-DMA mode for data transfer, if possible.
-3) Ignore this device and do not initialize it.
+1) Use some non-DMA mode for data transfer, if possible.
+2) Ignore this device and do not initialize it.
 
-It is recommended that your driver print a kernel KERN_WARNING message
-when you end up performing either #2 or #3.  In this manner, if a user
-of your driver reports that performance is bad or that the device is not
-even detected, you can ask them for the kernel messages to find out
-exactly why.
+It is recommended that your driver print a kernel KERN_WARNING message when
+setting the DMA mask fails.  In this manner, if a user of your driver reports
+that performance is bad or that the device is not even detected, you can ask
+them for the kernel messages to find out exactly why.
 
-The standard 32-bit addressing device would do something like this::
+The standard 64-bit addressing device would do something like this::
 
-	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
+	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
 		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
 
-Another common scenario is a 64-bit capable device.  The approach here
-is to try for 64-bit addressing, but back down to a 32-bit mask that
-should not fail.  The kernel may fail the 64-bit mask not because the
-platform is not capable of 64-bit addressing.  Rather, it may fail in
-this case simply because 32-bit addressing is done more efficiently
-than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
-more efficient than DAC addressing.
-
-Here is how you would handle a 64-bit capable device which can drive
-all 64-bits when accessing streaming DMA::
-
-	int using_dac;
-
-	if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
-		using_dac = 1;
-	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
-		using_dac = 0;
-	} else {
-		dev_warn(dev, "mydev: No suitable DMA available\n");
-		goto ignore_this_device;
-	}
-
-If a card is capable of using 64-bit consistent allocations as well,
-the case would look like this::
-
-	int using_dac, consistent_using_dac;
-
-	if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
-		using_dac = 1;
-		consistent_using_dac = 1;
-	} else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
-		using_dac = 0;
-		consistent_using_dac = 0;
-	} else {
+If the device only supports 32-bit addressing for descriptors in the
+coherent allocations, but supports full 64-bits for streaming mappings
+it would look like this:
+
+	if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
 		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
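
For completeness, here is one way the split case described in the new text
(64-bit streaming, 32-bit coherent) could be written out in full; this is an
editorial sketch, not text from the patch:

	if (dma_set_mask(dev, DMA_BIT_MASK(64)) ||
	    dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}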


@@ -566,8 +566,7 @@ boundaries when doing this.
 	int
 	dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-				    dma_addr_t device_addr, size_t size, int
-				    flags)
+				    dma_addr_t device_addr, size_t size)
 
 Declare region of memory to be handed out by dma_alloc_coherent() when
 it's asked for coherent memory for this device.

@@ -581,12 +580,6 @@ dma_addr_t in dma_alloc_coherent()).
 
 size is the size of the area (must be multiples of PAGE_SIZE).
 
-flags can be ORed together and are:
-
-DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
-	       Do not allow dma_alloc_coherent() to fall back to system memory when
-	       it's out of memory in the declared region.
-
 As a simplification for the platforms, only *one* such region of
 memory may be declared per device.

@@ -605,23 +598,6 @@ unconditionally having removed all the required structures.  It is the
 driver's job to ensure that no parts of this memory region are
 currently in use.
 
-::
-
-	void *
-	dma_mark_declared_memory_occupied(struct device *dev,
-					  dma_addr_t device_addr, size_t size)
-
-This is used to occupy specific regions of the declared space
-(dma_alloc_coherent() will hand out the first free region it finds).
-
-device_addr is the *device* address of the region requested.
-
-size is the size (and should be a page-sized multiple).
-
-The return value will be either a pointer to the processor virtual
-address of the memory, or an error (via PTR_ERR()) if any part of the
-region is occupied.
-
 Part III - Debug drivers use of the DMA-API
 -------------------------------------------
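
As a usage illustration of the new three-argument prototype (an editorial
sketch with made-up names, not part of the patch), a platform driver would
now declare a per-device coherent region like this:

	/* phys_base and MYDEV_COHERENT_SIZE are hypothetical */
	ret = dma_declare_coherent_memory(&pdev->dev, phys_base, phys_base,
					  MYDEV_COHERENT_SIZE);
	if (ret)
		dev_err(&pdev->dev, "cannot declare coherent memory\n");
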
@@ -696,6 +672,9 @@ dma-api/disabled	This read-only file contains the character 'Y'
 			happen when it runs out of memory or if it was
 			disabled at boot time
 
+dma-api/dump		This read-only file contains current DMA
+			mappings.
+
 dma-api/error_count	This file is read-only and shows the total
 			numbers of errors found.


@@ -11,6 +11,7 @@ config ARC
 	select ARC_TIMERS
 	select ARCH_HAS_DMA_COHERENT_TO_PFN
 	select ARCH_HAS_PTE_SPECIAL
+	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC

@@ -31,7 +32,6 @@ config ARC
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_DEBUG_STACKOVERFLOW
 	select HAVE_FUTEX_CMPXCHG if FUTEX
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_IOREMAP_PROT
 	select HAVE_KERNEL_GZIP
 	select HAVE_KERNEL_LZMA

@@ -45,7 +45,6 @@ config ARC
 	select MODULES_USE_ELF_RELA
 	select OF
 	select OF_EARLY_FLATTREE
-	select OF_RESERVED_MEM
 	select PCI_SYSCALL if PCI
 	select PERF_USE_VMALLOC if ARC_CACHE_VIPT_ALIASING


@@ -3,6 +3,7 @@ generic-y += bugs.h
 generic-y += compat.h
 generic-y += device.h
 generic-y += div64.h
+generic-y += dma-mapping.h
 generic-y += emergency-restart.h
 generic-y += extable.h
 generic-y += ftrace.h


@@ -1,13 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-// (C) 2018 Synopsys, Inc. (www.synopsys.com)
-
-#ifndef ASM_ARC_DMA_MAPPING_H
-#define ASM_ARC_DMA_MAPPING_H
-
-#include <asm-generic/dma-mapping.h>
-
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
-			const struct iommu_ops *iommu, bool coherent);
-#define arch_setup_dma_ops arch_setup_dma_ops
-
-#endif


@@ -13,9 +13,11 @@ config ARM
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_PTE_SPECIAL if ARM_LPAE
 	select ARCH_HAS_PHYS_TO_DMA
+	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL
 	select ARCH_HAS_STRICT_MODULE_RWX if MMU
+	select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAVE_CUSTOM_GPIO_H
 	select ARCH_HAS_GCOV_PROFILE_ALL

@@ -31,6 +33,7 @@ config ARM
 	select CLONE_BACKWARDS
 	select CPU_PM if SUSPEND || CPU_IDLE
 	select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
+	select DMA_DECLARE_COHERENT
 	select DMA_REMAP if MMU
 	select EDAC_SUPPORT
 	select EDAC_ATOMIC_SCRUB

@@ -73,7 +76,6 @@ config ARM
 	select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL
 	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
 	select HAVE_GCC_PLUGINS
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
 	select HAVE_IDE if PCI || ISA || PCMCIA
 	select HAVE_IRQ_TIME_ACCOUNTING

@@ -102,7 +104,6 @@ config ARM
 	select MODULES_USE_ELF_REL
 	select NEED_DMA_MAP_STATE
 	select OF_EARLY_FLATTREE if OF
-	select OF_RESERVED_MEM if OF
 	select OLD_SIGACTION
 	select OLD_SIGSUSPEND3
 	select PCI_SYSCALL if PCI


@@ -96,15 +96,6 @@ static inline unsigned long dma_max_pfn(struct device *dev)
 }
 #define dma_max_pfn(dev) dma_max_pfn(dev)
 
-#define arch_setup_dma_ops arch_setup_dma_ops
-extern void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
-			       const struct iommu_ops *iommu, bool coherent);
-
-#ifdef CONFIG_MMU
-#define arch_teardown_dma_ops arch_teardown_dma_ops
-extern void arch_teardown_dma_ops(struct device *dev);
-#endif
-
 /* do not use this function in a driver */
 static inline bool is_device_dma_coherent(struct device *dev)
 {


@@ -258,8 +258,7 @@ static void __init visstrim_analog_camera_init(void)
 		return;
 
 	dma_declare_coherent_memory(&pdev->dev, mx2_camera_base,
-			mx2_camera_base, MX2_CAMERA_BUF_SIZE,
-			DMA_MEMORY_EXCLUSIVE);
+			mx2_camera_base, MX2_CAMERA_BUF_SIZE);
 }
 
 static void __init visstrim_reserve(void)

@@ -445,8 +444,7 @@ static void __init visstrim_coda_init(void)
 	dma_declare_coherent_memory(&pdev->dev,
 				    mx2_camera_base + MX2_CAMERA_BUF_SIZE,
 				    mx2_camera_base + MX2_CAMERA_BUF_SIZE,
-				    MX2_CAMERA_BUF_SIZE,
-				    DMA_MEMORY_EXCLUSIVE);
+				    MX2_CAMERA_BUF_SIZE);
 }
 
 /* DMA deinterlace */

@@ -465,8 +463,7 @@ static void __init visstrim_deinterlace_init(void)
 	dma_declare_coherent_memory(&pdev->dev,
 				    mx2_camera_base + 2 * MX2_CAMERA_BUF_SIZE,
 				    mx2_camera_base + 2 * MX2_CAMERA_BUF_SIZE,
-				    MX2_CAMERA_BUF_SIZE,
-				    DMA_MEMORY_EXCLUSIVE);
+				    MX2_CAMERA_BUF_SIZE);
 }
 
 /* Emma-PrP for format conversion */

@@ -485,8 +482,7 @@
 	 */
 	ret = dma_declare_coherent_memory(&pdev->dev,
 					  mx2_camera_base, mx2_camera_base,
-					  MX2_CAMERA_BUF_SIZE,
-					  DMA_MEMORY_EXCLUSIVE);
+					  MX2_CAMERA_BUF_SIZE);
 	if (ret)
 		pr_err("Failed to declare memory for emmaprp\n");
 }


@@ -475,8 +475,7 @@ static int __init mx31moboard_init_cam(void)
 	ret = dma_declare_coherent_memory(&pdev->dev,
 					  mx3_camera_base, mx3_camera_base,
-					  MX3_CAMERA_BUF_SIZE,
-					  DMA_MEMORY_EXCLUSIVE);
+					  MX3_CAMERA_BUF_SIZE);
 	if (ret)
 		goto err;


@@ -188,6 +188,7 @@ const struct dma_map_ops arm_dma_ops = {
 	.unmap_page		= arm_dma_unmap_page,
 	.map_sg			= arm_dma_map_sg,
 	.unmap_sg		= arm_dma_unmap_sg,
+	.map_resource		= dma_direct_map_resource,
 	.sync_single_for_cpu	= arm_dma_sync_single_for_cpu,
 	.sync_single_for_device	= arm_dma_sync_single_for_device,
 	.sync_sg_for_cpu	= arm_dma_sync_sg_for_cpu,

@@ -211,6 +212,7 @@ const struct dma_map_ops arm_coherent_dma_ops = {
 	.get_sgtable		= arm_dma_get_sgtable,
 	.map_page		= arm_coherent_dma_map_page,
 	.map_sg			= arm_dma_map_sg,
+	.map_resource		= dma_direct_map_resource,
 	.dma_supported		= arm_dma_supported,
 };
 EXPORT_SYMBOL(arm_coherent_dma_ops);


@@ -22,12 +22,14 @@ config ARM64
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_PTE_SPECIAL
+	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_STRICT_MODULE_RWX
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYSCALL_WRAPPER
+	select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_INLINE_READ_LOCK if !PREEMPT

@@ -137,7 +139,6 @@ config ARM64
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_GCC_PLUGINS
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
 	select HAVE_IRQ_TIME_ACCOUNTING
 	select HAVE_MEMBLOCK_NODE_MAP if NUMA

@@ -163,7 +164,6 @@ config ARM64
 	select NEED_SG_DMA_LENGTH
 	select OF
 	select OF_EARLY_FLATTREE
-	select OF_RESERVED_MEM
 	select PCI_DOMAINS_GENERIC if PCI
 	select PCI_ECAM if (ACPI && PCI)
 	select PCI_SYSCALL if PCI


@@ -29,15 +29,6 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 	return NULL;
 }
 
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
-			const struct iommu_ops *iommu, bool coherent);
-#define arch_setup_dma_ops	arch_setup_dma_ops
-
-#ifdef CONFIG_IOMMU_DMA
-void arch_teardown_dma_ops(struct device *dev);
-#define arch_teardown_dma_ops	arch_teardown_dma_ops
-#endif
-
 /*
  * Do not use this function in a driver, it is only provided for
  * arch/arm/mm/xen.c, which is used by arm64 as well.


@@ -31,7 +31,6 @@ config CSKY
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_KERNEL_GZIP
 	select HAVE_KERNEL_LZO
 	select HAVE_KERNEL_LZMA

@@ -43,7 +42,6 @@ config CSKY
 	select MODULES_USE_ELF_RELA if MODULES
 	select OF
 	select OF_EARLY_FLATTREE
-	select OF_RESERVED_MEM
 	select PERF_USE_VMALLOC if CPU_CK610
 	select RTC_LIB
 	select TIMER_OF


@@ -57,7 +57,6 @@ config MIPS
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_IDE
 	select HAVE_IOREMAP_PROT
 	select HAVE_IRQ_EXIT_ON_IRQ_STACK

@@ -1119,6 +1118,7 @@ config DMA_MAYBE_COHERENT
 
 config DMA_PERDEV_COHERENT
 	bool
+	select ARCH_HAS_SETUP_DMA_OPS
 	select DMA_NONCOHERENT
 
 config DMA_NONCOHERENT


@@ -15,14 +15,4 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 #endif
 }
 
-#define arch_setup_dma_ops arch_setup_dma_ops
-static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base,
-				      u64 size, const struct iommu_ops *iommu,
-				      bool coherent)
-{
-#ifdef CONFIG_DMA_PERDEV_COHERENT
-	dev->dma_coherent = coherent;
-#endif
-}
-
 #endif /* _ASM_DMA_MAPPING_H */


@@ -156,3 +156,11 @@ void arch_dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 	dma_sync_virt(vaddr, size, direction);
 }
+
+#ifdef CONFIG_DMA_PERDEV_COHERENT
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+		const struct iommu_ops *iommu, bool coherent)
+{
+	dev->dma_coherent = coherent;
+}
+#endif


@@ -232,7 +232,6 @@ config PPC
 	select NEED_SG_DMA_LENGTH
 	select OF
 	select OF_EARLY_FLATTREE
-	select OF_RESERVED_MEM
 	select OLD_SIGACTION			if PPC32
 	select OLD_SIGSUSPEND
 	select PCI_DOMAINS			if PCI


@@ -32,7 +32,6 @@ config RISCV
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_FUTEX_CMPXCHG if FUTEX
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_PERF_EVENTS
 	select HAVE_SYSCALL_TRACEPOINTS
 	select IRQ_DOMAIN


@@ -7,11 +7,11 @@ config SUPERH
 	select ARCH_NO_COHERENT_DMA_MMAP if !MMU
 	select HAVE_PATA_PLATFORM
 	select CLKDEV_LOOKUP
+	select DMA_DECLARE_COHERENT
 	select HAVE_IDE if HAS_IOPORT_MAP
 	select HAVE_MEMBLOCK_NODE_MAP
 	select ARCH_DISCARD_MEMBLOCK
 	select HAVE_OPROFILE
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_PERF_EVENTS
 	select HAVE_DEBUG_BUGVERBOSE


@@ -529,9 +529,8 @@ static int __init ap325rxa_devices_setup(void)
 	device_initialize(&ap325rxa_ceu_device.dev);
 	arch_setup_pdev_archdata(&ap325rxa_ceu_device);
 	dma_declare_coherent_memory(&ap325rxa_ceu_device.dev,
 			ceu_dma_membase, ceu_dma_membase,
-			ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1,
-			DMA_MEMORY_EXCLUSIVE);
+			ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1);
 
 	platform_device_add(&ap325rxa_ceu_device);


@@ -1438,8 +1438,7 @@ static int __init arch_setup(void)
 	dma_declare_coherent_memory(&ecovec_ceu_devices[0]->dev,
 				    ceu0_dma_membase, ceu0_dma_membase,
 				    ceu0_dma_membase +
-				    CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    CEU_BUFFER_MEMORY_SIZE - 1);
 	platform_device_add(ecovec_ceu_devices[0]);
 
 	device_initialize(&ecovec_ceu_devices[1]->dev);

@@ -1447,8 +1446,7 @@ static int __init arch_setup(void)
 	dma_declare_coherent_memory(&ecovec_ceu_devices[1]->dev,
 				    ceu1_dma_membase, ceu1_dma_membase,
 				    ceu1_dma_membase +
-				    CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    CEU_BUFFER_MEMORY_SIZE - 1);
 	platform_device_add(ecovec_ceu_devices[1]);
 
 	gpiod_add_lookup_table(&cn12_power_gpiod_table);


@@ -603,9 +603,8 @@ static int __init kfr2r09_devices_setup(void)
 	device_initialize(&kfr2r09_ceu_device.dev);
 	arch_setup_pdev_archdata(&kfr2r09_ceu_device);
 	dma_declare_coherent_memory(&kfr2r09_ceu_device.dev,
 			ceu_dma_membase, ceu_dma_membase,
-			ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1,
-			DMA_MEMORY_EXCLUSIVE);
+			ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1);
 
 	platform_device_add(&kfr2r09_ceu_device);


@@ -604,9 +604,8 @@ static int __init migor_devices_setup(void)
 	device_initialize(&migor_ceu_device.dev);
 	arch_setup_pdev_archdata(&migor_ceu_device);
 	dma_declare_coherent_memory(&migor_ceu_device.dev,
 			ceu_dma_membase, ceu_dma_membase,
-			ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1,
-			DMA_MEMORY_EXCLUSIVE);
+			ceu_dma_membase + CEU_BUFFER_MEMORY_SIZE - 1);
 
 	platform_device_add(&migor_ceu_device);


@@ -941,8 +941,7 @@ static int __init devices_setup(void)
 	dma_declare_coherent_memory(&ms7724se_ceu_devices[0]->dev,
 				    ceu0_dma_membase, ceu0_dma_membase,
 				    ceu0_dma_membase +
-				    CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    CEU_BUFFER_MEMORY_SIZE - 1);
 	platform_device_add(ms7724se_ceu_devices[0]);
 
 	device_initialize(&ms7724se_ceu_devices[1]->dev);

@@ -950,8 +949,7 @@ static int __init devices_setup(void)
 	dma_declare_coherent_memory(&ms7724se_ceu_devices[1]->dev,
 				    ceu1_dma_membase, ceu1_dma_membase,
 				    ceu1_dma_membase +
-				    CEU_BUFFER_MEMORY_SIZE - 1,
-				    DMA_MEMORY_EXCLUSIVE);
+				    CEU_BUFFER_MEMORY_SIZE - 1);
 	platform_device_add(ms7724se_ceu_devices[1]);
 
 	return platform_add_devices(ms7724se_devices,


@@ -63,8 +63,7 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
 		BUG_ON(dma_declare_coherent_memory(&dev->dev,
 						res.start,
 						region.start,
-						resource_size(&res),
-						DMA_MEMORY_EXCLUSIVE));
+						resource_size(&res)));
 		break;
 	default:
 		printk("PCI: Failed resource fixup\n");


@@ -745,15 +745,12 @@ static int dma_4u_supported(struct device *dev, u64 device_mask)
 {
 	struct iommu *iommu = dev->archdata.iommu;
 
-	if (device_mask > DMA_BIT_MASK(32))
-		return 0;
-	if ((device_mask & iommu->dma_addr_mask) == iommu->dma_addr_mask)
+	if (ali_sound_dma_hack(dev, device_mask))
 		return 1;
-#ifdef CONFIG_PCI
-	if (dev_is_pci(dev))
-		return pci64_dma_supported(to_pci_dev(dev), device_mask);
-#endif
-	return 0;
+
+	if (device_mask < iommu->dma_addr_mask)
+		return 0;
+	return 1;
 }
 
 static const struct dma_map_ops sun4u_dma_ops = {


@@ -45,7 +45,11 @@ void __irq_entry smp_receive_signal_client(int irq, struct pt_regs *regs);
 void __irq_entry smp_kgdb_capture_client(int irq, struct pt_regs *regs);
 
 /* pci.c */
-int pci64_dma_supported(struct pci_dev *pdev, u64 device_mask);
+#ifdef CONFIG_PCI
+int ali_sound_dma_hack(struct device *dev, u64 device_mask);
+#else
+#define ali_sound_dma_hack(dev, mask)	(0)
+#endif
 
 /* signal32.c */
 void do_sigreturn32(struct pt_regs *regs);


@@ -956,51 +956,35 @@ void arch_teardown_msi_irq(unsigned int irq)
 }
 #endif /* !(CONFIG_PCI_MSI) */
 
-static void ali_sound_dma_hack(struct pci_dev *pdev, int set_bit)
+/* ALI sound chips generate 31-bits of DMA, a special register
+ * determines what bit 31 is emitted as.
+ */
+int ali_sound_dma_hack(struct device *dev, u64 device_mask)
 {
+	struct iommu *iommu = dev->archdata.iommu;
 	struct pci_dev *ali_isa_bridge;
 	u8 val;
 
-	/* ALI sound chips generate 31-bits of DMA, a special register
-	 * determines what bit 31 is emitted as.
-	 */
+	if (!dev_is_pci(dev))
+		return 0;
+
+	if (to_pci_dev(dev)->vendor != PCI_VENDOR_ID_AL ||
+	    to_pci_dev(dev)->device != PCI_DEVICE_ID_AL_M5451 ||
+	    device_mask != 0x7fffffff)
+		return 0;
+
 	ali_isa_bridge = pci_get_device(PCI_VENDOR_ID_AL,
 					 PCI_DEVICE_ID_AL_M1533,
 					 NULL);
 
 	pci_read_config_byte(ali_isa_bridge, 0x7e, &val);
-	if (set_bit)
+	if (iommu->dma_addr_mask & 0x80000000)
 		val |= 0x01;
 	else
 		val &= ~0x01;
 	pci_write_config_byte(ali_isa_bridge, 0x7e, val);
 	pci_dev_put(ali_isa_bridge);
-}
-
-int pci64_dma_supported(struct pci_dev *pdev, u64 device_mask)
-{
-	u64 dma_addr_mask;
-
-	if (pdev == NULL) {
-		dma_addr_mask = 0xffffffff;
-	} else {
-		struct iommu *iommu = pdev->dev.archdata.iommu;
-
-		dma_addr_mask = iommu->dma_addr_mask;
-
-		if (pdev->vendor == PCI_VENDOR_ID_AL &&
-		    pdev->device == PCI_DEVICE_ID_AL_M5451 &&
-		    device_mask == 0x7fffffff) {
-			ali_sound_dma_hack(pdev,
-					   (dma_addr_mask & 0x80000000) != 0);
-			return 1;
-		}
-	}
-
-	if (device_mask >= (1UL << 32UL))
-		return 0;
-
-	return (device_mask & dma_addr_mask) == dma_addr_mask;
+	return 1;
 }
 
 void pci_resource_to_user(const struct pci_dev *pdev, int bar,


@@ -92,7 +92,7 @@ static long iommu_batch_flush(struct iommu_batch *p, u64 mask)
 	prot &= (HV_PCI_MAP_ATTR_READ | HV_PCI_MAP_ATTR_WRITE);
 
 	while (npages != 0) {
-		if (mask <= DMA_BIT_MASK(32)) {
+		if (mask <= DMA_BIT_MASK(32) || !pbm->iommu->atu) {
 			num = pci_sun4v_iommu_map(devhandle,
 						  HV_PCI_TSBID(0, entry),
 						  npages,

@@ -208,7 +208,7 @@ static void *dma_4v_alloc_coherent(struct device *dev, size_t size,
 	atu = iommu->atu;
 
 	mask = dev->coherent_dma_mask;
-	if (mask <= DMA_BIT_MASK(32))
+	if (mask <= DMA_BIT_MASK(32) || !atu)
 		tbl = &iommu->tbl;
 	else
 		tbl = &atu->tbl;

@@ -674,18 +674,12 @@ static void dma_4v_unmap_sg(struct device *dev, struct scatterlist *sglist,
 static int dma_4v_supported(struct device *dev, u64 device_mask)
 {
 	struct iommu *iommu = dev->archdata.iommu;
-	u64 dma_addr_mask = iommu->dma_addr_mask;
 
-	if (device_mask > DMA_BIT_MASK(32)) {
-		if (iommu->atu)
-			dma_addr_mask = iommu->atu->dma_addr_mask;
-		else
-			return 0;
-	}
-
-	if ((device_mask & dma_addr_mask) == dma_addr_mask)
+	if (ali_sound_dma_hack(dev, device_mask))
 		return 1;
-	return pci64_dma_supported(to_pci_dev(dev), device_mask);
+	if (device_mask < iommu->dma_addr_mask)
+		return 0;
+	return 1;
 }
 
 static const struct dma_map_ops sun4v_dma_ops = {


@@ -5,7 +5,6 @@ config UNICORE32
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
-	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_KERNEL_GZIP
 	select HAVE_KERNEL_BZIP2
 	select GENERIC_ATOMIC64


@@ -14,7 +14,6 @@ config X86_32
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select CLKSRC_I8253
 	select CLONE_BACKWARDS
-	select HAVE_GENERIC_DMA_COHERENT
 	select MODULES_USE_ELF_REL
 	select OLD_SIGACTION


@@ -450,7 +450,6 @@ config USE_OF
 	bool "Flattened Device Tree support"
 	select OF
 	select OF_EARLY_FLATTREE
-	select OF_RESERVED_MEM
 	help
 	  Include support for flattened device tree machine descriptions.


@@ -191,83 +191,6 @@ config DMA_FENCE_TRACE
 	  lockup related problems for dma-buffers shared across multiple
 	  devices.
 
-config DMA_CMA
-	bool "DMA Contiguous Memory Allocator"
-	depends on HAVE_DMA_CONTIGUOUS && CMA
-	help
-	  This enables the Contiguous Memory Allocator which allows drivers
-	  to allocate big physically-contiguous blocks of memory for use with
-	  hardware components that do not support I/O map nor scatter-gather.
-
-	  You can disable CMA by specifying "cma=0" on the kernel's command
-	  line.
-
-	  For more information see <include/linux/dma-contiguous.h>.
-	  If unsure, say "n".
-
-if DMA_CMA
-comment "Default contiguous memory area size:"
-
-config CMA_SIZE_MBYTES
-	int "Size in Mega Bytes"
-	depends on !CMA_SIZE_SEL_PERCENTAGE
-	default 0 if X86
-	default 16
-	help
-	  Defines the size (in MiB) of the default memory area for Contiguous
-	  Memory Allocator.  If the size of 0 is selected, CMA is disabled by
-	  default, but it can be enabled by passing cma=size[MG] to the kernel.
-
-config CMA_SIZE_PERCENTAGE
-	int "Percentage of total memory"
-	depends on !CMA_SIZE_SEL_MBYTES
-	default 0 if X86
-	default 10
-	help
-	  Defines the size of the default memory area for Contiguous Memory
-	  Allocator as a percentage of the total memory in the system.
-
-	  If 0 percent is selected, CMA is disabled by default, but it can be
-	  enabled by passing cma=size[MG] to the kernel.
-
-choice
-	prompt "Selected region size"
-	default CMA_SIZE_SEL_MBYTES
-
-config CMA_SIZE_SEL_MBYTES
-	bool "Use mega bytes value only"
-
-config CMA_SIZE_SEL_PERCENTAGE
-	bool "Use percentage value only"
-
-config CMA_SIZE_SEL_MIN
-	bool "Use lower value (minimum)"
-
-config CMA_SIZE_SEL_MAX
-	bool "Use higher value (maximum)"
-
-endchoice
-
-config CMA_ALIGNMENT
-	int "Maximum PAGE_SIZE order of alignment for contiguous buffers"
-	range 4 12
-	default 8
-	help
-	  DMA mapping framework by default aligns all buffers to the smallest
-	  PAGE_SIZE order which is greater than or equal to the requested buffer
-	  size. This works well for buffers up to a few hundreds kilobytes, but
-	  for larger buffers it just a memory waste. With this parameter you can
-	  specify the maximum PAGE_SIZE order for contiguous buffers. Larger
-	  buffers will be aligned only to this specified order. The order is
-	  expressed as a power of two multiplied by the PAGE_SIZE.
-
-	  For example, if your system defaults to 4KiB pages, the order value
-	  of 8 means that the buffers will be aligned up to 1MiB only.
-
-	  If unsure, leave the default value "8".
-
-endif
-
 config GENERIC_ARCH_TOPOLOGY
 	bool
 	help


@@ -439,42 +439,14 @@ static void vb2_dc_put_userptr(void *buf_priv)
 			set_page_dirty_lock(pages[i]);
 		sg_free_table(sgt);
 		kfree(sgt);
+	} else {
+		dma_unmap_resource(buf->dev, buf->dma_addr, buf->size,
+				   buf->dma_dir, 0);
 	}
 	vb2_destroy_framevec(buf->vec);
 	kfree(buf);
 }
 
-/*
- * For some kind of reserved memory there might be no struct page available,
- * so all that can be done to support such 'pages' is to try to convert
- * pfn to dma address or at the last resort just assume that
- * dma address == physical address (like it has been assumed in earlier version
- * of videobuf2-dma-contig
- */
-
-#ifdef __arch_pfn_to_dma
-static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
-{
-	return (dma_addr_t)__arch_pfn_to_dma(dev, pfn);
-}
-#elif defined(__pfn_to_bus)
-static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
-{
-	return (dma_addr_t)__pfn_to_bus(pfn);
-}
-#elif defined(__pfn_to_phys)
-static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
-{
-	return (dma_addr_t)__pfn_to_phys(pfn);
-}
-#else
-static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
-{
-	/* really, we cannot do anything better at this point */
-	return (dma_addr_t)(pfn) << PAGE_SHIFT;
-}
-#endif
-
 static void *vb2_dc_get_userptr(struct device *dev, unsigned long vaddr,
 	unsigned long size, enum dma_data_direction dma_dir)
 {

@@ -528,7 +500,12 @@ static void *vb2_dc_get_userptr(struct device *dev, unsigned long vaddr,
 		for (i = 1; i < n_pages; i++)
 			if (nums[i-1] + 1 != nums[i])
 				goto fail_pfnvec;
-		buf->dma_addr = vb2_dc_pfn_to_dma(buf->dev, nums[0]);
+		buf->dma_addr = dma_map_resource(buf->dev,
+				__pfn_to_phys(nums[0]), size, buf->dma_dir, 0);
+		if (dma_mapping_error(buf->dev, buf->dma_addr)) {
+			ret = -ENOMEM;
+			goto fail_pfnvec;
+		}
 		goto out;
 	}
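
The general driver-side pattern this change relies on, now that
dma_map_resource() reports failure through DMA_MAPPING_ERROR rather than
hitting a BUG, looks roughly like the following editorial sketch (phys_addr,
size and dev are hypothetical placeholders):

	dma_addr_t dma_addr;

	dma_addr = dma_map_resource(dev, phys_addr, size, DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dev, dma_addr))
		return -ENOMEM;	/* phys_addr is RAM or mapping unsupported */

	/* ... hand dma_addr to the hardware ... */

	dma_unmap_resource(dev, dma_addr, size, DMA_BIDIRECTIONAL, 0);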


@@ -1065,6 +1065,8 @@ config MFD_SI476X_CORE
 
 config MFD_SM501
 	tristate "Silicon Motion SM501"
+	depends on HAS_DMA
+	select DMA_DECLARE_COHERENT
 	---help---
 	  This is the core driver for the Silicon Motion SM501 multimedia
 	  companion chip. This device is a multifunction device which may

@@ -1674,6 +1676,7 @@ config MFD_TC6393XB
 	select GPIOLIB
 	select MFD_CORE
 	select MFD_TMIO
+	select DMA_DECLARE_COHERENT
 	help
 	  Support for Toshiba Mobile IO Controller TC6393XB


@@ -43,6 +43,7 @@ config OF_FLATTREE
 
 config OF_EARLY_FLATTREE
 	bool
+	select DMA_DECLARE_COHERENT if HAS_DMA
 	select OF_FLATTREE
 
 config OF_PROMTREE

@@ -81,10 +82,9 @@ config OF_MDIO
 	  OpenFirmware MDIO bus (Ethernet PHY) accessors
 
 config OF_RESERVED_MEM
-	depends on OF_EARLY_FLATTREE
 	bool
-	help
-	  Helpers to allow for reservation of memory regions
+	depends on OF_EARLY_FLATTREE
+	default y if DMA_DECLARE_COHERENT || DMA_CMA
 
 config OF_RESOLVE
 	bool


@@ -712,8 +712,8 @@ ccio_dma_supported(struct device *dev, u64 mask)
 		return 0;
 	}
 
-	/* only support 32-bit devices (ie PCI/GSC) */
-	return (int)(mask == 0xffffffffUL);
+	/* only support 32-bit or better devices (ie PCI/GSC) */
+	return (int)(mask >= 0xffffffffUL);
 }
 
 /**


@@ -126,8 +126,7 @@ static int ohci_hcd_sm501_drv_probe(struct platform_device *pdev)
 	retval = dma_declare_coherent_memory(dev, mem->start,
 					 mem->start - mem->parent->start,
-					 resource_size(mem),
-					 DMA_MEMORY_EXCLUSIVE);
+					 resource_size(mem));
 	if (retval) {
 		dev_err(dev, "cannot declare coherent memory\n");
 		goto err1;


@@ -225,7 +225,7 @@ static int ohci_hcd_tmio_drv_probe(struct platform_device *dev)
 	}
 
 	ret = dma_declare_coherent_memory(&dev->dev, sram->start, sram->start,
-				resource_size(sram), DMA_MEMORY_EXCLUSIVE);
+				resource_size(sram));
 	if (ret)
 		goto err_dma_declare;


@@ -1028,8 +1028,10 @@ struct device {
 	struct list_head	dma_pools;	/* dma pools (if dma'ble) */
 
+#ifdef CONFIG_DMA_DECLARE_COHERENT
 	struct dma_coherent_mem	*dma_mem; /* internal for coherent mem
 					     override */
+#endif
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */


@@ -153,7 +153,7 @@ static inline int is_device_dma_capable(struct device *dev)
 	return dev->dma_mask != NULL && *dev->dma_mask != DMA_MASK_NONE;
 }
 
-#ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
+#ifdef CONFIG_DMA_DECLARE_COHERENT
 /*
  * These three functions are only for dma allocator.
  * Don't use them in device drivers.

@@ -192,7 +192,7 @@ static inline int dma_mmap_from_global_coherent(struct vm_area_struct *vma,
 {
 	return 0;
 }
-#endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */
+#endif /* CONFIG_DMA_DECLARE_COHERENT */
 
 static inline bool dma_is_direct(const struct dma_map_ops *ops)
 {

@@ -208,6 +208,8 @@ dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
 		unsigned long attrs);
 int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 		enum dma_data_direction dir, unsigned long attrs);
+dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir, unsigned long attrs);
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
     defined(CONFIG_SWIOTLB)

@@ -346,19 +348,20 @@ static inline dma_addr_t dma_map_resource(struct device *dev,
 		unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
-	dma_addr_t addr;
+	dma_addr_t addr = DMA_MAPPING_ERROR;
 
 	BUG_ON(!valid_dma_direction(dir));
 
 	/* Don't allow RAM to be mapped */
-	BUG_ON(pfn_valid(PHYS_PFN(phys_addr)));
+	if (WARN_ON_ONCE(pfn_valid(PHYS_PFN(phys_addr))))
+		return DMA_MAPPING_ERROR;
 
-	addr = phys_addr;
-	if (ops && ops->map_resource)
+	if (dma_is_direct(ops))
+		addr = dma_direct_map_resource(dev, phys_addr, size, dir, attrs);
+	else if (ops->map_resource)
 		addr = ops->map_resource(dev, phys_addr, size, dir, attrs);
 
 	debug_dma_map_resource(dev, phys_addr, size, dir, addr);
 	return addr;
 }

@@ -369,7 +372,7 @@ static inline void dma_unmap_resource(struct device *dev, dma_addr_t addr,
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(dir));
-	if (ops && ops->unmap_resource)
+	if (!dma_is_direct(ops) && ops->unmap_resource)
 		ops->unmap_resource(dev, addr, size, dir, attrs);
 	debug_dma_unmap_resource(dev, addr, size, dir);
 }

@@ -668,15 +671,23 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
 	return dma_set_mask_and_coherent(dev, mask);
 }
 
-#ifndef arch_setup_dma_ops
+#ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+		const struct iommu_ops *iommu, bool coherent);
+#else
 static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base,
-				      u64 size, const struct iommu_ops *iommu,
-				      bool coherent) { }
-#endif
+		u64 size, const struct iommu_ops *iommu, bool coherent)
+{
+}
+#endif /* CONFIG_ARCH_HAS_SETUP_DMA_OPS */
 
-#ifndef arch_teardown_dma_ops
-static inline void arch_teardown_dma_ops(struct device *dev) { }
-#endif
+#ifdef CONFIG_ARCH_HAS_TEARDOWN_DMA_OPS
+void arch_teardown_dma_ops(struct device *dev);
+#else
+static inline void arch_teardown_dma_ops(struct device *dev)
+{
+}
+#endif /* CONFIG_ARCH_HAS_TEARDOWN_DMA_OPS */
 
 static inline unsigned int dma_get_max_seg_size(struct device *dev)
 {

@@ -725,19 +736,14 @@ static inline int dma_get_cache_alignment(void)
 	return 1;
 }
 
-/* flags for the coherent memory api */
-#define DMA_MEMORY_EXCLUSIVE		0x01
-
-#ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
+#ifdef CONFIG_DMA_DECLARE_COHERENT
 int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-				dma_addr_t device_addr, size_t size, int flags);
+				dma_addr_t device_addr, size_t size);
 void dma_release_declared_memory(struct device *dev);
-void *dma_mark_declared_memory_occupied(struct device *dev,
-					dma_addr_t device_addr, size_t size);
 #else
 static inline int
 dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-			    dma_addr_t device_addr, size_t size, int flags)
+			    dma_addr_t device_addr, size_t size)
 {
 	return -ENOSYS;
 }

@@ -746,14 +752,7 @@ static inline void
 dma_release_declared_memory(struct device *dev)
 {
 }
-
-static inline void *
-dma_mark_declared_memory_occupied(struct device *dev,
-				  dma_addr_t device_addr, size_t size)
-{
-	return ERR_PTR(-EBUSY);
-}
-#endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */
+#endif /* CONFIG_DMA_DECLARE_COHERENT */
 
 static inline void *dmam_alloc_coherent(struct device *dev, size_t size,
 					dma_addr_t *dma_handle, gfp_t gfp)


@@ -19,7 +19,13 @@ config ARCH_HAS_DMA_COHERENCE_H
 config ARCH_HAS_DMA_SET_MASK
 	bool
 
-config HAVE_GENERIC_DMA_COHERENT
+config DMA_DECLARE_COHERENT
+	bool
+
+config ARCH_HAS_SETUP_DMA_OPS
+	bool
+
+config ARCH_HAS_TEARDOWN_DMA_OPS
 	bool
 
 config ARCH_HAS_SYNC_DMA_FOR_DEVICE

@@ -56,3 +62,116 @@ config DMA_REMAP
 config DMA_DIRECT_REMAP
 	bool
 	select DMA_REMAP
+
+config DMA_CMA
+	bool "DMA Contiguous Memory Allocator"
+	depends on HAVE_DMA_CONTIGUOUS && CMA
+	help
+	  This enables the Contiguous Memory Allocator which allows drivers
+	  to allocate big physically-contiguous blocks of memory for use with
+	  hardware components that do not support I/O map nor scatter-gather.
+
+	  You can disable CMA by specifying "cma=0" on the kernel's command
+	  line.
+
+	  For more information see <include/linux/dma-contiguous.h>.
+	  If unsure, say "n".
+
+if DMA_CMA
+comment "Default contiguous memory area size:"
+
+config CMA_SIZE_MBYTES
+	int "Size in Mega Bytes"
+	depends on !CMA_SIZE_SEL_PERCENTAGE
+	default 0 if X86
+	default 16
+	help
+	  Defines the size (in MiB) of the default memory area for Contiguous
+	  Memory Allocator.  If the size of 0 is selected, CMA is disabled by
+	  default, but it can be enabled by passing cma=size[MG] to the kernel.
+
+config CMA_SIZE_PERCENTAGE
+	int "Percentage of total memory"
+	depends on !CMA_SIZE_SEL_MBYTES
+	default 0 if X86
+	default 10
+	help
+	  Defines the size of the default memory area for Contiguous Memory
+	  Allocator as a percentage of the total memory in the system.
+
+	  If 0 percent is selected, CMA is disabled by default, but it can be
+	  enabled by passing cma=size[MG] to the kernel.
+
+choice
+	prompt "Selected region size"
+	default CMA_SIZE_SEL_MBYTES
+
+config CMA_SIZE_SEL_MBYTES
+	bool "Use mega bytes value only"
+
+config CMA_SIZE_SEL_PERCENTAGE
+	bool "Use percentage value only"
+
+config CMA_SIZE_SEL_MIN
+	bool "Use lower value (minimum)"
+
+config CMA_SIZE_SEL_MAX
+	bool "Use higher value (maximum)"
+
+endchoice
+
+config CMA_ALIGNMENT
+	int "Maximum PAGE_SIZE order of alignment for contiguous buffers"
+	range 4 12
+	default 8
+	help
+	  DMA mapping framework by default aligns all buffers to the smallest
+	  PAGE_SIZE order which is greater than or equal to the requested buffer
+	  size. This works well for buffers up to a few hundreds kilobytes, but
+	  for larger buffers it just a memory waste. With this parameter you can
+	  specify the maximum PAGE_SIZE order for contiguous buffers. Larger
+	  buffers will be aligned only to this specified order. The order is
+	  expressed as a power of two multiplied by the PAGE_SIZE.
+
+	  For example, if your system defaults to 4KiB pages, the order value
+	  of 8 means that the buffers will be aligned up to 1MiB only.
+
+	  If unsure, leave the default value "8".
+
+endif
+
+config DMA_API_DEBUG
+	bool "Enable debugging of DMA-API usage"
+	select NEED_DMA_MAP_STATE
+	help
+	  Enable this option to debug the use of the DMA API by device drivers.
+	  With this option you will be able to detect common bugs in device
+	  drivers like double-freeing of DMA mappings or freeing mappings that
+	  were never allocated.
+
+	  This also attempts to catch cases where a page owned by DMA is
+	  accessed by the cpu in a way that could cause data corruption.  For
+	  example, this enables cow_user_page() to check that the source page is
+	  not undergoing DMA.
+
+	  This option causes a performance degradation.  Use only if you want to
+	  debug device drivers and dma interactions.
+
+	  If unsure, say N.
+
+config DMA_API_DEBUG_SG
+	bool "Debug DMA scatter-gather usage"
+	default y
+	depends on DMA_API_DEBUG
+	help
+	  Perform extra checking that callers of dma_map_sg() have respected the
+	  appropriate segment length/boundary limits for the given device when
+	  preparing DMA scatterlists.
+
+	  This is particularly likely to have been overlooked in cases where the
+	  dma_map_sg() API is used for general bulk mapping of pages rather than
+	  preparing literal scatter-gather descriptors, where there is a risk of
+	  unexpected behaviour from DMA API implementations if the scatterlist
+	  is technically out-of-spec.
+
+	  If unsure, say N.
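
As a quick worked example of the CMA_ALIGNMENT arithmetic in the help text
above (an editorial note, not part of the patch), the maximum alignment the
framework applies is simply PAGE_SIZE shifted by the configured order:

	/* illustrative only: maximum CMA buffer alignment for order 8 */
	unsigned long max_align = PAGE_SIZE << 8;	/* 4 KiB pages -> 1 MiB */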


@@ -2,7 +2,7 @@
 obj-$(CONFIG_HAS_DMA)			+= mapping.o direct.o dummy.o
 obj-$(CONFIG_DMA_CMA)			+= contiguous.o
-obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT)	+= coherent.o
+obj-$(CONFIG_DMA_DECLARE_COHERENT)	+= coherent.o
 obj-$(CONFIG_DMA_VIRT_OPS)		+= virt.o
 obj-$(CONFIG_DMA_API_DEBUG)		+= debug.o
 obj-$(CONFIG_SWIOTLB)			+= swiotlb.o


@ -14,7 +14,6 @@ struct dma_coherent_mem {
dma_addr_t device_base; dma_addr_t device_base;
unsigned long pfn_base; unsigned long pfn_base;
int size; int size;
int flags;
unsigned long *bitmap; unsigned long *bitmap;
spinlock_t spinlock; spinlock_t spinlock;
bool use_dev_dma_pfn_offset; bool use_dev_dma_pfn_offset;
@@ -38,12 +37,12 @@ static inline dma_addr_t dma_get_device_base(struct device *dev,
 	return mem->device_base;
 }
 
-static int dma_init_coherent_memory(
-	phys_addr_t phys_addr, dma_addr_t device_addr, size_t size, int flags,
-	struct dma_coherent_mem **mem)
+static int dma_init_coherent_memory(phys_addr_t phys_addr,
+		dma_addr_t device_addr, size_t size,
+		struct dma_coherent_mem **mem)
 {
 	struct dma_coherent_mem *dma_mem = NULL;
-	void __iomem *mem_base = NULL;
+	void *mem_base = NULL;
 	int pages = size >> PAGE_SHIFT;
 	int bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
 	int ret;
@@ -73,7 +72,6 @@ static int dma_init_coherent_memory(
 	dma_mem->device_base = device_addr;
 	dma_mem->pfn_base = PFN_DOWN(phys_addr);
 	dma_mem->size = pages;
-	dma_mem->flags = flags;
 	spin_lock_init(&dma_mem->spinlock);
 
 	*mem = dma_mem;
@@ -110,12 +108,12 @@ static int dma_assign_coherent_memory(struct device *dev,
 }
 
 int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
-				dma_addr_t device_addr, size_t size, int flags)
+				dma_addr_t device_addr, size_t size)
 {
 	struct dma_coherent_mem *mem;
 	int ret;
 
-	ret = dma_init_coherent_memory(phys_addr, device_addr, size, flags, &mem);
+	ret = dma_init_coherent_memory(phys_addr, device_addr, size, &mem);
 	if (ret)
 		return ret;
@@ -137,29 +135,6 @@ void dma_release_declared_memory(struct device *dev)
 }
 EXPORT_SYMBOL(dma_release_declared_memory);
 
-void *dma_mark_declared_memory_occupied(struct device *dev,
-					dma_addr_t device_addr, size_t size)
-{
-	struct dma_coherent_mem *mem = dev->dma_mem;
-	unsigned long flags;
-	int pos, err;
-
-	size += device_addr & ~PAGE_MASK;
-
-	if (!mem)
-		return ERR_PTR(-EINVAL);
-
-	spin_lock_irqsave(&mem->spinlock, flags);
-	pos = PFN_DOWN(device_addr - dma_get_device_base(dev, mem));
-	err = bitmap_allocate_region(mem->bitmap, pos, get_order(size));
-	spin_unlock_irqrestore(&mem->spinlock, flags);
-
-	if (err != 0)
-		return ERR_PTR(err);
-	return mem->virt_base + (pos << PAGE_SHIFT);
-}
-EXPORT_SYMBOL(dma_mark_declared_memory_occupied);
-
 static void *__dma_alloc_from_coherent(struct dma_coherent_mem *mem,
 		ssize_t size, dma_addr_t *dma_handle)
 {
@@ -213,15 +188,7 @@ int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
 		return 0;
 
 	*ret = __dma_alloc_from_coherent(mem, size, dma_handle);
-	if (*ret)
-		return 1;
-
-	/*
-	 * In the case where the allocation can not be satisfied from the
-	 * per-device area, try to fall back to generic memory if the
-	 * constraints allow it.
-	 */
-	return mem->flags & DMA_MEMORY_EXCLUSIVE;
+	return 1;
 }
 
 void *dma_alloc_from_global_coherent(ssize_t size, dma_addr_t *dma_handle)
@@ -350,8 +317,7 @@ static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
 	if (!mem) {
 		ret = dma_init_coherent_memory(rmem->base, rmem->base,
-					       rmem->size,
-					       DMA_MEMORY_EXCLUSIVE, &mem);
+					       rmem->size, &mem);
 		if (ret) {
 			pr_err("Reserved memory: failed to init DMA memory pool at %pa, size %ld MiB\n",
 				&rmem->base, (unsigned long)rmem->size / SZ_1M);
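
With the DMA_MEMORY_EXCLUSIVE flag gone, dma_declare_coherent_memory() no longer takes a flags argument and declared memory is always the only source for that device's coherent allocations. A hypothetical probe() fragment against the new prototype might look like the sketch below; the device name, SRAM address, and size are invented for illustration, and this is not a complete driver.

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

#define FOO_SRAM_PHYS	0x40000000	/* made-up bus-visible SRAM base */
#define FOO_SRAM_SIZE	SZ_1M

static int foo_probe(struct platform_device *pdev)
{
	dma_addr_t dma;
	void *buf;
	int ret;

	/* Route this device's coherent allocations to its local SRAM. */
	ret = dma_declare_coherent_memory(&pdev->dev, FOO_SRAM_PHYS,
					  FOO_SRAM_PHYS, FOO_SRAM_SIZE);
	if (ret)
		return ret;

	/* Allocations now come from (and only from) the declared region. */
	buf = dma_alloc_coherent(&pdev->dev, SZ_4K, &dma, GFP_KERNEL);
	if (!buf) {
		dma_release_declared_memory(&pdev->dev);
		return -ENOMEM;
	}

	return 0;
}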

View File

@@ -134,17 +134,6 @@ static u32 nr_total_entries;
 /* number of preallocated entries requested by kernel cmdline */
 static u32 nr_prealloc_entries = PREALLOC_DMA_DEBUG_ENTRIES;
 
-/* debugfs dentry's for the stuff above */
-static struct dentry *dma_debug_dent        __read_mostly;
-static struct dentry *global_disable_dent   __read_mostly;
-static struct dentry *error_count_dent      __read_mostly;
-static struct dentry *show_all_errors_dent  __read_mostly;
-static struct dentry *show_num_errors_dent  __read_mostly;
-static struct dentry *num_free_entries_dent __read_mostly;
-static struct dentry *min_free_entries_dent __read_mostly;
-static struct dentry *nr_total_entries_dent __read_mostly;
-static struct dentry *filter_dent           __read_mostly;
-
 /* per-driver filter related state */
 
 #define NAME_MAX_LEN	64
@@ -840,66 +829,46 @@ static const struct file_operations filter_fops = {
 	.llseek = default_llseek,
 };
 
-static int dma_debug_fs_init(void)
+static int dump_show(struct seq_file *seq, void *v)
 {
-	dma_debug_dent = debugfs_create_dir("dma-api", NULL);
-	if (!dma_debug_dent) {
-		pr_err("can not create debugfs directory\n");
-		return -ENOMEM;
-	}
+	int idx;
 
-	global_disable_dent = debugfs_create_bool("disabled", 0444,
-			dma_debug_dent,
-			&global_disable);
-	if (!global_disable_dent)
-		goto out_err;
-
-	error_count_dent = debugfs_create_u32("error_count", 0444,
-			dma_debug_dent, &error_count);
-	if (!error_count_dent)
-		goto out_err;
-
-	show_all_errors_dent = debugfs_create_u32("all_errors", 0644,
-			dma_debug_dent,
-			&show_all_errors);
-	if (!show_all_errors_dent)
-		goto out_err;
-
-	show_num_errors_dent = debugfs_create_u32("num_errors", 0644,
-			dma_debug_dent,
-			&show_num_errors);
-	if (!show_num_errors_dent)
-		goto out_err;
-
-	num_free_entries_dent = debugfs_create_u32("num_free_entries", 0444,
-			dma_debug_dent,
-			&num_free_entries);
-	if (!num_free_entries_dent)
-		goto out_err;
-
-	min_free_entries_dent = debugfs_create_u32("min_free_entries", 0444,
-			dma_debug_dent,
-			&min_free_entries);
-	if (!min_free_entries_dent)
-		goto out_err;
-
-	nr_total_entries_dent = debugfs_create_u32("nr_total_entries", 0444,
-			dma_debug_dent,
-			&nr_total_entries);
-	if (!nr_total_entries_dent)
-		goto out_err;
-
-	filter_dent = debugfs_create_file("driver_filter", 0644,
-			dma_debug_dent, NULL, &filter_fops);
-	if (!filter_dent)
-		goto out_err;
+	for (idx = 0; idx < HASH_SIZE; idx++) {
+		struct hash_bucket *bucket = &dma_entry_hash[idx];
+		struct dma_debug_entry *entry;
+		unsigned long flags;
 
+		spin_lock_irqsave(&bucket->lock, flags);
+		list_for_each_entry(entry, &bucket->list, list) {
+			seq_printf(seq,
+				   "%s %s %s idx %d P=%llx N=%lx D=%llx L=%llx %s %s\n",
+				   dev_name(entry->dev),
+				   dev_driver_string(entry->dev),
+				   type2name[entry->type], idx,
+				   phys_addr(entry), entry->pfn,
+				   entry->dev_addr, entry->size,
+				   dir2name[entry->direction],
+				   maperr2str[entry->map_err_type]);
+		}
+		spin_unlock_irqrestore(&bucket->lock, flags);
+	}
 	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(dump);
 
-out_err:
-	debugfs_remove_recursive(dma_debug_dent);
-
-	return -ENOMEM;
+static void dma_debug_fs_init(void)
+{
+	struct dentry *dentry = debugfs_create_dir("dma-api", NULL);
+
+	debugfs_create_bool("disabled", 0444, dentry, &global_disable);
+	debugfs_create_u32("error_count", 0444, dentry, &error_count);
+	debugfs_create_u32("all_errors", 0644, dentry, &show_all_errors);
+	debugfs_create_u32("num_errors", 0644, dentry, &show_num_errors);
+	debugfs_create_u32("num_free_entries", 0444, dentry, &num_free_entries);
+	debugfs_create_u32("min_free_entries", 0444, dentry, &min_free_entries);
+	debugfs_create_u32("nr_total_entries", 0444, dentry, &nr_total_entries);
+	debugfs_create_file("driver_filter", 0644, dentry, NULL, &filter_fops);
+	debugfs_create_file("dump", 0444, dentry, NULL, &dump_fops);
 }
 
 static int device_dma_allocations(struct device *dev, struct dma_debug_entry **out_entry)
@@ -985,12 +954,7 @@ static int dma_debug_init(void)
 		spin_lock_init(&dma_entry_hash[i].lock);
 	}
 
-	if (dma_debug_fs_init() != 0) {
-		pr_err("error creating debugfs entries - disabling\n");
-		global_disable = true;
-
-		return 0;
-	}
+	dma_debug_fs_init();
 
 	nr_pages = DIV_ROUND_UP(nr_prealloc_entries, DMA_DEBUG_DYNAMIC_ENTRIES);
 	for (i = 0; i < nr_pages; ++i)
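
The new dump file relies on DEFINE_SHOW_ATTRIBUTE(dump) to generate the dump_fops used above. For readers unfamiliar with that seq_file helper, the expansion is roughly equivalent to the sketch below (paraphrased from include/linux/seq_file.h, not taken from this diff), which is why a plain read of /sys/kernel/debug/dma-api/dump ends up walking every hash bucket through dump_show().

#include <linux/fs.h>
#include <linux/seq_file.h>

/* Roughly what DEFINE_SHOW_ATTRIBUTE(dump) expands to; dump_show() is the
 * function added in the hunk above. */
static int dump_open(struct inode *inode, struct file *file)
{
	/* single_open() wires dump_show() up as the seq_file show routine. */
	return single_open(file, dump_show, inode->i_private);
}

static const struct file_operations dump_fops = {
	.owner		= THIS_MODULE,
	.open		= dump_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};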

View File

@@ -355,6 +355,20 @@ out_unmap:
 }
 EXPORT_SYMBOL(dma_direct_map_sg);
 
+dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+	dma_addr_t dma_addr = paddr;
+
+	if (unlikely(!dma_direct_possible(dev, dma_addr, size))) {
+		report_addr(dev, dma_addr, size);
+		return DMA_MAPPING_ERROR;
+	}
+
+	return dma_addr;
+}
+EXPORT_SYMBOL(dma_direct_map_resource);
+
 /*
  * Because 32-bit DMA masks are so common we expect every architecture to be
  * able to satisfy them - either by not supporting more physical memory, or by
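
dma_direct_map_resource() above is the dma-direct backend for dma_map_resource(), which lets a driver hand the DMA API a raw physical/MMIO address that has no struct page behind it; the videobuf2 change in this series is one such user. A hypothetical device-to-device setup using the generic API could look like the fragment below (the function name, parameters, and descriptor-programming step are invented, and error handling is minimal).

#include <linux/dma-mapping.h>

static int foo_map_peer_fifo(struct device *dma_dev,
			     phys_addr_t fifo_phys, size_t fifo_size)
{
	dma_addr_t dma;

	/* Map a peer device's FIFO window (MMIO, not page-backed RAM). */
	dma = dma_map_resource(dma_dev, fifo_phys, fifo_size,
			       DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dma_dev, dma))
		return -EIO;

	/* ... program dma_dev's descriptors to target "dma" here ... */

	dma_unmap_resource(dma_dev, dma, fifo_size, DMA_BIDIRECTIONAL, 0);
	return 0;
}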

View File

@@ -1654,42 +1654,6 @@ config PROVIDE_OHCI1394_DMA_INIT
 
 	  See Documentation/debugging-via-ohci1394.txt for more information.
 
-config DMA_API_DEBUG
-	bool "Enable debugging of DMA-API usage"
-	select NEED_DMA_MAP_STATE
-	help
-	  Enable this option to debug the use of the DMA API by device drivers.
-	  With this option you will be able to detect common bugs in device
-	  drivers like double-freeing of DMA mappings or freeing mappings that
-	  were never allocated.
-
-	  This also attempts to catch cases where a page owned by DMA is
-	  accessed by the cpu in a way that could cause data corruption. For
-	  example, this enables cow_user_page() to check that the source page is
-	  not undergoing DMA.
-
-	  This option causes a performance degradation. Use only if you want to
-	  debug device drivers and dma interactions.
-
-	  If unsure, say N.
-
-config DMA_API_DEBUG_SG
-	bool "Debug DMA scatter-gather usage"
-	default y
-	depends on DMA_API_DEBUG
-	help
-	  Perform extra checking that callers of dma_map_sg() have respected the
-	  appropriate segment length/boundary limits for the given device when
-	  preparing DMA scatterlists.
-
-	  This is particularly likely to have been overlooked in cases where the
-	  dma_map_sg() API is used for general bulk mapping of pages rather than
-	  preparing literal scatter-gather descriptors, where there is a risk of
-	  unexpected behaviour from DMA API implementations if the scatterlist
-	  is technically out-of-spec.
-
-	  If unsure, say N.
-
 menuconfig RUNTIME_TESTING_MENU
 	bool "Runtime Testing"
 	def_bool y
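
For context on the DMA_API_DEBUG_SG help text moved out of lib/Kconfig.debug above (and re-added under kernel/dma earlier in this diff): the extra checking boils down to validating each segment produced by dma_map_sg() against the device's advertised limits. The following is only a hedged, simplified sketch of that kind of check, not the in-tree kernel/dma/debug.c code; the helper name is invented and dev_err() stands in for dma-debug's own error reporting.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static void check_one_sg_segment(struct device *dev, struct scatterlist *sg)
{
	unsigned int max_seg = dma_get_max_seg_size(dev);
	u64 start = sg_dma_address(sg);
	u64 end = start + sg_dma_len(sg) - 1;
	u64 boundary = dma_get_seg_boundary(dev);

	/* Segment longer than the device claims to support? */
	if (sg_dma_len(sg) > max_seg)
		dev_err(dev, "sg segment too long [len=%u] [max=%u]\n",
			sg_dma_len(sg), max_seg);

	/* Crossing the boundary mask means the bits above it differ. */
	if ((start ^ end) & ~boundary)
		dev_err(dev, "sg segment crosses boundary [start=%#llx end=%#llx boundary=%#llx]\n",
			start, end, boundary);
}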