commit bad8c6c0b1
Patch series "mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE", v2.

0. History

This patchset is the follow-up of the discussion about "Introduce ZONE_CMA (v7)" [1]. Please refer to it if more information is needed.

1. What does this patch do?

This patch changes how the memory of the CMA area is managed in the MM subsystem. Currently the memory of a CMA area is managed by the zone its pfn belongs to. However, this approach has problems, because the MM subsystem doesn't have enough logic to handle a situation where memories with different characteristics share a single zone. To solve this issue, this patch manages all the memory of the CMA area through the MOVABLE zone. From the MM subsystem's point of view, the characteristics of memory in the MOVABLE zone and memory in a CMA area are the same, so managing the memory of the CMA area through the MOVABLE zone causes no problem.

2. Motivation

There are several problems with the current approach. Although these problems are not inherent and could be fixed without this conceptual change, fixing them would require adding many hooks in various code paths, which would be intrusive to core MM and error-prone. Therefore, I try to solve them with this new approach. The problems with the current implementation are the following.

o CMA memory utilization

First, this is the freepage calculation logic in MM:

- For movable allocation: freepage = total freepage
- For unmovable allocation: freepage = total freepage - CMA freepage

Freepages in the CMA area are used only after the normal freepages of the zone containing the CMA memory are exhausted. At that moment the number of normal freepages is zero, so:

- For movable allocation: freepage = total freepage = CMA freepage
- For unmovable allocation: freepage = 0

If an unmovable allocation arrives at this moment, the request fails the watermark check and reclaim is started. After reclaim there are normal freepages again, so the freepages in the CMA area still go unused. (A small standalone sketch of this accounting follows this section.) FYI, there is another attempt [2] at solving this problem on lkml, and, as far as I know, Qualcomm also has an out-of-tree solution for it.

o Useless reclaim

There is no logic to distinguish CMA pages in the reclaim path, so CMA pages are reclaimed even when the system merely needs pages that are usable for kernel allocations.

o Atomic allocation failure

This is also related to the fallback allocation policy for the memory of the CMA area. Consider the situation where the number of normal freepages is *zero* because a bunch of movable allocation requests have come in. Kswapd is not woken up, because of the following freepage calculation:

- For movable allocation: freepage = total freepage = CMA freepage

If an atomic unmovable allocation request arrives at this moment, it fails because of the following:

- For unmovable allocation: freepage = total freepage - CMA freepage = 0

This was reported by Aneesh [3].

o Useless compaction

A typical high-order allocation request is an unmovable request and cannot be served from the memory of the CMA area. During compaction, the migration scanner nonetheless tries to migrate pages in the CMA area and build high-order pages there. As mentioned above, those pages cannot be used for unmovable requests, so the work is simply wasted.
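
The following is a minimal user-space sketch, not kernel code, of the freepage accounting described under "CMA memory utilization" and "Atomic allocation failure" above; the struct and helper names are made up for illustration. It shows how an unmovable request sees zero usable freepages once only CMA freepages remain, which is what triggers the needless reclaim and the atomic allocation failure.

/* freepage_sketch.c - illustrative only; all names here are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct zone_counters {
        unsigned long total_free;       /* all free pages in the zone */
        unsigned long cma_free;         /* free pages inside CMA pageblocks */
};

/* Usable freepages as seen by the watermark check described above. */
static unsigned long usable_free(const struct zone_counters *z, bool movable)
{
        return movable ? z->total_free : z->total_free - z->cma_free;
}

int main(void)
{
        /* Normal freepages are exhausted: everything left is CMA memory. */
        struct zone_counters z = { .total_free = 4096, .cma_free = 4096 };

        printf("movable sees   %lu free pages\n", usable_free(&z, true));
        printf("unmovable sees %lu free pages\n", usable_free(&z, false));
        /* 4096 vs. 0: the unmovable request fails the check and reclaim runs. */
        return 0;
}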
3. Current approach and new approach

The current approach is that the memory of the CMA area is managed by the zone its pfn belongs to. However, this memory must be distinguishable because it comes with a strong limitation, so it is marked as MIGRATE_CMA in the pageblock flags and handled specially. As mentioned in section 2, though, the MM subsystem doesn't have enough logic to deal with this special pageblock, which is where the problems above come from.

The new approach is that the memory of the CMA area is managed by the MOVABLE zone. MM already has enough logic to deal with special zones such as HIGHMEM and MOVABLE, so managing CMA memory through the MOVABLE zone just naturally works: the constraint on CMA memory, that it must always remain migratable, is the same as the constraint on the MOVABLE zone.

There is one side-effect on the usability of CMA memory. The MOVABLE zone is only used for requests with GFP_HIGHMEM && GFP_MOVABLE, so CMA memory is now also only usable for requests with that gfp combination. Before this patchset, any GFP_MOVABLE request could use it. In my opinion this is not a big issue, since most GFP_MOVABLE requests also carry GFP_HIGHMEM, for example file cache pages and anonymous pages. However, page cache for block device files is an exception: those requests carry no GFP_HIGHMEM flag. There are pros and cons to this exception. In my experience, blockdev file cache pages are one of the top reasons that cma_alloc() fails temporarily, so we get a stronger guarantee of cma_alloc() success by excluding this case. (An illustrative sketch of this eligibility change follows the tags below.)

Note that there is no change from the admin's point of view, since this patchset only changes the internal implementation of the MM subsystem. The one minor difference for admins is that the memory statistics for the CMA area are now reported under the MOVABLE zone. That's all.

4. Result

The following is the experimental result for the utilization problem.

8 CPUs, 1024 MB, VIRTUAL MACHINE
make -j16

<Before>
CMA area:       0 MB       512 MB
Elapsed-time:   92.4       186.5
pswpin:         82         18647
pswpout:        160        69839

<After>
CMA area:       0 MB       512 MB
Elapsed-time:   93.1       93.4
pswpin:         84         46
pswpout:        183        92

akpm: "kernel test robot" reported a 26% improvement in vm-scalability.throughput:
http://lkml.kernel.org/r/20180330012721.GA3845@yexl-desktop

[1]: lkml.kernel.org/r/1491880640-9944-1-git-send-email-iamjoonsoo.kim@lge.com
[2]: https://lkml.org/lkml/2014/10/15/623
[3]: http://www.spinics.net/lists/linux-mm/msg100562.html

Link: http://lkml.kernel.org/r/1512114786-5085-2-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
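
Before the file listing below, here is a second minimal user-space sketch, again illustrative only, of the eligibility side-effect described in section 3: with the CMA area in ZONE_MOVABLE, only requests carrying both the highmem and movable flags can use it, whereas previously any movable request could. The flag macros below are simplified stand-ins for the kernel's __GFP_HIGHMEM and __GFP_MOVABLE, not the real definitions.

/* gfp_sketch.c - illustrative only; simplified stand-in flags. */
#include <stdbool.h>
#include <stdio.h>

#define SK_HIGHMEM (1u << 0)    /* stand-in for __GFP_HIGHMEM */
#define SK_MOVABLE (1u << 1)    /* stand-in for __GFP_MOVABLE */

/* Before the series: any movable request could fall back to CMA pageblocks. */
static bool cma_usable_before(unsigned int gfp)
{
        return gfp & SK_MOVABLE;
}

/* After the series: CMA memory sits in ZONE_MOVABLE, so both flags are needed. */
static bool cma_usable_after(unsigned int gfp)
{
        return (gfp & SK_MOVABLE) && (gfp & SK_HIGHMEM);
}

int main(void)
{
        unsigned int anon = SK_HIGHMEM | SK_MOVABLE;    /* anonymous / file cache */
        unsigned int blockdev = SK_MOVABLE;             /* blockdev page cache */

        printf("anon/file cache: before=%d after=%d\n",
               cma_usable_before(anon), cma_usable_after(anon));
        printf("blockdev cache:  before=%d after=%d\n",
               cma_usable_before(blockdev), cma_usable_after(blockdev));
        return 0;
}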
mm/cma.c
/*
 * Contiguous Memory Allocator
 *
 * Copyright (c) 2010-2011 by Samsung Electronics.
 * Copyright IBM Corporation, 2013
 * Copyright LG Electronics Inc., 2014
 * Written by:
 *        Marek Szyprowski <m.szyprowski@samsung.com>
 *        Michal Nazarewicz <mina86@mina86.com>
 *        Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
 *        Joonsoo Kim <iamjoonsoo.kim@lge.com>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation; either version 2 of the
 * License or (at your option) any later version of the license.
 */

#define pr_fmt(fmt) "cma: " fmt

#ifdef CONFIG_CMA_DEBUG
#ifndef DEBUG
# define DEBUG
#endif
#endif
#define CREATE_TRACE_POINTS

#include <linux/memblock.h>
#include <linux/err.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/log2.h>
#include <linux/cma.h>
#include <linux/highmem.h>
#include <linux/io.h>
#include <linux/kmemleak.h>
#include <trace/events/cma.h>

#include "cma.h"
#include "internal.h"

struct cma cma_areas[MAX_CMA_AREAS];
unsigned cma_area_count;
static DEFINE_MUTEX(cma_mutex);

phys_addr_t cma_get_base(const struct cma *cma)
{
        return PFN_PHYS(cma->base_pfn);
}

unsigned long cma_get_size(const struct cma *cma)
{
        return cma->count << PAGE_SHIFT;
}

const char *cma_get_name(const struct cma *cma)
{
        return cma->name ? cma->name : "(undefined)";
}

static unsigned long cma_bitmap_aligned_mask(const struct cma *cma,
                                             unsigned int align_order)
{
        if (align_order <= cma->order_per_bit)
                return 0;
        return (1UL << (align_order - cma->order_per_bit)) - 1;
}

/*
 * Find the offset of the base PFN from the specified align_order.
 * The value returned is represented in order_per_bits.
 */
static unsigned long cma_bitmap_aligned_offset(const struct cma *cma,
                                               unsigned int align_order)
{
        return (cma->base_pfn & ((1UL << align_order) - 1))
                >> cma->order_per_bit;
}

static unsigned long cma_bitmap_pages_to_bits(const struct cma *cma,
                                              unsigned long pages)
{
        return ALIGN(pages, 1UL << cma->order_per_bit) >> cma->order_per_bit;
}
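
/*
 * Worked example (editorial illustration, not part of the original file):
 * with order_per_bit = 1, one bitmap bit covers an order-1 (two-page) chunk,
 * so for align_order = 3 and base_pfn = 0x1234 the helpers above give
 *
 *      cma_bitmap_aligned_mask()   -> (1 << (3 - 1)) - 1             = 3
 *      cma_bitmap_aligned_offset() -> (0x1234 & ((1 << 3) - 1)) >> 1 = 2
 *      cma_bitmap_pages_to_bits()  -> ALIGN(5, 2) >> 1               = 3  (for 5 pages)
 */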

static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
                             unsigned int count)
{
        unsigned long bitmap_no, bitmap_count;

        bitmap_no = (pfn - cma->base_pfn) >> cma->order_per_bit;
        bitmap_count = cma_bitmap_pages_to_bits(cma, count);

        mutex_lock(&cma->lock);
        bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
        mutex_unlock(&cma->lock);
}

static int __init cma_activate_area(struct cma *cma)
{
        int bitmap_size = BITS_TO_LONGS(cma_bitmap_maxno(cma)) * sizeof(long);
        unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
        unsigned i = cma->count >> pageblock_order;
        struct zone *zone;

        cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);

        if (!cma->bitmap)
                return -ENOMEM;

        do {
                unsigned j;

                base_pfn = pfn;
                if (!pfn_valid(base_pfn))
                        goto err;

                zone = page_zone(pfn_to_page(base_pfn));
                for (j = pageblock_nr_pages; j; --j, pfn++) {
                        if (!pfn_valid(pfn))
                                goto err;

                        /*
                         * In init_cma_reserved_pageblock(), present_pages
                         * is adjusted with assumption that all pages in
                         * the pageblock come from a single zone.
                         */
                        if (page_zone(pfn_to_page(pfn)) != zone)
                                goto err;
                }
                init_cma_reserved_pageblock(pfn_to_page(base_pfn));
        } while (--i);

        mutex_init(&cma->lock);

#ifdef CONFIG_CMA_DEBUGFS
        INIT_HLIST_HEAD(&cma->mem_head);
        spin_lock_init(&cma->mem_head_lock);
#endif

        return 0;

err:
        pr_err("CMA area %s could not be activated\n", cma->name);
        kfree(cma->bitmap);
        cma->count = 0;
        return -EINVAL;
}

static int __init cma_init_reserved_areas(void)
{
        int i;
        struct zone *zone;
        pg_data_t *pgdat;

        if (!cma_area_count)
                return 0;

        for_each_online_pgdat(pgdat) {
                unsigned long start_pfn = UINT_MAX, end_pfn = 0;

                zone = &pgdat->node_zones[ZONE_MOVABLE];

                /*
                 * In this case, we cannot adjust the zone range
                 * since it already spans the whole node and we don't
                 * know the original zone range.
                 */
                if (populated_zone(zone))
                        continue;

                for (i = 0; i < cma_area_count; i++) {
                        if (pfn_to_nid(cma_areas[i].base_pfn) !=
                                pgdat->node_id)
                                continue;

                        start_pfn = min(start_pfn, cma_areas[i].base_pfn);
                        end_pfn = max(end_pfn, cma_areas[i].base_pfn +
                                                cma_areas[i].count);
                }

                if (!end_pfn)
                        continue;

                zone->zone_start_pfn = start_pfn;
                zone->spanned_pages = end_pfn - start_pfn;
        }

        for (i = 0; i < cma_area_count; i++) {
                int ret = cma_activate_area(&cma_areas[i]);

                if (ret)
                        return ret;
        }

        /*
         * Reserved pages for ZONE_MOVABLE are now activated, which
         * changes ZONE_MOVABLE's managed page counter and the other
         * zones' present counters. We need to re-calculate the various
         * zone information that depends on this initialization.
         */
        build_all_zonelists(NULL);
        for_each_populated_zone(zone) {
                if (zone_idx(zone) == ZONE_MOVABLE) {
                        zone_pcp_reset(zone);
                        setup_zone_pageset(zone);
                } else
                        zone_pcp_update(zone);

                set_zone_contiguous(zone);
        }

        /*
         * We need to re-init the per-zone wmark by calling
         * init_per_zone_wmark_min(), but we don't call it here because
         * it is registered via core_initcall and will be called later
         * than us.
         */

        return 0;
}
pure_initcall(cma_init_reserved_areas);

/**
 * cma_init_reserved_mem() - create custom contiguous area from reserved memory
 * @base: Base address of the reserved area
 * @size: Size of the reserved area (in bytes).
 * @order_per_bit: Order of pages represented by one bit on bitmap.
 * @name: The name of the area. If this parameter is NULL, the name of
 *        the area will be set to "cmaN", where N is a running counter of
 *        used areas.
 * @res_cma: Pointer to store the created cma region.
 *
 * This function creates a custom contiguous area from already reserved memory.
 */
int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
                                 unsigned int order_per_bit,
                                 const char *name,
                                 struct cma **res_cma)
{
        struct cma *cma;
        phys_addr_t alignment;

        /* Sanity checks */
        if (cma_area_count == ARRAY_SIZE(cma_areas)) {
                pr_err("Not enough slots for CMA reserved regions!\n");
                return -ENOSPC;
        }

        if (!size || !memblock_is_region_reserved(base, size))
                return -EINVAL;

        /* ensure minimal alignment required by mm core */
        alignment = PAGE_SIZE <<
                        max_t(unsigned long, MAX_ORDER - 1, pageblock_order);

        /* alignment should be aligned with order_per_bit */
        if (!IS_ALIGNED(alignment >> PAGE_SHIFT, 1 << order_per_bit))
                return -EINVAL;

        if (ALIGN(base, alignment) != base || ALIGN(size, alignment) != size)
                return -EINVAL;

        /*
         * Each reserved area must be initialised later, when more kernel
         * subsystems (like slab allocator) are available.
         */
        cma = &cma_areas[cma_area_count];
        if (name) {
                cma->name = name;
        } else {
                cma->name = kasprintf(GFP_KERNEL, "cma%d", cma_area_count);
                if (!cma->name)
                        return -ENOMEM;
        }
        cma->base_pfn = PFN_DOWN(base);
        cma->count = size >> PAGE_SHIFT;
        cma->order_per_bit = order_per_bit;
        *res_cma = cma;
        cma_area_count++;
        totalcma_pages += (size / PAGE_SIZE);

        return 0;
}

/**
 * cma_declare_contiguous() - reserve custom contiguous area
 * @base: Base address of the reserved area (optional, use 0 for any).
 * @size: Size of the reserved area (in bytes).
 * @limit: End address of the reserved memory (optional, 0 for any).
 * @alignment: Alignment for the CMA area, should be power of 2 or zero
 * @order_per_bit: Order of pages represented by one bit on bitmap.
 * @fixed: hint about where to place the reserved area
 * @name: The name of the area. See function cma_init_reserved_mem()
 * @res_cma: Pointer to store the created cma region.
 *
 * This function reserves memory from the early allocator. It should be
 * called by arch specific code once the early allocator (memblock or bootmem)
 * has been activated and all other subsystems have already allocated/reserved
 * memory. This function allows creating custom reserved areas.
 *
 * If @fixed is true, reserve contiguous area at exactly @base. If false,
 * reserve in range from @base to @limit.
 */
int __init cma_declare_contiguous(phys_addr_t base,
                        phys_addr_t size, phys_addr_t limit,
                        phys_addr_t alignment, unsigned int order_per_bit,
                        bool fixed, const char *name, struct cma **res_cma)
{
        phys_addr_t memblock_end = memblock_end_of_DRAM();
        phys_addr_t highmem_start;
        int ret = 0;

        /*
         * We can't use __pa(high_memory) directly, since high_memory
         * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
         * complain. Find the boundary by adding one to the last valid
         * address.
         */
        highmem_start = __pa(high_memory - 1) + 1;
        pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
                __func__, &size, &base, &limit, &alignment);

        if (cma_area_count == ARRAY_SIZE(cma_areas)) {
                pr_err("Not enough slots for CMA reserved regions!\n");
                return -ENOSPC;
        }

        if (!size)
                return -EINVAL;

        if (alignment && !is_power_of_2(alignment))
                return -EINVAL;

        /*
         * Sanitise input arguments.
         * Pages at both ends of the CMA area could be merged into adjacent
         * unmovable migratetype pages by the page allocator's buddy
         * algorithm. In that case, you couldn't get contiguous memory,
         * which is not what we want.
         */
        alignment = max(alignment, (phys_addr_t)PAGE_SIZE <<
                          max_t(unsigned long, MAX_ORDER - 1, pageblock_order));
        base = ALIGN(base, alignment);
        size = ALIGN(size, alignment);
        limit &= ~(alignment - 1);

        if (!base)
                fixed = false;

        /* size should be aligned with order_per_bit */
        if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
                return -EINVAL;

        /*
         * If allocating at a fixed base the request region must not cross the
         * low/high memory boundary.
         */
        if (fixed && base < highmem_start && base + size > highmem_start) {
                ret = -EINVAL;
                pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
                        &base, &highmem_start);
                goto err;
        }

        /*
         * If the limit is unspecified or above the memblock end, its effective
         * value will be the memblock end. Set it explicitly to simplify further
         * checks.
         */
        if (limit == 0 || limit > memblock_end)
                limit = memblock_end;

        /* Reserve memory */
        if (fixed) {
                if (memblock_is_region_reserved(base, size) ||
                    memblock_reserve(base, size) < 0) {
                        ret = -EBUSY;
                        goto err;
                }
        } else {
                phys_addr_t addr = 0;

                /*
                 * All pages in the reserved area must come from the same zone.
                 * If the requested region crosses the low/high memory boundary,
                 * try allocating from high memory first and fall back to low
                 * memory in case of failure.
                 */
                if (base < highmem_start && limit > highmem_start) {
                        addr = memblock_alloc_range(size, alignment,
                                                    highmem_start, limit,
                                                    MEMBLOCK_NONE);
                        limit = highmem_start;
                }

                if (!addr) {
                        addr = memblock_alloc_range(size, alignment, base,
                                                    limit,
                                                    MEMBLOCK_NONE);
                        if (!addr) {
                                ret = -ENOMEM;
                                goto err;
                        }
                }

                /*
                 * kmemleak scans/reads tracked objects for pointers to other
                 * objects but this address isn't mapped and accessible
                 */
                kmemleak_ignore_phys(addr);
                base = addr;
        }

        ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
        if (ret)
                goto err;

        pr_info("Reserved %ld MiB at %pa\n", (unsigned long)size / SZ_1M,
                &base);
        return 0;

err:
        pr_err("Failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
        return ret;
}
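
/*
 * Example (editorial illustration, not part of the original file): early
 * arch setup code reserving a 64 MiB area anywhere in memory for a
 * hypothetical "camera" device could do
 *
 *      static struct cma *camera_cma;
 *
 *      cma_declare_contiguous(0, SZ_64M, 0, 0, 0, false,
 *                             "camera", &camera_cma);
 *
 * where base, limit and alignment of 0 mean "let CMA choose".
 */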

#ifdef CONFIG_CMA_DEBUG
static void cma_debug_show_areas(struct cma *cma)
{
        unsigned long next_zero_bit, next_set_bit;
        unsigned long start = 0;
        unsigned int nr_zero, nr_total = 0;

        mutex_lock(&cma->lock);
        pr_info("number of available pages: ");
        for (;;) {
                next_zero_bit = find_next_zero_bit(cma->bitmap, cma->count, start);
                if (next_zero_bit >= cma->count)
                        break;
                next_set_bit = find_next_bit(cma->bitmap, cma->count, next_zero_bit);
                nr_zero = next_set_bit - next_zero_bit;
                pr_cont("%s%u@%lu", nr_total ? "+" : "", nr_zero, next_zero_bit);
                nr_total += nr_zero;
                start = next_zero_bit + nr_zero;
        }
        pr_cont("=> %u free of %lu total pages\n", nr_total, cma->count);
        mutex_unlock(&cma->lock);
}
#else
static inline void cma_debug_show_areas(struct cma *cma) { }
#endif

/**
 * cma_alloc() - allocate pages from contiguous area
 * @cma: Contiguous memory region for which the allocation is performed.
 * @count: Requested number of pages.
 * @align: Requested alignment of pages (in PAGE_SIZE order).
 * @gfp_mask: GFP mask to use during compaction
 *
 * This function allocates part of the contiguous memory from the specified
 * contiguous memory area.
 */
struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
                       gfp_t gfp_mask)
{
        unsigned long mask, offset;
        unsigned long pfn = -1;
        unsigned long start = 0;
        unsigned long bitmap_maxno, bitmap_no, bitmap_count;
        struct page *page = NULL;
        int ret = -ENOMEM;

        if (!cma || !cma->count)
                return NULL;

        pr_debug("%s(cma %p, count %zu, align %d)\n", __func__, (void *)cma,
                 count, align);

        if (!count)
                return NULL;

        mask = cma_bitmap_aligned_mask(cma, align);
        offset = cma_bitmap_aligned_offset(cma, align);
        bitmap_maxno = cma_bitmap_maxno(cma);
        bitmap_count = cma_bitmap_pages_to_bits(cma, count);

        if (bitmap_count > bitmap_maxno)
                return NULL;

        for (;;) {
                mutex_lock(&cma->lock);
                bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
                                bitmap_maxno, start, bitmap_count, mask,
                                offset);
                if (bitmap_no >= bitmap_maxno) {
                        mutex_unlock(&cma->lock);
                        break;
                }
                bitmap_set(cma->bitmap, bitmap_no, bitmap_count);
                /*
                 * It's safe to drop the lock here. We've marked this region for
                 * our exclusive use. If the migration fails we will take the
                 * lock again and unmark it.
                 */
                mutex_unlock(&cma->lock);

                pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
                mutex_lock(&cma_mutex);
                ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
                                         gfp_mask);
                mutex_unlock(&cma_mutex);
                if (ret == 0) {
                        page = pfn_to_page(pfn);
                        break;
                }

                cma_clear_bitmap(cma, pfn, count);
                if (ret != -EBUSY)
                        break;

                pr_debug("%s(): memory range at %p is busy, retrying\n",
                         __func__, pfn_to_page(pfn));
                /* try again with a bit different memory target */
                start = bitmap_no + mask + 1;
        }

        trace_cma_alloc(pfn, page, count, align);

        if (ret && !(gfp_mask & __GFP_NOWARN)) {
                pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
                        __func__, count, ret);
                cma_debug_show_areas(cma);
        }

        pr_debug("%s(): returned %p\n", __func__, page);
        return page;
}

/**
 * cma_release() - release allocated pages
 * @cma: Contiguous memory region for which the allocation is performed.
 * @pages: Allocated pages.
 * @count: Number of allocated pages.
 *
 * This function releases memory allocated by cma_alloc().
 * It returns false when the provided pages do not belong to the contiguous
 * area and true otherwise.
 */
bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)
{
        unsigned long pfn;

        if (!cma || !pages)
                return false;

        pr_debug("%s(page %p)\n", __func__, (void *)pages);

        pfn = page_to_pfn(pages);

        if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
                return false;

        VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);

        free_contig_range(pfn, count);
        cma_clear_bitmap(cma, pfn, count);
        trace_cma_release(pfn, pages, count);

        return true;
}
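
/*
 * Example (editorial illustration, not part of the original file): a driver
 * taking 16 pages from the hypothetical "camera" area reserved above and
 * giving them back:
 *
 *      struct page *page = cma_alloc(camera_cma, 16, 0, GFP_KERNEL);
 *
 *      if (page) {
 *              ...
 *              cma_release(camera_cma, page, 16);
 *      }
 */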

int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data)
{
        int i;

        for (i = 0; i < cma_area_count; i++) {
                int ret = it(&cma_areas[i], data);

                if (ret)
                        return ret;
        }

        return 0;
}
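
/*
 * Example (editorial illustration, not part of the original file): dumping
 * every registered area with the iterator above:
 *
 *      static int cma_print_one(struct cma *cma, void *data)
 *      {
 *              phys_addr_t base = cma_get_base(cma);
 *
 *              pr_info("%s: base %pa, size %lu bytes\n",
 *                      cma_get_name(cma), &base, cma_get_size(cma));
 *              return 0;
 *      }
 *
 *      cma_for_each_area(cma_print_one, NULL);
 */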