staging: memrar: remove driver from tree

It's no longer needed at all.

Cc: Ossama Othman <ossama.othman@intel.com>
Cc: Eugene Epshteyn <eugene.epshteyn@intel.com>
Cc: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Greg Kroah-Hartman 2011-04-04 21:41:20 -07:00
parent 00838d4f50
commit 4dd2b32f3c
10 changed files with 0 additions and 1914 deletions

@@ -117,8 +117,6 @@ source "drivers/staging/hv/Kconfig"
 source "drivers/staging/vme/Kconfig"
-source "drivers/staging/memrar/Kconfig"
 source "drivers/staging/sep/Kconfig"
 source "drivers/staging/iio/Kconfig"

@@ -40,7 +40,6 @@ obj-$(CONFIG_VT6655) += vt6655/
 obj-$(CONFIG_VT6656) += vt6656/
 obj-$(CONFIG_HYPERV) += hv/
 obj-$(CONFIG_VME_BUS) += vme/
-obj-$(CONFIG_MRST_RAR_HANDLER) += memrar/
 obj-$(CONFIG_DX_SEP) += sep/
 obj-$(CONFIG_IIO) += iio/
 obj-$(CONFIG_CS5535_GPIO) += cs5535_gpio/

@@ -1,15 +0,0 @@
config MRST_RAR_HANDLER
tristate "RAR handler driver for Intel Moorestown platform"
depends on RAR_REGISTER
---help---
This driver provides a memory management interface to
restricted access regions (RAR) available on the Intel
Moorestown platform.
Once locked down, restricted access regions are only
accessible by specific hardware on the platform. The x86
CPU is typically not one of those devices. As such this
driver does not access RAR, and only provides a buffer
allocation/bookkeeping mechanism.
If unsure, say N.

@@ -1,2 +0,0 @@
obj-$(CONFIG_MRST_RAR_HANDLER) += memrar.o
memrar-y := memrar_allocator.o memrar_handler.o

@@ -1,43 +0,0 @@
RAR Handler (memrar) Driver TODO Items
======================================
Maintainer: Eugene Epshteyn <eugene.epshteyn@intel.com>
memrar.h
--------
1. This header exposes the driver's user space and kernel space
interfaces. It should be moved to <linux/rar/memrar.h>, or
something along those lines, when this memrar driver is moved out
of `staging'.
a. It would be ideal if staging/rar_register/rar_register.h was
moved to the same directory.
memrar_allocator.[ch]
---------------------
1. Address potential fragmentation issues with the memrar_allocator.
2. Hide struct memrar_allocator details/fields. They need not be
exposed to the user.
a. Forward declare struct memrar_allocator.
b. Move all three struct definitions to `memrar_allocator.c'
source file.
c. Add a memrar_allocator_largest_free_area() function, or
something like that to get access to the value of the struct
memrar_allocator "largest_free_area" field. This allows the
struct memrar_allocator fields to be completely hidden from
the user. The memrar_handler code really only needs this for
statistic gathering on-demand.
d. Do the same for the "capacity" field as the
"largest_free_area" field.
3. Move memrar_allocator.* to kernel `lib' directory since it is HW
neutral.
a. Alternatively, use lib/genalloc.c instead.
b. A kernel port of Doug Lea's malloc() implementation may also
be an option.
memrar_handler.c
----------------
1. Split user space interface (ioctl code) from core/kernel code,
e.g.:
memrar_handler.c -> memrar_core.c, memrar_user.c

@@ -1,89 +0,0 @@
What: /dev/memrar
Date: March 2010
KernelVersion: 2.6.34
Contact: Eugene Epshteyn <eugene.epshteyn@intel.com>
Description: The Intel Moorestown Restricted Access Region (RAR)
Handler driver exposes an ioctl() based interface that
allows a user to reserve and release blocks of RAR
memory.
Note: A sysfs based interface was not appropriate for the
RAR handler's usage model.
=========================================================
ioctl() Requests
=========================================================
RAR_HANDLER_RESERVE
-------------------
Description: Reserve RAR block.
Type: struct RAR_block_info
Direction: in/out
Errors: EINVAL (invalid RAR type or size)
ENOMEM (not enough RAR memory)
RAR_HANDLER_STAT
----------------
Description: Get RAR statistics.
Type: struct RAR_stat
Direction: in/out
Errors: EINVAL (invalid RAR type)
RAR_HANDLER_RELEASE
-------------------
Description: Release previously reserved RAR block.
Type: 32 bit unsigned integer
(e.g. uint32_t), i.e. the RAR "handle".
Direction: in
Errors: EINVAL (invalid RAR handle)
=========================================================
ioctl() Request Parameter Types
=========================================================
The structures referred to above are defined as
follows:
/**
* struct RAR_block_info - user space struct that
* describes RAR buffer
* @type: Type of RAR memory (e.g.,
* RAR_TYPE_VIDEO or RAR_TYPE_AUDIO) [in]
* @size: Requested size of a block in bytes to
* be reserved in RAR. [in]
* @handle: Handle that can be used to refer to
* reserved block. [out]
*
* This is the basic structure exposed to the user
* space that describes a given RAR buffer. It is used
* as the parameter for the RAR_HANDLER_RESERVE ioctl.
* The buffer's underlying bus address is not exposed
* to the user. User space code refers to the buffer
* entirely by "handle".
*/
struct RAR_block_info {
__u32 type;
__u32 size;
__u32 handle;
};
/**
* struct RAR_stat - RAR statistics structure
* @type: Type of RAR memory (e.g.,
* RAR_TYPE_VIDEO or
* RAR_TYPE_AUDIO) [in]
* @capacity: Total size of RAR memory
* region. [out]
* @largest_block_size: Size of the largest reservable
* block. [out]
*
* This structure is used for RAR_HANDLER_STAT ioctl.
*/
struct RAR_stat {
__u32 type;
__u32 capacity;
__u32 largest_block_size;
};
Lastly, the RAR_HANDLER_RELEASE ioctl expects a
"handle" to the RAR block of memory. It is a 32 bit
unsigned integer.

@@ -1,174 +0,0 @@
/*
* RAR Handler (/dev/memrar) internal driver API.
* Copyright (C) 2010 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of version 2 of the GNU General
* Public License as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be
* useful, but WITHOUT ANY WARRANTY; without even the implied
* warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
* PURPOSE. See the GNU General Public License for more details.
* You should have received a copy of the GNU General Public
* License along with this program; if not, write to the Free
* Software Foundation, Inc., 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
* The full GNU General Public License is included in this
* distribution in the file called COPYING.
*/
#ifndef _MEMRAR_H
#define _MEMRAR_H
#include <linux/ioctl.h>
#include <linux/types.h>
/**
* struct RAR_stat - RAR statistics structure
* @type: Type of RAR memory (e.g., audio vs. video)
* @capacity: Total size of RAR memory region.
* @largest_block_size: Size of the largest reservable block.
*
* This structure is used for RAR_HANDLER_STAT ioctl and for the
* RAR_get_stat() user space wrapper function.
*/
struct RAR_stat {
__u32 type;
__u32 capacity;
__u32 largest_block_size;
};
/**
* struct RAR_block_info - user space struct that describes RAR buffer
* @type: Type of RAR memory (e.g., audio vs. video)
* @size: Requested size of a block to be reserved in RAR.
* @handle: Handle that can be used to refer to reserved block.
*
* This is the basic structure exposed to the user space that
* describes a given RAR buffer. The buffer's underlying bus address
* is not exposed to the user. User space code refers to the buffer
* entirely by "handle".
*/
struct RAR_block_info {
__u32 type;
__u32 size;
__u32 handle;
};
#define RAR_IOCTL_BASE 0xE0
/* Reserve RAR block. */
#define RAR_HANDLER_RESERVE _IOWR(RAR_IOCTL_BASE, 0x00, struct RAR_block_info)
/* Release previously reserved RAR block. */
#define RAR_HANDLER_RELEASE _IOW(RAR_IOCTL_BASE, 0x01, __u32)
/* Get RAR stats. */
#define RAR_HANDLER_STAT _IOWR(RAR_IOCTL_BASE, 0x02, struct RAR_stat)
#ifdef __KERNEL__
/* -------------------------------------------------------------- */
/* Kernel Side RAR Handler Interface */
/* -------------------------------------------------------------- */
/**
* struct RAR_buffer - kernel space struct that describes RAR buffer
* @info: structure containing base RAR buffer information
* @bus_address: buffer bus address
*
* Structure that contains all information related to a given block of
* memory in RAR. It is generally only used when retrieving RAR
* related bus addresses.
*
* Note: This structure is used only by RAR-enabled drivers, and is
* not intended to be exposed to the user space.
*/
struct RAR_buffer {
struct RAR_block_info info;
dma_addr_t bus_address;
};
#if defined(CONFIG_MRST_RAR_HANDLER)
/**
* rar_reserve() - reserve RAR buffers
* @buffers: array of RAR_buffers where type and size of buffers to
* reserve are passed in, handle and bus address are
* passed out
* @count: number of RAR_buffers in the "buffers" array
*
* This function will reserve buffers in the restricted access regions
* of given types.
*
* It returns the number of successfully reserved buffers. Successful
* buffer reservations will have the corresponding bus_address field
* set to a non-zero value in the given buffers vector.
*/
extern size_t rar_reserve(struct RAR_buffer *buffers,
size_t count);
/**
* rar_release() - release RAR buffers
* @buffers: array of RAR_buffers where handles to buffers to be
* released are passed in
* @count: number of RAR_buffers in the "buffers" array
*
* This function will release RAR buffers that were retrieved through
* a call to rar_reserve() or rar_handle_to_bus() by decrementing the
* reference count. The RAR buffer will be reclaimed when the
* reference count drops to zero.
*
* It returns the number of successfully released buffers. Successful
* releases will have their handle field set to zero in the given
* buffers vector.
*/
extern size_t rar_release(struct RAR_buffer *buffers,
size_t count);
/**
* rar_handle_to_bus() - convert a vector of RAR handles to bus addresses
* @buffers: array of RAR_buffers containing handles to be
* converted to bus_addresses
* @count: number of RAR_buffers in the "buffers" array
 *
* This function will retrieve the RAR buffer bus addresses, type and
* size corresponding to the RAR handles provided in the buffers
* vector.
*
* It returns the number of successfully converted buffers. The bus
* address will be set to 0 for unrecognized handles.
*
* The reference count for each corresponding buffer in RAR will be
* incremented. Call rar_release() when done with the buffers.
*/
extern size_t rar_handle_to_bus(struct RAR_buffer *buffers,
size_t count);
#else
static inline size_t rar_reserve(struct RAR_buffer *buffers, size_t count)
{
	return 0;
}
static inline size_t rar_release(struct RAR_buffer *buffers, size_t count)
{
	return 0;
}
static inline size_t rar_handle_to_bus(struct RAR_buffer *buffers,
				size_t count)
{
	return 0;
}
#endif /* CONFIG_MRST_RAR_HANDLER */
#endif /* __KERNEL__ */
#endif /* _MEMRAR_H */

@@ -1,432 +0,0 @@
/*
* memrar_allocator 1.0: An allocator for Intel RAR.
*
* Copyright (C) 2010 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of version 2 of the GNU General
* Public License as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be
* useful, but WITHOUT ANY WARRANTY; without even the implied
* warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
* PURPOSE. See the GNU General Public License for more details.
* You should have received a copy of the GNU General Public
* License along with this program; if not, write to the Free
* Software Foundation, Inc., 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
* The full GNU General Public License is included in this
* distribution in the file called COPYING.
*
*
* ------------------------------------------------------------------
*
* This simple allocator implementation provides a
* malloc()/free()-like interface for reserving space within a
* previously reserved block of memory. It is not specific to
* any hardware, nor is it coupled with the lower level paging
* mechanism.
*
* The primary goal of this implementation is to provide a means
* to partition an arbitrary block of memory without actually
* accessing the memory or incurring any hardware side-effects
* (e.g. paging). It is, in effect, a bookkeeping mechanism for
* buffers.
*/
#include "memrar_allocator.h"
#include <linux/slab.h>
#include <linux/bug.h>
#include <linux/kernel.h>
struct memrar_allocator *memrar_create_allocator(unsigned long base,
size_t capacity,
size_t block_size)
{
struct memrar_allocator *allocator = NULL;
struct memrar_address_ranges *first_node = NULL;
/*
* Make sure the base address is aligned on a block_size
* boundary.
*
* @todo Is this necessary?
*/
/* base = ALIGN(base, block_size); */
/* Validate parameters.
*
* Make sure we can allocate the entire memory space. Zero
* capacity or block size are obviously invalid.
*/
if (base == 0
|| capacity == 0
|| block_size == 0
|| ULONG_MAX - capacity < base
|| capacity < block_size)
return allocator;
/*
* There isn't much point in creating a memory allocator that
* is only capable of holding one block but we'll allow it,
* and issue a diagnostic.
*/
WARN(capacity < block_size * 2,
"memrar: Only one block available to allocator.\n");
allocator = kmalloc(sizeof(*allocator), GFP_KERNEL);
if (allocator == NULL)
return allocator;
mutex_init(&allocator->lock);
allocator->base = base;
/* Round the capacity down to a multiple of block_size. */
allocator->capacity = (capacity / block_size) * block_size;
allocator->block_size = block_size;
allocator->largest_free_area = allocator->capacity;
/* Initialize the handle and free lists. */
INIT_LIST_HEAD(&allocator->allocated_list.list);
INIT_LIST_HEAD(&allocator->free_list.list);
first_node = kmalloc(sizeof(*first_node), GFP_KERNEL);
if (first_node == NULL) {
kfree(allocator);
allocator = NULL;
} else {
/* Full range of blocks is available. */
first_node->range.begin = base;
first_node->range.end = base + allocator->capacity;
list_add(&first_node->list,
&allocator->free_list.list);
}
return allocator;
}
void memrar_destroy_allocator(struct memrar_allocator *allocator)
{
/*
* Assume that the memory allocator lock isn't held at this
* point in time. Caller must ensure that.
*/
struct memrar_address_ranges *pos = NULL;
struct memrar_address_ranges *n = NULL;
if (allocator == NULL)
return;
mutex_lock(&allocator->lock);
/* Reclaim free list resources. */
list_for_each_entry_safe(pos,
n,
&allocator->free_list.list,
list) {
list_del(&pos->list);
kfree(pos);
}
mutex_unlock(&allocator->lock);
kfree(allocator);
}
unsigned long memrar_allocator_alloc(struct memrar_allocator *allocator,
size_t size)
{
struct memrar_address_ranges *pos = NULL;
size_t num_blocks;
unsigned long reserved_bytes;
/*
* Address of allocated buffer. We assume that zero is not a
* valid address.
*/
unsigned long addr = 0;
if (allocator == NULL || size == 0)
return addr;
/* Reserve enough blocks to hold the amount of bytes requested. */
num_blocks = DIV_ROUND_UP(size, allocator->block_size);
reserved_bytes = num_blocks * allocator->block_size;
mutex_lock(&allocator->lock);
if (reserved_bytes > allocator->largest_free_area) {
mutex_unlock(&allocator->lock);
return addr;
}
/*
* Iterate through the free list to find a suitably sized
* range of free contiguous memory blocks.
*
* We also take the opportunity to reset the size of the
* largest free area size statistic.
*/
list_for_each_entry(pos, &allocator->free_list.list, list) {
struct memrar_address_range * const fr = &pos->range;
size_t const curr_size = fr->end - fr->begin;
if (curr_size >= reserved_bytes && addr == 0) {
struct memrar_address_range *range = NULL;
struct memrar_address_ranges * const new_node =
kmalloc(sizeof(*new_node), GFP_KERNEL);
if (new_node == NULL)
break;
list_add(&new_node->list,
&allocator->allocated_list.list);
/*
* Carve out area of memory from end of free
* range.
*/
range = &new_node->range;
range->end = fr->end;
fr->end -= reserved_bytes;
range->begin = fr->end;
addr = range->begin;
/*
* Check if largest area has decreased in
* size. We'll need to continue scanning for
* the next largest area if it has.
*/
if (curr_size == allocator->largest_free_area)
allocator->largest_free_area -=
reserved_bytes;
else
break;
}
/*
* Reset largest free area size statistic as needed,
* but only if we've actually allocated memory.
*/
if (addr != 0
&& curr_size > allocator->largest_free_area) {
allocator->largest_free_area = curr_size;
break;
}
}
mutex_unlock(&allocator->lock);
return addr;
}
long memrar_allocator_free(struct memrar_allocator *allocator,
unsigned long addr)
{
struct list_head *pos = NULL;
struct list_head *tmp = NULL;
struct list_head *dst = NULL;
struct memrar_address_ranges *allocated = NULL;
struct memrar_address_range const *handle = NULL;
unsigned long old_end = 0;
unsigned long new_chunk_size = 0;
if (allocator == NULL)
return -EINVAL;
if (addr == 0)
return 0; /* Ignore "free(0)". */
mutex_lock(&allocator->lock);
/* Find the corresponding handle. */
list_for_each_entry(allocated,
&allocator->allocated_list.list,
list) {
if (allocated->range.begin == addr) {
handle = &allocated->range;
break;
}
}
/* No such buffer created by this allocator. */
if (handle == NULL) {
mutex_unlock(&allocator->lock);
return -EFAULT;
}
/*
* Coalesce adjacent chunks of memory if possible.
*
* @note This isn't full blown coalescing since we're only
* coalescing at most three chunks of memory.
*/
list_for_each_safe(pos, tmp, &allocator->free_list.list) {
/* @todo O(n) performance. Optimize. */
struct memrar_address_range * const chunk =
&list_entry(pos,
struct memrar_address_ranges,
list)->range;
/* Extend size of existing free adjacent chunk. */
if (chunk->end == handle->begin) {
/*
* Chunk "less than" the one we're
* freeing is adjacent.
*
* Before:
*
* +-----+------+
* |chunk|handle|
* +-----+------+
*
* After:
*
* +------------+
* | chunk |
* +------------+
*/
struct memrar_address_ranges const * const next =
list_entry(pos->next,
struct memrar_address_ranges,
list);
chunk->end = handle->end;
/*
* Now check if next free chunk is adjacent to
* the current extended free chunk.
*
* Before:
*
* +------------+----+
* | chunk |next|
* +------------+----+
*
* After:
*
* +-----------------+
* | chunk |
* +-----------------+
*/
if (!list_is_singular(pos)
&& chunk->end == next->range.begin) {
chunk->end = next->range.end;
list_del(pos->next);
kfree(next);
}
list_del(&allocated->list);
new_chunk_size = chunk->end - chunk->begin;
goto exit_memrar_free;
} else if (handle->end == chunk->begin) {
/*
* Chunk "greater than" the one we're
* freeing is adjacent.
*
* Before:
*
* +------+-----+
* |handle|chunk|
* +------+-----+
*
* After:
*
* +------------+
* | chunk |
* +------------+
*/
struct memrar_address_ranges const * const prev =
list_entry(pos->prev,
struct memrar_address_ranges,
list);
chunk->begin = handle->begin;
/*
* Now check if previous free chunk is
* adjacent to the current extended free
* chunk.
*
*
* Before:
*
* +----+------------+
* |prev| chunk |
* +----+------------+
*
* After:
*
* +-----------------+
* | chunk |
* +-----------------+
*/
if (!list_is_singular(pos)
&& prev->range.end == chunk->begin) {
chunk->begin = prev->range.begin;
list_del(pos->prev);
kfree(prev);
}
list_del(&allocated->list);
new_chunk_size = chunk->end - chunk->begin;
goto exit_memrar_free;
} else if (chunk->end < handle->begin
&& chunk->end > old_end) {
/* Keep track of where the entry could be
* potentially moved from the "allocated" list
* to the "free" list if coalescing doesn't
* occur, making sure the "free" list remains
* sorted.
*/
old_end = chunk->end;
dst = pos;
}
}
/*
* Nothing to coalesce.
*
* Move the entry from the "allocated" list to the "free"
* list.
*/
list_move(&allocated->list, dst);
new_chunk_size = handle->end - handle->begin;
allocated = NULL;
exit_memrar_free:
if (new_chunk_size > allocator->largest_free_area)
allocator->largest_free_area = new_chunk_size;
mutex_unlock(&allocator->lock);
kfree(allocated);
return 0;
}
/*
Local Variables:
c-file-style: "linux"
End:
*/

@@ -1,149 +0,0 @@
/*
* Copyright (C) 2010 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of version 2 of the GNU General
* Public License as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be
* useful, but WITHOUT ANY WARRANTY; without even the implied
* warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
* PURPOSE. See the GNU General Public License for more details.
* You should have received a copy of the GNU General Public
* License along with this program; if not, write to the Free
* Software Foundation, Inc., 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
* The full GNU General Public License is included in this
* distribution in the file called COPYING.
*/
#ifndef MEMRAR_ALLOCATOR_H
#define MEMRAR_ALLOCATOR_H
#include <linux/mutex.h>
#include <linux/list.h>
#include <linux/types.h>
#include <linux/kernel.h>
/**
* struct memrar_address_range - struct that describes a memory range
* @begin: Beginning of available address range.
* @end: End of available address range, one past the end,
* i.e. [begin, end).
*/
struct memrar_address_range {
/* private: internal use only */
unsigned long begin;
unsigned long end;
};
/**
* struct memrar_address_ranges - list of areas of memory.
* @list: Linked list of address ranges.
* @range: Memory address range corresponding to given list node.
*/
struct memrar_address_ranges {
/* private: internal use only */
struct list_head list;
struct memrar_address_range range;
};
/**
* struct memrar_allocator - encapsulation of the memory allocator state
* @lock: Lock used to synchronize access to the memory
* allocator state.
* @base: Base (start) address of the allocator memory
* space.
* @capacity: Size of the allocator memory space in bytes.
* @block_size: The size in bytes of individual blocks within
* the allocator memory space.
* @largest_free_area: Largest free area of memory in the allocator
* in bytes.
* @allocated_list: List of allocated memory block address
* ranges.
* @free_list: List of free address ranges.
*
* This structure contains all memory allocator state, including the
* base address, capacity, free list, lock, etc.
*/
struct memrar_allocator {
/* private: internal use only */
struct mutex lock;
unsigned long base;
size_t capacity;
size_t block_size;
size_t largest_free_area;
struct memrar_address_ranges allocated_list;
struct memrar_address_ranges free_list;
};
/**
* memrar_create_allocator() - create a memory allocator
* @base: Address at which the memory allocator begins.
* @capacity: Desired size of the memory allocator. This value must
* be larger than the block_size, ideally more than twice
* as large since there wouldn't be much point in using a
* memory allocator otherwise.
* @block_size: The size of individual blocks within the memory
* allocator. This value must be smaller than the
* capacity.
*
* Create a memory allocator with the given capacity and block size.
* The capacity will be reduced to be a multiple of the block size, if
* necessary.
*
* Returns an instance of the memory allocator, if creation succeeds,
* otherwise zero if creation fails. Failure may occur if not enough
* kernel memory exists to create the memrar_allocator instance
* itself, or if the capacity and block_size arguments are not
* compatible or make sense.
*/
struct memrar_allocator *memrar_create_allocator(unsigned long base,
size_t capacity,
size_t block_size);
/**
* memrar_destroy_allocator() - destroy allocator
* @allocator: The allocator being destroyed.
*
* Reclaim resources held by the memory allocator. The caller must
* explicitly free all memory reserved by memrar_allocator_alloc()
* prior to calling this function. Otherwise leaks will occur.
*/
void memrar_destroy_allocator(struct memrar_allocator *allocator);
/**
* memrar_allocator_alloc() - reserve an area of memory of given size
* @allocator: The allocator instance being used to reserve buffer.
* @size: The size in bytes of the buffer to allocate.
*
* This function reserves an area of memory managed by the given
* allocator. It returns zero if allocation was not possible.
* Failure may occur if the allocator no longer has space available.
*/
unsigned long memrar_allocator_alloc(struct memrar_allocator *allocator,
size_t size);
/**
* memrar_allocator_free() - release buffer starting at given address
* @allocator: The allocator instance being used to release the buffer.
* @address: The address of the buffer being released.
*
* Release an area of memory starting at the given address. Failure
* could occur if the given address is not in the address space
* managed by the allocator. Returns zero on success or an errno
* (negative value) on failure.
*/
long memrar_allocator_free(struct memrar_allocator *allocator,
unsigned long address);
#endif /* MEMRAR_ALLOCATOR_H */
/*
Local Variables:
c-file-style: "linux"
End:
*/

File diff suppressed because it is too large.