Merge branch 'akpm' (incoming from Andrew)

Merge first batch of fixes from Andrew Morton:

 - A couple of kthread changes

 - A few minor audit patches

 - A number of fbdev patches.  Florian remains AWOL so I'm picking up
   some of these.

 - A few kbuild things

 - ocfs2 updates

 - Almost all of the MM queue

(And in the meantime, I already have the second big batch from Andrew
pending in my mailbox ;^)

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (149 commits)
  memcg: take reference before releasing rcu_read_lock
  mem hotunplug: fix kfree() of bootmem memory
  mm/Kconfig: add an option to disable bounce
  mm, nobootmem: do memset() after memblock_reserve()
  mm, nobootmem: clean-up of free_low_memory_core_early()
  fs/buffer.c: remove unnecessary init operation after allocating buffer_head.
  numa, cpu hotplug: change links of CPU and node when changing node number by onlining CPU
  mm: fix memory_hotplug.c printk format warning
  mm: swap: mark swap pages writeback before queueing for direct IO
  swap: redirty page if page write fails on swap file
  mm, memcg: give exiting processes access to memory reserves
  thp: fix huge zero page logic for page with pfn == 0
  memcg: avoid accessing memcg after releasing reference
  fs: fix fsync() error reporting
  memblock: fix missing comment of memblock_insert_region()
  mm: Remove unused parameter of pages_correctly_reserved()
  firmware, memmap: fix firmware_map_entry leak
  mm/vmstat: add note on safety of drain_zonestat
  mm: thp: add split tail pages to shrink page list in page reclaim
  mm: allow for outstanding swap writeback accounting
  ...
commit 73154383f0
Author: Linus Torvalds
Date:   2013-04-29 17:29:08 -07:00

166 changed files with 3444 additions and 2183 deletions

@@ -40,6 +40,7 @@ Features:
- soft limit
- moving (recharging) account at moving a task is selectable.
- usage threshold notifier
- memory pressure notifier
- oom-killer disable knob and oom-notifier
- Root cgroup has no limit controls.
@@ -65,6 +66,7 @@ Brief summary of control files.
memory.stat # show various statistics
memory.use_hierarchy # set/show hierarchical account enabled
memory.force_empty # trigger forced move charge to parent
memory.pressure_level # set memory pressure notifications
memory.swappiness # set/show swappiness parameter of vmscan
(See sysctl's vm.swappiness)
memory.move_charge_at_immigrate # set/show controls of moving charges
@@ -762,7 +764,73 @@ At reading, current status of OOM is shown.
under_oom 0 or 1 (if 1, the memory cgroup is under OOM, tasks may
be stopped.)
11. TODO
11. Memory Pressure
The pressure level notifications can be used to monitor the memory
allocation cost; based on the pressure, applications can implement
different strategies of managing their memory resources. The pressure
levels are defined as follows:

The "low" level means that the system is reclaiming memory for new
allocations. Monitoring this reclaiming activity might be useful for
maintaining cache level. Upon notification, the program (typically
"Activity Manager") might analyze vmstat and act in advance (e.g.
prematurely shut down unimportant services).

The "medium" level means that the system is experiencing medium memory
pressure, the system might be swapping, paging out active file caches,
etc. Upon this event applications may decide to further analyze
vmstat/zoneinfo/memcg or internal memory usage statistics and free any
resources that can be easily reconstructed or re-read from a disk.

The "critical" level means that the system is actively thrashing, it is
about to run out of memory (OOM) or even the in-kernel OOM killer is on
its way to trigger. Applications should do whatever they can to help the
system. It might be too late to consult with vmstat or any other
statistics, so it's advisable to take immediate action.

The events are propagated upward until the event is handled, i.e. the
events are not pass-through. Here is what this means: for example you have
three cgroups: A->B->C. Now you set up an event listener on cgroups A, B
and C, and suppose group C experiences some pressure. In this situation,
only group C will receive the notification, i.e. groups A and B will not
receive it. This is done to avoid excessive "broadcasting" of messages,
which disturbs the system and which is especially bad if we are low on
memory or thrashing. So, organize the cgroups wisely, or propagate the
events manually (or, ask us to implement the pass-through events,
explaining why you would need them.)

The file memory.pressure_level is only used to set up an eventfd. To
register a notification, an application must:

- create an eventfd using eventfd(2);
- open memory.pressure_level;
- write a string like "<event_fd> <fd of memory.pressure_level> <level>"
  to cgroup.event_control.

The application will then be notified through the eventfd when memory
pressure is at the specific level (or higher). Read/write operations on
memory.pressure_level are not implemented; a minimal registration sketch
in C follows below.
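For illustration, here is that sketch: a minimal, hedged C program. The
paths assume the "foo" cgroup created by the test script below, and the
"low" level is arbitrary; error handling is abbreviated.

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/eventfd.h>
	#include <unistd.h>

	int main(void)
	{
		/* paths assume the "foo" cgroup from the test below */
		int efd = eventfd(0, 0);
		int pfd = open("/sys/fs/cgroup/memory/foo/memory.pressure_level",
			       O_RDONLY);
		int cfd = open("/sys/fs/cgroup/memory/foo/cgroup.event_control",
			       O_WRONLY);
		char line[64];
		uint64_t count;

		if (efd < 0 || pfd < 0 || cfd < 0) {
			perror("setup");
			return 1;
		}

		/* "<event_fd> <fd of memory.pressure_level> <level>" */
		snprintf(line, sizeof(line), "%d %d low", efd, pfd);
		if (write(cfd, line, strlen(line) + 1) < 0) {
			perror("cgroup.event_control");
			return 1;
		}

		/* blocks until the kernel signals "low" (or higher) pressure */
		if (read(efd, &count, sizeof(count)) == sizeof(count))
			printf("memory pressure event, count=%llu\n",
			       (unsigned long long)count);
		return 0;
	}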
Test:

Here is a small script example that makes a new cgroup, sets up a
memory limit, sets up a notification in the cgroup, and then drives the
cgroup into critical memory pressure:
# cd /sys/fs/cgroup/memory/
# mkdir foo
# cd foo
# cgroup_event_listener memory.pressure_level low &
# echo 8000000 > memory.limit_in_bytes
# echo 8000000 > memory.memsw.limit_in_bytes
# echo $$ > tasks
# dd if=/dev/zero | read x
(Expect a bunch of notifications, and eventually, the oom-killer will
trigger.)
12. TODO
1. Add support for accounting huge pages (as a separate controller)
2. Make per-cgroup scanner reclaim not-shared pages first

@@ -18,6 +18,7 @@ files can be found in mm/swap.c.
Currently, these files are in /proc/sys/vm:
- admin_reserve_kbytes
- block_dump
- compact_memory
- dirty_background_bytes
@@ -53,11 +54,41 @@ Currently, these files are in /proc/sys/vm:
- percpu_pagelist_fraction
- stat_interval
- swappiness
- user_reserve_kbytes
- vfs_cache_pressure
- zone_reclaim_mode
==============================================================
admin_reserve_kbytes

The amount of free memory in the system that should be reserved for users
with the capability cap_sys_admin.

admin_reserve_kbytes defaults to min(3% of free pages, 8MB)

That should provide enough for the admin to log in and kill a process,
if necessary, under the default overcommit 'guess' mode.

Systems running under overcommit 'never' should increase this to account
for the full Virtual Memory Size of programs used to recover. Otherwise,
root may not be able to log in to recover the system.

How do you calculate a minimum useful reserve?

sshd or login + bash (or some other shell) + top (or ps, kill, etc.)

For overcommit 'guess', we can sum resident set sizes (RSS).
On x86_64 this is about 8MB.

For overcommit 'never', we can take the max of their virtual sizes (VSZ)
and add the sum of their RSS.
On x86_64 this is about 128MB.

Changing this takes effect whenever an application requests memory.
==============================================================
block_dump
block_dump enables block I/O debugging when set to a nonzero value. More
@@ -542,6 +573,7 @@ memory until it actually runs out.
When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.
Note that user_reserve_kbytes affects this policy.
This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
@@ -645,6 +677,24 @@ The default value is 60.
==============================================================
user_reserve_kbytes

When overcommit_memory is set to 2, "never overcommit" mode, reserve
min(3% of current process size, user_reserve_kbytes) of free memory.
This is intended to prevent a user from starting a single memory hogging
process, such that they cannot recover (kill the hog).

user_reserve_kbytes defaults to min(3% of the current process size, 128MB).

If this is reduced to zero, then the user will be allowed to allocate
all free memory with a single process, minus admin_reserve_kbytes.
Any subsequent attempts to execute a command will result in
"fork: Cannot allocate memory".

Changing this takes effect whenever an application requests memory.
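How the two reserves combine can be sketched in userspace. The following
hedged C example paraphrases the 'never' mode accounting (the real logic
lives in __vm_enough_memory(); total_vm/32 approximates 3%, and all names
and numbers below are illustrative, not the kernel's):

	#include <stdio.h>

	/* hedged paraphrase of overcommit 'never' accounting; values in pages */
	static unsigned long allowed_pages(unsigned long commit_limit,
					   unsigned long task_total_vm,
					   unsigned long admin_reserve,
					   unsigned long user_reserve,
					   int cap_sys_admin)
	{
		unsigned long allowed = commit_limit;
		unsigned long three_pct = task_total_vm / 32;	/* ~3% of process */

		if (!cap_sys_admin)
			allowed -= admin_reserve;	/* keep room for root */

		/* don't let one process grow so big the user can't recover */
		allowed -= (three_pct < user_reserve) ? three_pct : user_reserve;
		return allowed;
	}

	int main(void)
	{
		/* illustrative numbers: 1 GiB commit limit, 4 KiB pages */
		unsigned long admin = (8UL << 20) >> 12;	/* 8MB default */
		unsigned long user = (128UL << 20) >> 12;	/* 128MB default */

		printf("allowed: %lu pages\n",
		       allowed_pages(262144, 65536, admin, user, 0));
		return 0;
	}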
==============================================================
vfs_cache_pressure
------------------

@@ -8,7 +8,9 @@ The Linux kernel supports the following overcommit handling modes
default.
1 - Always overcommit. Appropriate for some scientific
applications.
applications. Classic example is code using sparse arrays
and just relying on the virtual memory consisting almost
entirely of zero pages.
2 - Don't overcommit. The total address space commit
for the system is not permitted to exceed swap + a
@@ -18,6 +20,10 @@ The Linux kernel supports the following overcommit handling modes
pages but will receive errors on memory allocation as
appropriate.
Useful for applications that want to guarantee their
memory allocations will be available in the future
without having to initialize every page.
The overcommit policy is set via the sysctl `vm.overcommit_memory'.
The overcommit percentage is set via `vm.overcommit_ratio'.
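A hedged example of the sparse-array pattern mentioned under mode 1
(assumes a 64-bit system; under mode 2 the malloc() would typically fail
instead of succeeding lazily):

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		size_t len = 64UL << 30;	/* 64 GiB of virtual address space */
		char *sparse = malloc(len);	/* fine under overcommit mode 0/1 */

		if (!sparse) {
			perror("malloc");	/* expected under mode 2 */
			return 1;
		}

		/* Touch only eight pages; the rest of the array stays
		 * untouched and consumes no physical memory. */
		for (size_t i = 0; i < len; i += len / 8)
			sparse[i] = 1;

		puts("sparse array mapped; only the touched pages are resident");
		free(sparse);
		return 0;
	}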

@@ -185,7 +185,6 @@ nautilus_machine_check(unsigned long vector, unsigned long la_ptr)
mb();
}
extern void free_reserved_mem(void *, void *);
extern void pcibios_claim_one_bus(struct pci_bus *);
static struct resource irongate_io = {
@@ -239,8 +238,8 @@ nautilus_init_pci(void)
if (pci_mem < memtop)
memtop = pci_mem;
if (memtop > alpha_mv.min_mem_address) {
free_reserved_mem(__va(alpha_mv.min_mem_address),
__va(memtop));
free_reserved_area((unsigned long)__va(alpha_mv.min_mem_address),
(unsigned long)__va(memtop), 0, NULL);
printk("nautilus_init_pci: %ldk freed\n",
(memtop - alpha_mv.min_mem_address) >> 10);
}
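For reference across the many hunks below: free_reserved_area() is the
common helper this series introduces in mm/page_alloc.c to replace the
open-coded loops. A paraphrased sketch, not the verbatim source (the
poison argument prefills pages, e.g. with POISON_FREE_INITMEM, and the
name is used for the log line):

	/* sketch of the common helper (mm/page_alloc.c), paraphrased */
	unsigned long free_reserved_area(unsigned long start, unsigned long end,
					 int poison, char *s)
	{
		void *pos;
		unsigned long pages = 0;

		start = PAGE_ALIGN(start);
		end &= PAGE_MASK;
		for (pos = (void *)start; pos < (void *)end;
		     pos += PAGE_SIZE, pages++) {
			if (poison)
				memset(pos, poison, PAGE_SIZE);
			free_reserved_page(virt_to_page(pos));
		}

		if (pages && s)
			pr_info("Freeing %s memory: %luK\n",
				s, pages << (PAGE_SHIFT - 10));

		return pages;
	}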

@@ -31,6 +31,7 @@
#include <asm/console.h>
#include <asm/tlb.h>
#include <asm/setup.h>
#include <asm/sections.h>
extern void die_if_kernel(char *,struct pt_regs *,long);
@@ -281,8 +282,6 @@ printk_memory_info(void)
{
unsigned long codesize, reservedpages, datasize, initsize, tmp;
extern int page_is_ram(unsigned long) __init;
extern char _text, _etext, _data, _edata;
extern char __init_begin, __init_end;
/* printk all informations */
reservedpages = 0;
@@ -317,33 +316,16 @@ mem_init(void)
}
#endif /* CONFIG_DISCONTIGMEM */
void
free_reserved_mem(void *start, void *end)
{
void *__start = start;
for (; __start < end; __start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(__start));
init_page_count(virt_to_page(__start));
free_page((long)__start);
totalram_pages++;
}
}
void
free_initmem(void)
{
extern char __init_begin, __init_end;
free_reserved_mem(&__init_begin, &__init_end);
printk ("Freeing unused kernel memory: %ldk freed\n",
(&__init_end - &__init_begin) >> 10);
free_initmem_default(0);
}
#ifdef CONFIG_BLK_DEV_INITRD
void
free_initrd_mem(unsigned long start, unsigned long end)
{
free_reserved_mem((void *)start, (void *)end);
printk ("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
free_reserved_area(start, end, 0, "initrd");
}
#endif
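The page-level helpers and free_initmem_default() used in these
conversions come from include/linux/mm.h; roughly (a paraphrase, not the
verbatim header):

	/* paraphrased include/linux/mm.h helpers */
	static inline void __free_reserved_page(struct page *page)
	{
		ClearPageReserved(page);
		init_page_count(page);
		__free_page(page);
	}

	static inline void free_reserved_page(struct page *page)
	{
		__free_reserved_page(page);
		totalram_pages++;
	}

	static inline void mark_page_reserved(struct page *page)
	{
		SetPageReserved(page);
		totalram_pages--;
	}

	/* free the whole .init section, optionally poisoning it first */
	static inline unsigned long free_initmem_default(int poison)
	{
		extern char __init_begin[], __init_end[];

		return free_reserved_area(PAGE_ALIGN((unsigned long)&__init_begin),
					  ((unsigned long)&__init_end) & PAGE_MASK,
					  poison, "unused kernel");
	}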

@@ -17,6 +17,7 @@
#include <asm/hwrpb.h>
#include <asm/pgalloc.h>
#include <asm/sections.h>
pg_data_t node_data[MAX_NUMNODES];
EXPORT_SYMBOL(node_data);
@@ -325,8 +326,6 @@ void __init mem_init(void)
{
unsigned long codesize, reservedpages, datasize, initsize, pfn;
extern int page_is_ram(unsigned long) __init;
extern char _text, _etext, _data, _edata;
extern char __init_begin, __init_end;
unsigned long nid, i;
high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);

@@ -144,37 +144,18 @@ void __init mem_init(void)
PAGES_TO_KB(reserved_pages));
}
static void __init free_init_pages(const char *what, unsigned long begin,
unsigned long end)
{
unsigned long addr;
pr_info("Freeing %s: %ldk [%lx] to [%lx]\n",
what, TO_KB(end - begin), begin, end);
/* need to check that the page we free is not a partial page */
for (addr = begin; addr + PAGE_SIZE <= end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
}
/*
* free_initmem: Free all the __init memory.
*/
void __init_refok free_initmem(void)
{
free_init_pages("unused kernel memory",
(unsigned long)__init_begin,
(unsigned long)__init_end);
free_initmem_default(0);
}
#ifdef CONFIG_BLK_DEV_INITRD
void __init free_initrd_mem(unsigned long start, unsigned long end)
{
free_init_pages("initrd memory", start, end);
free_reserved_area(start, end, 0, "initrd");
}
#endif

@@ -60,6 +60,15 @@ extern void __pgd_error(const char *file, int line, pgd_t);
*/
#define FIRST_USER_ADDRESS PAGE_SIZE
/*
* Use TASK_SIZE as the ceiling argument for free_pgtables() and
* free_pgd_range() to avoid freeing the modules pmd when LPAE is enabled (pmd
* page shared between user and kernel).
*/
#ifdef CONFIG_ARM_LPAE
#define USER_PGTABLES_CEILING TASK_SIZE
#endif
/*
* The pgprot_* and protection_map entries will be fixed up in runtime
* to include the cachable and bufferable bits based on memory policy,

@@ -99,6 +99,9 @@ void show_mem(unsigned int filter)
printk("Mem-info:\n");
show_free_areas(filter);
if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
return;
for_each_bank (i, mi) {
struct membank *bank = &mi->bank[i];
unsigned int pfn1, pfn2;
@@ -424,24 +427,6 @@ void __init bootmem_init(void)
max_pfn = max_high - PHYS_PFN_OFFSET;
}
static inline int free_area(unsigned long pfn, unsigned long end, char *s)
{
unsigned int pages = 0, size = (end - pfn) << (PAGE_SHIFT - 10);
for (; pfn < end; pfn++) {
struct page *page = pfn_to_page(pfn);
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
pages++;
}
if (size && s)
printk(KERN_INFO "Freeing %s memory: %dK\n", s, size);
return pages;
}
/*
* Poison init memory with an undefined instruction (ARM) or a branch to an
* undefined instruction (Thumb).
@@ -534,6 +519,14 @@ static void __init free_unused_memmap(struct meminfo *mi)
#endif
}
#ifdef CONFIG_HIGHMEM
static inline void free_area_high(unsigned long pfn, unsigned long end)
{
for (; pfn < end; pfn++)
free_highmem_page(pfn_to_page(pfn));
}
#endif
static void __init free_highpages(void)
{
#ifdef CONFIG_HIGHMEM
@@ -569,8 +562,7 @@ static void __init free_highpages(void)
if (res_end > end)
res_end = end;
if (res_start != start)
totalhigh_pages += free_area(start, res_start,
NULL);
free_area_high(start, res_start);
start = res_end;
if (start == end)
break;
@@ -578,9 +570,8 @@ static void __init free_highpages(void)
/* And now free anything which remains */
if (start < end)
totalhigh_pages += free_area(start, end, NULL);
free_area_high(start, end);
}
totalram_pages += totalhigh_pages;
#endif
}
@@ -609,8 +600,7 @@ void __init mem_init(void)
#ifdef CONFIG_SA1111
/* now that our DMA memory is actually so designated, we can free it */
totalram_pages += free_area(PHYS_PFN_OFFSET,
__phys_to_pfn(__pa(swapper_pg_dir)), NULL);
free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
#endif
free_highpages();
@@ -738,16 +728,12 @@ void free_initmem(void)
extern char __tcm_start, __tcm_end;
poison_init_mem(&__tcm_start, &__tcm_end - &__tcm_start);
totalram_pages += free_area(__phys_to_pfn(__pa(&__tcm_start)),
__phys_to_pfn(__pa(&__tcm_end)),
"TCM link");
free_reserved_area(&__tcm_start, &__tcm_end, 0, "TCM link");
#endif
poison_init_mem(__init_begin, __init_end - __init_begin);
if (!machine_is_integrator() && !machine_is_cintegrator())
totalram_pages += free_area(__phys_to_pfn(__pa(__init_begin)),
__phys_to_pfn(__pa(__init_end)),
"init");
free_initmem_default(0);
}
#ifdef CONFIG_BLK_DEV_INITRD
@@ -758,9 +744,7 @@ void free_initrd_mem(unsigned long start, unsigned long end)
{
if (!keep_initrd) {
poison_init_mem((void *)start, PAGE_ALIGN(end) - start);
totalram_pages += free_area(__phys_to_pfn(__pa(start)),
__phys_to_pfn(__pa(end)),
"initrd");
free_reserved_area(start, end, 0, "initrd");
}
}
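For the highmem cases, the series pairs this with a free_highmem_page()
helper that also keeps the highmem counters consistent; roughly
(paraphrased, including the zone managed-pages bookkeeping):

	/* paraphrased mm/page_alloc.c helper for highmem pages */
	void free_highmem_page(struct page *page)
	{
		__free_reserved_page(page);
		totalram_pages++;
		page_zone(page)->managed_pages++;
		totalhigh_pages++;
	}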

@@ -197,24 +197,6 @@ void __init bootmem_init(void)
max_pfn = max_low_pfn = max;
}
static inline int free_area(unsigned long pfn, unsigned long end, char *s)
{
unsigned int pages = 0, size = (end - pfn) << (PAGE_SHIFT - 10);
for (; pfn < end; pfn++) {
struct page *page = pfn_to_page(pfn);
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
pages++;
}
if (size && s)
pr_info("Freeing %s memory: %dK\n", s, size);
return pages;
}
/*
* Poison init memory with an undefined instruction (0x0).
*/
@@ -405,9 +387,7 @@ void __init mem_init(void)
void free_initmem(void)
{
poison_init_mem(__init_begin, __init_end - __init_begin);
totalram_pages += free_area(__phys_to_pfn(__pa(__init_begin)),
__phys_to_pfn(__pa(__init_end)),
"init");
free_initmem_default(0);
}
#ifdef CONFIG_BLK_DEV_INITRD
@@ -418,9 +398,7 @@ void free_initrd_mem(unsigned long start, unsigned long end)
{
if (!keep_initrd) {
poison_init_mem((void *)start, PAGE_ALIGN(end) - start);
totalram_pages += free_area(__phys_to_pfn(__pa(start)),
__phys_to_pfn(__pa(end)),
"initrd");
free_reserved_area(start, end, 0, "initrd");
}
}

@@ -391,17 +391,14 @@ int kern_addr_valid(unsigned long addr)
}
#ifdef CONFIG_SPARSEMEM_VMEMMAP
#ifdef CONFIG_ARM64_64K_PAGES
int __meminit vmemmap_populate(struct page *start_page,
unsigned long size, int node)
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
{
return vmemmap_populate_basepages(start_page, size, node);
return vmemmap_populate_basepages(start, end, node);
}
#else /* !CONFIG_ARM64_64K_PAGES */
int __meminit vmemmap_populate(struct page *start_page,
unsigned long size, int node)
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
{
unsigned long addr = (unsigned long)start_page;
unsigned long end = (unsigned long)(start_page + size);
unsigned long addr = start;
unsigned long next;
pgd_t *pgd;
pud_t *pud;
@@ -434,7 +431,7 @@ int __meminit vmemmap_populate(struct page *start_page,
return 0;
}
#endif /* CONFIG_ARM64_64K_PAGES */
void vmemmap_free(struct page *memmap, unsigned long nr_pages)
void vmemmap_free(unsigned long start, unsigned long end)
{
}
#endif /* CONFIG_SPARSEMEM_VMEMMAP */
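The interface change shown here repeats across several architectures
below; after the series the sparse-vmemmap hooks take a virtual address
range instead of a struct page pointer plus a count. The declarations (an
assumed paraphrase of include/linux/mm.h after the series) become:

	/* assumed post-series declarations (include/linux/mm.h) */
	int vmemmap_populate(unsigned long start, unsigned long end, int node);
	void vmemmap_free(unsigned long start, unsigned long end);
	int vmemmap_populate_basepages(unsigned long start, unsigned long end,
				       int node);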

@@ -146,34 +146,14 @@ void __init mem_init(void)
initsize >> 10);
}
static inline void free_area(unsigned long addr, unsigned long end, char *s)
{
unsigned int size = (end - addr) >> 10;
for (; addr < end; addr += PAGE_SIZE) {
struct page *page = virt_to_page(addr);
ClearPageReserved(page);
init_page_count(page);
free_page(addr);
totalram_pages++;
}
if (size && s)
printk(KERN_INFO "Freeing %s memory: %dK (%lx - %lx)\n",
s, size, end - (size << 10), end);
}
void free_initmem(void)
{
free_area((unsigned long)__init_begin, (unsigned long)__init_end,
"init");
free_initmem_default(0);
}
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
free_area(start, end, "initrd");
free_reserved_area(start, end, 0, "initrd");
}
#endif

@@ -103,7 +103,7 @@ void __init mem_init(void)
max_mapnr = num_physpages = MAP_NR(high_memory);
printk(KERN_DEBUG "Kernel managed physical pages: %lu\n", num_physpages);
/* This will put all memory onto the freelists. */
/* This will put all low memory onto the freelists. */
totalram_pages = free_all_bootmem();
reservedpages = 0;
@@ -129,24 +129,11 @@ void __init mem_init(void)
initk, codek, datak, DMA_UNCACHED_REGION >> 10, (reservedpages << (PAGE_SHIFT-10)));
}
static void __init free_init_pages(const char *what, unsigned long begin, unsigned long end)
{
unsigned long addr;
/* next to check that the page we free is not a partial page */
for (addr = begin; addr + PAGE_SIZE <= end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
printk(KERN_INFO "Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
}
#ifdef CONFIG_BLK_DEV_INITRD
void __init free_initrd_mem(unsigned long start, unsigned long end)
{
#ifndef CONFIG_MPU
free_init_pages("initrd memory", start, end);
free_reserved_area(start, end, 0, "initrd");
#endif
}
#endif
@@ -154,10 +141,7 @@ void __init free_initrd_mem(unsigned long start, unsigned long end)
void __init_refok free_initmem(void)
{
#if defined CONFIG_RAMKERNEL && !defined CONFIG_MPU
free_init_pages("unused kernel memory",
(unsigned long)(&__init_begin),
(unsigned long)(&__init_end));
free_initmem_default(0);
if (memory_start == (unsigned long)(&__init_end))
memory_start = (unsigned long)(&__init_begin);
#endif

@@ -77,37 +77,11 @@ void __init mem_init(void)
#ifdef CONFIG_BLK_DEV_INITRD
void __init free_initrd_mem(unsigned long start, unsigned long end)
{
int pages = 0;
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
pages++;
}
printk(KERN_INFO "Freeing initrd memory: %luk freed\n",
(pages * PAGE_SIZE) >> 10);
free_reserved_area(start, end, 0, "initrd");
}
#endif
void __init free_initmem(void)
{
unsigned long addr;
/*
* The following code should be cool even if these sections
* are not page aligned.
*/
addr = PAGE_ALIGN((unsigned long)(__init_begin));
/* next to check that the page we free is not a partial page */
for (; addr + PAGE_SIZE < (unsigned long)(__init_end);
addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
printk(KERN_INFO "Freeing unused kernel memory: %dK freed\n",
(int) ((addr - PAGE_ALIGN((long) &__init_begin)) >> 10));
free_initmem_default(0);
}

@@ -12,12 +12,10 @@
#include <linux/init.h>
#include <linux/bootmem.h>
#include <asm/tlb.h>
#include <asm/sections.h>
unsigned long empty_zero_page;
extern char _stext, _edata, _etext; /* From linkerscript */
extern char __init_begin, __init_end;
void __init
mem_init(void)
{
@@ -67,15 +65,5 @@ mem_init(void)
void
free_initmem(void)
{
unsigned long addr;
addr = (unsigned long)(&__init_begin);
for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
printk (KERN_INFO "Freeing unused kernel memory: %luk freed\n",
(unsigned long)((&__init_end - &__init_begin) >> 10));
free_initmem_default(0);
}

@@ -122,7 +122,7 @@ void __init mem_init(void)
#endif
int codek = 0, datak = 0;
/* this will put all memory onto the freelists */
/* this will put all low memory onto the freelists */
totalram_pages = free_all_bootmem();
#ifdef CONFIG_MMU
@@ -131,14 +131,8 @@ void __init mem_init(void)
datapages++;
#ifdef CONFIG_HIGHMEM
for (pfn = num_physpages - 1; pfn >= num_mappedpages; pfn--) {
struct page *page = &mem_map[pfn];
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
totalram_pages++;
}
for (pfn = num_physpages - 1; pfn >= num_mappedpages; pfn--)
free_highmem_page(&mem_map[pfn]);
#endif
codek = ((unsigned long) &_etext - (unsigned long) &_stext) >> 10;
@@ -168,21 +162,7 @@ void __init mem_init(void)
void free_initmem(void)
{
#if defined(CONFIG_RAMKERNEL) && !defined(CONFIG_PROTECT_KERNEL)
unsigned long start, end, addr;
start = PAGE_ALIGN((unsigned long) &__init_begin); /* round up */
end = ((unsigned long) &__init_end) & PAGE_MASK; /* round down */
/* next to check that the page we free is not a partial page */
for (addr = start; addr < end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
printk("Freeing unused kernel memory: %ldKiB freed (0x%lx - 0x%lx)\n",
(end - start) >> 10, start, end);
free_initmem_default(0);
#endif
} /* end free_initmem() */
@@ -193,14 +173,6 @@ void free_initmem(void)
#ifdef CONFIG_BLK_DEV_INITRD
void __init free_initrd_mem(unsigned long start, unsigned long end)
{
int pages = 0;
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
pages++;
}
printk("Freeing initrd memory: %dKiB freed\n", (pages * PAGE_SIZE) >> 10);
free_reserved_area(start, end, 0, "initrd");
} /* end free_initrd_mem() */
#endif

@@ -139,7 +139,7 @@ void __init mem_init(void)
start_mem = PAGE_ALIGN(start_mem);
max_mapnr = num_physpages = MAP_NR(high_memory);
/* this will put all memory onto the freelists */
/* this will put all low memory onto the freelists */
totalram_pages = free_all_bootmem();
codek = (_etext - _stext) >> 10;
@@ -161,15 +161,7 @@ void __init mem_init(void)
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
int pages = 0;
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
pages++;
}
printk ("Freeing initrd memory: %dk freed\n", pages);
free_reserved_area(start, end, 0, "initrd");
}
#endif
@@ -177,23 +169,7 @@ void
free_initmem(void)
{
#ifdef CONFIG_RAMKERNEL
unsigned long addr;
/*
* the following code should be cool even if these sections
* are not page aligned.
*/
addr = PAGE_ALIGN((unsigned long)(__init_begin));
/* next to check that the page we free is not a partial page */
for (; addr + PAGE_SIZE < (unsigned long)__init_end; addr +=PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
printk(KERN_INFO "Freeing unused kernel memory: %ldk freed (0x%x - 0x%x)\n",
(addr - PAGE_ALIGN((long) __init_begin)) >> 10,
(int)(PAGE_ALIGN((unsigned long)__init_begin)),
(int)(addr - PAGE_SIZE));
free_initmem_default(0);
#endif
}

@@ -2,6 +2,7 @@
#define _ASM_IA64_HUGETLB_H
#include <asm/page.h>
#include <asm-generic/hugetlb.h>
void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,

@@ -47,6 +47,8 @@ void show_mem(unsigned int filter)
printk(KERN_INFO "Mem-info:\n");
show_free_areas(filter);
printk(KERN_INFO "Node memory in pages:\n");
if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
return;
for_each_online_pgdat(pgdat) {
unsigned long present;
unsigned long flags;

@@ -623,6 +623,8 @@ void show_mem(unsigned int filter)
printk(KERN_INFO "Mem-info:\n");
show_free_areas(filter);
if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
return;
printk(KERN_INFO "Node memory in pages:\n");
for_each_online_pgdat(pgdat) {
unsigned long present;
@@ -817,13 +819,12 @@ void arch_refresh_nodedata(int update_node, pg_data_t *update_pgdat)
#endif
#ifdef CONFIG_SPARSEMEM_VMEMMAP
int __meminit vmemmap_populate(struct page *start_page,
unsigned long size, int node)
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
{
return vmemmap_populate_basepages(start_page, size, node);
return vmemmap_populate_basepages(start, end, node);
}
void vmemmap_free(struct page *memmap, unsigned long nr_pages)
void vmemmap_free(unsigned long start, unsigned long end)
{
}
#endif

@@ -154,25 +154,14 @@ ia64_init_addr_space (void)
void
free_initmem (void)
{
unsigned long addr, eaddr;
addr = (unsigned long) ia64_imva(__init_begin);
eaddr = (unsigned long) ia64_imva(__init_end);
while (addr < eaddr) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
++totalram_pages;
addr += PAGE_SIZE;
}
printk(KERN_INFO "Freeing unused kernel memory: %ldkB freed\n",
(__init_end - __init_begin) >> 10);
free_reserved_area((unsigned long)ia64_imva(__init_begin),
(unsigned long)ia64_imva(__init_end),
0, "unused kernel");
}
void __init
free_initrd_mem (unsigned long start, unsigned long end)
{
struct page *page;
/*
* EFI uses 4KB pages while the kernel can use 4KB or bigger.
* Thus EFI and the kernel may have different page sizes. It is
@@ -213,11 +202,7 @@ free_initrd_mem (unsigned long start, unsigned long end)
for (; start < end; start += PAGE_SIZE) {
if (!virt_addr_valid(start))
continue;
page = virt_to_page(start);
ClearPageReserved(page);
init_page_count(page);
free_page(start);
++totalram_pages;
free_reserved_page(virt_to_page(start));
}
}

@@ -61,13 +61,26 @@ paddr_to_nid(unsigned long paddr)
int __meminit __early_pfn_to_nid(unsigned long pfn)
{
int i, section = pfn >> PFN_SECTION_SHIFT, ssec, esec;
/*
* NOTE: The following SMP-unsafe globals are only used early in boot
* when the kernel is running single-threaded.
*/
static int __meminitdata last_ssec, last_esec;
static int __meminitdata last_nid;
if (section >= last_ssec && section < last_esec)
return last_nid;
for (i = 0; i < num_node_memblks; i++) {
ssec = node_memblk[i].start_paddr >> PA_SECTION_SHIFT;
esec = (node_memblk[i].start_paddr + node_memblk[i].size +
((1L << PA_SECTION_SHIFT) - 1)) >> PA_SECTION_SHIFT;
if (section >= ssec && section < esec)
if (section >= ssec && section < esec) {
last_ssec = ssec;
last_esec = esec;
last_nid = node_memblk[i].nid;
return node_memblk[i].nid;
}
}
return -1;

@@ -28,10 +28,7 @@
#include <asm/mmu_context.h>
#include <asm/setup.h>
#include <asm/tlb.h>
/* References to section boundaries */
extern char _text, _etext, _edata;
extern char __init_begin, __init_end;
#include <asm/sections.h>
pgd_t swapper_pg_dir[1024];
@@ -184,17 +181,7 @@
*======================================================================*/
void free_initmem(void)
{
unsigned long addr;
addr = (unsigned long)(&__init_begin);
for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
printk (KERN_INFO "Freeing unused kernel memory: %dk freed\n", \
(int)(&__init_end - &__init_begin) >> 10);
free_initmem_default(0);
}
#ifdef CONFIG_BLK_DEV_INITRD
@@ -204,13 +191,6 @@ void free_initmem(void)
*======================================================================*/
void free_initrd_mem(unsigned long start, unsigned long end)
{
unsigned long p;
for (p = start; p < end; p += PAGE_SIZE) {
ClearPageReserved(virt_to_page(p));
init_page_count(virt_to_page(p));
free_page(p);
totalram_pages++;
}
printk (KERN_INFO "Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
free_reserved_area(start, end, 0, "initrd");
}
#endif

@@ -110,18 +110,7 @@ void __init paging_init(void)
void free_initmem(void)
{
#ifndef CONFIG_MMU_SUN3
unsigned long addr;
addr = (unsigned long) __init_begin;
for (; addr < ((unsigned long) __init_end); addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
pr_notice("Freeing unused kernel memory: %luk freed (0x%x - 0x%x)\n",
(addr - (unsigned long) __init_begin) >> 10,
(unsigned int) __init_begin, (unsigned int) __init_end);
free_initmem_default(0);
#endif /* CONFIG_MMU_SUN3 */
}
@@ -213,15 +202,6 @@ void __init mem_init(void)
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
int pages = 0;
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
pages++;
}
pr_notice("Freeing initrd memory: %dk freed\n",
pages << (PAGE_SHIFT - 10));
free_reserved_area(start, end, 0, "initrd");
}
#endif

@@ -380,14 +380,8 @@ void __init mem_init(void)
#ifdef CONFIG_HIGHMEM
unsigned long tmp;
for (tmp = highstart_pfn; tmp < highend_pfn; tmp++) {
struct page *page = pfn_to_page(tmp);
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
totalhigh_pages++;
}
totalram_pages += totalhigh_pages;
for (tmp = highstart_pfn; tmp < highend_pfn; tmp++)
free_highmem_page(pfn_to_page(tmp));
num_physpages += totalhigh_pages;
#endif /* CONFIG_HIGHMEM */
@@ -412,32 +406,15 @@ void __init mem_init(void)
return;
}
static void free_init_pages(char *what, unsigned long begin, unsigned long end)
{
unsigned long addr;
for (addr = begin; addr < end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
free_page(addr);
totalram_pages++;
}
pr_info("Freeing %s: %luk freed\n", what, (end - begin) >> 10);
}
void free_initmem(void)
{
free_init_pages("unused kernel memory",
(unsigned long)(&__init_begin),
(unsigned long)(&__init_end));
free_initmem_default(POISON_FREE_INITMEM);
}
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
end = end & PAGE_MASK;
free_init_pages("initrd memory", start, end);
free_reserved_area(start, end, POISON_FREE_INITMEM, "initrd");
}
#endif

@@ -46,7 +46,6 @@ void machine_shutdown(void);
void machine_halt(void);
void machine_power_off(void);
void free_init_pages(char *what, unsigned long begin, unsigned long end);
extern void *alloc_maybe_bootmem(size_t size, gfp_t mask);
extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);

@@ -82,13 +82,9 @@ static unsigned long highmem_setup(void)
/* FIXME not sure about */
if (memblock_is_reserved(pfn << PAGE_SHIFT))
continue;
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
totalhigh_pages++;
free_highmem_page(page);
reservedpages++;
}
totalram_pages += totalhigh_pages;
pr_info("High memory: %luk\n",
totalhigh_pages << (PAGE_SHIFT-10));
@@ -236,40 +232,16 @@ void __init setup_memory(void)
paging_init();
}
void free_init_pages(char *what, unsigned long begin, unsigned long end)
{
unsigned long addr;
for (addr = begin; addr < end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
pr_info("Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
}
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
int pages = 0;
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
pages++;
}
pr_notice("Freeing initrd memory: %dk freed\n",
(int)(pages * (PAGE_SIZE / 1024)));
free_reserved_area(start, end, 0, "initrd");
}
#endif
void free_initmem(void)
{
free_init_pages("unused kernel memory",
(unsigned long)(&__init_begin),
(unsigned long)(&__init_end));
free_initmem_default(0);
}
void __init mem_init(void)

@@ -10,6 +10,7 @@
#define __ASM_HUGETLB_H
#include <asm/page.h>
#include <asm-generic/hugetlb.h>
static inline int is_hugepage_only_range(struct mm_struct *mm,

@@ -77,10 +77,9 @@ EXPORT_SYMBOL_GPL(empty_zero_page);
/*
* Not static inline because used by IP27 special magic initialization code
*/
unsigned long setup_zero_pages(void)
void setup_zero_pages(void)
{
unsigned int order;
unsigned long size;
unsigned int order, i;
struct page *page;
if (cpu_has_vce)
@@ -94,15 +93,10 @@ unsigned long setup_zero_pages(void)
page = virt_to_page((void *)empty_zero_page);
split_page(page, order);
while (page < virt_to_page((void *)(empty_zero_page + (PAGE_SIZE << order)))) {
SetPageReserved(page);
page++;
}
for (i = 0; i < (1 << order); i++, page++)
mark_page_reserved(page);
size = PAGE_SIZE << order;
zero_page_mask = (size - 1) & PAGE_MASK;
return 1UL << order;
zero_page_mask = ((PAGE_SIZE << order) - 1) & PAGE_MASK;
}
#ifdef CONFIG_MIPS_MT_SMTC
@@ -380,7 +374,7 @@ void __init mem_init(void)
high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
totalram_pages += free_all_bootmem();
totalram_pages -= setup_zero_pages(); /* Setup zeroed pages. */
setup_zero_pages(); /* Setup zeroed pages. */
reservedpages = ram = 0;
for (tmp = 0; tmp < max_low_pfn; tmp++)
@@ -399,12 +393,8 @@ void __init mem_init(void)
SetPageReserved(page);
continue;
}
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
totalhigh_pages++;
free_highmem_page(page);
}
totalram_pages += totalhigh_pages;
num_physpages += totalhigh_pages;
#endif
@@ -440,11 +430,8 @@ void free_init_pages(const char *what, unsigned long begin, unsigned long end)
struct page *page = pfn_to_page(pfn);
void *addr = phys_to_virt(PFN_PHYS(pfn));
ClearPageReserved(page);
init_page_count(page);
memset(addr, POISON_FREE_INITMEM, PAGE_SIZE);
__free_page(page);
totalram_pages++;
free_reserved_page(page);
}
printk(KERN_INFO "Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
}
@@ -452,18 +439,14 @@ void free_init_pages(const char *what, unsigned long begin, unsigned long end)
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
free_init_pages("initrd memory",
virt_to_phys((void *)start),
virt_to_phys((void *)end));
free_reserved_area(start, end, POISON_FREE_INITMEM, "initrd");
}
#endif
void __init_refok free_initmem(void)
{
prom_free_prom_memory();
free_init_pages("unused kernel memory",
__pa_symbol(&__init_begin),
__pa_symbol(&__init_end));
free_initmem_default(POISON_FREE_INITMEM);
}
#ifndef CONFIG_MIPS_PGD_C0_CONTEXT

@@ -457,7 +457,7 @@ void __init prom_free_prom_memory(void)
/* We got nothing to free here ... */
}
extern unsigned long setup_zero_pages(void);
extern void setup_zero_pages(void);
void __init paging_init(void)
{
@@ -492,7 +492,7 @@ void __init mem_init(void)
totalram_pages += free_all_bootmem_node(NODE_DATA(node));
}
totalram_pages -= setup_zero_pages(); /* This comes from node 0 */
setup_zero_pages(); /* This comes from node 0 */
codesize = (unsigned long) &_etext - (unsigned long) &_text;
datasize = (unsigned long) &_edata - (unsigned long) &_etext;

@@ -138,31 +138,12 @@ void __init mem_init(void)
totalhigh_pages << (PAGE_SHIFT - 10));
}
/*
*
*/
void free_init_pages(char *what, unsigned long begin, unsigned long end)
{
unsigned long addr;
for (addr = begin; addr < end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
memset((void *) addr, 0xcc, PAGE_SIZE);
free_page(addr);
totalram_pages++;
}
printk(KERN_INFO "Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
}
/*
* recycle memory containing stuff only required for initialisation
*/
void free_initmem(void)
{
free_init_pages("unused kernel memory",
(unsigned long) &__init_begin,
(unsigned long) &__init_end);
free_initmem_default(POISON_FREE_INITMEM);
}
/*
@@ -171,6 +152,6 @@ void free_initmem(void)
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
free_init_pages("initrd memory", start, end);
free_reserved_area(start, end, POISON_FREE_INITMEM, "initrd");
}
#endif

@@ -43,6 +43,7 @@
#include <asm/kmap_types.h>
#include <asm/fixmap.h>
#include <asm/tlbflush.h>
#include <asm/sections.h>
int mem_init_done;
@@ -201,9 +202,6 @@ void __init paging_init(void)
/* References to section boundaries */
extern char _stext, _etext, _edata, __bss_start, _end;
extern char __init_begin, __init_end;
static int __init free_pages_init(void)
{
int reservedpages, pfn;
@@ -263,30 +261,11 @@ void __init mem_init(void)
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
printk(KERN_INFO "Freeing initrd memory: %ldk freed\n",
(end - start) >> 10);
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
}
free_reserved_area(start, end, 0, "initrd");
}
#endif
void free_initmem(void)
{
unsigned long addr;
addr = (unsigned long)(&__init_begin);
for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
printk(KERN_INFO "Freeing unused kernel memory: %luk freed\n",
((unsigned long)&__init_end -
(unsigned long)&__init_begin) >> 10);
free_initmem_default(0);
}

@@ -505,7 +505,6 @@ static void __init map_pages(unsigned long start_vaddr,
void free_initmem(void)
{
unsigned long addr;
unsigned long init_begin = (unsigned long)__init_begin;
unsigned long init_end = (unsigned long)__init_end;
@@ -533,19 +532,10 @@ void free_initmem(void)
* pages are no-longer executable */
flush_icache_range(init_begin, init_end);
for (addr = init_begin; addr < init_end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
num_physpages++;
totalram_pages++;
}
num_physpages += free_initmem_default(0);
/* set up a new led state on systems shipped LED State panel */
pdc_chassis_send_status(PDC_CHASSIS_DIRECT_BCOMPLETE);
printk(KERN_INFO "Freeing unused kernel memory: %luk freed\n",
(init_end - init_begin) >> 10);
}
@@ -697,6 +687,8 @@ void show_mem(unsigned int filter)
printk(KERN_INFO "Mem-info:\n");
show_free_areas(filter);
if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
return;
#ifndef CONFIG_DISCONTIGMEM
i = max_mapnr;
while (i-- > 0) {
@@ -1107,15 +1099,6 @@ void flush_tlb_all(void)
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
if (start >= end)
return;
printk(KERN_INFO "Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
num_physpages++;
totalram_pages++;
}
num_physpages += free_reserved_area(start, end, 0, "initrd");
}
#endif

@@ -3,6 +3,7 @@
#ifdef CONFIG_HUGETLB_PAGE
#include <asm/page.h>
#include <asm-generic/hugetlb.h>
extern struct kmem_cache *hugepte_cache;

@@ -150,10 +150,7 @@ void crash_free_reserved_phys_range(unsigned long begin, unsigned long end)
if (addr <= rtas_end && ((addr + PAGE_SIZE) > rtas_start))
continue;
ClearPageReserved(pfn_to_page(addr >> PAGE_SHIFT));
init_page_count(pfn_to_page(addr >> PAGE_SHIFT));
free_page((unsigned long)__va(addr));
totalram_pages++;
free_reserved_page(pfn_to_page(addr >> PAGE_SHIFT));
}
}
#endif

@@ -1045,10 +1045,7 @@ static void fadump_release_memory(unsigned long begin, unsigned long end)
if (addr <= ra_end && ((addr + PAGE_SIZE) > ra_start))
continue;
ClearPageReserved(pfn_to_page(addr >> PAGE_SHIFT));
init_page_count(pfn_to_page(addr >> PAGE_SHIFT));
free_page((unsigned long)__va(addr));
totalram_pages++;
free_reserved_page(pfn_to_page(addr >> PAGE_SHIFT));
}
}

@@ -756,12 +756,7 @@ static __init void kvm_free_tmp(void)
end = (ulong)&kvm_tmp[ARRAY_SIZE(kvm_tmp)] & PAGE_MASK;
/* Free the tmp space we don't need */
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
}
free_reserved_area(start, end, 0, NULL);
}
static int __init kvm_guest_init(void)

@@ -263,19 +263,14 @@ static __meminit void vmemmap_list_populate(unsigned long phys,
vmemmap_list = vmem_back;
}
int __meminit vmemmap_populate(struct page *start_page,
unsigned long nr_pages, int node)
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
{
unsigned long start = (unsigned long)start_page;
unsigned long end = (unsigned long)(start_page + nr_pages);
unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
/* Align to the page size of the linear mapping. */
start = _ALIGN_DOWN(start, page_size);
pr_debug("vmemmap_populate page %p, %ld pages, node %d\n",
start_page, nr_pages, node);
pr_debug(" -> map %lx..%lx\n", start, end);
pr_debug("vmemmap_populate %lx..%lx, node %d\n", start, end, node);
for (; start < end; start += page_size) {
void *p;
@@ -298,7 +293,7 @@ int __meminit vmemmap_populate(struct page *start_page,
return 0;
}
void vmemmap_free(struct page *memmap, unsigned long nr_pages)
void vmemmap_free(unsigned long start, unsigned long end)
{
}

@@ -352,13 +352,9 @@ void __init mem_init(void)
struct page *page = pfn_to_page(pfn);
if (memblock_is_reserved(paddr))
continue;
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
totalhigh_pages++;
free_highmem_page(page);
reservedpages--;
}
totalram_pages += totalhigh_pages;
printk(KERN_DEBUG "High memory: %luk\n",
totalhigh_pages << (PAGE_SHIFT-10));
}
@@ -405,39 +401,14 @@ void __init mem_init(void)
void free_initmem(void)
{
unsigned long addr;
ppc_md.progress = ppc_printk_progress;
addr = (unsigned long)__init_begin;
for (; addr < (unsigned long)__init_end; addr += PAGE_SIZE) {
memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
pr_info("Freeing unused kernel memory: %luk freed\n",
((unsigned long)__init_end -
(unsigned long)__init_begin) >> 10);
free_initmem_default(POISON_FREE_INITMEM);
}
#ifdef CONFIG_BLK_DEV_INITRD
void __init free_initrd_mem(unsigned long start, unsigned long end)
{
if (start >= end)
return;
start = _ALIGN_DOWN(start, PAGE_SIZE);
end = _ALIGN_UP(end, PAGE_SIZE);
pr_info("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
}
free_reserved_area(start, end, 0, "initrd");
}
#endif

@@ -62,14 +62,11 @@ static int distance_lookup_table[MAX_NUMNODES][MAX_DISTANCE_REF_POINTS];
*/
static void __init setup_node_to_cpumask_map(void)
{
unsigned int node, num = 0;
unsigned int node;
/* setup nr_node_ids if not done yet */
if (nr_node_ids == MAX_NUMNODES) {
for_each_node_mask(node, node_possible_map)
num = node;
nr_node_ids = num + 1;
}
if (nr_node_ids == MAX_NUMNODES)
setup_nr_node_ids();
/* allocate the map */
for (node = 0; node < nr_node_ids; node++)

@@ -172,12 +172,9 @@ static struct fsl_diu_shared_fb __attribute__ ((__aligned__(8))) diu_shared_fb;
static inline void mpc512x_free_bootmem(struct page *page)
{
__ClearPageReserved(page);
BUG_ON(PageTail(page));
BUG_ON(atomic_read(&page->_count) > 1);
atomic_set(&page->_count, 1);
__free_page(page);
totalram_pages++;
free_reserved_page(page);
}
void mpc512x_release_bootmem(void)

@@ -72,6 +72,7 @@ unsigned long memory_block_size_bytes(void)
return get_memblock_size();
}
#ifdef CONFIG_MEMORY_HOTREMOVE
static int pseries_remove_memblock(unsigned long base, unsigned int memblock_size)
{
unsigned long start, start_pfn;
@@ -153,6 +154,17 @@ static int pseries_remove_memory(struct device_node *np)
ret = pseries_remove_memblock(base, lmb_size);
return ret;
}
#else
static inline int pseries_remove_memblock(unsigned long base,
unsigned int memblock_size)
{
return -EOPNOTSUPP;
}
static inline int pseries_remove_memory(struct device_node *np)
{
return -EOPNOTSUPP;
}
#endif /* CONFIG_MEMORY_HOTREMOVE */
static int pseries_add_memory(struct device_node *np)
{

@@ -114,7 +114,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
#define huge_ptep_set_wrprotect(__mm, __addr, __ptep) \
({ \
pte_t __pte = huge_ptep_get(__ptep); \
if (pte_write(__pte)) { \
if (huge_pte_write(__pte)) { \
huge_ptep_invalidate(__mm, __addr, __ptep); \
set_huge_pte_at(__mm, __addr, __ptep, \
huge_pte_wrprotect(__pte)); \
@@ -127,4 +127,58 @@ static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
huge_ptep_invalidate(vma->vm_mm, address, ptep);
}
static inline pte_t mk_huge_pte(struct page *page, pgprot_t pgprot)
{
pte_t pte;
pmd_t pmd;
pmd = mk_pmd_phys(page_to_phys(page), pgprot);
pte_val(pte) = pmd_val(pmd);
return pte;
}
static inline int huge_pte_write(pte_t pte)
{
pmd_t pmd;
pmd_val(pmd) = pte_val(pte);
return pmd_write(pmd);
}
static inline int huge_pte_dirty(pte_t pte)
{
/* No dirty bit in the segment table entry. */
return 0;
}
static inline pte_t huge_pte_mkwrite(pte_t pte)
{
pmd_t pmd;
pmd_val(pmd) = pte_val(pte);
pte_val(pte) = pmd_val(pmd_mkwrite(pmd));
return pte;
}
static inline pte_t huge_pte_mkdirty(pte_t pte)
{
/* No dirty bit in the segment table entry. */
return pte;
}
static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
{
pmd_t pmd;
pmd_val(pmd) = pte_val(pte);
pte_val(pte) = pmd_val(pmd_modify(pmd, newprot));
return pte;
}
static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)
{
pmd_clear((pmd_t *) ptep);
}
#endif /* _ASM_S390_HUGETLB_H */

@@ -424,6 +424,13 @@ extern unsigned long MODULES_END;
#define __S110 PAGE_RW
#define __S111 PAGE_RW
/*
* Segment entry (large page) protection definitions.
*/
#define SEGMENT_NONE __pgprot(_HPAGE_TYPE_NONE)
#define SEGMENT_RO __pgprot(_HPAGE_TYPE_RO)
#define SEGMENT_RW __pgprot(_HPAGE_TYPE_RW)
static inline int mm_exclusive(struct mm_struct *mm)
{
return likely(mm == current->active_mm &&
@@ -914,26 +921,6 @@ static inline pte_t pte_mkspecial(pte_t pte)
#ifdef CONFIG_HUGETLB_PAGE
static inline pte_t pte_mkhuge(pte_t pte)
{
/*
* PROT_NONE needs to be remapped from the pte type to the ste type.
* The HW invalid bit is also different for pte and ste. The pte
* invalid bit happens to be the same as the ste _SEGMENT_ENTRY_LARGE
* bit, so we don't have to clear it.
*/
if (pte_val(pte) & _PAGE_INVALID) {
if (pte_val(pte) & _PAGE_SWT)
pte_val(pte) |= _HPAGE_TYPE_NONE;
pte_val(pte) |= _SEGMENT_ENTRY_INV;
}
/*
* Clear SW pte bits, there are no SW bits in a segment table entry.
*/
pte_val(pte) &= ~(_PAGE_SWT | _PAGE_SWX | _PAGE_SWC |
_PAGE_SWR | _PAGE_SWW);
/*
* Also set the change-override bit because we don't need dirty bit
* tracking for hugetlbfs pages.
*/
pte_val(pte) |= (_SEGMENT_ENTRY_LARGE | _SEGMENT_ENTRY_CO);
return pte;
}
@@ -1278,31 +1265,7 @@ static inline void __pmd_idte(unsigned long address, pmd_t *pmdp)
}
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define SEGMENT_NONE __pgprot(_HPAGE_TYPE_NONE)
#define SEGMENT_RO __pgprot(_HPAGE_TYPE_RO)
#define SEGMENT_RW __pgprot(_HPAGE_TYPE_RW)
#define __HAVE_ARCH_PGTABLE_DEPOSIT
extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable);
#define __HAVE_ARCH_PGTABLE_WITHDRAW
extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm);
static inline int pmd_trans_splitting(pmd_t pmd)
{
return pmd_val(pmd) & _SEGMENT_ENTRY_SPLIT;
}
static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
pmd_t *pmdp, pmd_t entry)
{
if (!(pmd_val(entry) & _SEGMENT_ENTRY_INV) && MACHINE_HAS_EDAT1)
pmd_val(entry) |= _SEGMENT_ENTRY_CO;
*pmdp = entry;
}
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLB_PAGE)
static inline unsigned long massage_pgprot_pmd(pgprot_t pgprot)
{
/*
@@ -1323,10 +1286,11 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
return pmd;
}
static inline pmd_t pmd_mkhuge(pmd_t pmd)
static inline pmd_t mk_pmd_phys(unsigned long physpage, pgprot_t pgprot)
{
pmd_val(pmd) |= _SEGMENT_ENTRY_LARGE;
return pmd;
pmd_t __pmd;
pmd_val(__pmd) = physpage + massage_pgprot_pmd(pgprot);
return __pmd;
}
static inline pmd_t pmd_mkwrite(pmd_t pmd)
@@ -1336,6 +1300,34 @@ static inline pmd_t pmd_mkwrite(pmd_t pmd)
pmd_val(pmd) &= ~_SEGMENT_ENTRY_RO;
return pmd;
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLB_PAGE */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define __HAVE_ARCH_PGTABLE_DEPOSIT
extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable);
#define __HAVE_ARCH_PGTABLE_WITHDRAW
extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm);
static inline int pmd_trans_splitting(pmd_t pmd)
{
return pmd_val(pmd) & _SEGMENT_ENTRY_SPLIT;
}
static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
pmd_t *pmdp, pmd_t entry)
{
if (!(pmd_val(entry) & _SEGMENT_ENTRY_INV) && MACHINE_HAS_EDAT1)
pmd_val(entry) |= _SEGMENT_ENTRY_CO;
*pmdp = entry;
}
static inline pmd_t pmd_mkhuge(pmd_t pmd)
{
pmd_val(pmd) |= _SEGMENT_ENTRY_LARGE;
return pmd;
}
static inline pmd_t pmd_wrprotect(pmd_t pmd)
{
@@ -1432,13 +1424,6 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
}
}
static inline pmd_t mk_pmd_phys(unsigned long physpage, pgprot_t pgprot)
{
pmd_t __pmd;
pmd_val(__pmd) = physpage + massage_pgprot_pmd(pgprot);
return __pmd;
}
#define pfn_pmd(pfn, pgprot) mk_pmd_phys(__pa((pfn) << PAGE_SHIFT), (pgprot))
#define mk_pmd(page, pgprot) pfn_pmd(page_to_pfn(page), (pgprot))

@@ -39,7 +39,7 @@ int arch_prepare_hugepage(struct page *page)
if (!ptep)
return -ENOMEM;
pte = mk_pte(page, PAGE_RW);
pte_val(pte) = addr;
for (i = 0; i < PTRS_PER_PTE; i++) {
set_pte_at(&init_mm, addr + i * PAGE_SIZE, ptep + i, pte);
pte_val(pte) += PAGE_SIZE;

@@ -42,11 +42,10 @@ pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__((__aligned__(PAGE_SIZE)));
unsigned long empty_zero_page, zero_page_mask;
EXPORT_SYMBOL(empty_zero_page);
static unsigned long __init setup_zero_pages(void)
static void __init setup_zero_pages(void)
{
struct cpuid cpu_id;
unsigned int order;
unsigned long size;
struct page *page;
int i;
@@ -83,14 +82,11 @@ static unsigned long __init setup_zero_pages(void)
page = virt_to_page((void *) empty_zero_page);
split_page(page, order);
for (i = 1 << order; i > 0; i--) {
SetPageReserved(page);
mark_page_reserved(page);
page++;
}
size = PAGE_SIZE << order;
zero_page_mask = (size - 1) & PAGE_MASK;
return 1UL << order;
zero_page_mask = ((PAGE_SIZE << order) - 1) & PAGE_MASK;
}
/*
@@ -147,7 +143,7 @@ void __init mem_init(void)
/* this will put all low memory onto the freelists */
totalram_pages += free_all_bootmem();
totalram_pages -= setup_zero_pages(); /* Setup zeroed pages. */
setup_zero_pages(); /* Setup zeroed pages. */
reservedpages = 0;
@@ -166,34 +162,15 @@ void __init mem_init(void)
PFN_ALIGN((unsigned long)&_eshared) - 1);
}
void free_init_pages(char *what, unsigned long begin, unsigned long end)
{
unsigned long addr = begin;
if (begin >= end)
return;
for (; addr < end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
memset((void *)(addr & PAGE_MASK), POISON_FREE_INITMEM,
PAGE_SIZE);
free_page(addr);
totalram_pages++;
}
printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
}
void free_initmem(void)
{
free_init_pages("unused kernel memory",
(unsigned long)&__init_begin,
(unsigned long)&__init_end);
free_initmem_default(0);
}
#ifdef CONFIG_BLK_DEV_INITRD
void __init free_initrd_mem(unsigned long start, unsigned long end)
{
free_init_pages("initrd memory", start, end);
free_reserved_area(start, end, POISON_FREE_INITMEM, "initrd");
}
#endif

@@ -191,19 +191,16 @@ static void vmem_remove_range(unsigned long start, unsigned long size)
/*
* Add a backed mem_map array to the virtual mem_map array.
*/
int __meminit vmemmap_populate(struct page *start, unsigned long nr, int node)
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
{
unsigned long address, start_addr, end_addr;
unsigned long address = start;
pgd_t *pg_dir;
pud_t *pu_dir;
pmd_t *pm_dir;
pte_t *pt_dir;
int ret = -ENOMEM;
start_addr = (unsigned long) start;
end_addr = (unsigned long) (start + nr);
for (address = start_addr; address < end_addr;) {
for (address = start; address < end;) {
pg_dir = pgd_offset_k(address);
if (pgd_none(*pg_dir)) {
pu_dir = vmem_pud_alloc();
@@ -262,14 +259,14 @@ int __meminit vmemmap_populate(struct page *start, unsigned long nr, int node)
}
address += PAGE_SIZE;
}
memset(start, 0, nr * sizeof(struct page));
memset((void *)start, 0, end - start);
ret = 0;
out:
flush_tlb_kernel_range(start_addr, end_addr);
flush_tlb_kernel_range(start, end);
return ret;
}
void vmemmap_free(struct page *memmap, unsigned long nr_pages)
void vmemmap_free(unsigned long start, unsigned long end)
{
}

@@ -43,7 +43,7 @@ EXPORT_SYMBOL_GPL(empty_zero_page);
static struct kcore_list kcore_mem, kcore_vmalloc;
static unsigned long setup_zero_page(void)
static void setup_zero_page(void)
{
struct page *page;
@@ -52,9 +52,7 @@ static unsigned long setup_zero_page(void)
panic("Oh boy, that early out of memory?");
page = virt_to_page((void *) empty_zero_page);
SetPageReserved(page);
return 1UL;
mark_page_reserved(page);
}
#ifndef CONFIG_NEED_MULTIPLE_NODES
@@ -84,7 +82,7 @@ void __init mem_init(void)
high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
totalram_pages += free_all_bootmem();
totalram_pages -= setup_zero_page(); /* Setup zeroed pages. */
setup_zero_page(); /* Setup zeroed pages. */
reservedpages = 0;
for (tmp = 0; tmp < max_low_pfn; tmp++)
@@ -109,37 +107,16 @@ void __init mem_init(void)
}
#endif /* !CONFIG_NEED_MULTIPLE_NODES */
static void free_init_pages(const char *what, unsigned long begin, unsigned long end)
{
unsigned long pfn;
for (pfn = PFN_UP(begin); pfn < PFN_DOWN(end); pfn++) {
struct page *page = pfn_to_page(pfn);
void *addr = phys_to_virt(PFN_PHYS(pfn));
ClearPageReserved(page);
init_page_count(page);
memset(addr, POISON_FREE_INITMEM, PAGE_SIZE);
__free_page(page);
totalram_pages++;
}
printk(KERN_INFO "Freeing %s: %ldk freed\n", what, (end - begin) >> 10);
}
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
free_init_pages("initrd memory",
virt_to_phys((void *) start),
virt_to_phys((void *) end));
free_reserved_area(start, end, POISON_FREE_INITMEM, "initrd");
}
#endif
void __init_refok free_initmem(void)
{
free_init_pages("unused kernel memory",
__pa(&__init_begin),
__pa(&__init_end));
free_initmem_default(POISON_FREE_INITMEM);
}
unsigned long pgd_current;

@@ -3,6 +3,7 @@
#include <asm/cacheflush.h>
#include <asm/page.h>
#include <asm-generic/hugetlb.h>
static inline int is_hugepage_only_range(struct mm_struct *mm,

View File

@ -417,15 +417,13 @@ void __init mem_init(void)
for_each_online_node(nid) {
pg_data_t *pgdat = NODE_DATA(nid);
unsigned long node_pages = 0;
void *node_high_memory;
num_physpages += pgdat->node_present_pages;
if (pgdat->node_spanned_pages)
node_pages = free_all_bootmem_node(pgdat);
totalram_pages += free_all_bootmem_node(pgdat);
totalram_pages += node_pages;
node_high_memory = (void *)__va((pgdat->node_start_pfn +
pgdat->node_spanned_pages) <<
@ -501,31 +499,13 @@ void __init mem_init(void)
void free_initmem(void)
{
unsigned long addr;
addr = (unsigned long)(&__init_begin);
for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
free_page(addr);
totalram_pages++;
}
printk("Freeing unused kernel memory: %ldk freed\n",
((unsigned long)&__init_end -
(unsigned long)&__init_begin) >> 10);
free_initmem_default(0);
}
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
unsigned long p;
for (p = start; p < end; p += PAGE_SIZE) {
ClearPageReserved(virt_to_page(p));
init_page_count(virt_to_page(p));
free_page(p);
totalram_pages++;
}
printk("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
free_reserved_area(start, end, 0, "initrd");
}
#endif

View File

@ -2,6 +2,7 @@
#define _ASM_SPARC64_HUGETLB_H
#include <asm/page.h>
#include <asm-generic/hugetlb.h>
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,

View File

@ -282,14 +282,8 @@ static void map_high_region(unsigned long start_pfn, unsigned long end_pfn)
printk("mapping high region %08lx - %08lx\n", start_pfn, end_pfn);
#endif
for (tmp = start_pfn; tmp < end_pfn; tmp++) {
struct page *page = pfn_to_page(tmp);
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
totalhigh_pages++;
}
for (tmp = start_pfn; tmp < end_pfn; tmp++)
free_highmem_page(pfn_to_page(tmp));
}
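free_highmem_page() is the highmem counterpart of free_reserved_page(): the same per-page release, but it also accounts the page as high memory, which is why the separate totalram_pages += totalhigh_pages fixups disappear below. A sketch, with the counter handling inferred from the loops it replaces:

void free_highmem_page(struct page *page)
{
	ClearPageReserved(page);
	init_page_count(page);
	__free_page(page);
	totalhigh_pages++;
	totalram_pages++;
}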
void __init mem_init(void)
@ -347,8 +341,6 @@ void __init mem_init(void)
map_high_region(start_pfn, end_pfn);
}
totalram_pages += totalhigh_pages;
codepages = (((unsigned long) &_etext) - ((unsigned long)&_start));
codepages = PAGE_ALIGN(codepages) >> PAGE_SHIFT;
datapages = (((unsigned long) &_edata) - ((unsigned long)&_etext));

View File

@ -2181,10 +2181,9 @@ unsigned long vmemmap_table[VMEMMAP_SIZE];
static long __meminitdata addr_start, addr_end;
static int __meminitdata node_start;
int __meminit vmemmap_populate(struct page *start, unsigned long nr, int node)
int __meminit vmemmap_populate(unsigned long vstart, unsigned long vend,
int node)
{
unsigned long vstart = (unsigned long) start;
unsigned long vend = (unsigned long) (start + nr);
unsigned long phys_start = (vstart - VMEMMAP_BASE);
unsigned long phys_end = (vend - VMEMMAP_BASE);
unsigned long addr = phys_start & VMEMMAP_CHUNK_MASK;
@ -2236,7 +2235,7 @@ void __meminit vmemmap_populate_print_last(void)
}
}
void vmemmap_free(struct page *memmap, unsigned long nr_pages)
void vmemmap_free(unsigned long start, unsigned long end)
{
}

View File

@ -16,6 +16,7 @@
#define _ASM_TILE_HUGETLB_H
#include <asm/page.h>
#include <asm-generic/hugetlb.h>
static inline int is_hugepage_only_range(struct mm_struct *mm,

View File

@ -592,12 +592,7 @@ void iounmap(volatile void __iomem *addr_in)
in parallel. Reuse of the virtual address is prevented by
leaving it in the global lists until we're done with it.
cpa takes care of the direct mappings. */
read_lock(&vmlist_lock);
for (p = vmlist; p; p = p->next) {
if (p->addr == addr)
break;
}
read_unlock(&vmlist_lock);
p = find_vm_area((void *)addr);
if (!p) {
pr_err("iounmap: bad address %p\n", addr);

View File

@ -42,17 +42,12 @@ static unsigned long brk_end;
static void setup_highmem(unsigned long highmem_start,
unsigned long highmem_len)
{
struct page *page;
unsigned long highmem_pfn;
int i;
highmem_pfn = __pa(highmem_start) >> PAGE_SHIFT;
for (i = 0; i < highmem_len >> PAGE_SHIFT; i++) {
page = &mem_map[highmem_pfn + i];
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
}
for (i = 0; i < highmem_len >> PAGE_SHIFT; i++)
free_highmem_page(&mem_map[highmem_pfn + i]);
}
#endif
@ -73,18 +68,13 @@ void __init mem_init(void)
totalram_pages = free_all_bootmem();
max_low_pfn = totalram_pages;
#ifdef CONFIG_HIGHMEM
totalhigh_pages = highmem >> PAGE_SHIFT;
totalram_pages += totalhigh_pages;
setup_highmem(end_iomem, highmem);
#endif
num_physpages = totalram_pages;
max_pfn = totalram_pages;
printk(KERN_INFO "Memory: %luk available\n",
nr_free_pages() << (PAGE_SHIFT-10));
kmalloc_ok = 1;
#ifdef CONFIG_HIGHMEM
setup_highmem(end_iomem, highmem);
#endif
}
/*
@ -254,15 +244,7 @@ void free_initmem(void)
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
if (start < end)
printk(KERN_INFO "Freeing initrd memory: %ldk freed\n",
(end - start) >> 10);
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page(start);
totalram_pages++;
}
free_reserved_area(start, end, 0, "initrd");
}
#endif

View File

@ -66,6 +66,9 @@ void show_mem(unsigned int filter)
printk(KERN_DEFAULT "Mem-info:\n");
show_free_areas(filter);
if (filter & SHOW_MEM_FILTER_PAGE_COUNT)
return;
for_each_bank(i, mi) {
struct membank *bank = &mi->bank[i];
unsigned int pfn1, pfn2;
@ -313,24 +316,6 @@ void __init bootmem_init(void)
max_pfn = max_high - PHYS_PFN_OFFSET;
}
static inline int free_area(unsigned long pfn, unsigned long end, char *s)
{
unsigned int pages = 0, size = (end - pfn) << (PAGE_SHIFT - 10);
for (; pfn < end; pfn++) {
struct page *page = pfn_to_page(pfn);
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
pages++;
}
if (size && s)
printk(KERN_INFO "Freeing %s memory: %dK\n", s, size);
return pages;
}
static inline void
free_memmap(unsigned long start_pfn, unsigned long end_pfn)
{
@ -404,9 +389,9 @@ void __init mem_init(void)
max_mapnr = pfn_to_page(max_pfn + PHYS_PFN_OFFSET) - mem_map;
/* this will put all unused low memory onto the freelists */
free_unused_memmap(&meminfo);
/* this will put all unused low memory onto the freelists */
totalram_pages += free_all_bootmem();
reserved_pages = free_pages = 0;
@ -491,9 +476,7 @@ void __init mem_init(void)
void free_initmem(void)
{
totalram_pages += free_area(__phys_to_pfn(__pa(__init_begin)),
__phys_to_pfn(__pa(__init_end)),
"init");
free_initmem_default(0);
}
#ifdef CONFIG_BLK_DEV_INITRD
@ -503,9 +486,7 @@ static int keep_initrd;
void free_initrd_mem(unsigned long start, unsigned long end)
{
if (!keep_initrd)
totalram_pages += free_area(__phys_to_pfn(__pa(start)),
__phys_to_pfn(__pa(end)),
"initrd");
free_reserved_area(start, end, 0, "initrd");
}
static int __init keepinitrd_setup(char *__unused)

View File

@ -235,7 +235,7 @@ EXPORT_SYMBOL(__uc32_ioremap_cached);
void __uc32_iounmap(volatile void __iomem *io_addr)
{
void *addr = (void *)(PAGE_MASK & (unsigned long)io_addr);
struct vm_struct **p, *tmp;
struct vm_struct *vm;
/*
* If this is a section based mapping we need to handle it
@ -244,17 +244,10 @@ void __uc32_iounmap(volatile void __iomem *io_addr)
* all the mappings before the area can be reclaimed
* by someone else.
*/
write_lock(&vmlist_lock);
for (p = &vmlist ; (tmp = *p) ; p = &tmp->next) {
if ((tmp->flags & VM_IOREMAP) && (tmp->addr == addr)) {
if (tmp->flags & VM_UNICORE_SECTION_MAPPING) {
unmap_area_sections((unsigned long)tmp->addr,
tmp->size);
}
break;
}
}
write_unlock(&vmlist_lock);
vm = find_vm_area(addr);
if (vm && (vm->flags & VM_IOREMAP) &&
(vm->flags & VM_UNICORE_SECTION_MAPPING))
unmap_area_sections((unsigned long)vm->addr, vm->size);
vunmap(addr);
}

View File

@ -2,6 +2,7 @@
#define _ASM_X86_HUGETLB_H
#include <asm/page.h>
#include <asm-generic/hugetlb.h>
static inline int is_hugepage_only_range(struct mm_struct *mm,

View File

@ -43,10 +43,10 @@ obj-$(CONFIG_MTRR) += mtrr/
obj-$(CONFIG_X86_LOCAL_APIC) += perfctr-watchdog.o perf_event_amd_ibs.o
quiet_cmd_mkcapflags = MKCAP $@
cmd_mkcapflags = $(PERL) $(srctree)/$(src)/mkcapflags.pl $< $@
cmd_mkcapflags = $(CONFIG_SHELL) $(srctree)/$(src)/mkcapflags.sh $< $@
cpufeature = $(src)/../../include/asm/cpufeature.h
targets += capflags.c
$(obj)/capflags.c: $(cpufeature) $(src)/mkcapflags.pl FORCE
$(obj)/capflags.c: $(cpufeature) $(src)/mkcapflags.sh FORCE
$(call if_changed,mkcapflags)

View File

@ -1,48 +0,0 @@
#!/usr/bin/perl -w
#
# Generate the x86_cap_flags[] array from include/asm-x86/cpufeature.h
#
($in, $out) = @ARGV;
open(IN, "< $in\0") or die "$0: cannot open: $in: $!\n";
open(OUT, "> $out\0") or die "$0: cannot create: $out: $!\n";
print OUT "#ifndef _ASM_X86_CPUFEATURE_H\n";
print OUT "#include <asm/cpufeature.h>\n";
print OUT "#endif\n";
print OUT "\n";
print OUT "const char * const x86_cap_flags[NCAPINTS*32] = {\n";
%features = ();
$err = 0;
while (defined($line = <IN>)) {
if ($line =~ /^\s*\#\s*define\s+(X86_FEATURE_(\S+))\s+(.*)$/) {
$macro = $1;
$feature = "\L$2";
$tail = $3;
if ($tail =~ /\/\*\s*\"([^"]*)\".*\*\//) {
$feature = "\L$1";
}
next if ($feature eq '');
if ($features{$feature}++) {
print STDERR "$in: duplicate feature name: $feature\n";
$err++;
}
printf OUT "\t%-32s = \"%s\",\n", "[$macro]", $feature;
}
}
print OUT "};\n";
close(IN);
close(OUT);
if ($err) {
unlink($out);
exit(1);
}
exit(0);

View File

@ -0,0 +1,41 @@
#!/bin/sh
#
# Generate the x86_cap_flags[] array from include/asm/cpufeature.h
#
IN=$1
OUT=$2
TABS="$(printf '\t\t\t\t\t')"
trap 'rm "$OUT"' EXIT
(
echo "#ifndef _ASM_X86_CPUFEATURE_H"
echo "#include <asm/cpufeature.h>"
echo "#endif"
echo ""
echo "const char * const x86_cap_flags[NCAPINTS*32] = {"
# Iterate through any input lines starting with #define X86_FEATURE_
sed -n -e 's/\t/ /g' -e 's/^ *# *define *X86_FEATURE_//p' $IN |
while read i
do
# Name is everything up to the first whitespace
NAME="$(echo "$i" | sed 's/ .*//')"
# If the /* comment */ starts with a quote string, grab that.
VALUE="$(echo "$i" | sed -n 's@.*/\* *\("[^"]*"\).*\*/@\1@p')"
[ -z "$VALUE" ] && VALUE="\"$NAME\""
[ "$VALUE" == '""' ] && continue
# Name is uppercase, VALUE is all lowercase
VALUE="$(echo "$VALUE" | tr A-Z a-z)"
TABCOUNT=$(( ( 5*8 - 14 - $(echo "$NAME" | wc -c) ) / 8 ))
printf "\t[%s]%.*s = %s,\n" \
"X86_FEATURE_$NAME" "$TABCOUNT" "$TABS" "$VALUE"
done
echo "};"
) > $OUT
trap - EXIT
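For an input line such as #define X86_FEATURE_FPU (0*32+ 0) /* Onboard FPU */ (taken here as a typical cpufeature.h entry), the script emits C of this shape -- an illustrative slice of the generated capflags.c:

#ifndef _ASM_X86_CPUFEATURE_H
#include <asm/cpufeature.h>
#endif

const char * const x86_cap_flags[NCAPINTS*32] = {
	[X86_FEATURE_FPU]		 = "fpu",
	[X86_FEATURE_PSE]		 = "pse",
};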

View File

@ -137,5 +137,4 @@ void __init set_highmem_pages_init(void)
add_highpages_with_active_regions(nid, zone_start_pfn,
zone_end_pfn);
}
totalram_pages += totalhigh_pages;
}

View File

@ -515,11 +515,8 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
for (; addr < end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
free_page(addr);
totalram_pages++;
free_reserved_page(virt_to_page(addr));
}
#endif
}

View File

@ -427,14 +427,6 @@ static void __init permanent_kmaps_init(pgd_t *pgd_base)
pkmap_page_table = pte;
}
static void __init add_one_highpage_init(struct page *page)
{
ClearPageReserved(page);
init_page_count(page);
__free_page(page);
totalhigh_pages++;
}
void __init add_highpages_with_active_regions(int nid,
unsigned long start_pfn, unsigned long end_pfn)
{
@ -448,7 +440,7 @@ void __init add_highpages_with_active_regions(int nid,
start_pfn, end_pfn);
for ( ; pfn < e_pfn; pfn++)
if (pfn_valid(pfn))
add_one_highpage_init(pfn_to_page(pfn));
free_highmem_page(pfn_to_page(pfn));
}
}
#else

View File

@ -1011,11 +1011,8 @@ remove_pagetable(unsigned long start, unsigned long end, bool direct)
flush_tlb_all();
}
void __ref vmemmap_free(struct page *memmap, unsigned long nr_pages)
void __ref vmemmap_free(unsigned long start, unsigned long end)
{
unsigned long start = (unsigned long)memmap;
unsigned long end = (unsigned long)(memmap + nr_pages);
remove_pagetable(start, end, false);
}
@ -1067,10 +1064,9 @@ void __init mem_init(void)
/* clear_bss() already cleared the empty_zero_page */
reservedpages = 0;
/* this will put all low memory onto the freelists */
register_page_bootmem_info();
/* this will put all memory onto the freelists */
totalram_pages = free_all_bootmem();
absent_pages = absent_pages_in_range(0, max_pfn);
@ -1285,18 +1281,17 @@ static long __meminitdata addr_start, addr_end;
static void __meminitdata *p_start, *p_end;
static int __meminitdata node_start;
int __meminit
vmemmap_populate(struct page *start_page, unsigned long size, int node)
static int __meminit vmemmap_populate_hugepages(unsigned long start,
unsigned long end, int node)
{
unsigned long addr = (unsigned long)start_page;
unsigned long end = (unsigned long)(start_page + size);
unsigned long addr;
unsigned long next;
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
for (; addr < end; addr = next) {
void *p = NULL;
for (addr = start; addr < end; addr = next) {
next = pmd_addr_end(addr, end);
pgd = vmemmap_pgd_populate(addr, node);
if (!pgd)
@ -1306,31 +1301,14 @@ vmemmap_populate(struct page *start_page, unsigned long size, int node)
if (!pud)
return -ENOMEM;
if (!cpu_has_pse) {
next = (addr + PAGE_SIZE) & PAGE_MASK;
pmd = vmemmap_pmd_populate(pud, addr, node);
pmd = pmd_offset(pud, addr);
if (pmd_none(*pmd)) {
void *p;
if (!pmd)
return -ENOMEM;
p = vmemmap_pte_populate(pmd, addr, node);
if (!p)
return -ENOMEM;
addr_end = addr + PAGE_SIZE;
p_end = p + PAGE_SIZE;
} else {
next = pmd_addr_end(addr, end);
pmd = pmd_offset(pud, addr);
if (pmd_none(*pmd)) {
p = vmemmap_alloc_block_buf(PMD_SIZE, node);
if (p) {
pte_t entry;
p = vmemmap_alloc_block_buf(PMD_SIZE, node);
if (!p)
return -ENOMEM;
entry = pfn_pte(__pa(p) >> PAGE_SHIFT,
PAGE_KERNEL_LARGE);
set_pmd(pmd, __pmd(pte_val(entry)));
@ -1347,15 +1325,32 @@ vmemmap_populate(struct page *start_page, unsigned long size, int node)
addr_end = addr + PMD_SIZE;
p_end = p + PMD_SIZE;
} else
vmemmap_verify((pte_t *)pmd, node, addr, next);
continue;
}
} else if (pmd_large(*pmd)) {
vmemmap_verify((pte_t *)pmd, node, addr, next);
continue;
}
pr_warn_once("vmemmap: falling back to regular page backing\n");
if (vmemmap_populate_basepages(addr, next, node))
return -ENOMEM;
}
sync_global_pgds((unsigned long)start_page, end - 1);
return 0;
}
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
{
int err;
if (cpu_has_pse)
err = vmemmap_populate_hugepages(start, end, node);
else
err = vmemmap_populate_basepages(start, end, node);
if (!err)
sync_global_pgds(start, end - 1);
return err;
}
#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
void register_page_bootmem_memmap(unsigned long section_nr,
struct page *start_page, unsigned long size)

View File

@ -282,12 +282,7 @@ void iounmap(volatile void __iomem *addr)
in parallel. Reuse of the virtual address is prevented by
leaving it in the global lists until we're done with it.
cpa takes care of the direct mappings. */
read_lock(&vmlist_lock);
for (p = vmlist; p; p = p->next) {
if (p->addr == (void __force *)addr)
break;
}
read_unlock(&vmlist_lock);
p = find_vm_area((void __force *)addr);
if (!p) {
printk(KERN_ERR "iounmap: bad address %p\n", addr);

View File

@ -114,14 +114,11 @@ void numa_clear_node(int cpu)
*/
void __init setup_node_to_cpumask_map(void)
{
unsigned int node, num = 0;
unsigned int node;
/* setup nr_node_ids if not done yet */
if (nr_node_ids == MAX_NUMNODES) {
for_each_node_mask(node, node_possible_map)
num = node;
nr_node_ids = num + 1;
}
if (nr_node_ids == MAX_NUMNODES)
setup_nr_node_ids();
/* allocate the map */
for (node = 0; node < nr_node_ids; node++)
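The open-coded highest-node scan removed above moves into a shared helper. A sketch, assuming the mm/page_alloc.c definition this series exports (nr_node_ids and node_possible_map are the existing globals):

/* Derive nr_node_ids from the highest node set in node_possible_map. */
void __init setup_nr_node_ids(void)
{
	unsigned int highest = 0;
	unsigned int node;

	for_each_node_mask(node, node_possible_map)
		highest = node;
	nr_node_ids = highest + 1;
}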

View File

@ -208,32 +208,17 @@ void __init mem_init(void)
highmemsize >> 10);
}
void
free_reserved_mem(void *start, void *end)
{
for (; start < end; start += PAGE_SIZE) {
ClearPageReserved(virt_to_page(start));
init_page_count(virt_to_page(start));
free_page((unsigned long)start);
totalram_pages++;
}
}
#ifdef CONFIG_BLK_DEV_INITRD
extern int initrd_is_mapped;
void free_initrd_mem(unsigned long start, unsigned long end)
{
if (initrd_is_mapped) {
free_reserved_mem((void*)start, (void*)end);
printk ("Freeing initrd memory: %ldk freed\n",(end-start)>>10);
}
if (initrd_is_mapped)
free_reserved_area(start, end, 0, "initrd");
}
#endif
void free_initmem(void)
{
free_reserved_mem(__init_begin, __init_end);
printk("Freeing unused kernel memory: %zuk freed\n",
(__init_end - __init_begin) >> 10);
free_initmem_default(0);
}

View File

@ -25,6 +25,15 @@ EXPORT_SYMBOL_GPL(cpu_subsys);
static DEFINE_PER_CPU(struct device *, cpu_sys_devices);
#ifdef CONFIG_HOTPLUG_CPU
static void change_cpu_under_node(struct cpu *cpu,
unsigned int from_nid, unsigned int to_nid)
{
int cpuid = cpu->dev.id;
unregister_cpu_under_node(cpuid, from_nid);
register_cpu_under_node(cpuid, to_nid);
cpu->node_id = to_nid;
}
static ssize_t show_online(struct device *dev,
struct device_attribute *attr,
char *buf)
@ -39,17 +48,29 @@ static ssize_t __ref store_online(struct device *dev,
const char *buf, size_t count)
{
struct cpu *cpu = container_of(dev, struct cpu, dev);
int cpuid = cpu->dev.id;
int from_nid, to_nid;
ssize_t ret;
cpu_hotplug_driver_lock();
switch (buf[0]) {
case '0':
ret = cpu_down(cpu->dev.id);
ret = cpu_down(cpuid);
if (!ret)
kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
break;
case '1':
ret = cpu_up(cpu->dev.id);
from_nid = cpu_to_node(cpuid);
ret = cpu_up(cpuid);
/*
 * When memory is hot-added to a memoryless node and a CPU is then
 * onlined on that node, the CPU's node number may change internally.
 */
to_nid = cpu_to_node(cpuid);
if (from_nid != to_nid)
change_cpu_under_node(cpu, from_nid, to_nid);
if (!ret)
kobject_uevent(&dev->kobj, KOBJ_ONLINE);
break;

View File

@ -93,16 +93,6 @@ int register_memory(struct memory_block *memory)
return error;
}
static void
unregister_memory(struct memory_block *memory)
{
BUG_ON(memory->dev.bus != &memory_subsys);
/* drop the ref. we got in remove_memory_block() */
kobject_put(&memory->dev.kobj);
device_unregister(&memory->dev);
}
unsigned long __weak memory_block_size_bytes(void)
{
return MIN_MEMORY_BLOCK_SIZE;
@ -217,8 +207,7 @@ int memory_isolate_notify(unsigned long val, void *v)
* The probe routines leave the pages reserved, just as the bootmem code does.
* Make sure they're still that way.
*/
static bool pages_correctly_reserved(unsigned long start_pfn,
unsigned long nr_pages)
static bool pages_correctly_reserved(unsigned long start_pfn)
{
int i, j;
struct page *page;
@ -266,7 +255,7 @@ memory_block_action(unsigned long phys_index, unsigned long action, int online_t
switch (action) {
case MEM_ONLINE:
if (!pages_correctly_reserved(start_pfn, nr_pages))
if (!pages_correctly_reserved(start_pfn))
return -EBUSY;
ret = online_pages(start_pfn, nr_pages, online_type);
@ -637,8 +626,28 @@ static int add_memory_section(int nid, struct mem_section *section,
return ret;
}
int remove_memory_block(unsigned long node_id, struct mem_section *section,
int phys_device)
/*
* need an interface for the VM to add new memory regions,
* but without onlining it.
*/
int register_new_memory(int nid, struct mem_section *section)
{
return add_memory_section(nid, section, NULL, MEM_OFFLINE, HOTPLUG);
}
#ifdef CONFIG_MEMORY_HOTREMOVE
static void
unregister_memory(struct memory_block *memory)
{
BUG_ON(memory->dev.bus != &memory_subsys);
/* drop the ref. we got in remove_memory_block() */
kobject_put(&memory->dev.kobj);
device_unregister(&memory->dev);
}
static int remove_memory_block(unsigned long node_id,
struct mem_section *section, int phys_device)
{
struct memory_block *mem;
@ -661,15 +670,6 @@ int remove_memory_block(unsigned long node_id, struct mem_section *section,
return 0;
}
/*
* need an interface for the VM to add new memory regions,
* but without onlining it.
*/
int register_new_memory(int nid, struct mem_section *section)
{
return add_memory_section(nid, section, NULL, MEM_OFFLINE, HOTPLUG);
}
int unregister_memory_section(struct mem_section *section)
{
if (!present_section(section))
@ -677,6 +677,7 @@ int unregister_memory_section(struct mem_section *section)
return remove_memory_block(0, section, 0);
}
#endif /* CONFIG_MEMORY_HOTREMOVE */
/*
* offline one memory block. If the memory block has been offlined, do nothing.

View File

@ -7,6 +7,7 @@
#include <linux/mm.h>
#include <linux/memory.h>
#include <linux/vmstat.h>
#include <linux/notifier.h>
#include <linux/node.h>
#include <linux/hugetlb.h>
#include <linux/compaction.h>
@ -683,8 +684,11 @@ static int __init register_node_type(void)
ret = subsys_system_register(&node_subsys, cpu_root_attr_groups);
if (!ret) {
hotplug_memory_notifier(node_memory_callback,
NODE_CALLBACK_PRI);
static struct notifier_block node_memory_callback_nb = {
.notifier_call = node_memory_callback,
.priority = NODE_CALLBACK_PRI,
};
register_hotmemory_notifier(&node_memory_callback_nb);
}
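register_hotmemory_notifier() takes a notifier_block directly, so the priority lives in the block rather than in a macro argument. A minimal hypothetical listener using the same pattern (callback name and return policy are illustrative):

#include <linux/memory.h>
#include <linux/notifier.h>

static int my_mem_callback(struct notifier_block *nb,
			   unsigned long action, void *arg)
{
	return NOTIFY_OK;
}

static struct notifier_block my_mem_nb = {
	.notifier_call = my_mem_callback,
	.priority = 0,
};

static int __init my_module_init(void)
{
	register_hotmemory_notifier(&my_mem_nb);
	return 0;
}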
/*

View File

@ -114,12 +114,9 @@ static void __meminit release_firmware_map_entry(struct kobject *kobj)
* map_entries_bootmem here, and deleted from &map_entries in
* firmware_map_remove_entry().
*/
if (firmware_map_find_entry(entry->start, entry->end,
entry->type)) {
spin_lock(&map_entries_bootmem_lock);
list_add(&entry->list, &map_entries_bootmem);
spin_unlock(&map_entries_bootmem_lock);
}
spin_lock(&map_entries_bootmem_lock);
list_add(&entry->list, &map_entries_bootmem);
spin_unlock(&map_entries_bootmem_lock);
return;
}

View File

@ -2440,6 +2440,15 @@ config FB_PUV3_UNIGFX
Choose this option if you want to use the Unigfx device as a
framebuffer device. Without the support of PCI & AGP.
config FB_HYPERV
tristate "Microsoft Hyper-V Synthetic Video support"
depends on FB && HYPERV
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
help
This framebuffer driver supports Microsoft Hyper-V Synthetic Video.
source "drivers/video/omap/Kconfig"
source "drivers/video/omap2/Kconfig"
source "drivers/video/exynos/Kconfig"

View File

@ -149,6 +149,7 @@ obj-$(CONFIG_FB_MSM) += msm/
obj-$(CONFIG_FB_NUC900) += nuc900fb.o
obj-$(CONFIG_FB_JZ4740) += jz4740_fb.o
obj-$(CONFIG_FB_PUV3_UNIGFX) += fb-puv3.o
obj-$(CONFIG_FB_HYPERV) += hyperv_fb.o
# Platform or fallback drivers go here
obj-$(CONFIG_FB_UVESA) += uvesafb.o

View File

@ -27,7 +27,7 @@ static void cw_update_attr(u8 *dst, u8 *src, int attribute,
{
int i, j, offset = (vc->vc_font.height < 10) ? 1 : 2;
int width = (vc->vc_font.height + 7) >> 3;
u8 c, t = 0, msk = ~(0xff >> offset);
u8 c, msk = ~(0xff >> offset);
for (i = 0; i < vc->vc_font.width; i++) {
for (j = 0; j < width; j++) {
@ -40,7 +40,6 @@ static void cw_update_attr(u8 *dst, u8 *src, int attribute,
c = ~c;
src++;
*dst++ = c;
t = c;
}
}
}

View File

@ -419,7 +419,7 @@ static struct fb_ops ep93xxfb_ops = {
.fb_mmap = ep93xxfb_mmap,
};
static int __init ep93xxfb_calc_fbsize(struct ep93xxfb_mach_info *mach_info)
static int ep93xxfb_calc_fbsize(struct ep93xxfb_mach_info *mach_info)
{
int i, fb_size = 0;
@ -441,7 +441,7 @@ static int __init ep93xxfb_calc_fbsize(struct ep93xxfb_mach_info *mach_info)
return fb_size;
}
static int __init ep93xxfb_alloc_videomem(struct fb_info *info)
static int ep93xxfb_alloc_videomem(struct fb_info *info)
{
struct ep93xx_fbi *fbi = info->par;
char __iomem *virt_addr;
@ -627,19 +627,7 @@ static struct platform_driver ep93xxfb_driver = {
.owner = THIS_MODULE,
},
};
static int ep93xxfb_init(void)
{
return platform_driver_register(&ep93xxfb_driver);
}
static void __exit ep93xxfb_exit(void)
{
platform_driver_unregister(&ep93xxfb_driver);
}
module_init(ep93xxfb_init);
module_exit(ep93xxfb_exit);
module_platform_driver(ep93xxfb_driver);
MODULE_DESCRIPTION("EP93XX Framebuffer Driver");
MODULE_ALIAS("platform:ep93xx-fb");
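module_platform_driver() generates exactly the init/exit boilerplate deleted above; roughly, the macro expands to:

static int __init ep93xxfb_driver_init(void)
{
	return platform_driver_register(&ep93xxfb_driver);
}
module_init(ep93xxfb_driver_init);

static void __exit ep93xxfb_driver_exit(void)
{
	platform_driver_unregister(&ep93xxfb_driver);
}
module_exit(ep93xxfb_driver_exit);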

View File

@ -32,6 +32,7 @@
#include <linux/notifier.h>
#include <linux/regulator/consumer.h>
#include <linux/pm_runtime.h>
#include <linux/err.h>
#include <video/exynos_mipi_dsim.h>
@ -382,10 +383,9 @@ static int exynos_mipi_dsi_probe(struct platform_device *pdev)
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
dsim->reg_base = devm_request_and_ioremap(&pdev->dev, res);
if (!dsim->reg_base) {
dev_err(&pdev->dev, "failed to remap io region\n");
ret = -ENOMEM;
dsim->reg_base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(dsim->reg_base)) {
ret = PTR_ERR(dsim->reg_base);
goto error;
}
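devm_ioremap_resource() checks the resource, requests the region, maps it, and logs failures itself, returning an ERR_PTR() code instead of NULL -- which is why the dev_err() and the hand-rolled -ENOMEM disappear. The idiom in isolated form (function name is illustrative):

static void __iomem *map_first_mem(struct platform_device *pdev)
{
	struct resource *res;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	return devm_ioremap_resource(&pdev->dev, res); /* ERR_PTR on failure */
}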

drivers/video/hyperv_fb.c (new file, 829 lines)
View File

@ -0,0 +1,829 @@
/*
* Copyright (c) 2012, Microsoft Corporation.
*
* Author:
* Haiyang Zhang <haiyangz@microsoft.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*/
/*
* Hyper-V Synthetic Video Frame Buffer Driver
*
* This is the driver for the Hyper-V Synthetic Video, which supports
* screen resolution up to Full HD 1920x1080 with 32 bit color on Windows
* Server 2012, and 1600x1200 with 16 bit color on Windows Server 2008 R2
* or earlier.
*
* It also solves the double mouse cursor issue of the emulated video mode.
*
* The default screen resolution is 1152x864, which may be changed by a
* kernel parameter:
* video=hyperv_fb:<width>x<height>
* For example: video=hyperv_fb:1280x1024
*
* Portrait orientation is also supported:
* For example: video=hyperv_fb:864x1152
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/completion.h>
#include <linux/fb.h>
#include <linux/pci.h>
#include <linux/hyperv.h>
/* Hyper-V Synthetic Video Protocol definitions and structures */
#define MAX_VMBUS_PKT_SIZE 0x4000
#define SYNTHVID_VERSION(major, minor) ((minor) << 16 | (major))
#define SYNTHVID_VERSION_WIN7 SYNTHVID_VERSION(3, 0)
#define SYNTHVID_VERSION_WIN8 SYNTHVID_VERSION(3, 2)
#define SYNTHVID_DEPTH_WIN7 16
#define SYNTHVID_DEPTH_WIN8 32
#define SYNTHVID_FB_SIZE_WIN7 (4 * 1024 * 1024)
#define SYNTHVID_WIDTH_MAX_WIN7 1600
#define SYNTHVID_HEIGHT_MAX_WIN7 1200
#define SYNTHVID_FB_SIZE_WIN8 (8 * 1024 * 1024)
#define PCI_VENDOR_ID_MICROSOFT 0x1414
#define PCI_DEVICE_ID_HYPERV_VIDEO 0x5353
enum pipe_msg_type {
PIPE_MSG_INVALID,
PIPE_MSG_DATA,
PIPE_MSG_MAX
};
struct pipe_msg_hdr {
u32 type;
u32 size; /* size of message after this field */
} __packed;
enum synthvid_msg_type {
SYNTHVID_ERROR = 0,
SYNTHVID_VERSION_REQUEST = 1,
SYNTHVID_VERSION_RESPONSE = 2,
SYNTHVID_VRAM_LOCATION = 3,
SYNTHVID_VRAM_LOCATION_ACK = 4,
SYNTHVID_SITUATION_UPDATE = 5,
SYNTHVID_SITUATION_UPDATE_ACK = 6,
SYNTHVID_POINTER_POSITION = 7,
SYNTHVID_POINTER_SHAPE = 8,
SYNTHVID_FEATURE_CHANGE = 9,
SYNTHVID_DIRT = 10,
SYNTHVID_MAX = 11
};
struct synthvid_msg_hdr {
u32 type;
u32 size; /* size of this header + payload after this field */
} __packed;
struct synthvid_version_req {
u32 version;
} __packed;
struct synthvid_version_resp {
u32 version;
u8 is_accepted;
u8 max_video_outputs;
} __packed;
struct synthvid_vram_location {
u64 user_ctx;
u8 is_vram_gpa_specified;
u64 vram_gpa;
} __packed;
struct synthvid_vram_location_ack {
u64 user_ctx;
} __packed;
struct video_output_situation {
u8 active;
u32 vram_offset;
u8 depth_bits;
u32 width_pixels;
u32 height_pixels;
u32 pitch_bytes;
} __packed;
struct synthvid_situation_update {
u64 user_ctx;
u8 video_output_count;
struct video_output_situation video_output[1];
} __packed;
struct synthvid_situation_update_ack {
u64 user_ctx;
} __packed;
struct synthvid_pointer_position {
u8 is_visible;
u8 video_output;
s32 image_x;
s32 image_y;
} __packed;
#define CURSOR_MAX_X 96
#define CURSOR_MAX_Y 96
#define CURSOR_ARGB_PIXEL_SIZE 4
#define CURSOR_MAX_SIZE (CURSOR_MAX_X * CURSOR_MAX_Y * CURSOR_ARGB_PIXEL_SIZE)
#define CURSOR_COMPLETE (-1)
struct synthvid_pointer_shape {
u8 part_idx;
u8 is_argb;
u32 width; /* CURSOR_MAX_X at most */
u32 height; /* CURSOR_MAX_Y at most */
u32 hot_x; /* hotspot relative to upper-left of pointer image */
u32 hot_y;
u8 data[4];
} __packed;
struct synthvid_feature_change {
u8 is_dirt_needed;
u8 is_ptr_pos_needed;
u8 is_ptr_shape_needed;
u8 is_situ_needed;
} __packed;
struct rect {
s32 x1, y1; /* top left corner */
s32 x2, y2; /* bottom right corner, exclusive */
} __packed;
struct synthvid_dirt {
u8 video_output;
u8 dirt_count;
struct rect rect[1];
} __packed;
struct synthvid_msg {
struct pipe_msg_hdr pipe_hdr;
struct synthvid_msg_hdr vid_hdr;
union {
struct synthvid_version_req ver_req;
struct synthvid_version_resp ver_resp;
struct synthvid_vram_location vram;
struct synthvid_vram_location_ack vram_ack;
struct synthvid_situation_update situ;
struct synthvid_situation_update_ack situ_ack;
struct synthvid_pointer_position ptr_pos;
struct synthvid_pointer_shape ptr_shape;
struct synthvid_feature_change feature_chg;
struct synthvid_dirt dirt;
};
} __packed;
/* FB driver definitions and structures */
#define HVFB_WIDTH 1152 /* default screen width */
#define HVFB_HEIGHT 864 /* default screen height */
#define HVFB_WIDTH_MIN 640
#define HVFB_HEIGHT_MIN 480
#define RING_BUFSIZE (256 * 1024)
#define VSP_TIMEOUT (10 * HZ)
#define HVFB_UPDATE_DELAY (HZ / 20)
struct hvfb_par {
struct fb_info *info;
bool fb_ready; /* fb device is ready */
struct completion wait;
u32 synthvid_version;
struct delayed_work dwork;
bool update;
u32 pseudo_palette[16];
u8 init_buf[MAX_VMBUS_PKT_SIZE];
u8 recv_buf[MAX_VMBUS_PKT_SIZE];
};
static uint screen_width = HVFB_WIDTH;
static uint screen_height = HVFB_HEIGHT;
static uint screen_depth;
static uint screen_fb_size;
/* Send message to Hyper-V host */
static inline int synthvid_send(struct hv_device *hdev,
struct synthvid_msg *msg)
{
static atomic64_t request_id = ATOMIC64_INIT(0);
int ret;
msg->pipe_hdr.type = PIPE_MSG_DATA;
msg->pipe_hdr.size = msg->vid_hdr.size;
ret = vmbus_sendpacket(hdev->channel, msg,
msg->vid_hdr.size + sizeof(struct pipe_msg_hdr),
atomic64_inc_return(&request_id),
VM_PKT_DATA_INBAND, 0);
if (ret)
pr_err("Unable to send packet via vmbus\n");
return ret;
}
/* Send screen resolution info to host */
static int synthvid_send_situ(struct hv_device *hdev)
{
struct fb_info *info = hv_get_drvdata(hdev);
struct synthvid_msg msg;
if (!info)
return -ENODEV;
memset(&msg, 0, sizeof(struct synthvid_msg));
msg.vid_hdr.type = SYNTHVID_SITUATION_UPDATE;
msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
sizeof(struct synthvid_situation_update);
msg.situ.user_ctx = 0;
msg.situ.video_output_count = 1;
msg.situ.video_output[0].active = 1;
msg.situ.video_output[0].vram_offset = 0;
msg.situ.video_output[0].depth_bits = info->var.bits_per_pixel;
msg.situ.video_output[0].width_pixels = info->var.xres;
msg.situ.video_output[0].height_pixels = info->var.yres;
msg.situ.video_output[0].pitch_bytes = info->fix.line_length;
synthvid_send(hdev, &msg);
return 0;
}
/* Send mouse pointer info to host */
static int synthvid_send_ptr(struct hv_device *hdev)
{
struct synthvid_msg msg;
memset(&msg, 0, sizeof(struct synthvid_msg));
msg.vid_hdr.type = SYNTHVID_POINTER_POSITION;
msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
sizeof(struct synthvid_pointer_position);
msg.ptr_pos.is_visible = 1;
msg.ptr_pos.video_output = 0;
msg.ptr_pos.image_x = 0;
msg.ptr_pos.image_y = 0;
synthvid_send(hdev, &msg);
memset(&msg, 0, sizeof(struct synthvid_msg));
msg.vid_hdr.type = SYNTHVID_POINTER_SHAPE;
msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
sizeof(struct synthvid_pointer_shape);
msg.ptr_shape.part_idx = CURSOR_COMPLETE;
msg.ptr_shape.is_argb = 1;
msg.ptr_shape.width = 1;
msg.ptr_shape.height = 1;
msg.ptr_shape.hot_x = 0;
msg.ptr_shape.hot_y = 0;
msg.ptr_shape.data[0] = 0;
msg.ptr_shape.data[1] = 1;
msg.ptr_shape.data[2] = 1;
msg.ptr_shape.data[3] = 1;
synthvid_send(hdev, &msg);
return 0;
}
/* Send updated screen area (dirty rectangle) location to host */
static int synthvid_update(struct fb_info *info)
{
struct hv_device *hdev = device_to_hv_device(info->device);
struct synthvid_msg msg;
memset(&msg, 0, sizeof(struct synthvid_msg));
msg.vid_hdr.type = SYNTHVID_DIRT;
msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
sizeof(struct synthvid_dirt);
msg.dirt.video_output = 0;
msg.dirt.dirt_count = 1;
msg.dirt.rect[0].x1 = 0;
msg.dirt.rect[0].y1 = 0;
msg.dirt.rect[0].x2 = info->var.xres;
msg.dirt.rect[0].y2 = info->var.yres;
synthvid_send(hdev, &msg);
return 0;
}
/*
* Actions on received messages from the host:
* either complete the pending wait event, or reply with screen and
* cursor info.
*/
static void synthvid_recv_sub(struct hv_device *hdev)
{
struct fb_info *info = hv_get_drvdata(hdev);
struct hvfb_par *par;
struct synthvid_msg *msg;
if (!info)
return;
par = info->par;
msg = (struct synthvid_msg *)par->recv_buf;
/* Complete the wait event */
if (msg->vid_hdr.type == SYNTHVID_VERSION_RESPONSE ||
msg->vid_hdr.type == SYNTHVID_VRAM_LOCATION_ACK) {
memcpy(par->init_buf, msg, MAX_VMBUS_PKT_SIZE);
complete(&par->wait);
return;
}
/* Reply with screen and cursor info */
if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE) {
if (par->fb_ready) {
synthvid_send_ptr(hdev);
synthvid_send_situ(hdev);
}
par->update = msg->feature_chg.is_dirt_needed;
if (par->update)
schedule_delayed_work(&par->dwork, HVFB_UPDATE_DELAY);
}
}
/* Receive callback for messages from the host */
static void synthvid_receive(void *ctx)
{
struct hv_device *hdev = ctx;
struct fb_info *info = hv_get_drvdata(hdev);
struct hvfb_par *par;
struct synthvid_msg *recv_buf;
u32 bytes_recvd;
u64 req_id;
int ret;
if (!info)
return;
par = info->par;
recv_buf = (struct synthvid_msg *)par->recv_buf;
do {
ret = vmbus_recvpacket(hdev->channel, recv_buf,
MAX_VMBUS_PKT_SIZE,
&bytes_recvd, &req_id);
if (bytes_recvd > 0 &&
recv_buf->pipe_hdr.type == PIPE_MSG_DATA)
synthvid_recv_sub(hdev);
} while (bytes_recvd > 0 && ret == 0);
}
/* Check synthetic video protocol version with the host */
static int synthvid_negotiate_ver(struct hv_device *hdev, u32 ver)
{
struct fb_info *info = hv_get_drvdata(hdev);
struct hvfb_par *par = info->par;
struct synthvid_msg *msg = (struct synthvid_msg *)par->init_buf;
int t, ret = 0;
memset(msg, 0, sizeof(struct synthvid_msg));
msg->vid_hdr.type = SYNTHVID_VERSION_REQUEST;
msg->vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
sizeof(struct synthvid_version_req);
msg->ver_req.version = ver;
synthvid_send(hdev, msg);
t = wait_for_completion_timeout(&par->wait, VSP_TIMEOUT);
if (!t) {
pr_err("Time out on waiting version response\n");
ret = -ETIMEDOUT;
goto out;
}
if (!msg->ver_resp.is_accepted) {
ret = -ENODEV;
goto out;
}
par->synthvid_version = ver;
out:
return ret;
}
/* Connect to VSP (Virtual Service Provider) on host */
static int synthvid_connect_vsp(struct hv_device *hdev)
{
struct fb_info *info = hv_get_drvdata(hdev);
struct hvfb_par *par = info->par;
int ret;
ret = vmbus_open(hdev->channel, RING_BUFSIZE, RING_BUFSIZE,
NULL, 0, synthvid_receive, hdev);
if (ret) {
pr_err("Unable to open vmbus channel\n");
return ret;
}
/* Negotiate the protocol version with host */
if (vmbus_proto_version == VERSION_WS2008 ||
vmbus_proto_version == VERSION_WIN7)
ret = synthvid_negotiate_ver(hdev, SYNTHVID_VERSION_WIN7);
else
ret = synthvid_negotiate_ver(hdev, SYNTHVID_VERSION_WIN8);
if (ret) {
pr_err("Synthetic video device version not accepted\n");
goto error;
}
if (par->synthvid_version == SYNTHVID_VERSION_WIN7) {
screen_depth = SYNTHVID_DEPTH_WIN7;
screen_fb_size = SYNTHVID_FB_SIZE_WIN7;
} else {
screen_depth = SYNTHVID_DEPTH_WIN8;
screen_fb_size = SYNTHVID_FB_SIZE_WIN8;
}
return 0;
error:
vmbus_close(hdev->channel);
return ret;
}
/* Send VRAM and Situation messages to the host */
static int synthvid_send_config(struct hv_device *hdev)
{
struct fb_info *info = hv_get_drvdata(hdev);
struct hvfb_par *par = info->par;
struct synthvid_msg *msg = (struct synthvid_msg *)par->init_buf;
int t, ret = 0;
/* Send VRAM location */
memset(msg, 0, sizeof(struct synthvid_msg));
msg->vid_hdr.type = SYNTHVID_VRAM_LOCATION;
msg->vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
sizeof(struct synthvid_vram_location);
msg->vram.user_ctx = msg->vram.vram_gpa = info->fix.smem_start;
msg->vram.is_vram_gpa_specified = 1;
synthvid_send(hdev, msg);
t = wait_for_completion_timeout(&par->wait, VSP_TIMEOUT);
if (!t) {
pr_err("Time out on waiting vram location ack\n");
ret = -ETIMEDOUT;
goto out;
}
if (msg->vram_ack.user_ctx != info->fix.smem_start) {
pr_err("Unable to set VRAM location\n");
ret = -ENODEV;
goto out;
}
/* Send pointer and situation update */
synthvid_send_ptr(hdev);
synthvid_send_situ(hdev);
out:
return ret;
}
/*
* Delayed work callback:
* It runs at intervals of HVFB_UPDATE_DELAY or longer to process
* screen updates, and is re-scheduled if further updates are necessary.
*/
static void hvfb_update_work(struct work_struct *w)
{
struct hvfb_par *par = container_of(w, struct hvfb_par, dwork.work);
struct fb_info *info = par->info;
if (par->fb_ready)
synthvid_update(info);
if (par->update)
schedule_delayed_work(&par->dwork, HVFB_UPDATE_DELAY);
}
/* Framebuffer operation handlers */
static int hvfb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
{
if (var->xres < HVFB_WIDTH_MIN || var->yres < HVFB_HEIGHT_MIN ||
var->xres > screen_width || var->yres > screen_height ||
var->bits_per_pixel != screen_depth)
return -EINVAL;
var->xres_virtual = var->xres;
var->yres_virtual = var->yres;
return 0;
}
static int hvfb_set_par(struct fb_info *info)
{
struct hv_device *hdev = device_to_hv_device(info->device);
return synthvid_send_situ(hdev);
}
static inline u32 chan_to_field(u32 chan, struct fb_bitfield *bf)
{
return ((chan & 0xffff) >> (16 - bf->length)) << bf->offset;
}
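/*
 * Worked example (sketch): with the 16bpp layout set up in hvfb_probe()
 * below, red is {.offset = 11, .length = 5}, so a full-scale channel
 * packs as:
 *
 *   chan_to_field(0xffff, &red)
 *     == ((0xffff & 0xffff) >> (16 - 5)) << 11
 *     == 0x1f << 11 == 0xf800
 */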
static int hvfb_setcolreg(unsigned regno, unsigned red, unsigned green,
unsigned blue, unsigned transp, struct fb_info *info)
{
u32 *pal = info->pseudo_palette;
if (regno > 15)
return -EINVAL;
pal[regno] = chan_to_field(red, &info->var.red)
| chan_to_field(green, &info->var.green)
| chan_to_field(blue, &info->var.blue)
| chan_to_field(transp, &info->var.transp);
return 0;
}
static struct fb_ops hvfb_ops = {
.owner = THIS_MODULE,
.fb_check_var = hvfb_check_var,
.fb_set_par = hvfb_set_par,
.fb_setcolreg = hvfb_setcolreg,
.fb_fillrect = cfb_fillrect,
.fb_copyarea = cfb_copyarea,
.fb_imageblit = cfb_imageblit,
};
/* Get options from kernel parameter "video=" */
static void hvfb_get_option(struct fb_info *info)
{
struct hvfb_par *par = info->par;
char *opt = NULL, *p;
uint x = 0, y = 0;
if (fb_get_options(KBUILD_MODNAME, &opt) || !opt || !*opt)
return;
p = strsep(&opt, "x");
if (!*p || kstrtouint(p, 0, &x) ||
!opt || !*opt || kstrtouint(opt, 0, &y)) {
pr_err("Screen option is invalid: skipped\n");
return;
}
if (x < HVFB_WIDTH_MIN || y < HVFB_HEIGHT_MIN ||
(par->synthvid_version == SYNTHVID_VERSION_WIN8 &&
x * y * screen_depth / 8 > SYNTHVID_FB_SIZE_WIN8) ||
(par->synthvid_version == SYNTHVID_VERSION_WIN7 &&
(x > SYNTHVID_WIDTH_MAX_WIN7 || y > SYNTHVID_HEIGHT_MAX_WIN7))) {
pr_err("Screen resolution option is out of range: skipped\n");
return;
}
screen_width = x;
screen_height = y;
return;
}
/* Get framebuffer memory from Hyper-V video pci space */
static int hvfb_getmem(struct fb_info *info)
{
struct pci_dev *pdev;
ulong fb_phys;
void __iomem *fb_virt;
pdev = pci_get_device(PCI_VENDOR_ID_MICROSOFT,
PCI_DEVICE_ID_HYPERV_VIDEO, NULL);
if (!pdev) {
pr_err("Unable to find PCI Hyper-V video\n");
return -ENODEV;
}
if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM) ||
pci_resource_len(pdev, 0) < screen_fb_size)
goto err1;
fb_phys = pci_resource_end(pdev, 0) - screen_fb_size + 1;
if (!request_mem_region(fb_phys, screen_fb_size, KBUILD_MODNAME))
goto err1;
fb_virt = ioremap(fb_phys, screen_fb_size);
if (!fb_virt)
goto err2;
info->apertures = alloc_apertures(1);
if (!info->apertures)
goto err3;
info->apertures->ranges[0].base = pci_resource_start(pdev, 0);
info->apertures->ranges[0].size = pci_resource_len(pdev, 0);
info->fix.smem_start = fb_phys;
info->fix.smem_len = screen_fb_size;
info->screen_base = fb_virt;
info->screen_size = screen_fb_size;
pci_dev_put(pdev);
return 0;
err3:
iounmap(fb_virt);
err2:
release_mem_region(fb_phys, screen_fb_size);
err1:
pci_dev_put(pdev);
return -ENOMEM;
}
/* Release the framebuffer */
static void hvfb_putmem(struct fb_info *info)
{
iounmap(info->screen_base);
release_mem_region(info->fix.smem_start, screen_fb_size);
}
static int hvfb_probe(struct hv_device *hdev,
const struct hv_vmbus_device_id *dev_id)
{
struct fb_info *info;
struct hvfb_par *par;
int ret;
info = framebuffer_alloc(sizeof(struct hvfb_par), &hdev->device);
if (!info) {
pr_err("No memory for framebuffer info\n");
return -ENOMEM;
}
par = info->par;
par->info = info;
par->fb_ready = false;
init_completion(&par->wait);
INIT_DELAYED_WORK(&par->dwork, hvfb_update_work);
/* Connect to VSP */
hv_set_drvdata(hdev, info);
ret = synthvid_connect_vsp(hdev);
if (ret) {
pr_err("Unable to connect to VSP\n");
goto error1;
}
ret = hvfb_getmem(info);
if (ret) {
pr_err("No memory for framebuffer\n");
goto error2;
}
hvfb_get_option(info);
pr_info("Screen resolution: %dx%d, Color depth: %d\n",
screen_width, screen_height, screen_depth);
/* Set up fb_info */
info->flags = FBINFO_DEFAULT;
info->var.xres_virtual = info->var.xres = screen_width;
info->var.yres_virtual = info->var.yres = screen_height;
info->var.bits_per_pixel = screen_depth;
if (info->var.bits_per_pixel == 16) {
info->var.red = (struct fb_bitfield){11, 5, 0};
info->var.green = (struct fb_bitfield){5, 6, 0};
info->var.blue = (struct fb_bitfield){0, 5, 0};
info->var.transp = (struct fb_bitfield){0, 0, 0};
} else {
info->var.red = (struct fb_bitfield){16, 8, 0};
info->var.green = (struct fb_bitfield){8, 8, 0};
info->var.blue = (struct fb_bitfield){0, 8, 0};
info->var.transp = (struct fb_bitfield){24, 8, 0};
}
info->var.activate = FB_ACTIVATE_NOW;
info->var.height = -1;
info->var.width = -1;
info->var.vmode = FB_VMODE_NONINTERLACED;
strcpy(info->fix.id, KBUILD_MODNAME);
info->fix.type = FB_TYPE_PACKED_PIXELS;
info->fix.visual = FB_VISUAL_TRUECOLOR;
info->fix.line_length = screen_width * screen_depth / 8;
info->fix.accel = FB_ACCEL_NONE;
info->fbops = &hvfb_ops;
info->pseudo_palette = par->pseudo_palette;
/* Send config to host */
ret = synthvid_send_config(hdev);
if (ret)
goto error;
ret = register_framebuffer(info);
if (ret) {
pr_err("Unable to register framebuffer\n");
goto error;
}
par->fb_ready = true;
return 0;
error:
hvfb_putmem(info);
error2:
vmbus_close(hdev->channel);
error1:
cancel_delayed_work_sync(&par->dwork);
hv_set_drvdata(hdev, NULL);
framebuffer_release(info);
return ret;
}
static int hvfb_remove(struct hv_device *hdev)
{
struct fb_info *info = hv_get_drvdata(hdev);
struct hvfb_par *par = info->par;
par->update = false;
par->fb_ready = false;
unregister_framebuffer(info);
cancel_delayed_work_sync(&par->dwork);
vmbus_close(hdev->channel);
hv_set_drvdata(hdev, NULL);
hvfb_putmem(info);
framebuffer_release(info);
return 0;
}
static const struct hv_vmbus_device_id id_table[] = {
/* Synthetic Video Device GUID */
{HV_SYNTHVID_GUID},
{}
};
MODULE_DEVICE_TABLE(vmbus, id_table);
static struct hv_driver hvfb_drv = {
.name = KBUILD_MODNAME,
.id_table = id_table,
.probe = hvfb_probe,
.remove = hvfb_remove,
};
static int __init hvfb_drv_init(void)
{
return vmbus_driver_register(&hvfb_drv);
}
static void __exit hvfb_drv_exit(void)
{
vmbus_driver_unregister(&hvfb_drv);
}
module_init(hvfb_drv_init);
module_exit(hvfb_drv_exit);
MODULE_LICENSE("GPL");
MODULE_VERSION(HV_DRV_VERSION);
MODULE_DESCRIPTION("Microsoft Hyper-V Synthetic Video Frame Buffer Driver");

View File

@ -137,8 +137,20 @@ static int* get_ctrl_ptr(struct maven_data* md, int idx) {
static int maven_get_reg(struct i2c_client* c, char reg) {
char dst;
struct i2c_msg msgs[] = {{ c->addr, I2C_M_REV_DIR_ADDR, sizeof(reg), &reg },
{ c->addr, I2C_M_RD | I2C_M_NOSTART, sizeof(dst), &dst }};
struct i2c_msg msgs[] = {
{
.addr = c->addr,
.flags = I2C_M_REV_DIR_ADDR,
.len = sizeof(reg),
.buf = &reg
},
{
.addr = c->addr,
.flags = I2C_M_RD | I2C_M_NOSTART,
.len = sizeof(dst),
.buf = &dst
}
};
s32 err;
err = i2c_transfer(c->adapter, msgs, 2);
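The rewrite is purely mechanical: designated initializers name each i2c_msg field, so a future reordering of the struct cannot silently misassign them. The same idiom in a minimal hypothetical form:

/* Hypothetical single-byte read; message layout is what matters here. */
static int read_one_byte(struct i2c_client *c, u8 *dst)
{
	struct i2c_msg msg = {
		.addr  = c->addr,
		.flags = I2C_M_RD,
		.len   = 1,
		.buf   = dst,
	};

	return i2c_transfer(c->adapter, &msg, 1);
}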

View File

@ -961,56 +961,7 @@ struct lcd_regs {
LCD_TVG_CUTVLN : PN2_LCD_GRA_CUTVLN) : LCD_GRA_CUTVLN)
/*
* defined Video Memory Color format for DMA control 0 register
* DMA0 bit[23:20]
*/
#define VMODE_RGB565 0x0
#define VMODE_RGB1555 0x1
#define VMODE_RGB888PACKED 0x2
#define VMODE_RGB888UNPACKED 0x3
#define VMODE_RGBA888 0x4
#define VMODE_YUV422PACKED 0x5
#define VMODE_YUV422PLANAR 0x6
#define VMODE_YUV420PLANAR 0x7
#define VMODE_SMPNCMD 0x8
#define VMODE_PALETTE4BIT 0x9
#define VMODE_PALETTE8BIT 0xa
#define VMODE_RESERVED 0xb
/*
* defined Graphic Memory Color format for DMA control 0 register
* DMA0 bit[19:16]
*/
#define GMODE_RGB565 0x0
#define GMODE_RGB1555 0x1
#define GMODE_RGB888PACKED 0x2
#define GMODE_RGB888UNPACKED 0x3
#define GMODE_RGBA888 0x4
#define GMODE_YUV422PACKED 0x5
#define GMODE_YUV422PLANAR 0x6
#define GMODE_YUV420PLANAR 0x7
#define GMODE_SMPNCMD 0x8
#define GMODE_PALETTE4BIT 0x9
#define GMODE_PALETTE8BIT 0xa
#define GMODE_RESERVED 0xb
/*
* define for DMA control 1 register
*/
#define DMA1_FRAME_TRIG 31 /* bit location */
#define DMA1_VSYNC_MODE 28
#define DMA1_VSYNC_INV 27
#define DMA1_CKEY 24
#define DMA1_CARRY 23
#define DMA1_LNBUF_ENA 22
#define DMA1_GATED_ENA 21
#define DMA1_PWRDN_ENA 20
#define DMA1_DSCALE 18
#define DMA1_ALPHA_MODE 16
#define DMA1_ALPHA 08
#define DMA1_PXLCMD 00
/*
* defined for Configure Dumb Mode
* DUMB LCD Panel bit[31:28]
*/
@ -1050,18 +1001,6 @@ struct lcd_regs {
#define CFG_CYC_BURST_LEN16 (1<<4)
#define CFG_CYC_BURST_LEN8 (0<<4)
/*
* defined Dumb Panel Clock Divider register
* SCLK_Source bit[31]
*/
/* 0: PLL clock select*/
#define AXI_BUS_SEL 0x80000000
#define CCD_CLK_SEL 0x40000000
#define DCON_CLK_SEL 0x20000000
#define ENA_CLK_INT_DIV CONFIG_FB_DOVE_CLCD_SCLK_DIV
#define IDLE_CLK_INT_DIV 0x1 /* idle Integer Divider */
#define DIS_CLK_INT_DIV 0x0 /* Disable Integer Divider */
/* SRAM ID */
#define SRAMID_GAMMA_YR 0x0
#define SRAMID_GAMMA_UG 0x1
@ -1471,422 +1410,6 @@ struct dsi_regs {
#define LVDS_FREQ_OFFSET_MODE_CK_DIV4_OUT (0x1 << 1)
#define LVDS_FREQ_OFFSET_MODE_EN (0x1 << 0)
/* VDMA */
struct vdma_ch_regs {
#define VDMA_DC_SADDR_1 0x320
#define VDMA_DC_SADDR_2 0x3A0
#define VDMA_DC_SZ_1 0x324
#define VDMA_DC_SZ_2 0x3A4
#define VDMA_CTRL_1 0x328
#define VDMA_CTRL_2 0x3A8
#define VDMA_SRC_SZ_1 0x32C
#define VDMA_SRC_SZ_2 0x3AC
#define VDMA_SA_1 0x330
#define VDMA_SA_2 0x3B0
#define VDMA_DA_1 0x334
#define VDMA_DA_2 0x3B4
#define VDMA_SZ_1 0x338
#define VDMA_SZ_2 0x3B8
u32 dc_saddr;
u32 dc_size;
u32 ctrl;
u32 src_size;
u32 src_addr;
u32 dst_addr;
u32 dst_size;
#define VDMA_PITCH_1 0x33C
#define VDMA_PITCH_2 0x3BC
#define VDMA_ROT_CTRL_1 0x340
#define VDMA_ROT_CTRL_2 0x3C0
#define VDMA_RAM_CTRL0_1 0x344
#define VDMA_RAM_CTRL0_2 0x3C4
#define VDMA_RAM_CTRL1_1 0x348
#define VDMA_RAM_CTRL1_2 0x3C8
u32 pitch;
u32 rot_ctrl;
u32 ram_ctrl0;
u32 ram_ctrl1;
};
struct vdma_regs {
#define VDMA_ARBR_CTRL 0x300
#define VDMA_IRQR 0x304
#define VDMA_IRQM 0x308
#define VDMA_IRQS 0x30C
#define VDMA_MDMA_ARBR_CTRL 0x310
u32 arbr_ctr;
u32 irq_raw;
u32 irq_mask;
u32 irq_status;
u32 mdma_arbr_ctrl;
u32 reserved[3];
struct vdma_ch_regs ch1;
u32 reserved2[21];
struct vdma_ch_regs ch2;
};
/* CMU */
#define CMU_PIP_DE_H_CFG 0x0008
#define CMU_PRI1_H_CFG 0x000C
#define CMU_PRI2_H_CFG 0x0010
#define CMU_ACE_MAIN_DE1_H_CFG 0x0014
#define CMU_ACE_MAIN_DE2_H_CFG 0x0018
#define CMU_ACE_PIP_DE1_H_CFG 0x001C
#define CMU_ACE_PIP_DE2_H_CFG 0x0020
#define CMU_PIP_DE_V_CFG 0x0024
#define CMU_PRI_V_CFG 0x0028
#define CMU_ACE_MAIN_DE_V_CFG 0x002C
#define CMU_ACE_PIP_DE_V_CFG 0x0030
#define CMU_BAR_0_CFG 0x0034
#define CMU_BAR_1_CFG 0x0038
#define CMU_BAR_2_CFG 0x003C
#define CMU_BAR_3_CFG 0x0040
#define CMU_BAR_4_CFG 0x0044
#define CMU_BAR_5_CFG 0x0048
#define CMU_BAR_6_CFG 0x004C
#define CMU_BAR_7_CFG 0x0050
#define CMU_BAR_8_CFG 0x0054
#define CMU_BAR_9_CFG 0x0058
#define CMU_BAR_10_CFG 0x005C
#define CMU_BAR_11_CFG 0x0060
#define CMU_BAR_12_CFG 0x0064
#define CMU_BAR_13_CFG 0x0068
#define CMU_BAR_14_CFG 0x006C
#define CMU_BAR_15_CFG 0x0070
#define CMU_BAR_CTRL 0x0074
#define PATTERN_TOTAL 0x0078
#define PATTERN_ACTIVE 0x007C
#define PATTERN_FRONT_PORCH 0x0080
#define PATTERN_BACK_PORCH 0x0084
#define CMU_CLK_CTRL 0x0088
#define CMU_ICSC_M_C0_L 0x0900
#define CMU_ICSC_M_C0_H 0x0901
#define CMU_ICSC_M_C1_L 0x0902
#define CMU_ICSC_M_C1_H 0x0903
#define CMU_ICSC_M_C2_L 0x0904
#define CMU_ICSC_M_C2_H 0x0905
#define CMU_ICSC_M_C3_L 0x0906
#define CMU_ICSC_M_C3_H 0x0907
#define CMU_ICSC_M_C4_L 0x0908
#define CMU_ICSC_M_C4_H 0x0909
#define CMU_ICSC_M_C5_L 0x090A
#define CMU_ICSC_M_C5_H 0x090B
#define CMU_ICSC_M_C6_L 0x090C
#define CMU_ICSC_M_C6_H 0x090D
#define CMU_ICSC_M_C7_L 0x090E
#define CMU_ICSC_M_C7_H 0x090F
#define CMU_ICSC_M_C8_L 0x0910
#define CMU_ICSC_M_C8_H 0x0911
#define CMU_ICSC_M_O1_0 0x0914
#define CMU_ICSC_M_O1_1 0x0915
#define CMU_ICSC_M_O1_2 0x0916
#define CMU_ICSC_M_O2_0 0x0918
#define CMU_ICSC_M_O2_1 0x0919
#define CMU_ICSC_M_O2_2 0x091A
#define CMU_ICSC_M_O3_0 0x091C
#define CMU_ICSC_M_O3_1 0x091D
#define CMU_ICSC_M_O3_2 0x091E
#define CMU_ICSC_P_C0_L 0x0920
#define CMU_ICSC_P_C0_H 0x0921
#define CMU_ICSC_P_C1_L 0x0922
#define CMU_ICSC_P_C1_H 0x0923
#define CMU_ICSC_P_C2_L 0x0924
#define CMU_ICSC_P_C2_H 0x0925
#define CMU_ICSC_P_C3_L 0x0926
#define CMU_ICSC_P_C3_H 0x0927
#define CMU_ICSC_P_C4_L 0x0928
#define CMU_ICSC_P_C4_H 0x0929
#define CMU_ICSC_P_C5_L 0x092A
#define CMU_ICSC_P_C5_H 0x092B
#define CMU_ICSC_P_C6_L 0x092C
#define CMU_ICSC_P_C6_H 0x092D
#define CMU_ICSC_P_C7_L 0x092E
#define CMU_ICSC_P_C7_H 0x092F
#define CMU_ICSC_P_C8_L 0x0930
#define CMU_ICSC_P_C8_H 0x0931
#define CMU_ICSC_P_O1_0 0x0934
#define CMU_ICSC_P_O1_1 0x0935
#define CMU_ICSC_P_O1_2 0x0936
#define CMU_ICSC_P_O2_0 0x0938
#define CMU_ICSC_P_O2_1 0x0939
#define CMU_ICSC_P_O2_2 0x093A
#define CMU_ICSC_P_O3_0 0x093C
#define CMU_ICSC_P_O3_1 0x093D
#define CMU_ICSC_P_O3_2 0x093E
#define CMU_BR_M_EN 0x0940
#define CMU_BR_M_TH1_L 0x0942
#define CMU_BR_M_TH1_H 0x0943
#define CMU_BR_M_TH2_L 0x0944
#define CMU_BR_M_TH2_H 0x0945
#define CMU_ACE_M_EN 0x0950
#define CMU_ACE_M_WFG1 0x0951
#define CMU_ACE_M_WFG2 0x0952
#define CMU_ACE_M_WFG3 0x0953
#define CMU_ACE_M_TH0 0x0954
#define CMU_ACE_M_TH1 0x0955
#define CMU_ACE_M_TH2 0x0956
#define CMU_ACE_M_TH3 0x0957
#define CMU_ACE_M_TH4 0x0958
#define CMU_ACE_M_TH5 0x0959
#define CMU_ACE_M_OP0_L 0x095A
#define CMU_ACE_M_OP0_H 0x095B
#define CMU_ACE_M_OP5_L 0x095C
#define CMU_ACE_M_OP5_H 0x095D
#define CMU_ACE_M_GB2 0x095E
#define CMU_ACE_M_GB3 0x095F
#define CMU_ACE_M_MS1 0x0960
#define CMU_ACE_M_MS2 0x0961
#define CMU_ACE_M_MS3 0x0962
#define CMU_BR_P_EN 0x0970
#define CMU_BR_P_TH1_L 0x0972
#define CMU_BR_P_TH1_H 0x0973
#define CMU_BR_P_TH2_L 0x0974
#define CMU_BR_P_TH2_H 0x0975
#define CMU_ACE_P_EN 0x0980
#define CMU_ACE_P_WFG1 0x0981
#define CMU_ACE_P_WFG2 0x0982
#define CMU_ACE_P_WFG3 0x0983
#define CMU_ACE_P_TH0 0x0984
#define CMU_ACE_P_TH1 0x0985
#define CMU_ACE_P_TH2 0x0986
#define CMU_ACE_P_TH3 0x0987
#define CMU_ACE_P_TH4 0x0988
#define CMU_ACE_P_TH5 0x0989
#define CMU_ACE_P_OP0_L 0x098A
#define CMU_ACE_P_OP0_H 0x098B
#define CMU_ACE_P_OP5_L 0x098C
#define CMU_ACE_P_OP5_H 0x098D
#define CMU_ACE_P_GB2 0x098E
#define CMU_ACE_P_GB3 0x098F
#define CMU_ACE_P_MS1 0x0990
#define CMU_ACE_P_MS2 0x0991
#define CMU_ACE_P_MS3 0x0992
#define CMU_FTDC_M_EN 0x09A0
#define CMU_FTDC_P_EN 0x09A1
#define CMU_FTDC_INLOW_L 0x09A2
#define CMU_FTDC_INLOW_H 0x09A3
#define CMU_FTDC_INHIGH_L 0x09A4
#define CMU_FTDC_INHIGH_H 0x09A5
#define CMU_FTDC_OUTLOW_L 0x09A6
#define CMU_FTDC_OUTLOW_H 0x09A7
#define CMU_FTDC_OUTHIGH_L 0x09A8
#define CMU_FTDC_OUTHIGH_H 0x09A9
#define CMU_FTDC_YLOW 0x09AA
#define CMU_FTDC_YHIGH 0x09AB
#define CMU_FTDC_CH1 0x09AC
#define CMU_FTDC_CH2_L 0x09AE
#define CMU_FTDC_CH2_H 0x09AF
#define CMU_FTDC_CH3_L 0x09B0
#define CMU_FTDC_CH3_H 0x09B1
#define CMU_FTDC_1_C00_6 0x09B2
#define CMU_FTDC_1_C01_6 0x09B8
#define CMU_FTDC_1_C11_6 0x09BE
#define CMU_FTDC_1_C10_6 0x09C4
#define CMU_FTDC_1_OFF00_6 0x09CA
#define CMU_FTDC_1_OFF10_6 0x09D0
#define CMU_HS_M_EN 0x0A00
#define CMU_HS_M_AX1_L 0x0A02
#define CMU_HS_M_AX1_H 0x0A03
#define CMU_HS_M_AX2_L 0x0A04
#define CMU_HS_M_AX2_H 0x0A05
#define CMU_HS_M_AX3_L 0x0A06
#define CMU_HS_M_AX3_H 0x0A07
#define CMU_HS_M_AX4_L 0x0A08
#define CMU_HS_M_AX4_H 0x0A09
#define CMU_HS_M_AX5_L 0x0A0A
#define CMU_HS_M_AX5_H 0x0A0B
#define CMU_HS_M_AX6_L 0x0A0C
#define CMU_HS_M_AX6_H 0x0A0D
#define CMU_HS_M_AX7_L 0x0A0E
#define CMU_HS_M_AX7_H 0x0A0F
#define CMU_HS_M_AX8_L 0x0A10
#define CMU_HS_M_AX8_H 0x0A11
#define CMU_HS_M_AX9_L 0x0A12
#define CMU_HS_M_AX9_H 0x0A13
#define CMU_HS_M_AX10_L 0x0A14
#define CMU_HS_M_AX10_H 0x0A15
#define CMU_HS_M_AX11_L 0x0A16
#define CMU_HS_M_AX11_H 0x0A17
#define CMU_HS_M_AX12_L 0x0A18
#define CMU_HS_M_AX12_H 0x0A19
#define CMU_HS_M_AX13_L 0x0A1A
#define CMU_HS_M_AX13_H 0x0A1B
#define CMU_HS_M_AX14_L 0x0A1C
#define CMU_HS_M_AX14_H 0x0A1D
#define CMU_HS_M_H1_H14 0x0A1E
#define CMU_HS_M_S1_S14 0x0A2C
#define CMU_HS_M_GL 0x0A3A
#define CMU_HS_M_MAXSAT_RGB_Y_L 0x0A3C
#define CMU_HS_M_MAXSAT_RGB_Y_H 0x0A3D
#define CMU_HS_M_MAXSAT_RCR_L 0x0A3E
#define CMU_HS_M_MAXSAT_RCR_H 0x0A3F
#define CMU_HS_M_MAXSAT_RCB_L 0x0A40
#define CMU_HS_M_MAXSAT_RCB_H 0x0A41
#define CMU_HS_M_MAXSAT_GCR_L 0x0A42
#define CMU_HS_M_MAXSAT_GCR_H 0x0A43
#define CMU_HS_M_MAXSAT_GCB_L 0x0A44
#define CMU_HS_M_MAXSAT_GCB_H 0x0A45
#define CMU_HS_M_MAXSAT_BCR_L 0x0A46
#define CMU_HS_M_MAXSAT_BCR_H 0x0A47
#define CMU_HS_M_MAXSAT_BCB_L 0x0A48
#define CMU_HS_M_MAXSAT_BCB_H 0x0A49
#define CMU_HS_M_ROFF_L 0x0A4A
#define CMU_HS_M_ROFF_H 0x0A4B
#define CMU_HS_M_GOFF_L 0x0A4C
#define CMU_HS_M_GOFF_H 0x0A4D
#define CMU_HS_M_BOFF_L 0x0A4E
#define CMU_HS_M_BOFF_H 0x0A4F
#define CMU_HS_P_EN 0x0A50
#define CMU_HS_P_AX1_L 0x0A52
#define CMU_HS_P_AX1_H 0x0A53
#define CMU_HS_P_AX2_L 0x0A54
#define CMU_HS_P_AX2_H 0x0A55
#define CMU_HS_P_AX3_L 0x0A56
#define CMU_HS_P_AX3_H 0x0A57
#define CMU_HS_P_AX4_L 0x0A58
#define CMU_HS_P_AX4_H 0x0A59
#define CMU_HS_P_AX5_L 0x0A5A
#define CMU_HS_P_AX5_H 0x0A5B
#define CMU_HS_P_AX6_L 0x0A5C
#define CMU_HS_P_AX6_H 0x0A5D
#define CMU_HS_P_AX7_L 0x0A5E
#define CMU_HS_P_AX7_H 0x0A5F
#define CMU_HS_P_AX8_L 0x0A60
#define CMU_HS_P_AX8_H 0x0A61
#define CMU_HS_P_AX9_L 0x0A62
#define CMU_HS_P_AX9_H 0x0A63
#define CMU_HS_P_AX10_L 0x0A64
#define CMU_HS_P_AX10_H 0x0A65
#define CMU_HS_P_AX11_L 0x0A66
#define CMU_HS_P_AX11_H 0x0A67
#define CMU_HS_P_AX12_L 0x0A68
#define CMU_HS_P_AX12_H 0x0A69
#define CMU_HS_P_AX13_L 0x0A6A
#define CMU_HS_P_AX13_H 0x0A6B
#define CMU_HS_P_AX14_L 0x0A6C
#define CMU_HS_P_AX14_H 0x0A6D
#define CMU_HS_P_H1_H14 0x0A6E
#define CMU_HS_P_S1_S14 0x0A7C
#define CMU_HS_P_GL 0x0A8A
#define CMU_HS_P_MAXSAT_RGB_Y_L 0x0A8C
#define CMU_HS_P_MAXSAT_RGB_Y_H 0x0A8D
#define CMU_HS_P_MAXSAT_RCR_L 0x0A8E
#define CMU_HS_P_MAXSAT_RCR_H 0x0A8F
#define CMU_HS_P_MAXSAT_RCB_L 0x0A90
#define CMU_HS_P_MAXSAT_RCB_H 0x0A91
#define CMU_HS_P_MAXSAT_GCR_L 0x0A92
#define CMU_HS_P_MAXSAT_GCR_H 0x0A93
#define CMU_HS_P_MAXSAT_GCB_L 0x0A94
#define CMU_HS_P_MAXSAT_GCB_H 0x0A95
#define CMU_HS_P_MAXSAT_BCR_L 0x0A96
#define CMU_HS_P_MAXSAT_BCR_H 0x0A97
#define CMU_HS_P_MAXSAT_BCB_L 0x0A98
#define CMU_HS_P_MAXSAT_BCB_H 0x0A99
#define CMU_HS_P_ROFF_L 0x0A9A
#define CMU_HS_P_ROFF_H 0x0A9B
#define CMU_HS_P_GOFF_L 0x0A9C
#define CMU_HS_P_GOFF_H 0x0A9D
#define CMU_HS_P_BOFF_L 0x0A9E
#define CMU_HS_P_BOFF_H 0x0A9F
#define CMU_GLCSC_M_C0_L 0x0AA0
#define CMU_GLCSC_M_C0_H 0x0AA1
#define CMU_GLCSC_M_C1_L 0x0AA2
#define CMU_GLCSC_M_C1_H 0x0AA3
#define CMU_GLCSC_M_C2_L 0x0AA4
#define CMU_GLCSC_M_C2_H 0x0AA5
#define CMU_GLCSC_M_C3_L 0x0AA6
#define CMU_GLCSC_M_C3_H 0x0AA7
#define CMU_GLCSC_M_C4_L 0x0AA8
#define CMU_GLCSC_M_C4_H 0x0AA9
#define CMU_GLCSC_M_C5_L 0x0AAA
#define CMU_GLCSC_M_C5_H 0x0AAB
#define CMU_GLCSC_M_C6_L 0x0AAC
#define CMU_GLCSC_M_C6_H 0x0AAD
#define CMU_GLCSC_M_C7_L 0x0AAE
#define CMU_GLCSC_M_C7_H 0x0AAF
#define CMU_GLCSC_M_C8_L 0x0AB0
#define CMU_GLCSC_M_C8_H 0x0AB1
#define CMU_GLCSC_M_O1_1 0x0AB4
#define CMU_GLCSC_M_O1_2 0x0AB5
#define CMU_GLCSC_M_O1_3 0x0AB6
#define CMU_GLCSC_M_O2_1 0x0AB8
#define CMU_GLCSC_M_O2_2 0x0AB9
#define CMU_GLCSC_M_O2_3 0x0ABA
#define CMU_GLCSC_M_O3_1 0x0ABC
#define CMU_GLCSC_M_O3_2 0x0ABD
#define CMU_GLCSC_M_O3_3 0x0ABE
#define CMU_GLCSC_P_C0_L 0x0AC0
#define CMU_GLCSC_P_C0_H 0x0AC1
#define CMU_GLCSC_P_C1_L 0x0AC2
#define CMU_GLCSC_P_C1_H 0x0AC3
#define CMU_GLCSC_P_C2_L 0x0AC4
#define CMU_GLCSC_P_C2_H 0x0AC5
#define CMU_GLCSC_P_C3_L 0x0AC6
#define CMU_GLCSC_P_C3_H 0x0AC7
#define CMU_GLCSC_P_C4_L 0x0AC8
#define CMU_GLCSC_P_C4_H 0x0AC9
#define CMU_GLCSC_P_C5_L 0x0ACA
#define CMU_GLCSC_P_C5_H 0x0ACB
#define CMU_GLCSC_P_C6_L 0x0ACC
#define CMU_GLCSC_P_C6_H 0x0ACD
#define CMU_GLCSC_P_C7_L 0x0ACE
#define CMU_GLCSC_P_C7_H 0x0ACF
#define CMU_GLCSC_P_C8_L 0x0AD0
#define CMU_GLCSC_P_C8_H 0x0AD1
#define CMU_GLCSC_P_O1_1 0x0AD4
#define CMU_GLCSC_P_O1_2 0x0AD5
#define CMU_GLCSC_P_O1_3 0x0AD6
#define CMU_GLCSC_P_O2_1 0x0AD8
#define CMU_GLCSC_P_O2_2 0x0AD9
#define CMU_GLCSC_P_O2_3 0x0ADA
#define CMU_GLCSC_P_O3_1 0x0ADC
#define CMU_GLCSC_P_O3_2 0x0ADD
#define CMU_GLCSC_P_O3_3 0x0ADE
#define CMU_PIXVAL_M_EN 0x0AE0
#define CMU_PIXVAL_P_EN 0x0AE1
#define CMU_CLK_CTRL_TCLK 0x0
#define CMU_CLK_CTRL_SCLK 0x2
#define CMU_CLK_CTRL_MSK 0x2
#define CMU_CLK_CTRL_ENABLE 0x1
#define LCD_TOP_CTRL_TV 0x2
#define LCD_TOP_CTRL_PN 0x0
#define LCD_TOP_CTRL_SEL_MSK 0x2
#define LCD_IO_CMU_IN_SEL_MSK (0x3 << 20)
#define LCD_IO_CMU_IN_SEL_TV 0
#define LCD_IO_CMU_IN_SEL_PN 1
#define LCD_IO_CMU_IN_SEL_PN2 2
#define LCD_IO_TV_OUT_SEL_MSK (0x3 << 26)
#define LCD_IO_PN_OUT_SEL_MSK (0x3 << 24)
#define LCD_IO_PN2_OUT_SEL_MSK (0x3 << 28)
#define LCD_IO_TV_OUT_SEL_NON 3
#define LCD_IO_PN_OUT_SEL_NON 3
#define LCD_IO_PN2_OUT_SEL_NON 3
#define LCD_TOP_CTRL_CMU_ENABLE 0x1
#define LCD_IO_OVERL_MSK 0xC00000
#define LCD_IO_OVERL_TV 0x0
#define LCD_IO_OVERL_LCD1 0x400000
#define LCD_IO_OVERL_LCD2 0xC00000
#define HINVERT_MSK 0x4
#define VINVERT_MSK 0x8
#define HINVERT_LEN 0x2
#define VINVERT_LEN 0x3
#define CMU_CTRL 0x88
#define CMU_CTRL_A0_MSK 0x6
#define CMU_CTRL_A0_TV 0x0
#define CMU_CTRL_A0_LCD1 0x1
#define CMU_CTRL_A0_LCD2 0x2
#define CMU_CTRL_A0_HDMI 0x3
#define ICR_DRV_ROUTE_OFF 0x0
#define ICR_DRV_ROUTE_TV 0x1
#define ICR_DRV_ROUTE_LCD1 0x2
#define ICR_DRV_ROUTE_LCD2 0x3
enum {
PATH_PN = 0,
PATH_TV,

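The register map above (apparently a display-controller color management unit, with hue/saturation, color-space-conversion and pixel-value blocks) defines most registers as _L/_H pairs, which suggests each pair carries one byte of a 16-bit value. Below is a minimal C sketch of programming such a pair under that assumption; cmu_reg_write() is a made-up stand-in for the driver's real MMIO accessor, so treat the whole thing as illustrative rather than driver code.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical MMIO write; a real driver would use its own accessors. */
static void cmu_reg_write(uint16_t addr, uint8_t val)
{
	printf("reg[%#06x] = %#04x\n", addr, val);
}

/* Split a 16-bit value across an _L/_H register pair, low byte first. */
static void cmu_write16(uint16_t addr_l, uint16_t addr_h, uint16_t val)
{
	cmu_reg_write(addr_l, val & 0xff);		/* _L: bits 7:0 */
	cmu_reg_write(addr_h, (val >> 8) & 0xff);	/* _H: bits 15:8 */
}

int main(void)
{
	/* e.g. the CMU_HS_M_ROFF_L/CMU_HS_M_ROFF_H pair at 0x0A4A/0x0A4B */
	cmu_write16(0x0A4A, 0x0A4B, 0x0123);
	return 0;
}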

@@ -10,7 +10,7 @@ obj-y := open.o read_write.o file_table.o super.o \
ioctl.o readdir.o select.o fifo.o dcache.o inode.o \
attr.o bad_inode.o file.o filesystems.o namespace.o \
seq_file.o xattr.o libfs.o fs-writeback.o \
pnode.o drop_caches.o splice.o sync.o utimes.o \
pnode.o splice.o sync.o utimes.o \
stack.o fs_struct.o statfs.o
ifeq ($(CONFIG_BLOCK),y)
@@ -49,6 +49,7 @@ obj-$(CONFIG_FS_POSIX_ACL) += posix_acl.o xattr_acl.o
obj-$(CONFIG_NFS_COMMON) += nfs_common/
obj-$(CONFIG_GENERIC_ACL) += generic_acl.o
obj-$(CONFIG_COREDUMP) += coredump.o
obj-$(CONFIG_SYSCTL) += drop_caches.o
obj-$(CONFIG_FHANDLE) += fhandle.o


@@ -865,8 +865,6 @@ try_again:
/* Link the buffer to its page */
set_bh_page(bh, page, offset);
init_buffer(bh, NULL, NULL);
}
return head;
/*
@@ -2949,7 +2947,7 @@ static void guard_bh_eod(int rw, struct bio *bio, struct buffer_head *bh)
}
}
int submit_bh(int rw, struct buffer_head * bh)
int _submit_bh(int rw, struct buffer_head *bh, unsigned long bio_flags)
{
struct bio *bio;
int ret = 0;
@@ -2984,6 +2982,7 @@ int submit_bh(int rw, struct buffer_head * bh)
bio->bi_end_io = end_bio_bh_io_sync;
bio->bi_private = bh;
bio->bi_flags |= bio_flags;
/* Take care of bh's that straddle the end of the device */
guard_bh_eod(rw, bio, bh);
@@ -2997,6 +2996,12 @@ int submit_bh(int rw, struct buffer_head * bh)
bio_put(bio);
return ret;
}
EXPORT_SYMBOL_GPL(_submit_bh);
int submit_bh(int rw, struct buffer_head *bh)
{
return _submit_bh(rw, bh, 0);
}
EXPORT_SYMBOL(submit_bh);
/**


@@ -672,12 +672,6 @@ static inline int dio_send_cur_page(struct dio *dio, struct dio_submit *sdio,
if (sdio->final_block_in_bio != sdio->cur_page_block ||
cur_offset != bio_next_offset)
dio_bio_submit(dio, sdio);
/*
* Submit now if the underlying fs is about to perform a
* metadata read
*/
else if (sdio->boundary)
dio_bio_submit(dio, sdio);
}
if (sdio->bio == NULL) {
@@ -737,16 +731,6 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
sdio->cur_page_block +
(sdio->cur_page_len >> sdio->blkbits) == blocknr) {
sdio->cur_page_len += len;
/*
* If sdio->boundary then we want to schedule the IO now to
* avoid metadata seeks.
*/
if (sdio->boundary) {
ret = dio_send_cur_page(dio, sdio, map_bh);
page_cache_release(sdio->cur_page);
sdio->cur_page = NULL;
}
goto out;
}
@@ -758,7 +742,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
page_cache_release(sdio->cur_page);
sdio->cur_page = NULL;
if (ret)
goto out;
return ret;
}
page_cache_get(page); /* It is in dio */
@@ -768,6 +752,16 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
sdio->cur_page_block = blocknr;
sdio->cur_page_fs_offset = sdio->block_in_file << sdio->blkbits;
out:
/*
* If sdio->boundary then we want to schedule the IO now to
* avoid metadata seeks.
*/
if (sdio->boundary) {
ret = dio_send_cur_page(dio, sdio, map_bh);
dio_bio_submit(dio, sdio);
page_cache_release(sdio->cur_page);
sdio->cur_page = NULL;
}
return ret;
}
@@ -969,7 +963,8 @@ do_holes:
this_chunk_bytes = this_chunk_blocks << blkbits;
BUG_ON(this_chunk_bytes == 0);
sdio->boundary = buffer_boundary(map_bh);
if (this_chunk_blocks == sdio->blocks_available)
sdio->boundary = buffer_boundary(map_bh);
ret = submit_page_section(dio, sdio, page,
offset_in_page,
this_chunk_bytes,

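The direct-io hunks above consolidate the "boundary" handling: instead of submitting in two separate branches (one in dio_send_cur_page(), one in the page-merge path of submit_page_section()), every path now falls through to the out: label, where a boundary chunk sends the pending cur_page and submits the bio in one place, and sdio->boundary is only set once the chunk consumes all the blocks the mapping made available. A toy user-space model of that "flush at a single exit point" shape follows; it is not the kernel code, and all names are made up.

#include <stdbool.h>
#include <stdio.h>

struct acc {
	const char *cur_page;	/* pending page, if any */
	bool boundary;		/* current chunk ended on a boundary block */
};

static void flush(struct acc *a)
{
	if (a->cur_page)
		printf("send %s, submit bio\n", a->cur_page);
	a->cur_page = NULL;
}

static void add_chunk(struct acc *a, const char *page, bool boundary)
{
	if (a->cur_page)	/* this toy never merges: send what is pending */
		flush(a);
	a->cur_page = page;
	a->boundary = boundary;

	/* single exit point: a boundary flushes immediately so the
	 * upcoming metadata read does not seek around pending data */
	if (a->boundary)
		flush(a);
}

int main(void)
{
	struct acc a = { 0 };

	add_chunk(&a, "data0", false);
	add_chunk(&a, "data1", true);	/* boundary: flushed right here */
	flush(&a);			/* final flush for anything pending */
	return 0;
}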

@@ -613,7 +613,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
* when the old and new regions overlap clear from new_end.
*/
free_pgd_range(&tlb, new_end, old_end, new_end,
vma->vm_next ? vma->vm_next->vm_start : 0);
vma->vm_next ? vma->vm_next->vm_start : USER_PGTABLES_CEILING);
} else {
/*
* otherwise, clean from old_start; this is done to not touch
@@ -622,7 +622,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
* for the others its just a little faster.
*/
free_pgd_range(&tlb, old_start, old_end, new_end,
vma->vm_next ? vma->vm_next->vm_start : 0);
vma->vm_next ? vma->vm_next->vm_start : USER_PGTABLES_CEILING);
}
tlb_finish_mmu(&tlb, new_end, old_end);


@@ -2067,7 +2067,6 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_JOURNAL_DATA ? "journal":
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_ORDERED_DATA ? "ordered":
"writeback");
sb->s_flags |= MS_SNAP_STABLE;
return 0;


@@ -287,5 +287,5 @@ const struct file_operations fscache_stats_fops = {
.open = fscache_stats_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release,
.release = single_release,
};

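The one-line fscache_stats_fops change above fixes a mismatched seq_file pairing: an open implemented with single_open() must be undone with single_release(), not seq_release(), otherwise the iterator state that single_open() allocates leaks on every close. A minimal sketch of the correct pairing, with illustrative demo_* names rather than the fscache code:

#include <linux/fs.h>
#include <linux/seq_file.h>

static int demo_show(struct seq_file *m, void *v)
{
	seq_puts(m, "hello\n");
	return 0;
}

static int demo_open(struct inode *inode, struct file *file)
{
	return single_open(file, demo_show, NULL);
}

static const struct file_operations demo_fops = {
	.open    = demo_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,	/* matches single_open() */
};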

@@ -162,8 +162,17 @@ static void journal_do_submit_data(struct buffer_head **wbuf, int bufs,
for (i = 0; i < bufs; i++) {
wbuf[i]->b_end_io = end_buffer_write_sync;
/* We use-up our safety reference in submit_bh() */
submit_bh(write_op, wbuf[i]);
/*
* Here we write back pagecache data that may be mmaped. Since
* we cannot afford to clean the page and set PageWriteback
* here due to lock ordering (page lock ranks above transaction
* start), the data can change while IO is in flight. Tell the
* block layer it should bounce the bio pages if stable data
* during write is required.
*
* We use up our safety reference in submit_bh().
*/
_submit_bh(write_op, wbuf[i], 1 << BIO_SNAP_STABLE);
}
}
@@ -667,7 +676,17 @@ start_journal_io:
clear_buffer_dirty(bh);
set_buffer_uptodate(bh);
bh->b_end_io = journal_end_buffer_io_sync;
submit_bh(write_op, bh);
/*
* In data=journal mode, here we can end up
* writing pagecache data that might be
* mmapped. Since we can't afford to clean the
* page and set PageWriteback (see the comment
* near the other use of _submit_bh()), the
* data can change while the write is in
* flight. Tell the block layer to bounce the
* bio pages if stable pages are required.
*/
_submit_bh(write_op, bh, 1 << BIO_SNAP_STABLE);
}
cond_resched();

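The journal hunks above switch the commit path over to the _submit_bh() variant added in the buffer-layer change earlier in this diff, passing BIO_SNAP_STABLE so the block layer bounces pages whose contents may still change while the write is in flight. A short sketch of the call pattern; write_possibly_mmaped() is an illustrative wrapper, not kernel code:

#include <linux/bio.h>
#include <linux/buffer_head.h>

/* When the data under bh may still be scribbled on (mmaped pagecache we
 * could not lock and mark PageWriteback), ask the block layer to bounce
 * the pages so the device sees a stable copy during the write. */
static void write_possibly_mmaped(int write_op, struct buffer_head *bh)
{
	bh->b_end_io = end_buffer_write_sync;
	_submit_bh(write_op, bh, 1 << BIO_SNAP_STABLE);
}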

@@ -310,8 +310,6 @@ int journal_write_metadata_buffer(transaction_t *transaction,
new_bh = alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);
/* keep subsequent assertions sane */
new_bh->b_state = 0;
init_buffer(new_bh, NULL, NULL);
atomic_set(&new_bh->b_count, 1);
new_jh = journal_add_journal_head(new_bh); /* This sleeps */


@@ -367,8 +367,6 @@ retry_alloc:
}
/* keep subsequent assertions sane */
new_bh->b_state = 0;
init_buffer(new_bh, NULL, NULL);
atomic_set(&new_bh->b_count, 1);
new_jh = jbd2_journal_add_journal_head(new_bh); /* This sleeps */


@@ -1498,10 +1498,8 @@ leave:
dlm_put(dlm);
if (ret < 0) {
if (buf)
kfree(buf);
if (item)
kfree(item);
kfree(buf);
kfree(item);
mlog_errno(ret);
}

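The dlm recovery cleanup above leans on the fact that kfree(NULL) is a no-op, so guarding each call with an if is redundant. The same simplification applies anywhere the pattern appears:

#include <linux/slab.h>

/* Before:  if (buf) kfree(buf);   After: */
static void free_pair(void *buf, void *item)
{
	kfree(buf);	/* kfree(NULL) is defined to do nothing */
	kfree(item);
}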

@@ -101,13 +101,6 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
if (!S_ISDIR(inode->i_mode))
flags &= ~OCFS2_DIRSYNC_FL;
handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS);
if (IS_ERR(handle)) {
status = PTR_ERR(handle);
mlog_errno(status);
goto bail_unlock;
}
oldflags = ocfs2_inode->ip_attr;
flags = flags & mask;
flags |= oldflags & ~mask;
@@ -120,7 +113,14 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
if ((oldflags & OCFS2_IMMUTABLE_FL) || ((flags ^ oldflags) &
(OCFS2_APPEND_FL | OCFS2_IMMUTABLE_FL))) {
if (!capable(CAP_LINUX_IMMUTABLE))
goto bail_commit;
goto bail_unlock;
}
handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS);
if (IS_ERR(handle)) {
status = PTR_ERR(handle);
mlog_errno(status);
goto bail_unlock;
}
ocfs2_inode->ip_attr = flags;
@@ -130,8 +130,8 @@ static int ocfs2_set_inode_attr(struct inode *inode, unsigned flags,
if (status < 0)
mlog_errno(status);
bail_commit:
ocfs2_commit_trans(osb, handle);
bail_unlock:
ocfs2_inode_unlock(inode, 1);
bail:
@@ -706,8 +706,10 @@ int ocfs2_info_handle_freefrag(struct inode *inode,
o2info_set_request_filled(&oiff->iff_req);
if (o2info_to_user(*oiff, req))
if (o2info_to_user(*oiff, req)) {
status = -EFAULT;
goto bail;
}
status = 0;
bail:

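The ocfs2_set_inode_attr() rework above moves the CAP_LINUX_IMMUTABLE check ahead of ocfs2_start_trans(), so a failed permission check unwinds through bail_unlock instead of committing a transaction it never needed. A compilable toy of that acquire-late, unwind-in-reverse shape; the stubs are hypothetical stand-ins for the ocfs2 calls:

#include <stdio.h>

static int lock_inode(void)          { return 0; }
static void unlock_inode(void)       { }
static int check_permissions(void)   { return 0; }
static int start_transaction(void)   { return 0; }
static void commit_transaction(void) { }
static int do_update(void)           { return 0; }

static int update_attr(void)
{
	int status = lock_inode();

	if (status)
		goto bail;

	status = check_permissions();	/* cheap check first ... */
	if (status)
		goto bail_unlock;	/* ... nothing to commit yet */

	status = start_transaction();	/* take the handle last */
	if (status)
		goto bail_unlock;

	status = do_update();
	commit_transaction();
bail_unlock:
	unlock_inode();
bail:
	return status;
}

int main(void)
{
	return update_attr();
}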

@@ -471,7 +471,7 @@ static int ocfs2_validate_and_adjust_move_goal(struct inode *inode,
int ret, goal_bit = 0;
struct buffer_head *gd_bh = NULL;
struct ocfs2_group_desc *bg = NULL;
struct ocfs2_group_desc *bg;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
int c_to_b = 1 << (osb->s_clustersize_bits -
inode->i_sb->s_blocksize_bits);
@@ -481,13 +481,6 @@ static int ocfs2_validate_and_adjust_move_goal(struct inode *inode,
*/
range->me_goal = ocfs2_block_to_cluster_start(inode->i_sb,
range->me_goal);
/*
* moving goal is not allowd to start with a group desc blok(#0 blk)
* let's compromise to the latter cluster.
*/
if (range->me_goal == le64_to_cpu(bg->bg_blkno))
range->me_goal += c_to_b;
/*
* validate goal sits within global_bitmap, and return the victim
* group desc
@@ -501,6 +494,13 @@ static int ocfs2_validate_and_adjust_move_goal(struct inode *inode,
bg = (struct ocfs2_group_desc *)gd_bh->b_data;
/*
* moving goal is not allowd to start with a group desc blok(#0 blk)
* let's compromise to the latter cluster.
*/
if (range->me_goal == le64_to_cpu(bg->bg_blkno))
range->me_goal += c_to_b;
/*
* movement is not gonna cross two groups.
*/
@@ -1057,42 +1057,40 @@ int ocfs2_ioctl_move_extents(struct file *filp, void __user *argp)
struct inode *inode = file_inode(filp);
struct ocfs2_move_extents range;
struct ocfs2_move_extents_context *context = NULL;
struct ocfs2_move_extents_context *context;
if (!argp)
return -EINVAL;
status = mnt_want_write_file(filp);
if (status)
return status;
if ((!S_ISREG(inode->i_mode)) || !(filp->f_mode & FMODE_WRITE))
goto out;
goto out_drop;
if (inode->i_flags & (S_IMMUTABLE|S_APPEND)) {
status = -EPERM;
goto out;
goto out_drop;
}
context = kzalloc(sizeof(struct ocfs2_move_extents_context), GFP_NOFS);
if (!context) {
status = -ENOMEM;
mlog_errno(status);
goto out;
goto out_drop;
}
context->inode = inode;
context->file = filp;
if (argp) {
if (copy_from_user(&range, argp, sizeof(range))) {
status = -EFAULT;
goto out;
}
} else {
status = -EINVAL;
goto out;
if (copy_from_user(&range, argp, sizeof(range))) {
status = -EFAULT;
goto out_free;
}
if (range.me_start > i_size_read(inode))
goto out;
goto out_free;
if (range.me_start + range.me_len > i_size_read(inode))
range.me_len = i_size_read(inode) - range.me_start;
@@ -1124,25 +1122,24 @@ int ocfs2_ioctl_move_extents(struct file *filp, void __user *argp)
status = ocfs2_validate_and_adjust_move_goal(inode, &range);
if (status)
goto out;
goto out_copy;
}
status = ocfs2_move_extents(context);
if (status)
mlog_errno(status);
out:
out_copy:
/*
* movement/defragmentation may end up being partially completed,
* that's the reason why we need to return userspace the finished
* length and new_offset even if failure happens somewhere.
*/
if (argp) {
if (copy_to_user(argp, &range, sizeof(range)))
status = -EFAULT;
}
if (copy_to_user(argp, &range, sizeof(range)))
status = -EFAULT;
out_free:
kfree(context);
out_drop:
mnt_drop_write_file(filp);
return status;

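The ocfs2_ioctl_move_extents() rework above replaces one overloaded out: label with a proper exit ladder: mnt_want_write_file() is taken before anything else and released exactly once at out_drop, the context allocation unwinds through out_free, and the (possibly partial) result is still copied back to userspace. A minimal sketch of that shape using the same VFS helpers; demo_move_ioctl() and its details are illustrative, not the ocfs2 code:

#include <linux/fs.h>
#include <linux/mount.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

static long demo_move_ioctl(struct file *filp, void __user *argp)
{
	int *context;
	int range = 0;
	long status;

	status = mnt_want_write_file(filp);
	if (status)
		return status;

	context = kzalloc(sizeof(*context), GFP_NOFS);
	if (!context) {
		status = -ENOMEM;
		goto out_drop;
	}

	if (copy_from_user(&range, argp, sizeof(range))) {
		status = -EFAULT;
		goto out_free;
	}

	/* ... the move itself; it may partially complete ... */

	/* report progress back even on failure */
	if (copy_to_user(argp, &range, sizeof(range)))
		status = -EFAULT;
out_free:
	kfree(context);
out_drop:
	mnt_drop_write_file(filp);
	return status;
}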

@@ -5,7 +5,7 @@
obj-y += proc.o
proc-y := nommu.o task_nommu.o
proc-$(CONFIG_MMU) := mmu.o task_mmu.o
proc-$(CONFIG_MMU) := task_mmu.o
proc-y += inode.o root.o base.o generic.o array.o \
fd.o


@@ -30,24 +30,6 @@ extern int proc_net_init(void);
static inline int proc_net_init(void) { return 0; }
#endif
struct vmalloc_info {
unsigned long used;
unsigned long largest_chunk;
};
#ifdef CONFIG_MMU
#define VMALLOC_TOTAL (VMALLOC_END - VMALLOC_START)
extern void get_vmalloc_info(struct vmalloc_info *vmi);
#else
#define VMALLOC_TOTAL 0UL
#define get_vmalloc_info(vmi) \
do { \
(vmi)->used = 0; \
(vmi)->largest_chunk = 0; \
} while(0)
#endif
extern int proc_tid_stat(struct seq_file *m, struct pid_namespace *ns,
struct pid *pid, struct task_struct *task);
extern int proc_tgid_stat(struct seq_file *m, struct pid_namespace *ns,


@@ -15,6 +15,7 @@
#include <linux/capability.h>
#include <linux/elf.h>
#include <linux/elfcore.h>
#include <linux/notifier.h>
#include <linux/vmalloc.h>
#include <linux/highmem.h>
#include <linux/printk.h>
@@ -564,7 +565,6 @@ static const struct file_operations proc_kcore_operations = {
.llseek = default_llseek,
};
#ifdef CONFIG_MEMORY_HOTPLUG
/* just remember that we have to update kcore */
static int __meminit kcore_callback(struct notifier_block *self,
unsigned long action, void *arg)
@@ -578,8 +578,11 @@ static int __meminit kcore_callback(struct notifier_block *self,
}
return NOTIFY_OK;
}
#endif
static struct notifier_block kcore_callback_nb __meminitdata = {
.notifier_call = kcore_callback,
.priority = 0,
};
static struct kcore_list kcore_vmalloc;
@@ -631,7 +634,7 @@ static int __init proc_kcore_init(void)
add_modules_range();
/* Store direct-map area from physical memory map */
kcore_update_ram();
hotplug_memory_notifier(kcore_callback, 0);
register_hotmemory_notifier(&kcore_callback_nb);
return 0;
}

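The kcore hunks above convert from the old hotplug_memory_notifier(fn, prio) call to a static notifier_block registered with register_hotmemory_notifier(); since the new helper is built to compile away cleanly when CONFIG_MEMORY_HOTPLUG is off, the #ifdef around the callback can be dropped as well. A sketch of the new registration style, with illustrative demo_* names:

#include <linux/memory.h>
#include <linux/notifier.h>

static int __meminit demo_callback(struct notifier_block *self,
				   unsigned long action, void *arg)
{
	switch (action) {
	case MEM_ONLINE:
	case MEM_OFFLINE:
		/* refresh whatever mirrors the memory map */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block demo_nb __meminitdata = {
	.notifier_call = demo_callback,
	.priority = 0,
};

static int __init demo_init(void)
{
	register_hotmemory_notifier(&demo_nb);
	return 0;
}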

@@ -11,6 +11,7 @@
#include <linux/swap.h>
#include <linux/vmstat.h>
#include <linux/atomic.h>
#include <linux/vmalloc.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include "internal.h"


@@ -1,60 +0,0 @@
/* mmu.c: mmu memory info files
*
* Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
#include <linux/highmem.h>
#include <asm/pgtable.h>
#include "internal.h"
void get_vmalloc_info(struct vmalloc_info *vmi)
{
struct vm_struct *vma;
unsigned long free_area_size;
unsigned long prev_end;
vmi->used = 0;
if (!vmlist) {
vmi->largest_chunk = VMALLOC_TOTAL;
}
else {
vmi->largest_chunk = 0;
prev_end = VMALLOC_START;
read_lock(&vmlist_lock);
for (vma = vmlist; vma; vma = vma->next) {
unsigned long addr = (unsigned long) vma->addr;
/*
* Some archs keep another range for modules in vmlist
*/
if (addr < VMALLOC_START)
continue;
if (addr >= VMALLOC_END)
break;
vmi->used += vma->size;
free_area_size = addr - prev_end;
if (vmi->largest_chunk < free_area_size)
vmi->largest_chunk = free_area_size;
prev_end = vma->size + addr;
}
if (VMALLOC_END - prev_end > vmi->largest_chunk)
vmi->largest_chunk = VMALLOC_END - prev_end;
read_unlock(&vmlist_lock);
}
}

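The deleted mmu.c above computed the vmalloc statistics by walking the address-ordered vmlist: summing the used sizes and tracking the largest gap between one allocation's end and the next one's start, plus the tail up to VMALLOC_END. With this series the helper moves out of procfs into the mm code proper, which is why the struct vmalloc_info declarations are dropped and <linux/vmalloc.h> is included instead. A toy user-space model of that largest-free-gap walk, with made-up numbers:

#include <stdio.h>

struct alloc { unsigned long addr, size; };

int main(void)
{
	const unsigned long start = 0x1000, end = 0x10000;
	const struct alloc list[] = {
		{ 0x2000, 0x800 }, { 0x4000, 0x1000 }, { 0x9000, 0x400 },
	};
	unsigned long used = 0, largest = 0, prev_end = start;
	size_t i;

	for (i = 0; i < sizeof(list) / sizeof(list[0]); i++) {
		unsigned long gap = list[i].addr - prev_end;

		used += list[i].size;
		if (gap > largest)
			largest = gap;
		prev_end = list[i].addr + list[i].size;
	}
	if (end - prev_end > largest)
		largest = end - prev_end;	/* tail up to the end boundary */

	printf("used=%#lx largest_chunk=%#lx\n", used, largest);
	return 0;
}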

@@ -128,7 +128,7 @@ EXPORT_SYMBOL(generic_file_llseek_size);
*
* This is a generic implemenation of ->llseek useable for all normal local
* filesystems. It just updates the file offset to the value specified by
* @offset and @whence under i_mutex.
* @offset and @whence.
*/
loff_t generic_file_llseek(struct file *file, loff_t offset, int whence)
{


@@ -0,0 +1,40 @@
#ifndef _ASM_GENERIC_HUGETLB_H
#define _ASM_GENERIC_HUGETLB_H
static inline pte_t mk_huge_pte(struct page *page, pgprot_t pgprot)
{
return mk_pte(page, pgprot);
}
static inline int huge_pte_write(pte_t pte)
{
return pte_write(pte);
}
static inline int huge_pte_dirty(pte_t pte)
{
return pte_dirty(pte);
}
static inline pte_t huge_pte_mkwrite(pte_t pte)
{
return pte_mkwrite(pte);
}
static inline pte_t huge_pte_mkdirty(pte_t pte)
{
return pte_mkdirty(pte);
}
static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
{
return pte_modify(pte, newprot);
}
static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)
{
pte_clear(mm, addr, ptep);
}
#endif /* _ASM_GENERIC_HUGETLB_H */
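The new asm-generic/hugetlb.h above lets an architecture whose huge PTEs are ordinary PTEs pick up all of the trivial huge_pte_* wrappers at once instead of re-defining each of them. A sketch of how an arch header would opt in; the _ASM_FOO_HUGETLB_H file below is hypothetical, and architectures with special huge-PTE formats would keep their own definitions instead:

#ifndef _ASM_FOO_HUGETLB_H
#define _ASM_FOO_HUGETLB_H

#include <asm/page.h>
#include <asm-generic/hugetlb.h>	/* mk_huge_pte(), huge_pte_write(), ... */

/* only genuinely arch-specific hugetlb hooks remain here */

#endif /* _ASM_FOO_HUGETLB_H */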

Some files were not shown because too many files have changed in this diff.