Merge with git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git

This commit is contained in:
Adrian Bunk 2006-02-03 23:49:49 +01:00 committed by Adrian Bunk
commit 01d206a7c1
998 changed files with 31084 additions and 19947 deletions

View File

@ -90,16 +90,20 @@ at OLS. The resulting abundance of RCU patches was presented the
following year [McKenney02a], and use of RCU in dcache was first
described that same year [Linder02a].
Also in 2002, Michael [Michael02b,Michael02a] presented techniques
that defer the destruction of data structures to simplify non-blocking
synchronization (wait-free synchronization, lock-free synchronization,
and obstruction-free synchronization are all examples of non-blocking
synchronization). In particular, this technique eliminates locking,
reduces contention, reduces memory latency for readers, and parallelizes
pipeline stalls and memory latency for writers. However, these
techniques still impose significant read-side overhead in the form of
memory barriers. Researchers at Sun worked along similar lines in the
same timeframe [HerlihyLM02,HerlihyLMS03].
Also in 2002, Michael [Michael02b,Michael02a] presented "hazard-pointer"
techniques that defer the destruction of data structures to simplify
non-blocking synchronization (wait-free synchronization, lock-free
synchronization, and obstruction-free synchronization are all examples of
non-blocking synchronization). In particular, this technique eliminates
locking, reduces contention, reduces memory latency for readers, and
parallelizes pipeline stalls and memory latency for writers. However,
these techniques still impose significant read-side overhead in the
form of memory barriers. Researchers at Sun worked along similar lines
in the same timeframe [HerlihyLM02,HerlihyLMS03]. These techniques
can be thought of as inside-out reference counts, where the count is
represented by the number of hazard pointers referencing a given data
structure (rather than the more conventional counter field within the
data structure itself).
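As a rough illustration of this "inside-out" analogy (a sketch only, not
any kernel API; the names acquire_hazard and hp_slot are made up), a
hazard-pointer reader publishes its reference before using it, and the
memory barrier in the retry loop is exactly the read-side overhead
mentioned above:

	/* Hypothetical hazard-pointer acquire loop. */
	void *acquire_hazard(void **pptr, void **hp_slot)
	{
		void *p;

		do {
			p = *pptr;	/* read the shared pointer */
			*hp_slot = p;	/* publish it as "hazardous" */
			smp_mb();	/* order the publish before the re-check */
		} while (p != *pptr);	/* retry if it changed underneath us */
		return p;		/* reclaimers must now avoid freeing p */
	}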
In 2003, the K42 group described how RCU could be used to create
hot-pluggable implementations of operating-system functions. Later that
@ -113,7 +117,6 @@ number of operating-system kernels [PaulEdwardMcKenneyPhD], a paper
describing how to make RCU safe for soft-realtime applications [Sarma04c],
and a paper describing SELinux performance with RCU [JamesMorris04b].
2005 has seen further adaptation of RCU to realtime use, permitting
preemption of RCU realtime critical sections [PaulMcKenney05a,
PaulMcKenney05b].

View File

@ -177,3 +177,9 @@ over a rather long period of time, but improvements are always welcome!
If you want to wait for some of these other things, you might
instead need to use synchronize_irq() or synchronize_sched().
12. Any lock acquired by an RCU callback must be acquired elsewhere
with irq disabled, e.g., via spin_lock_irqsave(). Failing to
disable irq on a given acquisition of that lock will result in
deadlock as soon as the RCU callback happens to interrupt that
acquisition's critical section.
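For illustration only (the names below are hypothetical, not taken from
the checklist): RCU callbacks run from softirq context, so a lock they
acquire must be taken with irqs disabled everywhere else, as in this sketch:

	static spinlock_t mylock = SPIN_LOCK_UNLOCKED;
	static LIST_HEAD(my_cleanup_list);

	static void my_rcu_callback(struct rcu_head *head)
	{
		struct foo *p = container_of(head, struct foo, rcu);

		spin_lock(&mylock);		/* runs from softirq context */
		list_del(&p->cleanup_list);
		spin_unlock(&mylock);
		kfree(p);
	}

	void my_update(struct foo *p)
	{
		unsigned long flags;

		/* A plain spin_lock() here could be interrupted by the
		 * softirq running my_rcu_callback(), which would then spin
		 * forever on mylock -- hence spin_lock_irqsave(). */
		spin_lock_irqsave(&mylock, flags);
		list_add(&p->cleanup_list, &my_cleanup_list);
		spin_unlock_irqrestore(&mylock, flags);
	}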

View File

@ -232,7 +232,7 @@ entry does not exist. For this to be helpful, the search function must
return holding the per-entry spinlock, as ipc_lock() does in fact do.
Quick Quiz: Why does the search function need to return holding the
per-entry lock for this deleted-flag technique to be helpful?
per-entry lock for this deleted-flag technique to be helpful?
If the system-call audit module were to ever need to reject stale data,
one way to accomplish this would be to add a "deleted" flag and a "lock"
@ -275,8 +275,8 @@ flag under the spinlock as follows:
{
struct audit_entry *e;
/* Do not use the _rcu iterator here, since this is the only
* deletion routine. */
/* Do not need to use the _rcu iterator here, since this
* is the only deletion routine. */
list_for_each_entry(e, list, list) {
if (!audit_compare_rule(rule, &e->rule)) {
spin_lock(&e->lock);
@ -304,9 +304,12 @@ function to reject newly deleted data.
Answer to Quick Quiz
Why does the search function need to return holding the per-entry
lock for this deleted-flag technique to be helpful?
If the search function drops the per-entry lock before returning, then
the caller will be processing stale data in any case. If it is really
OK to be processing stale data, then you don't need a "deleted" flag.
If processing stale data really is a problem, then you need to hold the
per-entry lock across all of the code that uses the value looked up.
If the search function drops the per-entry lock before returning,
then the caller will be processing stale data in any case. If it
is really OK to be processing stale data, then you don't need a
"deleted" flag. If processing stale data really is a problem,
then you need to hold the per-entry lock across all of the code
that uses the value that was returned.
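For illustration, a search function following this pattern might look as
below; this is a sketch only, reusing the audit_entry fields from the
surrounding example and assuming the same "deleted" flag:

	static struct audit_entry *audit_find_rule(struct audit_rule *rule,
						   struct list_head *list)
	{
		struct audit_entry *e;

		rcu_read_lock();
		list_for_each_entry_rcu(e, list, list) {
			if (!audit_compare_rule(rule, &e->rule)) {
				spin_lock(&e->lock);
				if (e->deleted) {	/* stale entry, skip it */
					spin_unlock(&e->lock);
					continue;
				}
				/* Holding e->lock blocks the deleter, so the
				 * entry stays valid after we leave the RCU
				 * read-side critical section. */
				rcu_read_unlock();
				return e;	/* returned with e->lock held */
			}
		}
		rcu_read_unlock();
		return NULL;
	}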

View File

@ -111,6 +111,11 @@ o What are all these files in this directory?
You are reading it!
rcuref.txt
Describes how to combine use of reference counts
with RCU.
whatisRCU.txt
Overview of how the RCU implementation works. Along

View File

@ -1,7 +1,7 @@
Refcounter design for elements of lists/arrays protected by RCU.
Reference-count design for elements of lists/arrays protected by RCU.
Refcounting on elements of lists which are protected by traditional
reader/writer spinlocks or semaphores are straight forward as in:
Reference counting on elements of lists which are protected by traditional
reader/writer spinlocks or semaphores are straightforward:
1. 2.
add() search_and_reference()
@ -28,12 +28,12 @@ release_referenced() delete()
...
}
If this list/array is made lock free using rcu as in changing the
write_lock in add() and delete() to spin_lock and changing read_lock
If this list/array is made lock free using RCU as in changing the
write_lock() in add() and delete() to spin_lock and changing read_lock
in search_and_reference to rcu_read_lock(), the atomic_get in
search_and_reference could potentially hold reference to an element which
has already been deleted from the list/array. atomic_inc_not_zero takes
care of this scenario. search_and_reference should look as;
has already been deleted from the list/array. Use atomic_inc_not_zero()
in this scenario as follows:
1. 2.
add() search_and_reference()
@ -51,17 +51,16 @@ add() search_and_reference()
release_referenced() delete()
{ {
... write_lock(&list_lock);
atomic_dec(&el->rc, relfunc) ...
... delete_element
} write_unlock(&list_lock);
...
if (atomic_dec_and_test(&el->rc)) ...
call_rcu(&el->head, el_free); delete_element
... write_unlock(&list_lock);
} ...
if (atomic_dec_and_test(&el->rc))
call_rcu(&el->head, el_free);
...
}
Sometimes, reference to the element need to be obtained in the
update (write) stream. In such cases, atomic_inc_not_zero might be an
overkill since the spinlock serialising list updates are held. atomic_inc
is to be used in such cases.
Sometimes, a reference to the element needs to be obtained in the
update (write) stream. In such cases, atomic_inc_not_zero() might be
overkill, since we hold the update-side spinlock. One might instead
use atomic_inc() in such cases.
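For example (a sketch; search_element() and the key argument are made-up
placeholders), a lookup performed while already holding the update-side
lock can use atomic_inc() directly:

	write_lock(&list_lock);		/* serializes against delete() */
	el = search_element(head, key);	/* hypothetical non-RCU search */
	if (el)
		atomic_inc(&el->rc);	/* cannot race with a final put here */
	write_unlock(&list_lock);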

View File

@ -200,10 +200,11 @@ rcu_assign_pointer()
the new value, and also executes any memory-barrier instructions
required for a given CPU architecture.
Perhaps more important, it serves to document which pointers
are protected by RCU. That said, rcu_assign_pointer() is most
frequently used indirectly, via the _rcu list-manipulation
primitives such as list_add_rcu().
Perhaps just as important, it serves to document (1) which
pointers are protected by RCU and (2) the point at which a
given structure becomes accessible to other CPUs. That said,
rcu_assign_pointer() is most frequently used indirectly, via
the _rcu list-manipulation primitives such as list_add_rcu().
rcu_dereference()
@ -258,9 +259,11 @@ rcu_dereference()
locking.
As with rcu_assign_pointer(), an important function of
rcu_dereference() is to document which pointers are protected
by RCU. And, again like rcu_assign_pointer(), rcu_dereference()
is typically used indirectly, via the _rcu list-manipulation
rcu_dereference() is to document which pointers are protected by
RCU, in particular, flagging a pointer that is subject to changing
at any time, including immediately after the rcu_dereference().
And, again like rcu_assign_pointer(), rcu_dereference() is
typically used indirectly, via the _rcu list-manipulation
primitives, such as list_for_each_entry_rcu().
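To make the pairing concrete, here is a minimal sketch of publishing and
reading a global RCU-protected pointer (gp, struct foo, and
do_something_with() are placeholders; a fuller version of this example
appears in Section 3 below):

	struct foo {
		int a;
	};
	static struct foo *gp;	/* RCU-protected global pointer */

	/* Updater: initialize the structure, then publish it. */
	p = kmalloc(sizeof(*p), GFP_KERNEL);
	p->a = 1;
	rcu_assign_pointer(gp, p);	/* now visible to other CPUs */

	/* Reader: dereference only inside a read-side critical section;
	 * the pointer may change at any time after rcu_dereference(). */
	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p != NULL)
		do_something_with(p->a);
	rcu_read_unlock();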
The following diagram shows how each API communicates among the
@ -327,7 +330,7 @@ for specialized uses, but are relatively uncommon.
3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure. More typical
global pointer to a dynamically allocated structure. More-typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.
struct foo {
@ -410,6 +413,8 @@ o Use synchronize_rcu() -after- removing a data element from an
data item.
See checklist.txt for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.txt,
arrayRCU.txt, and NMI-RCU.txt.
4. WHAT IF MY UPDATING THREAD CANNOT BLOCK?
@ -513,7 +518,7 @@ production-quality implementation, and see:
for papers describing the Linux kernel RCU implementation. The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation.
more details on the current implementation as of early 2004.
5A. "TOY" IMPLEMENTATION #1: LOCKING
@ -768,7 +773,6 @@ RCU pointer/list traversal:
rcu_dereference
list_for_each_rcu (to be deprecated in favor of
list_for_each_entry_rcu)
list_for_each_safe_rcu (deprecated, not used)
list_for_each_entry_rcu
list_for_each_continue_rcu (to be deprecated in favor of new
list_for_each_entry_continue_rcu)
@ -807,7 +811,8 @@ Quick Quiz #1: Why is this argument naive? How could a deadlock
Answer: Consider the following sequence of events:
1. CPU 0 acquires some unrelated lock, call it
"problematic_lock".
"problematic_lock", disabling irq via
spin_lock_irqsave().
2. CPU 1 enters synchronize_rcu(), write-acquiring
rcu_gp_mutex.
@ -894,7 +899,7 @@ Answer: Just as PREEMPT_RT permits preemption of spinlock
ACKNOWLEDGEMENTS
My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, and Suzanne Wood.
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.
For more information, see http://www.rdrop.com/users/paulmck/RCU.

View File

@ -0,0 +1,41 @@
Export CPU topology info via sysfs. Items (attributes) are similar
to /proc/cpuinfo.
1) /sys/devices/system/cpu/cpuX/topology/physical_package_id:
represents the physical package id of cpu X;
2) /sys/devices/system/cpu/cpuX/topology/core_id:
represents the cpu core id of cpu X;
3) /sys/devices/system/cpu/cpuX/topology/thread_siblings:
represents the thread siblings of cpu X within the same core;
4) /sys/devices/system/cpu/cpuX/topology/core_siblings:
represents the thread siblings of cpu X within the same physical package;
To implement it in an architecture-neutral way, a new source file,
drivers/base/topology.c, is used to export the 4 attributes.
If one architecture wants to support this feature, it just needs to
implement 4 defines, typically in file include/asm-XXX/topology.h.
The 4 defines are:
#define topology_physical_package_id(cpu)
#define topology_core_id(cpu)
#define topology_thread_siblings(cpu)
#define topology_core_siblings(cpu)
The type of **_id is int.
The type of siblings is cpumask_t.
To be consistent on all architectures, the 4 attributes should have
default values if their values are unavailable. Below is the rule.
1) physical_package_id: If cpu has no physical package id, -1 is the
default value.
2) core_id: If cpu doesn't support multi-core, its core id is 0.
3) thread_siblings: Just include itself, if the cpu doesn't support
HT/multi-thread.
4) core_siblings: Just include itself, if the cpu doesn't support
multi-core and HT/Multi-thread.
So be careful when declaring the 4 defines in include/asm-XXX/topology.h.
If an attribute isn't defined on an architecture, it won't be exported.
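As a sketch, an architecture might map the four defines onto its existing
per-cpu data in include/asm-XXX/topology.h; the cpu_data, cpu_sibling_map,
and cpu_core_map names below are illustrative assumptions, not requirements:

	/* include/asm-XXX/topology.h -- hypothetical example */
	#define topology_physical_package_id(cpu)	(cpu_data[cpu].phys_proc_id)
	#define topology_core_id(cpu)			(cpu_data[cpu].cpu_core_id)
	#define topology_thread_siblings(cpu)		(cpu_sibling_map[cpu])
	#define topology_core_siblings(cpu)		(cpu_core_map[cpu])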

View File

@ -1,50 +1,43 @@
The Linux Kernel Device Model
Patrick Mochel <mochel@osdl.org>
Patrick Mochel <mochel@digitalimplant.org>
26 August 2002
Drafted 26 August 2002
Updated 31 January 2006
Overview
~~~~~~~~
This driver model is a unification of all the current, disparate driver models
that are currently in the kernel. It is intended to augment the
The Linux Kernel Driver Model is a unification of all the disparate driver
models that were previously used in the kernel. It is intended to augment the
bus-specific drivers for bridges and devices by consolidating a set of data
and operations into globally accessible data structures.
Current driver models implement some sort of tree-like structure (sometimes
just a list) for the devices they control. But, there is no linkage between
the different bus types.
Traditional driver models implemented some sort of tree-like structure
(sometimes just a list) for the devices they control. There wasn't any
uniformity across the different bus types.
A common data structure can provide this linkage with little overhead: when a
bus driver discovers a particular device, it can insert it into the global
tree as well as its local tree. In fact, the local tree becomes just a subset
of the global tree.
Common data fields can also be moved out of the local bus models into the
global model. Some of the manipulations of these fields can also be
consolidated. Most likely, manipulation functions will become a set
of helper functions, which the bus drivers wrap around to include any
bus-specific items.
The common device and bridge interface currently reflects the goals of the
modern PC: namely the ability to do seamless Plug and Play, power management,
and hot plug. (The model dictated by Intel and Microsoft (read: ACPI) ensures
us that any device in the system may fit any of these criteria.)
In reality, not every bus will be able to support such operations. But, most
buses will support a majority of those operations, and all future buses will.
In other words, a bus that doesn't support an operation is the exception,
instead of the other way around.
The current driver model provides a common, uniform data model for describing
a bus and the devices that can appear under the bus. The unified bus
model includes a set of common attributes which all busses carry, and a set
of common callbacks, such as device discovery during bus probing, bus
shutdown, bus power management, etc.
The common device and bridge interface reflects the goals of the modern
computer: namely the ability to do seamless device "plug and play", power
management, and hot plug. In particular, the model dictated by Intel and
Microsoft (namely ACPI) ensures that almost every device on almost any bus
on an x86-compatible system can work within this paradigm. Of course,
not every bus is able to support all such operations, although most
buses support most of those operations.
Downstream Access
~~~~~~~~~~~~~~~~~
Common data fields have been moved out of individual bus layers into a common
data structure. But, these fields must still be accessed by the bus layers,
data structure. These fields must still be accessed by the bus layers,
and sometimes by the device-specific drivers.
Other bus layers are encouraged to do what has been done for the PCI layer.
@ -53,7 +46,7 @@ struct pci_dev now looks like this:
struct pci_dev {
...
struct device device;
struct device dev;
};
Note first that it is statically allocated. This means only one allocation on
@ -64,9 +57,9 @@ the two.
The PCI bus layer freely accesses the fields of struct device. It knows about
the structure of struct pci_dev, and it should know the structure of struct
device. PCI devices that have been converted generally do not touch the fields
of struct device. More precisely, device-specific drivers should not touch
fields of struct device unless there is a strong compelling reason to do so.
device. Individual PCI device drivers that have been converted to the current
driver model generally do not and should not touch the fields of struct device,
unless there is a strong compelling reason to do so.
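For instance, because struct device is embedded, the PCI layer can convert
between the two representations with container_of(); the kernel's
to_pci_dev() helper does exactly this (the surrounding probe function is a
hypothetical sketch):

	#define to_pci_dev(d) container_of(d, struct pci_dev, dev)

	static int my_bus_probe(struct device *dev)
	{
		struct pci_dev *pdev = to_pci_dev(dev);

		/* ... PCI-specific probing using pdev ... */
		return 0;
	}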
This abstraction prevents unnecessary pain during transitional phases.
If the name of the field changes or is removed, then every downstream driver

View File

@ -148,3 +148,17 @@ Why: The 8250 serial driver now has the ability to deal with the differences
brother on Alchemy SOCs. The loss of features is not considered an
issue.
Who: Ralf Baechle <ralf@linux-mips.org>
---------------------------
What: Legacy /proc/pci interface (PCI_LEGACY_PROC)
When: March 2006
Why: deprecated since 2.5.53 in favor of lspci(8)
Who: Adrian Bunk <bunk@stusta.de>
---------------------------
What: pci_module_init(driver)
When: January 2007
Why: Is replaced by pci_register_driver(pci_driver).
Who: Richard Knutsson <ricknu-0@student.ltu.se> and Greg Kroah-Hartman <gregkh@suse.de>

View File

@ -45,10 +45,10 @@ How to extract the documentation
If you just want to read the ready-made books on the various
subsystems (see Documentation/DocBook/*.tmpl), just type 'make
psdocs', or 'make pdfdocs', or 'make htmldocs', depending on your
preference. If you would rather read a different format, you can type
'make sgmldocs' and then use DocBook tools to convert
Documentation/DocBook/*.sgml to a format of your choice (for example,
psdocs', or 'make pdfdocs', or 'make htmldocs', depending on your
preference. If you would rather read a different format, you can type
'make sgmldocs' and then use DocBook tools to convert
Documentation/DocBook/*.sgml to a format of your choice (for example,
'db2html ...' if 'make htmldocs' was not defined).
If you want to see man pages instead, you can do this:
@ -124,6 +124,36 @@ patterns, which are highlighted appropriately.
Take a look around the source tree for examples.
kernel-doc for structs, unions, enums, and typedefs
---------------------------------------------------
Besides functions, you can also write documentation for structs, unions,
enums and typedefs. Instead of the function name you must write the name
of the declaration; the struct/union/enum/typedef must always precede
the name. Nesting of declarations is not supported.
Use the argument mechanism to document members or constants.
Inside a struct description, you can use the "private:" and "public:"
comment tags. Structure fields that are inside a "private:" area
are not listed in the generated output documentation.
Example:
/**
* struct my_struct - short description
* @a: first member
* @b: second member
*
* Longer description
*/
struct my_struct {
int a;
int b;
/* private: */
int c;
};
How to make new SGML template files
-----------------------------------
@ -147,4 +177,3 @@ documentation, in <filename>, for the functions listed.
Tim.
*/ <twaugh@redhat.com>

View File

@ -452,6 +452,11 @@ running once the system is up.
eata= [HW,SCSI]
ec_intr= [HW,ACPI] ACPI Embedded Controller interrupt mode
Format: <int>
0: polling mode
non-0: interrupt mode (default)
eda= [HW,PS2]
edb= [HW,PS2]

View File

@ -427,6 +427,23 @@ icmp_ignore_bogus_error_responses - BOOLEAN
will avoid log file clutter.
Default: FALSE
icmp_errors_use_inbound_ifaddr - BOOLEAN
If zero, icmp error messages are sent with the primary address of
the exiting interface.
If non-zero, the message will be sent with the primary address of
the interface that received the packet that caused the icmp error.
This is the behaviour many network administrators will expect from
a router. And it can make debugging complicated network layouts
much easier.
Note that if no primary address exists for the interface selected,
then the primary address of the first non-loopback interface that
has one will be used regardless of this setting.
Default: 0
igmp_max_memberships - INTEGER
Change the maximum number of multicast groups we can subscribe to.
Default: 20

View File

@ -1068,7 +1068,7 @@ SYNOPSIS
struct parport_operations {
...
void (*write_status) (struct parport *port, unsigned char s);
void (*write_control) (struct parport *port, unsigned char s);
...
};
@ -1097,9 +1097,9 @@ SYNOPSIS
struct parport_operations {
...
void (*frob_control) (struct parport *port,
unsigned char mask,
unsigned char val);
unsigned char (*frob_control) (struct parport *port,
unsigned char mask,
unsigned char val);
...
};
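The corrected return type matches what a frob_control implementation
typically does: a read-modify-write of the control register that hands
back the value actually written. A sketch, with my_read_control() and
my_write_control() as hypothetical low-level helpers:

	static unsigned char my_frob_control(struct parport *port,
					     unsigned char mask,
					     unsigned char val)
	{
		unsigned char ctr = my_read_control(port);

		ctr = (ctr & ~mask) ^ val;	/* change only the masked bits */
		my_write_control(port, ctr);
		return ctr;
	}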

View File

@ -1,246 +1,396 @@
PCI Error Recovery
------------------
May 31, 2005
February 2, 2006
Current document maintainer:
Linas Vepstas <linas@austin.ibm.com>
Current document maintainer:
Linas Vepstas <linas@austin.ibm.com>
Some PCI bus controllers are able to detect certain "hard" PCI errors
on the bus, such as parity errors on the data and address busses, as
well as SERR and PERR errors. These chipsets are then able to disable
I/O to/from the affected device, so that, for example, a bad DMA
address doesn't end up corrupting system memory. These same chipsets
are also able to reset the affected PCI device, and return it to
working condition. This document describes a generic API form
performing error recovery.
Many PCI bus controllers are able to detect a variety of hardware
PCI errors on the bus, such as parity errors on the data and address
busses, as well as SERR and PERR errors. Some of the more advanced
chipsets are able to deal with these errors; these include PCI-E chipsets,
and the PCI-host bridges found on IBM Power4 and Power5-based pSeries
boxes. A typical action taken is to disconnect the affected device,
halting all I/O to it. The goal of a disconnection is to avoid system
corruption; for example, to halt system memory corruption due to DMA's
to "wild" addresses. Typically, a reconnection mechanism is also
offered, so that the affected PCI device(s) are reset and put back
into working condition. The reset phase requires coordination
between the affected device drivers and the PCI controller chip.
This document describes a generic API for notifying device drivers
of a bus disconnection, and then performing error recovery.
This API is currently implemented in the 2.6.16 and later kernels.
The core idea is that after a PCI error has been detected, there must
be a way for the kernel to coordinate with all affected device drivers
so that the pci card can be made operational again, possibly after
performing a full electrical #RST of the PCI card. The API below
provides a generic API for device drivers to be notified of PCI
errors, and to be notified of, and respond to, a reset sequence.
Reporting and recovery is performed in several steps. First, when
a PCI hardware error has resulted in a bus disconnect, that event
is reported as soon as possible to all affected device drivers,
including multiple instances of a device driver on multi-function
cards. This allows device drivers to avoid deadlocking in spinloops,
waiting for some i/o-space register to change, when it never will.
It also gives the drivers a chance to defer incoming I/O as
needed.
Preliminary sketch of API, cut-n-pasted-n-modified email from
Ben Herrenschmidt, circa 5 april 2005
Next, recovery is performed in several stages. Most of the complexity
is forced by the need to handle multi-function devices, that is,
devices that have multiple device drivers associated with them.
In the first stage, each driver is allowed to indicate what type
of reset it desires, the choices being a simple re-enabling of I/O
or requesting a hard reset (a full electrical #RST of the PCI card).
If any driver requests a full reset, that is what will be done.
After a full reset and/or a re-enabling of I/O, all drivers are
again notified, so that they may then perform any device setup/config
that may be required. After these have all completed, a final
"resume normal operations" event is sent out.
The biggest reason for choosing a kernel-based implementation rather
than a user-space implementation was the need to deal with bus
disconnects of PCI devices attached to storage media, and, in particular,
disconnects from devices holding the root file system. If the root
file system is disconnected, a user-space mechanism would have to go
through a large number of contortions to complete recovery. Almost all
of the current Linux file systems are not tolerant of disconnection
from/reconnection to their underlying block device. By contrast,
bus errors are easy to manage in the device driver. Indeed, most
device drivers already handle very similar recovery procedures;
for example, the SCSI-generic layer already provides significant
mechanisms for dealing with SCSI bus errors and SCSI bus resets.
Detailed Design
---------------
Design and implementation details below, based on a chain of
public email discussions with Ben Herrenschmidt, circa 5 April 2005.
The error recovery API support is exposed to the driver in the form of
a structure of function pointers pointed to by a new field in struct
pci_driver. The absence of this pointer in pci_driver denotes an
"non-aware" driver, behaviour on these is platform dependant.
Platforms like ppc64 can try to simulate pci hotplug remove/add.
The definition of "pci_error_token" is not covered here. It is based on
Seto's work on the synchronous error detection. We still need to define
functions for extracting infos out of an opaque error token. This is
separate from this API.
pci_driver. A driver that fails to provide the structure is "non-aware",
and the actual recovery steps taken are platform dependent. The
arch/powerpc implementation will simulate a PCI hotplug remove/add.
This structure has the form:
struct pci_error_handlers
{
int (*error_detected)(struct pci_dev *dev, pci_error_token error);
int (*error_detected)(struct pci_dev *dev, enum pci_channel_state);
int (*mmio_enabled)(struct pci_dev *dev);
int (*resume)(struct pci_dev *dev);
int (*link_reset)(struct pci_dev *dev);
int (*slot_reset)(struct pci_dev *dev);
void (*resume)(struct pci_dev *dev);
};
A driver doesn't have to implement all of these callbacks. The
only mandatory one is error_detected(). If a callback is not
implemented, the corresponding feature is considered unsupported.
For example, if mmio_enabled() and resume() aren't there, then the
driver is assumed as not doing any direct recovery and requires
The possible channel states are:
enum pci_channel_state {
pci_channel_io_normal, /* I/O channel is in normal state */
pci_channel_io_frozen, /* I/O to channel is blocked */
pci_channel_io_perm_failure, /* PCI card is dead */
};
Possible return values are:
enum pci_ers_result {
PCI_ERS_RESULT_NONE, /* no result/none/not supported in device driver */
PCI_ERS_RESULT_CAN_RECOVER, /* Device driver can recover without slot reset */
PCI_ERS_RESULT_NEED_RESET, /* Device driver wants slot to be reset. */
PCI_ERS_RESULT_DISCONNECT, /* Device has completely failed, is unrecoverable */
PCI_ERS_RESULT_RECOVERED, /* Device driver is fully recovered and operational */
};
A driver does not have to implement all of these callbacks; however,
if it implements any, it must implement error_detected(). If a callback
is not implemented, the corresponding feature is considered unsupported.
For example, if mmio_enabled() and resume() aren't there, then it
is assumed that the driver is not doing any direct recovery and requires
a reset. If link_reset() is not implemented, the card is assumed as
not caring about link resets, in which case, if recover is supported,
the core can try recover (but not slot_reset() unless it really did
reset the slot). If slot_reset() is not supported, link_reset() can
be called instead on a slot reset.
not care about link resets. Typically a driver will want to know about
a slot_reset().
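A hedged sketch of how a driver might wire these callbacks up; the my_*
names are placeholders, while the err_handler field of struct pci_driver
is the real hook point:

	static struct pci_error_handlers my_err_handler = {
		.error_detected	= my_io_error_detected,	/* mandatory */
		.mmio_enabled	= my_io_mmio_enabled,
		.slot_reset	= my_io_slot_reset,
		.resume		= my_io_resume,
	};

	static struct pci_driver my_driver = {
		.name		= "my_driver",
		.id_table	= my_pci_tbl,
		.probe		= my_probe,
		.remove		= my_remove,
		.err_handler	= &my_err_handler,
	};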
At first, the call will always be :
The actual steps taken by a platform to recover from a PCI error
event will be platform-dependent, but will follow the general
sequence described below.
1) error_detected()
STEP 0: Error Event
-------------------
A PCI bus error is detected by the PCI hardware. On powerpc, the slot
is isolated, in that all I/O is blocked: all reads return 0xffffffff,
all writes are ignored.
Error detected. This is sent once after an error has been detected. At
this point, the device might not be accessible anymore depending on the
platform (the slot will be isolated on ppc64). The driver may already
have "noticed" the error because of a failing IO, but this is the proper
"synchronisation point", that is, it gives a chance to the driver to
cleanup, waiting for pending stuff (timers, whatever, etc...) to
complete; it can take semaphores, schedule, etc... everything but touch
the device. Within this function and after it returns, the driver
STEP 1: Notification
--------------------
Platform calls the error_detected() callback on every instance of
every driver affected by the error.
At this point, the device might not be accessible anymore, depending on
the platform (the slot will be isolated on powerpc). The driver may
already have "noticed" the error because of a failing I/O, but this
is the proper "synchronization point", that is, it gives the driver
a chance to cleanup, waiting for pending stuff (timers, whatever, etc...)
to complete; it can take semaphores, schedule, etc... everything but
touch the device. Within this function and after it returns, the driver
shouldn't do any new IOs. Called in task context. This is sort of a
"quiesce" point. See note about interrupts at the end of this doc.
Result codes:
- PCIERR_RESULT_CAN_RECOVER:
Driever returns this if it thinks it might be able to recover
All drivers participating in this system must implement this call.
The driver must return one of the following result codes:
- PCI_ERS_RESULT_CAN_RECOVER:
Driver returns this if it thinks it might be able to recover
the HW by just banging IOs or if it wants to be given
a chance to extract some diagnostic informations (see
below).
- PCIERR_RESULT_NEED_RESET:
Driver returns this if it thinks it can't recover unless the
slot is reset.
- PCIERR_RESULT_DISCONNECT:
Return this if driver thinks it won't recover at all,
(this will detach the driver ? or just leave it
dangling ? to be decided)
a chance to extract some diagnostic information (see
mmio_enable, below).
- PCI_ERS_RESULT_NEED_RESET:
Driver returns this if it can't recover without a hard
slot reset.
- PCI_ERS_RESULT_DISCONNECT:
Driver returns this if it doesn't want to recover at all.
So at this point, we have called error_detected() for all drivers
on the segment that had the error. On ppc64, the slot is isolated. What
happens now typically depends on the result from the drivers. If all
drivers on the segment/slot return PCIERR_RESULT_CAN_RECOVER, we would
re-enable IOs on the slot (or do nothing special if the platform doesn't
isolate slots) and call 2). If not and we can reset slots, we go to 4),
if neither, we have a dead slot. If it's an hotplug slot, we might
"simulate" reset by triggering HW unplug/replug though.
The next step taken will depend on the result codes returned by the
drivers.
>>> Current ppc64 implementation assumes that a device driver will
>>> *not* schedule or semaphore in this routine; the current ppc64
If all drivers on the segment/slot return PCI_ERS_RESULT_CAN_RECOVER,
then the platform should re-enable IOs on the slot (or do nothing in
particular, if the platform doesn't isolate slots), and recovery
proceeds to STEP 2 (MMIO Enable).
If any driver requested a slot reset (by returning PCI_ERS_RESULT_NEED_RESET),
then recovery proceeds to STEP 4 (Slot Reset).
If the platform is unable to recover the slot, the next step
is STEP 6 (Permanent Failure).
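As an illustration only (not taken from any in-tree driver; the my_adapter
and netdev details are assumptions), an error_detected() implementation
following the rules above might look like:

	static int my_io_error_detected(struct pci_dev *pdev,
					enum pci_channel_state state)
	{
		struct my_adapter *adapter = pci_get_drvdata(pdev);

		/* Quiesce: stop issuing new I/O, but do not touch the device. */
		netif_device_detach(adapter->netdev);

		if (state == pci_channel_io_perm_failure)
			return PCI_ERS_RESULT_DISCONNECT;	/* see STEP 6 */

		/* Otherwise ask the platform for a slot reset (STEP 4). */
		return PCI_ERS_RESULT_NEED_RESET;
	}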
>>> The current powerpc implementation assumes that a device driver will
>>> *not* schedule or semaphore in this routine; the current powerpc
>>> implementation uses one kernel thread to notify all devices;
>>> thus, of one device sleeps/schedules, all devices are affected.
>>> thus, if one device sleeps/schedules, all devices are affected.
>>> Doing better requires complex multi-threaded logic in the error
>>> recovery implementation (e.g. waiting for all notification threads
>>> to "join" before proceeding with recovery.) This seems excessively
>>> complex and not worth implementing.
>>> The current ppc64 implementation doesn't much care if the device
>>> attempts i/o at this point, or not. I/O's will fail, returning
>>> The current powerpc implementation doesn't much care if the device
>>> attempts I/O at this point, or not. I/O's will fail, returning
>>> a value of 0xff on read, and writes will be dropped. If the device
>>> driver attempts more than 10K I/O's to a frozen adapter, it will
>>> assume that the device driver has gone into an infinite loop, and
>>> it will panic the the kernel.
>>> it will panic the kernel. There doesn't seem to be any other
>>> way of stopping a device driver that insists on spinning on I/O.
2) mmio_enabled()
STEP 2: MMIO Enabled
-------------------
The platform re-enables MMIO to the device (but typically not the
DMA), and then calls the mmio_enabled() callback on all affected
device drivers.
This is the "early recovery" call. IOs are allowed again, but DMA is
This is the "early recovery" call. IOs are allowed again, but DMA is
not (hrm... to be discussed, I prefer not), with some restrictions. This
is NOT a callback for the driver to start operations again, only to
peek/poke at the device, extract diagnostic information, if any, and
eventually do things like trigger a device local reset or some such,
but not restart operations. This is sent if all drivers on a segment
agree that they can try to recover and no automatic link reset was
performed by the HW. If the platform can't just re-enable IOs without
a slot reset or a link reset, it doesn't call this callback and goes
directly to 3) or 4). All IOs should be done _synchronously_ from
within this callback, errors triggered by them will be returned via
the normal pci_check_whatever() api, no new error_detected() callback
will be issued due to an error happening here. However, such an error
might cause IOs to be re-blocked for the whole segment, and thus
invalidate the recovery that other devices on the same segment might
have done, forcing the whole segment into one of the next states,
that is link reset or slot reset.
but not restart operations. This callback is made if all drivers on
a segment agree that they can try to recover and if no automatic link reset
was performed by the HW. If the platform can't just re-enable IOs without
a slot reset or a link reset, it won't call this callback, and instead
will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset).
Result codes:
- PCIERR_RESULT_RECOVERED
>>> The following is proposed; no platform implements this yet:
>>> Proposal: All I/O's should be done _synchronously_ from within
>>> this callback, errors triggered by them will be returned via
>>> the normal pci_check_whatever() API, no new error_detected()
>>> callback will be issued due to an error happening here. However,
>>> such an error might cause IOs to be re-blocked for the whole
>>> segment, and thus invalidate the recovery that other devices
>>> on the same segment might have done, forcing the whole segment
>>> into one of the next states, that is, link reset or slot reset.
The driver should return one of the following result codes:
- PCI_ERS_RESULT_RECOVERED
Driver returns this if it thinks the device is fully
functionnal and thinks it is ready to start
functional and thinks it is ready to start
normal driver operations again. There is no
guarantee that the driver will actually be
allowed to proceed, as another driver on the
same segment might have failed and thus triggered a
slot reset on platforms that support it.
- PCIERR_RESULT_NEED_RESET
- PCI_ERS_RESULT_NEED_RESET
Driver returns this if it thinks the device is not
recoverable in it's current state and it needs a slot
reset to proceed.
- PCIERR_RESULT_DISCONNECT
- PCI_ERS_RESULT_DISCONNECT
Same as above. Total failure, no recovery even after
reset driver dead. (To be defined more precisely)
>>> The current ppc64 implementation does not implement this callback.
The next step taken depends on the results returned by the drivers.
If all drivers returned PCI_ERS_RESULT_RECOVERED, then the platform
proceeds to either STEP 3 (Link Reset) or to STEP 5 (Resume Operations).
3) link_reset()
If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
proceeds to STEP 4 (Slot Reset).
This is called after the link has been reset. This is typically
a PCI Express specific state at this point and is done whenever a
non-fatal error has been detected that can be "solved" by resetting
the link. This call informs the driver of the reset and the driver
should check if the device appears to be in working condition.
This function acts a bit like 2) mmio_enabled(), in that the driver
is not supposed to restart normal driver I/O operations right away.
Instead, it should just "probe" the device to check it's recoverability
status. If all is right, then the core will call resume() once all
drivers have ack'd link_reset().
>>> The current powerpc implementation does not implement this callback.
STEP 3: Link Reset
------------------
The platform resets the link, and then calls the link_reset() callback
on all affected device drivers. This is a PCI-Express specific state
and is done whenever a non-fatal error has been detected that can be
"solved" by resetting the link. This call informs the driver of the
reset and the driver should check to see if the device appears to be
in working condition.
The driver is not supposed to restart normal driver I/O operations
at this point. It should limit itself to "probing" the device to
check its recoverability status. If all is right, then the platform
will call resume() once all drivers have ack'd link_reset().
Result codes:
(identical to mmio_enabled)
(identical to STEP 2 (MMIO Enabled))
>>> The current ppc64 implementation does not implement this callback.
The platform then proceeds to either STEP 4 (Slot Reset) or STEP 5
(Resume Operations).
4) slot_reset()
>>> The current powerpc implementation does not implement this callback.
This is called after the slot has been soft or hard reset by the
platform. A soft reset consists of asserting the adapter #RST line
and then restoring the PCI BARs and PCI configuration header. If the
platform supports PCI hotplug, then it might instead perform a hard
reset by toggling power on the slot off/on. This call gives drivers
the chance to re-initialize the hardware (re-download firmware, etc.),
but drivers shouldn't restart normal I/O processing operations at
this point. (See note about interrupts; interrupts aren't guaranteed
to be delivered until the resume() callback has been called). If all
device drivers report success on this callback, the patform will call
resume() to complete the error handling and let the driver restart
normal I/O processing.
STEP 4: Slot Reset
------------------
The platform performs a soft or hard reset of the device, and then
calls the slot_reset() callback.
A soft reset consists of asserting the adapter #RST line and then
restoring the PCI BARs and PCI configuration header to a state
that is equivalent to what it would be after a fresh system
power-on followed by power-on BIOS/system firmware initialization.
If the platform supports PCI hotplug, then the reset might be
performed by toggling the slot electrical power off/on.
It is important for the platform to restore the PCI config space
to the "fresh poweron" state, rather than the "last state". After
a slot reset, the device driver will almost always use its standard
device initialization routines, and an unusual config space setup
may result in hung devices, kernel panics, or silent data corruption.
This call gives drivers the chance to re-initialize the hardware
(re-download firmware, etc.). At this point, the driver may assume
that the card is in a fresh state and is fully functional. In
particular, interrupt generation should work normally.
Drivers should not yet restart normal I/O processing operations
at this point. If all device drivers report success on this
callback, the platform will call resume() to complete the sequence,
and let the driver restart normal I/O processing.
A driver can still return a critical failure for this function if
it can't get the device operational after reset. If the platform
previously tried a soft reset, it migh now try a hard reset (power
previously tried a soft reset, it might now try a hard reset (power
cycle) and then call slot_reset() again. If the device still can't
be recovered, there is nothing more that can be done; the platform
will typically report a "permanent failure" in such a case. The
device will be considered "dead" in this case.
Drivers for multi-function cards will need to coordinate among
themselves as to which driver instance will perform any "one-shot"
or global device initialization. For example, the Symbios sym53cxx2
driver performs device init only from PCI function 0:
+ if (PCI_FUNC(pdev->devfn) == 0)
+ sym_reset_scsi_bus(np, 0);
Result codes:
- PCIERR_RESULT_DISCONNECT
- PCI_ERS_RESULT_DISCONNECT
Same as above.
>>> The current ppc64 implementation does not try a power-cycle reset
>>> if the driver returned PCIERR_RESULT_DISCONNECT. However, it should.
Platform proceeds either to STEP 5 (Resume Operations) or STEP 6 (Permanent
Failure).
5) resume()
>>> The current powerpc implementation does not currently try a
>>> power-cycle reset if the driver returned PCI_ERS_RESULT_DISCONNECT.
>>> However, it probably should.
This is called if all drivers on the segment have returned
PCIERR_RESULT_RECOVERED from one of the 3 prevous callbacks.
That basically tells the driver to restart activity, tht everything
is back and running. No result code is taken into account here. If
a new error happens, it will restart a new error handling process.
That's it. I think this covers all the possibilities. The way those
callbacks are called is platform policy. A platform with no slot reset
capability for example may want to just "ignore" drivers that can't
STEP 5: Resume Operations
-------------------------
The platform will call the resume() callback on all affected device
drivers if all drivers on the segment have returned
PCI_ERS_RESULT_RECOVERED from one of the 3 previous callbacks.
The goal of this callback is to tell the driver to restart activity,
that everything is back and running. This callback does not return
a result code.
At this point, if a new error happens, the platform will restart
a new error recovery sequence.
STEP 6: Permanent Failure
-------------------------
A "permanent failure" has occurred, and the platform cannot recover
the device. The platform will call error_detected() with a
pci_channel_state value of pci_channel_io_perm_failure.
The device driver should, at this point, assume the worst. It should
cancel all pending I/O, refuse all new I/O, returning -EIO to
higher layers. The device driver should then clean up all of its
memory and remove itself from kernel operations, much as it would
during system shutdown.
The platform will typically notify the system operator of the
permanent failure in some way. If the device is hotplug-capable,
the operator will probably want to remove and replace the device.
Note, however, not all failures are truly "permanent". Some are
caused by over-heating, some by a poorly seated card. Many
PCI error events are caused by software bugs, e.g. DMA's to
wild addresses or bogus split transactions due to programming
errors. See the discussion in powerpc/eeh-pci-error-recovery.txt
for additional detail on real-life experience of the causes of
software errors.
Conclusion; General Remarks
---------------------------
The way those callbacks are called is platform policy. A platform with
no slot reset capability may want to just "ignore" drivers that can't
recover (disconnect them) and try to let other cards on the same segment
recover. Keep in mind that in most real life cases, though, there will
be only one driver per segment.
Now, there is a note about interrupts. If you get an interrupt and your
Now, a note about interrupts. If you get an interrupt and your
device is dead or has been isolated, there is a problem :)
After much thinking, I decided to leave that to the platform. That is,
the recovery API only precies that:
The current policy is to turn this into a platform policy.
That is, the recovery API only requires that:
- There is no guarantee that interrupt delivery can proceed from any
device on the segment starting from the error detection and until the
restart callback is sent, at which point interrupts are expected to be
resume callback is sent, at which point interrupts are expected to be
fully operational.
- There is no guarantee that interrupt delivery is stopped, that is, ad
river that gets an interrupts after detecting an error, or that detects
and error within the interrupt handler such that it prevents proper
- There is no guarantee that interrupt delivery is stopped, that is,
a driver that gets an interrupt after detecting an error, or that detects
an error within the interrupt handler such that it prevents proper
ack'ing of the interrupt (and thus removal of the source) should just
return IRQ_NOTHANDLED. It's up to the platform to deal with taht
condition, typically by masking the irq source during the duration of
return IRQ_NOTHANDLED. It's up to the platform to deal with that
condition, typically by masking the IRQ source during the duration of
the error handling. It is expected that the platform "knows" which
interrupts are routed to error-management capable slots and can deal
with temporarily disabling that irq number during error processing (this
with temporarily disabling that IRQ number during error processing (this
isn't terribly complex). That means some IRQ latency for other devices
sharing the interrupt, but there is simply no other way. High end
platforms aren't supposed to share interrupts between many devices
anyway :)
>>> Implementation details for the powerpc platform are discussed in
>>> the file Documentation/powerpc/eeh-pci-error-recovery.txt
Revised: 31 May 2005 Linas Vepstas <linas@austin.ibm.com>
>>> As of this writing, there are six device drivers with patches
>>> implementing error recovery. Not all of these patches are in
>>> mainline yet. These may be used as "examples":
>>>
>>> drivers/scsi/ipr.c
>>> drivers/scsi/sym53cxx_2
>>> drivers/net/e100.c
>>> drivers/net/e1000
>>> drivers/net/ixgb
>>> drivers/net/s2io.c
The End
-------

View File

@ -44,7 +44,7 @@ it.
/sys/power/image_size controls the size of the image created by
the suspend-to-disk mechanism. It can be written a string
representing a non-negative integer that will be used as an upper
limit of the image size, in megabytes. The suspend-to-disk mechanism will
limit of the image size, in bytes. The suspend-to-disk mechanism will
do its best to ensure the image size will not exceed that number. However,
if this turns out to be impossible, it will try to suspend anyway using the
smallest image possible. In particular, if "0" is written to this file, the

View File

@ -27,7 +27,7 @@ echo shutdown > /sys/power/disk; echo disk > /sys/power/state
echo platform > /sys/power/disk; echo disk > /sys/power/state
If you want to limit the suspend image size to N megabytes, do
If you want to limit the suspend image size to N bytes, do
echo N > /sys/power/image_size

File diff suppressed because it is too large

View File

@ -0,0 +1,24 @@
1 Release Date : Mon Jan 23 14:09:01 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.02
3 Older Version : 00.00.02.01
i. New template defined to represent each family of controllers (identified by processor used).
The template will have definitions that will be initialised to appropriate values for a specific family of controllers. The template definition has four function pointers. During driver initialisation the function pointers will be set based on the controller family type. This change is done to support new controllers that have different processors and thus different register sets.
-Sumant Patro <Sumant.Patro@lsil.com>
1 Release Date : Mon Dec 19 14:36:26 PST 2005 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.00-rc4
3 Older Version : 00.00.02.01
i. Code reorganized to remove code duplication in megasas_build_cmd.
"There's a lot of duplicate code megasas_build_cmd. Move that out of the different codepathes and merge the reminder of megasas_build_cmd into megasas_queue_command"
- Christoph Hellwig <hch@lst.de>
ii. Defined MEGASAS_IOC_FIRMWARE32 for code paths that handles 32 bit applications in 64 bit systems.
"MEGASAS_IOC_FIRMWARE can't be redefined if CONFIG_COMPAT is set, we need to define a MEGASAS_IOC_FIRMWARE32 define so native binaries continue to work"
- Christoph Hellwig <hch@lst.de>

View File

@ -1,5 +1,5 @@
====================================================================
= Adaptec Ultra320 Family Manager Set v1.3.11 =
= Adaptec Ultra320 Family Manager Set =
= =
= README for =
= The Linux Operating System =
@ -63,6 +63,11 @@ The following information is available in this file:
68-pin)
2. Version History
3.0 (December 1st, 2005)
- Updated driver to use SCSI transport class infrastructure
- Upported sequencer and core fixes from adaptec released
version 2.0.15 of the driver.
1.3.11 (July 11, 2003)
- Fix several deadlock issues.
- Add 29320ALP and 39320B Id's.
@ -194,7 +199,7 @@ The following information is available in this file:
supported)
- Support for the PCI-X standard up to 133MHz
- Support for the PCI v2.2 standard
- Domain Validation
- Domain Validation
2.2. Operating System Support:
- Redhat Linux 7.2, 7.3, 8.0, Advanced Server 2.1
@ -411,77 +416,53 @@ The following information is available in this file:
http://www.adaptec.com.
5. Contacting Adaptec
5. Adaptec Customer Support
A Technical Support Identification (TSID) Number is required for
Adaptec technical support.
- The 12-digit TSID can be found on the white barcode-type label
included inside the box with your product. The TSID helps us
included inside the box with your product. The TSID helps us
provide more efficient service by accurately identifying your
product and support status.
Support Options
- Search the Adaptec Support Knowledgebase (ASK) at
http://ask.adaptec.com for articles, troubleshooting tips, and
frequently asked questions for your product.
frequently asked questions about your product.
- For support via Email, submit your question to Adaptec's
Technical Support Specialists at http://ask.adaptec.com.
Technical Support Specialists at http://ask.adaptec.com/.
North America
- Visit our Web site at http://www.adaptec.com.
- To speak with a Fibre Channel/RAID/External Storage Technical
Support Specialist, call 1-321-207-2000,
Hours: Monday-Friday, 3:00 A.M. to 5:00 P.M., PST.
(Not open on holidays)
- For Technical Support in all other technologies including
SCSI, call 1-408-934-7274,
Hours: Monday-Friday, 6:00 A.M. to 5:00 P.M., PST.
(Not open on holidays)
- For after hours support, call 1-800-416-8066 ($99/call,
$149/call on holidays)
- To order Adaptec products including software and cables, call
1-800-442-7274 or 1-408-957-7274. You can also visit our
online store at http://www.adaptecstore.com
- Visit our Web site at http://www.adaptec.com/.
- For information about Adaptec's support options, call
408-957-2550, 24 hours a day, 7 days a week.
- To speak with a Technical Support Specialist,
* For hardware products, call 408-934-7274,
Monday to Friday, 3:00 am to 5:00 pm, PDT.
* For RAID and Fibre Channel products, call 321-207-2000,
Monday to Friday, 3:00 am to 5:00 pm, PDT.
To expedite your service, have your computer with you.
- To order Adaptec products, including accessories and cables,
call 408-957-7274. To order cables online go to
http://www.adaptec.com/buy-cables/.
Europe
- Visit our Web site at http://www.adaptec-europe.com.
- English and French: To speak with a Technical Support
Specialist, call one of the following numbers:
- English: +32-2-352-3470
- French: +32-2-352-3460
Hours: Monday-Thursday, 10:00 to 12:30, 13:30 to 17:30 CET
Friday, 10:00 to 12:30, 13:30 to 16:30 CET
- German: To speak with a Technical Support Specialist,
call +49-89-456-40660
Hours: Monday-Thursday, 09:30 to 12:30, 13:30 to 16:30 CET
Friday, 09:30 to 12:30, 13:30 to 15:00 CET
- To order Adaptec products, including accessories and cables:
- UK: +0800-96-65-26 or fax +0800-731-02-95
- Other European countries: +32-11-300-379
Australia and New Zealand
- Visit our Web site at http://www.adaptec.com.au.
- To speak with a Technical Support Specialist, call
+612-9416-0698
Hours: Monday-Friday, 10:00 A.M. to 4:30 P.M., EAT
(Not open on holidays)
- Visit our Web site at http://www.adaptec-europe.com/.
- To speak with a Technical Support Specialist, call, or email,
* German: +49 89 4366 5522, Monday-Friday, 9:00-17:00 CET,
http://ask-de.adaptec.com/.
* French: +49 89 4366 5533, Monday-Friday, 9:00-17:00 CET,
http://ask-fr.adaptec.com/.
* English: +49 89 4366 5544, Monday-Friday, 9:00-17:00 GMT,
http://ask.adaptec.com/.
- You can order Adaptec cables online at
http://www.adaptec.com/buy-cables/.
Japan
- Visit our web site at http://www.adaptec.co.jp/.
- To speak with a Technical Support Specialist, call
+81-3-5308-6120
Hours: Monday-Friday, 9:00 a.m. to 12:00 p.m., 1:00 p.m. to
6:00 p.m. TSC
Hong Kong and China
- To speak with a Technical Support Specialist, call
+852-2869-7200
Hours: Monday-Friday, 10:00 to 17:00.
- Fax Technical Support at +852-2869-7100.
Singapore
- To speak with a Technical Support Specialist, call
+65-245-7470
Hours: Monday-Friday, 10:00 to 17:00.
- Fax Technical Support at +852-2869-7100
+81 3 5308 6120, Monday-Friday, 9:00 a.m. to 12:00 p.m.,
1:00 p.m. to 6:00 p.m.
-------------------------------------------------------------------
/*

View File

@ -309,81 +309,57 @@ The following information is available in this file:
-----------------------------------------------------------------
Example:
'options aic7xxx aic7xxx=verbose,no_probe,tag_info:{{},{,,10}},seltime:1"
'options aic7xxx aic7xxx=verbose,no_probe,tag_info:{{},{,,10}},seltime:1'
enables verbose logging, Disable EISA/VLB probing,
and set tag depth on Controller 1/Target 2 to 10 tags.
3. Contacting Adaptec
4. Adaptec Customer Support
A Technical Support Identification (TSID) Number is required for
Adaptec technical support.
- The 12-digit TSID can be found on the white barcode-type label
included inside the box with your product. The TSID helps us
included inside the box with your product. The TSID helps us
provide more efficient service by accurately identifying your
product and support status.
Support Options
- Search the Adaptec Support Knowledgebase (ASK) at
http://ask.adaptec.com for articles, troubleshooting tips, and
frequently asked questions for your product.
frequently asked questions about your product.
- For support via Email, submit your question to Adaptec's
Technical Support Specialists at http://ask.adaptec.com.
Technical Support Specialists at http://ask.adaptec.com/.
North America
- Visit our Web site at http://www.adaptec.com.
- To speak with a Fibre Channel/RAID/External Storage Technical
Support Specialist, call 1-321-207-2000,
Hours: Monday-Friday, 3:00 A.M. to 5:00 P.M., PST.
(Not open on holidays)
- For Technical Support in all other technologies including
SCSI, call 1-408-934-7274,
Hours: Monday-Friday, 6:00 A.M. to 5:00 P.M., PST.
(Not open on holidays)
- For after hours support, call 1-800-416-8066 ($99/call,
$149/call on holidays)
- To order Adaptec products including software and cables, call
1-800-442-7274 or 1-408-957-7274. You can also visit our
online store at http://www.adaptecstore.com
- Visit our Web site at http://www.adaptec.com/.
- For information about Adaptec's support options, call
408-957-2550, 24 hours a day, 7 days a week.
- To speak with a Technical Support Specialist,
* For hardware products, call 408-934-7274,
Monday to Friday, 3:00 am to 5:00 pm, PDT.
* For RAID and Fibre Channel products, call 321-207-2000,
Monday to Friday, 3:00 am to 5:00 pm, PDT.
To expedite your service, have your computer with you.
- To order Adaptec products, including accessories and cables,
call 408-957-7274. To order cables online go to
http://www.adaptec.com/buy-cables/.
Europe
- Visit our Web site at http://www.adaptec-europe.com.
- English and French: To speak with a Technical Support
Specialist, call one of the following numbers:
- English: +32-2-352-3470
- French: +32-2-352-3460
Hours: Monday-Thursday, 10:00 to 12:30, 13:30 to 17:30 CET
Friday, 10:00 to 12:30, 13:30 to 16:30 CET
- German: To speak with a Technical Support Specialist,
call +49-89-456-40660
Hours: Monday-Thursday, 09:30 to 12:30, 13:30 to 16:30 CET
Friday, 09:30 to 12:30, 13:30 to 15:00 CET
- To order Adaptec products, including accessories and cables:
- UK: +0800-96-65-26 or fax +0800-731-02-95
- Other European countries: +32-11-300-379
Australia and New Zealand
- Visit our Web site at http://www.adaptec.com.au.
- To speak with a Technical Support Specialist, call
+612-9416-0698
Hours: Monday-Friday, 10:00 A.M. to 4:30 P.M., EAT
(Not open on holidays)
- Visit our Web site at http://www.adaptec-europe.com/.
- To speak with a Technical Support Specialist, call, or email,
* German: +49 89 4366 5522, Monday-Friday, 9:00-17:00 CET,
http://ask-de.adaptec.com/.
* French: +49 89 4366 5533, Monday-Friday, 9:00-17:00 CET,
http://ask-fr.adaptec.com/.
* English: +49 89 4366 5544, Monday-Friday, 9:00-17:00 GMT,
http://ask.adaptec.com/.
- You can order Adaptec cables online at
http://www.adaptec.com/buy-cables/.
Japan
- Visit our web site at http://www.adaptec.co.jp/.
- To speak with a Technical Support Specialist, call
+81-3-5308-6120
Hours: Monday-Friday, 9:00 a.m. to 12:00 p.m., 1:00 p.m. to
6:00 p.m. TSC
Hong Kong and China
- To speak with a Technical Support Specialist, call
+852-2869-7200
Hours: Monday-Friday, 10:00 to 17:00.
- Fax Technical Support at +852-2869-7100.
Singapore
- To speak with a Technical Support Specialist, call
+65-245-7470
Hours: Monday-Friday, 10:00 to 17:00.
- Fax Technical Support at +852-2869-7100
+81 3 5308 6120, Monday-Friday, 9:00 a.m. to 12:00 p.m.,
1:00 p.m. to 6:00 p.m.
-------------------------------------------------------------------
/*

View File

@ -837,8 +837,10 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
Module for AC'97 motherboards from Intel and compatibles.
* Intel i810/810E, i815, i820, i830, i84x, MX440
ICH5, ICH6, ICH7, ESB2
* SiS 7012 (SiS 735)
* NVidia NForce, NForce2
* NVidia NForce, NForce2, NForce3, MCP04, CK804
CK8, CK8S, MCP501
* AMD AMD768, AMD8111
* ALi m5455
@ -868,6 +870,12 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
--------------------
Module for Intel ICH (i8x0) chipset MC97 modems.
* Intel i810/810E, i815, i820, i830, i84x, MX440
ICH5, ICH6, ICH7
* SiS 7013 (SiS 735)
* NVidia NForce, NForce2, NForce2s, NForce3
* AMD AMD8111
* ALi m5455
ac97_clock - AC'97 codec clock base (0 = auto-detect)
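For example (an assumed sketch: the module name snd-intel8x0m and the value
48000 are illustrative, not taken from the text above), the codec clock could
be forced in /etc/modprobe.conf if auto-detection picks a wrong rate:

    options snd-intel8x0m ac97_clock=48000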

View File

@ -5206,14 +5206,14 @@ struct _snd_pcm_runtime {
You need to pass the <function>snd_dma_pci_data(pci)</function>,
where pci is the struct <structname>pci_dev</structname> pointer
of the chip as well.
The <type>snd_sg_buf_t</type> instance is created as
The <type>struct snd_sg_buf</type> instance is created as
substream-&gt;dma_private. You can cast
the pointer like:
<informalexample>
<programlisting>
<![CDATA[
struct snd_sg_buf *sgbuf = (struct snd_sg_buf_t*)substream->dma_private;
struct snd_sg_buf *sgbuf = (struct snd_sg_buf *)substream->dma_private;
]]>
</programlisting>
</informalexample>

View File

@ -28,6 +28,7 @@ Currently, these files are in /proc/sys/vm:
- block_dump
- drop-caches
- zone_reclaim_mode
- zone_reclaim_interval
==============================================================
@ -126,15 +127,54 @@ the high water marks for each per cpu page list.
zone_reclaim_mode:
This is set during bootup to 1 if it is determined that pages from
remote zones will cause a significant performance reduction. The
Zone_reclaim_mode allows one to set more or less aggressive approaches to
reclaiming memory when a zone runs out of memory. If it is set to zero then no
zone reclaim occurs. Allocations will be satisfied from other zones / nodes
in the system.
This is a value ORed together of
1 = Zone reclaim on
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages
8 = Also do a global slab reclaim pass
zone_reclaim_mode is set during bootup to 1 if it is determined that pages
from remote zones will cause a measurable performance reduction. The
page allocator will then reclaim easily reusable pages (those page
cache pages that are currently not used) before going off node.
cache pages that are currently not used) before allocating off node pages.
The user can override this setting. It may be beneficial to switch
off zone reclaim if the system is used for a file server and all
of memory should be used for caching files from disk.
It may be beneficial to switch off zone reclaim if the system is
used for a file server and all of memory should be used for caching files
from disk. In that case the caching effect is more important than
data locality.
It may be beneficial to switch this on if one wants to do zone
reclaim regardless of the numa distances in the system.
Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up and so effectively
throttles the process. This may decrease the performance of a single process
since it cannot use all of system memory to buffer the outgoing writes
anymore, but it preserves the memory on other nodes so that the performance
of other processes running on other nodes will not be affected.
Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.
It may be advisable to allow slab reclaim if the system makes heavy
use of files and builds up large slab caches. However, the slab
shrink operation is global, may take a long time and will free slabs
on all nodes of the system.
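As a minimal illustration (the chosen value is only an example), turning
zone reclaim on and allowing it to write out dirty pages, but neither
swapping pages nor running the global slab pass, means ORing together the
bits 1 and 2:

	echo 3 > /proc/sys/vm/zone_reclaim_mode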
================================================================
zone_reclaim_interval:
The time allowed for off node allocations after zone reclaim
has failed to reclaim enough pages to allow a local allocation.
Time is set in seconds and set by default to 30 seconds.
Reduce the interval if undesired off node allocations occur. However, too
frequent scans will have a negative impact on off node allocation performance.
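As an illustrative sketch (the 10 second value is arbitrary), the interval
can be shortened so that local reclaim is retried sooner:

	echo 10 > /proc/sys/vm/zone_reclaim_interval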

View File

@ -0,0 +1,306 @@
ET61X[12]51 PC Camera Controllers
Driver for Linux
=================================
- Documentation -
Index
=====
1. Copyright
2. Disclaimer
3. License
4. Overview and features
5. Module dependencies
6. Module loading
7. Module parameters
8. Optional device control through "sysfs"
9. Supported devices
10. Notes for V4L2 application developers
11. Contact information
1. Copyright
============
Copyright (C) 2006 by Luca Risolia <luca.risolia@studio.unibo.it>
2. Disclaimer
=============
Etoms is a trademark of Etoms Electronics Corp.
This software is not developed or sponsored by Etoms Electronics.
3. License
==========
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
4. Overview and features
========================
This driver supports the video interface of the devices mounting the ET61X151
or ET61X251 PC Camera Controllers.
It's worth noting that Etoms Electronics has never collaborated with the
author during the development of this project; despite several requests,
Etoms Electronics also refused to release sufficiently detailed specifications
of the video compression engine.
The driver relies on the Video4Linux2 and USB core modules. It has been
designed to run properly on SMP systems as well.
The latest version of the ET61X[12]51 driver can be found at the following URL:
http://www.linux-projects.org/
Some of the features of the driver are:
- full compliance with the Video4Linux2 API (see also "Notes for V4L2
application developers" paragraph);
- available mmap or read/poll methods for video streaming through isochronous
data transfers;
- automatic detection of image sensor;
- support for any window resolutions and optional panning within the maximum
pixel area of image sensor;
- image downscaling with arbitrary scaling factors of 1 and 2 in both
directions (see "Notes for V4L2 application developers" paragraph);
- two different video formats for uncompressed or compressed data in low or
high compression quality (see also "Notes for V4L2 application developers"
paragraph);
- full support for the capabilities of every possible image sensor that can
be connected to the ET61X[12]51 bridges, including, for instance, red, green,
blue and global gain adjustments and exposure control (see "Supported
devices" paragraph for details);
- use of default color settings for sunlight conditions;
- dynamic I/O interface for both ET61X[12]51 and image sensor control (see
"Optional device control through 'sysfs'" paragraph);
- dynamic driver control thanks to various module parameters (see "Module
parameters" paragraph);
- up to 64 cameras can be handled at the same time; they can be connected and
disconnected from the host many times without turning off the computer, if
the system supports hotplugging;
- no known bugs.
5. Module dependencies
======================
For it to work properly, the driver needs kernel support for Video4Linux and
USB.
The following options of the kernel configuration file must be enabled and
corresponding modules must be compiled:
# Multimedia devices
#
CONFIG_VIDEO_DEV=m
To enable advanced debugging functionality on the device through /sysfs:
# Multimedia devices
#
CONFIG_VIDEO_ADV_DEBUG=y
# USB support
#
CONFIG_USB=m
In addition, depending on the hardware being used, the modules below are
necessary:
# USB Host Controller Drivers
#
CONFIG_USB_EHCI_HCD=m
CONFIG_USB_UHCI_HCD=m
CONFIG_USB_OHCI_HCD=m
And finally:
# USB Multimedia devices
#
CONFIG_USB_ET61X251=m
6. Module loading
=================
To use the driver, it is necessary to load the "et61x251" module into memory
after the other required modules: "videodev", "usbcore" and, depending on
the USB host controller you have, "ehci-hcd", "uhci-hcd" or "ohci-hcd".
Loading can be done as shown below:
[root@localhost home]# modprobe et61x251
At this point the devices should be recognized. You can invoke "dmesg" to
analyze kernel messages and verify that the loading process has gone well:
[user@localhost home]$ dmesg
7. Module parameters
====================
Module parameters are listed below:
-------------------------------------------------------------------------------
Name: video_nr
Type: short array (min = 0, max = 64)
Syntax: <-1|n[,...]>
Description: Specify V4L2 minor mode number:
-1 = use next available
n = use minor number n
You can specify up to 64 cameras this way.
For example:
video_nr=-1,2,-1 would assign minor number 2 to the second
registered camera and use auto for the first one and for every
other camera.
Default: -1
-------------------------------------------------------------------------------
Name: force_munmap
Type: bool array (min = 0, max = 64)
Syntax: <0|1[,...]>
Description: Force the application to unmap previously mapped buffer memory
before calling any VIDIOC_S_CROP or VIDIOC_S_FMT ioctl's. Not
all the applications support this feature. This parameter is
specific for each detected camera.
0 = do not force memory unmapping
1 = force memory unmapping (save memory)
Default: 0
-------------------------------------------------------------------------------
Name: debug
Type: ushort
Syntax: <n>
Description: Debugging information level, from 0 to 3:
0 = none (use carefully)
1 = critical errors
2 = significant information
3 = more verbose messages
Level 3 is useful for testing only, when only one device
is used at a time. It also shows some more information
about the hardware being detected. This module parameter can be
changed at runtime thanks to the /sys filesystem interface.
Default: 2
-------------------------------------------------------------------------------
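As a combined example (an illustrative sketch only; the chosen values are
arbitrary), the parameters above can be passed on the modprobe command line,
here assigning minor number 2 to the second registered camera, forcing memory
unmapping for the first two cameras and raising the debug level:

	[root@localhost home]# modprobe et61x251 video_nr=-1,2 force_munmap=1,1 debug=3

The "debug" parameter can later be changed at runtime through the /sys
filesystem interface, as noted above.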
8. Optional device control through "sysfs"
==========================================
If the kernel has been compiled with the CONFIG_VIDEO_ADV_DEBUG option enabled,
it is possible to read and write both the ET61X[12]51 and the image sensor
registers by using the "sysfs" filesystem interface.
There are four files in the /sys/class/video4linux/videoX directory for each
registered camera: "reg", "val", "i2c_reg" and "i2c_val". The first two files
control the ET61X[12]51 bridge, while the other two control the sensor chip.
"reg" and "i2c_reg" hold the values of the current register index where the
following read/write operations through "val" and "i2c_val" are addressed.
Their use is not intended for end-users, unless you know what you are doing.
Remember that you must be logged in as root before writing to them.
As an example, suppose we want to read the value contained in the
register number 1 of the sensor register table - which is usually the product
identifier - of the camera registered as "/dev/video0":
[root@localhost #] cd /sys/class/video4linux/video0
[root@localhost #] echo 1 > i2c_reg
[root@localhost #] cat i2c_val
Note that if the sensor registers cannot be read, "cat" will fail.
To avoid race conditions, all the I/O accesses to the files are serialized.
9. Supported devices
====================
Neither the names of the companies nor those of their products will be
mentioned here. They have never collaborated with the author, so they get no
advertising.
From the point of view of a driver, what unambiguously identifies a device are
its USB vendor and product identifiers. Below is a list of known identifiers of
devices mounting the ET61X[12]51 PC camera controllers:
Vendor ID Product ID
--------- ----------
0x102c 0x6151
0x102c 0x6251
0x102c 0x6253
0x102c 0x6254
0x102c 0x6255
0x102c 0x6256
0x102c 0x6257
0x102c 0x6258
0x102c 0x6259
0x102c 0x625a
0x102c 0x625b
0x102c 0x625c
0x102c 0x625d
0x102c 0x625e
0x102c 0x625f
0x102c 0x6260
0x102c 0x6261
0x102c 0x6262
0x102c 0x6263
0x102c 0x6264
0x102c 0x6265
0x102c 0x6266
0x102c 0x6267
0x102c 0x6268
0x102c 0x6269
The following image sensors are supported:
Model Manufacturer
----- ------------
TAS5130D1B Taiwan Advanced Sensor Corporation
All the available control settings of each image sensor are supported through
the V4L2 interface.
10. Notes for V4L2 application developers
========================================
This driver follows the V4L2 API specifications. In particular, it enforces two
rules:
- exactly one I/O method, either "mmap" or "read", is associated with each
file descriptor. Once it is selected, the application must close and reopen the
device to switch to the other I/O method;
- although it is not mandatory, previously mapped buffer memory should always
be unmapped before calling any "VIDIOC_S_CROP" or "VIDIOC_S_FMT" ioctl's.
The same number of buffers as before will be allocated again to match the size
of the new video frames, so you have to map the buffers again before any I/O
attempts on them.
Consistent with the hardware limits, this driver also supports image
downscaling with arbitrary scaling factors of 1 and 2 in both directions.
However, the V4L2 API specifications don't correctly define how the scaling
factor can be chosen arbitrarily by the "negotiation" of the "source" and
"target" rectangles. To work around this flaw, we have added the convention
that, during the negotiation, whenever the "VIDIOC_S_CROP" ioctl is issued, the
scaling factor is restored to 1.
This driver supports two different video formats: the first one is the "8-bit
Sequential Bayer" format and can be used to obtain uncompressed video data
from the device through the current I/O method, while the second one provides
"raw" compressed video data (without frame headers not related to the
compressed data). The current compression quality may vary from 0 to 1 and can
be selected or queried thanks to the VIDIOC_S_JPEGCOMP and VIDIOC_G_JPEGCOMP
V4L2 ioctl's.
11. Contact information
=======================
The author may be contacted by e-mail at <luca.risolia@studio.unibo.it>.
GPG/PGP encrypted e-mails are accepted. The GPG key ID of the author is
'FCE635A4'; the public 1024-bit key should be available at any keyserver;
the fingerprint is: '88E8 F32F 7244 68BA 3958 5D40 99DA 5D2A FCE6 35A4'.

View File

@ -17,16 +17,15 @@ Index
7. Module parameters
8. Optional device control through "sysfs"
9. Supported devices
10. How to add plug-in's for new image sensors
11. Notes for V4L2 application developers
12. Video frame formats
13. Contact information
14. Credits
10. Notes for V4L2 application developers
11. Video frame formats
12. Contact information
13. Credits
1. Copyright
============
Copyright (C) 2004-2005 by Luca Risolia <luca.risolia@studio.unibo.it>
Copyright (C) 2004-2006 by Luca Risolia <luca.risolia@studio.unibo.it>
2. Disclaimer
@ -54,9 +53,8 @@ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
4. Overview and features
========================
This driver attempts to support the video and audio streaming capabilities of
the devices mounting the SONiX SN9C101, SN9C102 and SN9C103 PC Camera
Controllers.
This driver attempts to support the video interface of the devices mounting the
SONiX SN9C101, SN9C102 and SN9C103 PC Camera Controllers.
It's worth noting that SONiX has never collaborated with the author during the
development of this project, despite several requests for sufficiently detailed
@ -78,6 +76,7 @@ Some of the features of the driver are:
- available mmap or read/poll methods for video streaming through isochronous
data transfers;
- automatic detection of image sensor;
- support for built-in microphone interface;
- support for any window resolutions and optional panning within the maximum
pixel area of image sensor;
- image downscaling with arbitrary scaling factors from 1, 2 and 4 in both
@ -96,7 +95,7 @@ Some of the features of the driver are:
parameters" paragraph);
- up to 64 cameras can be handled at the same time; they can be connected and
disconnected from the host many times without turning off the computer, if
your system supports hotplugging;
the system supports hotplugging;
- no known bugs.
@ -112,6 +111,12 @@ corresponding modules must be compiled:
#
CONFIG_VIDEO_DEV=m
To enable advanced debugging functionality on the device through /sysfs:
# Multimedia devices
#
CONFIG_VIDEO_ADV_DEBUG=y
# USB support
#
CONFIG_USB=m
@ -125,6 +130,21 @@ necessary:
CONFIG_USB_UHCI_HCD=m
CONFIG_USB_OHCI_HCD=m
The SN9C103 controller also provides a built-in microphone interface. It is
supported by the USB Audio driver thanks to the ALSA API:
# Sound
#
CONFIG_SOUND=y
# Advanced Linux Sound Architecture
#
CONFIG_SND=m
# USB devices
#
CONFIG_SND_USB_AUDIO=m
And finally:
# USB Multimedia devices
@ -153,7 +173,7 @@ analyze kernel messages and verify that the loading process has gone well:
Module parameters are listed below:
-------------------------------------------------------------------------------
Name: video_nr
Type: int array (min = 0, max = 64)
Type: short array (min = 0, max = 64)
Syntax: <-1|n[,...]>
Description: Specify V4L2 minor mode number:
-1 = use next available
@ -165,19 +185,19 @@ Description: Specify V4L2 minor mode number:
other camera.
Default: -1
-------------------------------------------------------------------------------
Name: force_munmap;
Name: force_munmap
Type: bool array (min = 0, max = 64)
Syntax: <0|1[,...]>
Description: Force the application to unmap previously mapped buffer memory
before calling any VIDIOC_S_CROP or VIDIOC_S_FMT ioctl's. Not
all the applications support this feature. This parameter is
specific for each detected camera.
0 = do not force memory unmapping"
1 = force memory unmapping (save memory)"
0 = do not force memory unmapping
1 = force memory unmapping (save memory)
Default: 0
-------------------------------------------------------------------------------
Name: debug
Type: int
Type: ushort
Syntax: <n>
Description: Debugging information level, from 0 to 3:
0 = none (use carefully)
@ -187,14 +207,15 @@ Description: Debugging information level, from 0 to 3:
Level 3 is useful for testing only, when only one device
is used. It also shows some more informations about the
hardware being detected. This parameter can be changed at
runtime thanks to the /sys filesystem.
runtime thanks to the /sys filesystem interface.
Default: 2
-------------------------------------------------------------------------------
8. Optional device control through "sysfs" [1]
==========================================
It is possible to read and write both the SN9C10x and the image sensor
If the kernel has been compiled with the CONFIG_VIDEO_ADV_DEBUG option enabled,
it is possible to read and write both the SN9C10x and the image sensor
registers by using the "sysfs" filesystem interface.
Every time a supported device is recognized, a write-only file named "green" is
@ -236,7 +257,7 @@ serialized.
The sysfs interface also provides the "frame_header" entry, which exports the
frame header of the most recent requested and captured video frame. The header
is 12-bytes long and is appended to every video frame by the SN9C10x
is always 18-bytes long and is appended to every video frame by the SN9C10x
controllers. As an example, this additional information can be used by the user
application for implementing auto-exposure features via software.
@ -250,7 +271,8 @@ Byte # Value Description
0x03 0xC4 Frame synchronisation pattern.
0x04 0xC4 Frame synchronisation pattern.
0x05 0x96 Frame synchronisation pattern.
0x06 0x00 or 0x01 Unknown meaning. The exact value depends on the chip.
0x06 0xXX Unknown meaning. The exact value depends on the chip;
possible values are 0x00, 0x01 and 0x20.
0x07 0xXX Variable value, whose bits are ff00uzzc, where ff is a
frame counter, u is unknown, zz is a size indicator
(00 = VGA, 01 = SIF, 10 = QSIF) and c stands for
@ -267,12 +289,23 @@ Byte # Value Description
times the area outside of the specified AE area. For
images that are not pure white, the value scales down
according to relative whiteness.
according to relative whiteness.
The following bytes are used by the SN9C103 bridge only:
0x0C 0xXX Unknown meaning
0x0D 0xXX Unknown meaning
0x0E 0xXX Unknown meaning
0x0F 0xXX Unknown meaning
0x10 0xXX Unknown meaning
0x11 0xXX Unknown meaning
The AE area (sx, sy, ex, ey) in the active window can be set by programming the
registers 0x1c, 0x1d, 0x1e and 0x1f of the SN9C10x controllers, where one unit
corresponds to 32 pixels.
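As an illustrative sketch only (the register index 28 is the decimal form of
0x1c mentioned above, the written value 0 is arbitrary, and root privileges
are required), these registers can be programmed through the "reg" and "val"
sysfs entries described earlier:

	[root@localhost #] cd /sys/class/video4linux/video0
	[root@localhost #] echo 28 > reg
	[root@localhost #] echo 0 > val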
[1] The frame header has been documented by Bertrik Sikken.
[1] Part of the meaning of the frame header has been documented by Bertrik
Sikken.
9. Supported devices
@ -298,6 +331,7 @@ Vendor ID Product ID
0x0c45 0x602b
0x0c45 0x602c
0x0c45 0x602d
0x0c45 0x602e
0x0c45 0x6030
0x0c45 0x6080
0x0c45 0x6082
@ -348,18 +382,7 @@ appreciated. Non-available hardware will not be supported by the author of this
driver.
10. How to add plug-in's for new image sensors
==============================================
It should be easy to write plug-in's for new sensors by using the small API
that has been created for this purpose, which is present in "sn9c102_sensor.h"
(documentation is included there). As an example, have a look at the code in
"sn9c102_pas106b.c", which uses the mentioned interface.
At the moment, possible unsupported image sensors are: CIS-VF10 (VGA),
OV7620 (VGA), OV7630 (VGA).
11. Notes for V4L2 application developers
10. Notes for V4L2 application developers
=========================================
This driver follows the V4L2 API specifications. In particular, it enforces two
rules:
@ -394,7 +417,7 @@ initialized (as described in the documentation of the API for the image sensors
supplied by this driver).
12. Video frame formats [1]
11. Video frame formats [1]
=======================
The SN9C10x PC Camera Controllers can send images in two possible video
formats over the USB: either native "Sequential RGB Bayer" or Huffman
@ -455,7 +478,7 @@ The following Huffman codes have been found:
documented by Bertrik Sikken.
13. Contact information
12. Contact information
=======================
The author may be contacted by e-mail at <luca.risolia@studio.unibo.it>.
@ -464,7 +487,7 @@ GPG/PGP encrypted e-mail's are accepted. The GPG key ID of the author is
the fingerprint is: '88E8 F32F 7244 68BA 3958 5D40 99DA 5D2A FCE6 35A4'.
14. Credits
13. Credits
===========
Many thanks to the following persons for their contributions (listed in
alphabetical order):
@ -480,5 +503,5 @@ order):
- Bertrik Sikken, who reverse-engineered and documented the Huffman compression
algorithm used in the SN9C10x controllers and implemented the first decoder;
- Mizuno Takafumi for the donation of a webcam;
- An "anonymous" donator (who didn't want his name to be revealed) for the
- an "anonymous" donator (who didn't want his name to be revealed) for the
donation of a webcam.

View File

@ -57,16 +57,12 @@ based cameras should be supported as well.
The driver is divided into two modules: the basic one, "w9968cf", is needed for
the supported devices to work; the second one, "w9968cf-vpp", is an optional
module, which provides some useful video post-processing functions like video
decoding, up-scaling and colour conversions. Once the driver is installed,
every time an application tries to open a recognized device, "w9968cf" checks
the presence of the "w9968cf-vpp" module and loads it automatically by default.
decoding, up-scaling and colour conversions.
Please keep in mind that official kernels do not include the second module for
performance purposes. However it is always recommended to download and install
the latest and complete release of the driver, replacing the existing one, if
present: it will be still even possible not to load the "w9968cf-vpp" module at
all, if you ever want to. Another important missing feature of the version in
the official Linux 2.4 kernels is the writeable /proc filesystem interface.
Note that the official kernels neither include nor support the second
module, for performance reasons. Therefore, it is always recommended to
download and install the latest and complete release of the driver,
replacing the existing one, if present.
The latest and full-featured version of the W996[87]CF driver can be found at:
http://www.linux-projects.org. Please refer to the documentation included in
@ -201,22 +197,6 @@ Note: The kernel must be compiled with the CONFIG_KMOD option
enabled for the 'ovcamchip' module to be loaded and for
this parameter to be present.
-------------------------------------------------------------------------------
Name: vppmod_load
Type: bool
Syntax: <0|1>
Description: Automatic 'w9968cf-vpp' module loading: 0 disabled, 1 enabled.
If enabled, every time an application attempts to open a
camera, 'insmod' searches for the video post-processing module
in the system and loads it automatically (if present).
The optional 'w9968cf-vpp' module adds extra image manipulation
capabilities to the 'w9968cf' module,like software up-scaling,
colour conversions and video decompression for very high frame
rates.
Default: 1
Note: The kernel must be compiled with the CONFIG_KMOD option
enabled for the 'w9968cf-vpp' module to be loaded and for
this parameter to be present.
-------------------------------------------------------------------------------
Name: simcams
Type: int
Syntax: <n>

View File

@ -0,0 +1,129 @@
Page migration
--------------
Page migration allows moving the physical location of pages between
nodes in a NUMA system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.
The main intent of page migration is to reduce the latency of memory access
by moving pages closer to the processor where the process accessing that
memory is running.
Page migration allows a process to manually relocate the node on which its
pages are located through the MF_MOVE and MF_MOVE_ALL options while setting
a new memory policy. The pages of a process can also be relocated
by another process using the sys_migrate_pages() function call. The
migrate_pages function call takes two sets of nodes and moves the pages of a
process that are located on the from nodes to the destination nodes.
Manual migration is very useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. At some point in the future we may have
some mechanism in the scheduler that will automatically move the pages.
Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson has equipped cpusets with the ability to
move pages when a task is moved to another cpuset. This allows automatic
control over locality of a process. If a task is moved to a new cpuset
then all its pages are moved with it as well, so that the performance of the
process does not sink dramatically (as it otherwise would today).
Page migration preserves the relative location of pages within a group of
nodes for all migration techniques, so that the particular memory allocation
pattern generated by a process is kept even after migrating it. This is
necessary in order to preserve the memory latencies. Processes will run
with similar performance after migration.
Page migration occurs in several steps. First comes a high level
description for those trying to use migrate_pages(), and then
a low level description of how the details actually work.
A. Use of migrate_pages()
-------------------------
1. Remove pages from the LRU.
Lists of pages to be migrated are generated by scanning over
pages and moving them into lists. This is done by
calling isolate_lru_page() or __isolate_lru_page().
Calling isolate_lru_page increases the reference count of the page
so that it cannot vanish under us.
2. Generate a list of newly allocated pages to move the contents
of the first list to.
3. The migrate_pages() function is called which attempts
to do the migration. It returns the moved pages in the
list specified as the third parameter and the failed
migrations in the fourth parameter. The first parameter
will contain the pages that could still be retried.
4. The leftover pages of various types are returned
to the LRU using putback_to_lru_pages() or otherwise
disposed of. The pages will still have the refcount as
increased by isolate_lru_pages()!
B. Operation of migrate_pages()
--------------------------------
migrate_pages does several passes over its list of pages. A page is moved
if all references to the page are removable at that time.
Steps:
1. Lock the page to be migrated
2. Ensure that writeback is complete.
3. Make sure that the page has an assigned swap cache entry if
it is an anonymous page. The swap cache reference is necessary
to preserve the information contained in the page table maps.
4. Prep the new page that we want to move to. It is locked
and set to not being uptodate so that all accesses to the new
page immediately lock while we are moving references.
5. All the page table references to the page are either dropped (file backed)
or converted to swap references (anonymous pages). This should decrease the
reference count.
6. The radix tree lock is taken
7. The refcount of the page is examined and we back out if references remain;
otherwise we know that we are the only one referencing this page.
8. The radix tree is checked and if it does not contain the pointer to this
page then we back out.
9. The mapping is checked. If the mapping is gone then a truncate action may
be in progress and we back out.
10. The new page is prepped with some settings from the old page so that accesses
to the new page will be discovered to have the correct settings.
11. The radix tree is changed to point to the new page.
12. The reference count of the old page is dropped because the reference has now
been removed.
13. The radix tree lock is dropped.
14. The page contents are copied to the new page.
15. The remaining page flags are copied to the new page.
16. The old page flags are cleared to indicate that the page does
not use any information anymore.
17. Queued up writeback on the new page is triggered.
18. If swap pte's were generated for the page then remove them again.
19. The locks are dropped from the old and new page.
20. The new page is moved to the LRU.
Christoph Lameter, December 19, 2005.

View File

@ -1176,8 +1176,8 @@ T: git kernel.org:/pub/scm/linux/kernel/git/aegl/linux-2.6.git
S: Maintained
SN-IA64 (Itanium) SUB-PLATFORM
P: Greg Edwards
M: edwardsg@sgi.com
P: Jes Sorensen
M: jes@sgi.com
L: linux-altix@sgi.com
L: linux-ia64@vger.kernel.org
W: http://www.sgi.com/altix
@ -1984,7 +1984,6 @@ M: philb@gnu.org
P: Tim Waugh
M: tim@cyberelk.net
P: David Campbell
M: campbell@torque.net
P: Andrea Arcangeli
M: andrea@suse.de
L: linux-parport@lists.infradead.org
@ -2673,6 +2672,14 @@ M: dbrownell@users.sourceforge.net
L: linux-usb-devel@lists.sourceforge.net
S: Maintained
USB ET61X[12]51 DRIVER
P: Luca Risolia
M: luca.risolia@studio.unibo.it
L: linux-usb-devel@lists.sourceforge.net
L: video4linux-list@redhat.com
W: http://www.linux-projects.org
S: Maintained
USB HID/HIDBP DRIVERS
P: Vojtech Pavlik
M: vojtech@suse.cz
@ -2836,6 +2843,7 @@ USB SN9C10x DRIVER
P: Luca Risolia
M: luca.risolia@studio.unibo.it
L: linux-usb-devel@lists.sourceforge.net
L: video4linux-list@redhat.com
W: http://www.linux-projects.org
S: Maintained
@ -2865,6 +2873,7 @@ USB W996[87]CF DRIVER
P: Luca Risolia
M: luca.risolia@studio.unibo.it
L: linux-usb-devel@lists.sourceforge.net
L: video4linux-list@redhat.com
W: http://www.linux-projects.org
S: Maintained

View File

@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 16
EXTRAVERSION =-rc1
EXTRAVERSION =-rc2
NAME=Sliding Snow Leopard
# *DOCUMENTATION*

View File

@ -28,6 +28,7 @@ void foo(void)
DEFINE(TASK_GID, offsetof(struct task_struct, gid));
DEFINE(TASK_EGID, offsetof(struct task_struct, egid));
DEFINE(TASK_REAL_PARENT, offsetof(struct task_struct, real_parent));
DEFINE(TASK_GROUP_LEADER, offsetof(struct task_struct, group_leader));
DEFINE(TASK_TGID, offsetof(struct task_struct, tgid));
BLANK();

View File

@ -879,17 +879,19 @@ sys_getxpid:
/* See linux/kernel/timer.c sys_getppid for discussion
about this loop. */
ldq $3, TASK_REAL_PARENT($2)
1: ldl $1, TASK_TGID($3)
ldq $3, TASK_GROUP_LEADER($2)
ldq $4, TASK_REAL_PARENT($3)
ldl $0, TASK_TGID($2)
1: ldl $1, TASK_TGID($4)
#ifdef CONFIG_SMP
mov $3, $4
mov $4, $5
mb
ldq $3, TASK_REAL_PARENT($2)
cmpeq $3, $4, $4
beq $4, 1b
ldq $3, TASK_GROUP_LEADER($2)
ldq $4, TASK_REAL_PARENT($3)
cmpeq $4, $5, $5
beq $5, 1b
#endif
stq $1, 80($sp)
ldl $0, TASK_TGID($2)
ret
.end sys_getxpid

View File

@ -68,34 +68,32 @@ show_interrupts(struct seq_file *p, void *v)
#ifdef CONFIG_SMP
int j;
#endif
int i = *(loff_t *) v;
int irq = *(loff_t *) v;
struct irqaction * action;
unsigned long flags;
#ifdef CONFIG_SMP
if (i == 0) {
if (irq == 0) {
seq_puts(p, " ");
for (i = 0; i < NR_CPUS; i++)
if (cpu_online(i))
seq_printf(p, "CPU%d ", i);
for_each_online_cpu(j)
seq_printf(p, "CPU%d ", j);
seq_putc(p, '\n');
}
#endif
if (i < ACTUAL_NR_IRQS) {
spin_lock_irqsave(&irq_desc[i].lock, flags);
action = irq_desc[i].action;
if (irq < ACTUAL_NR_IRQS) {
spin_lock_irqsave(&irq_desc[irq].lock, flags);
action = irq_desc[irq].action;
if (!action)
goto unlock;
seq_printf(p, "%3d: ",i);
seq_printf(p, "%3d: ", irq);
#ifndef CONFIG_SMP
seq_printf(p, "%10u ", kstat_irqs(i));
seq_printf(p, "%10u ", kstat_irqs(irq));
#else
for (j = 0; j < NR_CPUS; j++)
if (cpu_online(j))
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[irq]);
#endif
seq_printf(p, " %14s", irq_desc[i].handler->typename);
seq_printf(p, " %14s", irq_desc[irq].handler->typename);
seq_printf(p, " %c%s",
(action->flags & SA_INTERRUPT)?'+':' ',
action->name);
@ -108,13 +106,12 @@ show_interrupts(struct seq_file *p, void *v)
seq_putc(p, '\n');
unlock:
spin_unlock_irqrestore(&irq_desc[i].lock, flags);
} else if (i == ACTUAL_NR_IRQS) {
spin_unlock_irqrestore(&irq_desc[irq].lock, flags);
} else if (irq == ACTUAL_NR_IRQS) {
#ifdef CONFIG_SMP
seq_puts(p, "IPI: ");
for (i = 0; i < NR_CPUS; i++)
if (cpu_online(i))
seq_printf(p, "%10lu ", cpu_data[i].ipi_count);
for_each_online_cpu(j)
seq_printf(p, "%10lu ", cpu_data[j].ipi_count);
seq_putc(p, '\n');
#endif
seq_printf(p, "ERR: %10lu\n", irq_err_count);
@ -122,7 +119,6 @@ unlock:
return 0;
}
/*
* handle_irq handles all normal device IRQ's (the special
* SMP cross-CPU interrupts have their own specific

View File

@ -14,8 +14,7 @@ CONFIG_GENERIC_IOMAP=y
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
# CONFIG_CLEAN_COMPILE is not set
CONFIG_BROKEN=y
CONFIG_CLEAN_COMPILE=y
CONFIG_BROKEN_ON_SMP=y
#
@ -360,7 +359,6 @@ CONFIG_BLK_DEV_IDE_BAST=y
#
# IEEE 1394 (FireWire) support
#
# CONFIG_IEEE1394 is not set
#
# I2O device support
@ -781,7 +779,6 @@ CONFIG_SYSFS=y
# CONFIG_DEVFS_FS is not set
# CONFIG_DEVPTS_FS_XATTR is not set
# CONFIG_TMPFS is not set
# CONFIG_HUGETLBFS is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y

View File

@ -13,8 +13,7 @@ CONFIG_GENERIC_CALIBRATE_DELAY=y
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
# CONFIG_CLEAN_COMPILE is not set
CONFIG_BROKEN=y
CONFIG_CLEAN_COMPILE=y
CONFIG_BROKEN_ON_SMP=y
CONFIG_LOCK_KERNEL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
@ -308,9 +307,7 @@ CONFIG_MTD_CFI_I2=y
# CONFIG_MTD_ROM is not set
# CONFIG_MTD_ABSENT is not set
CONFIG_MTD_OBSOLETE_CHIPS=y
# CONFIG_MTD_AMDSTD is not set
CONFIG_MTD_SHARP=y
# CONFIG_MTD_JEDEC is not set
#
# Mapping drivers for chip access
@ -396,7 +393,6 @@ CONFIG_ATA_OVER_ETH=m
#
# IEEE 1394 (FireWire) support
#
# CONFIG_IEEE1394 is not set
#
# I2O device support
@ -741,7 +737,6 @@ CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_PROC_FS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
# CONFIG_HUGETLBFS is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set

View File

@ -13,8 +13,7 @@ CONFIG_GENERIC_CALIBRATE_DELAY=y
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
# CONFIG_CLEAN_COMPILE is not set
CONFIG_BROKEN=y
CONFIG_CLEAN_COMPILE=y
CONFIG_BROKEN_ON_SMP=y
CONFIG_INIT_ENV_ARG_LIMIT=32
@ -473,7 +472,6 @@ CONFIG_BLK_DEV_IDE_BAST=y
#
# IEEE 1394 (FireWire) support
#
# CONFIG_IEEE1394 is not set
#
# I2O device support
@ -896,7 +894,6 @@ CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_PROC_FS=y
CONFIG_SYSFS=y
# CONFIG_TMPFS is not set
# CONFIG_HUGETLBFS is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set

View File

@ -7,337 +7,334 @@
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This file is included twice in entry-common.S
* This file is included thrice in entry-common.S
*/
#ifndef NR_syscalls
#define NR_syscalls 328
#else
100:
/* 0 */ .long sys_restart_syscall
.long sys_exit
.long sys_fork_wrapper
.long sys_read
.long sys_write
/* 5 */ .long sys_open
.long sys_close
.long sys_ni_syscall /* was sys_waitpid */
.long sys_creat
.long sys_link
/* 10 */ .long sys_unlink
.long sys_execve_wrapper
.long sys_chdir
.long OBSOLETE(sys_time) /* used by libc4 */
.long sys_mknod
/* 15 */ .long sys_chmod
.long sys_lchown16
.long sys_ni_syscall /* was sys_break */
.long sys_ni_syscall /* was sys_stat */
.long sys_lseek
/* 20 */ .long sys_getpid
.long sys_mount
.long OBSOLETE(sys_oldumount) /* used by libc4 */
.long sys_setuid16
.long sys_getuid16
/* 25 */ .long OBSOLETE(sys_stime)
.long sys_ptrace
.long OBSOLETE(sys_alarm) /* used by libc4 */
.long sys_ni_syscall /* was sys_fstat */
.long sys_pause
/* 30 */ .long OBSOLETE(sys_utime) /* used by libc4 */
.long sys_ni_syscall /* was sys_stty */
.long sys_ni_syscall /* was sys_getty */
.long sys_access
.long sys_nice
/* 35 */ .long sys_ni_syscall /* was sys_ftime */
.long sys_sync
.long sys_kill
.long sys_rename
.long sys_mkdir
/* 40 */ .long sys_rmdir
.long sys_dup
.long sys_pipe
.long sys_times
.long sys_ni_syscall /* was sys_prof */
/* 45 */ .long sys_brk
.long sys_setgid16
.long sys_getgid16
.long sys_ni_syscall /* was sys_signal */
.long sys_geteuid16
/* 50 */ .long sys_getegid16
.long sys_acct
.long sys_umount
.long sys_ni_syscall /* was sys_lock */
.long sys_ioctl
/* 55 */ .long sys_fcntl
.long sys_ni_syscall /* was sys_mpx */
.long sys_setpgid
.long sys_ni_syscall /* was sys_ulimit */
.long sys_ni_syscall /* was sys_olduname */
/* 60 */ .long sys_umask
.long sys_chroot
.long sys_ustat
.long sys_dup2
.long sys_getppid
/* 65 */ .long sys_getpgrp
.long sys_setsid
.long sys_sigaction
.long sys_ni_syscall /* was sys_sgetmask */
.long sys_ni_syscall /* was sys_ssetmask */
/* 70 */ .long sys_setreuid16
.long sys_setregid16
.long sys_sigsuspend_wrapper
.long sys_sigpending
.long sys_sethostname
/* 75 */ .long sys_setrlimit
.long OBSOLETE(sys_old_getrlimit) /* used by libc4 */
.long sys_getrusage
.long sys_gettimeofday
.long sys_settimeofday
/* 80 */ .long sys_getgroups16
.long sys_setgroups16
.long OBSOLETE(old_select) /* used by libc4 */
.long sys_symlink
.long sys_ni_syscall /* was sys_lstat */
/* 85 */ .long sys_readlink
.long sys_uselib
.long sys_swapon
.long sys_reboot
.long OBSOLETE(old_readdir) /* used by libc4 */
/* 90 */ .long OBSOLETE(old_mmap) /* used by libc4 */
.long sys_munmap
.long sys_truncate
.long sys_ftruncate
.long sys_fchmod
/* 95 */ .long sys_fchown16
.long sys_getpriority
.long sys_setpriority
.long sys_ni_syscall /* was sys_profil */
.long sys_statfs
/* 100 */ .long sys_fstatfs
.long sys_ni_syscall
.long OBSOLETE(sys_socketcall)
.long sys_syslog
.long sys_setitimer
/* 105 */ .long sys_getitimer
.long sys_newstat
.long sys_newlstat
.long sys_newfstat
.long sys_ni_syscall /* was sys_uname */
/* 110 */ .long sys_ni_syscall /* was sys_iopl */
.long sys_vhangup
.long sys_ni_syscall
.long OBSOLETE(sys_syscall) /* call a syscall */
.long sys_wait4
/* 115 */ .long sys_swapoff
.long sys_sysinfo
.long OBSOLETE(ABI(sys_ipc, sys_oabi_ipc))
.long sys_fsync
.long sys_sigreturn_wrapper
/* 120 */ .long sys_clone_wrapper
.long sys_setdomainname
.long sys_newuname
.long sys_ni_syscall
.long sys_adjtimex
/* 125 */ .long sys_mprotect
.long sys_sigprocmask
.long sys_ni_syscall /* was sys_create_module */
.long sys_init_module
.long sys_delete_module
/* 130 */ .long sys_ni_syscall /* was sys_get_kernel_syms */
.long sys_quotactl
.long sys_getpgid
.long sys_fchdir
.long sys_bdflush
/* 135 */ .long sys_sysfs
.long sys_personality
.long sys_ni_syscall /* .long _sys_afs_syscall */
.long sys_setfsuid16
.long sys_setfsgid16
/* 140 */ .long sys_llseek
.long sys_getdents
.long sys_select
.long sys_flock
.long sys_msync
/* 145 */ .long sys_readv
.long sys_writev
.long sys_getsid
.long sys_fdatasync
.long sys_sysctl
/* 150 */ .long sys_mlock
.long sys_munlock
.long sys_mlockall
.long sys_munlockall
.long sys_sched_setparam
/* 155 */ .long sys_sched_getparam
.long sys_sched_setscheduler
.long sys_sched_getscheduler
.long sys_sched_yield
.long sys_sched_get_priority_max
/* 160 */ .long sys_sched_get_priority_min
.long sys_sched_rr_get_interval
.long sys_nanosleep
.long sys_arm_mremap
.long sys_setresuid16
/* 165 */ .long sys_getresuid16
.long sys_ni_syscall
.long sys_ni_syscall /* was sys_query_module */
.long sys_poll
.long sys_nfsservctl
/* 170 */ .long sys_setresgid16
.long sys_getresgid16
.long sys_prctl
.long sys_rt_sigreturn_wrapper
.long sys_rt_sigaction
/* 175 */ .long sys_rt_sigprocmask
.long sys_rt_sigpending
.long sys_rt_sigtimedwait
.long sys_rt_sigqueueinfo
.long sys_rt_sigsuspend_wrapper
/* 180 */ .long ABI(sys_pread64, sys_oabi_pread64)
.long ABI(sys_pwrite64, sys_oabi_pwrite64)
.long sys_chown16
.long sys_getcwd
.long sys_capget
/* 185 */ .long sys_capset
.long sys_sigaltstack_wrapper
.long sys_sendfile
.long sys_ni_syscall
.long sys_ni_syscall
/* 190 */ .long sys_vfork_wrapper
.long sys_getrlimit
.long sys_mmap2
.long ABI(sys_truncate64, sys_oabi_truncate64)
.long ABI(sys_ftruncate64, sys_oabi_ftruncate64)
/* 195 */ .long ABI(sys_stat64, sys_oabi_stat64)
.long ABI(sys_lstat64, sys_oabi_lstat64)
.long ABI(sys_fstat64, sys_oabi_fstat64)
.long sys_lchown
.long sys_getuid
/* 200 */ .long sys_getgid
.long sys_geteuid
.long sys_getegid
.long sys_setreuid
.long sys_setregid
/* 205 */ .long sys_getgroups
.long sys_setgroups
.long sys_fchown
.long sys_setresuid
.long sys_getresuid
/* 210 */ .long sys_setresgid
.long sys_getresgid
.long sys_chown
.long sys_setuid
.long sys_setgid
/* 215 */ .long sys_setfsuid
.long sys_setfsgid
.long sys_getdents64
.long sys_pivot_root
.long sys_mincore
/* 220 */ .long sys_madvise
.long ABI(sys_fcntl64, sys_oabi_fcntl64)
.long sys_ni_syscall /* TUX */
.long sys_ni_syscall
.long sys_gettid
/* 225 */ .long ABI(sys_readahead, sys_oabi_readahead)
.long sys_setxattr
.long sys_lsetxattr
.long sys_fsetxattr
.long sys_getxattr
/* 230 */ .long sys_lgetxattr
.long sys_fgetxattr
.long sys_listxattr
.long sys_llistxattr
.long sys_flistxattr
/* 235 */ .long sys_removexattr
.long sys_lremovexattr
.long sys_fremovexattr
.long sys_tkill
.long sys_sendfile64
/* 240 */ .long sys_futex
.long sys_sched_setaffinity
.long sys_sched_getaffinity
.long sys_io_setup
.long sys_io_destroy
/* 245 */ .long sys_io_getevents
.long sys_io_submit
.long sys_io_cancel
.long sys_exit_group
.long sys_lookup_dcookie
/* 250 */ .long sys_epoll_create
.long ABI(sys_epoll_ctl, sys_oabi_epoll_ctl)
.long ABI(sys_epoll_wait, sys_oabi_epoll_wait)
.long sys_remap_file_pages
.long sys_ni_syscall /* sys_set_thread_area */
/* 255 */ .long sys_ni_syscall /* sys_get_thread_area */
.long sys_set_tid_address
.long sys_timer_create
.long sys_timer_settime
.long sys_timer_gettime
/* 260 */ .long sys_timer_getoverrun
.long sys_timer_delete
.long sys_clock_settime
.long sys_clock_gettime
.long sys_clock_getres
/* 265 */ .long sys_clock_nanosleep
.long sys_statfs64_wrapper
.long sys_fstatfs64_wrapper
.long sys_tgkill
.long sys_utimes
/* 270 */ .long sys_arm_fadvise64_64
.long sys_pciconfig_iobase
.long sys_pciconfig_read
.long sys_pciconfig_write
.long sys_mq_open
/* 275 */ .long sys_mq_unlink
.long sys_mq_timedsend
.long sys_mq_timedreceive
.long sys_mq_notify
.long sys_mq_getsetattr
/* 280 */ .long sys_waitid
.long sys_socket
.long sys_bind
.long sys_connect
.long sys_listen
/* 285 */ .long sys_accept
.long sys_getsockname
.long sys_getpeername
.long sys_socketpair
.long sys_send
/* 290 */ .long sys_sendto
.long sys_recv
.long sys_recvfrom
.long sys_shutdown
.long sys_setsockopt
/* 295 */ .long sys_getsockopt
.long sys_sendmsg
.long sys_recvmsg
.long ABI(sys_semop, sys_oabi_semop)
.long sys_semget
/* 300 */ .long sys_semctl
.long sys_msgsnd
.long sys_msgrcv
.long sys_msgget
.long sys_msgctl
/* 305 */ .long sys_shmat
.long sys_shmdt
.long sys_shmget
.long sys_shmctl
.long sys_add_key
/* 310 */ .long sys_request_key
.long sys_keyctl
.long ABI(sys_semtimedop, sys_oabi_semtimedop)
/* vserver */ .long sys_ni_syscall
.long sys_ioprio_set
/* 315 */ .long sys_ioprio_get
.long sys_inotify_init
.long sys_inotify_add_watch
.long sys_inotify_rm_watch
.long sys_mbind
/* 320 */ .long sys_get_mempolicy
.long sys_set_mempolicy
.rept NR_syscalls - (. - 100b) / 4
.long sys_ni_syscall
.endr
/* 0 */ CALL(sys_restart_syscall)
CALL(sys_exit)
CALL(sys_fork_wrapper)
CALL(sys_read)
CALL(sys_write)
/* 5 */ CALL(sys_open)
CALL(sys_close)
CALL(sys_ni_syscall) /* was sys_waitpid */
CALL(sys_creat)
CALL(sys_link)
/* 10 */ CALL(sys_unlink)
CALL(sys_execve_wrapper)
CALL(sys_chdir)
CALL(OBSOLETE(sys_time)) /* used by libc4 */
CALL(sys_mknod)
/* 15 */ CALL(sys_chmod)
CALL(sys_lchown16)
CALL(sys_ni_syscall) /* was sys_break */
CALL(sys_ni_syscall) /* was sys_stat */
CALL(sys_lseek)
/* 20 */ CALL(sys_getpid)
CALL(sys_mount)
CALL(OBSOLETE(sys_oldumount)) /* used by libc4 */
CALL(sys_setuid16)
CALL(sys_getuid16)
/* 25 */ CALL(OBSOLETE(sys_stime))
CALL(sys_ptrace)
CALL(OBSOLETE(sys_alarm)) /* used by libc4 */
CALL(sys_ni_syscall) /* was sys_fstat */
CALL(sys_pause)
/* 30 */ CALL(OBSOLETE(sys_utime)) /* used by libc4 */
CALL(sys_ni_syscall) /* was sys_stty */
CALL(sys_ni_syscall) /* was sys_getty */
CALL(sys_access)
CALL(sys_nice)
/* 35 */ CALL(sys_ni_syscall) /* was sys_ftime */
CALL(sys_sync)
CALL(sys_kill)
CALL(sys_rename)
CALL(sys_mkdir)
/* 40 */ CALL(sys_rmdir)
CALL(sys_dup)
CALL(sys_pipe)
CALL(sys_times)
CALL(sys_ni_syscall) /* was sys_prof */
/* 45 */ CALL(sys_brk)
CALL(sys_setgid16)
CALL(sys_getgid16)
CALL(sys_ni_syscall) /* was sys_signal */
CALL(sys_geteuid16)
/* 50 */ CALL(sys_getegid16)
CALL(sys_acct)
CALL(sys_umount)
CALL(sys_ni_syscall) /* was sys_lock */
CALL(sys_ioctl)
/* 55 */ CALL(sys_fcntl)
CALL(sys_ni_syscall) /* was sys_mpx */
CALL(sys_setpgid)
CALL(sys_ni_syscall) /* was sys_ulimit */
CALL(sys_ni_syscall) /* was sys_olduname */
/* 60 */ CALL(sys_umask)
CALL(sys_chroot)
CALL(sys_ustat)
CALL(sys_dup2)
CALL(sys_getppid)
/* 65 */ CALL(sys_getpgrp)
CALL(sys_setsid)
CALL(sys_sigaction)
CALL(sys_ni_syscall) /* was sys_sgetmask */
CALL(sys_ni_syscall) /* was sys_ssetmask */
/* 70 */ CALL(sys_setreuid16)
CALL(sys_setregid16)
CALL(sys_sigsuspend_wrapper)
CALL(sys_sigpending)
CALL(sys_sethostname)
/* 75 */ CALL(sys_setrlimit)
CALL(OBSOLETE(sys_old_getrlimit)) /* used by libc4 */
CALL(sys_getrusage)
CALL(sys_gettimeofday)
CALL(sys_settimeofday)
/* 80 */ CALL(sys_getgroups16)
CALL(sys_setgroups16)
CALL(OBSOLETE(old_select)) /* used by libc4 */
CALL(sys_symlink)
CALL(sys_ni_syscall) /* was sys_lstat */
/* 85 */ CALL(sys_readlink)
CALL(sys_uselib)
CALL(sys_swapon)
CALL(sys_reboot)
CALL(OBSOLETE(old_readdir)) /* used by libc4 */
/* 90 */ CALL(OBSOLETE(old_mmap)) /* used by libc4 */
CALL(sys_munmap)
CALL(sys_truncate)
CALL(sys_ftruncate)
CALL(sys_fchmod)
/* 95 */ CALL(sys_fchown16)
CALL(sys_getpriority)
CALL(sys_setpriority)
CALL(sys_ni_syscall) /* was sys_profil */
CALL(sys_statfs)
/* 100 */ CALL(sys_fstatfs)
CALL(sys_ni_syscall)
CALL(OBSOLETE(sys_socketcall))
CALL(sys_syslog)
CALL(sys_setitimer)
/* 105 */ CALL(sys_getitimer)
CALL(sys_newstat)
CALL(sys_newlstat)
CALL(sys_newfstat)
CALL(sys_ni_syscall) /* was sys_uname */
/* 110 */ CALL(sys_ni_syscall) /* was sys_iopl */
CALL(sys_vhangup)
CALL(sys_ni_syscall)
CALL(OBSOLETE(sys_syscall)) /* call a syscall */
CALL(sys_wait4)
/* 115 */ CALL(sys_swapoff)
CALL(sys_sysinfo)
CALL(OBSOLETE(ABI(sys_ipc, sys_oabi_ipc)))
CALL(sys_fsync)
CALL(sys_sigreturn_wrapper)
/* 120 */ CALL(sys_clone_wrapper)
CALL(sys_setdomainname)
CALL(sys_newuname)
CALL(sys_ni_syscall)
CALL(sys_adjtimex)
/* 125 */ CALL(sys_mprotect)
CALL(sys_sigprocmask)
CALL(sys_ni_syscall) /* was sys_create_module */
CALL(sys_init_module)
CALL(sys_delete_module)
/* 130 */ CALL(sys_ni_syscall) /* was sys_get_kernel_syms */
CALL(sys_quotactl)
CALL(sys_getpgid)
CALL(sys_fchdir)
CALL(sys_bdflush)
/* 135 */ CALL(sys_sysfs)
CALL(sys_personality)
CALL(sys_ni_syscall) /* CALL(_sys_afs_syscall) */
CALL(sys_setfsuid16)
CALL(sys_setfsgid16)
/* 140 */ CALL(sys_llseek)
CALL(sys_getdents)
CALL(sys_select)
CALL(sys_flock)
CALL(sys_msync)
/* 145 */ CALL(sys_readv)
CALL(sys_writev)
CALL(sys_getsid)
CALL(sys_fdatasync)
CALL(sys_sysctl)
/* 150 */ CALL(sys_mlock)
CALL(sys_munlock)
CALL(sys_mlockall)
CALL(sys_munlockall)
CALL(sys_sched_setparam)
/* 155 */ CALL(sys_sched_getparam)
CALL(sys_sched_setscheduler)
CALL(sys_sched_getscheduler)
CALL(sys_sched_yield)
CALL(sys_sched_get_priority_max)
/* 160 */ CALL(sys_sched_get_priority_min)
CALL(sys_sched_rr_get_interval)
CALL(sys_nanosleep)
CALL(sys_arm_mremap)
CALL(sys_setresuid16)
/* 165 */ CALL(sys_getresuid16)
CALL(sys_ni_syscall)
CALL(sys_ni_syscall) /* was sys_query_module */
CALL(sys_poll)
CALL(sys_nfsservctl)
/* 170 */ CALL(sys_setresgid16)
CALL(sys_getresgid16)
CALL(sys_prctl)
CALL(sys_rt_sigreturn_wrapper)
CALL(sys_rt_sigaction)
/* 175 */ CALL(sys_rt_sigprocmask)
CALL(sys_rt_sigpending)
CALL(sys_rt_sigtimedwait)
CALL(sys_rt_sigqueueinfo)
CALL(sys_rt_sigsuspend_wrapper)
/* 180 */ CALL(ABI(sys_pread64, sys_oabi_pread64))
CALL(ABI(sys_pwrite64, sys_oabi_pwrite64))
CALL(sys_chown16)
CALL(sys_getcwd)
CALL(sys_capget)
/* 185 */ CALL(sys_capset)
CALL(sys_sigaltstack_wrapper)
CALL(sys_sendfile)
CALL(sys_ni_syscall)
CALL(sys_ni_syscall)
/* 190 */ CALL(sys_vfork_wrapper)
CALL(sys_getrlimit)
CALL(sys_mmap2)
CALL(ABI(sys_truncate64, sys_oabi_truncate64))
CALL(ABI(sys_ftruncate64, sys_oabi_ftruncate64))
/* 195 */ CALL(ABI(sys_stat64, sys_oabi_stat64))
CALL(ABI(sys_lstat64, sys_oabi_lstat64))
CALL(ABI(sys_fstat64, sys_oabi_fstat64))
CALL(sys_lchown)
CALL(sys_getuid)
/* 200 */ CALL(sys_getgid)
CALL(sys_geteuid)
CALL(sys_getegid)
CALL(sys_setreuid)
CALL(sys_setregid)
/* 205 */ CALL(sys_getgroups)
CALL(sys_setgroups)
CALL(sys_fchown)
CALL(sys_setresuid)
CALL(sys_getresuid)
/* 210 */ CALL(sys_setresgid)
CALL(sys_getresgid)
CALL(sys_chown)
CALL(sys_setuid)
CALL(sys_setgid)
/* 215 */ CALL(sys_setfsuid)
CALL(sys_setfsgid)
CALL(sys_getdents64)
CALL(sys_pivot_root)
CALL(sys_mincore)
/* 220 */ CALL(sys_madvise)
CALL(ABI(sys_fcntl64, sys_oabi_fcntl64))
CALL(sys_ni_syscall) /* TUX */
CALL(sys_ni_syscall)
CALL(sys_gettid)
/* 225 */ CALL(ABI(sys_readahead, sys_oabi_readahead))
CALL(sys_setxattr)
CALL(sys_lsetxattr)
CALL(sys_fsetxattr)
CALL(sys_getxattr)
/* 230 */ CALL(sys_lgetxattr)
CALL(sys_fgetxattr)
CALL(sys_listxattr)
CALL(sys_llistxattr)
CALL(sys_flistxattr)
/* 235 */ CALL(sys_removexattr)
CALL(sys_lremovexattr)
CALL(sys_fremovexattr)
CALL(sys_tkill)
CALL(sys_sendfile64)
/* 240 */ CALL(sys_futex)
CALL(sys_sched_setaffinity)
CALL(sys_sched_getaffinity)
CALL(sys_io_setup)
CALL(sys_io_destroy)
/* 245 */ CALL(sys_io_getevents)
CALL(sys_io_submit)
CALL(sys_io_cancel)
CALL(sys_exit_group)
CALL(sys_lookup_dcookie)
/* 250 */ CALL(sys_epoll_create)
CALL(ABI(sys_epoll_ctl, sys_oabi_epoll_ctl))
CALL(ABI(sys_epoll_wait, sys_oabi_epoll_wait))
CALL(sys_remap_file_pages)
CALL(sys_ni_syscall) /* sys_set_thread_area */
/* 255 */ CALL(sys_ni_syscall) /* sys_get_thread_area */
CALL(sys_set_tid_address)
CALL(sys_timer_create)
CALL(sys_timer_settime)
CALL(sys_timer_gettime)
/* 260 */ CALL(sys_timer_getoverrun)
CALL(sys_timer_delete)
CALL(sys_clock_settime)
CALL(sys_clock_gettime)
CALL(sys_clock_getres)
/* 265 */ CALL(sys_clock_nanosleep)
CALL(sys_statfs64_wrapper)
CALL(sys_fstatfs64_wrapper)
CALL(sys_tgkill)
CALL(sys_utimes)
/* 270 */ CALL(sys_arm_fadvise64_64)
CALL(sys_pciconfig_iobase)
CALL(sys_pciconfig_read)
CALL(sys_pciconfig_write)
CALL(sys_mq_open)
/* 275 */ CALL(sys_mq_unlink)
CALL(sys_mq_timedsend)
CALL(sys_mq_timedreceive)
CALL(sys_mq_notify)
CALL(sys_mq_getsetattr)
/* 280 */ CALL(sys_waitid)
CALL(sys_socket)
CALL(sys_bind)
CALL(sys_connect)
CALL(sys_listen)
/* 285 */ CALL(sys_accept)
CALL(sys_getsockname)
CALL(sys_getpeername)
CALL(sys_socketpair)
CALL(sys_send)
/* 290 */ CALL(sys_sendto)
CALL(sys_recv)
CALL(sys_recvfrom)
CALL(sys_shutdown)
CALL(sys_setsockopt)
/* 295 */ CALL(sys_getsockopt)
CALL(sys_sendmsg)
CALL(sys_recvmsg)
CALL(ABI(sys_semop, sys_oabi_semop))
CALL(sys_semget)
/* 300 */ CALL(sys_semctl)
CALL(sys_msgsnd)
CALL(sys_msgrcv)
CALL(sys_msgget)
CALL(sys_msgctl)
/* 305 */ CALL(sys_shmat)
CALL(sys_shmdt)
CALL(sys_shmget)
CALL(sys_shmctl)
CALL(sys_add_key)
/* 310 */ CALL(sys_request_key)
CALL(sys_keyctl)
CALL(ABI(sys_semtimedop, sys_oabi_semtimedop))
/* vserver */ CALL(sys_ni_syscall)
CALL(sys_ioprio_set)
/* 315 */ CALL(sys_ioprio_get)
CALL(sys_inotify_init)
CALL(sys_inotify_add_watch)
CALL(sys_inotify_rm_watch)
CALL(sys_mbind)
/* 320 */ CALL(sys_get_mempolicy)
CALL(sys_set_mempolicy)
#ifndef syscalls_counted
.equ syscalls_padding, ((NR_syscalls + 3) & ~3) - NR_syscalls
#define syscalls_counted
#endif
.rept syscalls_padding
CALL(sys_ni_syscall)
.endr

View File

@ -87,7 +87,11 @@ ENTRY(ret_from_fork)
b ret_slow_syscall
.equ NR_syscalls,0
#define CALL(x) .equ NR_syscalls,NR_syscalls+1
#include "calls.S"
#undef CALL
#define CALL(x) .long x
/*=============================================================================
* SWI handler

View File

@ -469,7 +469,9 @@ static void cp_clcd_enable(struct clcd_fb *fb)
if (fb->fb.var.bits_per_pixel <= 8)
val = CM_CTRL_LCDMUXSEL_VGA_8421BPP;
else if (fb->fb.var.bits_per_pixel <= 16)
val = CM_CTRL_LCDMUXSEL_VGA_16BPP;
val = CM_CTRL_LCDMUXSEL_VGA_16BPP
| CM_CTRL_LCDEN0 | CM_CTRL_LCDEN1
| CM_CTRL_STATIC1 | CM_CTRL_STATIC2;
else
val = 0; /* no idea for this, don't trust the docs */

View File

@ -17,11 +17,12 @@
* 14-Jan-2005 BJD Added s3c24xx_init_clocks() call
* 10-Mar-2005 LCVR Changed S3C2410_{VA,SZ} to S3C24XX_{VA,SZ} & IODESC_ENT
* 14-Mar-2005 BJD Updated for __iomem
* 15-Jan-2006 LCVR Updated S3C2410_PA_##x to new S3C24XX_PA_##x macro
*/
/* todo - fix when rmk changes iodescs to use `void __iomem *` */
#define IODESC_ENT(x) { (unsigned long)S3C24XX_VA_##x, __phys_to_pfn(S3C2410_PA_##x), S3C24XX_SZ_##x, MT_DEVICE }
#define IODESC_ENT(x) { (unsigned long)S3C24XX_VA_##x, __phys_to_pfn(S3C24XX_PA_##x), S3C24XX_SZ_##x, MT_DEVICE }
#ifndef MHZ
#define MHZ (1000*1000)

View File

@ -10,6 +10,7 @@
* published by the Free Software Foundation.
*
* Modifications:
* 15-Jan-2006 LCVR Using S3C24XX_PA_##x macro for common S3C24XX devices
* 10-Mar-2005 LCVR Changed S3C2410_{VA,SZ} to S3C24XX_{VA,SZ}
* 10-Feb-2005 BJD Added camera from guillaume.gourat@nexvision.tv
* 29-Aug-2004 BJD Added timers 0 through 3
@ -46,8 +47,8 @@ struct platform_device *s3c24xx_uart_devs[3];
static struct resource s3c_usb_resource[] = {
[0] = {
.start = S3C2410_PA_USBHOST,
.end = S3C2410_PA_USBHOST + S3C24XX_SZ_USBHOST - 1,
.start = S3C24XX_PA_USBHOST,
.end = S3C24XX_PA_USBHOST + S3C24XX_SZ_USBHOST - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -76,8 +77,8 @@ EXPORT_SYMBOL(s3c_device_usb);
static struct resource s3c_lcd_resource[] = {
[0] = {
.start = S3C2410_PA_LCD,
.end = S3C2410_PA_LCD + S3C24XX_SZ_LCD - 1,
.start = S3C24XX_PA_LCD,
.end = S3C24XX_PA_LCD + S3C24XX_SZ_LCD - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -139,8 +140,8 @@ EXPORT_SYMBOL(s3c_device_nand);
static struct resource s3c_usbgadget_resource[] = {
[0] = {
.start = S3C2410_PA_USBDEV,
.end = S3C2410_PA_USBDEV + S3C24XX_SZ_USBDEV - 1,
.start = S3C24XX_PA_USBDEV,
.end = S3C24XX_PA_USBDEV + S3C24XX_SZ_USBDEV - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -164,8 +165,8 @@ EXPORT_SYMBOL(s3c_device_usbgadget);
static struct resource s3c_wdt_resource[] = {
[0] = {
.start = S3C2410_PA_WATCHDOG,
.end = S3C2410_PA_WATCHDOG + S3C24XX_SZ_WATCHDOG - 1,
.start = S3C24XX_PA_WATCHDOG,
.end = S3C24XX_PA_WATCHDOG + S3C24XX_SZ_WATCHDOG - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -189,8 +190,8 @@ EXPORT_SYMBOL(s3c_device_wdt);
static struct resource s3c_i2c_resource[] = {
[0] = {
.start = S3C2410_PA_IIC,
.end = S3C2410_PA_IIC + S3C24XX_SZ_IIC - 1,
.start = S3C24XX_PA_IIC,
.end = S3C24XX_PA_IIC + S3C24XX_SZ_IIC - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -214,8 +215,8 @@ EXPORT_SYMBOL(s3c_device_i2c);
static struct resource s3c_iis_resource[] = {
[0] = {
.start = S3C2410_PA_IIS,
.end = S3C2410_PA_IIS + S3C24XX_SZ_IIS -1,
.start = S3C24XX_PA_IIS,
.end = S3C24XX_PA_IIS + S3C24XX_SZ_IIS -1,
.flags = IORESOURCE_MEM,
}
};
@ -239,8 +240,8 @@ EXPORT_SYMBOL(s3c_device_iis);
static struct resource s3c_rtc_resource[] = {
[0] = {
.start = S3C2410_PA_RTC,
.end = S3C2410_PA_RTC + 0xff,
.start = S3C24XX_PA_RTC,
.end = S3C24XX_PA_RTC + 0xff,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -268,8 +269,8 @@ EXPORT_SYMBOL(s3c_device_rtc);
static struct resource s3c_adc_resource[] = {
[0] = {
.start = S3C2410_PA_ADC,
.end = S3C2410_PA_ADC + S3C24XX_SZ_ADC - 1,
.start = S3C24XX_PA_ADC,
.end = S3C24XX_PA_ADC + S3C24XX_SZ_ADC - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -316,8 +317,8 @@ EXPORT_SYMBOL(s3c_device_sdi);
static struct resource s3c_spi0_resource[] = {
[0] = {
.start = S3C2410_PA_SPI,
.end = S3C2410_PA_SPI + 0x1f,
.start = S3C24XX_PA_SPI,
.end = S3C24XX_PA_SPI + 0x1f,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -341,8 +342,8 @@ EXPORT_SYMBOL(s3c_device_spi0);
static struct resource s3c_spi1_resource[] = {
[0] = {
.start = S3C2410_PA_SPI + 0x20,
.end = S3C2410_PA_SPI + 0x20 + 0x1f,
.start = S3C24XX_PA_SPI + 0x20,
.end = S3C24XX_PA_SPI + 0x20 + 0x1f,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -366,8 +367,8 @@ EXPORT_SYMBOL(s3c_device_spi1);
static struct resource s3c_timer0_resource[] = {
[0] = {
.start = S3C2410_PA_TIMER + 0x0C,
.end = S3C2410_PA_TIMER + 0x0C + 0xB,
.start = S3C24XX_PA_TIMER + 0x0C,
.end = S3C24XX_PA_TIMER + 0x0C + 0xB,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -391,8 +392,8 @@ EXPORT_SYMBOL(s3c_device_timer0);
static struct resource s3c_timer1_resource[] = {
[0] = {
.start = S3C2410_PA_TIMER + 0x18,
.end = S3C2410_PA_TIMER + 0x23,
.start = S3C24XX_PA_TIMER + 0x18,
.end = S3C24XX_PA_TIMER + 0x23,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -416,8 +417,8 @@ EXPORT_SYMBOL(s3c_device_timer1);
static struct resource s3c_timer2_resource[] = {
[0] = {
.start = S3C2410_PA_TIMER + 0x24,
.end = S3C2410_PA_TIMER + 0x2F,
.start = S3C24XX_PA_TIMER + 0x24,
.end = S3C24XX_PA_TIMER + 0x2F,
.flags = IORESOURCE_MEM,
},
[1] = {
@ -441,8 +442,8 @@ EXPORT_SYMBOL(s3c_device_timer2);
static struct resource s3c_timer3_resource[] = {
[0] = {
.start = S3C2410_PA_TIMER + 0x30,
.end = S3C2410_PA_TIMER + 0x3B,
.start = S3C24XX_PA_TIMER + 0x30,
.end = S3C24XX_PA_TIMER + 0x3B,
.flags = IORESOURCE_MEM,
},
[1] = {

View File

@ -1152,7 +1152,7 @@ static int __init s3c2410_init_dma(void)
printk("S3C2410 DMA Driver, (c) 2003-2004 Simtec Electronics\n");
dma_base = ioremap(S3C2410_PA_DMA, 0x200);
dma_base = ioremap(S3C24XX_PA_DMA, 0x200);
if (dma_base == NULL) {
printk(KERN_ERR "dma failed to remap register block\n");
return -ENOMEM;

View File

@ -133,12 +133,12 @@ ENTRY(s3c2410_cpu_resume)
@@ load UART to allow us to print the two characters for
@@ resume debug
mov r2, #S3C2410_PA_UART & 0xff000000
orr r2, r2, #S3C2410_PA_UART & 0xff000
mov r2, #S3C24XX_PA_UART & 0xff000000
orr r2, r2, #S3C24XX_PA_UART & 0xff000
#if 0
/* SMDK2440 LED set */
mov r14, #S3C2410_PA_GPIO
mov r14, #S3C24XX_PA_GPIO
ldr r12, [ r14, #0x54 ]
bic r12, r12, #3<<4
orr r12, r12, #1<<7

View File

@ -142,7 +142,7 @@ __ioremap_pfn(unsigned long pfn, unsigned long offset, size_t size,
return NULL;
addr = (unsigned long)area->addr;
if (remap_area_pages(addr, pfn, size, flags)) {
vfree(addr);
vfree((void *)addr);
return NULL;
}
return (void __iomem *) (offset + (char *)addr);

View File

@ -343,6 +343,12 @@ static struct mem_types mem_types[] __initdata = {
PMD_SECT_AP_WRITE | PMD_SECT_BUFFERABLE |
PMD_SECT_TEX(1),
.domain = DOMAIN_IO,
},
[MT_NONSHARED_DEVICE] = {
.prot_l1 = PMD_TYPE_TABLE,
.prot_sect = PMD_TYPE_SECT | PMD_SECT_NONSHARED_DEV |
PMD_SECT_AP_WRITE,
.domain = DOMAIN_IO,
}
};

View File

@ -53,14 +53,14 @@ config GENERIC_ISA_DMA
config ARCH_MAY_HAVE_PC_FDC
bool
default y
source "init/Kconfig"
menu "System Type"
comment "Archimedes/A5000 Implementations (select only ONE)"
choice
prompt "Archimedes/A5000 Implementations"
config ARCH_ARC
bool "Archimedes"
@ -73,6 +73,7 @@ config ARCH_ARC
config ARCH_A5K
bool "A5000"
select ARCH_MAY_HAVE_PC_FDC
help
Say Y here to support the Acorn A5000.
@ -87,6 +88,7 @@ config PAGESIZE_16
Say Y here if your Archimedes or A5000 system has only 2MB of
memory, otherwise say N. The resulting kernel will not run on a
machine with 4MB of memory.
endchoice
endmenu
config ISA_DMA_API

View File

@ -104,14 +104,14 @@ void set_fiq_regs(struct pt_regs *regs)
{
register unsigned long tmp, tmp2;
__asm__ volatile (
"mov %0, pc
bic %1, %0, #0x3
orr %1, %1, %3
teqp %1, #0 @ select FIQ mode
mov r0, r0
ldmia %2, {r8 - r14}
teqp %0, #0 @ return to SVC mode
mov r0, r0"
"mov %0, pc \n"
"bic %1, %0, #0x3 \n"
"orr %1, %1, %3 \n"
"teqp %1, #0 @ select FIQ mode \n"
"mov r0, r0 \n"
"ldmia %2, {r8 - r14} \n"
"teqp %0, #0 @ return to SVC mode \n"
"mov r0, r0 "
: "=&r" (tmp), "=&r" (tmp2)
: "r" (&regs->ARM_r8), "I" (PSR_I_BIT | PSR_F_BIT | MODE_FIQ26)
/* These registers aren't modified by the above code in a way
@ -125,14 +125,14 @@ void get_fiq_regs(struct pt_regs *regs)
{
register unsigned long tmp, tmp2;
__asm__ volatile (
"mov %0, pc
bic %1, %0, #0x3
orr %1, %1, %3
teqp %1, #0 @ select FIQ mode
mov r0, r0
stmia %2, {r8 - r14}
teqp %0, #0 @ return to SVC mode
mov r0, r0"
"mov %0, pc \n"
"bic %1, %0, #0x3 \n"
"orr %1, %1, %3 \n"
"teqp %1, #0 @ select FIQ mode \n"
"mov r0, r0 \n"
"stmia %2, {r8 - r14} \n"
"teqp %0, #0 @ return to SVC mode \n"
"mov r0, r0 "
: "=&r" (tmp), "=&r" (tmp2)
: "r" (&regs->ARM_r8), "I" (PSR_I_BIT | PSR_F_BIT | MODE_FIQ26)
/* These registers aren't modified by the above code in a way

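The two asm blocks above are fixed by giving every instruction its own string literal ending in "\n": adjacent C string literals are concatenated, so without explicit newlines the whole template reaches the assembler as one line. A small stand-alone illustration of that concatenation rule (plain strings only, no real FIQ code):

#include <stdio.h>

int main(void)
{
        /* Adjacent literals are glued together; only an embedded "\n"
         * splits the result into separate (assembler) lines. */
        const char *joined   = "mov r0, r1 "   "add r0, r0, r0";
        const char *separate = "mov r0, r1 \n" "add r0, r0, r0";

        puts(joined);   /* prints one line  */
        puts(separate); /* prints two lines */
        return 0;
}
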
View File

@ -480,6 +480,7 @@ static int do_signal(sigset_t *oldset, struct pt_regs *regs, int syscall)
{
siginfo_t info;
int signr;
struct k_sigaction ka;
/*
* We want the common case to go fast, which
@ -493,7 +494,7 @@ static int do_signal(sigset_t *oldset, struct pt_regs *regs, int syscall)
if (current->ptrace & PT_SINGLESTEP)
ptrace_cancel_bpt(current);
signr = get_signal_to_deliver(&info, regs, NULL);
signr = get_signal_to_deliver(&info, &ka, regs, NULL);
if (signr > 0) {
handle_signal(signr, &info, oldset, regs, syscall);
if (current->ptrace & PT_SINGLESTEP)

View File

@ -47,15 +47,6 @@ config DMI
source "init/Kconfig"
config DOUBLEFAULT
default y
bool "Enable doublefault exception handler" if EMBEDDED
help
This option allows trapping of rare doublefault exceptions that
would otherwise cause a system to silently reboot. Disabling this
option saves about 4k and might cause you much additional grey
hair.
menu "Processor type and features"
choice
@ -457,6 +448,43 @@ config HIGHMEM64G
endchoice
choice
depends on EXPERIMENTAL && !X86_PAE
prompt "Memory split"
default VMSPLIT_3G
help
Select the desired split between kernel and user memory.
If the address range available to the kernel is less than the
physical memory installed, the remaining memory will be available
as "high memory". Accessing high memory is a little more costly
than low memory, as it needs to be mapped into the kernel first.
Note that increasing the kernel address space limits the range
available to user programs, making the address space there
tighter. Selecting anything other than the default 3G/1G split
will also likely make your kernel incompatible with binary-only
kernel modules.
If you are not absolutely sure what you are doing, leave this
option alone!
config VMSPLIT_3G
bool "3G/1G user/kernel split"
config VMSPLIT_3G_OPT
bool "3G/1G user/kernel split (for full 1G low memory)"
config VMSPLIT_2G
bool "2G/2G user/kernel split"
config VMSPLIT_1G
bool "1G/3G user/kernel split"
endchoice
config PAGE_OFFSET
hex
default 0xB0000000 if VMSPLIT_3G_OPT
default 0x78000000 if VMSPLIT_2G
default 0x40000000 if VMSPLIT_1G
default 0xC0000000
config HIGHMEM
bool
depends on HIGHMEM64G || HIGHMEM4G
@ -711,6 +739,15 @@ config HOTPLUG_CPU
Say N.
config DOUBLEFAULT
default y
bool "Enable doublefault exception handler" if EMBEDDED
help
This option allows trapping of rare doublefault exceptions that
would otherwise cause a system to silently reboot. Disabling this
option saves about 4k and might cause you much additional grey
hair.
endmenu
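
The new VMSPLIT choice selects PAGE_OFFSET, and PAGE_OFFSET in turn bounds how much RAM can be mapped directly ("low memory"). A rough back-of-the-envelope sketch of that relationship; the 128MB vmalloc/fixmap reserve used below is an assumption for illustration, not a value taken from this patch:

#include <stdio.h>

int main(void)
{
        const struct { const char *split; unsigned long page_offset; } opts[] = {
                { "VMSPLIT_3G     (3G/1G)", 0xC0000000UL },
                { "VMSPLIT_3G_OPT (3G/1G)", 0xB0000000UL },
                { "VMSPLIT_2G     (2G/2G)", 0x78000000UL },
                { "VMSPLIT_1G     (1G/3G)", 0x40000000UL },
        };
        const unsigned long long four_gb = 1ULL << 32;
        const unsigned long long reserve = 128ULL << 20;        /* assumed */

        for (unsigned i = 0; i < sizeof(opts) / sizeof(opts[0]); i++) {
                unsigned long long lowmem =
                        four_gb - opts[i].page_offset - reserve;
                printf("%s PAGE_OFFSET=0x%08lx  lowmem ~= %llu MB\n",
                       opts[i].split, opts[i].page_offset, lowmem >> 20);
        }
        return 0;
}

For the default 3G/1G split this gives the familiar ~896MB of low memory; the other splits trade user address space for more directly mapped kernel memory.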

View File

@ -3,6 +3,6 @@ obj-$(CONFIG_X86_IO_APIC) += earlyquirk.o
obj-$(CONFIG_ACPI_SLEEP) += sleep.o wakeup.o
ifneq ($(CONFIG_ACPI_PROCESSOR),)
obj-y += cstate.o
obj-y += cstate.o processor.o
endif

View File

@ -464,7 +464,7 @@ int acpi_gsi_to_irq(u32 gsi, unsigned int *irq)
* success: return IRQ number (>=0)
* failure: return < 0
*/
int acpi_register_gsi(u32 gsi, int edge_level, int active_high_low)
int acpi_register_gsi(u32 gsi, int triggering, int polarity)
{
unsigned int irq;
unsigned int plat_gsi = gsi;
@ -476,14 +476,14 @@ int acpi_register_gsi(u32 gsi, int edge_level, int active_high_low)
if (acpi_irq_model == ACPI_IRQ_MODEL_PIC) {
extern void eisa_set_level_irq(unsigned int irq);
if (edge_level == ACPI_LEVEL_SENSITIVE)
if (triggering == ACPI_LEVEL_SENSITIVE)
eisa_set_level_irq(gsi);
}
#endif
#ifdef CONFIG_X86_IO_APIC
if (acpi_irq_model == ACPI_IRQ_MODEL_IOAPIC) {
plat_gsi = mp_register_gsi(gsi, edge_level, active_high_low);
plat_gsi = mp_register_gsi(gsi, triggering, polarity);
}
#endif
acpi_gsi_to_irq(plat_gsi, &irq);

View File

@ -14,64 +14,6 @@
#include <acpi/processor.h>
#include <asm/acpi.h>
static void acpi_processor_power_init_intel_pdc(struct acpi_processor_power
*pow)
{
struct acpi_object_list *obj_list;
union acpi_object *obj;
u32 *buf;
/* allocate and initialize pdc. It will be used later. */
obj_list = kmalloc(sizeof(struct acpi_object_list), GFP_KERNEL);
if (!obj_list) {
printk(KERN_ERR "Memory allocation error\n");
return;
}
obj = kmalloc(sizeof(union acpi_object), GFP_KERNEL);
if (!obj) {
printk(KERN_ERR "Memory allocation error\n");
kfree(obj_list);
return;
}
buf = kmalloc(12, GFP_KERNEL);
if (!buf) {
printk(KERN_ERR "Memory allocation error\n");
kfree(obj);
kfree(obj_list);
return;
}
buf[0] = ACPI_PDC_REVISION_ID;
buf[1] = 1;
buf[2] = ACPI_PDC_C_CAPABILITY_SMP;
obj->type = ACPI_TYPE_BUFFER;
obj->buffer.length = 12;
obj->buffer.pointer = (u8 *) buf;
obj_list->count = 1;
obj_list->pointer = obj;
pow->pdc = obj_list;
return;
}
/* Initialize _PDC data based on the CPU vendor */
void acpi_processor_power_init_pdc(struct acpi_processor_power *pow,
unsigned int cpu)
{
struct cpuinfo_x86 *c = cpu_data + cpu;
pow->pdc = NULL;
if (c->x86_vendor == X86_VENDOR_INTEL)
acpi_processor_power_init_intel_pdc(pow);
return;
}
EXPORT_SYMBOL(acpi_processor_power_init_pdc);
/*
* Initialize bm_flags based on the CPU cache properties
* On SMP it depends on cache configuration

View File

@ -0,0 +1,75 @@
/*
* arch/i386/kernel/acpi/processor.c
*
* Copyright (C) 2005 Intel Corporation
* Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
* - Added _PDC for platforms with Intel CPUs
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/acpi.h>
#include <acpi/processor.h>
#include <asm/acpi.h>
static void init_intel_pdc(struct acpi_processor *pr, struct cpuinfo_x86 *c)
{
struct acpi_object_list *obj_list;
union acpi_object *obj;
u32 *buf;
/* allocate and initialize pdc. It will be used later. */
obj_list = kmalloc(sizeof(struct acpi_object_list), GFP_KERNEL);
if (!obj_list) {
printk(KERN_ERR "Memory allocation error\n");
return;
}
obj = kmalloc(sizeof(union acpi_object), GFP_KERNEL);
if (!obj) {
printk(KERN_ERR "Memory allocation error\n");
kfree(obj_list);
return;
}
buf = kmalloc(12, GFP_KERNEL);
if (!buf) {
printk(KERN_ERR "Memory allocation error\n");
kfree(obj);
kfree(obj_list);
return;
}
buf[0] = ACPI_PDC_REVISION_ID;
buf[1] = 1;
buf[2] = ACPI_PDC_C_CAPABILITY_SMP;
if (cpu_has(c, X86_FEATURE_EST))
buf[2] |= ACPI_PDC_EST_CAPABILITY_SMP;
obj->type = ACPI_TYPE_BUFFER;
obj->buffer.length = 12;
obj->buffer.pointer = (u8 *) buf;
obj_list->count = 1;
obj_list->pointer = obj;
pr->pdc = obj_list;
return;
}
/* Initialize _PDC data based on the CPU vendor */
void arch_acpi_processor_init_pdc(struct acpi_processor *pr)
{
unsigned int cpu = pr->id;
struct cpuinfo_x86 *c = cpu_data + cpu;
pr->pdc = NULL;
if (c->x86_vendor == X86_VENDOR_INTEL)
init_intel_pdc(pr, c);
return;
}
EXPORT_SYMBOL(arch_acpi_processor_init_pdc);

View File

@ -405,10 +405,6 @@ static void __init init_centaur(struct cpuinfo_x86 *c)
winchip2_protect_mcr();
#endif
break;
case 10:
name="4";
/* no info on the WC4 yet */
break;
default:
name="??";
}

View File

@ -96,6 +96,7 @@ config X86_POWERNOW_K8_ACPI
config X86_GX_SUSPMOD
tristate "Cyrix MediaGX/NatSemi Geode Suspend Modulation"
depends on PCI
help
This adds the CPUFreq driver for NatSemi Geode processors which
support suspend modulation.

View File

@ -295,68 +295,6 @@ acpi_cpufreq_guess_freq (
}
/*
* acpi_processor_cpu_init_pdc_est - let BIOS know about the SMP capabilities
* of this driver
* @perf: processor-specific acpi_io_data struct
* @cpu: CPU being initialized
*
* To avoid issues with legacy OSes, some BIOSes require to be informed of
* the SMP capabilities of OS P-state driver. Here we set the bits in _PDC
* accordingly, for Enhanced Speedstep. Actual call to _PDC is done in
* driver/acpi/processor.c
*/
static void
acpi_processor_cpu_init_pdc_est(
struct acpi_processor_performance *perf,
unsigned int cpu,
struct acpi_object_list *obj_list
)
{
union acpi_object *obj;
u32 *buf;
struct cpuinfo_x86 *c = cpu_data + cpu;
dprintk("acpi_processor_cpu_init_pdc_est\n");
if (!cpu_has(c, X86_FEATURE_EST))
return;
/* Initialize pdc. It will be used later. */
if (!obj_list)
return;
if (!(obj_list->count && obj_list->pointer))
return;
obj = obj_list->pointer;
if ((obj->buffer.length == 12) && obj->buffer.pointer) {
buf = (u32 *)obj->buffer.pointer;
buf[0] = ACPI_PDC_REVISION_ID;
buf[1] = 1;
buf[2] = ACPI_PDC_EST_CAPABILITY_SMP;
perf->pdc = obj_list;
}
return;
}
/* CPU specific PDC initialization */
static void
acpi_processor_cpu_init_pdc(
struct acpi_processor_performance *perf,
unsigned int cpu,
struct acpi_object_list *obj_list
)
{
struct cpuinfo_x86 *c = cpu_data + cpu;
dprintk("acpi_processor_cpu_init_pdc\n");
perf->pdc = NULL;
if (cpu_has(c, X86_FEATURE_EST))
acpi_processor_cpu_init_pdc_est(perf, cpu, obj_list);
return;
}
static int
acpi_cpufreq_cpu_init (
struct cpufreq_policy *policy)
@ -367,14 +305,7 @@ acpi_cpufreq_cpu_init (
unsigned int result = 0;
struct cpuinfo_x86 *c = &cpu_data[policy->cpu];
union acpi_object arg0 = {ACPI_TYPE_BUFFER};
u32 arg0_buf[3];
struct acpi_object_list arg_list = {1, &arg0};
dprintk("acpi_cpufreq_cpu_init\n");
/* setup arg_list for _PDC settings */
arg0.buffer.length = 12;
arg0.buffer.pointer = (u8 *) arg0_buf;
data = kzalloc(sizeof(struct cpufreq_acpi_io), GFP_KERNEL);
if (!data)
@ -382,9 +313,7 @@ acpi_cpufreq_cpu_init (
acpi_io_data[cpu] = data;
acpi_processor_cpu_init_pdc(&data->acpi_data, cpu, &arg_list);
result = acpi_processor_register_performance(&data->acpi_data, cpu);
data->acpi_data.pdc = NULL;
if (result)
goto err_free;

View File

@ -52,6 +52,7 @@ enum {
static int has_N44_O17_errata[NR_CPUS];
static int has_N60_errata[NR_CPUS];
static unsigned int stock_freq;
static struct cpufreq_driver p4clockmod_driver;
static unsigned int cpufreq_p4_get(unsigned int cpu);
@ -226,6 +227,12 @@ static int cpufreq_p4_cpu_init(struct cpufreq_policy *policy)
case 0x0f12:
has_N44_O17_errata[policy->cpu] = 1;
dprintk("has errata -- disabling low frequencies\n");
break;
case 0x0f29:
has_N60_errata[policy->cpu] = 1;
dprintk("has errata -- disabling frequencies lower than 2ghz\n");
break;
}
/* get max frequency */
@ -237,6 +244,8 @@ static int cpufreq_p4_cpu_init(struct cpufreq_policy *policy)
for (i=1; (p4clockmod_table[i].frequency != CPUFREQ_TABLE_END); i++) {
if ((i<2) && (has_N44_O17_errata[policy->cpu]))
p4clockmod_table[i].frequency = CPUFREQ_ENTRY_INVALID;
else if (has_N60_errata[policy->cpu] && p4clockmod_table[i].frequency < 2000000)
p4clockmod_table[i].frequency = CPUFREQ_ENTRY_INVALID;
else
p4clockmod_table[i].frequency = (stock_freq * i)/8;
}

View File

@ -362,22 +362,10 @@ static struct acpi_processor_performance p;
*/
static int centrino_cpu_init_acpi(struct cpufreq_policy *policy)
{
union acpi_object arg0 = {ACPI_TYPE_BUFFER};
u32 arg0_buf[3];
struct acpi_object_list arg_list = {1, &arg0};
unsigned long cur_freq;
int result = 0, i;
unsigned int cpu = policy->cpu;
/* _PDC settings */
arg0.buffer.length = 12;
arg0.buffer.pointer = (u8 *) arg0_buf;
arg0_buf[0] = ACPI_PDC_REVISION_ID;
arg0_buf[1] = 1;
arg0_buf[2] = ACPI_PDC_EST_CAPABILITY_SMP_MSR;
p.pdc = &arg_list;
/* register with ACPI core */
if (acpi_processor_register_performance(&p, cpu)) {
dprintk(KERN_INFO PFX "obtaining ACPI data failed\n");

View File

@ -43,13 +43,23 @@ static struct _cache_table cache_table[] __cpuinitdata =
{ 0x2c, LVL_1_DATA, 32 }, /* 8-way set assoc, 64 byte line size */
{ 0x30, LVL_1_INST, 32 }, /* 8-way set assoc, 64 byte line size */
{ 0x39, LVL_2, 128 }, /* 4-way set assoc, sectored cache, 64 byte line size */
{ 0x3a, LVL_2, 192 }, /* 6-way set assoc, sectored cache, 64 byte line size */
{ 0x3b, LVL_2, 128 }, /* 2-way set assoc, sectored cache, 64 byte line size */
{ 0x3c, LVL_2, 256 }, /* 4-way set assoc, sectored cache, 64 byte line size */
{ 0x3d, LVL_2, 384 }, /* 6-way set assoc, sectored cache, 64 byte line size */
{ 0x3e, LVL_2, 512 }, /* 4-way set assoc, sectored cache, 64 byte line size */
{ 0x41, LVL_2, 128 }, /* 4-way set assoc, 32 byte line size */
{ 0x42, LVL_2, 256 }, /* 4-way set assoc, 32 byte line size */
{ 0x43, LVL_2, 512 }, /* 4-way set assoc, 32 byte line size */
{ 0x44, LVL_2, 1024 }, /* 4-way set assoc, 32 byte line size */
{ 0x45, LVL_2, 2048 }, /* 4-way set assoc, 32 byte line size */
{ 0x46, LVL_3, 4096 }, /* 4-way set assoc, 64 byte line size */
{ 0x47, LVL_3, 8192 }, /* 8-way set assoc, 64 byte line size */
{ 0x49, LVL_3, 4096 }, /* 16-way set assoc, 64 byte line size */
{ 0x4a, LVL_3, 6144 }, /* 12-way set assoc, 64 byte line size */
{ 0x4b, LVL_3, 8192 }, /* 16-way set assoc, 64 byte line size */
{ 0x4c, LVL_3, 12288 }, /* 12-way set assoc, 64 byte line size */
{ 0x4d, LVL_3, 16384 }, /* 16-way set assoc, 64 byte line size */
{ 0x60, LVL_1_DATA, 16 }, /* 8-way set assoc, sectored cache, 64 byte line size */
{ 0x66, LVL_1_DATA, 8 }, /* 4-way set assoc, sectored cache, 64 byte line size */
{ 0x67, LVL_1_DATA, 16 }, /* 4-way set assoc, sectored cache, 64 byte line size */
@ -57,6 +67,7 @@ static struct _cache_table cache_table[] __cpuinitdata =
{ 0x70, LVL_TRACE, 12 }, /* 8-way set assoc */
{ 0x71, LVL_TRACE, 16 }, /* 8-way set assoc */
{ 0x72, LVL_TRACE, 32 }, /* 8-way set assoc */
{ 0x73, LVL_TRACE, 64 }, /* 8-way set assoc */
{ 0x78, LVL_2, 1024 }, /* 4-way set assoc, 64 byte line size */
{ 0x79, LVL_2, 128 }, /* 8-way set assoc, sectored cache, 64 byte line size */
{ 0x7a, LVL_2, 256 }, /* 8-way set assoc, sectored cache, 64 byte line size */

View File

@ -44,12 +44,10 @@
#include <asm/msr.h>
#include "mtrr.h"
#define MTRR_VERSION "2.0 (20020519)"
u32 num_var_ranges = 0;
unsigned int *usage_table;
static DECLARE_MUTEX(main_lock);
static DECLARE_MUTEX(mtrr_sem);
u32 size_or_mask, size_and_mask;
@ -335,7 +333,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
/* No CPU hotplug when we change MTRR entries */
lock_cpu_hotplug();
/* Search for existing MTRR */
down(&main_lock);
down(&mtrr_sem);
for (i = 0; i < num_var_ranges; ++i) {
mtrr_if->get(i, &lbase, &lsize, &ltype);
if (base >= lbase + lsize)
@ -373,7 +371,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
printk(KERN_INFO "mtrr: no more MTRRs available\n");
error = i;
out:
up(&main_lock);
up(&mtrr_sem);
unlock_cpu_hotplug();
return error;
}
@ -466,7 +464,7 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
max = num_var_ranges;
/* No CPU hotplug when we change MTRR entries */
lock_cpu_hotplug();
down(&main_lock);
down(&mtrr_sem);
if (reg < 0) {
/* Search for existing MTRR */
for (i = 0; i < max; ++i) {
@ -505,7 +503,7 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
set_mtrr(reg, 0, 0, 0);
error = reg;
out:
up(&main_lock);
up(&mtrr_sem);
unlock_cpu_hotplug();
return error;
}
@ -671,7 +669,6 @@ void __init mtrr_bp_init(void)
break;
}
}
printk(KERN_INFO "mtrr: v%s\n",MTRR_VERSION);
if (mtrr_if) {
set_num_var_ranges();
@ -688,7 +685,7 @@ void mtrr_ap_init(void)
if (!mtrr_if || !use_intel())
return;
/*
* Ideally we should hold main_lock here to avoid mtrr entries changed,
* Ideally we should hold mtrr_sem here to avoid mtrr entries changed,
* but this routine will be called in cpu boot time, holding the lock
breaks it. This routine is called in two cases: 1. very early time
* of software resume, when there absolutely isn't mtrr entry changes;

View File

@ -1080,7 +1080,7 @@ void __init mp_config_acpi_legacy_irqs (void)
#define MAX_GSI_NUM 4096
int mp_register_gsi (u32 gsi, int edge_level, int active_high_low)
int mp_register_gsi (u32 gsi, int triggering, int polarity)
{
int ioapic = -1;
int ioapic_pin = 0;
@ -1129,7 +1129,7 @@ int mp_register_gsi (u32 gsi, int edge_level, int active_high_low)
mp_ioapic_routing[ioapic].pin_programmed[idx] |= (1<<bit);
if (edge_level) {
if (triggering == ACPI_LEVEL_SENSITIVE) {
/*
* For PCI devices assign IRQs in order, avoiding gaps
* due to unused I/O APIC pins.
@ -1151,8 +1151,8 @@ int mp_register_gsi (u32 gsi, int edge_level, int active_high_low)
}
io_apic_set_pci_routing(ioapic, ioapic_pin, gsi,
edge_level == ACPI_EDGE_SENSITIVE ? 0 : 1,
active_high_low == ACPI_ACTIVE_HIGH ? 0 : 1);
triggering == ACPI_EDGE_SENSITIVE ? 0 : 1,
polarity == ACPI_ACTIVE_HIGH ? 0 : 1);
return gsi;
}

View File

@ -45,6 +45,15 @@ static unsigned long last_tsc_high; /* msb 32 bits of Time Stamp Counter */
static unsigned long long monotonic_base;
static seqlock_t monotonic_lock = SEQLOCK_UNLOCKED;
/* Avoid compensating for lost ticks before TSCs are synched */
static int detect_lost_ticks;
static int __init start_lost_tick_compensation(void)
{
detect_lost_ticks = 1;
return 0;
}
late_initcall(start_lost_tick_compensation);
/* convert from cycles(64bits) => nanoseconds (64bits)
* basic equation:
* ns = cycles / (freq / ns_per_sec)
@ -196,7 +205,8 @@ static void mark_offset_tsc_hpet(void)
/* lost tick compensation */
offset = hpet_readl(HPET_T0_CMP) - hpet_tick;
if (unlikely(((offset - hpet_last) > hpet_tick) && (hpet_last != 0))) {
if (unlikely(((offset - hpet_last) > hpet_tick) && (hpet_last != 0))
&& detect_lost_ticks) {
int lost_ticks = (offset - hpet_last) / hpet_tick;
jiffies_64 += lost_ticks;
}
@ -421,7 +431,7 @@ static void mark_offset_tsc(void)
delta += delay_at_last_interrupt;
lost = delta/(1000000/HZ);
delay = delta%(1000000/HZ);
if (lost >= 2) {
if (lost >= 2 && detect_lost_ticks) {
jiffies_64 += lost-1;
/* sanity check to ensure we're not always losing ticks */

View File
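The timer change above gates lost-tick compensation behind a flag that only a late_initcall() sets, so ticks that merely appear "lost" while the TSCs are still being synchronized at boot are not blindly added to jiffies. A stand-alone sketch of that gating pattern; detect_lost_ticks and the tick counts below are illustrative:

#include <stdio.h>

static int detect_lost_ticks;           /* 0 until "late init" has run */
static unsigned long long jiffies_64;

static void start_lost_tick_compensation(void)
{
        detect_lost_ticks = 1;          /* kernel does this via late_initcall() */
}

static void timer_tick(int lost)
{
        jiffies_64++;
        if (lost >= 2 && detect_lost_ticks)
                jiffies_64 += lost - 1; /* only compensate once enabled */
}

int main(void)
{
        timer_tick(5);                  /* early boot: no compensation */
        start_lost_tick_compensation();
        timer_tick(5);                  /* now the 4 missing ticks are added */
        printf("jiffies_64 = %llu\n", jiffies_64);      /* 1 + 1 + 4 = 6 */
        return 0;
}
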

@ -49,7 +49,9 @@ dump_backtrace(struct frame_head * head)
* | stack |
* --------------- saved regs->ebp value if valid (frame_head address)
* . .
* --------------- struct pt_regs stored on stack (struct pt_regs *)
* --------------- saved regs->rsp value if x86_64
* | |
* --------------- struct pt_regs * stored on stack if 32-bit
* | |
* . .
* | |
@ -57,13 +59,26 @@ dump_backtrace(struct frame_head * head)
* | |
* | | \/ Lower addresses
*
* Thus, &pt_regs <-> stack base restricts the valid(ish) ebp values
* Thus, regs (or regs->rsp for x86_64) <-> stack base restricts the
* valid(ish) ebp values. Note: (1) for x86_64, NMI and several other
* exceptions use special stacks, maintained by the interrupt stack table
* (IST). These stacks are set up in trap_init() in
* arch/x86_64/kernel/traps.c. Thus, for x86_64, regs now does not point
* to the kernel stack; instead, it points to some location on the NMI
* stack. On the other hand, regs->rsp is the stack pointer saved when the
* NMI occurred. (2) For 32-bit, regs->esp is not valid because the
* processor does not save %esp on the kernel stack when interrupts occur
* in the kernel mode.
*/
#ifdef CONFIG_FRAME_POINTER
static int valid_kernel_stack(struct frame_head * head, struct pt_regs * regs)
{
unsigned long headaddr = (unsigned long)head;
#ifdef CONFIG_X86_64
unsigned long stack = (unsigned long)regs->rsp;
#else
unsigned long stack = (unsigned long)regs;
#endif
unsigned long stack_base = (stack & ~(THREAD_SIZE - 1)) + THREAD_SIZE;
return headaddr > stack && headaddr < stack_base;

View File
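The backtrace fix above validates a candidate frame pointer by requiring it to lie between the saved stack pointer and the top of the THREAD_SIZE-aligned kernel stack. A worked example of that bound computation, with an assumed 8KB THREAD_SIZE and made-up addresses:

#include <stdio.h>

#define THREAD_SIZE 8192UL      /* assumed 8KB kernel stack */

static int valid_kernel_stack(unsigned long headaddr, unsigned long stack)
{
        unsigned long stack_base = (stack & ~(THREAD_SIZE - 1)) + THREAD_SIZE;

        return headaddr > stack && headaddr < stack_base;
}

int main(void)
{
        unsigned long stack = 0xc0311e40UL;     /* e.g. regs->rsp (or regs) */

        /* stack_base works out to 0xc0312000 for this stack pointer */
        printf("%d\n", valid_kernel_stack(0xc0311f00UL, stack)); /* 1: in range   */
        printf("%d\n", valid_kernel_stack(0xc0313000UL, stack)); /* 0: past base  */
        printf("%d\n", valid_kernel_stack(0xc0311d00UL, stack)); /* 0: below sp   */
        return 0;
}
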

@ -539,6 +539,11 @@ static __init int intel_router_probe(struct irq_router *r, struct pci_dev *route
case PCI_DEVICE_ID_INTEL_ICH7_30:
case PCI_DEVICE_ID_INTEL_ICH7_31:
case PCI_DEVICE_ID_INTEL_ESB2_0:
case PCI_DEVICE_ID_INTEL_ICH8_0:
case PCI_DEVICE_ID_INTEL_ICH8_1:
case PCI_DEVICE_ID_INTEL_ICH8_2:
case PCI_DEVICE_ID_INTEL_ICH8_3:
case PCI_DEVICE_ID_INTEL_ICH8_4:
r->name = "PIIX/ICH";
r->get = pirq_piix_get;
r->set = pirq_piix_set;

View File

@ -36,8 +36,7 @@ static u32 get_base_addr(unsigned int seg, int bus, unsigned devfn)
while (1) {
++cfg_num;
if (cfg_num >= pci_mmcfg_config_num) {
/* Not found - fallback to type 1 */
return 0;
break;
}
cfg = &pci_mmcfg_config[cfg_num];
if (cfg->pci_segment_group_number != seg)
@ -46,6 +45,18 @@ static u32 get_base_addr(unsigned int seg, int bus, unsigned devfn)
(cfg->end_bus_number >= bus))
return cfg->base_address;
}
/* Handle more broken MCFG tables on Asus etc.
They only contain a single entry for bus 0-0. Assume
this applies to all busses. */
cfg = &pci_mmcfg_config[0];
if (pci_mmcfg_config_num == 1 &&
cfg->pci_segment_group_number == 0 &&
(cfg->start_bus_number | cfg->end_bus_number) == 0)
return cfg->base_address;
/* Fall back to type 0 */
return 0;
}
static inline void pci_exp_set_dev_base(unsigned int base, int bus, int devfn)

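The mmconfig change above makes the MCFG lookup tolerate tables (seen on some Asus boards) that carry a single bus 0-0 entry: if no range matches but the table has exactly that one degenerate entry, it is assumed to cover every bus; otherwise 0 is returned and the caller falls back to legacy configuration cycles. A self-contained sketch of that lookup-plus-fallback, with a stubbed table and illustrative field names:

#include <stdio.h>

struct mcfg_entry {
        unsigned segment;
        unsigned start_bus, end_bus;
        unsigned long base_address;
};

static struct mcfg_entry mcfg[] = {
        /* Broken table as shipped by some BIOSes: one entry for bus 0-0. */
        { .segment = 0, .start_bus = 0, .end_bus = 0, .base_address = 0xe0000000UL },
};
static const int mcfg_num = sizeof(mcfg) / sizeof(mcfg[0]);

static unsigned long get_base_addr(unsigned seg, int bus)
{
        for (int i = 0; i < mcfg_num; i++)
                if (mcfg[i].segment == seg &&
                    mcfg[i].start_bus <= (unsigned)bus &&
                    mcfg[i].end_bus >= (unsigned)bus)
                        return mcfg[i].base_address;

        /* Work around single-entry bus 0-0 tables: assume the one entry
         * covers every bus. */
        if (mcfg_num == 1 && mcfg[0].segment == 0 &&
            (mcfg[0].start_bus | mcfg[0].end_bus) == 0)
                return mcfg[0].base_address;

        return 0;       /* caller falls back to legacy config access */
}

int main(void)
{
        printf("bus 3 base = %#lx\n", get_base_addr(0, 3));     /* 0xe0000000 */
        return 0;
}
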
View File

@ -52,9 +52,9 @@
#include <linux/compat.h>
#include <linux/vfs.h>
#include <linux/mman.h>
#include <linux/mutex.h>
#include <asm/intrinsics.h>
#include <asm/semaphore.h>
#include <asm/types.h>
#include <asm/uaccess.h>
#include <asm/unistd.h>
@ -86,7 +86,7 @@
* while doing so.
*/
/* XXX make per-mm: */
static DECLARE_MUTEX(ia32_mmap_sem);
static DEFINE_MUTEX(ia32_mmap_mutex);
asmlinkage long
sys32_execve (char __user *name, compat_uptr_t __user *argv, compat_uptr_t __user *envp,
@ -895,11 +895,11 @@ ia32_do_mmap (struct file *file, unsigned long addr, unsigned long len, int prot
prot = get_prot32(prot);
#if PAGE_SHIFT > IA32_PAGE_SHIFT
down(&ia32_mmap_sem);
mutex_lock(&ia32_mmap_mutex);
{
addr = emulate_mmap(file, addr, len, prot, flags, offset);
}
up(&ia32_mmap_sem);
mutex_unlock(&ia32_mmap_mutex);
#else
down_write(&current->mm->mmap_sem);
{
@ -1000,11 +1000,9 @@ sys32_munmap (unsigned int start, unsigned int len)
if (start >= end)
return 0;
down(&ia32_mmap_sem);
{
ret = sys_munmap(start, end - start);
}
up(&ia32_mmap_sem);
mutex_lock(&ia32_mmap_mutex);
ret = sys_munmap(start, end - start);
mutex_unlock(&ia32_mmap_mutex);
#endif
return ret;
}
@ -1056,7 +1054,7 @@ sys32_mprotect (unsigned int start, unsigned int len, int prot)
if (retval < 0)
return retval;
down(&ia32_mmap_sem);
mutex_lock(&ia32_mmap_mutex);
{
if (offset_in_page(start)) {
/* start address is 4KB aligned but not page aligned. */
@ -1080,7 +1078,7 @@ sys32_mprotect (unsigned int start, unsigned int len, int prot)
retval = sys_mprotect(start, end - start, prot);
}
out:
up(&ia32_mmap_sem);
mutex_unlock(&ia32_mmap_mutex);
return retval;
#endif
}
@ -1124,11 +1122,9 @@ sys32_mremap (unsigned int addr, unsigned int old_len, unsigned int new_len,
old_len = PAGE_ALIGN(old_end) - addr;
new_len = PAGE_ALIGN(new_end) - addr;
down(&ia32_mmap_sem);
{
ret = sys_mremap(addr, old_len, new_len, flags, new_addr);
}
up(&ia32_mmap_sem);
mutex_lock(&ia32_mmap_mutex);
ret = sys_mremap(addr, old_len, new_len, flags, new_addr);
mutex_unlock(&ia32_mmap_mutex);
if ((ret >= 0) && (old_len < new_len)) {
/* mremap expanded successfully */

View File
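This hunk (and several later ones in this commit) replaces binary semaphores that were only ever used for mutual exclusion with the mutex API: DECLARE_MUTEX/down/up become DEFINE_MUTEX/mutex_lock/mutex_unlock. A user-space analogue of the before/after, using POSIX primitives as stand-ins for the kernel ones (compile with -lpthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t old_style_lock;                    /* was: DECLARE_MUTEX(...)   */
static pthread_mutex_t new_style_lock = PTHREAD_MUTEX_INITIALIZER;
                                                /* now: DEFINE_MUTEX(...)    */
static int shared_counter;

static void bump_old(void)
{
        sem_wait(&old_style_lock);              /* was: down(&sem) */
        shared_counter++;
        sem_post(&old_style_lock);              /* was: up(&sem)   */
}

static void bump_new(void)
{
        pthread_mutex_lock(&new_style_lock);    /* now: mutex_lock()   */
        shared_counter++;
        pthread_mutex_unlock(&new_style_lock);  /* now: mutex_unlock() */
}

int main(void)
{
        sem_init(&old_style_lock, 0, 1);        /* binary semaphore == lock */
        bump_old();
        bump_new();
        printf("counter = %d\n", shared_counter);      /* 2 */
        return 0;
}

A dedicated mutex states the locking intent explicitly and allows ownership and debugging checks that a counting semaphore cannot provide.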

@ -13,6 +13,11 @@ obj-$(CONFIG_IA64_BRL_EMU) += brl_emu.o
obj-$(CONFIG_IA64_GENERIC) += acpi-ext.o
obj-$(CONFIG_IA64_HP_ZX1) += acpi-ext.o
obj-$(CONFIG_IA64_HP_ZX1_SWIOTLB) += acpi-ext.o
ifneq ($(CONFIG_ACPI_PROCESSOR),)
obj-y += acpi-processor.o
endif
obj-$(CONFIG_IA64_PALINFO) += palinfo.o
obj-$(CONFIG_IOSAPIC) += iosapic.o
obj-$(CONFIG_MODULES) += module.o

View File

@ -33,33 +33,33 @@ acpi_vendor_resource_match(struct acpi_resource *resource, void *context)
struct acpi_vendor_info *info = (struct acpi_vendor_info *)context;
struct acpi_resource_vendor *vendor;
struct acpi_vendor_descriptor *descriptor;
u32 length;
u32 byte_length;
if (resource->id != ACPI_RSTYPE_VENDOR)
if (resource->type != ACPI_RESOURCE_TYPE_VENDOR)
return AE_OK;
vendor = (struct acpi_resource_vendor *)&resource->data;
descriptor = (struct acpi_vendor_descriptor *)vendor->reserved;
if (vendor->length <= sizeof(*info->descriptor) ||
descriptor = (struct acpi_vendor_descriptor *)vendor->byte_data;
if (vendor->byte_length <= sizeof(*info->descriptor) ||
descriptor->guid_id != info->descriptor->guid_id ||
efi_guidcmp(descriptor->guid, info->descriptor->guid))
return AE_OK;
length = vendor->length - sizeof(struct acpi_vendor_descriptor);
info->data = acpi_os_allocate(length);
byte_length = vendor->byte_length - sizeof(struct acpi_vendor_descriptor);
info->data = acpi_os_allocate(byte_length);
if (!info->data)
return AE_NO_MEMORY;
memcpy(info->data,
vendor->reserved + sizeof(struct acpi_vendor_descriptor),
length);
info->length = length;
vendor->byte_data + sizeof(struct acpi_vendor_descriptor),
byte_length);
info->length = byte_length;
return AE_CTRL_TERMINATE;
}
acpi_status
acpi_find_vendor_resource(acpi_handle obj, struct acpi_vendor_descriptor * id,
u8 ** data, u32 * length)
u8 ** data, u32 * byte_length)
{
struct acpi_vendor_info info;
@ -72,7 +72,7 @@ acpi_find_vendor_resource(acpi_handle obj, struct acpi_vendor_descriptor * id,
return AE_NOT_FOUND;
*data = info.data;
*length = info.length;
*byte_length = info.length;
return AE_OK;
}

View File

@ -0,0 +1,67 @@
/*
* arch/ia64/kernel/cpufreq/processor.c
*
* Copyright (C) 2005 Intel Corporation
* Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
* - Added _PDC for platforms with Intel CPUs
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/acpi.h>
#include <acpi/processor.h>
#include <asm/acpi.h>
static void init_intel_pdc(struct acpi_processor *pr)
{
struct acpi_object_list *obj_list;
union acpi_object *obj;
u32 *buf;
/* allocate and initialize pdc. It will be used later. */
obj_list = kmalloc(sizeof(struct acpi_object_list), GFP_KERNEL);
if (!obj_list) {
printk(KERN_ERR "Memory allocation error\n");
return;
}
obj = kmalloc(sizeof(union acpi_object), GFP_KERNEL);
if (!obj) {
printk(KERN_ERR "Memory allocation error\n");
kfree(obj_list);
return;
}
buf = kmalloc(12, GFP_KERNEL);
if (!buf) {
printk(KERN_ERR "Memory allocation error\n");
kfree(obj);
kfree(obj_list);
return;
}
buf[0] = ACPI_PDC_REVISION_ID;
buf[1] = 1;
buf[2] |= ACPI_PDC_EST_CAPABILITY_SMP;
obj->type = ACPI_TYPE_BUFFER;
obj->buffer.length = 12;
obj->buffer.pointer = (u8 *) buf;
obj_list->count = 1;
obj_list->pointer = obj;
pr->pdc = obj_list;
return;
}
/* Initialize _PDC data based on the CPU vendor */
void arch_acpi_processor_init_pdc(struct acpi_processor *pr)
{
pr->pdc = NULL;
init_intel_pdc(pr);
return;
}
EXPORT_SYMBOL(arch_acpi_processor_init_pdc);

View File

@ -567,16 +567,16 @@ void __init acpi_numa_arch_fixup(void)
* success: return IRQ number (>=0)
* failure: return < 0
*/
int acpi_register_gsi(u32 gsi, int edge_level, int active_high_low)
int acpi_register_gsi(u32 gsi, int triggering, int polarity)
{
if (has_8259 && gsi < 16)
return isa_irq_to_vector(gsi);
return iosapic_register_intr(gsi,
(active_high_low ==
(polarity ==
ACPI_ACTIVE_HIGH) ? IOSAPIC_POL_HIGH :
IOSAPIC_POL_LOW,
(edge_level ==
(triggering ==
ACPI_EDGE_SENSITIVE) ? IOSAPIC_EDGE :
IOSAPIC_LEVEL);
}

View File

@ -1 +1,2 @@
obj-$(CONFIG_IA64_ACPI_CPUFREQ) += acpi-cpufreq.o

View File

@ -269,48 +269,6 @@ acpi_cpufreq_verify (
}
/*
* processor_init_pdc - let BIOS know about the SMP capabilities
* of this driver
* @perf: processor-specific acpi_io_data struct
* @cpu: CPU being initialized
*
* To avoid issues with legacy OSes, some BIOSes require to be informed of
* the SMP capabilities of OS P-state driver. Here we set the bits in _PDC
* accordingly. Actual call to _PDC is done in driver/acpi/processor.c
*/
static void
processor_init_pdc (
struct acpi_processor_performance *perf,
unsigned int cpu,
struct acpi_object_list *obj_list
)
{
union acpi_object *obj;
u32 *buf;
dprintk("processor_init_pdc\n");
perf->pdc = NULL;
/* Initialize pdc. It will be used later. */
if (!obj_list)
return;
if (!(obj_list->count && obj_list->pointer))
return;
obj = obj_list->pointer;
if ((obj->buffer.length == 12) && obj->buffer.pointer) {
buf = (u32 *)obj->buffer.pointer;
buf[0] = ACPI_PDC_REVISION_ID;
buf[1] = 1;
buf[2] = ACPI_PDC_EST_CAPABILITY_SMP;
perf->pdc = obj_list;
}
return;
}
static int
acpi_cpufreq_cpu_init (
struct cpufreq_policy *policy)
@ -320,14 +278,7 @@ acpi_cpufreq_cpu_init (
struct cpufreq_acpi_io *data;
unsigned int result = 0;
union acpi_object arg0 = {ACPI_TYPE_BUFFER};
u32 arg0_buf[3];
struct acpi_object_list arg_list = {1, &arg0};
dprintk("acpi_cpufreq_cpu_init\n");
/* setup arg_list for _PDC settings */
arg0.buffer.length = 12;
arg0.buffer.pointer = (u8 *) arg0_buf;
data = kmalloc(sizeof(struct cpufreq_acpi_io), GFP_KERNEL);
if (!data)
@ -337,9 +288,7 @@ acpi_cpufreq_cpu_init (
acpi_io_data[cpu] = data;
processor_init_pdc(&data->acpi_data, cpu, &arg_list);
result = acpi_processor_register_performance(&data->acpi_data, cpu);
data->acpi_data.pdc = NULL;
if (result)
goto err_free;

View File

@ -512,7 +512,7 @@ ia64_state_save:
st8 [temp1]=r12 // os_status, default is cold boot
mov r6=IA64_MCA_SAME_CONTEXT
;;
st8 [temp1]=r6 // context, default is same context
st8 [temp2]=r6 // context, default is same context
// Save the pt_regs data that is not in minstate. The previous code
// left regs at sos.

View File

@ -40,6 +40,7 @@
#include <linux/bitops.h>
#include <linux/capability.h>
#include <linux/rcupdate.h>
#include <linux/completion.h>
#include <asm/errno.h>
#include <asm/intrinsics.h>
@ -286,7 +287,7 @@ typedef struct pfm_context {
unsigned long ctx_ovfl_regs[4]; /* which registers overflowed (notification) */
struct semaphore ctx_restart_sem; /* use for blocking notification mode */
struct completion ctx_restart_done; /* use for blocking notification mode */
unsigned long ctx_used_pmds[4]; /* bitmask of PMD used */
unsigned long ctx_all_pmds[4]; /* bitmask of all accessible PMDs */
@ -1991,7 +1992,7 @@ pfm_close(struct inode *inode, struct file *filp)
/*
* force task to wake up from MASKED state
*/
up(&ctx->ctx_restart_sem);
complete(&ctx->ctx_restart_done);
DPRINT(("waking up ctx_state=%d\n", state));
@ -2706,7 +2707,7 @@ pfm_context_create(pfm_context_t *ctx, void *arg, int count, struct pt_regs *reg
/*
* init restart semaphore to locked
*/
sema_init(&ctx->ctx_restart_sem, 0);
init_completion(&ctx->ctx_restart_done);
/*
* activation is used in SMP only
@ -3687,7 +3688,7 @@ pfm_restart(pfm_context_t *ctx, void *arg, int count, struct pt_regs *regs)
*/
if (CTX_OVFL_NOBLOCK(ctx) == 0 && state == PFM_CTX_MASKED) {
DPRINT(("unblocking [%d] \n", task->pid));
up(&ctx->ctx_restart_sem);
complete(&ctx->ctx_restart_done);
} else {
DPRINT(("[%d] armed exit trap\n", task->pid));
@ -5089,7 +5090,7 @@ pfm_handle_work(void)
* may go through without blocking on SMP systems
* if restart has been received already by the time we call down()
*/
ret = down_interruptible(&ctx->ctx_restart_sem);
ret = wait_for_completion_interruptible(&ctx->ctx_restart_done);
DPRINT(("after block sleeping ret=%d\n", ret));

View File
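Here the semaphore initialized to zero, which was really a one-shot event wait, becomes a completion: the waiter blocks in wait_for_completion_interruptible() and the signaler calls complete(). A user-space analogue built from pthread primitives; the struct and function names mirror the kernel API but are local stand-ins (compile with -lpthread):

#include <pthread.h>
#include <stdio.h>

struct completion {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        int done;
};

static void init_completion(struct completion *c)
{
        pthread_mutex_init(&c->lock, NULL);
        pthread_cond_init(&c->cond, NULL);
        c->done = 0;
}

static void complete(struct completion *c)              /* was: up(&sem)   */
{
        pthread_mutex_lock(&c->lock);
        c->done = 1;
        pthread_cond_signal(&c->cond);
        pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion *c)   /* was: down(&sem) */
{
        pthread_mutex_lock(&c->lock);
        while (!c->done)
                pthread_cond_wait(&c->cond, &c->lock);
        pthread_mutex_unlock(&c->lock);
}

static struct completion restart_done;

static void *worker(void *arg)
{
        (void)arg;
        complete(&restart_done);                /* signal the one-shot event */
        return NULL;
}

int main(void)
{
        pthread_t t;

        init_completion(&restart_done);
        pthread_create(&t, NULL, worker, NULL);
        wait_for_completion(&restart_done);     /* blocks until complete() */
        pthread_join(t, NULL);
        puts("restart completed");
        return 0;
}
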

@ -71,31 +71,33 @@ static int __init topology_init(void)
int i, err = 0;
#ifdef CONFIG_NUMA
sysfs_nodes = kmalloc(sizeof(struct node) * MAX_NUMNODES, GFP_KERNEL);
sysfs_nodes = kzalloc(sizeof(struct node) * MAX_NUMNODES, GFP_KERNEL);
if (!sysfs_nodes) {
err = -ENOMEM;
goto out;
}
memset(sysfs_nodes, 0, sizeof(struct node) * MAX_NUMNODES);
/* MCD - Do we want to register all ONLINE nodes, or all POSSIBLE nodes? */
for_each_online_node(i)
/*
* MCD - Do we want to register all ONLINE nodes, or all POSSIBLE nodes?
*/
for_each_online_node(i) {
if ((err = register_node(&sysfs_nodes[i], i, 0)))
goto out;
}
#endif
sysfs_cpus = kmalloc(sizeof(struct ia64_cpu) * NR_CPUS, GFP_KERNEL);
sysfs_cpus = kzalloc(sizeof(struct ia64_cpu) * NR_CPUS, GFP_KERNEL);
if (!sysfs_cpus) {
err = -ENOMEM;
goto out;
}
memset(sysfs_cpus, 0, sizeof(struct ia64_cpu) * NR_CPUS);
for_each_present_cpu(i)
for_each_present_cpu(i) {
if((err = arch_register_cpu(i)))
goto out;
}
out:
return err;
}
__initcall(topology_init);
subsys_initcall(topology_init);

View File

@ -1283,8 +1283,9 @@ within_logging_rate_limit (void)
if (jiffies - last_time > 5*HZ)
count = 0;
if (++count < 5) {
if (count < 5) {
last_time = jiffies;
count++;
return 1;
}
return 0;

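The rate-limit fix above stops the per-window counter from being incremented past the burst limit: the timestamp and counter now advance only while a message is actually allowed. A small stand-alone model of the corrected limiter; the 5-message burst and 5-unit window follow the code above, the time units are illustrative:

#include <stdio.h>

#define WINDOW  5       /* "seconds" */
#define BURST   5

static unsigned long last_time;
static int count;

static int within_logging_rate_limit(unsigned long now)
{
        if (now - last_time > WINDOW)
                count = 0;                      /* new window */

        if (count < BURST) {
                last_time = now;
                count++;
                return 1;                       /* message allowed */
        }
        return 0;                               /* suppressed */
}

int main(void)
{
        int allowed = 0;

        for (unsigned long t = 0; t < 4; t++)   /* burst of 20 calls */
                for (int i = 0; i < 5; i++)
                        allowed += within_logging_rate_limit(t);

        printf("allowed %d of 20\n", allowed);  /* 5: the rest are suppressed */
        return 0;
}
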
View File

@ -210,6 +210,7 @@ uncached_build_memmap(unsigned long start, unsigned long end, void *arg)
dprintk(KERN_ERR "uncached_build_memmap(%lx %lx)\n", start, end);
touch_softlockup_watchdog();
memset((char *)start, 0, length);
node = paddr_to_nid(start - __IA64_UNCACHED_OFFSET);

View File

@ -193,12 +193,12 @@ add_io_space (struct pci_root_info *info, struct acpi_resource_address64 *addr)
goto free_resource;
}
min = addr->min_address_range;
min = addr->minimum;
max = min + addr->address_length - 1;
if (addr->attribute.io.translation_attribute == ACPI_SPARSE_TRANSLATION)
if (addr->info.io.translation_type == ACPI_SPARSE_TRANSLATION)
sparse = 1;
space_nr = new_space(addr->address_translation_offset, sparse);
space_nr = new_space(addr->translation_offset, sparse);
if (space_nr == ~0)
goto free_name;
@ -285,7 +285,7 @@ static __devinit acpi_status add_window(struct acpi_resource *res, void *data)
if (addr.resource_type == ACPI_MEMORY_RANGE) {
flags = IORESOURCE_MEM;
root = &iomem_resource;
offset = addr.address_translation_offset;
offset = addr.translation_offset;
} else if (addr.resource_type == ACPI_IO_RANGE) {
flags = IORESOURCE_IO;
root = &ioport_resource;
@ -298,7 +298,7 @@ static __devinit acpi_status add_window(struct acpi_resource *res, void *data)
window = &info->controller->window[info->controller->windows++];
window->resource.name = info->name;
window->resource.flags = flags;
window->resource.start = addr.min_address_range + offset;
window->resource.start = addr.minimum + offset;
window->resource.end = window->resource.start + addr.address_length - 1;
window->resource.child = NULL;
window->offset = offset;

View File

@ -51,6 +51,15 @@ struct sn_flush_device_kernel {
struct sn_flush_device_common *common;
};
/* 01/16/06 This struct is the old PROM/kernel struct and needs to be included
* for older official PROMs to function on the new kernel base. This struct
* will be removed when the next official PROM release occurs. */
struct sn_flush_device_war {
struct sn_flush_device_common common;
u32 filler; /* older PROMs expect the default size of a spinlock_t */
};
/*
* **widget_p - Used as an array[wid_num][device] of sn_flush_device_kernel.
*/

View File

@ -10,6 +10,7 @@
#include <linux/nodemask.h>
#include <asm/sn/types.h>
#include <asm/sn/addrs.h>
#include <asm/sn/sn_feature_sets.h>
#include <asm/sn/geo.h>
#include <asm/sn/io.h>
#include <asm/sn/pcibr_provider.h>
@ -165,8 +166,46 @@ sn_pcidev_info_get(struct pci_dev *dev)
return NULL;
}
/* Older PROM flush WAR
*
* 01/16/06 -- This war will be in place until a new official PROM is released.
* Additionally note that the struct sn_flush_device_war also has to be
* removed from arch/ia64/sn/include/xtalk/hubdev.h
*/
static u8 war_implemented = 0;
static s64 sn_device_fixup_war(u64 nasid, u64 widget, int device,
struct sn_flush_device_common *common)
{
struct sn_flush_device_war *war_list;
struct sn_flush_device_war *dev_entry;
struct ia64_sal_retval isrv = {0,0,0,0};
if (!war_implemented) {
printk(KERN_WARNING "PROM version < 4.50 -- implementing old "
"PROM flush WAR\n");
war_implemented = 1;
}
war_list = kzalloc(DEV_PER_WIDGET * sizeof(*war_list), GFP_KERNEL);
if (!war_list)
BUG();
SAL_CALL_NOLOCK(isrv, SN_SAL_IOIF_GET_WIDGET_DMAFLUSH_LIST,
nasid, widget, __pa(war_list), 0, 0, 0 ,0);
if (isrv.status)
panic("sn_device_fixup_war failed: %s\n",
ia64_sal_strerror(isrv.status));
dev_entry = war_list + device;
memcpy(common,dev_entry, sizeof(*common));
kfree(war_list);
return isrv.status;
}
/*
* sn_fixup_ionodes() - This routine initializes the HUB data structure for
* each node in the system.
*/
static void sn_fixup_ionodes(void)
@ -242,12 +281,21 @@ static void sn_fixup_ionodes(void)
memset(dev_entry->common, 0x0, sizeof(struct
sn_flush_device_common));
status = sal_get_device_dmaflush_list(nasid,
widget,
device,
if (sn_prom_feature_available(
PRF_DEVICE_FLUSH_LIST))
status = sal_get_device_dmaflush_list(
nasid,
widget,
device,
(u64)(dev_entry->common));
if (status)
BUG();
else
status = sn_device_fixup_war(nasid,
widget,
device,
dev_entry->common);
if (status != SALRET_OK)
panic("SAL call failed: %s\n",
ia64_sal_strerror(status));
spin_lock_init(&dev_entry->sfdl_flush_lock);
}

View File

@ -10,6 +10,7 @@
#include <linux/kernel.h>
#include <linux/timer.h>
#include <linux/vmalloc.h>
#include <linux/mutex.h>
#include <asm/mca.h>
#include <asm/sal.h>
#include <asm/sn/sn_sal.h>
@ -27,7 +28,7 @@ void sn_init_cpei_timer(void);
/* Printing oemdata from mca uses data that is not passed through SAL, it is
* global. Only one user at a time.
*/
static DECLARE_MUTEX(sn_oemdata_mutex);
static DEFINE_MUTEX(sn_oemdata_mutex);
static u8 **sn_oemdata;
static u64 *sn_oemdata_size, sn_oemdata_bufsize;
@ -89,7 +90,7 @@ static int
sn_platform_plat_specific_err_print(const u8 * sect_header, u8 ** oemdata,
u64 * oemdata_size)
{
down(&sn_oemdata_mutex);
mutex_lock(&sn_oemdata_mutex);
sn_oemdata = oemdata;
sn_oemdata_size = oemdata_size;
sn_oemdata_bufsize = 0;
@ -107,7 +108,7 @@ sn_platform_plat_specific_err_print(const u8 * sect_header, u8 ** oemdata,
*sn_oemdata_size = 0;
ia64_sn_plat_specific_err_print(print_hook, (char *)sect_header);
}
up(&sn_oemdata_mutex);
mutex_unlock(&sn_oemdata_mutex);
return 0;
}

View File

@ -19,6 +19,7 @@
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <asm/sn/intr.h>
#include <asm/sn/sn_sal.h>
#include <asm/sn/xp.h>
@ -136,13 +137,13 @@ xpc_connect(int ch_number, xpc_channel_func func, void *key, u16 payload_size,
registration = &xpc_registrations[ch_number];
if (down_interruptible(&registration->sema) != 0) {
if (mutex_lock_interruptible(&registration->mutex) != 0) {
return xpcInterrupted;
}
/* if XPC_CHANNEL_REGISTERED(ch_number) */
if (registration->func != NULL) {
up(&registration->sema);
mutex_unlock(&registration->mutex);
return xpcAlreadyRegistered;
}
@ -154,7 +155,7 @@ xpc_connect(int ch_number, xpc_channel_func func, void *key, u16 payload_size,
registration->key = key;
registration->func = func;
up(&registration->sema);
mutex_unlock(&registration->mutex);
xpc_interface.connect(ch_number);
@ -190,11 +191,11 @@ xpc_disconnect(int ch_number)
* figured XPC's users will just turn around and call xpc_disconnect()
* again anyways, so we might as well wait, if need be.
*/
down(&registration->sema);
mutex_lock(&registration->mutex);
/* if !XPC_CHANNEL_REGISTERED(ch_number) */
if (registration->func == NULL) {
up(&registration->sema);
mutex_unlock(&registration->mutex);
return;
}
@ -208,7 +209,7 @@ xpc_disconnect(int ch_number)
xpc_interface.disconnect(ch_number);
up(&registration->sema);
mutex_unlock(&registration->mutex);
return;
}
@ -250,9 +251,9 @@ xp_init(void)
xp_nofault_PIOR_target = SH1_IPI_ACCESS;
}
/* initialize the connection registration semaphores */
/* initialize the connection registration mutex */
for (ch_number = 0; ch_number < XPC_NCHANNELS; ch_number++) {
sema_init(&xpc_registrations[ch_number].sema, 1); /* mutex */
mutex_init(&xpc_registrations[ch_number].mutex);
}
return 0;

View File

@ -22,6 +22,8 @@
#include <linux/cache.h>
#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/mutex.h>
#include <linux/completion.h>
#include <asm/sn/bte.h>
#include <asm/sn/sn_sal.h>
#include <asm/sn/xpc.h>
@ -56,8 +58,8 @@ xpc_initialize_channels(struct xpc_partition *part, partid_t partid)
atomic_set(&ch->n_to_notify, 0);
spin_lock_init(&ch->lock);
sema_init(&ch->msg_to_pull_sema, 1); /* mutex */
sema_init(&ch->wdisconnect_sema, 0); /* event wait */
mutex_init(&ch->msg_to_pull_mutex);
init_completion(&ch->wdisconnect_wait);
atomic_set(&ch->n_on_msg_allocate_wq, 0);
init_waitqueue_head(&ch->msg_allocate_wq);
@ -445,7 +447,7 @@ xpc_allocate_local_msgqueue(struct xpc_channel *ch)
nbytes = nentries * ch->msg_size;
ch->local_msgqueue = xpc_kmalloc_cacheline_aligned(nbytes,
(GFP_KERNEL | GFP_DMA),
GFP_KERNEL,
&ch->local_msgqueue_base);
if (ch->local_msgqueue == NULL) {
continue;
@ -453,7 +455,7 @@ xpc_allocate_local_msgqueue(struct xpc_channel *ch)
memset(ch->local_msgqueue, 0, nbytes);
nbytes = nentries * sizeof(struct xpc_notify);
ch->notify_queue = kmalloc(nbytes, (GFP_KERNEL | GFP_DMA));
ch->notify_queue = kmalloc(nbytes, GFP_KERNEL);
if (ch->notify_queue == NULL) {
kfree(ch->local_msgqueue_base);
ch->local_msgqueue = NULL;
@ -500,7 +502,7 @@ xpc_allocate_remote_msgqueue(struct xpc_channel *ch)
nbytes = nentries * ch->msg_size;
ch->remote_msgqueue = xpc_kmalloc_cacheline_aligned(nbytes,
(GFP_KERNEL | GFP_DMA),
GFP_KERNEL,
&ch->remote_msgqueue_base);
if (ch->remote_msgqueue == NULL) {
continue;
@ -534,7 +536,6 @@ static enum xpc_retval
xpc_allocate_msgqueues(struct xpc_channel *ch)
{
unsigned long irq_flags;
int i;
enum xpc_retval ret;
@ -552,11 +553,6 @@ xpc_allocate_msgqueues(struct xpc_channel *ch)
return ret;
}
for (i = 0; i < ch->local_nentries; i++) {
/* use a semaphore as an event wait queue */
sema_init(&ch->notify_queue[i].sema, 0);
}
spin_lock_irqsave(&ch->lock, irq_flags);
ch->flags |= XPC_C_SETUP;
spin_unlock_irqrestore(&ch->lock, irq_flags);
@ -799,10 +795,8 @@ xpc_process_disconnect(struct xpc_channel *ch, unsigned long *irq_flags)
}
if (ch->flags & XPC_C_WDISCONNECT) {
spin_unlock_irqrestore(&ch->lock, *irq_flags);
up(&ch->wdisconnect_sema);
spin_lock_irqsave(&ch->lock, *irq_flags);
/* we won't lose the CPU since we're holding ch->lock */
complete(&ch->wdisconnect_wait);
} else if (ch->delayed_IPI_flags) {
if (part->act_state != XPC_P_DEACTIVATING) {
/* time to take action on any delayed IPI flags */
@ -1092,12 +1086,12 @@ xpc_connect_channel(struct xpc_channel *ch)
struct xpc_registration *registration = &xpc_registrations[ch->number];
if (down_trylock(&registration->sema) != 0) {
if (mutex_trylock(&registration->mutex) == 0) {
return xpcRetry;
}
if (!XPC_CHANNEL_REGISTERED(ch->number)) {
up(&registration->sema);
mutex_unlock(&registration->mutex);
return xpcUnregistered;
}
@ -1108,7 +1102,7 @@ xpc_connect_channel(struct xpc_channel *ch)
if (ch->flags & XPC_C_DISCONNECTING) {
spin_unlock_irqrestore(&ch->lock, irq_flags);
up(&registration->sema);
mutex_unlock(&registration->mutex);
return ch->reason;
}
@ -1140,7 +1134,7 @@ xpc_connect_channel(struct xpc_channel *ch)
* channel lock be locked and will unlock and relock
* the channel lock as needed.
*/
up(&registration->sema);
mutex_unlock(&registration->mutex);
XPC_DISCONNECT_CHANNEL(ch, xpcUnequalMsgSizes,
&irq_flags);
spin_unlock_irqrestore(&ch->lock, irq_flags);
@ -1155,7 +1149,7 @@ xpc_connect_channel(struct xpc_channel *ch)
atomic_inc(&xpc_partitions[ch->partid].nchannels_active);
}
up(&registration->sema);
mutex_unlock(&registration->mutex);
/* initiate the connection */
@ -2089,7 +2083,7 @@ xpc_pull_remote_msg(struct xpc_channel *ch, s64 get)
enum xpc_retval ret;
if (down_interruptible(&ch->msg_to_pull_sema) != 0) {
if (mutex_lock_interruptible(&ch->msg_to_pull_mutex) != 0) {
/* we were interrupted by a signal */
return NULL;
}
@ -2125,7 +2119,7 @@ xpc_pull_remote_msg(struct xpc_channel *ch, s64 get)
XPC_DEACTIVATE_PARTITION(part, ret);
up(&ch->msg_to_pull_sema);
mutex_unlock(&ch->msg_to_pull_mutex);
return NULL;
}
@ -2134,7 +2128,7 @@ xpc_pull_remote_msg(struct xpc_channel *ch, s64 get)
ch->next_msg_to_pull += nmsgs;
}
up(&ch->msg_to_pull_sema);
mutex_unlock(&ch->msg_to_pull_mutex);
/* return the message we were looking for */
msg_offset = (get % ch->remote_nentries) * ch->msg_size;

View File

@ -55,6 +55,7 @@
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/reboot.h>
#include <linux/completion.h>
#include <asm/sn/intr.h>
#include <asm/sn/sn_sal.h>
#include <asm/kdebug.h>
@ -177,10 +178,10 @@ static DECLARE_WAIT_QUEUE_HEAD(xpc_act_IRQ_wq);
static unsigned long xpc_hb_check_timeout;
/* notification that the xpc_hb_checker thread has exited */
static DECLARE_MUTEX_LOCKED(xpc_hb_checker_exited);
static DECLARE_COMPLETION(xpc_hb_checker_exited);
/* notification that the xpc_discovery thread has exited */
static DECLARE_MUTEX_LOCKED(xpc_discovery_exited);
static DECLARE_COMPLETION(xpc_discovery_exited);
static struct timer_list xpc_hb_timer;
@ -321,7 +322,7 @@ xpc_hb_checker(void *ignore)
/* mark this thread as having exited */
up(&xpc_hb_checker_exited);
complete(&xpc_hb_checker_exited);
return 0;
}
@ -341,7 +342,7 @@ xpc_initiate_discovery(void *ignore)
dev_dbg(xpc_part, "discovery thread is exiting\n");
/* mark this thread as having exited */
up(&xpc_discovery_exited);
complete(&xpc_discovery_exited);
return 0;
}
@ -893,7 +894,7 @@ xpc_disconnect_wait(int ch_number)
continue;
}
(void) down(&ch->wdisconnect_sema);
wait_for_completion(&ch->wdisconnect_wait);
spin_lock_irqsave(&ch->lock, irq_flags);
DBUG_ON(!(ch->flags & XPC_C_DISCONNECTED));
@ -946,10 +947,10 @@ xpc_do_exit(enum xpc_retval reason)
free_irq(SGI_XPC_ACTIVATE, NULL);
/* wait for the discovery thread to exit */
down(&xpc_discovery_exited);
wait_for_completion(&xpc_discovery_exited);
/* wait for the heartbeat checker thread to exit */
down(&xpc_hb_checker_exited);
wait_for_completion(&xpc_hb_checker_exited);
/* sleep for a 1/3 of a second or so */
@ -1367,7 +1368,7 @@ xpc_init(void)
dev_err(xpc_part, "failed while forking discovery thread\n");
/* mark this new thread as a non-starter */
up(&xpc_discovery_exited);
complete(&xpc_discovery_exited);
xpc_do_exit(xpcUnloading);
return -EBUSY;

View File

@ -90,14 +90,14 @@ void *sn_dma_alloc_coherent(struct device *dev, size_t size,
*/
node = pcibus_to_node(pdev->bus);
if (likely(node >=0)) {
struct page *p = alloc_pages_node(node, GFP_ATOMIC, get_order(size));
struct page *p = alloc_pages_node(node, flags, get_order(size));
if (likely(p))
cpuaddr = page_address(p);
else
return NULL;
} else
cpuaddr = (void *)__get_free_pages(GFP_ATOMIC, get_order(size));
cpuaddr = (void *)__get_free_pages(flags, get_order(size));
if (unlikely(!cpuaddr))
return NULL;

View File

@ -24,13 +24,15 @@ sal_pcibr_slot_enable(struct pcibus_info *soft, int device, void *resp)
{
struct ia64_sal_retval ret_stuff;
u64 busnum;
u64 segment;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
segment = soft->pbi_buscommon.bs_persist_segment;
busnum = soft->pbi_buscommon.bs_persist_busnum;
SAL_CALL_NOLOCK(ret_stuff, (u64) SN_SAL_IOIF_SLOT_ENABLE, (u64) busnum,
(u64) device, (u64) resp, 0, 0, 0, 0);
SAL_CALL_NOLOCK(ret_stuff, (u64) SN_SAL_IOIF_SLOT_ENABLE, segment,
busnum, (u64) device, (u64) resp, 0, 0, 0);
return (int)ret_stuff.v0;
}
@ -41,14 +43,16 @@ sal_pcibr_slot_disable(struct pcibus_info *soft, int device, int action,
{
struct ia64_sal_retval ret_stuff;
u64 busnum;
u64 segment;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
segment = soft->pbi_buscommon.bs_persist_segment;
busnum = soft->pbi_buscommon.bs_persist_busnum;
SAL_CALL_NOLOCK(ret_stuff, (u64) SN_SAL_IOIF_SLOT_DISABLE,
(u64) busnum, (u64) device, (u64) action,
(u64) resp, 0, 0, 0);
segment, busnum, (u64) device, (u64) action,
(u64) resp, 0, 0);
return (int)ret_stuff.v0;
}

View File

@ -178,7 +178,7 @@ int kgdb_enabled;
*/
static DEFINE_SPINLOCK(kgdb_lock);
static raw_spinlock_t kgdb_cpulock[NR_CPUS] = {
[0 ... NR_CPUS-1] = __RAW_SPIN_LOCK_UNLOCKED;
[0 ... NR_CPUS-1] = __RAW_SPIN_LOCK_UNLOCKED,
};
/*

View File

@ -134,7 +134,6 @@ static int __init add_legacy_soc_port(struct device_node *np,
return add_legacy_port(np, -1, UPIO_MEM, addr, addr, NO_IRQ, flags);
}
#ifdef CONFIG_ISA
static int __init add_legacy_isa_port(struct device_node *np,
struct device_node *isa_brg)
{
@ -168,7 +167,6 @@ static int __init add_legacy_isa_port(struct device_node *np,
return add_legacy_port(np, index, UPIO_PORT, reg[1], taddr, NO_IRQ, UPF_BOOT_AUTOCONF);
}
#endif
#ifdef CONFIG_PCI
static int __init add_legacy_pci_port(struct device_node *np,
@ -276,7 +274,6 @@ void __init find_legacy_serial_ports(void)
of_node_put(soc);
}
#ifdef CONFIG_ISA
/* First fill our array with ISA ports */
for (np = NULL; (np = of_find_node_by_type(np, "serial"));) {
struct device_node *isa = of_get_parent(np);
@ -287,7 +284,6 @@ void __init find_legacy_serial_ports(void)
}
of_node_put(isa);
}
#endif
#ifdef CONFIG_PCI
/* Next, try to locate PCI ports */

View File

@ -254,11 +254,9 @@ int do_signal(sigset_t *oldset, struct pt_regs *regs);
*/
long sys_sigsuspend(old_sigset_t mask)
{
sigset_t saveset;
mask &= _BLOCKABLE;
spin_lock_irq(&current->sighand->siglock);
saveset = current->blocked;
current->saved_sigmask = current->blocked;
siginitset(&current->blocked, mask);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);

View File

@ -862,21 +862,28 @@ int pmf_register_irq_client(struct device_node *target,
spin_unlock_irqrestore(&pmf_lock, flags);
return -ENODEV;
}
if (list_empty(&func->irq_clients))
func->dev->handlers->irq_enable(func);
list_add(&client->link, &func->irq_clients);
client->func = func;
spin_unlock_irqrestore(&pmf_lock, flags);
return 0;
}
EXPORT_SYMBOL_GPL(pmf_register_irq_client);
void pmf_unregister_irq_client(struct device_node *np,
const char *name,
struct pmf_irq_client *client)
void pmf_unregister_irq_client(struct pmf_irq_client *client)
{
struct pmf_function *func = client->func;
unsigned long flags;
BUG_ON(func == NULL);
spin_lock_irqsave(&pmf_lock, flags);
client->func = NULL;
list_del(&client->link);
if (list_empty(&func->irq_clients))
func->dev->handlers->irq_disable(func);
spin_unlock_irqrestore(&pmf_lock, flags);
}
EXPORT_SYMBOL_GPL(pmf_unregister_irq_client);

View File

@ -58,6 +58,7 @@ pcibios_find_pci_bus(struct device_node *dn)
return find_bus_among_children(pdn->phb->bus, dn);
}
EXPORT_SYMBOL_GPL(pcibios_find_pci_bus);
/**
* pcibios_remove_pci_devices - remove all devices under this bus
@ -106,6 +107,7 @@ pcibios_fixup_new_pci_devices(struct pci_bus *bus, int fix_bus)
}
}
}
EXPORT_SYMBOL_GPL(pcibios_fixup_new_pci_devices);
static int
pcibios_pci_config_bridge(struct pci_dev *dev)
@ -172,3 +174,4 @@ pcibios_add_pci_devices(struct pci_bus * bus)
pcibios_pci_config_bridge(dev);
}
}
EXPORT_SYMBOL_GPL(pcibios_add_pci_devices);

View File

@ -313,7 +313,7 @@ static struct platform_device mpsc1_device = {
};
#endif
#ifdef CONFIG_MV643XX_ETH
#if defined(CONFIG_MV643XX_ETH) || defined(CONFIG_MV643XX_ETH_MODULE)
static struct resource mv64x60_eth_shared_resources[] = {
[0] = {
.name = "ethernet shared base",
@ -456,7 +456,7 @@ static struct platform_device *mv64x60_pd_devs[] __initdata = {
&mpsc0_device,
&mpsc1_device,
#endif
#ifdef CONFIG_MV643XX_ETH
#if defined(CONFIG_MV643XX_ETH) || defined(CONFIG_MV643XX_ETH_MODULE)
&mv64x60_eth_shared_device,
#endif
#ifdef CONFIG_MV643XX_ETH_0
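The #ifdef rework above matters when the mv643xx_eth driver is built as a module: Kconfig then defines CONFIG_MV643XX_ETH_MODULE rather than CONFIG_MV643XX_ETH, so a bare #ifdef would silently skip registering the platform devices the module later binds to. A small stand-alone illustration of the convention (FOO is a placeholder symbol):

/* Build with -DFOO=1 (built-in), -DFOO_MODULE=1 (modular), or neither. */
#include <stdio.h>

#if defined(FOO) || defined(FOO_MODULE)
#define REGISTER_FOO_DEVICES 1
#else
#define REGISTER_FOO_DEVICES 0
#endif

int main(void)
{
        printf("register platform devices: %d\n", REGISTER_FOO_DEVICES);
        return 0;
}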

View File

@ -1,13 +1,12 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.15-rc2
# Mon Nov 21 13:51:30 2005
# Linux kernel version: 2.6.16-rc1
# Thu Jan 19 10:58:53 2006
#
CONFIG_MMU=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_S390=y
CONFIG_UID16=y
#
# Code maturity level options
@ -29,18 +28,20 @@ CONFIG_POSIX_MQUEUE=y
CONFIG_SYSCTL=y
CONFIG_AUDIT=y
# CONFIG_AUDITSYSCALL is not set
CONFIG_HOTPLUG=y
CONFIG_KOBJECT_UEVENT=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_CPUSETS is not set
CONFIG_INITRAMFS_SOURCE=""
CONFIG_UID16=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
# CONFIG_KALLSYMS_EXTRA_PASS is not set
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
@ -49,8 +50,10 @@ CONFIG_CC_ALIGN_FUNCTIONS=0
CONFIG_CC_ALIGN_LABELS=0
CONFIG_CC_ALIGN_LOOPS=0
CONFIG_CC_ALIGN_JUMPS=0
CONFIG_SLAB=y
# CONFIG_TINY_SHMEM is not set
CONFIG_BASE_SMALL=0
# CONFIG_SLOB is not set
#
# Loadable module support
@ -76,11 +79,11 @@ CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
CONFIG_DEFAULT_AS=y
# CONFIG_DEFAULT_DEADLINE is not set
# CONFIG_DEFAULT_AS is not set
CONFIG_DEFAULT_DEADLINE=y
# CONFIG_DEFAULT_CFQ is not set
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="anticipatory"
CONFIG_DEFAULT_IOSCHED="deadline"
#
# Base setup
@ -193,6 +196,11 @@ CONFIG_IPV6=y
# SCTP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_SCTP is not set
#
# TIPC Configuration (EXPERIMENTAL)
#
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_BRIDGE is not set
# CONFIG_VLAN_8021Q is not set
@ -362,6 +370,7 @@ CONFIG_DM_MULTIPATH=y
#
CONFIG_UNIX98_PTYS=y
CONFIG_UNIX98_PTY_COUNT=2048
# CONFIG_HANGCHECK_TIMER is not set
#
# Watchdog Cards
@ -488,6 +497,7 @@ CONFIG_FS_MBCACHE=y
# CONFIG_JFS_FS is not set
# CONFIG_FS_POSIX_ACL is not set
# CONFIG_XFS_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_INOTIFY=y
@ -520,6 +530,7 @@ CONFIG_TMPFS=y
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set
# CONFIG_CONFIGFS_FS is not set
#
# Miscellaneous filesystems
@ -584,6 +595,7 @@ CONFIG_MSDOS_PARTITION=y
# CONFIG_SGI_PARTITION is not set
# CONFIG_ULTRIX_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
# CONFIG_KARMA_PARTITION is not set
# CONFIG_EFI_PARTITION is not set
#
@ -592,7 +604,7 @@ CONFIG_MSDOS_PARTITION=y
# CONFIG_NLS is not set
#
# Profiling support
# Instrumentation Support
#
# CONFIG_PROFILING is not set
@ -600,19 +612,21 @@ CONFIG_MSDOS_PARTITION=y
# Kernel hacking
#
# CONFIG_PRINTK_TIME is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_LOG_BUF_SHIFT=17
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_DETECT_SOFTLOCKUP is not set
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
CONFIG_DEBUG_PREEMPT=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_DEBUG_MUTEXES=y
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_INFO is not set
CONFIG_DEBUG_FS=y
# CONFIG_DEBUG_FS is not set
# CONFIG_DEBUG_VM is not set
CONFIG_FORCED_INLINING=y
# CONFIG_RCU_TORTURE_TEST is not set
#

View File

@ -1,8 +1,7 @@
/*
* arch/s390/kernel/signal32.c
* arch/s390/kernel/compat_signal.c
*
* S390 version
* Copyright (C) 2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Copyright (C) IBM Corp. 2000,2006
* Author(s): Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com)
* Gerhard Tonn (ton@de.ibm.com)
*
@ -52,8 +51,6 @@ typedef struct
struct ucontext32 uc;
} rt_sigframe32;
asmlinkage int FASTCALL(do_signal(struct pt_regs *regs, sigset_t *oldset));
int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
{
int err;
@ -161,66 +158,6 @@ int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
return err;
}
/*
* Atomically swap in the new signal mask, and wait for a signal.
*/
asmlinkage int
sys32_sigsuspend(struct pt_regs * regs,int history0, int history1, old_sigset_t mask)
{
sigset_t saveset;
mask &= _BLOCKABLE;
spin_lock_irq(&current->sighand->siglock);
saveset = current->blocked;
siginitset(&current->blocked, mask);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
regs->gprs[2] = -EINTR;
while (1) {
set_current_state(TASK_INTERRUPTIBLE);
schedule();
if (do_signal(regs, &saveset))
return -EINTR;
}
}
asmlinkage int
sys32_rt_sigsuspend(struct pt_regs * regs, compat_sigset_t __user *unewset,
size_t sigsetsize)
{
sigset_t saveset, newset;
compat_sigset_t set32;
/* XXX: Don't preclude handling different sized sigset_t's. */
if (sigsetsize != sizeof(sigset_t))
return -EINVAL;
if (copy_from_user(&set32, unewset, sizeof(set32)))
return -EFAULT;
switch (_NSIG_WORDS) {
case 4: newset.sig[3] = set32.sig[6] + (((long)set32.sig[7]) << 32);
case 3: newset.sig[2] = set32.sig[4] + (((long)set32.sig[5]) << 32);
case 2: newset.sig[1] = set32.sig[2] + (((long)set32.sig[3]) << 32);
case 1: newset.sig[0] = set32.sig[0] + (((long)set32.sig[1]) << 32);
}
sigdelsetmask(&newset, ~_BLOCKABLE);
spin_lock_irq(&current->sighand->siglock);
saveset = current->blocked;
current->blocked = newset;
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
regs->gprs[2] = -EINTR;
while (1) {
set_current_state(TASK_INTERRUPTIBLE);
schedule();
if (do_signal(regs, &saveset))
return -EINTR;
}
}
asmlinkage long
sys32_sigaction(int sig, const struct old_sigaction32 __user *act,
struct old_sigaction32 __user *oact)
@ -520,7 +457,7 @@ static inline int map_signal(int sig)
return sig;
}
static void setup_frame32(int sig, struct k_sigaction *ka,
static int setup_frame32(int sig, struct k_sigaction *ka,
sigset_t *set, struct pt_regs * regs)
{
sigframe32 __user *frame = get_sigframe(ka, regs, sizeof(sigframe32));
@ -565,13 +502,14 @@ static void setup_frame32(int sig, struct k_sigaction *ka,
/* Place signal number on stack to allow backtrace from handler. */
if (__put_user(regs->gprs[2], (int __user *) &frame->signo))
goto give_sigsegv;
return;
return 0;
give_sigsegv:
force_sigsegv(sig, current);
return -EFAULT;
}
static void setup_rt_frame32(int sig, struct k_sigaction *ka, siginfo_t *info,
static int setup_rt_frame32(int sig, struct k_sigaction *ka, siginfo_t *info,
sigset_t *set, struct pt_regs * regs)
{
int err = 0;
@ -615,31 +553,37 @@ static void setup_rt_frame32(int sig, struct k_sigaction *ka, siginfo_t *info,
regs->gprs[2] = map_signal(sig);
regs->gprs[3] = (__u64) &frame->info;
regs->gprs[4] = (__u64) &frame->uc;
return;
return 0;
give_sigsegv:
force_sigsegv(sig, current);
return -EFAULT;
}
/*
* OK, we're invoking a handler
*/
void
int
handle_signal32(unsigned long sig, struct k_sigaction *ka,
siginfo_t *info, sigset_t *oldset, struct pt_regs * regs)
{
int ret;
/* Set up the stack frame */
if (ka->sa.sa_flags & SA_SIGINFO)
setup_rt_frame32(sig, ka, info, oldset, regs);
ret = setup_rt_frame32(sig, ka, info, oldset, regs);
else
setup_frame32(sig, ka, oldset, regs);
ret = setup_frame32(sig, ka, oldset, regs);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
if (ret == 0) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
return ret;
}

View File

@ -1,9 +1,8 @@
/*
* arch/s390/kernel/sys_wrapper31.S
* arch/s390/kernel/compat_wrapper.S
* wrapper for 31 bit compatible system calls.
*
* S390 version
* Copyright (C) 2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Copyright (C) IBM Corp. 2000,2006
* Author(s): Gerhard Tonn (ton@de.ibm.com),
* Thomas Spatzier (tspat@de.ibm.com)
*/
@ -288,7 +287,12 @@ sys32_setregid16_wrapper:
llgfr %r3,%r3 # __kernel_old_gid_emu31_t
jg sys32_setregid16 # branch to system call
#sys32_sigsuspend_wrapper # done in sigsuspend_glue
.globl sys_sigsuspend_wrapper
sys_sigsuspend_wrapper:
lgfr %r2,%r2 # int
lgfr %r3,%r3 # int
llgfr %r4,%r4 # old_sigset_t
jg sys_sigsuspend
.globl compat_sys_sigpending_wrapper
compat_sys_sigpending_wrapper:
@ -855,7 +859,11 @@ sys32_rt_sigqueueinfo_wrapper:
llgtr %r4,%r4 # siginfo_emu31_t *
jg sys32_rt_sigqueueinfo # branch to system call
#sys32_rt_sigsuspend_wrapper # done in rt_sigsuspend_glue
.globl compat_sys_rt_sigsuspend_wrapper
compat_sys_rt_sigsuspend_wrapper:
llgtr %r2,%r2 # compat_sigset_t *
llgfr %r3,%r3 # compat_size_t
jg compat_sys_rt_sigsuspend
.globl sys32_pread64_wrapper
sys32_pread64_wrapper:
@ -1475,3 +1483,122 @@ sys_inotify_rm_watch_wrapper:
lgfr %r2,%r2 # int
llgfr %r3,%r3 # u32
jg sys_inotify_rm_watch
.globl compat_sys_openat_wrapper
compat_sys_openat_wrapper:
llgfr %r2,%r2 # unsigned int
llgtr %r3,%r3 # const char *
lgfr %r4,%r4 # int
lgfr %r5,%r5 # int
jg compat_sys_openat
.globl sys_mkdirat_wrapper
sys_mkdirat_wrapper:
lgfr %r2,%r2 # int
llgtr %r3,%r3 # const char *
lgfr %r4,%r4 # int
jg sys_mkdirat
.globl sys_mknodat_wrapper
sys_mknodat_wrapper:
lgfr %r2,%r2 # int
llgtr %r3,%r3 # const char *
lgfr %r4,%r4 # int
llgfr %r5,%r5 # unsigned int
jg sys_mknodat
.globl sys_fchownat_wrapper
sys_fchownat_wrapper:
lgfr %r2,%r2 # int
llgtr %r3,%r3 # const char *
llgfr %r4,%r4 # uid_t
llgfr %r5,%r5 # gid_t
lgfr %r6,%r6 # int
jg sys_fchownat
.globl compat_sys_futimesat_wrapper
compat_sys_futimesat_wrapper:
llgfr %r2,%r2 # unsigned int
llgtr %r3,%r3 # char *
llgtr %r4,%r4 # struct timeval *
jg compat_sys_futimesat
.globl compat_sys_newfstatat_wrapper
compat_sys_newfstatat_wrapper:
llgfr %r2,%r2 # unsigned int
llgtr %r3,%r3 # char *
llgtr %r4,%r4 # struct stat *
lgfr %r5,%r5 # int
jg compat_sys_newfstatat
.globl sys_unlinkat_wrapper
sys_unlinkat_wrapper:
lgfr %r2,%r2 # int
llgtr %r3,%r3 # const char *
lgfr %r4,%r4 # int
jg sys_unlinkat
.globl sys_renameat_wrapper
sys_renameat_wrapper:
lgfr %r2,%r2 # int
llgtr %r3,%r3 # const char *
lgfr %r4,%r4 # int
llgtr %r5,%r5 # const char *
jg sys_renameat
.globl sys_linkat_wrapper
sys_linkat_wrapper:
lgfr %r2,%r2 # int
llgtr %r3,%r3 # const char *
lgfr %r4,%r4 # int
llgtr %r5,%r5 # const char *
jg sys_linkat
.globl sys_symlinkat_wrapper
sys_symlinkat_wrapper:
llgtr %r2,%r2 # const char *
lgfr %r3,%r3 # int
llgtr %r4,%r4 # const char *
jg sys_symlinkat
.globl sys_readlinkat_wrapper
sys_readlinkat_wrapper:
lgfr %r2,%r2 # int
llgtr %r3,%r3 # const char *
llgtr %r4,%r4 # char *
lgfr %r5,%r5 # int
jg sys_readlinkat
.globl sys_fchmodat_wrapper
sys_fchmodat_wrapper:
lgfr %r2,%r2 # int
llgtr %r3,%r3 # const char *
llgfr %r4,%r4 # mode_t
jg sys_fchmodat
.globl sys_faccessat_wrapper
sys_faccessat_wrapper:
lgfr %r2,%r2 # int
llgtr %r3,%r3 # const char *
lgfr %r4,%r4 # int
jg sys_faccessat
.globl compat_sys_pselect6_wrapper
compat_sys_pselect6_wrapper:
lgfr %r2,%r2 # int
llgtr %r3,%r3 # fd_set *
llgtr %r4,%r4 # fd_set *
llgtr %r5,%r5 # fd_set *
llgtr %r6,%r6 # struct timespec *
llgt %r0,164(%r15) # void *
stg %r0,160(%r15)
jg compat_sys_pselect6
.globl compat_sys_ppoll_wrapper
compat_sys_ppoll_wrapper:
llgtr %r2,%r2 # struct pollfd *
llgfr %r3,%r3 # unsigned int
llgtr %r4,%r4 # struct timespec *
llgtr %r5,%r5 # const sigset_t *
llgfr %r6,%r6 # size_t
jg compat_sys_ppoll
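The new wrappers follow the convention used throughout this file: 31-bit user space leaves only the low 32 bits of each register defined, so before branching to the 64-bit system call the wrapper widens every argument. lgfr sign-extends signed ints, llgfr zero-extends unsigned values, and llgtr zero-extends a user pointer while clearing the 31-bit address-space bit; compat_sys_pselect6 additionally widens its sixth argument, which arrives on the caller's stack rather than in a register. A small stand-alone C sketch of the three widenings (sample values are arbitrary):

#include <stdio.h>
#include <stdint.h>

static uint64_t lgfr(uint32_t r)  { return (uint64_t)(int64_t)(int32_t)r; }  /* sign-extend int      */
static uint64_t llgfr(uint32_t r) { return (uint64_t)r; }                    /* zero-extend unsigned */
static uint64_t llgtr(uint32_t r) { return (uint64_t)(r & 0x7fffffffu); }    /* 31-bit user pointer  */

int main(void)
{
        printf("%llx\n", (unsigned long long)lgfr(0xfffffffeu));   /* fffffffffffffffe (-2) */
        printf("%llx\n", (unsigned long long)llgfr(0xfffffffeu));  /* 00000000fffffffe      */
        printf("%llx\n", (unsigned long long)llgtr(0xdeadbeefu));  /* 000000005eadbeef      */
        return 0;
}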

View File

@ -2,8 +2,7 @@
* arch/s390/kernel/entry.S
* S390 low-level entry points.
*
* S390 version
* Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Copyright (C) IBM Corp. 1999,2006
* Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com),
* Hartmut Penner (hp@de.ibm.com),
* Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com),
@ -50,9 +49,10 @@ SP_ILC = STACK_FRAME_OVERHEAD + __PT_ILC
SP_TRAP = STACK_FRAME_OVERHEAD + __PT_TRAP
SP_SIZE = STACK_FRAME_OVERHEAD + __PT_SIZE
_TIF_WORK_SVC = (_TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_MCCK_PENDING | \
_TIF_RESTART_SVC | _TIF_SINGLE_STEP )
_TIF_WORK_INT = (_TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_MCCK_PENDING)
_TIF_WORK_SVC = (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK | _TIF_NEED_RESCHED | \
_TIF_MCCK_PENDING | _TIF_RESTART_SVC | _TIF_SINGLE_STEP )
_TIF_WORK_INT = (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK | _TIF_NEED_RESCHED | \
_TIF_MCCK_PENDING)
STACK_SHIFT = PAGE_SHIFT + THREAD_ORDER
STACK_SIZE = 1 << STACK_SHIFT
@ -251,8 +251,8 @@ sysc_work:
bo BASED(sysc_mcck_pending)
tm __TI_flags+3(%r9),_TIF_NEED_RESCHED
bo BASED(sysc_reschedule)
tm __TI_flags+3(%r9),_TIF_SIGPENDING
bo BASED(sysc_sigpending)
tm __TI_flags+3(%r9),(_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK)
bnz BASED(sysc_sigpending)
tm __TI_flags+3(%r9),_TIF_RESTART_SVC
bo BASED(sysc_restart)
tm __TI_flags+3(%r9),_TIF_SINGLE_STEP
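The switch from bo to bnz is needed because the test-under-mask now covers two flag bits: tm reports whether the selected bits are all zero, mixed, or all ones; bo branches only in the all-ones case, while bnz branches whenever at least one bit is set, which is what "either signal work flag pending" requires. In plain C the distinction looks like this (the bit values are illustrative, not the kernel's):

#include <stdio.h>

#define TIF_SIGPENDING      0x02   /* illustrative bit positions */
#define TIF_RESTORE_SIGMASK 0x04

int main(void)
{
        unsigned flags = TIF_RESTORE_SIGMASK;                 /* only one of the two bits set */
        unsigned mask  = TIF_SIGPENDING | TIF_RESTORE_SIGMASK;

        int take_bo  = ((flags & mask) == mask);              /* tm ; bo  : all tested bits one */
        int take_bnz = ((flags & mask) != 0);                 /* tm ; bnz : any tested bit one  */

        printf("bo=%d bnz=%d\n", take_bo, take_bnz);          /* prints: bo=0 bnz=1 */
        return 0;
}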
@ -276,12 +276,11 @@ sysc_mcck_pending:
br %r1 # TIF bit will be cleared by handler
#
# _TIF_SIGPENDING is set, call do_signal
# _TIF_SIGPENDING or _TIF_RESTORE_SIGMASK is set, call do_signal
#
sysc_sigpending:
ni __TI_flags+3(%r9),255-_TIF_SINGLE_STEP # clear TIF_SINGLE_STEP
la %r2,SP_PTREGS(%r15) # load pt_regs
sr %r3,%r3 # clear *oldset
l %r1,BASED(.Ldo_signal)
basr %r14,%r1 # call do_signal
tm __TI_flags+3(%r9),_TIF_RESTART_SVC
@ -397,30 +396,6 @@ sys_rt_sigreturn_glue:
l %r1,BASED(.Lrt_sigreturn)
br %r1 # branch to sys_sigreturn
#
# sigsuspend and rt_sigsuspend need pt_regs as an additional
# parameter and they have to skip the store of %r2 into the
# user register %r2 because the return value was set in
# sigsuspend and rt_sigsuspend already and must not be overwritten!
#
sys_sigsuspend_glue:
lr %r5,%r4 # move mask back
lr %r4,%r3 # move history1 parameter
lr %r3,%r2 # move history0 parameter
la %r2,SP_PTREGS(%r15) # load pt_regs as first parameter
l %r1,BASED(.Lsigsuspend)
la %r14,4(%r14) # skip store of return value
br %r1 # branch to sys_sigsuspend
sys_rt_sigsuspend_glue:
lr %r4,%r3 # move sigsetsize parameter
lr %r3,%r2 # move unewset parameter
la %r2,SP_PTREGS(%r15) # load pt_regs as first parameter
l %r1,BASED(.Lrt_sigsuspend)
la %r14,4(%r14) # skip store of return value
br %r1 # branch to sys_rt_sigsuspend
sys_sigaltstack_glue:
la %r4,SP_PTREGS(%r15) # load pt_regs as parameter
l %r1,BASED(.Lsigaltstack)
@ -604,15 +579,16 @@ io_work:
lr %r15,%r1
#
# One of the work bits is on. Find out which one.
# Checked are: _TIF_SIGPENDING, _TIF_NEED_RESCHED and _TIF_MCCK_PENDING
# Checked are: _TIF_SIGPENDING, _TIF_RESTORE_SIGMASK, _TIF_NEED_RESCHED
# and _TIF_MCCK_PENDING
#
io_work_loop:
tm __TI_flags+3(%r9),_TIF_MCCK_PENDING
bo BASED(io_mcck_pending)
tm __TI_flags+3(%r9),_TIF_NEED_RESCHED
bo BASED(io_reschedule)
tm __TI_flags+3(%r9),_TIF_SIGPENDING
bo BASED(io_sigpending)
tm __TI_flags+3(%r9),(_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK)
bnz BASED(io_sigpending)
b BASED(io_leave)
#
@ -636,12 +612,11 @@ io_reschedule:
b BASED(io_work_loop)
#
# _TIF_SIGPENDING is set, call do_signal
# _TIF_SIGPENDING or _TIF_RESTORE_SIGMASK is set, call do_signal
#
io_sigpending:
stosm __SF_EMPTY(%r15),0x03 # reenable interrupts
la %r2,SP_PTREGS(%r15) # load pt_regs
sr %r3,%r3 # clear *oldset
l %r1,BASED(.Ldo_signal)
basr %r14,%r1 # call do_signal
stnsm __SF_EMPTY(%r15),0xfc # disable I/O and ext. interrupts

View File

@ -1,9 +1,8 @@
/*
* arch/s390/kernel/entry.S
* arch/s390/kernel/entry64.S
* S390 low-level entry points.
*
* S390 version
* Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Copyright (C) IBM Corp. 1999,2006
* Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com),
* Hartmut Penner (hp@de.ibm.com),
* Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com),
@ -53,9 +52,10 @@ SP_SIZE = STACK_FRAME_OVERHEAD + __PT_SIZE
STACK_SHIFT = PAGE_SHIFT + THREAD_ORDER
STACK_SIZE = 1 << STACK_SHIFT
_TIF_WORK_SVC = (_TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_MCCK_PENDING | \
_TIF_RESTART_SVC | _TIF_SINGLE_STEP )
_TIF_WORK_INT = (_TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_MCCK_PENDING)
_TIF_WORK_SVC = (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK | _TIF_NEED_RESCHED | \
_TIF_MCCK_PENDING | _TIF_RESTART_SVC | _TIF_SINGLE_STEP )
_TIF_WORK_INT = (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK | _TIF_NEED_RESCHED | \
_TIF_MCCK_PENDING)
#define BASED(name) name-system_call(%r13)
@ -249,8 +249,8 @@ sysc_work:
jo sysc_mcck_pending
tm __TI_flags+7(%r9),_TIF_NEED_RESCHED
jo sysc_reschedule
tm __TI_flags+7(%r9),_TIF_SIGPENDING
jo sysc_sigpending
tm __TI_flags+7(%r9),(_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK)
jnz sysc_sigpending
tm __TI_flags+7(%r9),_TIF_RESTART_SVC
jo sysc_restart
tm __TI_flags+7(%r9),_TIF_SINGLE_STEP
@ -272,12 +272,11 @@ sysc_mcck_pending:
jg s390_handle_mcck # TIF bit will be cleared by handler
#
# _TIF_SIGPENDING is set, call do_signal
# _TIF_SIGPENDING or _TIF_RESTORE_SIGMASK is set, call do_signal
#
sysc_sigpending:
ni __TI_flags+7(%r9),255-_TIF_SINGLE_STEP # clear TIF_SINGLE_STEP
la %r2,SP_PTREGS(%r15) # load pt_regs
sgr %r3,%r3 # clear *oldset
brasl %r14,do_signal # call do_signal
tm __TI_flags+7(%r9),_TIF_RESTART_SVC
jo sysc_restart
@ -414,52 +413,6 @@ sys32_rt_sigreturn_glue:
jg sys32_rt_sigreturn # branch to sys32_sigreturn
#endif
#
# sigsuspend and rt_sigsuspend need pt_regs as an additional
# parameter and they have to skip the store of %r2 into the
# user register %r2 because the return value was set in
# sigsuspend and rt_sigsuspend already and must not be overwritten!
#
sys_sigsuspend_glue:
lgr %r5,%r4 # move mask back
lgr %r4,%r3 # move history1 parameter
lgr %r3,%r2 # move history0 parameter
la %r2,SP_PTREGS(%r15) # load pt_regs as first parameter
la %r14,6(%r14) # skip store of return value
jg sys_sigsuspend # branch to sys_sigsuspend
#ifdef CONFIG_COMPAT
sys32_sigsuspend_glue:
llgfr %r4,%r4 # unsigned long
lgr %r5,%r4 # move mask back
lgfr %r3,%r3 # int
lgr %r4,%r3 # move history1 parameter
lgfr %r2,%r2 # int
lgr %r3,%r2 # move history0 parameter
la %r2,SP_PTREGS(%r15) # load pt_regs as first parameter
la %r14,6(%r14) # skip store of return value
jg sys32_sigsuspend # branch to sys32_sigsuspend
#endif
sys_rt_sigsuspend_glue:
lgr %r4,%r3 # move sigsetsize parameter
lgr %r3,%r2 # move unewset parameter
la %r2,SP_PTREGS(%r15) # load pt_regs as first parameter
la %r14,6(%r14) # skip store of return value
jg sys_rt_sigsuspend # branch to sys_rt_sigsuspend
#ifdef CONFIG_COMPAT
sys32_rt_sigsuspend_glue:
llgfr %r3,%r3 # size_t
lgr %r4,%r3 # move sigsetsize parameter
llgtr %r2,%r2 # sigset_emu31_t *
lgr %r3,%r2 # move unewset parameter
la %r2,SP_PTREGS(%r15) # load pt_regs as first parameter
la %r14,6(%r14) # skip store of return value
jg sys32_rt_sigsuspend # branch to sys32_rt_sigsuspend
#endif
sys_sigaltstack_glue:
la %r4,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys_sigaltstack # branch to sys_sigaltstack
@ -646,15 +599,16 @@ io_work:
lgr %r15,%r1
#
# One of the work bits is on. Find out which one.
# Checked are: _TIF_SIGPENDING, _TIF_NEED_RESCHED and _TIF_MCCK_PENDING
# Checked are: _TIF_SIGPENDING, _TIF_RESTORE_SIGMASK, _TIF_NEED_RESCHED
# and _TIF_MCCK_PENDING
#
io_work_loop:
tm __TI_flags+7(%r9),_TIF_MCCK_PENDING
jo io_mcck_pending
tm __TI_flags+7(%r9),_TIF_NEED_RESCHED
jo io_reschedule
tm __TI_flags+7(%r9),_TIF_SIGPENDING
jo io_sigpending
tm __TI_flags+7(%r9),(_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK)
jnz io_sigpending
j io_leave
#
@ -676,12 +630,11 @@ io_reschedule:
j io_work_loop
#
# _TIF_SIGPENDING is set, call do_signal
# _TIF_SIGPENDING or _TIF_RESTORE_SIGMASK is set, call do_signal
#
io_sigpending:
stosm __SF_EMPTY(%r15),0x03 # reenable interrupts
la %r2,SP_PTREGS(%r15) # load pt_regs
slgr %r3,%r3 # clear *oldset
brasl %r14,do_signal # call do_signal
stnsm __SF_EMPTY(%r15),0xfc # disable I/O and ext. interrupts
j io_work_loop

View File

@ -1,8 +1,7 @@
/*
* arch/s390/kernel/signal.c
*
* S390 version
* Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Copyright (C) IBM Corp. 1999,2006
* Author(s): Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com)
*
* Based on Intel version
@ -51,60 +50,24 @@ typedef struct
struct ucontext uc;
} rt_sigframe;
int do_signal(struct pt_regs *regs, sigset_t *oldset);
/*
* Atomically swap in the new signal mask, and wait for a signal.
*/
asmlinkage int
sys_sigsuspend(struct pt_regs * regs, int history0, int history1,
old_sigset_t mask)
sys_sigsuspend(int history0, int history1, old_sigset_t mask)
{
sigset_t saveset;
mask &= _BLOCKABLE;
spin_lock_irq(&current->sighand->siglock);
saveset = current->blocked;
current->saved_sigmask = current->blocked;
siginitset(&current->blocked, mask);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
regs->gprs[2] = -EINTR;
while (1) {
set_current_state(TASK_INTERRUPTIBLE);
schedule();
if (do_signal(regs, &saveset))
return -EINTR;
}
}
current->state = TASK_INTERRUPTIBLE;
schedule();
set_thread_flag(TIF_RESTORE_SIGMASK);
asmlinkage long
sys_rt_sigsuspend(struct pt_regs *regs, sigset_t __user *unewset,
size_t sigsetsize)
{
sigset_t saveset, newset;
/* XXX: Don't preclude handling different sized sigset_t's. */
if (sigsetsize != sizeof(sigset_t))
return -EINVAL;
if (copy_from_user(&newset, unewset, sizeof(newset)))
return -EFAULT;
sigdelsetmask(&newset, ~_BLOCKABLE);
spin_lock_irq(&current->sighand->siglock);
saveset = current->blocked;
current->blocked = newset;
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
regs->gprs[2] = -EINTR;
while (1) {
set_current_state(TASK_INTERRUPTIBLE);
schedule();
if (do_signal(regs, &saveset))
return -EINTR;
}
return -ERESTARTNOHAND;
}
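With TIF_RESTORE_SIGMASK, sys_sigsuspend no longer loops on do_signal itself: it parks the caller's mask in current->saved_sigmask, installs the temporary mask, sleeps, sets the flag, and returns -ERESTARTNOHAND so the signal-delivery path runs on the way back to user space. A condensed C-style sketch of the flow (set_blocked and sleep_interruptible are placeholder helpers, not kernel functions):

/* Sketch only: the real code is in the hunks above and in do_signal(). */
long sigsuspend_like(sigset_t tmp_mask)
{
        current->saved_sigmask = current->blocked;  /* remember the caller's mask   */
        set_blocked(&tmp_mask);                     /* wait with the temporary mask */
        sleep_interruptible();                      /* until a signal wakes us      */
        set_thread_flag(TIF_RESTORE_SIGMASK);       /* defer restoring the old mask */
        return -ERESTARTNOHAND;                     /* force the signal path to run */
}

/* do_signal() then consumes the flag: if a handler is set up, the saved
 * mask ends up in the signal frame and sigreturn restores it, so the flag
 * is simply cleared; if no signal is delivered, the saved mask is put back
 * directly with sigprocmask(SIG_SETMASK, &current->saved_sigmask, NULL). */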
asmlinkage long
@ -306,8 +269,8 @@ static inline int map_signal(int sig)
return sig;
}
static void setup_frame(int sig, struct k_sigaction *ka,
sigset_t *set, struct pt_regs * regs)
static int setup_frame(int sig, struct k_sigaction *ka,
sigset_t *set, struct pt_regs * regs)
{
sigframe __user *frame;
@ -355,13 +318,14 @@ static void setup_frame(int sig, struct k_sigaction *ka,
/* Place signal number on stack to allow backtrace from handler. */
if (__put_user(regs->gprs[2], (int __user *) &frame->signo))
goto give_sigsegv;
return;
return 0;
give_sigsegv:
force_sigsegv(sig, current);
return -EFAULT;
}
static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
sigset_t *set, struct pt_regs * regs)
{
int err = 0;
@ -409,32 +373,39 @@ static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
regs->gprs[2] = map_signal(sig);
regs->gprs[3] = (unsigned long) &frame->info;
regs->gprs[4] = (unsigned long) &frame->uc;
return;
return 0;
give_sigsegv:
force_sigsegv(sig, current);
return -EFAULT;
}
/*
* OK, we're invoking a handler
*/
static void
static int
handle_signal(unsigned long sig, struct k_sigaction *ka,
siginfo_t *info, sigset_t *oldset, struct pt_regs * regs)
{
int ret;
/* Set up the stack frame */
if (ka->sa.sa_flags & SA_SIGINFO)
setup_rt_frame(sig, ka, info, oldset, regs);
ret = setup_rt_frame(sig, ka, info, oldset, regs);
else
setup_frame(sig, ka, oldset, regs);
ret = setup_frame(sig, ka, oldset, regs);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
if (ret == 0) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
return ret;
}
/*
@ -446,12 +417,13 @@ handle_signal(unsigned long sig, struct k_sigaction *ka,
* the kernel can handle, and then we build all the user-level signal handling
* stack-frames in one go after that.
*/
int do_signal(struct pt_regs *regs, sigset_t *oldset)
void do_signal(struct pt_regs *regs)
{
unsigned long retval = 0, continue_addr = 0, restart_addr = 0;
siginfo_t info;
int signr;
struct k_sigaction ka;
sigset_t *oldset;
/*
* We want the common case to go fast, which
@ -460,9 +432,11 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
* if so.
*/
if (!user_mode(regs))
return 1;
return;
if (!oldset)
if (test_thread_flag(TIF_RESTORE_SIGMASK))
oldset = &current->saved_sigmask;
else
oldset = &current->blocked;
/* Are we from a system call? */
@ -473,12 +447,14 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
/* Prepare for system call restart. We do this here so that a
debugger will see the already changed PSW. */
if (retval == -ERESTARTNOHAND ||
retval == -ERESTARTSYS ||
retval == -ERESTARTNOINTR) {
switch (retval) {
case -ERESTARTNOHAND:
case -ERESTARTSYS:
case -ERESTARTNOINTR:
regs->gprs[2] = regs->orig_gpr2;
regs->psw.addr = restart_addr;
} else if (retval == -ERESTART_RESTARTBLOCK) {
break;
case -ERESTART_RESTARTBLOCK:
regs->gprs[2] = -EINTR;
}
}
@ -503,17 +479,38 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
/* Whee! Actually deliver the signal. */
#ifdef CONFIG_COMPAT
if (test_thread_flag(TIF_31BIT)) {
extern void handle_signal32(unsigned long sig,
struct k_sigaction *ka,
siginfo_t *info,
sigset_t *oldset,
struct pt_regs *regs);
handle_signal32(signr, &ka, &info, oldset, regs);
return 1;
extern int handle_signal32(unsigned long sig,
struct k_sigaction *ka,
siginfo_t *info,
sigset_t *oldset,
struct pt_regs *regs);
if (handle_signal32(
signr, &ka, &info, oldset, regs) == 0) {
if (test_thread_flag(TIF_RESTORE_SIGMASK))
clear_thread_flag(TIF_RESTORE_SIGMASK);
}
return;
}
#endif
handle_signal(signr, &ka, &info, oldset, regs);
return 1;
if (handle_signal(signr, &ka, &info, oldset, regs) == 0) {
/*
* A signal was successfully delivered; the saved
* sigmask will have been stored in the signal frame,
* and will be restored by sigreturn, so we can simply
* clear the TIF_RESTORE_SIGMASK flag.
*/
if (test_thread_flag(TIF_RESTORE_SIGMASK))
clear_thread_flag(TIF_RESTORE_SIGMASK);
}
return;
}
/*
* If there's no signal to deliver, we just put the saved sigmask back.
*/
if (test_thread_flag(TIF_RESTORE_SIGMASK)) {
clear_thread_flag(TIF_RESTORE_SIGMASK);
sigprocmask(SIG_SETMASK, &current->saved_sigmask, NULL);
}
/* Restart a different system call. */
@ -522,5 +519,4 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
regs->gprs[2] = __NR_restart_syscall;
set_thread_flag(TIF_RESTART_SVC);
}
return 0;
}

View File

@ -80,7 +80,7 @@ NI_SYSCALL /* old sgetmask syscall*/
NI_SYSCALL /* old ssetmask syscall*/
SYSCALL(sys_setreuid16,sys_ni_syscall,sys32_setreuid16_wrapper) /* old setreuid16 syscall */
SYSCALL(sys_setregid16,sys_ni_syscall,sys32_setregid16_wrapper) /* old setregid16 syscall */
SYSCALL(sys_sigsuspend_glue,sys_sigsuspend_glue,sys32_sigsuspend_glue)
SYSCALL(sys_sigsuspend,sys_sigsuspend,sys_sigsuspend_wrapper)
SYSCALL(sys_sigpending,sys_sigpending,compat_sys_sigpending_wrapper)
SYSCALL(sys_sethostname,sys_sethostname,sys32_sethostname_wrapper)
SYSCALL(sys_setrlimit,sys_setrlimit,compat_sys_setrlimit_wrapper) /* 75 */
@ -187,7 +187,7 @@ SYSCALL(sys_rt_sigprocmask,sys_rt_sigprocmask,sys32_rt_sigprocmask_wrapper) /* 1
SYSCALL(sys_rt_sigpending,sys_rt_sigpending,sys32_rt_sigpending_wrapper)
SYSCALL(sys_rt_sigtimedwait,sys_rt_sigtimedwait,compat_sys_rt_sigtimedwait_wrapper)
SYSCALL(sys_rt_sigqueueinfo,sys_rt_sigqueueinfo,sys32_rt_sigqueueinfo_wrapper)
SYSCALL(sys_rt_sigsuspend_glue,sys_rt_sigsuspend_glue,sys32_rt_sigsuspend_glue)
SYSCALL(sys_rt_sigsuspend,sys_rt_sigsuspend,compat_sys_rt_sigsuspend_wrapper)
SYSCALL(sys_pread64,sys_pread64,sys32_pread64_wrapper) /* 180 */
SYSCALL(sys_pwrite64,sys_pwrite64,sys32_pwrite64_wrapper)
SYSCALL(sys_chown16,sys_ni_syscall,sys32_chown16_wrapper) /* old chown16 syscall */
@ -293,5 +293,21 @@ SYSCALL(sys_waitid,sys_waitid,compat_sys_waitid_wrapper)
SYSCALL(sys_ioprio_set,sys_ioprio_set,sys_ioprio_set_wrapper)
SYSCALL(sys_ioprio_get,sys_ioprio_get,sys_ioprio_get_wrapper)
SYSCALL(sys_inotify_init,sys_inotify_init,sys_inotify_init)
SYSCALL(sys_inotify_add_watch,sys_inotify_add_watch,sys_inotify_add_watch_wrapper)
SYSCALL(sys_inotify_add_watch,sys_inotify_add_watch,sys_inotify_add_watch_wrapper) /* 285 */
SYSCALL(sys_inotify_rm_watch,sys_inotify_rm_watch,sys_inotify_rm_watch_wrapper)
NI_SYSCALL /* 287 sys_migrate_pages */
SYSCALL(sys_openat,sys_openat,compat_sys_openat_wrapper)
SYSCALL(sys_mkdirat,sys_mkdirat,sys_mkdirat_wrapper)
SYSCALL(sys_mknodat,sys_mknodat,sys_mknodat_wrapper) /* 290 */
SYSCALL(sys_fchownat,sys_fchownat,sys_fchownat_wrapper)
SYSCALL(sys_futimesat,sys_futimesat,compat_sys_futimesat_wrapper)
SYSCALL(sys_newfstatat,sys_newfstatat,compat_sys_newfstatat_wrapper)
SYSCALL(sys_unlinkat,sys_unlinkat,sys_unlinkat_wrapper)
SYSCALL(sys_renameat,sys_renameat,sys_renameat_wrapper) /* 295 */
SYSCALL(sys_linkat,sys_linkat,sys_linkat_wrapper)
SYSCALL(sys_symlinkat,sys_symlinkat,sys_symlinkat_wrapper)
SYSCALL(sys_readlinkat,sys_readlinkat,sys_readlinkat_wrapper)
SYSCALL(sys_fchmodat,sys_fchmodat,sys_fchmodat_wrapper)
SYSCALL(sys_faccessat,sys_faccessat,sys_faccessat_wrapper) /* 300 */
SYSCALL(sys_pselect6,sys_pselect6,compat_sys_pselect6_wrapper)
SYSCALL(sys_ppoll,sys_ppoll,compat_sys_ppoll_wrapper)
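The new table entries wire up the *at family introduced in this development cycle: each call takes a directory file descriptor and resolves its path argument relative to it (or to the working directory when AT_FDCWD is passed), which lets threaded code avoid races on the current directory. A minimal user-space usage sketch (the paths are examples only):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int dfd = open("/etc", O_RDONLY | O_DIRECTORY);
        if (dfd < 0)
                return 1;

        /* resolved as /etc/hostname, independent of the current directory */
        int fd = openat(dfd, "hostname", O_RDONLY);
        if (fd >= 0) {
                printf("opened /etc/hostname via openat()\n");
                close(fd);
        }
        close(dfd);
        return 0;
}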

View File

@ -61,9 +61,18 @@ extern unsigned long wall_jiffies;
*/
unsigned long long sched_clock(void)
{
return ((get_clock() - jiffies_timer_cc) * 1000) >> 12;
return ((get_clock() - jiffies_timer_cc) * 125) >> 9;
}
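The s390 TOD clock counts in units of 1/4096 microsecond (bit 51 is one microsecond), so nanoseconds = delta * 1000 / 4096; the new expression uses the reduced fraction 125/512, which yields the same value but keeps the intermediate 64-bit product eight times smaller before the shift, postponing the point at which the multiplication would overflow. A quick stand-alone check of the equivalence:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* delta in TOD units (1/4096 us); the value is arbitrary */
        uint64_t delta = 1234567890123ULL;

        uint64_t old_ns = (delta * 1000) >> 12;   /* * 1000 / 4096         */
        uint64_t new_ns = (delta * 125) >> 9;     /* * 125  / 512, reduced */

        printf("%llu %llu\n",
               (unsigned long long)old_ns, (unsigned long long)new_ns);  /* identical */
        return 0;
}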
/*
* Monotonic_clock - returns # of nanoseconds passed since time_init()
*/
unsigned long long monotonic_clock(void)
{
return sched_clock();
}
EXPORT_SYMBOL(monotonic_clock);
void tod_to_timeval(__u64 todval, struct timespec *xtime)
{
unsigned long long sec;

View File

@ -6,4 +6,4 @@ EXTRA_AFLAGS := -traditional
lib-y += delay.o string.o
lib-y += $(if $(CONFIG_64BIT),uaccess64.o,uaccess.o)
lib-$(CONFIG_SMP) += spinlock.o
lib-$(CONFIG_SMP) += spinlock.o

Some files were not shown because too many files have changed in this diff