Merge ../torvalds-2.6/

Greg Kroah-Hartman 2006-01-06 12:59:59 -08:00
commit ccf18968b1
631 changed files with 62972 additions and 12433 deletions

View File

@ -263,14 +263,8 @@ A flag in the bio structure, BIO_BARRIER is used to identify a barrier i/o.
The generic i/o scheduler would make sure that it places the barrier request and
all other requests coming after it after all the previous requests in the
queue. Barriers may be implemented in different ways depending on the
driver. A SCSI driver for example could make use of ordered tags to
preserve the necessary ordering with a lower impact on throughput. For IDE
this might be two sync cache flush: a pre and post flush when encountering
a barrier write.
There is a provision for queues to indicate what kind of barriers they
can provide. This is as of yet unmerged, details will be added here once it
is in the kernel.
driver. For more details regarding I/O barriers, please read barrier.txt
in this directory.
1.2.2 Request Priority/Latency

View File

@ -47,17 +47,6 @@ Who: Paul E. McKenney <paulmck@us.ibm.com>
---------------------------
What: IEEE1394 Audio and Music Data Transmission Protocol driver,
Connection Management Procedures driver
When: November 2005
Files: drivers/ieee1394/{amdtp,cmp}*
Why: These are incomplete, have never worked, and are better implemented
in userland via raw1394 (see http://freebob.sourceforge.net/ for
example.)
Who: Jody McIntyre <scjody@steamballoon.com>
---------------------------
What: raw1394: requests of type RAW1394_REQ_ISO_SEND, RAW1394_REQ_ISO_LISTEN
When: November 2005
Why: Deprecated in favour of the new ioctl-based rawiso interface, which is

View File

@ -12,10 +12,14 @@ cifs.txt
- description of the CIFS filesystem
coda.txt
- description of the CODA filesystem.
configfs/
- directory containing configfs documentation and example code.
cramfs.txt
- info on the cram filesystem for small storage (ROMs etc)
devfs/
- directory containing devfs documentation.
dlmfs.txt
- info on the userspace interface to the OCFS2 DLM.
ext2.txt
- info, mount options and specifications for the Ext2 filesystem.
hpfs.txt
@ -30,6 +34,8 @@ ntfs.txt
- info and mount options for the NTFS filesystem (Windows NT).
proc.txt
- info on Linux's /proc filesystem.
ocfs2.txt
- info and mount options for the OCFS2 clustered filesystem.
romfs.txt
- Description of the ROMFS filesystem.
smbfs.txt

View File

@ -0,0 +1,434 @@
configfs - Userspace-driven kernel object configuration.
Joel Becker <joel.becker@oracle.com>
Updated: 31 March 2005
Copyright (c) 2005 Oracle Corporation,
Joel Becker <joel.becker@oracle.com>
[What is configfs?]
configfs is a ram-based filesystem that provides the converse of
sysfs's functionality. Where sysfs is a filesystem-based view of
kernel objects, configfs is a filesystem-based manager of kernel
objects, or config_items.
With sysfs, an object is created in kernel (for example, when a device
is discovered) and it is registered with sysfs. Its attributes then
appear in sysfs, allowing userspace to read the attributes via
readdir(3)/read(2). It may allow some attributes to be modified via
write(2). The important point is that the object is created and
destroyed in kernel, the kernel controls the lifecycle of the sysfs
representation, and sysfs is merely a window on all this.
A configfs config_item is created via an explicit userspace operation:
mkdir(2). It is destroyed via rmdir(2). The attributes appear at
mkdir(2) time, and can be read or modified via read(2) and write(2).
As with sysfs, readdir(3) queries the list of items and/or attributes.
symlink(2) can be used to group items together. Unlike sysfs, the
lifetime of the representation is completely driven by userspace. The
kernel modules backing the items must respond to this.
Both sysfs and configfs can and should exist together on the same
system. One is not a replacement for the other.
[Using configfs]
configfs can be compiled as a module or into the kernel. You can access
it by doing
mount -t configfs none /config
The configfs tree will be empty unless client modules are also loaded.
These are modules that register their item types with configfs as
subsystems. Once a client subsystem is loaded, it will appear as a
subdirectory (or more than one) under /config. Like sysfs, the
configfs tree is always there, whether mounted on /config or not.
An item is created via mkdir(2). The item's attributes will also
appear at this time. readdir(3) can determine what the attributes are,
read(2) can query their default values, and write(2) can store new
values. Like sysfs, attributes should be ASCII text files, preferably
with only one value per file. The same efficiency caveats from sysfs
apply. Don't mix more than one attribute in one attribute file.
Like sysfs, configfs expects write(2) to store the entire buffer at
once. When writing to configfs attributes, userspace processes should
first read the entire file, modify the portions they wish to change, and
then write the entire buffer back. Attribute files have a maximum size
of one page (PAGE_SIZE, 4096 on i386).
When an item needs to be destroyed, remove it with rmdir(2). An
item cannot be destroyed if any other item has a link to it (via
symlink(2)). Links can be removed via unlink(2).
[Configuring FakeNBD: an Example]
Imagine there's a Network Block Device (NBD) driver that allows you to
access remote block devices. Call it FakeNBD. FakeNBD uses configfs
for its configuration. Obviously, there will be a nice program that
sysadmins use to configure FakeNBD, but somehow that program has to tell
the driver about it. Here's where configfs comes in.
When the FakeNBD driver is loaded, it registers itself with configfs.
readdir(3) sees this just fine:
# ls /config
fakenbd
A fakenbd connection can be created with mkdir(2). The name is
arbitrary, but likely the tool will make some use of the name. Perhaps
it is a uuid or a disk name:
# mkdir /config/fakenbd/disk1
# ls /config/fakenbd/disk1
target device rw
The target attribute contains the IP address of the server FakeNBD will
connect to. The device attribute is the device on the server.
Predictably, the rw attribute determines whether the connection is
read-only or read-write.
# echo 10.0.0.1 > /config/fakenbd/disk1/target
# echo /dev/sda1 > /config/fakenbd/disk1/device
# echo 1 > /config/fakenbd/disk1/rw
That's it. That's all there is. Now the device is configured, via the
shell no less.
[Coding With configfs]
Every object in configfs is a config_item. A config_item reflects an
object in the subsystem. It has attributes that match values on that
object. configfs handles the filesystem representation of that object
and its attributes, allowing the subsystem to ignore all but the
basic show/store interaction.
Items are created and destroyed inside a config_group. A group is a
collection of items that share the same attributes and operations.
Items are created by mkdir(2) and removed by rmdir(2), but configfs
handles that. The group has a set of operations to perform these tasks.
A subsystem is the top level of a client module. During initialization,
the client module registers the subsystem with configfs, and the subsystem
appears as a directory at the top of the configfs filesystem. A
subsystem is also a config_group, and can do everything a config_group
can.
[struct config_item]
struct config_item {
char *ci_name;
char ci_namebuf[UOBJ_NAME_LEN];
struct kref ci_kref;
struct list_head ci_entry;
struct config_item *ci_parent;
struct config_group *ci_group;
struct config_item_type *ci_type;
struct dentry *ci_dentry;
};
void config_item_init(struct config_item *);
void config_item_init_type_name(struct config_item *,
const char *name,
struct config_item_type *type);
struct config_item *config_item_get(struct config_item *);
void config_item_put(struct config_item *);
Generally, struct config_item is embedded in a container structure, a
structure that actually represents what the subsystem is doing. The
config_item portion of that structure is how the object interacts with
configfs.
Whether statically defined in a source file or created by a parent
config_group, a config_item must have one of the _init() functions
called on it. This initializes the reference count and sets up the
appropriate fields.
All users of a config_item should have a reference on it via
config_item_get(), and drop the reference when they are done via
config_item_put().
By itself, a config_item cannot do much more than appear in configfs.
Usually a subsystem wants the item to display and/or store attributes,
among other things. For that, it needs a type.
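For illustration, a client module would typically embed the config_item
in its own structure and recover the container with container_of(). A
minimal sketch (the fake_device name and its to_fake_device() helper are
invented for this document, not part of the configfs API):

struct fake_device {
	struct config_item item;	/* configfs representation of this object */
	int speed;			/* subsystem-specific state */
};

static inline struct fake_device *to_fake_device(struct config_item *item)
{
	return item ? container_of(item, struct fake_device, item) : NULL;
}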
[struct config_item_type]
struct configfs_item_operations {
void (*release)(struct config_item *);
ssize_t (*show_attribute)(struct config_item *,
struct configfs_attribute *,
char *);
ssize_t (*store_attribute)(struct config_item *,
struct configfs_attribute *,
const char *, size_t);
int (*allow_link)(struct config_item *src,
struct config_item *target);
int (*drop_link)(struct config_item *src,
struct config_item *target);
};
struct config_item_type {
struct module *ct_owner;
struct configfs_item_operations *ct_item_ops;
struct configfs_group_operations *ct_group_ops;
struct configfs_attribute **ct_attrs;
};
The most basic function of a config_item_type is to define what
operations can be performed on a config_item. All items that have been
allocated dynamically will need to provide the ct_item_ops->release()
method. This method is called when the config_item's reference count
reaches zero. Items that wish to display an attribute need to provide
the ct_item_ops->show_attribute() method. Similarly, storing a new
attribute value uses the store_attribute() method.
[struct configfs_attribute]
struct configfs_attribute {
char *ca_name;
struct module *ca_owner;
mode_t ca_mode;
};
When a config_item wants an attribute to appear as a file in the item's
configfs directory, it must define a configfs_attribute describing it.
It then adds the attribute to the NULL-terminated array
config_item_type->ct_attrs. When the item appears in configfs, the
attribute file will appear with the configfs_attribute->ca_name
filename. configfs_attribute->ca_mode specifies the file permissions.
If an attribute is readable and the config_item provides a
ct_item_ops->show_attribute() method, that method will be called
whenever userspace asks for a read(2) on the attribute. The converse
will happen for write(2).
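For example, a read-write attribute for the hypothetical fake_device
above might be declared like this (a sketch only; the identifiers are
invented):

static struct configfs_attribute fake_device_attr_speed = {
	.ca_owner = THIS_MODULE,
	.ca_name  = "speed",
	.ca_mode  = S_IRUGO | S_IWUSR,
};

static struct configfs_attribute *fake_device_attrs[] = {
	&fake_device_attr_speed,
	NULL,	/* ct_attrs must be NULL-terminated */
};

The item's config_item_type would point ct_attrs at fake_device_attrs,
and its show_attribute()/store_attribute() methods would dispatch on
which configfs_attribute they are handed.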
[struct config_group]
A config_item cannot live in a vacuum. The only way one can be created
is via mkdir(2) on a config_group. This will trigger creation of a
child item.
struct config_group {
struct config_item cg_item;
struct list_head cg_children;
struct configfs_subsystem *cg_subsys;
struct config_group **default_groups;
};
void config_group_init(struct config_group *group);
void config_group_init_type_name(struct config_group *group,
const char *name,
struct config_item_type *type);
The config_group structure contains a config_item. Properly configuring
that item means that a group can behave as an item in its own right.
However, it can do more: it can create child items or groups. This is
accomplished via the group operations specified on the group's
config_item_type.
struct configfs_group_operations {
struct config_item *(*make_item)(struct config_group *group,
const char *name);
struct config_group *(*make_group)(struct config_group *group,
const char *name);
int (*commit_item)(struct config_item *item);
void (*drop_item)(struct config_group *group,
struct config_item *item);
};
A group creates child items by providing the
ct_group_ops->make_item() method. If provided, this method is called from mkdir(2) in the group's directory. The subsystem allocates a new
config_item (or more likely, its container structure), initializes it,
and returns it to configfs. Configfs will then populate the filesystem
tree to reflect the new item.
If the subsystem wants the child to be a group itself, the subsystem
provides ct_group_ops->make_group(). Everything else behaves the same,
using the group _init() functions on the group.
Finally, when userspace calls rmdir(2) on the item or group,
ct_group_ops->drop_item() is called. As a config_group is also a
config_item, a separate drop_group() method is not necessary.
The subsystem must config_item_put() the reference that was initialized
upon item allocation. If a subsystem has no work to do, it may omit
the ct_group_ops->drop_item() method, and configfs will call
config_item_put() on the item on behalf of the subsystem.
IMPORTANT: drop_item() is void, and as such cannot fail. When rmdir(2)
is called, configfs WILL remove the item from the filesystem tree
(assuming that it has no children to keep it busy). The subsystem is
responsible for responding to this. If the subsystem has references to
the item in other threads, the memory is safe. It may take some time
for the item to actually disappear from the subsystem's usage. But it
is gone from configfs.
A config_group cannot be removed while it still has child items. This
is implemented in the configfs rmdir(2) code. ->drop_item() will not be
called, as the item has not been dropped. rmdir(2) will fail, as the
directory is not empty.
[struct configfs_subsystem]
A subsystem must register itself, usually at module_init time. This
tells configfs to make the subsystem appear in the file tree.
struct configfs_subsystem {
struct config_group su_group;
struct semaphore su_sem;
};
int configfs_register_subsystem(struct configfs_subsystem *subsys);
void configfs_unregister_subsystem(struct configfs_subsystem *subsys);
A subsystem consists of a toplevel config_group and a semaphore.
The group is where child config_items are created. For a subsystem,
this group is usually defined statically. Before calling
configfs_register_subsystem(), the subsystem must have initialized the
group via the usual group _init() functions, and it must also have
initialized the semaphore.
When the register call returns, the subsystem is live, and it
will be visible via configfs. At that point, mkdir(2) can be called and
the subsystem must be ready for it.
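A minimal registration sequence, following the same pattern as
configfs_example.c below, might look like this sketch (fake_subsys and
fake_subsys_type are invented names; fake_subsys_type is assumed to be a
config_item_type defined elsewhere):

static struct configfs_subsystem fake_subsys = {
	.su_group = {
		.cg_item = {
			.ci_namebuf = "fake_subsys",
			.ci_type    = &fake_subsys_type,
		},
	},
};

static int __init fake_subsys_init(void)
{
	config_group_init(&fake_subsys.su_group);
	init_MUTEX(&fake_subsys.su_sem);	/* the subsystem semaphore */
	return configfs_register_subsystem(&fake_subsys);
}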
[An Example]
The best example of these basic concepts is the simple_children
subsystem/group and the simple_child item in configfs_example.c. It
shows a trivial object displaying and storing an attribute, and a simple
group creating and destroying these children.
[Hierarchy Navigation and the Subsystem Semaphore]
There is an extra bonus that configfs provides. The config_groups and
config_items are arranged in a hierarchy because they appear in a
filesystem. A subsystem must NEVER touch the filesystem
parts, but the subsystem might be interested in this hierarchy. For
this reason, the hierarchy is mirrored via the config_group->cg_children
and config_item->ci_parent structure members.
A subsystem can navigate the cg_children list and the ci_parent pointer
to see the tree created by the subsystem. This can race with configfs'
management of the hierarchy, so configfs uses the subsystem semaphore to
protect modifications. Whenever a subsystem wants to navigate the
hierarchy, it must do so under the protection of the subsystem
semaphore.
A subsystem will be prevented from acquiring the semaphore while a newly
allocated item has not been linked into this hierarchy. Similarly, it
will not be able to acquire the semaphore while a dropping item has not
yet been unlinked. This means that an item's ci_parent pointer will
never be NULL while the item is in configfs, and that an item will only
be in its parent's cg_children list for the same duration. This allows
a subsystem to trust ci_parent and cg_children while they hold the
semaphore.
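A sketch of such a walk over the top-level children of the hypothetical
fake_subsys subsystem, using the semaphore API of this era (down()/up()):

static void fake_subsys_walk_children(struct configfs_subsystem *subsys)
{
	struct config_item *child;

	down(&subsys->su_sem);	/* cg_children and ci_parent are stable while held */
	list_for_each_entry(child, &subsys->su_group.cg_children, ci_entry)
		printk(KERN_INFO "child item: %s\n", config_item_name(child));
	up(&subsys->su_sem);
}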
[Item Aggregation Via symlink(2)]
configfs provides a simple group via the group->item parent/child
relationship. Often, however, a larger environment requires aggregation
outside of the parent/child connection. This is implemented via
symlink(2).
A config_item may provide the ct_item_ops->allow_link() and
ct_item_ops->drop_link() methods. If the ->allow_link() method exists,
symlink(2) may be called with the config_item as the source of the link.
These links are only allowed between configfs config_items. Any
symlink(2) attempt outside the configfs filesystem will be denied.
When symlink(2) is called, the source config_item's ->allow_link()
method is called with itself and a target item. If the source item
allows linking to target item, it returns 0. A source item may wish to
reject a link if it only wants links to a certain type of object (say,
in its own subsystem).
When unlink(2) is called on the symbolic link, the source item is
notified via the ->drop_link() method. Like the ->drop_item() method,
this is a void function and cannot return failure. The subsystem is
responsible for responding to the change.
A config_item cannot be removed while it links to any other item, nor
can it be removed while an item links to it. Dangling symlinks are not
allowed in configfs.
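For example, an ->allow_link() method that only accepts targets of its
own type might look like this sketch (fake_device_type is assumed to be
the config_item_type describing the fake_device items above):

static int fake_device_allow_link(struct config_item *src,
				  struct config_item *target)
{
	/* Only accept links whose target is another fake_device item. */
	if (target->ci_type != &fake_device_type)
		return -EPERM;
	return 0;
}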
[Automatically Created Subgroups]
A new config_group may want to have two types of child config_items.
While this could be codified by magic names in ->make_item(), it is much
more explicit to have a method whereby userspace sees this divergence.
Rather than have a group where some items behave differently than
others, configfs provides a method whereby one or many subgroups are
automatically created inside the parent at its creation. Thus,
mkdir("parent) results in "parent", "parent/subgroup1", up through
"parent/subgroupN". Items of type 1 can now be created in
"parent/subgroup1", and items of type N can be created in
"parent/subgroupN".
These automatic subgroups, or default groups, do not preclude other
children of the parent group. If ct_group_ops->make_group() exists,
other child groups can be created on the parent group directly.
A configfs subsystem specifies default groups by filling in the
NULL-terminated array default_groups on the config_group structure.
Each group in that array is populated in the configfs tree at the same
time as the parent group. Similarly, they are removed at the same time
as the parent. No extra notification is provided. When a ->drop_item()
call notifies the subsystem that the parent group is going away, it
also means that every default group child of that parent group is going
away as well.
As a consequence of this, default_groups cannot be removed directly via
rmdir(2). They also are not considered when rmdir(2) on the parent
group is checking for children.
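Filling in default_groups might look like this sketch (all of the fake_*
names are invented; fake_state_group is assumed to be initialized
elsewhere, e.g. with config_group_init_type_name(), and fake_parent_type
to be defined elsewhere):

static struct config_group fake_state_group;

static struct config_group *fake_default_groups[] = {
	&fake_state_group,	/* populated under "parent" automatically */
	NULL,
};

static struct config_group fake_parent_group = {
	.cg_item = {
		.ci_namebuf = "parent",
		.ci_type    = &fake_parent_type,
	},
	.default_groups = fake_default_groups,
};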
[Committable Items]
NOTE: Committable items are currently unimplemented.
Some config_items cannot have a valid initial state. That is, no
default values can be specified for the item's attributes such that the
item can do its work. Userspace must configure one or more attributes,
after which the subsystem can start whatever entity this item
represents.
Consider the FakeNBD device from above. Without a target address *and*
a target device, the subsystem has no idea what block device to import.
The simple example assumes that the subsystem merely waits until all the
appropriate attributes are configured, and then connects. This will,
indeed, work, but now every attribute store must check if the attributes
are initialized. Every attribute store must fire off the connection if
that condition is met.
Far better would be an explicit action notifying the subsystem that the
config_item is ready to go. More importantly, an explicit action allows
the subsystem to provide feedback as to whether the attributes are
initialized in a way that makes sense. configfs provides this as
committable items.
configfs still uses only normal filesystem operations. An item is
committed via rename(2). The item is moved from a directory where it
can be modified to a directory where it cannot.
Any group that provides the ct_group_ops->commit_item() method has
committable items. When this group appears in configfs, mkdir(2) will
not work directly in the group. Instead, the group will have two
subdirectories: "live" and "pending". The "live" directory does not
support mkdir(2) or rmdir(2) either. It only allows rename(2). The
"pending" directory does allow mkdir(2) and rmdir(2). An item is
created in the "pending" directory. Its attributes can be modified at
will. Userspace commits the item by renaming it into the "live"
directory. At this point, the subsystem receives the ->commit_item()
callback. If all required attributes are filled to satisfaction, the
method returns zero and the item is moved to the "live" directory.
As rmdir(2) does not work in the "live" directory, an item must be
shutdown, or "uncommitted". Again, this is done via rename(2), this
time from the "live" directory back to the "pending" one. The subsystem
is notified by the ct_group_ops->uncommit_object() method.
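Once implemented, the userspace flow described above would look roughly
like this, reusing the hypothetical FakeNBD names:

# mkdir /config/fakenbd/pending/disk1
# echo 10.0.0.1 > /config/fakenbd/pending/disk1/target
# echo /dev/sda1 > /config/fakenbd/pending/disk1/device
# mv /config/fakenbd/pending/disk1 /config/fakenbd/live/disk1

The mv issues rename(2), which would trigger ->commit_item(); renaming
the directory back into "pending" would uncommit it.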

View File

@ -0,0 +1,474 @@
/*
* vim: noexpandtab ts=8 sts=0 sw=8:
*
* configfs_example.c - This file is a demonstration module containing
* a number of configfs subsystems.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public
* License along with this program; if not, write to the
* Free Software Foundation, Inc., 59 Temple Place - Suite 330,
* Boston, MA 021110-1307, USA.
*
* Based on sysfs:
* sysfs is Copyright (C) 2001, 2002, 2003 Patrick Mochel
*
* configfs Copyright (C) 2005 Oracle. All rights reserved.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/configfs.h>
/*
* 01-childless
*
* This first example is a childless subsystem. It cannot create
* any config_items. It just has attributes.
*
* Note that we are enclosing the configfs_subsystem inside a container.
* This is not necessary if a subsystem has no attributes directly
* on the subsystem. See the next example, 02-simple-children, for
* such a subsystem.
*/
struct childless {
struct configfs_subsystem subsys;
int showme;
int storeme;
};
struct childless_attribute {
struct configfs_attribute attr;
ssize_t (*show)(struct childless *, char *);
ssize_t (*store)(struct childless *, const char *, size_t);
};
static inline struct childless *to_childless(struct config_item *item)
{
return item ? container_of(to_configfs_subsystem(to_config_group(item)), struct childless, subsys) : NULL;
}
static ssize_t childless_showme_read(struct childless *childless,
char *page)
{
ssize_t pos;
pos = sprintf(page, "%d\n", childless->showme);
childless->showme++;
return pos;
}
static ssize_t childless_storeme_read(struct childless *childless,
char *page)
{
return sprintf(page, "%d\n", childless->storeme);
}
static ssize_t childless_storeme_write(struct childless *childless,
const char *page,
size_t count)
{
unsigned long tmp;
char *p = (char *) page;
tmp = simple_strtoul(p, &p, 10);
if (!p || (*p && (*p != '\n')))
return -EINVAL;
if (tmp > INT_MAX)
return -ERANGE;
childless->storeme = tmp;
return count;
}
static ssize_t childless_description_read(struct childless *childless,
char *page)
{
return sprintf(page,
"[01-childless]\n"
"\n"
"The childless subsystem is the simplest possible subsystem in\n"
"configfs. It does not support the creation of child config_items.\n"
"It only has a few attributes. In fact, it isn't much different\n"
"than a directory in /proc.\n");
}
static struct childless_attribute childless_attr_showme = {
.attr = { .ca_owner = THIS_MODULE, .ca_name = "showme", .ca_mode = S_IRUGO },
.show = childless_showme_read,
};
static struct childless_attribute childless_attr_storeme = {
.attr = { .ca_owner = THIS_MODULE, .ca_name = "storeme", .ca_mode = S_IRUGO | S_IWUSR },
.show = childless_storeme_read,
.store = childless_storeme_write,
};
static struct childless_attribute childless_attr_description = {
.attr = { .ca_owner = THIS_MODULE, .ca_name = "description", .ca_mode = S_IRUGO },
.show = childless_description_read,
};
static struct configfs_attribute *childless_attrs[] = {
&childless_attr_showme.attr,
&childless_attr_storeme.attr,
&childless_attr_description.attr,
NULL,
};
static ssize_t childless_attr_show(struct config_item *item,
struct configfs_attribute *attr,
char *page)
{
struct childless *childless = to_childless(item);
struct childless_attribute *childless_attr =
container_of(attr, struct childless_attribute, attr);
ssize_t ret = 0;
if (childless_attr->show)
ret = childless_attr->show(childless, page);
return ret;
}
static ssize_t childless_attr_store(struct config_item *item,
struct configfs_attribute *attr,
const char *page, size_t count)
{
struct childless *childless = to_childless(item);
struct childless_attribute *childless_attr =
container_of(attr, struct childless_attribute, attr);
ssize_t ret = -EINVAL;
if (childless_attr->store)
ret = childless_attr->store(childless, page, count);
return ret;
}
static struct configfs_item_operations childless_item_ops = {
.show_attribute = childless_attr_show,
.store_attribute = childless_attr_store,
};
static struct config_item_type childless_type = {
.ct_item_ops = &childless_item_ops,
.ct_attrs = childless_attrs,
.ct_owner = THIS_MODULE,
};
static struct childless childless_subsys = {
.subsys = {
.su_group = {
.cg_item = {
.ci_namebuf = "01-childless",
.ci_type = &childless_type,
},
},
},
};
/* ----------------------------------------------------------------- */
/*
* 02-simple-children
*
* This example merely has a simple one-attribute child. Note that
* there is no extra attribute structure, as the child's attribute is
* known from the get-go. Also, there is no container for the
* subsystem, as it has no attributes of its own.
*/
struct simple_child {
struct config_item item;
int storeme;
};
static inline struct simple_child *to_simple_child(struct config_item *item)
{
return item ? container_of(item, struct simple_child, item) : NULL;
}
static struct configfs_attribute simple_child_attr_storeme = {
.ca_owner = THIS_MODULE,
.ca_name = "storeme",
.ca_mode = S_IRUGO | S_IWUSR,
};
static struct configfs_attribute *simple_child_attrs[] = {
&simple_child_attr_storeme,
NULL,
};
static ssize_t simple_child_attr_show(struct config_item *item,
struct configfs_attribute *attr,
char *page)
{
ssize_t count;
struct simple_child *simple_child = to_simple_child(item);
count = sprintf(page, "%d\n", simple_child->storeme);
return count;
}
static ssize_t simple_child_attr_store(struct config_item *item,
struct configfs_attribute *attr,
const char *page, size_t count)
{
struct simple_child *simple_child = to_simple_child(item);
unsigned long tmp;
char *p = (char *) page;
tmp = simple_strtoul(p, &p, 10);
if (!p || (*p && (*p != '\n')))
return -EINVAL;
if (tmp > INT_MAX)
return -ERANGE;
simple_child->storeme = tmp;
return count;
}
static void simple_child_release(struct config_item *item)
{
kfree(to_simple_child(item));
}
static struct configfs_item_operations simple_child_item_ops = {
.release = simple_child_release,
.show_attribute = simple_child_attr_show,
.store_attribute = simple_child_attr_store,
};
static struct config_item_type simple_child_type = {
.ct_item_ops = &simple_child_item_ops,
.ct_attrs = simple_child_attrs,
.ct_owner = THIS_MODULE,
};
static struct config_item *simple_children_make_item(struct config_group *group, const char *name)
{
struct simple_child *simple_child;
simple_child = kmalloc(sizeof(struct simple_child), GFP_KERNEL);
if (!simple_child)
return NULL;
memset(simple_child, 0, sizeof(struct simple_child));
config_item_init_type_name(&simple_child->item, name,
&simple_child_type);
simple_child->storeme = 0;
return &simple_child->item;
}
static struct configfs_attribute simple_children_attr_description = {
.ca_owner = THIS_MODULE,
.ca_name = "description",
.ca_mode = S_IRUGO,
};
static struct configfs_attribute *simple_children_attrs[] = {
&simple_children_attr_description,
NULL,
};
static ssize_t simple_children_attr_show(struct config_item *item,
struct configfs_attribute *attr,
char *page)
{
return sprintf(page,
"[02-simple-children]\n"
"\n"
"This subsystem allows the creation of child config_items. These\n"
"items have only one attribute that is readable and writeable.\n");
}
static struct configfs_item_operations simple_children_item_ops = {
.show_attribute = simple_children_attr_show,
};
/*
* Note that, since no extra work is required on ->drop_item(),
* no ->drop_item() is provided.
*/
static struct configfs_group_operations simple_children_group_ops = {
.make_item = simple_children_make_item,
};
static struct config_item_type simple_children_type = {
.ct_item_ops = &simple_children_item_ops,
.ct_group_ops = &simple_children_group_ops,
.ct_attrs = simple_children_attrs,
};
static struct configfs_subsystem simple_children_subsys = {
.su_group = {
.cg_item = {
.ci_namebuf = "02-simple-children",
.ci_type = &simple_children_type,
},
},
};
/* ----------------------------------------------------------------- */
/*
* 03-group-children
*
* This example reuses the simple_children group from above. However,
* the simple_children group is not the subsystem itself, it is a
* child of the subsystem. Creation of a group in the subsystem creates
* a new simple_children group. That group can then have simple_child
* children of its own.
*/
struct simple_children {
struct config_group group;
};
static struct config_group *group_children_make_group(struct config_group *group, const char *name)
{
struct simple_children *simple_children;
simple_children = kmalloc(sizeof(struct simple_children),
GFP_KERNEL);
if (!simple_children)
return NULL;
memset(simple_children, 0, sizeof(struct simple_children));
config_group_init_type_name(&simple_children->group, name,
&simple_children_type);
return &simple_children->group;
}
static struct configfs_attribute group_children_attr_description = {
.ca_owner = THIS_MODULE,
.ca_name = "description",
.ca_mode = S_IRUGO,
};
static struct configfs_attribute *group_children_attrs[] = {
&group_children_attr_description,
NULL,
};
static ssize_t group_children_attr_show(struct config_item *item,
struct configfs_attribute *attr,
char *page)
{
return sprintf(page,
"[03-group-children]\n"
"\n"
"This subsystem allows the creation of child config_groups. These\n"
"groups are like the subsystem simple-children.\n");
}
static struct configfs_item_operations group_children_item_ops = {
.show_attribute = group_children_attr_show,
};
/*
* Note that, since no extra work is required on ->drop_item(),
* no ->drop_item() is provided.
*/
static struct configfs_group_operations group_children_group_ops = {
.make_group = group_children_make_group,
};
static struct config_item_type group_children_type = {
.ct_item_ops = &group_children_item_ops,
.ct_group_ops = &group_children_group_ops,
.ct_attrs = group_children_attrs,
};
static struct configfs_subsystem group_children_subsys = {
.su_group = {
.cg_item = {
.ci_namebuf = "03-group-children",
.ci_type = &group_children_type,
},
},
};
/* ----------------------------------------------------------------- */
/*
* We're now done with our subsystem definitions.
* For convenience in this module, here's a list of them all. It
* allows the init function to easily register them. Most modules
* will only have one subsystem, and will only call register_subsystem
* on it directly.
*/
static struct configfs_subsystem *example_subsys[] = {
&childless_subsys.subsys,
&simple_children_subsys,
&group_children_subsys,
NULL,
};
static int __init configfs_example_init(void)
{
int ret;
int i;
struct configfs_subsystem *subsys;
for (i = 0; example_subsys[i]; i++) {
subsys = example_subsys[i];
config_group_init(&subsys->su_group);
init_MUTEX(&subsys->su_sem);
ret = configfs_register_subsystem(subsys);
if (ret) {
printk(KERN_ERR "Error %d while registering subsystem %s\n",
ret,
subsys->su_group.cg_item.ci_namebuf);
goto out_unregister;
}
}
return 0;
out_unregister:
/* example_subsys[i] failed to register, so only unwind the ones before it */
for (i--; i >= 0; i--) {
configfs_unregister_subsystem(example_subsys[i]);
}
return ret;
}
static void __exit configfs_example_exit(void)
{
int i;
for (i = 0; example_subsys[i]; i++) {
configfs_unregister_subsystem(example_subsys[i]);
}
}
module_init(configfs_example_init);
module_exit(configfs_example_exit);
MODULE_LICENSE("GPL");

View File

@ -0,0 +1,130 @@
dlmfs
==================
A minimal DLM userspace interface implemented via a virtual file
system.
dlmfs is built with OCFS2 as it requires most of its infrastructure.
Project web page: http://oss.oracle.com/projects/ocfs2
Tools web page: http://oss.oracle.com/projects/ocfs2-tools
OCFS2 mailing lists: http://oss.oracle.com/projects/ocfs2/mailman/
All code copyright 2005 Oracle except when otherwise noted.
CREDITS
=======
Some code taken from ramfs which is Copyright (C) 2000 Linus Torvalds
and Transmeta Corp.
Mark Fasheh <mark.fasheh@oracle.com>
Caveats
=======
- Right now it only works with the OCFS2 DLM, though support for other
DLM implementations should not be a major issue.
Mount options
=============
None
Usage
=====
If you're just interested in OCFS2, then please see ocfs2.txt. The
rest of this document will be geared towards those who want to use
dlmfs for easy to setup and easy to use clustered locking in
userspace.
Setup
=====
dlmfs requires that the OCFS2 cluster infrastructure be in
place. Please download ocfs2-tools from the above url and configure a
cluster.
You'll want to start heartbeating on a volume which all the nodes in
your lockspace can access. The easiest way to do this is via
ocfs2_hb_ctl (distributed with ocfs2-tools). Right now it requires
that an OCFS2 file system be in place so that it can automatically
find its heartbeat area, though it will eventually support heartbeat
against raw disks.
Please see the ocfs2_hb_ctl and mkfs.ocfs2 manual pages distributed
with ocfs2-tools.
Once you're heartbeating, DLM lock 'domains' can be easily created /
destroyed and locks within them accessed.
Locking
=======
Users may access dlmfs via standard file system calls, or they can use
'libo2dlm' (distributed with ocfs2-tools) which abstracts the file
system calls and presents a more traditional locking api.
dlmfs handles lock caching automatically for the user, so a lock
request for an already acquired lock will not generate another DLM
call. Userspace programs are assumed to handle their own local
locking.
Two levels of locks are supported - Shared Read and Exclusive.
Also supported is a Trylock operation.
For information on the libo2dlm interface, please see o2dlm.h,
distributed with ocfs2-tools.
Lock value blocks can be read and written to a resource via read(2)
and write(2) against the fd obtained via your open(2) call. The
maximum currently supported LVB length is 64 bytes (though that is an
OCFS2 DLM limitation). Through this mechanism, users of dlmfs can share
small amounts of data amongst their nodes.
mkdir(2) signals dlmfs to join a domain (which will have the same name
as the resulting directory).
rmdir(2) signals dlmfs to leave the domain.
Locks for a given domain are represented by regular inodes inside the
domain directory. Locking against them is done via the open(2) system
call.
The open(2) call will not return until your lock has been granted or
an error has occurred, unless it has been instructed to do a trylock
operation. If the lock succeeds, you'll get an fd.
Use open(2) with O_CREAT to ensure the resource inode is created - dlmfs does
not automatically create inodes for existing lock resources.
Open Flag Lock Request Type
--------- -----------------
O_RDONLY Shared Read
O_RDWR Exclusive
Open Flag Resulting Locking Behavior
--------- --------------------------
O_NONBLOCK Trylock operation
You must provide exactly one of O_RDONLY or O_RDWR.
If O_NONBLOCK is also provided and the trylock operation was valid but
could not lock the resource, then open(2) will fail with ETXTBSY.
close(2) drops the lock associated with your fd.
Modes passed to mkdir(2) or open(2) are adhered to locally. Chown is
supported locally as well. This means you can use them to restrict
access to the resources via dlmfs on your local node only.
The resource LVB may be read from the fd in either Shared Read or
Exclusive modes via the read(2) system call. It can be written via
write(2) only when open in Exclusive mode.
Once written, an LVB will be visible to other nodes who obtain Read
Only or higher level locks on the resource.
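Putting the pieces together, a trylock of an Exclusive lock followed by
an LVB read might look like this userspace sketch (it assumes dlmfs is
mounted at /dlm; the domain and lock names are examples, and error
handling is minimal):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	char lvb[64];
	ssize_t n;
	int fd;

	mkdir("/dlm/mydomain", 0755);	/* join (or create) the "mydomain" lockspace */

	/* Exclusive trylock on resource "mylock"; O_CREAT makes the inode if needed. */
	fd = open("/dlm/mydomain/mylock", O_RDWR | O_CREAT | O_NONBLOCK, 0644);
	if (fd < 0) {
		perror("trylock failed");	/* ETXTBSY means the lock is held elsewhere */
		return 1;
	}

	n = read(fd, lvb, sizeof(lvb));		/* read the lock value block */
	if (n > 0)
		printf("LVB is %zd bytes\n", n);

	close(fd);				/* drops the lock */
	rmdir("/dlm/mydomain");			/* leave the domain */
	return 0;
}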
See Also
========
http://opendlm.sourceforge.net/cvsmirror/opendlm/docs/dlmbook_final.pdf
For more information on the VMS distributed locking API.

View File

@ -0,0 +1,55 @@
OCFS2 filesystem
==================
OCFS2 is a general purpose extent based shared disk cluster file
system with many similarities to ext3. It supports 64 bit inode
numbers, and has automatically extending metadata groups which may
also make it attractive for non-clustered use.
You'll want to install the ocfs2-tools package in order to at least
get "mount.ocfs2" and "ocfs2_hb_ctl".
Project web page: http://oss.oracle.com/projects/ocfs2
Tools web page: http://oss.oracle.com/projects/ocfs2-tools
OCFS2 mailing lists: http://oss.oracle.com/projects/ocfs2/mailman/
All code copyright 2005 Oracle except when otherwise noted.
CREDITS:
Lots of code taken from ext3 and other projects.
Authors in alphabetical order:
Joel Becker <joel.becker@oracle.com>
Zach Brown <zach.brown@oracle.com>
Mark Fasheh <mark.fasheh@oracle.com>
Kurt Hackel <kurt.hackel@oracle.com>
Sunil Mushran <sunil.mushran@oracle.com>
Manish Singh <manish.singh@oracle.com>
Caveats
=======
Features which OCFS2 does not support yet:
- sparse files
- extended attributes
- shared writeable mmap
- loopback is supported, but data written will not
be cluster coherent.
- quotas
- cluster aware flock
- Directory change notification (F_NOTIFY)
- Distributed Caching (F_SETLEASE/F_GETLEASE/break_lease)
- POSIX ACLs
- readpages / writepages (not user visible)
Mount options
=============
OCFS2 supports the following mount options:
(*) == default
barrier=1 This enables/disables barriers. barrier=0 disables it,
barrier=1 enables it.
errors=remount-ro(*) Remount the filesystem read-only on an error.
errors=panic Panic and halt the machine if an error occurs.
intr (*) Allow signals to interrupt cluster operations.
nointr Do not allow signals to interrupt cluster
operations.
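For example, once the cluster stack is up, a volume might be mounted
like this (the device and mount point are only examples):

# mount -t ocfs2 -o intr,errors=remount-ro /dev/sda1 /mnt/ocfs2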

View File

@ -860,24 +860,6 @@ The structure has a number of fields, some of which are mandatory:
It is safe to sleep in this method.
(*) int (*duplicate)(struct key *key, const struct key *source);
If this type of key can be duplicated, then this method should be
provided. It is called to copy the payload attached to the source into the
new key. The data length on the new key will have been updated and the
quota adjusted already.
This method will be called with the source key's semaphore read-locked to
prevent its payload from being changed, thus RCU constraints need not be
applied to the source key.
This method does not have to lock the destination key in order to attach a
payload. The fact that KEY_FLAG_INSTANTIATED is not set in key->flags
prevents anything else from gaining access to the key.
It is safe to sleep in this method.
(*) int (*update)(struct key *key, const void *data, size_t datalen);
If this type of key can be updated, then this method should be provided.

View File

@ -51,6 +51,30 @@ superblock can be autodetected and run at boot time.
The kernel parameter "raid=partitionable" (or "raid=part") means
that all auto-detected arrays are assembled as partitionable.
Boot time assembly of degraded/dirty arrays
-------------------------------------------
If a raid5 or raid6 array is both dirty and degraded, it could have
undetectable data corruption. This is because the fact that it is
'dirty' means that the parity cannot be trusted, and the fact that it
is degraded means that some datablocks are missing and cannot reliably
be reconstructed (due to no parity).
For this reason, md will normally refuse to start such an array. This
requires the sysadmin to take action to explicitly start the array
despite possible corruption. This is normally done with
mdadm --assemble --force ....
This option is not really available if the array has the root
filesystem on it. In order to support booting from such an
array, md supports a module parameter "start_dirty_degraded" which,
when set to 1, bypasses the checks and allows dirty, degraded
arrays to be started.
So, to boot with a root filesystem of a dirty degraded raid[56], use
md-mod.start_dirty_degraded=1
Superblock formats
------------------
@ -141,6 +165,70 @@ All md devices contain:
in a fully functional array. If this is not yet known, the file
will be empty. If an array is being resized (not currently
possible) this will contain the larger of the old and new sizes.
Some raid levels (RAID1) allow this value to be set while the
array is active. This will reconfigure the array. Otherwise
it can only be set while assembling an array.
chunk_size
This is the size in bytes of 'chunks' and is only relevant to
raid levels that involve striping (1,4,5,6,10). The address space
of the array is conceptually divided into chunks and consecutive
chunks are striped onto neighbouring devices.
The size should be at least PAGE_SIZE (4k) and should be a power
of 2. This can only be set while assembling an array.
component_size
For arrays with data redundancy (i.e. not raid0, linear, faulty,
multipath), all components must be the same size - or at least
there must be a size that they all provide space for. This is a key
part of the geometry of the array. It is measured in sectors
and can be read from here. Writing to this value may resize
the array if the personality supports it (raid1, raid5, raid6),
and if the component drives are large enough.
metadata_version
This indicates the format that is being used to record metadata
about the array. It can be 0.90 (traditional format), 1.0, 1.1,
1.2 (newer format in varying locations) or "none" indicating that
the kernel isn't managing metadata at all.
level
The raid 'level' for this array. The name will often (but not
always) be the same as the name of the module that implements the
level. To be auto-loaded the module must have an alias
md-$LEVEL e.g. md-raid5
This can be written only while the array is being assembled, not
after it is started.
new_dev
This file can be written but not read. The value written should
be a block device number as major:minor. e.g. 8:0
This will cause that device to be attached to the array, if it is
available. It will then appear at md/dev-XXX (depending on the
name of the device) and further configuration is then possible.
sync_speed_min
sync_speed_max
These are similar to /proc/sys/dev/raid/speed_limit_{min,max},
however they only apply to the particular array.
If no value has been written to these, or if the word 'system'
is written, then the system-wide value is used. If a value
in kibibytes per second is written, then it is used.
When the files are read, they show the currently active value
followed by "(local)" or "(system)" depending on whether it is
a locally set or system-wide value.
sync_completed
This shows the number of sectors that have been completed of
whatever the current sync_action is, followed by the number of
sectors in total that could need to be processed. The two
numbers are separated by a '/' thus effectively showing one
value, a fraction of the process that is complete.
sync_speed
This shows the current actual speed, in K/sec, of the current
sync_action. It is averaged over the last 30 seconds.
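For instance, these attributes can be inspected and set from the shell
roughly as follows (md0 and the 8:16 device number are only examples):

# cat /sys/block/md0/md/level
raid5
# echo 8:16 > /sys/block/md0/md/new_dev      (attach the device with major:minor 8:16)
# echo 50000 > /sys/block/md0/md/sync_speed_min
# cat /sys/block/md0/md/sync_speed_min
50000 (local)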
As component devices are added to an md array, they appear in the 'md'
directory as new directories named
@ -167,6 +255,38 @@ Each directory contains:
of being recovered to
This list may grow in future.
errors
An approximate count of read errors that have been detected on
this device but have not caused the device to be evicted from
the array (either because they were corrected or because they
happened while the array was read-only). When using version-1
metadata, this value persists across restarts of the array.
This value can be written while assembling an array thus
providing an ongoing count for arrays with metadata managed by
userspace.
slot
This gives the role that the device has in the array. It will
either be 'none' if the device is not active in the array
(i.e. is a spare or has failed) or an integer less than the
'raid_disks' number for the array indicating which position
it currently fills. This can only be set while assembling an
array. A device for which this is set is assumed to be working.
offset
This gives the location in the device (in sectors from the
start) where data from the array will be stored. Any part of
the device before this offset is not touched, unless it is
used for storing metadata (Formats 1.1 and 1.2).
size
The amount of the device, after the offset, that can be used
for storage of data. This will normally be the same as the
component_size. This can be written while assembling an
array. If a value less than the current component_size is
written, component_size will be reduced to this value.
An active md device will also contain an entry for each active device
in the array. These are named

View File

@ -41,3 +41,14 @@ to. Writing to this file will accept one of
It will only change to 'firmware' or 'platform' if the system supports
it.
/sys/power/image_size controls the size of the image created by
the suspend-to-disk mechanism. It can be written a string
representing a non-negative integer that will be used as an upper
limit of the image size, in megabytes. The suspend-to-disk mechanism will
do its best to ensure the image size will not exceed that number. However,
if this turns out to be impossible, it will try to suspend anyway using the
smallest image possible. In particular, if "0" is written to this file, the
suspend image will be as small as possible.
Reading from this file will display the current image size limit, which
is set to 500 MB by default.

View File

@ -27,6 +27,11 @@ echo shutdown > /sys/power/disk; echo disk > /sys/power/state
echo platform > /sys/power/disk; echo disk > /sys/power/state
If you want to limit the suspend image size to N megabytes, do
echo N > /sys/power/image_size
before suspend (it is limited to 500 MB by default).
Encrypted suspend image:
------------------------

View File

@ -258,6 +258,13 @@ P: Ivan Kokshaysky
M: ink@jurassic.park.msu.ru
S: Maintained for 2.4; PCI support for 2.6.
AMD GEODE PROCESSOR/CHIPSET SUPPORT
P: Jordan Crouse
M: info-linux@geode.amd.com
L: info-linux@geode.amd.com
W: http://www.amd.com/us-en/ConnectivitySolutions/TechnicalResources/0,,50_2334_2452_11363,00.html
S: Supported
APM DRIVER
P: Stephen Rothwell
M: sfr@canb.auug.org.au
@ -554,6 +561,11 @@ W: http://us1.samba.org/samba/Linux_CIFS_client.html
T: git kernel.org:/pub/scm/linux/kernel/git/sfrench/cifs-2.6.git
S: Supported
CONFIGFS
P: Joel Becker
M: Joel Becker <joel.becker@oracle.com>
S: Supported
CIRRUS LOGIC GENERIC FBDEV DRIVER
P: Jeff Garzik
M: jgarzik@pobox.com
@ -1230,7 +1242,7 @@ IEEE 1394 SUBSYSTEM
P: Ben Collins
M: bcollins@debian.org
P: Jody McIntyre
M: scjody@steamballoon.com
M: scjody@modernduck.com
L: linux1394-devel@lists.sourceforge.net
W: http://www.linux1394.org/
T: git kernel.org:/pub/scm/linux/kernel/git/scjody/ieee1394.git
@ -1240,14 +1252,14 @@ IEEE 1394 OHCI DRIVER
P: Ben Collins
M: bcollins@debian.org
P: Jody McIntyre
M: scjody@steamballoon.com
M: scjody@modernduck.com
L: linux1394-devel@lists.sourceforge.net
W: http://www.linux1394.org/
S: Maintained
IEEE 1394 PCILYNX DRIVER
P: Jody McIntyre
M: scjody@steamballoon.com
M: scjody@modernduck.com
L: linux1394-devel@lists.sourceforge.net
W: http://www.linux1394.org/
S: Maintained
@ -1898,6 +1910,15 @@ M: ajoshi@shell.unixbox.com
L: linux-nvidia@lists.surfsouth.com
S: Maintained
ORACLE CLUSTER FILESYSTEM 2 (OCFS2)
P: Mark Fasheh
M: mark.fasheh@oracle.com
P: Kurt Hackel
M: kurt.hackel@oracle.com
L: ocfs2-devel@oss.oracle.com
W: http://oss.oracle.com/projects/ocfs2/
S: Supported
OLYMPIC NETWORK DRIVER
P: Peter De Shrijver
M: p2@ace.ulyssis.student.kuleuven.ac.be

View File

@ -40,6 +40,19 @@ config GENERIC_IOMAP
bool
default n
config GENERIC_HARDIRQS
bool
default y
config GENERIC_IRQ_PROBE
bool
default y
config AUTO_IRQ_AFFINITY
bool
depends on SMP
default y
source "init/Kconfig"

View File

@ -175,7 +175,6 @@ EXPORT_SYMBOL(up);
*/
#ifdef CONFIG_SMP
EXPORT_SYMBOL(synchronize_irq);
EXPORT_SYMBOL(flush_tlb_mm);
EXPORT_SYMBOL(flush_tlb_range);
EXPORT_SYMBOL(flush_tlb_page);

View File

@ -32,214 +32,25 @@
#include <asm/io.h>
#include <asm/uaccess.h>
/*
* Controller mappings for all interrupt sources:
*/
irq_desc_t irq_desc[NR_IRQS] __cacheline_aligned = {
[0 ... NR_IRQS-1] = {
.handler = &no_irq_type,
.lock = SPIN_LOCK_UNLOCKED
}
};
static void register_irq_proc(unsigned int irq);
volatile unsigned long irq_err_count;
/*
* Special irq handlers.
*/
irqreturn_t no_action(int cpl, void *dev_id, struct pt_regs *regs)
{
return IRQ_NONE;
}
/*
* Generic no controller code
*/
static void no_irq_enable_disable(unsigned int irq) { }
static unsigned int no_irq_startup(unsigned int irq) { return 0; }
static void
no_irq_ack(unsigned int irq)
void ack_bad_irq(unsigned int irq)
{
irq_err_count++;
printk(KERN_CRIT "Unexpected IRQ trap at vector %u\n", irq);
}
struct hw_interrupt_type no_irq_type = {
.typename = "none",
.startup = no_irq_startup,
.shutdown = no_irq_enable_disable,
.enable = no_irq_enable_disable,
.disable = no_irq_enable_disable,
.ack = no_irq_ack,
.end = no_irq_enable_disable,
};
int
handle_IRQ_event(unsigned int irq, struct pt_regs *regs,
struct irqaction *action)
{
int status = 1; /* Force the "do bottom halves" bit */
int ret;
do {
if (!(action->flags & SA_INTERRUPT))
local_irq_enable();
else
local_irq_disable();
ret = action->handler(irq, action->dev_id, regs);
if (ret == IRQ_HANDLED)
status |= action->flags;
action = action->next;
} while (action);
if (status & SA_SAMPLE_RANDOM)
add_interrupt_randomness(irq);
local_irq_disable();
return status;
}
/*
* Generic enable/disable code: this just calls
* down into the PIC-specific version for the actual
* hardware disable after having gotten the irq
* controller lock.
*/
void inline
disable_irq_nosync(unsigned int irq)
{
irq_desc_t *desc = irq_desc + irq;
unsigned long flags;
spin_lock_irqsave(&desc->lock, flags);
if (!desc->depth++) {
desc->status |= IRQ_DISABLED;
desc->handler->disable(irq);
}
spin_unlock_irqrestore(&desc->lock, flags);
}
/*
* Synchronous version of the above, making sure the IRQ is
* no longer running on any other IRQ..
*/
void
disable_irq(unsigned int irq)
{
disable_irq_nosync(irq);
synchronize_irq(irq);
}
void
enable_irq(unsigned int irq)
{
irq_desc_t *desc = irq_desc + irq;
unsigned long flags;
spin_lock_irqsave(&desc->lock, flags);
switch (desc->depth) {
case 1: {
unsigned int status = desc->status & ~IRQ_DISABLED;
desc->status = status;
if ((status & (IRQ_PENDING | IRQ_REPLAY)) == IRQ_PENDING) {
desc->status = status | IRQ_REPLAY;
hw_resend_irq(desc->handler,irq);
}
desc->handler->enable(irq);
/* fall-through */
}
default:
desc->depth--;
break;
case 0:
printk(KERN_ERR "enable_irq() unbalanced from %p\n",
__builtin_return_address(0));
}
spin_unlock_irqrestore(&desc->lock, flags);
}
int
setup_irq(unsigned int irq, struct irqaction * new)
{
int shared = 0;
struct irqaction *old, **p;
unsigned long flags;
irq_desc_t *desc = irq_desc + irq;
if (desc->handler == &no_irq_type)
return -ENOSYS;
/*
* Some drivers like serial.c use request_irq() heavily,
* so we have to be careful not to interfere with a
* running system.
*/
if (new->flags & SA_SAMPLE_RANDOM) {
/*
* This function might sleep, we want to call it first,
* outside of the atomic block.
* Yes, this might clear the entropy pool if the wrong
* driver is attempted to be loaded, without actually
* installing a new handler, but is this really a problem,
* only the sysadmin is able to do this.
*/
rand_initialize_irq(irq);
}
/*
* The following block of code has to be executed atomically
*/
spin_lock_irqsave(&desc->lock,flags);
p = &desc->action;
if ((old = *p) != NULL) {
/* Can't share interrupts unless both agree to */
if (!(old->flags & new->flags & SA_SHIRQ)) {
spin_unlock_irqrestore(&desc->lock,flags);
return -EBUSY;
}
/* add new interrupt at end of irq queue */
do {
p = &old->next;
old = *p;
} while (old);
shared = 1;
}
*p = new;
if (!shared) {
desc->depth = 0;
desc->status &=
~(IRQ_DISABLED|IRQ_AUTODETECT|IRQ_WAITING|IRQ_INPROGRESS);
desc->handler->startup(irq);
}
spin_unlock_irqrestore(&desc->lock,flags);
return 0;
}
static struct proc_dir_entry * root_irq_dir;
static struct proc_dir_entry * irq_dir[NR_IRQS];
#ifdef CONFIG_SMP
static struct proc_dir_entry * smp_affinity_entry[NR_IRQS];
static char irq_user_affinity[NR_IRQS];
static cpumask_t irq_affinity[NR_IRQS] = { [0 ... NR_IRQS-1] = CPU_MASK_ALL };
static void
select_smp_affinity(int irq)
int
select_smp_affinity(unsigned int irq)
{
static int last_cpu;
int cpu = last_cpu + 1;
if (! irq_desc[irq].handler->set_affinity || irq_user_affinity[irq])
return;
if (!irq_desc[irq].handler->set_affinity || irq_user_affinity[irq])
return 1;
while (!cpu_possible(cpu))
cpu = (cpu < (NR_CPUS-1) ? cpu + 1 : 0);
@ -247,208 +58,10 @@ select_smp_affinity(int irq)
irq_affinity[irq] = cpumask_of_cpu(cpu);
irq_desc[irq].handler->set_affinity(irq, cpumask_of_cpu(cpu));
return 0;
}
static int
irq_affinity_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
{
int len = cpumask_scnprintf(page, count, irq_affinity[(long)data]);
if (count - len < 2)
return -EINVAL;
len += sprintf(page + len, "\n");
return len;
}
static int
irq_affinity_write_proc(struct file *file, const char __user *buffer,
unsigned long count, void *data)
{
int irq = (long) data, full_count = count, err;
cpumask_t new_value;
if (!irq_desc[irq].handler->set_affinity)
return -EIO;
err = cpumask_parse(buffer, count, new_value);
/* The special value 0 means release control of the
affinity to kernel. */
cpus_and(new_value, new_value, cpu_online_map);
if (cpus_empty(new_value)) {
irq_user_affinity[irq] = 0;
select_smp_affinity(irq);
}
/* Do not allow disabling IRQs completely - it's a too easy
way to make the system unusable accidentally :-) At least
one online CPU still has to be targeted. */
else {
irq_affinity[irq] = new_value;
irq_user_affinity[irq] = 1;
irq_desc[irq].handler->set_affinity(irq, new_value);
}
return full_count;
}
#endif /* CONFIG_SMP */
#define MAX_NAMELEN 10
static void
register_irq_proc (unsigned int irq)
{
char name [MAX_NAMELEN];
if (!root_irq_dir || (irq_desc[irq].handler == &no_irq_type) ||
irq_dir[irq])
return;
memset(name, 0, MAX_NAMELEN);
sprintf(name, "%d", irq);
/* create /proc/irq/1234 */
irq_dir[irq] = proc_mkdir(name, root_irq_dir);
#ifdef CONFIG_SMP
if (irq_desc[irq].handler->set_affinity) {
struct proc_dir_entry *entry;
/* create /proc/irq/1234/smp_affinity */
entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
if (entry) {
entry->nlink = 1;
entry->data = (void *)(long)irq;
entry->read_proc = irq_affinity_read_proc;
entry->write_proc = irq_affinity_write_proc;
}
smp_affinity_entry[irq] = entry;
}
#endif
}
void
init_irq_proc (void)
{
int i;
/* create /proc/irq */
root_irq_dir = proc_mkdir("irq", NULL);
#ifdef CONFIG_SMP
/* create /proc/irq/prof_cpu_mask */
create_prof_cpu_mask(root_irq_dir);
#endif
/*
* Create entries for all existing IRQs.
*/
for (i = 0; i < ACTUAL_NR_IRQS; i++) {
if (irq_desc[i].handler == &no_irq_type)
continue;
register_irq_proc(i);
}
}
int
request_irq(unsigned int irq, irqreturn_t (*handler)(int, void *, struct pt_regs *),
unsigned long irqflags, const char * devname, void *dev_id)
{
int retval;
struct irqaction * action;
if (irq >= ACTUAL_NR_IRQS)
return -EINVAL;
if (!handler)
return -EINVAL;
#if 1
/*
* Sanity-check: shared interrupts should REALLY pass in
* a real dev-ID, otherwise we'll have trouble later trying
* to figure out which interrupt is which (messes up the
* interrupt freeing logic etc).
*/
if ((irqflags & SA_SHIRQ) && !dev_id) {
printk(KERN_ERR
"Bad boy: %s (at %p) called us without a dev_id!\n",
devname, __builtin_return_address(0));
}
#endif
action = (struct irqaction *)
kmalloc(sizeof(struct irqaction), GFP_KERNEL);
if (!action)
return -ENOMEM;
action->handler = handler;
action->flags = irqflags;
cpus_clear(action->mask);
action->name = devname;
action->next = NULL;
action->dev_id = dev_id;
#ifdef CONFIG_SMP
select_smp_affinity(irq);
#endif
retval = setup_irq(irq, action);
if (retval)
kfree(action);
return retval;
}
EXPORT_SYMBOL(request_irq);
void
free_irq(unsigned int irq, void *dev_id)
{
irq_desc_t *desc;
struct irqaction **p;
unsigned long flags;
if (irq >= ACTUAL_NR_IRQS) {
printk(KERN_CRIT "Trying to free IRQ%d\n", irq);
return;
}
desc = irq_desc + irq;
spin_lock_irqsave(&desc->lock,flags);
p = &desc->action;
for (;;) {
struct irqaction * action = *p;
if (action) {
struct irqaction **pp = p;
p = &action->next;
if (action->dev_id != dev_id)
continue;
/* Found - now remove it from the list of entries. */
*pp = action->next;
if (!desc->action) {
desc->status |= IRQ_DISABLED;
desc->handler->shutdown(irq);
}
spin_unlock_irqrestore(&desc->lock,flags);
#ifdef CONFIG_SMP
/* Wait to make sure it's not being used on
another CPU. */
while (desc->status & IRQ_INPROGRESS)
barrier();
#endif
kfree(action);
return;
}
printk(KERN_ERR "Trying to free free IRQ%d\n",irq);
spin_unlock_irqrestore(&desc->lock,flags);
return;
}
}
EXPORT_SYMBOL(free_irq);
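/*
 * Illustrative sketch (not from the patch): how a driver would pair the
 * request_irq()/free_irq() calls above, using the three-argument handler
 * prototype shown in this file.  The device structure, IRQ number and
 * "mydev" name are made up; note that SA_SHIRQ wants a real dev_id, as the
 * sanity check in request_irq() warns.  Headers are the ones a 2.6-era
 * driver would typically pull in.
 */
#include <linux/interrupt.h>
#include <linux/signal.h>

struct mydev {
	int irq;			/* hypothetical per-device state */
};

static irqreturn_t mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct mydev *dev = dev_id;	/* dev_id tells shared handlers apart */

	(void)dev;			/* acknowledge the hardware here */
	return IRQ_HANDLED;
}

static int mydev_attach(struct mydev *dev)
{
	return request_irq(dev->irq, mydev_interrupt, SA_SHIRQ, "mydev", dev);
}

static void mydev_detach(struct mydev *dev)
{
	free_irq(dev->irq, dev);
}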
int
show_interrupts(struct seq_file *p, void *v)
{
@ -531,10 +144,6 @@ handle_irq(int irq, struct pt_regs * regs)
* 0 return value means that this irq is already being
* handled by some other CPU. (or is disabled)
*/
int cpu = smp_processor_id();
irq_desc_t *desc = irq_desc + irq;
struct irqaction * action;
unsigned int status;
static unsigned int illegal_count=0;
if ((unsigned) irq > ACTUAL_NR_IRQS && illegal_count < MAX_ILLEGAL_IRQS ) {
@ -546,229 +155,8 @@ handle_irq(int irq, struct pt_regs * regs)
}
irq_enter();
kstat_cpu(cpu).irqs[irq]++;
spin_lock_irq(&desc->lock); /* mask also the higher prio events */
desc->handler->ack(irq);
/*
* REPLAY is when Linux resends an IRQ that was dropped earlier.
* WAITING is used by probe to mark irqs that are being tested.
*/
status = desc->status & ~(IRQ_REPLAY | IRQ_WAITING);
status |= IRQ_PENDING; /* we _want_ to handle it */
/*
* If the IRQ is disabled for whatever reason, we cannot
* use the action we have.
*/
action = NULL;
if (!(status & (IRQ_DISABLED | IRQ_INPROGRESS))) {
action = desc->action;
status &= ~IRQ_PENDING; /* we commit to handling */
status |= IRQ_INPROGRESS; /* we are handling it */
}
desc->status = status;
/*
* If there is no IRQ handler or it was disabled, exit early.
* Since we set PENDING, if another processor is handling
* a different instance of this same irq, the other processor
* will take care of it.
*/
if (!action)
goto out;
/*
* Edge triggered interrupts need to remember pending events.
* This applies to any hw interrupts that allow a second
* instance of the same irq to arrive while we are in handle_irq
* or in the handler. But the code here only handles the _second_
* instance of the irq, not the third or fourth. So it is mostly
* useful for irq hardware that does not mask cleanly in an
* SMP environment.
*/
for (;;) {
spin_unlock(&desc->lock);
handle_IRQ_event(irq, regs, action);
spin_lock(&desc->lock);
if (!(desc->status & IRQ_PENDING)
|| (desc->status & IRQ_LEVEL))
break;
desc->status &= ~IRQ_PENDING;
}
desc->status &= ~IRQ_INPROGRESS;
out:
/*
* The ->end() handler has to deal with interrupts which got
* disabled while the handler was running.
*/
desc->handler->end(irq);
spin_unlock(&desc->lock);
local_irq_disable();
__do_IRQ(irq, regs);
local_irq_enable();
irq_exit();
}
/*
* IRQ autodetection code..
*
* This depends on the fact that any interrupt that
* comes in on to an unassigned handler will get stuck
* with "IRQ_WAITING" cleared and the interrupt
* disabled.
*/
unsigned long
probe_irq_on(void)
{
int i;
irq_desc_t *desc;
unsigned long delay;
unsigned long val;
/* Something may have generated an irq long ago and we want to
flush such a longstanding irq before considering it as spurious. */
for (i = NR_IRQS-1; i >= 0; i--) {
desc = irq_desc + i;
spin_lock_irq(&desc->lock);
if (!irq_desc[i].action)
irq_desc[i].handler->startup(i);
spin_unlock_irq(&desc->lock);
}
/* Wait for longstanding interrupts to trigger. */
for (delay = jiffies + HZ/50; time_after(delay, jiffies); )
/* about 20ms delay */ barrier();
/* Enable any unassigned irqs.  We must call startup again here
because if a longstanding irq happened in the previous stage, it
may have masked itself. */
for (i = NR_IRQS-1; i >= 0; i--) {
desc = irq_desc + i;
spin_lock_irq(&desc->lock);
if (!desc->action) {
desc->status |= IRQ_AUTODETECT | IRQ_WAITING;
if (desc->handler->startup(i))
desc->status |= IRQ_PENDING;
}
spin_unlock_irq(&desc->lock);
}
/*
* Wait for spurious interrupts to trigger
*/
for (delay = jiffies + HZ/10; time_after(delay, jiffies); )
/* about 100ms delay */ barrier();
/*
* Now filter out any obviously spurious interrupts
*/
val = 0;
for (i=0; i<NR_IRQS; i++) {
irq_desc_t *desc = irq_desc + i;
unsigned int status;
spin_lock_irq(&desc->lock);
status = desc->status;
if (status & IRQ_AUTODETECT) {
/* It triggered already - consider it spurious. */
if (!(status & IRQ_WAITING)) {
desc->status = status & ~IRQ_AUTODETECT;
desc->handler->shutdown(i);
} else
if (i < 32)
val |= 1 << i;
}
spin_unlock_irq(&desc->lock);
}
return val;
}
EXPORT_SYMBOL(probe_irq_on);
/*
* Return a mask of triggered interrupts (this
* can handle only legacy ISA interrupts).
*/
unsigned int
probe_irq_mask(unsigned long val)
{
int i;
unsigned int mask;
mask = 0;
for (i = 0; i < NR_IRQS; i++) {
irq_desc_t *desc = irq_desc + i;
unsigned int status;
spin_lock_irq(&desc->lock);
status = desc->status;
if (status & IRQ_AUTODETECT) {
/* We only react to ISA interrupts */
if (!(status & IRQ_WAITING)) {
if (i < 16)
mask |= 1 << i;
}
desc->status = status & ~IRQ_AUTODETECT;
desc->handler->shutdown(i);
}
spin_unlock_irq(&desc->lock);
}
return mask & val;
}
/*
* Get the result of the IRQ probe.  A negative result means that
* we have several candidates (but we return the lowest-numbered
* one).
*/
int
probe_irq_off(unsigned long val)
{
int i, irq_found, nr_irqs;
nr_irqs = 0;
irq_found = 0;
for (i=0; i<NR_IRQS; i++) {
irq_desc_t *desc = irq_desc + i;
unsigned int status;
spin_lock_irq(&desc->lock);
status = desc->status;
if (status & IRQ_AUTODETECT) {
if (!(status & IRQ_WAITING)) {
if (!nr_irqs)
irq_found = i;
nr_irqs++;
}
desc->status = status & ~IRQ_AUTODETECT;
desc->handler->shutdown(i);
}
spin_unlock_irq(&desc->lock);
}
if (nr_irqs > 1)
irq_found = -irq_found;
return irq_found;
}
EXPORT_SYMBOL(probe_irq_off);
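/*
 * Illustrative sketch (not from the patch): the classic ISA autoprobe
 * sequence built on the probe_irq_on()/probe_irq_off() pair above.
 * The control port and the register poke are made up for the example;
 * probe_irq_off() returns 0 for no interrupt and a negative value when
 * several candidates triggered.
 */
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <asm/io.h>

#define MYDEV_CTRL_PORT	0x2f8		/* hypothetical device port */

static int mydev_probe_irq(void)
{
	unsigned long mask;
	int irq;

	mask = probe_irq_on();
	outb(0x10, MYDEV_CTRL_PORT);	/* ask the device to raise an IRQ */
	mdelay(10);			/* give it time to trigger */
	irq = probe_irq_off(mask);

	return irq;
}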
#ifdef CONFIG_SMP
void synchronize_irq(unsigned int irq)
{
/* is there anything to synchronize with? */
if (!irq_desc[irq].action)
return;
while (irq_desc[irq].status & IRQ_INPROGRESS)
barrier();
}
#endif

View File

@ -569,12 +569,6 @@ gdb_cris_strtol (const char *s, char **endptr, int base)
return x;
}
int
double_this(int x)
{
return 2 * x;
}
/********************************* Register image ****************************/
/* Copy the content of a register image into another. The size n is
the size of the register image. Due to struct assignment generation of

View File

@ -20,3 +20,4 @@ obj-$(CONFIG_FUJITSU_MB93493) += irq-mb93493.o
obj-$(CONFIG_PM) += pm.o cmode.o
obj-$(CONFIG_MB93093_PDK) += pm-mb93093.o
obj-$(CONFIG_SYSCTL) += sysctl.o
obj-$(CONFIG_FUTEX) += futex.o

View File

@ -1076,7 +1076,7 @@ __entry_work_notifysig:
LEDS 0x6410
ori.p gr4,#0,gr8
call do_notify_resume
bra __entry_return_direct
bra __entry_resume_userspace
# perform syscall entry tracing
__syscall_trace_entry:

242
arch/frv/kernel/futex.c Normal file
View File

@ -0,0 +1,242 @@
/* futex.c: futex operations
*
* Copyright (C) 2005 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/futex.h>
#include <asm/futex.h>
#include <asm/errno.h>
#include <asm/uaccess.h>
/*
* the various futex operations; MMU fault checking is ignored under no-MMU
* conditions
*/
static inline int atomic_futex_op_xchg_set(int oparg, int __user *uaddr, int *_oldval)
{
int oldval, ret;
asm("0: \n"
" orcc gr0,gr0,gr0,icc3 \n" /* set ICC3.Z */
" ckeq icc3,cc7 \n"
"1: ld.p %M0,%1 \n" /* LD.P/ORCR must be atomic */
" orcr cc7,cc7,cc3 \n" /* set CC3 to true */
"2: cst.p %3,%M0 ,cc3,#1 \n"
" corcc gr29,gr29,gr0 ,cc3,#1 \n" /* clear ICC3.Z if store happens */
" beq icc3,#0,0b \n"
" setlos 0,%2 \n"
"3: \n"
".subsection 2 \n"
"4: setlos %5,%2 \n"
" bra 3b \n"
".previous \n"
".section __ex_table,\"a\" \n"
" .balign 8 \n"
" .long 1b,4b \n"
" .long 2b,4b \n"
".previous"
: "+U"(*uaddr), "=&r"(oldval), "=&r"(ret), "=r"(oparg)
: "3"(oparg), "i"(-EFAULT)
: "memory", "cc7", "cc3", "icc3"
);
*_oldval = oldval;
return ret;
}
static inline int atomic_futex_op_xchg_add(int oparg, int __user *uaddr, int *_oldval)
{
int oldval, ret;
asm("0: \n"
" orcc gr0,gr0,gr0,icc3 \n" /* set ICC3.Z */
" ckeq icc3,cc7 \n"
"1: ld.p %M0,%1 \n" /* LD.P/ORCR must be atomic */
" orcr cc7,cc7,cc3 \n" /* set CC3 to true */
" add %1,%3,%3 \n"
"2: cst.p %3,%M0 ,cc3,#1 \n"
" corcc gr29,gr29,gr0 ,cc3,#1 \n" /* clear ICC3.Z if store happens */
" beq icc3,#0,0b \n"
" setlos 0,%2 \n"
"3: \n"
".subsection 2 \n"
"4: setlos %5,%2 \n"
" bra 3b \n"
".previous \n"
".section __ex_table,\"a\" \n"
" .balign 8 \n"
" .long 1b,4b \n"
" .long 2b,4b \n"
".previous"
: "+U"(*uaddr), "=&r"(oldval), "=&r"(ret), "=r"(oparg)
: "3"(oparg), "i"(-EFAULT)
: "memory", "cc7", "cc3", "icc3"
);
*_oldval = oldval;
return ret;
}
static inline int atomic_futex_op_xchg_or(int oparg, int __user *uaddr, int *_oldval)
{
int oldval, ret;
asm("0: \n"
" orcc gr0,gr0,gr0,icc3 \n" /* set ICC3.Z */
" ckeq icc3,cc7 \n"
"1: ld.p %M0,%1 \n" /* LD.P/ORCR must be atomic */
" orcr cc7,cc7,cc3 \n" /* set CC3 to true */
" or %1,%3,%3 \n"
"2: cst.p %3,%M0 ,cc3,#1 \n"
" corcc gr29,gr29,gr0 ,cc3,#1 \n" /* clear ICC3.Z if store happens */
" beq icc3,#0,0b \n"
" setlos 0,%2 \n"
"3: \n"
".subsection 2 \n"
"4: setlos %5,%2 \n"
" bra 3b \n"
".previous \n"
".section __ex_table,\"a\" \n"
" .balign 8 \n"
" .long 1b,4b \n"
" .long 2b,4b \n"
".previous"
: "+U"(*uaddr), "=&r"(oldval), "=&r"(ret), "=r"(oparg)
: "3"(oparg), "i"(-EFAULT)
: "memory", "cc7", "cc3", "icc3"
);
*_oldval = oldval;
return ret;
}
static inline int atomic_futex_op_xchg_and(int oparg, int __user *uaddr, int *_oldval)
{
int oldval, ret;
asm("0: \n"
" orcc gr0,gr0,gr0,icc3 \n" /* set ICC3.Z */
" ckeq icc3,cc7 \n"
"1: ld.p %M0,%1 \n" /* LD.P/ORCR must be atomic */
" orcr cc7,cc7,cc3 \n" /* set CC3 to true */
" and %1,%3,%3 \n"
"2: cst.p %3,%M0 ,cc3,#1 \n"
" corcc gr29,gr29,gr0 ,cc3,#1 \n" /* clear ICC3.Z if store happens */
" beq icc3,#0,0b \n"
" setlos 0,%2 \n"
"3: \n"
".subsection 2 \n"
"4: setlos %5,%2 \n"
" bra 3b \n"
".previous \n"
".section __ex_table,\"a\" \n"
" .balign 8 \n"
" .long 1b,4b \n"
" .long 2b,4b \n"
".previous"
: "+U"(*uaddr), "=&r"(oldval), "=&r"(ret), "=r"(oparg)
: "3"(oparg), "i"(-EFAULT)
: "memory", "cc7", "cc3", "icc3"
);
*_oldval = oldval;
return ret;
}
static inline int atomic_futex_op_xchg_xor(int oparg, int __user *uaddr, int *_oldval)
{
int oldval, ret;
asm("0: \n"
" orcc gr0,gr0,gr0,icc3 \n" /* set ICC3.Z */
" ckeq icc3,cc7 \n"
"1: ld.p %M0,%1 \n" /* LD.P/ORCR must be atomic */
" orcr cc7,cc7,cc3 \n" /* set CC3 to true */
" xor %1,%3,%3 \n"
"2: cst.p %3,%M0 ,cc3,#1 \n"
" corcc gr29,gr29,gr0 ,cc3,#1 \n" /* clear ICC3.Z if store happens */
" beq icc3,#0,0b \n"
" setlos 0,%2 \n"
"3: \n"
".subsection 2 \n"
"4: setlos %5,%2 \n"
" bra 3b \n"
".previous \n"
".section __ex_table,\"a\" \n"
" .balign 8 \n"
" .long 1b,4b \n"
" .long 2b,4b \n"
".previous"
: "+U"(*uaddr), "=&r"(oldval), "=&r"(ret), "=r"(oparg)
: "3"(oparg), "i"(-EFAULT)
: "memory", "cc7", "cc3", "icc3"
);
*_oldval = oldval;
return ret;
}
/*****************************************************************************/
/*
* do the futex operations
*/
int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
int oparg = (encoded_op << 8) >> 20;
int cmparg = (encoded_op << 20) >> 20;
int oldval = 0, ret;
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
inc_preempt_count();
switch (op) {
case FUTEX_OP_SET:
ret = atomic_futex_op_xchg_set(oparg, uaddr, &oldval);
break;
case FUTEX_OP_ADD:
ret = atomic_futex_op_xchg_add(oparg, uaddr, &oldval);
break;
case FUTEX_OP_OR:
ret = atomic_futex_op_xchg_or(oparg, uaddr, &oldval);
break;
case FUTEX_OP_ANDN:
ret = atomic_futex_op_xchg_and(~oparg, uaddr, &oldval);
break;
case FUTEX_OP_XOR:
ret = atomic_futex_op_xchg_xor(oparg, uaddr, &oldval);
break;
default:
ret = -ENOSYS;
break;
}
dec_preempt_count();
if (!ret) {
switch (cmp) {
case FUTEX_OP_CMP_EQ: ret = (oldval == cmparg); break;
case FUTEX_OP_CMP_NE: ret = (oldval != cmparg); break;
case FUTEX_OP_CMP_LT: ret = (oldval < cmparg); break;
case FUTEX_OP_CMP_GE: ret = (oldval >= cmparg); break;
case FUTEX_OP_CMP_LE: ret = (oldval <= cmparg); break;
case FUTEX_OP_CMP_GT: ret = (oldval > cmparg); break;
default: ret = -ENOSYS; break;
}
}
return ret;
} /* end futex_atomic_op_inuser() */
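/*
 * Illustrative only (not from the patch): how the encoded_op word decoded
 * above is packed.  The bit layout is read straight off the shifts in
 * futex_atomic_op_inuser(); the FUTEX_OP_* names are the ones used in the
 * switch statements above.
 */
static inline int pack_futex_op(int op, int cmp, int oparg, int cmparg)
{
	return ((op & 0xf) << 28) |		/* bits 31-28: operation (incl. OPARG_SHIFT flag) */
	       ((cmp & 0xf) << 24) |		/* bits 27-24: comparison */
	       ((oparg & 0xfff) << 12) |	/* bits 23-12: operation argument */
	       (cmparg & 0xfff);		/* bits 11-0 : comparison argument */
}

/* e.g. "add 1 to *uaddr and report whether the old value differed from 0":  */
/*	pack_futex_op(FUTEX_OP_ADD, FUTEX_OP_CMP_NE, 1, 0);		      */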

View File

@ -35,7 +35,7 @@ struct fdpic_func_descriptor {
unsigned long GOT;
};
asmlinkage int do_signal(struct pt_regs *regs, sigset_t *oldset);
static int do_signal(sigset_t *oldset);
/*
* Atomically swap in the new signal mask, and wait for a signal.
@ -55,7 +55,7 @@ asmlinkage int sys_sigsuspend(int history0, int history1, old_sigset_t mask)
while (1) {
current->state = TASK_INTERRUPTIBLE;
schedule();
if (do_signal(__frame, &saveset))
if (do_signal(&saveset))
/* return the signal number as the return value of this function
* - this is an utterly evil hack. syscalls should not invoke do_signal()
* as entry.S sets regs->gr8 to the return value of the system call
@ -91,7 +91,7 @@ asmlinkage int sys_rt_sigsuspend(sigset_t __user *unewset, size_t sigsetsize)
while (1) {
current->state = TASK_INTERRUPTIBLE;
schedule();
if (do_signal(__frame, &saveset))
if (do_signal(&saveset))
/* return the signal number as the return value of this function
* - this is an utterly evil hack. syscalls should not invoke do_signal()
* as entry.S sets regs->gr8 to the return value of the system call
@ -276,13 +276,12 @@ static int setup_sigcontext(struct sigcontext __user *sc, unsigned long mask)
* Determine which stack to use.
*/
static inline void __user *get_sigframe(struct k_sigaction *ka,
struct pt_regs *regs,
size_t frame_size)
{
unsigned long sp;
/* Default to using normal stack */
sp = regs->sp;
sp = __frame->sp;
/* This is the X/Open sanctioned signal stack switching. */
if (ka->sa.sa_flags & SA_ONSTACK) {
@ -291,18 +290,19 @@ static inline void __user *get_sigframe(struct k_sigaction *ka,
}
return (void __user *) ((sp - frame_size) & ~7UL);
} /* end get_sigframe() */
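/*
 * Illustrative sketch (not from the patch): the userspace half of the
 * SA_ONSTACK path that get_sigframe() implements - register an alternate
 * stack with sigaltstack(), then request it per-handler.  Standard POSIX
 * calls only; the choice of SIGUSR1 is arbitrary.
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void on_altstack(int sig)
{
	/* runs on the sigaltstack() area, with the frame aligned as above */
}

int main(void)
{
	stack_t ss;
	struct sigaction sa;

	ss.ss_sp = malloc(SIGSTKSZ);
	ss.ss_size = SIGSTKSZ;
	ss.ss_flags = 0;
	if (!ss.ss_sp || sigaltstack(&ss, NULL) < 0) {
		perror("sigaltstack");
		return 1;
	}

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = on_altstack;
	sa.sa_flags = SA_ONSTACK;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGUSR1, &sa, NULL);
	raise(SIGUSR1);
	return 0;
}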
/*****************************************************************************/
/*
*
*/
static void setup_frame(int sig, struct k_sigaction *ka, sigset_t *set, struct pt_regs * regs)
static int setup_frame(int sig, struct k_sigaction *ka, sigset_t *set)
{
struct sigframe __user *frame;
int rsig;
frame = get_sigframe(ka, regs, sizeof(*frame));
frame = get_sigframe(ka, sizeof(*frame));
if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
goto give_sigsegv;
@ -346,47 +346,51 @@ static void setup_frame(int sig, struct k_sigaction *ka, sigset_t *set, struct p
}
/* set up registers for signal handler */
regs->sp = (unsigned long) frame;
regs->lr = (unsigned long) &frame->retcode;
regs->gr8 = sig;
__frame->sp = (unsigned long) frame;
__frame->lr = (unsigned long) &frame->retcode;
__frame->gr8 = sig;
if (get_personality & FDPIC_FUNCPTRS) {
struct fdpic_func_descriptor __user *funcptr =
(struct fdpic_func_descriptor *) ka->sa.sa_handler;
__get_user(regs->pc, &funcptr->text);
__get_user(regs->gr15, &funcptr->GOT);
__get_user(__frame->pc, &funcptr->text);
__get_user(__frame->gr15, &funcptr->GOT);
} else {
regs->pc = (unsigned long) ka->sa.sa_handler;
regs->gr15 = 0;
__frame->pc = (unsigned long) ka->sa.sa_handler;
__frame->gr15 = 0;
}
set_fs(USER_DS);
/* the tracer may want to single-step inside the handler */
if (test_thread_flag(TIF_SINGLESTEP))
ptrace_notify(SIGTRAP);
#if DEBUG_SIG
printk("SIG deliver %d (%s:%d): sp=%p pc=%lx ra=%p\n",
sig, current->comm, current->pid, frame, regs->pc, frame->pretcode);
sig, current->comm, current->pid, frame, __frame->pc,
frame->pretcode);
#endif
return;
return 1;
give_sigsegv:
if (sig == SIGSEGV)
ka->sa.sa_handler = SIG_DFL;
force_sig(SIGSEGV, current);
return 0;
} /* end setup_frame() */
/*****************************************************************************/
/*
*
*/
static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
sigset_t *set, struct pt_regs * regs)
static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
sigset_t *set)
{
struct rt_sigframe __user *frame;
int rsig;
frame = get_sigframe(ka, regs, sizeof(*frame));
frame = get_sigframe(ka, sizeof(*frame));
if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
goto give_sigsegv;
@ -409,7 +413,7 @@ static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
if (__put_user(0, &frame->uc.uc_flags) ||
__put_user(0, &frame->uc.uc_link) ||
__put_user((void*)current->sas_ss_sp, &frame->uc.uc_stack.ss_sp) ||
__put_user(sas_ss_flags(regs->sp), &frame->uc.uc_stack.ss_flags) ||
__put_user(sas_ss_flags(__frame->sp), &frame->uc.uc_stack.ss_flags) ||
__put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size))
goto give_sigsegv;
@ -440,34 +444,38 @@ static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
}
/* Set up registers for signal handler */
regs->sp = (unsigned long) frame;
regs->lr = (unsigned long) &frame->retcode;
regs->gr8 = sig;
regs->gr9 = (unsigned long) &frame->info;
__frame->sp = (unsigned long) frame;
__frame->lr = (unsigned long) &frame->retcode;
__frame->gr8 = sig;
__frame->gr9 = (unsigned long) &frame->info;
if (get_personality & FDPIC_FUNCPTRS) {
struct fdpic_func_descriptor *funcptr =
(struct fdpic_func_descriptor __user *) ka->sa.sa_handler;
__get_user(regs->pc, &funcptr->text);
__get_user(regs->gr15, &funcptr->GOT);
__get_user(__frame->pc, &funcptr->text);
__get_user(__frame->gr15, &funcptr->GOT);
} else {
regs->pc = (unsigned long) ka->sa.sa_handler;
regs->gr15 = 0;
__frame->pc = (unsigned long) ka->sa.sa_handler;
__frame->gr15 = 0;
}
set_fs(USER_DS);
/* the tracer may want to single-step inside the handler */
if (test_thread_flag(TIF_SINGLESTEP))
ptrace_notify(SIGTRAP);
#if DEBUG_SIG
printk("SIG deliver %d (%s:%d): sp=%p pc=%lx ra=%p\n",
sig, current->comm, current->pid, frame, regs->pc, frame->pretcode);
sig, current->comm, current->pid, frame, __frame->pc,
frame->pretcode);
#endif
return;
return 1;
give_sigsegv:
if (sig == SIGSEGV)
ka->sa.sa_handler = SIG_DFL;
force_sig(SIGSEGV, current);
return 0;
} /* end setup_rt_frame() */
@ -475,43 +483,51 @@ give_sigsegv:
/*
* OK, we're invoking a handler
*/
static void handle_signal(unsigned long sig, siginfo_t *info,
struct k_sigaction *ka, sigset_t *oldset,
struct pt_regs *regs)
static int handle_signal(unsigned long sig, siginfo_t *info,
struct k_sigaction *ka, sigset_t *oldset)
{
int ret;
/* Are we from a system call? */
if (in_syscall(regs)) {
if (in_syscall(__frame)) {
/* If so, check system call restarting.. */
switch (regs->gr8) {
switch (__frame->gr8) {
case -ERESTART_RESTARTBLOCK:
case -ERESTARTNOHAND:
regs->gr8 = -EINTR;
__frame->gr8 = -EINTR;
break;
case -ERESTARTSYS:
if (!(ka->sa.sa_flags & SA_RESTART)) {
regs->gr8 = -EINTR;
__frame->gr8 = -EINTR;
break;
}
/* fallthrough */
case -ERESTARTNOINTR:
regs->gr8 = regs->orig_gr8;
regs->pc -= 4;
__frame->gr8 = __frame->orig_gr8;
__frame->pc -= 4;
}
}
/* Set up the stack frame */
if (ka->sa.sa_flags & SA_SIGINFO)
setup_rt_frame(sig, ka, info, oldset, regs);
ret = setup_rt_frame(sig, ka, info, oldset);
else
setup_frame(sig, ka, oldset, regs);
ret = setup_frame(sig, ka, oldset);
if (ret) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked,
&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
return ret;
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
} /* end handle_signal() */
/*****************************************************************************/
@ -520,7 +536,7 @@ static void handle_signal(unsigned long sig, siginfo_t *info,
* want to handle. Thus you cannot kill init even with a SIGKILL even by
* mistake.
*/
int do_signal(struct pt_regs *regs, sigset_t *oldset)
static int do_signal(sigset_t *oldset)
{
struct k_sigaction ka;
siginfo_t info;
@ -532,7 +548,7 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
* kernel mode. Just return without doing anything
* if so.
*/
if (!user_mode(regs))
if (!user_mode(__frame))
return 1;
if (try_to_freeze())
@ -541,30 +557,29 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
if (!oldset)
oldset = &current->blocked;
signr = get_signal_to_deliver(&info, &ka, regs, NULL);
if (signr > 0) {
handle_signal(signr, &info, &ka, oldset, regs);
return 1;
}
signr = get_signal_to_deliver(&info, &ka, __frame, NULL);
if (signr > 0)
return handle_signal(signr, &info, &ka, oldset);
no_signal:
no_signal:
/* Did we come from a system call? */
if (regs->syscallno >= 0) {
if (__frame->syscallno >= 0) {
/* Restart the system call - no handlers present */
if (regs->gr8 == -ERESTARTNOHAND ||
regs->gr8 == -ERESTARTSYS ||
regs->gr8 == -ERESTARTNOINTR) {
regs->gr8 = regs->orig_gr8;
regs->pc -= 4;
if (__frame->gr8 == -ERESTARTNOHAND ||
__frame->gr8 == -ERESTARTSYS ||
__frame->gr8 == -ERESTARTNOINTR) {
__frame->gr8 = __frame->orig_gr8;
__frame->pc -= 4;
}
if (regs->gr8 == -ERESTART_RESTARTBLOCK){
regs->gr8 = __NR_restart_syscall;
regs->pc -= 4;
if (__frame->gr8 == -ERESTART_RESTARTBLOCK){
__frame->gr8 = __NR_restart_syscall;
__frame->pc -= 4;
}
}
return 0;
} /* end do_signal() */
/*****************************************************************************/
@ -580,6 +595,6 @@ asmlinkage void do_notify_resume(__u32 thread_info_flags)
/* deal with pending signal delivery */
if (thread_info_flags & _TIF_SIGPENDING)
do_signal(__frame, NULL);
do_signal(NULL);
} /* end do_notify_resume() */

View File

@ -464,7 +464,6 @@ config NUMA
depends on SMP && HIGHMEM64G && (X86_NUMAQ || X86_GENERICARCH || (X86_SUMMIT && ACPI))
default n if X86_PC
default y if (X86_NUMAQ || X86_SUMMIT)
select SPARSEMEM_STATIC
# Need comments to help the hapless user trying to turn on NUMA support
comment "NUMA (NUMA-Q) requires SMP, 64GB highmem support"
@ -493,6 +492,10 @@ config HAVE_ARCH_ALLOC_REMAP
depends on NUMA
default y
config ARCH_FLATMEM_ENABLE
def_bool y
depends on (ARCH_SELECT_MEMORY_MODEL && X86_PC)
config ARCH_DISCONTIGMEM_ENABLE
def_bool y
depends on NUMA
@ -503,7 +506,8 @@ config ARCH_DISCONTIGMEM_DEFAULT
config ARCH_SPARSEMEM_ENABLE
def_bool y
depends on NUMA
depends on (NUMA || (X86_PC && EXPERIMENTAL))
select SPARSEMEM_STATIC
config ARCH_SELECT_MEMORY_MODEL
def_bool y

View File

@ -39,6 +39,7 @@ config M386
- "Winchip-2" for IDT Winchip 2.
- "Winchip-2A" for IDT Winchips with 3dNow! capabilities.
- "GeodeGX1" for Geode GX1 (Cyrix MediaGX).
- "Geode GX/LX" For AMD Geode GX and LX processors.
- "CyrixIII/VIA C3" for VIA Cyrix III or VIA C3.
- "VIA C3-2 for VIA C3-2 "Nehemiah" (model 9 and above).
@ -171,6 +172,11 @@ config MGEODEGX1
help
Select this for a Geode GX1 (Cyrix MediaGX) chip.
config MGEODE_LX
bool "Geode GX/LX"
help
Select this for AMD Geode GX and LX processors.
config MCYRIXIII
bool "CyrixIII/VIA-C3"
help
@ -220,8 +226,8 @@ config X86_XADD
config X86_L1_CACHE_SHIFT
int
default "7" if MPENTIUM4 || X86_GENERIC
default "4" if X86_ELAN || M486 || M386
default "5" if MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODEGX1
default "4" if X86_ELAN || M486 || M386 || MGEODEGX1
default "5" if MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
default "6" if MK7 || MK8 || MPENTIUMM
config RWSEM_GENERIC_SPINLOCK
@ -290,12 +296,12 @@ config X86_INTEL_USERCOPY
config X86_USE_PPRO_CHECKSUM
bool
depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MEFFICEON
depends on MWINCHIP3D || MWINCHIP2 || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MEFFICEON || MGEODE_LX
default y
config X86_USE_3DNOW
bool
depends on MCYRIXIII || MK7
depends on MCYRIXIII || MK7 || MGEODE_LX
default y
config X86_OOSTORE

View File

@ -42,6 +42,16 @@ config DEBUG_PAGEALLOC
This results in a large slowdown, but helps to find certain types
of memory corruptions.
config DEBUG_RODATA
bool "Write protect kernel read-only data structures"
depends on DEBUG_KERNEL
help
Mark the kernel read-only data as write-protected in the pagetables,
in order to catch accidental (and incorrect) writes to such const
data. This option may have a slight performance impact because a
portion of the kernel code won't be covered by a 2MB TLB anymore.
If in doubt, say "N".
config 4KSTACKS
bool "Use 4Kb for kernel stacks instead of 8Kb"
depends on DEBUG_KERNEL

View File

@ -721,7 +721,7 @@ static int __init apic_set_verbosity(char *str)
apic_verbosity = APIC_VERBOSE;
else
printk(KERN_WARNING "APIC Verbosity level %s not recognised"
" use apic=verbose or apic=debug", str);
" use apic=verbose or apic=debug\n", str);
return 0;
}

View File

@ -302,17 +302,6 @@ extern int (*console_blank_hook)(int);
#include "apm.h"
/*
* Define to make all _set_limit calls use 64k limits. The APM 1.1 BIOS is
* supposed to provide limit information that it recognizes. Many machines
* do this correctly, but many others do not restrict themselves to their
* claimed limit. When this happens, they will cause a segmentation
* violation in the kernel at boot time. Most BIOS's, however, will
* respect a 64k limit, so we use that. If you want to be pedantic and
* hold your BIOS to its claims, then undefine this.
*/
#define APM_RELAX_SEGMENTS
/*
* Define to re-initialize the interrupt 0 timer to 100 Hz after a suspend.
* This was patched by Chad Miller <cmiller@surfsouth.com>, original code by
@ -1075,22 +1064,23 @@ static int apm_engage_power_management(u_short device, int enable)
static int apm_console_blank(int blank)
{
int error;
u_short state;
int error, i;
u_short state;
static const u_short dev[3] = { 0x100, 0x1FF, 0x101 };
state = blank ? APM_STATE_STANDBY : APM_STATE_READY;
/* Blank the first display device */
error = set_power_state(0x100, state);
if ((error != APM_SUCCESS) && (error != APM_NO_ERROR)) {
/* try to blank them all instead */
error = set_power_state(0x1ff, state);
if ((error != APM_SUCCESS) && (error != APM_NO_ERROR))
/* try to blank device one instead */
error = set_power_state(0x101, state);
for (i = 0; i < ARRAY_SIZE(dev); i++) {
error = set_power_state(dev[i], state);
if ((error == APM_SUCCESS) || (error == APM_NO_ERROR))
return 1;
if (error == APM_NOT_ENGAGED)
break;
}
if ((error == APM_SUCCESS) || (error == APM_NO_ERROR))
return 1;
if (error == APM_NOT_ENGAGED) {
if (error == APM_NOT_ENGAGED && state != APM_STATE_READY) {
static int tried;
int eng_error;
if (tried++ == 0) {
@ -2233,8 +2223,8 @@ static struct dmi_system_id __initdata apm_dmi_table[] = {
static int __init apm_init(void)
{
struct proc_dir_entry *apm_proc;
struct desc_struct *gdt;
int ret;
int i;
dmi_check_system(apm_dmi_table);
@ -2312,45 +2302,30 @@ static int __init apm_init(void)
set_base(bad_bios_desc, __va((unsigned long)0x40 << 4));
_set_limit((char *)&bad_bios_desc, 4095 - (0x40 << 4));
/*
* Set up the long jump entry point to the APM BIOS, which is called
* from inline assembly.
*/
apm_bios_entry.offset = apm_info.bios.offset;
apm_bios_entry.segment = APM_CS;
for (i = 0; i < NR_CPUS; i++) {
struct desc_struct *gdt = get_cpu_gdt_table(i);
set_base(gdt[APM_CS >> 3],
__va((unsigned long)apm_info.bios.cseg << 4));
set_base(gdt[APM_CS_16 >> 3],
__va((unsigned long)apm_info.bios.cseg_16 << 4));
set_base(gdt[APM_DS >> 3],
__va((unsigned long)apm_info.bios.dseg << 4));
#ifndef APM_RELAX_SEGMENTS
if (apm_info.bios.version == 0x100) {
#endif
/* For ASUS motherboard, Award BIOS rev 110 (and others?) */
_set_limit((char *)&gdt[APM_CS >> 3], 64 * 1024 - 1);
/* For some unknown machine. */
_set_limit((char *)&gdt[APM_CS_16 >> 3], 64 * 1024 - 1);
/* For the DEC Hinote Ultra CT475 (and others?) */
_set_limit((char *)&gdt[APM_DS >> 3], 64 * 1024 - 1);
#ifndef APM_RELAX_SEGMENTS
} else {
_set_limit((char *)&gdt[APM_CS >> 3],
(apm_info.bios.cseg_len - 1) & 0xffff);
_set_limit((char *)&gdt[APM_CS_16 >> 3],
(apm_info.bios.cseg_16_len - 1) & 0xffff);
_set_limit((char *)&gdt[APM_DS >> 3],
(apm_info.bios.dseg_len - 1) & 0xffff);
/* workaround for broken BIOSes */
if (apm_info.bios.cseg_len <= apm_info.bios.offset)
_set_limit((char *)&gdt[APM_CS >> 3], 64 * 1024 -1);
if (apm_info.bios.dseg_len <= 0x40) { /* 0x40 * 4kB == 64kB */
/* for the BIOS that assumes granularity = 1 */
gdt[APM_DS >> 3].b |= 0x800000;
printk(KERN_NOTICE "apm: we set the granularity of dseg.\n");
}
}
#endif
}
/*
* The APM 1.1 BIOS is supposed to provide limit information that it
* recognizes. Many machines do this correctly, but many others do
* not restrict themselves to their claimed limit. When this happens,
* they will cause a segmentation violation in the kernel at boot time.
* Most BIOS's, however, will respect a 64k limit, so we use that.
*
* Note we only set APM segments on CPU zero, since we pin the APM
* code to that CPU.
*/
gdt = get_cpu_gdt_table(0);
set_base(gdt[APM_CS >> 3],
__va((unsigned long)apm_info.bios.cseg << 4));
set_base(gdt[APM_CS_16 >> 3],
__va((unsigned long)apm_info.bios.cseg_16 << 4));
set_base(gdt[APM_DS >> 3],
__va((unsigned long)apm_info.bios.dseg << 4));
apm_proc = create_proc_info_entry("apm", 0, NULL, apm_get_info);
if (apm_proc)

View File

@ -161,8 +161,13 @@ static void __init init_amd(struct cpuinfo_x86 *c)
set_bit(X86_FEATURE_K6_MTRR, c->x86_capability);
break;
}
break;
if (c->x86_model == 10) {
/* AMD Geode LX is model 10 */
/* placeholder for any needed mods */
break;
}
break;
case 6: /* An Athlon/Duron */
/* Bit 15 of Athlon specific MSR 15, needs to be 0

View File

@ -18,9 +18,6 @@
#include "cpu.h"
DEFINE_PER_CPU(struct desc_struct, cpu_gdt_table[GDT_ENTRIES]);
EXPORT_PER_CPU_SYMBOL(cpu_gdt_table);
DEFINE_PER_CPU(unsigned char, cpu_16bit_stack[CPU_16BIT_STACK_SIZE]);
EXPORT_PER_CPU_SYMBOL(cpu_16bit_stack);
@ -598,11 +595,6 @@ void __devinit cpu_init(void)
load_gdt(&cpu_gdt_descr[cpu]);
load_idt(&idt_descr);
/*
* Delete NT
*/
__asm__("pushfl ; andl $0xffffbfff,(%esp) ; popfl");
/*
* Set up and load the per-CPU TSS and LDT
*/

View File

@ -342,6 +342,31 @@ static void __init init_cyrix(struct cpuinfo_x86 *c)
return;
}
/*
* Handle National Semiconductor branded processors
*/
static void __devinit init_nsc(struct cpuinfo_x86 *c)
{
/* There may be GX1 processors in the wild that are branded
* NSC and not Cyrix.
*
* This function only handles the GX processor, and kicks everything
* else to the Cyrix init function above - that should cover any
* processors that might have been branded differently after NSC
* acquired Cyrix.
*
* If this breaks your GX1 horribly, please e-mail
* info-linux@ldcmail.amd.com to tell us.
*/
/* Handle the GX (formerly known as the GX2) */
if (c->x86 == 5 && c->x86_model == 5)
display_cacheinfo(c);
else
init_cyrix(c);
}
/*
* Cyrix CPUs without cpuid or with cpuid not yet enabled can be detected
* by the fact that they preserve the flags across the division of 5/2.
@ -422,7 +447,7 @@ int __init cyrix_init_cpu(void)
static struct cpu_dev nsc_cpu_dev __initdata = {
.c_vendor = "NSC",
.c_ident = { "Geode by NSC" },
.c_init = init_cyrix,
.c_init = init_nsc,
.c_identify = generic_identify,
};

View File

@ -117,14 +117,13 @@ static ssize_t cpuid_read(struct file *file, char __user *buf,
{
char __user *tmp = buf;
u32 data[4];
size_t rv;
u32 reg = *ppos;
int cpu = iminor(file->f_dentry->d_inode);
if (count % 16)
return -EINVAL; /* Invalid chunk size */
for (rv = 0; count; count -= 16) {
for (; count; count -= 16) {
do_cpuid(cpu, reg, data);
if (copy_to_user(tmp, &data, 16))
return -EFAULT;
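/*
 * Illustrative sketch (not from the patch): reading /dev/cpu/<n>/cpuid from
 * userspace.  Reads must be multiples of 16 bytes, as the check above
 * enforces, and per the "reg = *ppos" above the file offset selects the
 * CPUID leaf; leaf 0 returns the maximum leaf and the vendor string.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint32_t regs[4];	/* eax, ebx, ecx, edx of the requested leaf */
	int fd = open("/dev/cpu/0/cpuid", O_RDONLY);

	if (fd < 0 || pread(fd, regs, sizeof(regs), 0) != sizeof(regs)) {
		perror("cpuid");
		return 1;
	}
	printf("max leaf %u, vendor %.4s%.4s%.4s\n", regs[0],
	       (char *)&regs[1], (char *)&regs[3], (char *)&regs[2]);
	close(fd);
	return 0;
}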

View File

@ -657,6 +657,7 @@ ENTRY(spurious_interrupt_bug)
pushl $do_spurious_interrupt_bug
jmp error_code
.section .rodata,"a"
#include "syscall_table.S"
syscall_table_size=(.-sys_call_table)

View File

@ -504,19 +504,24 @@ ENTRY(cpu_gdt_table)
.quad 0x0000000000000000 /* 0x80 TSS descriptor */
.quad 0x0000000000000000 /* 0x88 LDT descriptor */
/* Segments used for calling PnP BIOS */
.quad 0x00c09a0000000000 /* 0x90 32-bit code */
.quad 0x00809a0000000000 /* 0x98 16-bit code */
.quad 0x0080920000000000 /* 0xa0 16-bit data */
.quad 0x0080920000000000 /* 0xa8 16-bit data */
.quad 0x0080920000000000 /* 0xb0 16-bit data */
/*
* Segments used for calling PnP BIOS have byte granularity.
* The code segments and data segments have fixed 64k limits,
* the transfer segment sizes are set at run time.
*/
.quad 0x00409a000000ffff /* 0x90 32-bit code */
.quad 0x00009a000000ffff /* 0x98 16-bit code */
.quad 0x000092000000ffff /* 0xa0 16-bit data */
.quad 0x0000920000000000 /* 0xa8 16-bit data */
.quad 0x0000920000000000 /* 0xb0 16-bit data */
/*
* The APM segments have byte granularity and their bases
* and limits are set at run time.
* are set at run time. All have 64k limits.
*/
.quad 0x00409a0000000000 /* 0xb8 APM CS code */
.quad 0x00009a0000000000 /* 0xc0 APM CS 16 code (16 bit) */
.quad 0x0040920000000000 /* 0xc8 APM DS data */
.quad 0x00409a000000ffff /* 0xb8 APM CS code */
.quad 0x00009a000000ffff /* 0xc0 APM CS 16 code (16 bit) */
.quad 0x004092000000ffff /* 0xc8 APM DS data */
.quad 0x0000920000000000 /* 0xd0 - ESPFIX 16-bit SS */
.quad 0x0000000000000000 /* 0xd8 - unused */
@ -525,3 +530,5 @@ ENTRY(cpu_gdt_table)
.quad 0x0000000000000000 /* 0xf0 - unused */
.quad 0x0000000000000000 /* 0xf8 - GDT entry 31: double-fault TSS */
/* Be sure this is zeroed to avoid false validations in Xen */
.fill PAGE_SIZE_asm / 8 - GDT_ENTRIES,8,0
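/*
 * Illustrative decoder (not from the patch) for the 8-byte descriptors
 * above, showing why the new PnP/APM entries read as "base 0, limit 64k,
 * byte granularity" while the old ones were page-granular with the limit
 * left to be set at run time.
 */
#include <stdint.h>
#include <stdio.h>

static void decode_desc(uint64_t d)
{
	uint32_t base   = (uint32_t)((d >> 16) & 0xffffff) |
			  (uint32_t)((d >> 32) & 0xff000000);
	uint32_t limit  = (uint32_t)(d & 0xffff) |
			  (uint32_t)((d >> 32) & 0xf0000);
	uint8_t  access = (d >> 40) & 0xff;
	uint8_t  flags  = (d >> 52) & 0x0f;

	printf("base=%#010x limit=%#07x access=%#04x %s, %s granularity\n",
	       base, limit, access,
	       (flags & 0x4) ? "32-bit" : "16-bit",
	       (flags & 0x8) ? "page" : "byte");
}

int main(void)
{
	decode_desc(0x00409a000000ffffULL);	/* new APM CS: 64k limit, byte granular */
	decode_desc(0x00409a0000000000ULL);	/* old APM CS: limit 0, set at run time */
	return 0;
}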

View File

@ -3,8 +3,7 @@
#include <asm/checksum.h>
#include <asm/desc.h>
/* This is definitely a GPL-only symbol */
EXPORT_SYMBOL_GPL(cpu_gdt_table);
EXPORT_SYMBOL_GPL(cpu_gdt_descr);
EXPORT_SYMBOL(__down_failed);
EXPORT_SYMBOL(__down_failed_interruptible);

View File

@ -1722,8 +1722,8 @@ void disable_IO_APIC(void)
entry.dest_mode = 0; /* Physical */
entry.delivery_mode = dest_ExtINT; /* ExtInt */
entry.vector = 0;
entry.dest.physical.physical_dest = 0;
entry.dest.physical.physical_dest =
GET_APIC_ID(apic_read(APIC_ID));
/*
* Add it to the IO-APIC irq-routing table:

View File

@ -38,6 +38,12 @@
int smp_found_config;
unsigned int __initdata maxcpus = NR_CPUS;
#ifdef CONFIG_HOTPLUG_CPU
#define CPU_HOTPLUG_ENABLED (1)
#else
#define CPU_HOTPLUG_ENABLED (0)
#endif
/*
* Various Linux-internal data structures created from the
* MP-table.
@ -219,14 +225,18 @@ static void __devinit MP_processor_info (struct mpc_config_processor *m)
cpu_set(num_processors, cpu_possible_map);
num_processors++;
if ((num_processors > 8) &&
((APIC_XAPIC(ver) &&
(boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)) ||
(boot_cpu_data.x86_vendor == X86_VENDOR_AMD)))
def_to_bigsmp = 1;
else
def_to_bigsmp = 0;
if (CPU_HOTPLUG_ENABLED || (num_processors > 8)) {
switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_INTEL:
if (!APIC_XAPIC(ver)) {
def_to_bigsmp = 0;
break;
}
/* If P4 and above fall through */
case X86_VENDOR_AMD:
def_to_bigsmp = 1;
}
}
bios_cpu_apicid[num_processors - 1] = m->mpc_apicid;
}

View File

@ -172,7 +172,6 @@ static ssize_t msr_read(struct file *file, char __user * buf,
{
u32 __user *tmp = (u32 __user *) buf;
u32 data[2];
size_t rv;
u32 reg = *ppos;
int cpu = iminor(file->f_dentry->d_inode);
int err;
@ -180,7 +179,7 @@ static ssize_t msr_read(struct file *file, char __user * buf,
if (count % 8)
return -EINVAL; /* Invalid chunk size */
for (rv = 0; count; count -= 8) {
for (; count; count -= 8) {
err = do_rdmsr(cpu, reg, &data[0], &data[1]);
if (err)
return err;

View File

@ -308,9 +308,7 @@ void show_regs(struct pt_regs * regs)
cr0 = read_cr0();
cr2 = read_cr2();
cr3 = read_cr3();
if (current_cpu_data.x86 > 4) {
cr4 = read_cr4();
}
cr4 = read_cr4_safe();
printk("CR0: %08lx CR2: %08lx CR3: %08lx CR4: %08lx\n", cr0, cr2, cr3, cr4);
show_trace(NULL, &regs->esp);
}
@ -404,17 +402,7 @@ void flush_thread(void)
void release_thread(struct task_struct *dead_task)
{
if (dead_task->mm) {
// temporary debugging check
if (dead_task->mm->context.size) {
printk("WARNING: dead process %8s still has LDT? <%p/%d>\n",
dead_task->comm,
dead_task->mm->context.ldt,
dead_task->mm->context.size);
BUG();
}
}
BUG_ON(dead_task->mm);
release_vm86_irqs(dead_task);
}

View File

@ -32,9 +32,12 @@
* in exit.c or in signal.c.
*/
/* determines which flags the user has access to. */
/* 1 = access 0 = no access */
#define FLAG_MASK 0x00044dd5
/*
* Determines which flags the user has access to [1 = access, 0 = no access].
* Prohibits changing ID(21), VIP(20), VIF(19), VM(17), IOPL(12-13), IF(9).
* Also masks reserved bits (31-22, 15, 5, 3, 1).
*/
#define FLAG_MASK 0x00054dd5
/* sets the trap flag. */
#define TRAP_FLAG 0x100
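/*
 * Illustrative sketch (not from the patch): how a mask like FLAG_MASK is
 * typically applied when ptrace writes EFLAGS - only the permitted bits come
 * from the debugger, everything else keeps the task's current value.
 */
static unsigned long merge_user_eflags(unsigned long current_eflags,
				       unsigned long user_value)
{
	return (user_value & FLAG_MASK) | (current_eflags & ~FLAG_MASK);
}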

View File

@ -111,12 +111,12 @@ static struct dmi_system_id __initdata reboot_dmi_table[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 2400"),
},
},
{ /* Handle problems with rebooting on HP nc6120 */
{ /* Handle problems with rebooting on HP laptops */
.callback = set_bios_reboot,
.ident = "HP Compaq nc6120",
.ident = "HP Compaq Laptop",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
DMI_MATCH(DMI_PRODUCT_NAME, "HP Compaq nc6120"),
DMI_MATCH(DMI_PRODUCT_NAME, "HP Compaq"),
},
},
{ }

View File

@ -954,6 +954,12 @@ efi_find_max_pfn(unsigned long start, unsigned long end, void *arg)
return 0;
}
static int __init
efi_memory_present_wrapper(unsigned long start, unsigned long end, void *arg)
{
memory_present(0, start, end);
return 0;
}
/*
* Find the highest page frame number we have available
@ -965,6 +971,7 @@ void __init find_max_pfn(void)
max_pfn = 0;
if (efi_enabled) {
efi_memmap_walk(efi_find_max_pfn, &max_pfn);
efi_memmap_walk(efi_memory_present_wrapper, NULL);
return;
}
@ -979,6 +986,7 @@ void __init find_max_pfn(void)
continue;
if (end > max_pfn)
max_pfn = end;
memory_present(0, start, end);
}
}

View File

@ -903,6 +903,12 @@ static int __devinit do_boot_cpu(int apicid, int cpu)
unsigned long start_eip;
unsigned short nmi_high = 0, nmi_low = 0;
if (!cpu_gdt_descr[cpu].address &&
!(cpu_gdt_descr[cpu].address = get_zeroed_page(GFP_KERNEL))) {
printk("Failed to allocate GDT for CPU %d\n", cpu);
return 1;
}
++cpucount;
/*

View File

@ -1,4 +1,3 @@
.data
ENTRY(sys_call_table)
.long sys_restart_syscall /* 0 - old "setup()" system call, used for restarting */
.long sys_exit

View File

@ -330,7 +330,9 @@ int recalibrate_cpu_khz(void)
unsigned int cpu_khz_old = cpu_khz;
if (cpu_has_tsc) {
local_irq_disable();
init_cpu_khz();
local_irq_enable();
cpu_data[0].loops_per_jiffy =
cpufreq_scale(cpu_data[0].loops_per_jiffy,
cpu_khz_old,

View File

@ -306,14 +306,17 @@ void die(const char * str, struct pt_regs * regs, long err)
.lock_owner_depth = 0
};
static int die_counter;
unsigned long flags;
if (die.lock_owner != raw_smp_processor_id()) {
console_verbose();
spin_lock_irq(&die.lock);
spin_lock_irqsave(&die.lock, flags);
die.lock_owner = smp_processor_id();
die.lock_owner_depth = 0;
bust_spinlocks(1);
}
else
local_save_flags(flags);
if (++die.lock_owner_depth < 3) {
int nl = 0;
@ -340,7 +343,7 @@ void die(const char * str, struct pt_regs * regs, long err)
bust_spinlocks(0);
die.lock_owner = -1;
spin_unlock_irq(&die.lock);
spin_unlock_irqrestore(&die.lock, flags);
if (kexec_should_crash(current))
crash_kexec(regs);
@ -1075,9 +1078,9 @@ void __init trap_init(void)
set_trap_gate(0,&divide_error);
set_intr_gate(1,&debug);
set_intr_gate(2,&nmi);
set_system_intr_gate(3, &int3); /* int3-5 can be called from all */
set_system_intr_gate(3, &int3); /* int3/4 can be called from all */
set_system_gate(4,&overflow);
set_system_gate(5,&bounds);
set_trap_gate(5,&bounds);
set_trap_gate(6,&invalid_op);
set_trap_gate(7,&device_not_available);
set_task_gate(8,GDT_ENTRY_DOUBLEFAULT_TSS);
@ -1095,6 +1098,28 @@ void __init trap_init(void)
#endif
set_trap_gate(19,&simd_coprocessor_error);
if (cpu_has_fxsr) {
/*
* Verify that the FXSAVE/FXRSTOR data will be 16-byte aligned.
* Generates a compile-time "error: zero width for bit-field" if
* the alignment is wrong.
*/
struct fxsrAlignAssert {
int _:!(offsetof(struct task_struct,
thread.i387.fxsave) & 15);
};
printk(KERN_INFO "Enabling fast FPU save and restore... ");
set_in_cr4(X86_CR4_OSFXSR);
printk("done.\n");
}
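/*
 * Aside (not from the patch): the fxsrAlignAssert struct above is a generic
 * compile-time assertion - a named bit-field whose width is !(bad_condition)
 * is legal only while bad_condition evaluates to zero.  A self-contained
 * version of the same trick, using a made-up structure:
 */
#include <stddef.h>

struct frame {
	char      pad[16];
	long long regs[8];
};

/* compiles only if 'regs' sits on a 16-byte boundary within struct frame */
struct frame_align_check {
	int _ : !(offsetof(struct frame, regs) & 15);
};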
if (cpu_has_xmm) {
printk(KERN_INFO "Enabling unmasked SIMD FPU exception "
"support... ");
set_in_cr4(X86_CR4_OSXMMEXCPT);
printk("done.\n");
}
set_system_gate(SYSCALL_VECTOR,&system_call);
/*

View File

@ -735,6 +735,30 @@ void free_initmem(void)
printk (KERN_INFO "Freeing unused kernel memory: %dk freed\n", (__init_end - __init_begin) >> 10);
}
#ifdef CONFIG_DEBUG_RODATA
extern char __start_rodata, __end_rodata;
void mark_rodata_ro(void)
{
unsigned long addr = (unsigned long)&__start_rodata;
for (; addr < (unsigned long)&__end_rodata; addr += PAGE_SIZE)
change_page_attr(virt_to_page(addr), 1, PAGE_KERNEL_RO);
printk ("Write protecting the kernel read-only data: %luk\n",
(unsigned long)(&__end_rodata - &__start_rodata) >> 10);
/*
* change_page_attr() requires a global_flush_tlb() call after it.
* We do this after the printk so that if something went wrong in the
* change, the printk gets out at least to give a better debug hint
* of who is the culprit.
*/
global_flush_tlb();
}
#endif
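/*
 * Sketch only (not from the patch): the same change_page_attr() and
 * global_flush_tlb() pair used by mark_rodata_ro() above can flip a page
 * back to writable.  'addr' is a hypothetical kernel virtual address and
 * the declarations come from the same headers mark_rodata_ro() relies on.
 */
static void make_page_writable(unsigned long addr)
{
	change_page_attr(virt_to_page(addr), 1, PAGE_KERNEL);
	global_flush_tlb();	/* required after change_page_attr() */
}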
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{

View File

@ -13,6 +13,7 @@
#include <asm/processor.h>
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
#include <asm/sections.h>
static DEFINE_SPINLOCK(cpa_lock);
static struct list_head df_list = LIST_HEAD_INIT(df_list);
@ -36,7 +37,8 @@ pte_t *lookup_address(unsigned long address)
return pte_offset_kernel(pmd, address);
}
static struct page *split_large_page(unsigned long address, pgprot_t prot)
static struct page *split_large_page(unsigned long address, pgprot_t prot,
pgprot_t ref_prot)
{
int i;
unsigned long addr;
@ -54,7 +56,7 @@ static struct page *split_large_page(unsigned long address, pgprot_t prot)
pbase = (pte_t *)page_address(base);
for (i = 0; i < PTRS_PER_PTE; i++, addr += PAGE_SIZE) {
set_pte(&pbase[i], pfn_pte(addr >> PAGE_SHIFT,
addr == address ? prot : PAGE_KERNEL));
addr == address ? prot : ref_prot));
}
return base;
}
@ -98,11 +100,18 @@ static void set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
*/
static inline void revert_page(struct page *kpte_page, unsigned long address)
{
pte_t *linear = (pte_t *)
pgprot_t ref_prot;
pte_t *linear;
ref_prot =
((address & LARGE_PAGE_MASK) < (unsigned long)&_etext)
? PAGE_KERNEL_LARGE_EXEC : PAGE_KERNEL_LARGE;
linear = (pte_t *)
pmd_offset(pud_offset(pgd_offset_k(address), address), address);
set_pmd_pte(linear, address,
pfn_pte((__pa(address) & LARGE_PAGE_MASK) >> PAGE_SHIFT,
PAGE_KERNEL_LARGE));
ref_prot));
}
static int
@ -123,10 +132,16 @@ __change_page_attr(struct page *page, pgprot_t prot)
if ((pte_val(*kpte) & _PAGE_PSE) == 0) {
set_pte_atomic(kpte, mk_pte(page, prot));
} else {
struct page *split = split_large_page(address, prot);
pgprot_t ref_prot;
struct page *split;
ref_prot =
((address & LARGE_PAGE_MASK) < (unsigned long)&_etext)
? PAGE_KERNEL_EXEC : PAGE_KERNEL;
split = split_large_page(address, prot, ref_prot);
if (!split)
return -ENOMEM;
set_pmd_pte(kpte,address,mk_pte(split, PAGE_KERNEL));
set_pmd_pte(kpte,address,mk_pte(split, ref_prot));
kpte_page = split;
}
get_page(kpte_page);

View File

@ -846,7 +846,7 @@ static int pcibios_lookup_irq(struct pci_dev *dev, int assign)
* reported by the device if possible.
*/
newirq = dev->irq;
if (!((1 << newirq) & mask)) {
if (newirq && !((1 << newirq) & mask)) {
if ( pci_probe & PCI_USE_PIRQ_MASK) newirq = 0;
else printk(KERN_WARNING "PCI: IRQ %i for device %s doesn't match PIRQ mask - try pci=usepirqmask\n", newirq, pci_name(dev));
}

View File

@ -81,6 +81,12 @@ config PLAT_MAPPI2
config PLAT_MAPPI3
bool "Mappi-III(M3A-2170)"
config PLAT_M32104UT
bool "M32104UT"
help
The M3T-M32104UT is a reference board based on the uT-Engine
specification. This board has an M32104 chip.
endchoice
choice
@ -93,6 +99,10 @@ config CHIP_M32700
config CHIP_M32102
bool "M32102"
config CHIP_M32104
bool "M32104"
depends on PLAT_M32104UT
config CHIP_VDEC2
bool "VDEC2"
@ -115,7 +125,7 @@ config TLB_ENTRIES
config ISA_M32R
bool
depends on CHIP_M32102
depends on CHIP_M32102 || CHIP_M32104
default y
config ISA_M32R2
@ -140,6 +150,7 @@ config BUS_CLOCK
default "50000000" if PLAT_MAPPI3
default "50000000" if PLAT_M32700UT
default "50000000" if PLAT_OPSPUT
default "54000000" if PLAT_M32104UT
default "33333333" if PLAT_OAKS32R
default "20000000" if PLAT_MAPPI2
@ -157,6 +168,7 @@ config MEMORY_START
default "08000000" if PLAT_USRV
default "08000000" if PLAT_M32700UT
default "08000000" if PLAT_OPSPUT
default "04000000" if PLAT_M32104UT
default "01000000" if PLAT_OAKS32R
config MEMORY_SIZE
@ -166,6 +178,7 @@ config MEMORY_SIZE
default "02000000" if PLAT_USRV
default "01000000" if PLAT_M32700UT
default "01000000" if PLAT_OPSPUT
default "01000000" if PLAT_M32104UT
default "00800000" if PLAT_OAKS32R
config NOHIGHMEM
@ -174,21 +187,22 @@ config NOHIGHMEM
config ARCH_DISCONTIGMEM_ENABLE
bool "Internal RAM Support"
depends on CHIP_M32700 || CHIP_M32102 || CHIP_VDEC2 || CHIP_OPSP
depends on CHIP_M32700 || CHIP_M32102 || CHIP_VDEC2 || CHIP_OPSP || CHIP_M32104
default y
source "mm/Kconfig"
config IRAM_START
hex "Internal memory start address (hex)"
default "00f00000"
depends on (CHIP_M32700 || CHIP_M32102 || CHIP_VDEC2 || CHIP_OPSP) && DISCONTIGMEM
default "00f00000" if !CHIP_M32104
default "00700000" if CHIP_M32104
depends on (CHIP_M32700 || CHIP_M32102 || CHIP_VDEC2 || CHIP_OPSP || CHIP_M32104) && DISCONTIGMEM
config IRAM_SIZE
hex "Internal memory size (hex)"
depends on (CHIP_M32700 || CHIP_M32102 || CHIP_VDEC2 || CHIP_OPSP) && DISCONTIGMEM
depends on (CHIP_M32700 || CHIP_M32102 || CHIP_VDEC2 || CHIP_OPSP || CHIP_M32104) && DISCONTIGMEM
default "00080000" if CHIP_M32700
default "00010000" if CHIP_M32102 || CHIP_OPSP
default "00010000" if CHIP_M32102 || CHIP_OPSP || CHIP_M32104
default "00008000" if CHIP_VDEC2
#

View File

@ -143,6 +143,11 @@ startup:
ldi r0, -2
ldi r1, 0x0100 ; invalidate
stb r1, @r0
#elif defined(CONFIG_CHIP_M32104)
/* Cache flush */
ldi r0, -2
ldi r1, 0x0700 ; invalidate i-cache, copy back d-cache
sth r1, @r0
#else
#error "put your cache flush function, please"
#endif

View File

@ -1,11 +1,10 @@
/*
* linux/arch/m32r/boot/setup.S -- A setup code.
*
* Copyright (C) 2001, 2002 Hiroyuki Kondo, Hirokazu Takata,
* and Hitoshi Yamamoto
* Copyright (C) 2001-2005 Hiroyuki Kondo, Hirokazu Takata,
* Hitoshi Yamamoto, Hayato Fujiwara
*
*/
/* $Id$ */
#include <linux/linkage.h>
#include <asm/segment.h>
@ -80,6 +79,20 @@ ENTRY(boot)
ldi r1, #0x101 ; cache on (with invalidation)
; ldi r1, #0x00 ; cache off
st r1, @r0
#elif defined(CONFIG_CHIP_M32104)
ldi r0, #-96 ; DNCR0
seth r1, #0x0060 ; from 0x00600000
or3 r1, r1, #0x0005 ; size 2MB
st r1, @r0
seth r1, #0x0100 ; from 0x01000000
or3 r1, r1, #0x0003 ; size 16MB
st r1, @+r0
seth r1, #0x0200 ; from 0x02000000
or3 r1, r1, #0x0002 ; size 32MB
st r1, @+r0
ldi r0, #-4 ;LDIMM (r0, M32R_MCCR)
ldi r1, #0x703 ; cache on (with invalidation)
st r1, @r0
#else
#error unknown chip configuration
#endif
@ -115,10 +128,15 @@ mmu_on:
st r1, @(MATM_offset,r0) ; Set MATM (T bit ON)
ld r0, @(MATM_offset,r0) ; Check
#else
#if defined(CONFIG_CHIP_M32700)
seth r0,#high(M32R_MCDCAR)
or3 r0,r0,#low(M32R_MCDCAR)
ld24 r1,#0x8080
st r1,@r0
#elif defined(CONFIG_CHIP_M32104)
LDIMM (r2, eit_vector) ; set EVB(cr5)
mvtc r2, cr5
#endif
#endif /* CONFIG_MMU */
jmp r13
nop

View File

@ -16,5 +16,6 @@ obj-$(CONFIG_PLAT_M32700UT) += setup_m32700ut.o io_m32700ut.o
obj-$(CONFIG_PLAT_OPSPUT) += setup_opsput.o io_opsput.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_PLAT_OAKS32R) += setup_oaks32r.o io_oaks32r.o
obj-$(CONFIG_PLAT_M32104UT) += setup_m32104ut.o io_m32104ut.o
EXTRA_AFLAGS := -traditional

View File

@ -315,7 +315,7 @@ ENTRY(ei_handler)
mv r1, sp ; arg1(regs)
#if defined(CONFIG_CHIP_VDEC2) || defined(CONFIG_CHIP_XNUX2) \
|| defined(CONFIG_CHIP_M32700) || defined(CONFIG_CHIP_M32102) \
|| defined(CONFIG_CHIP_OPSP)
|| defined(CONFIG_CHIP_OPSP) || defined(CONFIG_CHIP_M32104)
; GET_ICU_STATUS;
seth r0, #shigh(M32R_ICU_ISTS_ADDR)
@ -541,7 +541,20 @@ check_int2:
bra check_end
.fillinsn
check_end:
#endif /* CONFIG_PLAT_OPSPUT */
#elif defined(CONFIG_PLAT_M32104UT)
add3 r2, r0, #-(M32R_IRQ_INT1) ; INT1# interrupt
bnez r2, check_end
; read ICU status register of PLD
seth r0, #high(PLD_ICUISTS)
or3 r0, r0, #low(PLD_ICUISTS)
lduh r0, @r0
slli r0, #21
srli r0, #27 ; ISN
addi r0, #(M32104UT_PLD_IRQ_BASE)
bra check_end
.fillinsn
check_end:
#endif /* CONFIG_PLAT_M32104UT */
bl do_IRQ
#endif /* CONFIG_SMP */
ld r14, @sp+
@ -651,8 +664,6 @@ ENTRY(rie_handler)
/* void rie_handler(int error_code) */
SWITCH_TO_KERNEL_STACK
SAVE_ALL
mvfc r0, bpc
ld r1, @r0
ldi r1, #0x20 ; error_code
mv r0, sp ; pt_regs
bl do_rie_handler

View File

@ -0,0 +1,298 @@
/*
* linux/arch/m32r/kernel/io_m32104ut.c
*
* Typical I/O routines for M32104UT board.
*
* Copyright (c) 2001-2005 Hiroyuki Kondo, Hirokazu Takata,
* Hitoshi Yamamoto, Mamoru Sakugawa,
* Naoto Sugai, Hayato Fujiwara
*/
#include <linux/config.h>
#include <asm/m32r.h>
#include <asm/page.h>
#include <asm/io.h>
#include <asm/byteorder.h>
#if defined(CONFIG_PCMCIA) && defined(CONFIG_M32R_CFC)
#include <linux/types.h>
#define M32R_PCC_IOMAP_SIZE 0x1000
#define M32R_PCC_IOSTART0 0x1000
#define M32R_PCC_IOEND0 (M32R_PCC_IOSTART0 + M32R_PCC_IOMAP_SIZE - 1)
extern void pcc_ioread_byte(int, unsigned long, void *, size_t, size_t, int);
extern void pcc_ioread_word(int, unsigned long, void *, size_t, size_t, int);
extern void pcc_iowrite_byte(int, unsigned long, void *, size_t, size_t, int);
extern void pcc_iowrite_word(int, unsigned long, void *, size_t, size_t, int);
#endif /* CONFIG_PCMCIA && CONFIG_M32R_CFC */
#define PORT2ADDR(port) _port2addr(port)
static inline void *_port2addr(unsigned long port)
{
return (void *)(port | NONCACHE_OFFSET);
}
#if defined(CONFIG_IDE) && !defined(CONFIG_M32R_CFC)
static inline void *__port2addr_ata(unsigned long port)
{
static int dummy_reg;
switch (port) {
case 0x1f0: return (void *)(0x0c002000 | NONCACHE_OFFSET);
case 0x1f1: return (void *)(0x0c012800 | NONCACHE_OFFSET);
case 0x1f2: return (void *)(0x0c012002 | NONCACHE_OFFSET);
case 0x1f3: return (void *)(0x0c012802 | NONCACHE_OFFSET);
case 0x1f4: return (void *)(0x0c012004 | NONCACHE_OFFSET);
case 0x1f5: return (void *)(0x0c012804 | NONCACHE_OFFSET);
case 0x1f6: return (void *)(0x0c012006 | NONCACHE_OFFSET);
case 0x1f7: return (void *)(0x0c012806 | NONCACHE_OFFSET);
case 0x3f6: return (void *)(0x0c01200e | NONCACHE_OFFSET);
default: return (void *)&dummy_reg;
}
}
#endif
/*
* M32104T-LAN is located in the extended bus space
* from 0x01000000 to 0x01ffffff in the physical address space.
* The base address of the LAN controller (LAN91C111) is 0x300.
*/
#define LAN_IOSTART (0x300 | NONCACHE_OFFSET)
#define LAN_IOEND (0x320 | NONCACHE_OFFSET)
static inline void *_port2addr_ne(unsigned long port)
{
return (void *)(port + NONCACHE_OFFSET + 0x01000000);
}
static inline void delay(void)
{
__asm__ __volatile__ ("push r0; \n\t pop r0;" : : :"memory");
}
/*
* NIC I/O function
*/
#define PORT2ADDR_NE(port) _port2addr_ne(port)
static inline unsigned char _ne_inb(void *portp)
{
return *(volatile unsigned char *)portp;
}
static inline unsigned short _ne_inw(void *portp)
{
return (unsigned short)le16_to_cpu(*(volatile unsigned short *)portp);
}
static inline void _ne_insb(void *portp, void *addr, unsigned long count)
{
unsigned char *buf = (unsigned char *)addr;
while (count--)
*buf++ = _ne_inb(portp);
}
static inline void _ne_outb(unsigned char b, void *portp)
{
*(volatile unsigned char *)portp = b;
}
static inline void _ne_outw(unsigned short w, void *portp)
{
*(volatile unsigned short *)portp = cpu_to_le16(w);
}
unsigned char _inb(unsigned long port)
{
if (port >= LAN_IOSTART && port < LAN_IOEND)
return _ne_inb(PORT2ADDR_NE(port));
return *(volatile unsigned char *)PORT2ADDR(port);
}
unsigned short _inw(unsigned long port)
{
if (port >= LAN_IOSTART && port < LAN_IOEND)
return _ne_inw(PORT2ADDR_NE(port));
return *(volatile unsigned short *)PORT2ADDR(port);
}
unsigned long _inl(unsigned long port)
{
return *(volatile unsigned long *)PORT2ADDR(port);
}
unsigned char _inb_p(unsigned long port)
{
unsigned char v = _inb(port);
delay();
return (v);
}
unsigned short _inw_p(unsigned long port)
{
unsigned short v = _inw(port);
delay();
return (v);
}
unsigned long _inl_p(unsigned long port)
{
unsigned long v = _inl(port);
delay();
return (v);
}
void _outb(unsigned char b, unsigned long port)
{
if (port >= LAN_IOSTART && port < LAN_IOEND)
_ne_outb(b, PORT2ADDR_NE(port));
else
*(volatile unsigned char *)PORT2ADDR(port) = b;
}
void _outw(unsigned short w, unsigned long port)
{
if (port >= LAN_IOSTART && port < LAN_IOEND)
_ne_outw(w, PORT2ADDR_NE(port));
else
*(volatile unsigned short *)PORT2ADDR(port) = w;
}
void _outl(unsigned long l, unsigned long port)
{
*(volatile unsigned long *)PORT2ADDR(port) = l;
}
void _outb_p(unsigned char b, unsigned long port)
{
_outb(b, port);
delay();
}
void _outw_p(unsigned short w, unsigned long port)
{
_outw(w, port);
delay();
}
void _outl_p(unsigned long l, unsigned long port)
{
_outl(l, port);
delay();
}
void _insb(unsigned int port, void *addr, unsigned long count)
{
if (port >= LAN_IOSTART && port < LAN_IOEND)
_ne_insb(PORT2ADDR_NE(port), addr, count);
else {
unsigned char *buf = addr;
unsigned char *portp = PORT2ADDR(port);
while (count--)
*buf++ = *(volatile unsigned char *)portp;
}
}
void _insw(unsigned int port, void *addr, unsigned long count)
{
unsigned short *buf = addr;
unsigned short *portp;
if (port >= LAN_IOSTART && port < LAN_IOEND) {
/*
* This portion is only used by smc91111.c to read data
* from the DATA_REG. Do not swap the data.
*/
portp = PORT2ADDR_NE(port);
while (count--)
*buf++ = *(volatile unsigned short *)portp;
#if defined(CONFIG_PCMCIA) && defined(CONFIG_M32R_CFC)
} else if (port >= M32R_PCC_IOSTART0 && port <= M32R_PCC_IOEND0) {
pcc_ioread_word(9, port, (void *)addr, sizeof(unsigned short),
count, 1);
#endif
#if defined(CONFIG_IDE) && !defined(CONFIG_M32R_CFC)
} else if ((port >= 0x1f0 && port <=0x1f7) || port == 0x3f6) {
portp = __port2addr_ata(port);
while (count--)
*buf++ = *(volatile unsigned short *)portp;
#endif
} else {
portp = PORT2ADDR(port);
while (count--)
*buf++ = *(volatile unsigned short *)portp;
}
}
void _insl(unsigned int port, void *addr, unsigned long count)
{
unsigned long *buf = addr;
unsigned long *portp;
portp = PORT2ADDR(port);
while (count--)
*buf++ = *(volatile unsigned long *)portp;
}
void _outsb(unsigned int port, const void *addr, unsigned long count)
{
const unsigned char *buf = addr;
unsigned char *portp;
if (port >= LAN_IOSTART && port < LAN_IOEND) {
portp = PORT2ADDR_NE(port);
while (count--)
_ne_outb(*buf++, portp);
} else {
portp = PORT2ADDR(port);
while (count--)
*(volatile unsigned char *)portp = *buf++;
}
}
void _outsw(unsigned int port, const void *addr, unsigned long count)
{
const unsigned short *buf = addr;
unsigned short *portp;
if (port >= LAN_IOSTART && port < LAN_IOEND) {
/*
* This portion is only used by smc91111.c to write data
* into the DATA_REG. Do not swap the data.
*/
portp = PORT2ADDR_NE(port);
while (count--)
*(volatile unsigned short *)portp = *buf++;
#if defined(CONFIG_IDE) && !defined(CONFIG_M32R_CFC)
} else if ((port >= 0x1f0 && port <=0x1f7) || port == 0x3f6) {
portp = __port2addr_ata(port);
while (count--)
*(volatile unsigned short *)portp = *buf++;
#endif
#if defined(CONFIG_PCMCIA) && defined(CONFIG_M32R_CFC)
} else if (port >= M32R_PCC_IOSTART0 && port <= M32R_PCC_IOEND0) {
pcc_iowrite_word(9, port, (void *)addr, sizeof(unsigned short),
count, 1);
#endif
} else {
portp = PORT2ADDR(port);
while (count--)
*(volatile unsigned short *)portp = *buf++;
}
}
void _outsl(unsigned int port, const void *addr, unsigned long count)
{
const unsigned long *buf = addr;
unsigned char *portp;
portp = PORT2ADDR(port);
while (count--)
*(volatile unsigned long *)portp = *buf++;
}

View File

@ -36,7 +36,7 @@ extern void pcc_iowrite_word(int, unsigned long, void *, size_t, size_t, int);
static inline void *_port2addr(unsigned long port)
{
return (void *)(port + NONCACHE_OFFSET);
return (void *)(port | NONCACHE_OFFSET);
}
#if defined(CONFIG_IDE) && !defined(CONFIG_M32R_CFC)
@ -45,15 +45,15 @@ static inline void *__port2addr_ata(unsigned long port)
static int dummy_reg;
switch (port) {
case 0x1f0: return (void *)0xac002000;
case 0x1f1: return (void *)0xac012800;
case 0x1f2: return (void *)0xac012002;
case 0x1f3: return (void *)0xac012802;
case 0x1f4: return (void *)0xac012004;
case 0x1f5: return (void *)0xac012804;
case 0x1f6: return (void *)0xac012006;
case 0x1f7: return (void *)0xac012806;
case 0x3f6: return (void *)0xac01200e;
case 0x1f0: return (void *)(0x0c002000 | NONCACHE_OFFSET);
case 0x1f1: return (void *)(0x0c012800 | NONCACHE_OFFSET);
case 0x1f2: return (void *)(0x0c012002 | NONCACHE_OFFSET);
case 0x1f3: return (void *)(0x0c012802 | NONCACHE_OFFSET);
case 0x1f4: return (void *)(0x0c012004 | NONCACHE_OFFSET);
case 0x1f5: return (void *)(0x0c012804 | NONCACHE_OFFSET);
case 0x1f6: return (void *)(0x0c012006 | NONCACHE_OFFSET);
case 0x1f7: return (void *)(0x0c012806 | NONCACHE_OFFSET);
case 0x3f6: return (void *)(0x0c01200e | NONCACHE_OFFSET);
default: return (void *)&dummy_reg;
}
}
@ -64,8 +64,8 @@ static inline void *__port2addr_ata(unsigned long port)
* from 0x10000000 to 0x13ffffff on physical address.
* The base address of LAN controller(LAN91C111) is 0x300.
*/
#define LAN_IOSTART 0xa0000300
#define LAN_IOEND 0xa0000320
#define LAN_IOSTART (0x300 | NONCACHE_OFFSET)
#define LAN_IOEND (0x320 | NONCACHE_OFFSET)
static inline void *_port2addr_ne(unsigned long port)
{
return (void *)(port + 0x10000000);
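A quick cross-check of the address rewrite in this header (illustrative only, not part of the patch): the literals replaced above imply that NONCACHE_OFFSET is 0xa0000000 on these boards, in which case the new OR-based expressions reproduce the old hard-coded noncached addresses exactly.
/* Hypothetical standalone check; the NONCACHE_OFFSET value is inferred
 * from the replaced literals, not taken from the kernel headers. */
#include <assert.h>
#define NONCACHE_OFFSET 0xa0000000UL            /* assumed value */
int main(void)
{
	assert((0x300UL      | NONCACHE_OFFSET) == 0xa0000300UL); /* old LAN_IOSTART */
	assert((0x320UL      | NONCACHE_OFFSET) == 0xa0000320UL); /* old LAN_IOEND   */
	assert((0x0c002000UL | NONCACHE_OFFSET) == 0xac002000UL); /* old IDE 0x1f0   */
	return 0;
}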

View File

@ -31,7 +31,7 @@ extern void pcc_iowrite(int, unsigned long, void *, size_t, size_t, int);
static inline void *_port2addr(unsigned long port)
{
return (void *)(port | (NONCACHE_OFFSET));
return (void *)(port | NONCACHE_OFFSET);
}
static inline void *_port2addr_ne(unsigned long port)

View File

@ -33,7 +33,7 @@ extern void pcc_iowrite_word(int, unsigned long, void *, size_t, size_t, int);
static inline void *_port2addr(unsigned long port)
{
return (void *)(port | (NONCACHE_OFFSET));
return (void *)(port | NONCACHE_OFFSET);
}
#if defined(CONFIG_IDE) && !defined(CONFIG_M32R_CFC)
@ -42,22 +42,22 @@ static inline void *__port2addr_ata(unsigned long port)
static int dummy_reg;
switch (port) {
case 0x1f0: return (void *)0xac002000;
case 0x1f1: return (void *)0xac012800;
case 0x1f2: return (void *)0xac012002;
case 0x1f3: return (void *)0xac012802;
case 0x1f4: return (void *)0xac012004;
case 0x1f5: return (void *)0xac012804;
case 0x1f6: return (void *)0xac012006;
case 0x1f7: return (void *)0xac012806;
case 0x3f6: return (void *)0xac01200e;
case 0x1f0: return (void *)(0x0c002000 | NONCACHE_OFFSET);
case 0x1f1: return (void *)(0x0c012800 | NONCACHE_OFFSET);
case 0x1f2: return (void *)(0x0c012002 | NONCACHE_OFFSET);
case 0x1f3: return (void *)(0x0c012802 | NONCACHE_OFFSET);
case 0x1f4: return (void *)(0x0c012004 | NONCACHE_OFFSET);
case 0x1f5: return (void *)(0x0c012804 | NONCACHE_OFFSET);
case 0x1f6: return (void *)(0x0c012006 | NONCACHE_OFFSET);
case 0x1f7: return (void *)(0x0c012806 | NONCACHE_OFFSET);
case 0x3f6: return (void *)(0x0c01200e | NONCACHE_OFFSET);
default: return (void *)&dummy_reg;
}
}
#endif
#define LAN_IOSTART 0xa0000300
#define LAN_IOEND 0xa0000320
#define LAN_IOSTART (0x300 | NONCACHE_OFFSET)
#define LAN_IOEND (0x320 | NONCACHE_OFFSET)
#ifdef CONFIG_CHIP_OPSP
static inline void *_port2addr_ne(unsigned long port)
{

View File

@ -33,7 +33,7 @@ extern void pcc_iowrite_word(int, unsigned long, void *, size_t, size_t, int);
static inline void *_port2addr(unsigned long port)
{
return (void *)(port + NONCACHE_OFFSET);
return (void *)(port | NONCACHE_OFFSET);
}
#if defined(CONFIG_IDE)
@ -43,33 +43,42 @@ static inline void *__port2addr_ata(unsigned long port)
switch (port) {
/* IDE0 CF */
case 0x1f0: return (void *)0xb4002000;
case 0x1f1: return (void *)0xb4012800;
case 0x1f2: return (void *)0xb4012002;
case 0x1f3: return (void *)0xb4012802;
case 0x1f4: return (void *)0xb4012004;
case 0x1f5: return (void *)0xb4012804;
case 0x1f6: return (void *)0xb4012006;
case 0x1f7: return (void *)0xb4012806;
case 0x3f6: return (void *)0xb401200e;
case 0x1f0: return (void *)(0x14002000 | NONCACHE_OFFSET);
case 0x1f1: return (void *)(0x14012800 | NONCACHE_OFFSET);
case 0x1f2: return (void *)(0x14012002 | NONCACHE_OFFSET);
case 0x1f3: return (void *)(0x14012802 | NONCACHE_OFFSET);
case 0x1f4: return (void *)(0x14012004 | NONCACHE_OFFSET);
case 0x1f5: return (void *)(0x14012804 | NONCACHE_OFFSET);
case 0x1f6: return (void *)(0x14012006 | NONCACHE_OFFSET);
case 0x1f7: return (void *)(0x14012806 | NONCACHE_OFFSET);
case 0x3f6: return (void *)(0x1401200e | NONCACHE_OFFSET);
/* IDE1 IDE */
case 0x170: return (void *)0xb4810000; /* Data 16bit */
case 0x171: return (void *)0xb4810002; /* Features / Error */
case 0x172: return (void *)0xb4810004; /* Sector count */
case 0x173: return (void *)0xb4810006; /* Sector number */
case 0x174: return (void *)0xb4810008; /* Cylinder low */
case 0x175: return (void *)0xb481000a; /* Cylinder high */
case 0x176: return (void *)0xb481000c; /* Device head */
case 0x177: return (void *)0xb481000e; /* Command */
case 0x376: return (void *)0xb480800c; /* Device control / Alt status */
case 0x170: /* Data 16bit */
return (void *)(0x14810000 | NONCACHE_OFFSET);
case 0x171: /* Features / Error */
return (void *)(0x14810002 | NONCACHE_OFFSET);
case 0x172: /* Sector count */
return (void *)(0x14810004 | NONCACHE_OFFSET);
case 0x173: /* Sector number */
return (void *)(0x14810006 | NONCACHE_OFFSET);
case 0x174: /* Cylinder low */
return (void *)(0x14810008 | NONCACHE_OFFSET);
case 0x175: /* Cylinder high */
return (void *)(0x1481000a | NONCACHE_OFFSET);
case 0x176: /* Device head */
return (void *)(0x1481000c | NONCACHE_OFFSET);
case 0x177: /* Command */
return (void *)(0x1481000e | NONCACHE_OFFSET);
case 0x376: /* Device control / Alt status */
return (void *)(0x1480800c | NONCACHE_OFFSET);
default: return (void *)&dummy_reg;
}
}
#endif
#define LAN_IOSTART 0xa0000300
#define LAN_IOEND 0xa0000320
#define LAN_IOSTART (0x300 | NONCACHE_OFFSET)
#define LAN_IOEND (0x320 | NONCACHE_OFFSET)
static inline void *_port2addr_ne(unsigned long port)
{
return (void *)(port + 0x10000000);

View File

@ -16,7 +16,7 @@
static inline void *_port2addr(unsigned long port)
{
return (void *)(port | (NONCACHE_OFFSET));
return (void *)(port | NONCACHE_OFFSET);
}
static inline void *_port2addr_ne(unsigned long port)

View File

@ -36,7 +36,7 @@ extern void pcc_iowrite_word(int, unsigned long, void *, size_t, size_t, int);
static inline void *_port2addr(unsigned long port)
{
return (void *)(port | (NONCACHE_OFFSET));
return (void *)(port | NONCACHE_OFFSET);
}
/*
@ -44,8 +44,8 @@ static inline void *_port2addr(unsigned long port)
* from 0x10000000 to 0x13ffffff on physical address.
* The base address of LAN controller(LAN91C111) is 0x300.
*/
#define LAN_IOSTART 0xa0000300
#define LAN_IOEND 0xa0000320
#define LAN_IOSTART (0x300 | NONCACHE_OFFSET)
#define LAN_IOEND (0x320 | NONCACHE_OFFSET)
static inline void *_port2addr_ne(unsigned long port)
{
return (void *)(port + 0x10000000);

View File

@ -320,6 +320,9 @@ static int show_cpuinfo(struct seq_file *m, void *v)
#elif defined(CONFIG_CHIP_MP)
seq_printf(m, "cpu family\t: M32R-MP\n"
"cache size\t: I-xxKB/D-xxKB\n");
#elif defined(CONFIG_CHIP_M32104)
seq_printf(m,"cpu family\t: M32104\n"
"cache size\t: I-8KB/D-8KB\n");
#else
seq_printf(m, "cpu family\t: Unknown\n");
#endif
@ -340,6 +343,8 @@ static int show_cpuinfo(struct seq_file *m, void *v)
seq_printf(m, "Machine\t\t: uServer\n");
#elif defined(CONFIG_PLAT_OAKS32R)
seq_printf(m, "Machine\t\t: OAKS32R\n");
#elif defined(CONFIG_PLAT_M32104UT)
seq_printf(m, "Machine\t\t: M3T-M32104UT uT Engine board\n");
#else
seq_printf(m, "Machine\t\t: Unknown\n");
#endif
@ -389,7 +394,7 @@ unsigned long cpu_initialized __initdata = 0;
*/
#if defined(CONFIG_CHIP_VDEC2) || defined(CONFIG_CHIP_XNUX2) \
|| defined(CONFIG_CHIP_M32700) || defined(CONFIG_CHIP_M32102) \
|| defined(CONFIG_CHIP_OPSP)
|| defined(CONFIG_CHIP_OPSP) || defined(CONFIG_CHIP_M32104)
void __init cpu_init (void)
{
int cpu_id = smp_processor_id();

View File

@ -0,0 +1,156 @@
/*
* linux/arch/m32r/kernel/setup_m32104ut.c
*
* Setup routines for M32104UT Board
*
* Copyright (c) 2002-2005 Hiroyuki Kondo, Hirokazu Takata,
* Hitoshi Yamamoto, Mamoru Sakugawa,
* Naoto Sugai, Hayato Fujiwara
*/
#include <linux/config.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/device.h>
#include <asm/system.h>
#include <asm/m32r.h>
#include <asm/io.h>
#define irq2port(x) (M32R_ICU_CR1_PORTL + ((x - 1) * sizeof(unsigned long)))
icu_data_t icu_data[NR_IRQS];
static void disable_m32104ut_irq(unsigned int irq)
{
unsigned long port, data;
port = irq2port(irq);
data = icu_data[irq].icucr|M32R_ICUCR_ILEVEL7;
outl(data, port);
}
static void enable_m32104ut_irq(unsigned int irq)
{
unsigned long port, data;
port = irq2port(irq);
data = icu_data[irq].icucr|M32R_ICUCR_IEN|M32R_ICUCR_ILEVEL6;
outl(data, port);
}
static void mask_and_ack_m32104ut(unsigned int irq)
{
disable_m32104ut_irq(irq);
}
static void end_m32104ut_irq(unsigned int irq)
{
enable_m32104ut_irq(irq);
}
static unsigned int startup_m32104ut_irq(unsigned int irq)
{
enable_m32104ut_irq(irq);
return (0);
}
static void shutdown_m32104ut_irq(unsigned int irq)
{
unsigned long port;
port = irq2port(irq);
outl(M32R_ICUCR_ILEVEL7, port);
}
static struct hw_interrupt_type m32104ut_irq_type =
{
.typename = "M32104UT-IRQ",
.startup = startup_m32104ut_irq,
.shutdown = shutdown_m32104ut_irq,
.enable = enable_m32104ut_irq,
.disable = disable_m32104ut_irq,
.ack = mask_and_ack_m32104ut,
.end = end_m32104ut_irq
};
void __init init_IRQ(void)
{
static int once = 0;
if (once)
return;
else
once++;
#if defined(CONFIG_SMC91X)
/* INT#0: LAN controller on M32104UT-LAN (SMC91C111)*/
irq_desc[M32R_IRQ_INT0].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_INT0].handler = &m32104ut_irq_type;
irq_desc[M32R_IRQ_INT0].action = 0;
irq_desc[M32R_IRQ_INT0].depth = 1;
icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN | M32R_ICUCR_ISMOD11; /* "H" level sense */
disable_m32104ut_irq(M32R_IRQ_INT0);
#endif /* CONFIG_SMC91X */
/* MFT2 : system timer */
irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_MFT2].handler = &m32104ut_irq_type;
irq_desc[M32R_IRQ_MFT2].action = 0;
irq_desc[M32R_IRQ_MFT2].depth = 1;
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
disable_m32104ut_irq(M32R_IRQ_MFT2);
#ifdef CONFIG_SERIAL_M32R_SIO
/* SIO0_R : uart receive data */
irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_SIO0_R].handler = &m32104ut_irq_type;
irq_desc[M32R_IRQ_SIO0_R].action = 0;
irq_desc[M32R_IRQ_SIO0_R].depth = 1;
icu_data[M32R_IRQ_SIO0_R].icucr = M32R_ICUCR_IEN;
disable_m32104ut_irq(M32R_IRQ_SIO0_R);
/* SIO0_S : uart send data */
irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_SIO0_S].handler = &m32104ut_irq_type;
irq_desc[M32R_IRQ_SIO0_S].action = 0;
irq_desc[M32R_IRQ_SIO0_S].depth = 1;
icu_data[M32R_IRQ_SIO0_S].icucr = M32R_ICUCR_IEN;
disable_m32104ut_irq(M32R_IRQ_SIO0_S);
#endif /* CONFIG_SERIAL_M32R_SIO */
}
#if defined(CONFIG_SMC91X)
#define LAN_IOSTART 0x300
#define LAN_IOEND 0x320
static struct resource smc91x_resources[] = {
[0] = {
.start = (LAN_IOSTART),
.end = (LAN_IOEND),
.flags = IORESOURCE_MEM,
},
[1] = {
.start = M32R_IRQ_INT0,
.end = M32R_IRQ_INT0,
.flags = IORESOURCE_IRQ,
}
};
static struct platform_device smc91x_device = {
.name = "smc91x",
.id = 0,
.num_resources = ARRAY_SIZE(smc91x_resources),
.resource = smc91x_resources,
};
#endif
static int __init platform_init(void)
{
#if defined(CONFIG_SMC91X)
platform_device_register(&smc91x_device);
#endif
return 0;
}
arch_initcall(platform_init);
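To make the ICU programming in this new board file easier to follow, here is the mask/unmask pattern it uses, written out as a standalone fragment (illustrative only; the macro and register names are exactly the ones defined above).
/* Illustrative recap of disable/enable above: each IRQ has its own ICU
 * control register at irq2port(irq); writing ILEVEL7 masks the line,
 * writing the saved icucr value with IEN and ILEVEL6 unmasks it. */
unsigned long port = irq2port(irq);
outl(icu_data[irq].icucr | M32R_ICUCR_ILEVEL7, port);                   /* mask   */
outl(icu_data[irq].icucr | M32R_ICUCR_IEN | M32R_ICUCR_ILEVEL6, port);  /* unmask */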

View File

@ -26,15 +26,7 @@
*/
#define irq2port(x) (M32R_ICU_CR1_PORTL + ((x - 1) * sizeof(unsigned long)))
#ifndef CONFIG_SMP
typedef struct {
unsigned long icucr; /* ICU Control Register */
} icu_data_t;
static icu_data_t icu_data[M32700UT_NUM_CPU_IRQ];
#else
icu_data_t icu_data[M32700UT_NUM_CPU_IRQ];
#endif /* CONFIG_SMP */
static void disable_m32700ut_irq(unsigned int irq)
{

View File

@ -19,12 +19,6 @@
#define irq2port(x) (M32R_ICU_CR1_PORTL + ((x - 1) * sizeof(unsigned long)))
#ifndef CONFIG_SMP
typedef struct {
unsigned long icucr; /* ICU Control Register */
} icu_data_t;
#endif /* CONFIG_SMP */
icu_data_t icu_data[NR_IRQS];
static void disable_mappi_irq(unsigned int irq)

View File

@ -19,12 +19,6 @@
#define irq2port(x) (M32R_ICU_CR1_PORTL + ((x - 1) * sizeof(unsigned long)))
#ifndef CONFIG_SMP
typedef struct {
unsigned long icucr; /* ICU Control Register */
} icu_data_t;
#endif /* CONFIG_SMP */
icu_data_t icu_data[NR_IRQS];
static void disable_mappi2_irq(unsigned int irq)

View File

@ -19,12 +19,6 @@
#define irq2port(x) (M32R_ICU_CR1_PORTL + ((x - 1) * sizeof(unsigned long)))
#ifndef CONFIG_SMP
typedef struct {
unsigned long icucr; /* ICU Control Register */
} icu_data_t;
#endif /* CONFIG_SMP */
icu_data_t icu_data[NR_IRQS];
static void disable_mappi3_irq(unsigned int irq)

View File

@ -18,12 +18,6 @@
#define irq2port(x) (M32R_ICU_CR1_PORTL + ((x - 1) * sizeof(unsigned long)))
#ifndef CONFIG_SMP
typedef struct {
unsigned long icucr; /* ICU Control Register */
} icu_data_t;
#endif /* CONFIG_SMP */
icu_data_t icu_data[NR_IRQS];
static void disable_oaks32r_irq(unsigned int irq)

View File

@ -27,15 +27,7 @@
*/
#define irq2port(x) (M32R_ICU_CR1_PORTL + ((x - 1) * sizeof(unsigned long)))
#ifndef CONFIG_SMP
typedef struct {
unsigned long icucr; /* ICU Control Register */
} icu_data_t;
static icu_data_t icu_data[OPSPUT_NUM_CPU_IRQ];
#else
icu_data_t icu_data[OPSPUT_NUM_CPU_IRQ];
#endif /* CONFIG_SMP */
static void disable_opsput_irq(unsigned int irq)
{

View File

@ -18,12 +18,6 @@
#define irq2port(x) (M32R_ICU_CR1_PORTL + ((x - 1) * sizeof(unsigned long)))
#if !defined(CONFIG_SMP)
typedef struct {
unsigned long icucr; /* ICU Control Register */
} icu_data_t;
#endif /* CONFIG_SMP */
icu_data_t icu_data[M32700UT_NUM_CPU_IRQ];
static void disable_mappi_irq(unsigned int irq)

View File

@ -57,7 +57,7 @@ static unsigned long do_gettimeoffset(void)
#if defined(CONFIG_CHIP_M32102) || defined(CONFIG_CHIP_XNUX2) \
|| defined(CONFIG_CHIP_VDEC2) || defined(CONFIG_CHIP_M32700) \
|| defined(CONFIG_CHIP_OPSP)
|| defined(CONFIG_CHIP_OPSP) || defined(CONFIG_CHIP_M32104)
#ifndef CONFIG_SMP
unsigned long count;
@ -268,7 +268,7 @@ void __init time_init(void)
#if defined(CONFIG_CHIP_M32102) || defined(CONFIG_CHIP_XNUX2) \
|| defined(CONFIG_CHIP_VDEC2) || defined(CONFIG_CHIP_M32700) \
|| defined(CONFIG_CHIP_OPSP)
|| defined(CONFIG_CHIP_OPSP) || defined(CONFIG_CHIP_M32104)
/* M32102 MFT setup */
setup_irq(M32R_IRQ_MFT2, &irq0);

View File

@ -0,0 +1,657 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.14
# Wed Nov 9 16:04:51 2005
#
CONFIG_M32R=y
# CONFIG_UID16 is not set
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
CONFIG_CLEAN_COMPILE=y
CONFIG_BROKEN_ON_SMP=y
CONFIG_INIT_ENV_ARG_LIMIT=32
#
# General setup
#
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
# CONFIG_POSIX_MQUEUE is not set
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_SYSCTL=y
# CONFIG_AUDIT is not set
CONFIG_HOTPLUG=y
# CONFIG_KOBJECT_UEVENT is not set
# CONFIG_IKCONFIG is not set
CONFIG_INITRAMFS_SOURCE=""
CONFIG_EMBEDDED=y
# CONFIG_KALLSYMS is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_BASE_FULL=y
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_CC_ALIGN_FUNCTIONS=0
CONFIG_CC_ALIGN_LABELS=0
CONFIG_CC_ALIGN_LOOPS=0
CONFIG_CC_ALIGN_JUMPS=0
CONFIG_TINY_SHMEM=y
CONFIG_BASE_SMALL=0
#
# Loadable module support
#
# CONFIG_MODULES is not set
#
# Processor type and features
#
# CONFIG_PLAT_MAPPI is not set
# CONFIG_PLAT_USRV is not set
# CONFIG_PLAT_M32700UT is not set
# CONFIG_PLAT_OPSPUT is not set
# CONFIG_PLAT_OAKS32R is not set
# CONFIG_PLAT_MAPPI2 is not set
# CONFIG_PLAT_MAPPI3 is not set
CONFIG_PLAT_M32104UT=y
# CONFIG_CHIP_M32700 is not set
# CONFIG_CHIP_M32102 is not set
CONFIG_CHIP_M32104=y
# CONFIG_CHIP_VDEC2 is not set
# CONFIG_CHIP_OPSP is not set
CONFIG_ISA_M32R=y
CONFIG_BUS_CLOCK=54000000
CONFIG_TIMER_DIVIDE=128
# CONFIG_CPU_LITTLE_ENDIAN is not set
CONFIG_MEMORY_START=04000000
CONFIG_MEMORY_SIZE=01000000
CONFIG_NOHIGHMEM=y
# CONFIG_ARCH_DISCONTIGMEM_ENABLE is not set
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_FLATMEM_MANUAL=y
# CONFIG_DISCONTIGMEM_MANUAL is not set
# CONFIG_SPARSEMEM_MANUAL is not set
CONFIG_FLATMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
# CONFIG_SPARSEMEM_STATIC is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
CONFIG_GENERIC_CALIBRATE_DELAY=y
# CONFIG_PREEMPT is not set
# CONFIG_SMP is not set
#
# Bus options (PCI, PCMCIA, EISA, MCA, ISA)
#
# CONFIG_ISA is not set
#
# PCCARD (PCMCIA/CardBus) support
#
CONFIG_PCCARD=y
# CONFIG_PCMCIA_DEBUG is not set
CONFIG_PCMCIA=y
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_PCMCIA_IOCTL=y
#
# PC-card bridges
#
#
# PCI Hotplug Support
#
#
# Executable file formats
#
CONFIG_BINFMT_FLAT=y
# CONFIG_BINFMT_ZFLAT is not set
# CONFIG_BINFMT_SHARED_FLAT is not set
# CONFIG_BINFMT_MISC is not set
#
# Networking
#
CONFIG_NET=y
#
# Networking options
#
# CONFIG_PACKET is not set
CONFIG_UNIX=y
# CONFIG_NET_KEY is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
# CONFIG_IP_ADVANCED_ROUTER is not set
CONFIG_IP_FIB_HASH=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_IP_PNP_BOOTP is not set
# CONFIG_IP_PNP_RARP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_ARPD is not set
# CONFIG_SYN_COOKIES is not set
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_TUNNEL is not set
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
# CONFIG_NETFILTER is not set
#
# DCCP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_DCCP is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_SCTP is not set
# CONFIG_ATM is not set
# CONFIG_BRIDGE is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_NET_SCHED is not set
# CONFIG_NET_CLS_ROUTE is not set
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_IEEE80211 is not set
#
# Device Drivers
#
#
# Generic Driver Options
#
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
# CONFIG_DEBUG_DRIVER is not set
#
# Connector - unified userspace <-> kernelspace linker
#
# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
#
# CONFIG_MTD is not set
#
# Parallel port support
#
# CONFIG_PARPORT is not set
#
# Plug and Play support
#
#
# Block devices
#
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_NBD=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CDROM_PKTCDVD is not set
#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
# CONFIG_IOSCHED_AS is not set
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set
# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
#
# CONFIG_IDE is not set
#
# SCSI device support
#
# CONFIG_RAID_ATTRS is not set
# CONFIG_SCSI is not set
#
# Multi-device support (RAID and LVM)
#
# CONFIG_MD is not set
#
# Fusion MPT device support
#
# CONFIG_FUSION is not set
#
# IEEE 1394 (FireWire) support
#
#
# I2O device support
#
#
# Network device support
#
CONFIG_NETDEVICES=y
CONFIG_DUMMY=y
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
# CONFIG_TUN is not set
#
# PHY device support
#
# CONFIG_PHYLIB is not set
#
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
CONFIG_MII=y
CONFIG_SMC91X=y
# CONFIG_NE2000 is not set
#
# Ethernet (1000 Mbit)
#
#
# Ethernet (10000 Mbit)
#
#
# Token Ring devices
#
#
# Wireless LAN (non-hamradio)
#
# CONFIG_NET_RADIO is not set
#
# PCMCIA network device support
#
# CONFIG_NET_PCMCIA is not set
#
# Wan interfaces
#
# CONFIG_WAN is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
# CONFIG_SHAPER is not set
# CONFIG_NETCONSOLE is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
#
# ISDN subsystem
#
# CONFIG_ISDN is not set
#
# Telephony Support
#
# CONFIG_PHONE is not set
#
# Input device support
#
# CONFIG_INPUT is not set
#
# Hardware I/O ports
#
# CONFIG_SERIO is not set
# CONFIG_GAMEPORT is not set
#
# Character devices
#
# CONFIG_VT is not set
# CONFIG_SERIAL_NONSTANDARD is not set
#
# Serial drivers
#
# CONFIG_SERIAL_8250 is not set
#
# Non-8250 serial port support
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_SERIAL_M32R_SIO=y
CONFIG_SERIAL_M32R_SIO_CONSOLE=y
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
#
# IPMI
#
# CONFIG_IPMI_HANDLER is not set
#
# Watchdog Cards
#
CONFIG_WATCHDOG=y
# CONFIG_WATCHDOG_NOWAYOUT is not set
#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=y
# CONFIG_RTC is not set
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
#
# Ftape, the floppy tape device driver
#
#
# PCMCIA character devices
#
# CONFIG_SYNCLINK_CS is not set
# CONFIG_RAW_DRIVER is not set
#
# TPM devices
#
#
# I2C support
#
# CONFIG_I2C is not set
#
# Dallas's 1-wire bus
#
# CONFIG_W1 is not set
#
# Hardware Monitoring support
#
# CONFIG_HWMON is not set
# CONFIG_HWMON_VID is not set
#
# Misc devices
#
#
# Multimedia Capabilities Port drivers
#
#
# Multimedia devices
#
# CONFIG_VIDEO_DEV is not set
#
# Digital Video Broadcasting Devices
#
# CONFIG_DVB is not set
#
# Graphics support
#
# CONFIG_FB is not set
#
# Sound
#
# CONFIG_SOUND is not set
#
# USB support
#
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
#
# MMC/SD Card support
#
# CONFIG_MMC is not set
#
# InfiniBand support
#
#
# SN Devices
#
#
# File systems
#
CONFIG_EXT2_FS=y
# CONFIG_EXT2_FS_XATTR is not set
# CONFIG_EXT2_FS_XIP is not set
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
# CONFIG_EXT3_FS_SECURITY is not set
CONFIG_JBD=y
# CONFIG_JBD_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_FS_POSIX_ACL=y
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
# CONFIG_INOTIFY is not set
# CONFIG_QUOTA is not set
CONFIG_DNOTIFY=y
# CONFIG_AUTOFS_FS is not set
# CONFIG_AUTOFS4_FS is not set
# CONFIG_FUSE_FS is not set
#
# CD-ROM/DVD Filesystems
#
# CONFIG_ISO9660_FS is not set
# CONFIG_UDF_FS is not set
#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=932
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
# CONFIG_NTFS_FS is not set
#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set
#
# Miscellaneous filesystems
#
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
CONFIG_CRAMFS=y
# CONFIG_VXFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
#
# Network File Systems
#
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
# CONFIG_NFS_V3_ACL is not set
# CONFIG_NFS_V4 is not set
# CONFIG_NFS_DIRECTIO is not set
# CONFIG_NFSD is not set
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# CONFIG_SMB_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
#
# Partition Types
#
# CONFIG_PARTITION_ADVANCED is not set
CONFIG_MSDOS_PARTITION=y
#
# Native Language Support
#
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="iso8859-1"
CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
CONFIG_NLS_CODEPAGE_932=y
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ASCII is not set
# CONFIG_NLS_ISO8859_1 is not set
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
CONFIG_NLS_UTF8=y
#
# Profiling support
#
# CONFIG_PROFILING is not set
#
# Kernel hacking
#
# CONFIG_PRINTK_TIME is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_BUGVERBOSE is not set
CONFIG_DEBUG_INFO=y
# CONFIG_DEBUG_FS is not set
# CONFIG_FRAME_POINTER is not set
# CONFIG_DEBUG_STACKOVERFLOW is not set
# CONFIG_DEBUG_STACK_USAGE is not set
#
# Security options
#
# CONFIG_KEYS is not set
# CONFIG_SECURITY is not set
#
# Cryptographic options
#
# CONFIG_CRYPTO is not set
#
# Hardware crypto devices
#
#
# Library routines
#
# CONFIG_CRC_CCITT is not set
# CONFIG_CRC16 is not set
CONFIG_CRC32=y
CONFIG_LIBCRC32C=y
CONFIG_ZLIB_INFLATE=y

View File

@ -1,7 +1,7 @@
/*
* linux/arch/m32r/mm/cache.c
*
* Copyright (C) 2002 Hirokazu Takata
* Copyright (C) 2002-2005 Hirokazu Takata, Hayato Fujiwara
*/
#include <linux/config.h>
@ -9,7 +9,8 @@
#undef MCCR
#if defined(CONFIG_CHIP_XNUX2) || defined(CONFIG_CHIP_M32700) || defined(CONFIG_CHIP_VDEC2) || defined(CONFIG_CHIP_OPSP)
#if defined(CONFIG_CHIP_XNUX2) || defined(CONFIG_CHIP_M32700) \
|| defined(CONFIG_CHIP_VDEC2) || defined(CONFIG_CHIP_OPSP)
/* Cache Control Register */
#define MCCR ((volatile unsigned long*)0xfffffffc)
#define MCCR_CC (1UL << 7) /* Cache mode modify bit */
@ -26,7 +27,17 @@
#define MCCR ((volatile unsigned char*)0xfffffffe)
#define MCCR_IIV (1UL << 0) /* I-cache invalidate */
#define MCCR_ICACHE_INV MCCR_IIV
#endif /* CONFIG_CHIP_XNUX2 || CONFIG_CHIP_M32700 */
#elif defined(CONFIG_CHIP_M32104)
#define MCCR ((volatile unsigned short*)0xfffffffe)
#define MCCR_IIV (1UL << 8) /* I-cache invalidate */
#define MCCR_DIV (1UL << 9) /* D-cache invalidate */
#define MCCR_DCB (1UL << 10) /* D-cache copy back */
#define MCCR_ICM (1UL << 0) /* I-cache mode [0:off,1:on] */
#define MCCR_DCM (1UL << 1) /* D-cache mode [0:off,1:on] */
#define MCCR_ICACHE_INV MCCR_IIV
#define MCCR_DCACHE_CB MCCR_DCB
#define MCCR_DCACHE_CBINV (MCCR_DIV|MCCR_DCB)
#endif
#ifndef MCCR
#error Unknown cache type.
@ -37,29 +48,42 @@
void _flush_cache_all(void)
{
#if defined(CONFIG_CHIP_M32102)
unsigned char mccr;
*MCCR = MCCR_ICACHE_INV;
#elif defined(CONFIG_CHIP_M32104)
unsigned short mccr;
/* Copyback and invalidate D-cache */
/* Invalidate I-cache */
*MCCR |= (MCCR_ICACHE_INV | MCCR_DCACHE_CBINV);
#else
unsigned long mccr;
/* Copyback and invalidate D-cache */
/* Invalidate I-cache */
*MCCR = MCCR_ICACHE_INV | MCCR_DCACHE_CBINV;
while ((mccr = *MCCR) & MCCR_IIV); /* loop while invalidating... */
#endif
while ((mccr = *MCCR) & MCCR_IIV); /* loop while invalidating... */
}
/* Copy back D-cache and invalidate I-cache all */
void _flush_cache_copyback_all(void)
{
#if defined(CONFIG_CHIP_M32102)
unsigned char mccr;
*MCCR = MCCR_ICACHE_INV;
#elif defined(CONFIG_CHIP_M32104)
unsigned short mccr;
/* Copyback and invalidate D-cache */
/* Invalidate I-cache */
*MCCR |= (MCCR_ICACHE_INV | MCCR_DCACHE_CB);
#else
unsigned long mccr;
/* Copyback D-cache */
/* Invalidate I-cache */
*MCCR = MCCR_ICACHE_INV | MCCR_DCACHE_CB;
while ((mccr = *MCCR) & MCCR_IIV); /* loop while invalidating... */
#endif
while ((mccr = *MCCR) & MCCR_IIV); /* loop while invalidating... */
}
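For the new M32104 paths above, the bit arithmetic works out as follows (derived purely from the defines added earlier in this hunk; shown only to make the OR masks concrete).
/* Derived from the M32104 defines in this hunk (illustrative only):
 *   MCCR_IIV = 1 << 8  = 0x100   I-cache invalidate
 *   MCCR_DIV = 1 << 9  = 0x200   D-cache invalidate
 *   MCCR_DCB = 1 << 10 = 0x400   D-cache copy back
 * _flush_cache_all()          ORs 0x100 | 0x200 | 0x400 = 0x700 into MCCR
 * _flush_cache_copyback_all() ORs 0x100 | 0x400         = 0x500 into MCCR
 */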

View File

@ -38,8 +38,6 @@ EXPORT_SYMBOL(strncmp);
EXPORT_SYMBOL(ip_fast_csum);
EXPORT_SYMBOL(mach_enable_irq);
EXPORT_SYMBOL(mach_disable_irq);
EXPORT_SYMBOL(kernel_thread);
/* Networking helper routines. */

View File

@ -65,8 +65,6 @@ void (*mach_kbd_leds) (unsigned int) = NULL;
/* machine dependent irq functions */
void (*mach_init_IRQ) (void) = NULL;
irqreturn_t (*(*mach_default_handler)[]) (int, void *, struct pt_regs *) = NULL;
void (*mach_enable_irq) (unsigned int) = NULL;
void (*mach_disable_irq) (unsigned int) = NULL;
int (*mach_get_irq_list) (struct seq_file *, void *) = NULL;
void (*mach_process_int) (int irq, struct pt_regs *fp) = NULL;
void (*mach_trap_init) (void);

View File

@ -190,6 +190,8 @@ boot-$(CONFIG_REDWOOD_5) += embed_config.o
boot-$(CONFIG_REDWOOD_6) += embed_config.o
boot-$(CONFIG_8xx) += embed_config.o
boot-$(CONFIG_8260) += embed_config.o
boot-$(CONFIG_EP405) += embed_config.o
boot-$(CONFIG_XILINX_ML300) += embed_config.o
boot-$(CONFIG_BSEIP) += iic.o
boot-$(CONFIG_MBX) += iic.o pci.o qspan_pci.o
boot-$(CONFIG_MV64X60) += misc-mv64x60.o

View File

@ -37,7 +37,6 @@
void default_idle(void)
{
void (*powersave)(void);
int cpu = smp_processor_id();
powersave = ppc_md.power_save;
@ -47,7 +46,8 @@ void default_idle(void)
#ifdef CONFIG_SMP
else {
set_thread_flag(TIF_POLLING_NRFLAG);
while (!need_resched() && !cpu_is_offline(cpu))
while (!need_resched() &&
!cpu_is_offline(smp_processor_id()))
barrier();
clear_thread_flag(TIF_POLLING_NRFLAG);
}

View File

@ -58,7 +58,6 @@ static struct ocp_func_emac_data ibm440gx_emac2_def = {
.wol_irq = 65, /* WOL interrupt number */
.mdio_idx = -1, /* No shared MDIO */
.tah_idx = 0, /* TAH device index */
.jumbo = 1, /* Jumbo frames supported */
};
static struct ocp_func_emac_data ibm440gx_emac3_def = {
@ -72,7 +71,6 @@ static struct ocp_func_emac_data ibm440gx_emac3_def = {
.wol_irq = 67, /* WOL interrupt number */
.mdio_idx = -1, /* No shared MDIO */
.tah_idx = 1, /* TAH device index */
.jumbo = 1, /* Jumbo frames supported */
};
OCP_SYSFS_EMAC_DATA()

View File

@ -31,7 +31,6 @@ static struct ocp_func_emac_data ibm440sp_emac0_def = {
.wol_irq = 61, /* WOL interrupt number */
.mdio_idx = -1, /* No shared MDIO */
.tah_idx = -1, /* No TAH */
.jumbo = 1, /* Jumbo frames supported */
};
OCP_SYSFS_EMAC_DATA()

View File

@ -196,8 +196,10 @@ platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
mpc52xx_set_bat();
/* No ISA bus by default */
#ifdef CONFIG_PCI
isa_io_base = 0;
isa_mem_base = 0;
#endif
/* Powersave */
/* This is provided as an example on how to do it. But you

View File

@ -1,53 +0,0 @@
/*
* arch/ppc/platforms/mpc5200.c
*
* OCP Definitions for the boards based on MPC5200 processor. Contains
* definitions for every common peripherals. (Mostly all but PSCs)
*
* Maintainer : Sylvain Munaut <tnt@246tNt.com>
*
* Copyright 2004 Sylvain Munaut <tnt@246tNt.com>
*
* This file is licensed under the terms of the GNU General Public License
* version 2. This program is licensed "as is" without any warranty of any
* kind, whether express or implied.
*/
#include <asm/ocp.h>
#include <asm/mpc52xx.h>
static struct ocp_fs_i2c_data mpc5200_i2c_def = {
.flags = FS_I2C_CLOCK_5200,
};
/* Here is the core_ocp struct.
* With all the devices common to all board. Even if port multiplexing is
* not setup for them (if the user don't want them, just don't select the
* config option). The potentially conflicting devices (like PSCs) goes in
* board specific file.
*/
struct ocp_def core_ocp[] = {
{
.vendor = OCP_VENDOR_FREESCALE,
.function = OCP_FUNC_IIC,
.index = 0,
.paddr = MPC52xx_I2C1,
.irq = OCP_IRQ_NA, /* MPC52xx_IRQ_I2C1 - Buggy */
.pm = OCP_CPM_NA,
.additions = &mpc5200_i2c_def,
},
{
.vendor = OCP_VENDOR_FREESCALE,
.function = OCP_FUNC_IIC,
.index = 1,
.paddr = MPC52xx_I2C2,
.irq = OCP_IRQ_NA, /* MPC52xx_IRQ_I2C2 - Buggy */
.pm = OCP_CPM_NA,
.additions = &mpc5200_i2c_def,
},
{ /* Terminating entry */
.vendor = OCP_VENDOR_INVALID
}
};

View File

@ -24,6 +24,12 @@
#include <asm/machdep.h>
/* This macro is defined to activate the workaround for the bug
435 of the MPC5200 (L25R). With it activated, we don't do any
32-bit configuration access during type-1 cycles */

#define MPC5200_BUG_435_WORKAROUND
static int
mpc52xx_pci_read_config(struct pci_bus *bus, unsigned int devfn,
int offset, int len, u32 *val)
@ -40,17 +46,39 @@ mpc52xx_pci_read_config(struct pci_bus *bus, unsigned int devfn,
((bus->number - hose->bus_offset) << 16) |
(devfn << 8) |
(offset & 0xfc));
mb();
value = in_le32(hose->cfg_data);
#ifdef MPC5200_BUG_435_WORKAROUND
if (bus->number != hose->bus_offset) {
switch (len) {
case 1:
value = in_8(((u8 __iomem *)hose->cfg_data) + (offset & 3));
break;
case 2:
value = in_le16(((u16 __iomem *)hose->cfg_data) + ((offset>>1) & 1));
break;
if (len != 4) {
value >>= ((offset & 0x3) << 3);
value &= 0xffffffff >> (32 - (len << 3));
default:
value = in_le16((u16 __iomem *)hose->cfg_data) |
(in_le16(((u16 __iomem *)hose->cfg_data) + 1) << 16);
break;
}
}
else
#endif
{
value = in_le32(hose->cfg_data);
if (len != 4) {
value >>= ((offset & 0x3) << 3);
value &= 0xffffffff >> (32 - (len << 3));
}
}
*val = value;
out_be32(hose->cfg_addr, 0);
mb();
return PCIBIOS_SUCCESSFUL;
}
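The default case of the bug-435 workaround above rebuilds a 32-bit config dword from two 16-bit accesses; a minimal standalone sketch of that recombination is shown here (hypothetical helper, not part of the patch).
/* Hypothetical helper mirroring the workaround's default case: read a
 * 32-bit config dword as two little-endian 16-bit halves. */
static u32 mpc52xx_cfg_read_dword_split(u16 __iomem *cfg_data)
{
	u32 lo = in_le16(cfg_data);	/* bytes 0-1 */
	u32 hi = in_le16(cfg_data + 1);	/* bytes 2-3 */
	return lo | (hi << 16);
}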
@ -71,21 +99,48 @@ mpc52xx_pci_write_config(struct pci_bus *bus, unsigned int devfn,
((bus->number - hose->bus_offset) << 16) |
(devfn << 8) |
(offset & 0xfc));
mb();
if (len != 4) {
value = in_le32(hose->cfg_data);
#ifdef MPC5200_BUG_435_WORKAROUND
if (bus->number != hose->bus_offset) {
switch (len) {
case 1:
out_8(((u8 __iomem *)hose->cfg_data) +
(offset & 3), val);
break;
case 2:
out_le16(((u16 __iomem *)hose->cfg_data) +
((offset>>1) & 1), val);
break;
offset = (offset & 0x3) << 3;
mask = (0xffffffff >> (32 - (len << 3)));
mask <<= offset;
value &= ~mask;
val = value | ((val << offset) & mask);
default:
out_le16((u16 __iomem *)hose->cfg_data,
(u16)val);
out_le16(((u16 __iomem *)hose->cfg_data) + 1,
(u16)(val>>16));
break;
}
}
else
#endif
{
if (len != 4) {
value = in_le32(hose->cfg_data);
out_le32(hose->cfg_data, val);
offset = (offset & 0x3) << 3;
mask = (0xffffffff >> (32 - (len << 3)));
mask <<= offset;
value &= ~mask;
val = value | ((val << offset) & mask);
}
out_le32(hose->cfg_data, val);
}
mb();
out_be32(hose->cfg_addr, 0);
mb();
return PCIBIOS_SUCCESSFUL;
}
@ -99,9 +154,12 @@ static struct pci_ops mpc52xx_pci_ops = {
static void __init
mpc52xx_pci_setup(struct mpc52xx_pci __iomem *pci_regs)
{
u32 tmp;
/* Setup control regs */
/* Nothing to do afaik */
tmp = in_be32(&pci_regs->scr);
tmp |= PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY;
out_be32(&pci_regs->scr, tmp);
/* Setup windows */
out_be32(&pci_regs->iw0btar, MPC52xx_PCI_IWBTAR_TRANSLATION(
@ -142,16 +200,15 @@ mpc52xx_pci_setup(struct mpc52xx_pci __iomem *pci_regs)
/* Not necessary and can be a bad thing if for example the bootloader
is displaying a splash screen or ... Just left here for
documentation purpose if anyone need it */
#if 0
u32 tmp;
tmp = in_be32(&pci_regs->gscr);
#if 0
out_be32(&pci_regs->gscr, tmp | MPC52xx_PCI_GSCR_PR);
udelay(50);
out_be32(&pci_regs->gscr, tmp);
#endif
out_be32(&pci_regs->gscr, tmp & ~MPC52xx_PCI_GSCR_PR);
}
static void __init
static void
mpc52xx_pci_fixup_resources(struct pci_dev *dev)
{
int i;

View File

@ -84,9 +84,11 @@ mpc52xx_set_bat(void)
void __init
mpc52xx_map_io(void)
{
/* Here we only map the MBAR */
/* Here we map the MBAR and the whole upper zone. MBAR is only
64k but we can't map only 64k with BATs. Mapping the whole
0xf0000000 range is OK and helps eventual LPB devices placed there */
io_block_mapping(
MPC52xx_MBAR_VIRT, MPC52xx_MBAR, MPC52xx_MBAR_SIZE, _PAGE_IO);
MPC52xx_MBAR_VIRT, MPC52xx_MBAR, 0x10000000, _PAGE_IO);
}

View File

@ -23,14 +23,14 @@ config GENERIC_BUST_SPINLOCK
mainmenu "Linux Kernel Configuration"
config ARCH_S390
config S390
bool
default y
config UID16
bool
default y
depends on ARCH_S390X = 'n'
depends on !64BIT
source "init/Kconfig"
@ -38,20 +38,12 @@ menu "Base setup"
comment "Processor type and features"
config ARCH_S390X
config 64BIT
bool "64 bit kernel"
help
Select this option if you have a 64 bit IBM zSeries machine
and want to use the 64 bit addressing mode.
config 64BIT
def_bool ARCH_S390X
config ARCH_S390_31
bool
depends on ARCH_S390X = 'n'
default y
config SMP
bool "Symmetric multi-processing support"
---help---
@ -101,20 +93,15 @@ config MATHEMU
on older S/390 machines. Say Y unless you know your machine doesn't
need this.
config S390_SUPPORT
config COMPAT
bool "Kernel support for 31 bit emulation"
depends on ARCH_S390X
depends on 64BIT
help
Select this option if you want to enable your system kernel to
handle system-calls from ELF binaries for 31 bit ESA. This option
(and some other stuff like libraries and such) is needed for
executing 31 bit applications. It is safe to say "Y".
config COMPAT
bool
depends on S390_SUPPORT
default y
config SYSVIPC_COMPAT
bool
depends on COMPAT && SYSVIPC
@ -122,7 +109,7 @@ config SYSVIPC_COMPAT
config BINFMT_ELF32
tristate "Kernel support for 31 bit ELF binaries"
depends on S390_SUPPORT
depends on COMPAT
help
This allows you to run 32-bit Linux/ELF binaries on your zSeries
in 64 bit mode. Everybody wants this; say Y.
@ -135,7 +122,7 @@ choice
config MARCH_G5
bool "S/390 model G5 and G6"
depends on ARCH_S390_31
depends on !64BIT
help
Select this to build a 31 bit kernel that works
on all S/390 and zSeries machines.
@ -240,8 +227,8 @@ config MACHCHK_WARNING
config QDIO
tristate "QDIO support"
---help---
This driver provides the Queued Direct I/O base support for the
IBM S/390 (G5 and G6) and eServer zSeries (z800, z890, z900 and z990).
This driver provides the Queued Direct I/O base support for
IBM mainframes.
For details please refer to the documentation provided by IBM at
<http://www10.software.ibm.com/developerworks/opensource/linux390>
@ -263,7 +250,8 @@ config QDIO_DEBUG
bool "Extended debugging information"
depends on QDIO
help
Say Y here to get extended debugging output in /proc/s390dbf/qdio...
Say Y here to get extended debugging output in
/sys/kernel/debug/s390dbf/qdio...
Warning: this option reduces the performance of the QDIO module.
If unsure, say N.

View File

@ -13,16 +13,14 @@
# Copyright (C) 1994 by Linus Torvalds
#
ifdef CONFIG_ARCH_S390_31
ifndef CONFIG_64BIT
LDFLAGS := -m elf_s390
CFLAGS += -m31
AFLAGS += -m31
UTS_MACHINE := s390
STACK_SIZE := 8192
CHECKFLAGS += -D__s390__
endif
ifdef CONFIG_ARCH_S390X
else
LDFLAGS := -m elf64_s390
MODFLAGS += -fpic -D__PIC__
CFLAGS += -m64

View File

@ -40,7 +40,7 @@
#define TOD_MICRO 0x01000 /* nr. of TOD clock units
for 1 microsecond */
#ifndef CONFIG_ARCH_S390X
#ifndef CONFIG_64BIT
#define APPLDATA_START_INTERVAL_REC 0x00 /* Function codes for */
#define APPLDATA_STOP_REC 0x01 /* DIAG 0xDC */
@ -54,13 +54,13 @@
#define APPLDATA_GEN_EVENT_RECORD 0x82
#define APPLDATA_START_CONFIG_REC 0x83
#endif /* CONFIG_ARCH_S390X */
#endif /* CONFIG_64BIT */
/*
* Parameter list for DIAGNOSE X'DC'
*/
#ifndef CONFIG_ARCH_S390X
#ifndef CONFIG_64BIT
struct appldata_parameter_list {
u16 diag; /* The DIAGNOSE code X'00DC' */
u8 function; /* The function code for the DIAGNOSE */
@ -82,7 +82,7 @@ struct appldata_parameter_list {
u64 product_id_addr;
u64 buffer_addr;
};
#endif /* CONFIG_ARCH_S390X */
#endif /* CONFIG_64BIT */
/*
* /proc entries (sysctl)

View File

@ -141,19 +141,19 @@ static void appldata_get_os_data(void *data)
j = 0;
for_each_online_cpu(i) {
os_data->os_cpu[j].per_cpu_user =
kstat_cpu(i).cpustat.user;
cputime_to_jiffies(kstat_cpu(i).cpustat.user);
os_data->os_cpu[j].per_cpu_nice =
kstat_cpu(i).cpustat.nice;
cputime_to_jiffies(kstat_cpu(i).cpustat.nice);
os_data->os_cpu[j].per_cpu_system =
kstat_cpu(i).cpustat.system;
cputime_to_jiffies(kstat_cpu(i).cpustat.system);
os_data->os_cpu[j].per_cpu_idle =
kstat_cpu(i).cpustat.idle;
cputime_to_jiffies(kstat_cpu(i).cpustat.idle);
os_data->os_cpu[j].per_cpu_irq =
kstat_cpu(i).cpustat.irq;
cputime_to_jiffies(kstat_cpu(i).cpustat.irq);
os_data->os_cpu[j].per_cpu_softirq =
kstat_cpu(i).cpustat.softirq;
cputime_to_jiffies(kstat_cpu(i).cpustat.softirq);
os_data->os_cpu[j].per_cpu_iowait =
kstat_cpu(i).cpustat.iowait;
cputime_to_jiffies(kstat_cpu(i).cpustat.iowait);
j++;
}

View File

@ -2,7 +2,9 @@
# Cryptographic API
#
obj-$(CONFIG_CRYPTO_SHA1_Z990) += sha1_z990.o
obj-$(CONFIG_CRYPTO_DES_Z990) += des_z990.o des_check_key.o
obj-$(CONFIG_CRYPTO_SHA1_S390) += sha1_s390.o
obj-$(CONFIG_CRYPTO_SHA256_S390) += sha256_s390.o
obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o des_check_key.o
obj-$(CONFIG_CRYPTO_AES_S390) += aes_s390.o
obj-$(CONFIG_CRYPTO_TEST) += crypt_z990_query.o
obj-$(CONFIG_CRYPTO_TEST) += crypt_s390_query.o

248
arch/s390/crypto/aes_s390.c Normal file
View File

@ -0,0 +1,248 @@
/*
* Cryptographic API.
*
* s390 implementation of the AES Cipher Algorithm.
*
* s390 Version:
* Copyright (C) 2005 IBM Deutschland GmbH, IBM Corporation
* Author(s): Jan Glauber (jang@de.ibm.com)
*
* Derived from "crypto/aes.c"
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/crypto.h>
#include "crypt_s390.h"
#define AES_MIN_KEY_SIZE 16
#define AES_MAX_KEY_SIZE 32
/* data block size for all key lengths */
#define AES_BLOCK_SIZE 16
int has_aes_128 = 0;
int has_aes_192 = 0;
int has_aes_256 = 0;
struct s390_aes_ctx {
u8 iv[AES_BLOCK_SIZE];
u8 key[AES_MAX_KEY_SIZE];
int key_len;
};
static int aes_set_key(void *ctx, const u8 *in_key, unsigned int key_len,
u32 *flags)
{
struct s390_aes_ctx *sctx = ctx;
switch (key_len) {
case 16:
if (!has_aes_128)
goto fail;
break;
case 24:
if (!has_aes_192)
goto fail;
break;
case 32:
if (!has_aes_256)
goto fail;
break;
default:
/* invalid key length */
goto fail;
break;
}
sctx->key_len = key_len;
memcpy(sctx->key, in_key, key_len);
return 0;
fail:
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
static void aes_encrypt(void *ctx, u8 *out, const u8 *in)
{
const struct s390_aes_ctx *sctx = ctx;
switch (sctx->key_len) {
case 16:
crypt_s390_km(KM_AES_128_ENCRYPT, &sctx->key, out, in,
AES_BLOCK_SIZE);
break;
case 24:
crypt_s390_km(KM_AES_192_ENCRYPT, &sctx->key, out, in,
AES_BLOCK_SIZE);
break;
case 32:
crypt_s390_km(KM_AES_256_ENCRYPT, &sctx->key, out, in,
AES_BLOCK_SIZE);
break;
}
}
static void aes_decrypt(void *ctx, u8 *out, const u8 *in)
{
const struct s390_aes_ctx *sctx = ctx;
switch (sctx->key_len) {
case 16:
crypt_s390_km(KM_AES_128_DECRYPT, &sctx->key, out, in,
AES_BLOCK_SIZE);
break;
case 24:
crypt_s390_km(KM_AES_192_DECRYPT, &sctx->key, out, in,
AES_BLOCK_SIZE);
break;
case 32:
crypt_s390_km(KM_AES_256_DECRYPT, &sctx->key, out, in,
AES_BLOCK_SIZE);
break;
}
}
static unsigned int aes_encrypt_ecb(const struct cipher_desc *desc, u8 *out,
const u8 *in, unsigned int nbytes)
{
struct s390_aes_ctx *sctx = crypto_tfm_ctx(desc->tfm);
switch (sctx->key_len) {
case 16:
crypt_s390_km(KM_AES_128_ENCRYPT, &sctx->key, out, in, nbytes);
break;
case 24:
crypt_s390_km(KM_AES_192_ENCRYPT, &sctx->key, out, in, nbytes);
break;
case 32:
crypt_s390_km(KM_AES_256_ENCRYPT, &sctx->key, out, in, nbytes);
break;
}
return nbytes & ~(AES_BLOCK_SIZE - 1);
}
static unsigned int aes_decrypt_ecb(const struct cipher_desc *desc, u8 *out,
const u8 *in, unsigned int nbytes)
{
struct s390_aes_ctx *sctx = crypto_tfm_ctx(desc->tfm);
switch (sctx->key_len) {
case 16:
crypt_s390_km(KM_AES_128_DECRYPT, &sctx->key, out, in, nbytes);
break;
case 24:
crypt_s390_km(KM_AES_192_DECRYPT, &sctx->key, out, in, nbytes);
break;
case 32:
crypt_s390_km(KM_AES_256_DECRYPT, &sctx->key, out, in, nbytes);
break;
}
return nbytes & ~(AES_BLOCK_SIZE - 1);
}
static unsigned int aes_encrypt_cbc(const struct cipher_desc *desc, u8 *out,
const u8 *in, unsigned int nbytes)
{
struct s390_aes_ctx *sctx = crypto_tfm_ctx(desc->tfm);
memcpy(&sctx->iv, desc->info, AES_BLOCK_SIZE);
switch (sctx->key_len) {
case 16:
crypt_s390_kmc(KMC_AES_128_ENCRYPT, &sctx->iv, out, in, nbytes);
break;
case 24:
crypt_s390_kmc(KMC_AES_192_ENCRYPT, &sctx->iv, out, in, nbytes);
break;
case 32:
crypt_s390_kmc(KMC_AES_256_ENCRYPT, &sctx->iv, out, in, nbytes);
break;
}
memcpy(desc->info, &sctx->iv, AES_BLOCK_SIZE);
return nbytes & ~(AES_BLOCK_SIZE - 1);
}
static unsigned int aes_decrypt_cbc(const struct cipher_desc *desc, u8 *out,
const u8 *in, unsigned int nbytes)
{
struct s390_aes_ctx *sctx = crypto_tfm_ctx(desc->tfm);
memcpy(&sctx->iv, desc->info, AES_BLOCK_SIZE);
switch (sctx->key_len) {
case 16:
crypt_s390_kmc(KMC_AES_128_DECRYPT, &sctx->iv, out, in, nbytes);
break;
case 24:
crypt_s390_kmc(KMC_AES_192_DECRYPT, &sctx->iv, out, in, nbytes);
break;
case 32:
crypt_s390_kmc(KMC_AES_256_DECRYPT, &sctx->iv, out, in, nbytes);
break;
}
return nbytes & ~(AES_BLOCK_SIZE - 1);
}
static struct crypto_alg aes_alg = {
.cra_name = "aes",
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_aes_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(aes_alg.cra_list),
.cra_u = {
.cipher = {
.cia_min_keysize = AES_MIN_KEY_SIZE,
.cia_max_keysize = AES_MAX_KEY_SIZE,
.cia_setkey = aes_set_key,
.cia_encrypt = aes_encrypt,
.cia_decrypt = aes_decrypt,
.cia_encrypt_ecb = aes_encrypt_ecb,
.cia_decrypt_ecb = aes_decrypt_ecb,
.cia_encrypt_cbc = aes_encrypt_cbc,
.cia_decrypt_cbc = aes_decrypt_cbc,
}
}
};
static int __init aes_init(void)
{
int ret;
if (crypt_s390_func_available(KM_AES_128_ENCRYPT))
has_aes_128 = 1;
if (crypt_s390_func_available(KM_AES_192_ENCRYPT))
has_aes_192 = 1;
if (crypt_s390_func_available(KM_AES_256_ENCRYPT))
has_aes_256 = 1;
if (!has_aes_128 && !has_aes_192 && !has_aes_256)
return -ENOSYS;
ret = crypto_register_alg(&aes_alg);
if (ret != 0)
printk(KERN_INFO "crypt_s390: aes_s390 couldn't be loaded.\n");
return ret;
}
static void __exit aes_fini(void)
{
crypto_unregister_alg(&aes_alg);
}
module_init(aes_init);
module_exit(aes_fini);
MODULE_ALIAS("aes");
MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm");
MODULE_LICENSE("GPL");

View File

@ -1,7 +1,7 @@
/*
* Cryptographic API.
*
* Support for z990 cryptographic instructions.
* Support for s390 cryptographic instructions.
*
* Copyright (C) 2003 IBM Deutschland GmbH, IBM Corporation
* Author(s): Thomas Spatzier (tspat@de.ibm.com)
@ -12,84 +12,108 @@
* any later version.
*
*/
#ifndef _CRYPTO_ARCH_S390_CRYPT_Z990_H
#define _CRYPTO_ARCH_S390_CRYPT_Z990_H
#ifndef _CRYPTO_ARCH_S390_CRYPT_S390_H
#define _CRYPTO_ARCH_S390_CRYPT_S390_H
#include <asm/errno.h>
#define CRYPT_Z990_OP_MASK 0xFF00
#define CRYPT_Z990_FUNC_MASK 0x00FF
#define CRYPT_S390_OP_MASK 0xFF00
#define CRYPT_S390_FUNC_MASK 0x00FF
/*z990 cryptographic operations*/
enum crypt_z990_operations {
CRYPT_Z990_KM = 0x0100,
CRYPT_Z990_KMC = 0x0200,
CRYPT_Z990_KIMD = 0x0300,
CRYPT_Z990_KLMD = 0x0400,
CRYPT_Z990_KMAC = 0x0500
/* s390 cryptographic operations */
enum crypt_s390_operations {
CRYPT_S390_KM = 0x0100,
CRYPT_S390_KMC = 0x0200,
CRYPT_S390_KIMD = 0x0300,
CRYPT_S390_KLMD = 0x0400,
CRYPT_S390_KMAC = 0x0500
};
/*function codes for KM (CIPHER MESSAGE) instruction*/
enum crypt_z990_km_func {
KM_QUERY = CRYPT_Z990_KM | 0,
KM_DEA_ENCRYPT = CRYPT_Z990_KM | 1,
KM_DEA_DECRYPT = CRYPT_Z990_KM | 1 | 0x80, //modifier bit->decipher
KM_TDEA_128_ENCRYPT = CRYPT_Z990_KM | 2,
KM_TDEA_128_DECRYPT = CRYPT_Z990_KM | 2 | 0x80,
KM_TDEA_192_ENCRYPT = CRYPT_Z990_KM | 3,
KM_TDEA_192_DECRYPT = CRYPT_Z990_KM | 3 | 0x80,
/* function codes for KM (CIPHER MESSAGE) instruction
* 0x80 is the decipher modifier bit
*/
enum crypt_s390_km_func {
KM_QUERY = CRYPT_S390_KM | 0x0,
KM_DEA_ENCRYPT = CRYPT_S390_KM | 0x1,
KM_DEA_DECRYPT = CRYPT_S390_KM | 0x1 | 0x80,
KM_TDEA_128_ENCRYPT = CRYPT_S390_KM | 0x2,
KM_TDEA_128_DECRYPT = CRYPT_S390_KM | 0x2 | 0x80,
KM_TDEA_192_ENCRYPT = CRYPT_S390_KM | 0x3,
KM_TDEA_192_DECRYPT = CRYPT_S390_KM | 0x3 | 0x80,
KM_AES_128_ENCRYPT = CRYPT_S390_KM | 0x12,
KM_AES_128_DECRYPT = CRYPT_S390_KM | 0x12 | 0x80,
KM_AES_192_ENCRYPT = CRYPT_S390_KM | 0x13,
KM_AES_192_DECRYPT = CRYPT_S390_KM | 0x13 | 0x80,
KM_AES_256_ENCRYPT = CRYPT_S390_KM | 0x14,
KM_AES_256_DECRYPT = CRYPT_S390_KM | 0x14 | 0x80,
};
/*function codes for KMC (CIPHER MESSAGE WITH CHAINING) instruction*/
enum crypt_z990_kmc_func {
KMC_QUERY = CRYPT_Z990_KMC | 0,
KMC_DEA_ENCRYPT = CRYPT_Z990_KMC | 1,
KMC_DEA_DECRYPT = CRYPT_Z990_KMC | 1 | 0x80, //modifier bit->decipher
KMC_TDEA_128_ENCRYPT = CRYPT_Z990_KMC | 2,
KMC_TDEA_128_DECRYPT = CRYPT_Z990_KMC | 2 | 0x80,
KMC_TDEA_192_ENCRYPT = CRYPT_Z990_KMC | 3,
KMC_TDEA_192_DECRYPT = CRYPT_Z990_KMC | 3 | 0x80,
/* function codes for KMC (CIPHER MESSAGE WITH CHAINING)
* instruction
*/
enum crypt_s390_kmc_func {
KMC_QUERY = CRYPT_S390_KMC | 0x0,
KMC_DEA_ENCRYPT = CRYPT_S390_KMC | 0x1,
KMC_DEA_DECRYPT = CRYPT_S390_KMC | 0x1 | 0x80,
KMC_TDEA_128_ENCRYPT = CRYPT_S390_KMC | 0x2,
KMC_TDEA_128_DECRYPT = CRYPT_S390_KMC | 0x2 | 0x80,
KMC_TDEA_192_ENCRYPT = CRYPT_S390_KMC | 0x3,
KMC_TDEA_192_DECRYPT = CRYPT_S390_KMC | 0x3 | 0x80,
KMC_AES_128_ENCRYPT = CRYPT_S390_KMC | 0x12,
KMC_AES_128_DECRYPT = CRYPT_S390_KMC | 0x12 | 0x80,
KMC_AES_192_ENCRYPT = CRYPT_S390_KMC | 0x13,
KMC_AES_192_DECRYPT = CRYPT_S390_KMC | 0x13 | 0x80,
KMC_AES_256_ENCRYPT = CRYPT_S390_KMC | 0x14,
KMC_AES_256_DECRYPT = CRYPT_S390_KMC | 0x14 | 0x80,
};
/*function codes for KIMD (COMPUTE INTERMEDIATE MESSAGE DIGEST) instruction*/
enum crypt_z990_kimd_func {
KIMD_QUERY = CRYPT_Z990_KIMD | 0,
KIMD_SHA_1 = CRYPT_Z990_KIMD | 1,
/* function codes for KIMD (COMPUTE INTERMEDIATE MESSAGE DIGEST)
* instruction
*/
enum crypt_s390_kimd_func {
KIMD_QUERY = CRYPT_S390_KIMD | 0,
KIMD_SHA_1 = CRYPT_S390_KIMD | 1,
KIMD_SHA_256 = CRYPT_S390_KIMD | 2,
};
/*function codes for KLMD (COMPUTE LAST MESSAGE DIGEST) instruction*/
enum crypt_z990_klmd_func {
KLMD_QUERY = CRYPT_Z990_KLMD | 0,
KLMD_SHA_1 = CRYPT_Z990_KLMD | 1,
/* function codes for KLMD (COMPUTE LAST MESSAGE DIGEST)
* instruction
*/
enum crypt_s390_klmd_func {
KLMD_QUERY = CRYPT_S390_KLMD | 0,
KLMD_SHA_1 = CRYPT_S390_KLMD | 1,
KLMD_SHA_256 = CRYPT_S390_KLMD | 2,
};
/*function codes for KMAC (COMPUTE MESSAGE AUTHENTICATION CODE) instruction*/
enum crypt_z990_kmac_func {
KMAC_QUERY = CRYPT_Z990_KMAC | 0,
KMAC_DEA = CRYPT_Z990_KMAC | 1,
KMAC_TDEA_128 = CRYPT_Z990_KMAC | 2,
KMAC_TDEA_192 = CRYPT_Z990_KMAC | 3
/* function codes for KMAC (COMPUTE MESSAGE AUTHENTICATION CODE)
* instruction
*/
enum crypt_s390_kmac_func {
KMAC_QUERY = CRYPT_S390_KMAC | 0,
KMAC_DEA = CRYPT_S390_KMAC | 1,
KMAC_TDEA_128 = CRYPT_S390_KMAC | 2,
KMAC_TDEA_192 = CRYPT_S390_KMAC | 3
};
/*status word for z990 crypto instructions' QUERY functions*/
struct crypt_z990_query_status {
/* status word for s390 crypto instructions' QUERY functions */
struct crypt_s390_query_status {
u64 high;
u64 low;
};
/*
* Standard fixup and ex_table sections for crypt_z990 inline functions.
* label 0: the z990 crypto operation
* label 1: just after 1 to catch illegal operation exception on non-z990
* Standard fixup and ex_table sections for crypt_s390 inline functions.
* label 0: the s390 crypto operation
* label 1: just after 1 to catch illegal operation exception
* (unsupported model)
* label 6: the return point after fixup
* label 7: set error value if exception _in_ crypto operation
* label 8: set error value if illegal operation exception
* [ret] is the variable to receive the error code
* [ERR] is the error code value
*/
#ifndef __s390x__
#define __crypt_z990_fixup \
#ifndef CONFIG_64BIT
#define __crypt_s390_fixup \
".section .fixup,\"ax\" \n" \
"7: lhi %0,%h[e1] \n" \
" bras 1,9f \n" \
@ -105,8 +129,8 @@ struct crypt_z990_query_status {
" .long 0b,7b \n" \
" .long 1b,8b \n" \
".previous"
#else /* __s390x__ */
#define __crypt_z990_fixup \
#else /* CONFIG_64BIT */
#define __crypt_s390_fixup \
".section .fixup,\"ax\" \n" \
"7: lhi %0,%h[e1] \n" \
" jg 6b \n" \
@ -118,25 +142,25 @@ struct crypt_z990_query_status {
" .quad 0b,7b \n" \
" .quad 1b,8b \n" \
".previous"
#endif /* __s390x__ */
#endif /* CONFIG_64BIT */
/*
* Standard code for setting the result of z990 crypto instructions.
* Standard code for setting the result of s390 crypto instructions.
* %0: the register which will receive the result
* [result]: the register containing the result (e.g. second operand length
* to compute number of processed bytes].
*/
#ifndef __s390x__
#define __crypt_z990_set_result \
#ifndef CONFIG_64BIT
#define __crypt_s390_set_result \
" lr %0,%[result] \n"
#else /* __s390x__ */
#define __crypt_z990_set_result \
#else /* CONFIG_64BIT */
#define __crypt_s390_set_result \
" lgr %0,%[result] \n"
#endif
/*
* Executes the KM (CIPHER MESSAGE) operation of the z990 CPU.
* @param func: the function code passed to KM; see crypt_z990_km_func
* Executes the KM (CIPHER MESSAGE) operation of the CPU.
* @param func: the function code passed to KM; see crypt_s390_km_func
* @param param: address of parameter block; see POP for details on each func
* @param dest: address of destination memory area
* @param src: address of source memory area
@ -145,9 +169,9 @@ struct crypt_z990_query_status {
* for encryption/decryption funcs
*/
static inline int
crypt_z990_km(long func, void* param, u8* dest, const u8* src, long src_len)
crypt_s390_km(long func, void* param, u8* dest, const u8* src, long src_len)
{
register long __func asm("0") = func & CRYPT_Z990_FUNC_MASK;
register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
register void* __param asm("1") = param;
register u8* __dest asm("4") = dest;
register const u8* __src asm("2") = src;
@ -156,26 +180,26 @@ crypt_z990_km(long func, void* param, u8* dest, const u8* src, long src_len)
ret = 0;
__asm__ __volatile__ (
"0: .insn rre,0xB92E0000,%1,%2 \n" //KM opcode
"1: brc 1,0b \n" //handle partial completion
__crypt_z990_set_result
"0: .insn rre,0xB92E0000,%1,%2 \n" /* KM opcode */
"1: brc 1,0b \n" /* handle partial completion */
__crypt_s390_set_result
"6: \n"
__crypt_z990_fixup
__crypt_s390_fixup
: "+d" (ret), "+a" (__dest), "+a" (__src),
[result] "+d" (__src_len)
: [e1] "K" (-EFAULT), [e2] "K" (-ENOSYS), "d" (__func),
"a" (__param)
: "cc", "memory"
);
if (ret >= 0 && func & CRYPT_Z990_FUNC_MASK){
if (ret >= 0 && func & CRYPT_S390_FUNC_MASK){
ret = src_len - ret;
}
return ret;
}
/*
* Executes the KMC (CIPHER MESSAGE WITH CHAINING) operation of the z990 CPU.
* @param func: the function code passed to KM; see crypt_z990_kmc_func
* Executes the KMC (CIPHER MESSAGE WITH CHAINING) operation of the CPU.
* @param func: the function code passed to KM; see crypt_s390_kmc_func
* @param param: address of parameter block; see POP for details on each func
* @param dest: address of destination memory area
* @param src: address of source memory area
@ -184,9 +208,9 @@ crypt_z990_km(long func, void* param, u8* dest, const u8* src, long src_len)
* for encryption/decryption funcs
*/
static inline int
crypt_z990_kmc(long func, void* param, u8* dest, const u8* src, long src_len)
crypt_s390_kmc(long func, void* param, u8* dest, const u8* src, long src_len)
{
register long __func asm("0") = func & CRYPT_Z990_FUNC_MASK;
register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
register void* __param asm("1") = param;
register u8* __dest asm("4") = dest;
register const u8* __src asm("2") = src;
@ -195,18 +219,18 @@ crypt_z990_kmc(long func, void* param, u8* dest, const u8* src, long src_len)
ret = 0;
__asm__ __volatile__ (
"0: .insn rre,0xB92F0000,%1,%2 \n" //KMC opcode
"1: brc 1,0b \n" //handle partial completion
__crypt_z990_set_result
"0: .insn rre,0xB92F0000,%1,%2 \n" /* KMC opcode */
"1: brc 1,0b \n" /* handle partial completion */
__crypt_s390_set_result
"6: \n"
__crypt_z990_fixup
__crypt_s390_fixup
: "+d" (ret), "+a" (__dest), "+a" (__src),
[result] "+d" (__src_len)
: [e1] "K" (-EFAULT), [e2] "K" (-ENOSYS), "d" (__func),
"a" (__param)
: "cc", "memory"
);
if (ret >= 0 && func & CRYPT_Z990_FUNC_MASK){
if (ret >= 0 && func & CRYPT_S390_FUNC_MASK){
ret = src_len - ret;
}
return ret;
@ -214,8 +238,8 @@ crypt_z990_kmc(long func, void* param, u8* dest, const u8* src, long src_len)
/*
* Executes the KIMD (COMPUTE INTERMEDIATE MESSAGE DIGEST) operation
* of the z990 CPU.
* @param func: the function code passed to KM; see crypt_z990_kimd_func
* of the CPU.
* @param func: the function code passed to KM; see crypt_s390_kimd_func
* @param param: address of parameter block; see POP for details on each func
* @param src: address of source memory area
* @param src_len: length of src operand in bytes
@ -223,9 +247,9 @@ crypt_z990_kmc(long func, void* param, u8* dest, const u8* src, long src_len)
* for digest funcs
*/
static inline int
crypt_z990_kimd(long func, void* param, const u8* src, long src_len)
crypt_s390_kimd(long func, void* param, const u8* src, long src_len)
{
register long __func asm("0") = func & CRYPT_Z990_FUNC_MASK;
register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
register void* __param asm("1") = param;
register const u8* __src asm("2") = src;
register long __src_len asm("3") = src_len;
@ -233,25 +257,25 @@ crypt_z990_kimd(long func, void* param, const u8* src, long src_len)
ret = 0;
__asm__ __volatile__ (
"0: .insn rre,0xB93E0000,%1,%1 \n" //KIMD opcode
"1: brc 1,0b \n" /*handle partical completion of kimd*/
__crypt_z990_set_result
"0: .insn rre,0xB93E0000,%1,%1 \n" /* KIMD opcode */
"1: brc 1,0b \n" /* handle partical completion */
__crypt_s390_set_result
"6: \n"
__crypt_z990_fixup
__crypt_s390_fixup
: "+d" (ret), "+a" (__src), [result] "+d" (__src_len)
: [e1] "K" (-EFAULT), [e2] "K" (-ENOSYS), "d" (__func),
"a" (__param)
: "cc", "memory"
);
if (ret >= 0 && (func & CRYPT_Z990_FUNC_MASK)){
if (ret >= 0 && (func & CRYPT_S390_FUNC_MASK)){
ret = src_len - ret;
}
return ret;
}
/*
* Executes the KLMD (COMPUTE LAST MESSAGE DIGEST) operation of the z990 CPU.
* @param func: the function code passed to KM; see crypt_z990_klmd_func
* Executes the KLMD (COMPUTE LAST MESSAGE DIGEST) operation of the CPU.
* @param func: the function code passed to KM; see crypt_s390_klmd_func
* @param param: address of parameter block; see POP for details on each func
* @param src: address of source memory area
* @param src_len: length of src operand in bytes
@ -259,9 +283,9 @@ crypt_z990_kimd(long func, void* param, const u8* src, long src_len)
* for digest funcs
*/
static inline int
crypt_z990_klmd(long func, void* param, const u8* src, long src_len)
crypt_s390_klmd(long func, void* param, const u8* src, long src_len)
{
register long __func asm("0") = func & CRYPT_Z990_FUNC_MASK;
register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
register void* __param asm("1") = param;
register const u8* __src asm("2") = src;
register long __src_len asm("3") = src_len;
@ -269,17 +293,17 @@ crypt_z990_klmd(long func, void* param, const u8* src, long src_len)
ret = 0;
__asm__ __volatile__ (
"0: .insn rre,0xB93F0000,%1,%1 \n" //KLMD opcode
"1: brc 1,0b \n" /*handle partical completion of klmd*/
__crypt_z990_set_result
"0: .insn rre,0xB93F0000,%1,%1 \n" /* KLMD opcode */
"1: brc 1,0b \n" /* handle partical completion */
__crypt_s390_set_result
"6: \n"
__crypt_z990_fixup
__crypt_s390_fixup
: "+d" (ret), "+a" (__src), [result] "+d" (__src_len)
: [e1] "K" (-EFAULT), [e2] "K" (-ENOSYS), "d" (__func),
"a" (__param)
: "cc", "memory"
);
if (ret >= 0 && func & CRYPT_Z990_FUNC_MASK){
if (ret >= 0 && func & CRYPT_S390_FUNC_MASK){
ret = src_len - ret;
}
return ret;
@ -287,8 +311,8 @@ crypt_z990_klmd(long func, void* param, const u8* src, long src_len)
/*
* Executes the KMAC (COMPUTE MESSAGE AUTHENTICATION CODE) operation
* of the z990 CPU.
* @param func: the function code passed to KM; see crypt_z990_klmd_func
* of the CPU.
* @param func: the function code passed to KM; see crypt_s390_klmd_func
* @param param: address of parameter block; see POP for details on each func
* @param src: address of source memory area
* @param src_len: length of src operand in bytes
@ -296,9 +320,9 @@ crypt_z990_klmd(long func, void* param, const u8* src, long src_len)
* for digest funcs
*/
static inline int
crypt_z990_kmac(long func, void* param, const u8* src, long src_len)
crypt_s390_kmac(long func, void* param, const u8* src, long src_len)
{
register long __func asm("0") = func & CRYPT_Z990_FUNC_MASK;
register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
register void* __param asm("1") = param;
register const u8* __src asm("2") = src;
register long __src_len asm("3") = src_len;
@ -306,58 +330,58 @@ crypt_z990_kmac(long func, void* param, const u8* src, long src_len)
ret = 0;
__asm__ __volatile__ (
"0: .insn rre,0xB91E0000,%5,%5 \n" //KMAC opcode
"1: brc 1,0b \n" /*handle partical completion of klmd*/
__crypt_z990_set_result
"0: .insn rre,0xB91E0000,%5,%5 \n" /* KMAC opcode */
"1: brc 1,0b \n" /* handle partical completion */
__crypt_s390_set_result
"6: \n"
__crypt_z990_fixup
__crypt_s390_fixup
: "+d" (ret), "+a" (__src), [result] "+d" (__src_len)
: [e1] "K" (-EFAULT), [e2] "K" (-ENOSYS), "d" (__func),
"a" (__param)
: "cc", "memory"
);
if (ret >= 0 && func & CRYPT_Z990_FUNC_MASK){
if (ret >= 0 && func & CRYPT_S390_FUNC_MASK){
ret = src_len - ret;
}
return ret;
}
/**
* Tests if a specific z990 crypto function is implemented on the machine.
* Tests if a specific crypto function is implemented on the machine.
* @param func: the function code of the specific function; 0 if op in general
* @return 1 if func available; 0 if func or op in general not available
*/
static inline int
crypt_z990_func_available(int func)
crypt_s390_func_available(int func)
{
int ret;
struct crypt_z990_query_status status = {
struct crypt_s390_query_status status = {
.high = 0,
.low = 0
};
switch (func & CRYPT_Z990_OP_MASK){
case CRYPT_Z990_KM:
ret = crypt_z990_km(KM_QUERY, &status, NULL, NULL, 0);
switch (func & CRYPT_S390_OP_MASK){
case CRYPT_S390_KM:
ret = crypt_s390_km(KM_QUERY, &status, NULL, NULL, 0);
break;
case CRYPT_Z990_KMC:
ret = crypt_z990_kmc(KMC_QUERY, &status, NULL, NULL, 0);
case CRYPT_S390_KMC:
ret = crypt_s390_kmc(KMC_QUERY, &status, NULL, NULL, 0);
break;
case CRYPT_Z990_KIMD:
ret = crypt_z990_kimd(KIMD_QUERY, &status, NULL, 0);
case CRYPT_S390_KIMD:
ret = crypt_s390_kimd(KIMD_QUERY, &status, NULL, 0);
break;
case CRYPT_Z990_KLMD:
ret = crypt_z990_klmd(KLMD_QUERY, &status, NULL, 0);
case CRYPT_S390_KLMD:
ret = crypt_s390_klmd(KLMD_QUERY, &status, NULL, 0);
break;
case CRYPT_Z990_KMAC:
ret = crypt_z990_kmac(KMAC_QUERY, &status, NULL, 0);
case CRYPT_S390_KMAC:
ret = crypt_s390_kmac(KMAC_QUERY, &status, NULL, 0);
break;
default:
ret = 0;
return ret;
}
if (ret >= 0){
func &= CRYPT_Z990_FUNC_MASK;
func &= CRYPT_S390_FUNC_MASK;
func &= 0x7f; //mask modifier bit
if (func < 64){
ret = (status.high >> (64 - func - 1)) & 0x1;
@ -370,5 +394,4 @@ crypt_z990_func_available(int func)
return ret;
}
#endif // _CRYPTO_ARCH_S390_CRYPT_Z990_H
#endif // _CRYPTO_ARCH_S390_CRYPT_S390_H
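As a usage illustration of the renamed interface (a minimal sketch; the wrapper example_km_encrypt() is hypothetical, while the function code, parameter block and block length follow the des_s390.c hunk further down in this diff):
/* Hypothetical caller: probe for hardware DEA support, then encrypt one
 * 8 byte block via the KM instruction.  For KM_DEA_* the parameter block
 * is simply the 8 byte DES key.  Returns bytes processed (8) on success
 * or a negative errno. */
static int example_km_encrypt(u8 *dst, const u8 *src, u8 *key)
{
	if (!crypt_s390_func_available(KM_DEA_ENCRYPT))
		return -ENOSYS;	/* CPACF DEA not installed on this machine */
	return crypt_s390_km(KM_DEA_ENCRYPT, key, dst, src, 8);
}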


@ -0,0 +1,129 @@
/*
* Cryptographic API.
*
* Support for s390 cryptographic instructions.
* Testing module for querying processor crypto capabilities.
*
* Copyright (c) 2003 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Thomas Spatzier (tspat@de.ibm.com)
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <asm/errno.h>
#include "crypt_s390.h"
static void query_available_functions(void)
{
printk(KERN_INFO "#####################\n");
/* query available KM functions */
printk(KERN_INFO "KM_QUERY: %d\n",
crypt_s390_func_available(KM_QUERY));
printk(KERN_INFO "KM_DEA: %d\n",
crypt_s390_func_available(KM_DEA_ENCRYPT));
printk(KERN_INFO "KM_TDEA_128: %d\n",
crypt_s390_func_available(KM_TDEA_128_ENCRYPT));
printk(KERN_INFO "KM_TDEA_192: %d\n",
crypt_s390_func_available(KM_TDEA_192_ENCRYPT));
printk(KERN_INFO "KM_AES_128: %d\n",
crypt_s390_func_available(KM_AES_128_ENCRYPT));
printk(KERN_INFO "KM_AES_192: %d\n",
crypt_s390_func_available(KM_AES_192_ENCRYPT));
printk(KERN_INFO "KM_AES_256: %d\n",
crypt_s390_func_available(KM_AES_256_ENCRYPT));
/* query available KMC functions */
printk(KERN_INFO "KMC_QUERY: %d\n",
crypt_s390_func_available(KMC_QUERY));
printk(KERN_INFO "KMC_DEA: %d\n",
crypt_s390_func_available(KMC_DEA_ENCRYPT));
printk(KERN_INFO "KMC_TDEA_128: %d\n",
crypt_s390_func_available(KMC_TDEA_128_ENCRYPT));
printk(KERN_INFO "KMC_TDEA_192: %d\n",
crypt_s390_func_available(KMC_TDEA_192_ENCRYPT));
printk(KERN_INFO "KMC_AES_128: %d\n",
crypt_s390_func_available(KMC_AES_128_ENCRYPT));
printk(KERN_INFO "KMC_AES_192: %d\n",
crypt_s390_func_available(KMC_AES_192_ENCRYPT));
printk(KERN_INFO "KMC_AES_256: %d\n",
crypt_s390_func_available(KMC_AES_256_ENCRYPT));
/* query available KIMD functions */
printk(KERN_INFO "KIMD_QUERY: %d\n",
crypt_s390_func_available(KIMD_QUERY));
printk(KERN_INFO "KIMD_SHA_1: %d\n",
crypt_s390_func_available(KIMD_SHA_1));
printk(KERN_INFO "KIMD_SHA_256: %d\n",
crypt_s390_func_available(KIMD_SHA_256));
/* query available KLMD functions */
printk(KERN_INFO "KLMD_QUERY: %d\n",
crypt_s390_func_available(KLMD_QUERY));
printk(KERN_INFO "KLMD_SHA_1: %d\n",
crypt_s390_func_available(KLMD_SHA_1));
printk(KERN_INFO "KLMD_SHA_256: %d\n",
crypt_s390_func_available(KLMD_SHA_256));
/* query available KMAC functions */
printk(KERN_INFO "KMAC_QUERY: %d\n",
crypt_s390_func_available(KMAC_QUERY));
printk(KERN_INFO "KMAC_DEA: %d\n",
crypt_s390_func_available(KMAC_DEA));
printk(KERN_INFO "KMAC_TDEA_128: %d\n",
crypt_s390_func_available(KMAC_TDEA_128));
printk(KERN_INFO "KMAC_TDEA_192: %d\n",
crypt_s390_func_available(KMAC_TDEA_192));
}
static int init(void)
{
struct crypt_s390_query_status status = {
.high = 0,
.low = 0
};
printk(KERN_INFO "crypt_s390: querying available crypto functions\n");
crypt_s390_km(KM_QUERY, &status, NULL, NULL, 0);
printk(KERN_INFO "KM:\t%016llx %016llx\n",
(unsigned long long) status.high,
(unsigned long long) status.low);
status.high = status.low = 0;
crypt_s390_kmc(KMC_QUERY, &status, NULL, NULL, 0);
printk(KERN_INFO "KMC:\t%016llx %016llx\n",
(unsigned long long) status.high,
(unsigned long long) status.low);
status.high = status.low = 0;
crypt_s390_kimd(KIMD_QUERY, &status, NULL, 0);
printk(KERN_INFO "KIMD:\t%016llx %016llx\n",
(unsigned long long) status.high,
(unsigned long long) status.low);
status.high = status.low = 0;
crypt_s390_klmd(KLMD_QUERY, &status, NULL, 0);
printk(KERN_INFO "KLMD:\t%016llx %016llx\n",
(unsigned long long) status.high,
(unsigned long long) status.low);
status.high = status.low = 0;
crypt_s390_kmac(KMAC_QUERY, &status, NULL, 0);
printk(KERN_INFO "KMAC:\t%016llx %016llx\n",
(unsigned long long) status.high,
(unsigned long long) status.low);
query_available_functions();
return -ECANCELED;
}
static void __exit cleanup(void)
{
}
module_init(init);
module_exit(cleanup);
MODULE_LICENSE("GPL");


@ -1,111 +0,0 @@
/*
* Cryptographic API.
*
* Support for z990 cryptographic instructions.
* Testing module for querying processor crypto capabilities.
*
* Copyright (c) 2003 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Thomas Spatzier (tspat@de.ibm.com)
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <asm/errno.h>
#include "crypt_z990.h"
static void
query_available_functions(void)
{
printk(KERN_INFO "#####################\n");
//query available KM functions
printk(KERN_INFO "KM_QUERY: %d\n",
crypt_z990_func_available(KM_QUERY));
printk(KERN_INFO "KM_DEA: %d\n",
crypt_z990_func_available(KM_DEA_ENCRYPT));
printk(KERN_INFO "KM_TDEA_128: %d\n",
crypt_z990_func_available(KM_TDEA_128_ENCRYPT));
printk(KERN_INFO "KM_TDEA_192: %d\n",
crypt_z990_func_available(KM_TDEA_192_ENCRYPT));
//query available KMC functions
printk(KERN_INFO "KMC_QUERY: %d\n",
crypt_z990_func_available(KMC_QUERY));
printk(KERN_INFO "KMC_DEA: %d\n",
crypt_z990_func_available(KMC_DEA_ENCRYPT));
printk(KERN_INFO "KMC_TDEA_128: %d\n",
crypt_z990_func_available(KMC_TDEA_128_ENCRYPT));
printk(KERN_INFO "KMC_TDEA_192: %d\n",
crypt_z990_func_available(KMC_TDEA_192_ENCRYPT));
//query available KIMD fucntions
printk(KERN_INFO "KIMD_QUERY: %d\n",
crypt_z990_func_available(KIMD_QUERY));
printk(KERN_INFO "KIMD_SHA_1: %d\n",
crypt_z990_func_available(KIMD_SHA_1));
//query available KLMD functions
printk(KERN_INFO "KLMD_QUERY: %d\n",
crypt_z990_func_available(KLMD_QUERY));
printk(KERN_INFO "KLMD_SHA_1: %d\n",
crypt_z990_func_available(KLMD_SHA_1));
//query available KMAC functions
printk(KERN_INFO "KMAC_QUERY: %d\n",
crypt_z990_func_available(KMAC_QUERY));
printk(KERN_INFO "KMAC_DEA: %d\n",
crypt_z990_func_available(KMAC_DEA));
printk(KERN_INFO "KMAC_TDEA_128: %d\n",
crypt_z990_func_available(KMAC_TDEA_128));
printk(KERN_INFO "KMAC_TDEA_192: %d\n",
crypt_z990_func_available(KMAC_TDEA_192));
}
static int
init(void)
{
struct crypt_z990_query_status status = {
.high = 0,
.low = 0
};
printk(KERN_INFO "crypt_z990: querying available crypto functions\n");
crypt_z990_km(KM_QUERY, &status, NULL, NULL, 0);
printk(KERN_INFO "KM: %016llx %016llx\n",
(unsigned long long) status.high,
(unsigned long long) status.low);
status.high = status.low = 0;
crypt_z990_kmc(KMC_QUERY, &status, NULL, NULL, 0);
printk(KERN_INFO "KMC: %016llx %016llx\n",
(unsigned long long) status.high,
(unsigned long long) status.low);
status.high = status.low = 0;
crypt_z990_kimd(KIMD_QUERY, &status, NULL, 0);
printk(KERN_INFO "KIMD: %016llx %016llx\n",
(unsigned long long) status.high,
(unsigned long long) status.low);
status.high = status.low = 0;
crypt_z990_klmd(KLMD_QUERY, &status, NULL, 0);
printk(KERN_INFO "KLMD: %016llx %016llx\n",
(unsigned long long) status.high,
(unsigned long long) status.low);
status.high = status.low = 0;
crypt_z990_kmac(KMAC_QUERY, &status, NULL, 0);
printk(KERN_INFO "KMAC: %016llx %016llx\n",
(unsigned long long) status.high,
(unsigned long long) status.low);
query_available_functions();
return -1;
}
static void __exit
cleanup(void)
{
}
module_init(init);
module_exit(cleanup);
MODULE_LICENSE("GPL");


@ -1,7 +1,7 @@
/*
* Cryptographic API.
*
* z990 implementation of the DES Cipher Algorithm.
* s390 implementation of the DES Cipher Algorithm.
*
* Copyright (c) 2003 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Thomas Spatzier (tspat@de.ibm.com)
@ -19,7 +19,7 @@
#include <linux/errno.h>
#include <asm/scatterlist.h>
#include <linux/crypto.h>
#include "crypt_z990.h"
#include "crypt_s390.h"
#include "crypto_des.h"
#define DES_BLOCK_SIZE 8
@ -31,17 +31,17 @@
#define DES3_192_KEY_SIZE (3 * DES_KEY_SIZE)
#define DES3_192_BLOCK_SIZE DES_BLOCK_SIZE
struct crypt_z990_des_ctx {
struct crypt_s390_des_ctx {
u8 iv[DES_BLOCK_SIZE];
u8 key[DES_KEY_SIZE];
};
struct crypt_z990_des3_128_ctx {
struct crypt_s390_des3_128_ctx {
u8 iv[DES_BLOCK_SIZE];
u8 key[DES3_128_KEY_SIZE];
};
struct crypt_z990_des3_192_ctx {
struct crypt_s390_des3_192_ctx {
u8 iv[DES_BLOCK_SIZE];
u8 key[DES3_192_KEY_SIZE];
};
@ -49,7 +49,7 @@ struct crypt_z990_des3_192_ctx {
static int
des_setkey(void *ctx, const u8 *key, unsigned int keylen, u32 *flags)
{
struct crypt_z990_des_ctx *dctx;
struct crypt_s390_des_ctx *dctx;
int ret;
dctx = ctx;
@ -65,26 +65,26 @@ des_setkey(void *ctx, const u8 *key, unsigned int keylen, u32 *flags)
static void
des_encrypt(void *ctx, u8 *dst, const u8 *src)
{
struct crypt_z990_des_ctx *dctx;
struct crypt_s390_des_ctx *dctx;
dctx = ctx;
crypt_z990_km(KM_DEA_ENCRYPT, dctx->key, dst, src, DES_BLOCK_SIZE);
crypt_s390_km(KM_DEA_ENCRYPT, dctx->key, dst, src, DES_BLOCK_SIZE);
}
static void
des_decrypt(void *ctx, u8 *dst, const u8 *src)
{
struct crypt_z990_des_ctx *dctx;
struct crypt_s390_des_ctx *dctx;
dctx = ctx;
crypt_z990_km(KM_DEA_DECRYPT, dctx->key, dst, src, DES_BLOCK_SIZE);
crypt_s390_km(KM_DEA_DECRYPT, dctx->key, dst, src, DES_BLOCK_SIZE);
}
static struct crypto_alg des_alg = {
.cra_name = "des",
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = DES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypt_z990_des_ctx),
.cra_ctxsize = sizeof(struct crypt_s390_des_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(des_alg.cra_list),
.cra_u = { .cipher = {
@ -111,7 +111,7 @@ static int
des3_128_setkey(void *ctx, const u8 *key, unsigned int keylen, u32 *flags)
{
int i, ret;
struct crypt_z990_des3_128_ctx *dctx;
struct crypt_s390_des3_128_ctx *dctx;
const u8* temp_key = key;
dctx = ctx;
@ -132,20 +132,20 @@ des3_128_setkey(void *ctx, const u8 *key, unsigned int keylen, u32 *flags)
static void
des3_128_encrypt(void *ctx, u8 *dst, const u8 *src)
{
struct crypt_z990_des3_128_ctx *dctx;
struct crypt_s390_des3_128_ctx *dctx;
dctx = ctx;
crypt_z990_km(KM_TDEA_128_ENCRYPT, dctx->key, dst, (void*)src,
crypt_s390_km(KM_TDEA_128_ENCRYPT, dctx->key, dst, (void*)src,
DES3_128_BLOCK_SIZE);
}
static void
des3_128_decrypt(void *ctx, u8 *dst, const u8 *src)
{
struct crypt_z990_des3_128_ctx *dctx;
struct crypt_s390_des3_128_ctx *dctx;
dctx = ctx;
crypt_z990_km(KM_TDEA_128_DECRYPT, dctx->key, dst, (void*)src,
crypt_s390_km(KM_TDEA_128_DECRYPT, dctx->key, dst, (void*)src,
DES3_128_BLOCK_SIZE);
}
@ -153,7 +153,7 @@ static struct crypto_alg des3_128_alg = {
.cra_name = "des3_ede128",
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = DES3_128_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypt_z990_des3_128_ctx),
.cra_ctxsize = sizeof(struct crypt_s390_des3_128_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(des3_128_alg.cra_list),
.cra_u = { .cipher = {
@ -181,7 +181,7 @@ static int
des3_192_setkey(void *ctx, const u8 *key, unsigned int keylen, u32 *flags)
{
int i, ret;
struct crypt_z990_des3_192_ctx *dctx;
struct crypt_s390_des3_192_ctx *dctx;
const u8* temp_key;
dctx = ctx;
@ -206,20 +206,20 @@ des3_192_setkey(void *ctx, const u8 *key, unsigned int keylen, u32 *flags)
static void
des3_192_encrypt(void *ctx, u8 *dst, const u8 *src)
{
struct crypt_z990_des3_192_ctx *dctx;
struct crypt_s390_des3_192_ctx *dctx;
dctx = ctx;
crypt_z990_km(KM_TDEA_192_ENCRYPT, dctx->key, dst, (void*)src,
crypt_s390_km(KM_TDEA_192_ENCRYPT, dctx->key, dst, (void*)src,
DES3_192_BLOCK_SIZE);
}
static void
des3_192_decrypt(void *ctx, u8 *dst, const u8 *src)
{
struct crypt_z990_des3_192_ctx *dctx;
struct crypt_s390_des3_192_ctx *dctx;
dctx = ctx;
crypt_z990_km(KM_TDEA_192_DECRYPT, dctx->key, dst, (void*)src,
crypt_s390_km(KM_TDEA_192_DECRYPT, dctx->key, dst, (void*)src,
DES3_192_BLOCK_SIZE);
}
@ -227,7 +227,7 @@ static struct crypto_alg des3_192_alg = {
.cra_name = "des3_ede",
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = DES3_192_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypt_z990_des3_192_ctx),
.cra_ctxsize = sizeof(struct crypt_s390_des3_192_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(des3_192_alg.cra_list),
.cra_u = { .cipher = {
@ -245,9 +245,9 @@ init(void)
{
int ret;
if (!crypt_z990_func_available(KM_DEA_ENCRYPT) ||
!crypt_z990_func_available(KM_TDEA_128_ENCRYPT) ||
!crypt_z990_func_available(KM_TDEA_192_ENCRYPT)){
if (!crypt_s390_func_available(KM_DEA_ENCRYPT) ||
!crypt_s390_func_available(KM_TDEA_128_ENCRYPT) ||
!crypt_s390_func_available(KM_TDEA_192_ENCRYPT)){
return -ENOSYS;
}
@ -262,7 +262,7 @@ init(void)
return -EEXIST;
}
printk(KERN_INFO "crypt_z990: des_z990 loaded.\n");
printk(KERN_INFO "crypt_s390: des_s390 loaded.\n");
return 0;
}


@ -1,7 +1,7 @@
/*
* Cryptographic API.
*
* z990 implementation of the SHA1 Secure Hash Algorithm.
* s390 implementation of the SHA1 Secure Hash Algorithm.
*
* Derived from cryptoapi implementation, adapted for in-place
* scatterlist interface. Originally based on the public domain
@ -28,22 +28,22 @@
#include <linux/crypto.h>
#include <asm/scatterlist.h>
#include <asm/byteorder.h>
#include "crypt_z990.h"
#include "crypt_s390.h"
#define SHA1_DIGEST_SIZE 20
#define SHA1_BLOCK_SIZE 64
struct crypt_z990_sha1_ctx {
u64 count;
u32 state[5];
struct crypt_s390_sha1_ctx {
u64 count;
u32 state[5];
u32 buf_len;
u8 buffer[2 * SHA1_BLOCK_SIZE];
u8 buffer[2 * SHA1_BLOCK_SIZE];
};
static void
sha1_init(void *ctx)
{
static const struct crypt_z990_sha1_ctx initstate = {
static const struct crypt_s390_sha1_ctx initstate = {
.state = {
0x67452301,
0xEFCDAB89,
@ -58,7 +58,7 @@ sha1_init(void *ctx)
static void
sha1_update(void *ctx, const u8 *data, unsigned int len)
{
struct crypt_z990_sha1_ctx *sctx;
struct crypt_s390_sha1_ctx *sctx;
long imd_len;
sctx = ctx;
@ -69,7 +69,7 @@ sha1_update(void *ctx, const u8 *data, unsigned int len)
//complete full block and hash
memcpy(sctx->buffer + sctx->buf_len, data,
SHA1_BLOCK_SIZE - sctx->buf_len);
crypt_z990_kimd(KIMD_SHA_1, sctx->state, sctx->buffer,
crypt_s390_kimd(KIMD_SHA_1, sctx->state, sctx->buffer,
SHA1_BLOCK_SIZE);
data += SHA1_BLOCK_SIZE - sctx->buf_len;
len -= SHA1_BLOCK_SIZE - sctx->buf_len;
@ -79,7 +79,7 @@ sha1_update(void *ctx, const u8 *data, unsigned int len)
//rest of data contains full blocks?
imd_len = len & ~0x3ful;
if (imd_len){
crypt_z990_kimd(KIMD_SHA_1, sctx->state, data, imd_len);
crypt_s390_kimd(KIMD_SHA_1, sctx->state, data, imd_len);
data += imd_len;
len -= imd_len;
}
@ -92,7 +92,7 @@ sha1_update(void *ctx, const u8 *data, unsigned int len)
static void
pad_message(struct crypt_z990_sha1_ctx* sctx)
pad_message(struct crypt_s390_sha1_ctx* sctx)
{
int index;
@ -113,11 +113,11 @@ pad_message(struct crypt_z990_sha1_ctx* sctx)
static void
sha1_final(void* ctx, u8 *out)
{
struct crypt_z990_sha1_ctx *sctx = ctx;
struct crypt_s390_sha1_ctx *sctx = ctx;
//must perform manual padding
pad_message(sctx);
crypt_z990_kimd(KIMD_SHA_1, sctx->state, sctx->buffer, sctx->buf_len);
crypt_s390_kimd(KIMD_SHA_1, sctx->state, sctx->buffer, sctx->buf_len);
//copy digest to out
memcpy(out, sctx->state, SHA1_DIGEST_SIZE);
/* Wipe context */
@ -128,7 +128,7 @@ static struct crypto_alg alg = {
.cra_name = "sha1",
.cra_flags = CRYPTO_ALG_TYPE_DIGEST,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypt_z990_sha1_ctx),
.cra_ctxsize = sizeof(struct crypt_s390_sha1_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = { .digest = {
@ -143,10 +143,10 @@ init(void)
{
int ret = -ENOSYS;
if (crypt_z990_func_available(KIMD_SHA_1)){
if (crypt_s390_func_available(KIMD_SHA_1)){
ret = crypto_register_alg(&alg);
if (ret == 0){
printk(KERN_INFO "crypt_z990: sha1_z990 loaded.\n");
printk(KERN_INFO "crypt_s390: sha1_s390 loaded.\n");
}
}
return ret;
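A minimal sketch of the length split performed by sha1_update() above (the helper name is illustrative):
/* Whole 64 byte blocks go straight to KIMD, the remainder stays in
 * sctx->buffer; this mirrors the imd_len arithmetic in sha1_update(). */
static inline void sha1_split_len(unsigned long len,
				  unsigned long *kimd_bytes,
				  unsigned long *buffered_bytes)
{
	*kimd_bytes = len & ~0x3ful;	/* multiple of SHA1_BLOCK_SIZE */
	*buffered_bytes = len & 0x3ful;
}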


@ -0,0 +1,151 @@
/*
* Cryptographic API.
*
* s390 implementation of the SHA256 Secure Hash Algorithm.
*
* s390 Version:
* Copyright (C) 2005 IBM Deutschland GmbH, IBM Corporation
* Author(s): Jan Glauber (jang@de.ibm.com)
*
* Derived from "crypto/sha256.c"
* and "arch/s390/crypto/sha1_s390.c"
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/crypto.h>
#include "crypt_s390.h"
#define SHA256_DIGEST_SIZE 32
#define SHA256_BLOCK_SIZE 64
struct s390_sha256_ctx {
u64 count;
u32 state[8];
u8 buf[2 * SHA256_BLOCK_SIZE];
};
static void sha256_init(void *ctx)
{
struct s390_sha256_ctx *sctx = ctx;
sctx->state[0] = 0x6a09e667;
sctx->state[1] = 0xbb67ae85;
sctx->state[2] = 0x3c6ef372;
sctx->state[3] = 0xa54ff53a;
sctx->state[4] = 0x510e527f;
sctx->state[5] = 0x9b05688c;
sctx->state[6] = 0x1f83d9ab;
sctx->state[7] = 0x5be0cd19;
sctx->count = 0;
memset(sctx->buf, 0, sizeof(sctx->buf));
}
static void sha256_update(void *ctx, const u8 *data, unsigned int len)
{
struct s390_sha256_ctx *sctx = ctx;
unsigned int index;
/* how much is already in the buffer? */
index = sctx->count / 8 & 0x3f;
/* update message bit length */
sctx->count += len * 8;
/* process one block */
if ((index + len) >= SHA256_BLOCK_SIZE) {
memcpy(sctx->buf + index, data, SHA256_BLOCK_SIZE - index);
crypt_s390_kimd(KIMD_SHA_256, sctx->state, sctx->buf,
SHA256_BLOCK_SIZE);
data += SHA256_BLOCK_SIZE - index;
len -= SHA256_BLOCK_SIZE - index;
}
/* anything left? */
if (len)
memcpy(sctx->buf + index , data, len);
}
static void pad_message(struct s390_sha256_ctx* sctx)
{
int index, end;
index = sctx->count / 8 & 0x3f;
end = index < 56 ? SHA256_BLOCK_SIZE : 2 * SHA256_BLOCK_SIZE;
/* start pad with 1 */
sctx->buf[index] = 0x80;
/* pad with zeros */
index++;
memset(sctx->buf + index, 0x00, end - index - 8);
/* append message length */
memcpy(sctx->buf + end - 8, &sctx->count, sizeof sctx->count);
sctx->count = end * 8;
}
/* Add padding and return the message digest */
static void sha256_final(void* ctx, u8 *out)
{
struct s390_sha256_ctx *sctx = ctx;
/* must perform manual padding */
pad_message(sctx);
crypt_s390_kimd(KIMD_SHA_256, sctx->state, sctx->buf,
sctx->count / 8);
/* copy digest to out */
memcpy(out, sctx->state, SHA256_DIGEST_SIZE);
/* wipe context */
memset(sctx, 0, sizeof *sctx);
}
static struct crypto_alg alg = {
.cra_name = "sha256",
.cra_flags = CRYPTO_ALG_TYPE_DIGEST,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_sha256_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = { .digest = {
.dia_digestsize = SHA256_DIGEST_SIZE,
.dia_init = sha256_init,
.dia_update = sha256_update,
.dia_final = sha256_final } }
};
static int init(void)
{
int ret;
if (!crypt_s390_func_available(KIMD_SHA_256))
return -ENOSYS;
ret = crypto_register_alg(&alg);
if (ret != 0)
printk(KERN_INFO "crypt_s390: sha256_s390 couldn't be loaded.");
return ret;
}
static void __exit fini(void)
{
crypto_unregister_alg(&alg);
}
module_init(init);
module_exit(fini);
MODULE_ALIAS("sha256");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA256 Secure Hash Algorithm");


@ -1,12 +1,12 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.14-rc1
# Wed Sep 14 16:46:19 2005
# Linux kernel version: 2.6.15-rc2
# Mon Nov 21 13:51:30 2005
#
CONFIG_MMU=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_S390=y
CONFIG_S390=y
CONFIG_UID16=y
#
@ -64,6 +64,24 @@ CONFIG_MODVERSIONS=y
CONFIG_KMOD=y
CONFIG_STOP_MACHINE=y
#
# Block layer
#
# CONFIG_LBD is not set
#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
CONFIG_DEFAULT_AS=y
# CONFIG_DEFAULT_DEADLINE is not set
# CONFIG_DEFAULT_CFQ is not set
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="anticipatory"
#
# Base setup
#
@ -71,9 +89,7 @@ CONFIG_STOP_MACHINE=y
#
# Processor type and features
#
# CONFIG_ARCH_S390X is not set
# CONFIG_64BIT is not set
CONFIG_ARCH_S390_31=y
CONFIG_SMP=y
CONFIG_NR_CPUS=32
CONFIG_HOTPLUG_CPU=y
@ -97,6 +113,7 @@ CONFIG_FLATMEM_MANUAL=y
CONFIG_FLATMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
# CONFIG_SPARSEMEM_STATIC is not set
CONFIG_SPLIT_PTLOCK_CPUS=4
#
# I/O subsystem configuration
@ -188,10 +205,18 @@ CONFIG_IPV6=y
# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
#
# QoS and/or fair queueing
#
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_CLK_JIFFIES=y
# CONFIG_NET_SCH_CLK_GETTIMEOFDAY is not set
# CONFIG_NET_SCH_CLK_CPU is not set
#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=m
# CONFIG_NET_SCH_HTB is not set
# CONFIG_NET_SCH_HFSC is not set
@ -204,8 +229,10 @@ CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_DSMARK=m
# CONFIG_NET_SCH_NETEM is not set
# CONFIG_NET_SCH_INGRESS is not set
CONFIG_NET_QOS=y
CONFIG_NET_ESTIMATOR=y
#
# Classification
#
CONFIG_NET_CLS=y
# CONFIG_NET_CLS_BASIC is not set
CONFIG_NET_CLS_TCINDEX=m
@ -214,18 +241,18 @@ CONFIG_NET_CLS_ROUTE=y
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
# CONFIG_CLS_U32_PERF is not set
# CONFIG_NET_CLS_IND is not set
CONFIG_NET_CLS_RSVP=m
CONFIG_NET_CLS_RSVP6=m
# CONFIG_NET_EMATCH is not set
# CONFIG_NET_CLS_ACT is not set
CONFIG_NET_CLS_POLICE=y
# CONFIG_NET_CLS_IND is not set
CONFIG_NET_ESTIMATOR=y
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NETFILTER_NETLINK is not set
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
@ -276,6 +303,7 @@ CONFIG_SCSI_FC_ATTRS=y
#
# SCSI low-level drivers
#
# CONFIG_ISCSI_TCP is not set
# CONFIG_SCSI_SATA is not set
# CONFIG_SCSI_DEBUG is not set
CONFIG_ZFCP=y
@ -292,7 +320,6 @@ CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
# CONFIG_LBD is not set
# CONFIG_CDROM_PKTCDVD is not set
#
@ -305,15 +332,8 @@ CONFIG_DASD_PROFILE=y
CONFIG_DASD_ECKD=y
CONFIG_DASD_FBA=y
CONFIG_DASD_DIAG=y
CONFIG_DASD_EER=m
# CONFIG_DASD_CMB is not set
#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
# CONFIG_ATA_OVER_ETH is not set
#
@ -378,7 +398,6 @@ CONFIG_S390_TAPE_34XX=m
# CONFIG_VMLOGRDR is not set
# CONFIG_VMCP is not set
# CONFIG_MONREADER is not set
# CONFIG_DCSS_SHM is not set
#
# Cryptographic devices
@ -593,6 +612,8 @@ CONFIG_DEBUG_PREEMPT=y
# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_INFO is not set
CONFIG_DEBUG_FS=y
# CONFIG_DEBUG_VM is not set
# CONFIG_RCU_TORTURE_TEST is not set
#
# Security options
@ -609,17 +630,19 @@ CONFIG_CRYPTO=y
# CONFIG_CRYPTO_MD4 is not set
# CONFIG_CRYPTO_MD5 is not set
# CONFIG_CRYPTO_SHA1 is not set
# CONFIG_CRYPTO_SHA1_Z990 is not set
# CONFIG_CRYPTO_SHA1_S390 is not set
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA256_S390 is not set
# CONFIG_CRYPTO_SHA512 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_TGR192 is not set
# CONFIG_CRYPTO_DES is not set
# CONFIG_CRYPTO_DES_Z990 is not set
# CONFIG_CRYPTO_DES_S390 is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_TWOFISH is not set
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_AES is not set
# CONFIG_CRYPTO_AES_S390 is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST6 is not set
# CONFIG_CRYPTO_TEA is not set


@ -8,31 +8,26 @@ obj-y := bitmap.o traps.o time.o process.o \
setup.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o \
semaphore.o s390_ext.o debug.o profile.o irq.o reipl_diag.o
obj-y += $(if $(CONFIG_64BIT),entry64.o,entry.o)
obj-y += $(if $(CONFIG_64BIT),reipl64.o,reipl.o)
extra-y += head.o init_task.o vmlinux.lds
obj-$(CONFIG_MODULES) += s390_ksyms.o module.o
obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_S390_SUPPORT) += compat_linux.o compat_signal.o \
obj-$(CONFIG_COMPAT) += compat_linux.o compat_signal.o \
compat_ioctl.o compat_wrapper.o \
compat_exec_domain.o
obj-$(CONFIG_BINFMT_ELF32) += binfmt_elf32.o
obj-$(CONFIG_ARCH_S390_31) += entry.o reipl.o
obj-$(CONFIG_ARCH_S390X) += entry64.o reipl64.o
obj-$(CONFIG_VIRT_TIMER) += vtime.o
# Kexec part
S390_KEXEC_OBJS := machine_kexec.o crash.o
ifeq ($(CONFIG_ARCH_S390X),y)
S390_KEXEC_OBJS += relocate_kernel64.o
else
S390_KEXEC_OBJS += relocate_kernel.o
endif
S390_KEXEC_OBJS += $(if $(CONFIG_64BIT),relocate_kernel64.o,relocate_kernel.o)
obj-$(CONFIG_KEXEC) += $(S390_KEXEC_OBJS)
#
# This is just to get the dependencies...
#


@ -279,7 +279,7 @@ asmlinkage long sys32_getegid16(void)
static inline long get_tv32(struct timeval *o, struct compat_timeval *i)
{
return (!access_ok(VERIFY_READ, tv32, sizeof(*tv32)) ||
return (!access_ok(VERIFY_READ, o, sizeof(*o)) ||
(__get_user(o->tv_sec, &i->tv_sec) ||
__get_user(o->tv_usec, &i->tv_usec)));
}


@ -467,8 +467,6 @@ asmlinkage long sys32_rt_sigreturn(struct pt_regs *regs)
if (err)
goto badframe;
/* It is more difficult to avoid calling this function than to
call it and ignore errors. */
set_fs (KERNEL_DS);
do_sigaltstack((stack_t __user *)&st, NULL, regs->gprs[15]);
set_fs (old_fs);


@ -39,7 +39,7 @@ int __cpcmd(const char *cmd, char *response, int rlen, int *response_code)
if (response != NULL && rlen > 0) {
memset(response, 0, rlen);
#ifndef CONFIG_ARCH_S390X
#ifndef CONFIG_64BIT
asm volatile ( "lra 2,0(%2)\n"
"lr 4,%3\n"
"o 4,%6\n"
@ -55,7 +55,7 @@ int __cpcmd(const char *cmd, char *response, int rlen, int *response_code)
: "a" (cpcmd_buf), "d" (cmdlen),
"a" (response), "d" (rlen), "m" (mask)
: "cc", "2", "3", "4", "5" );
#else /* CONFIG_ARCH_S390X */
#else /* CONFIG_64BIT */
asm volatile ( "lrag 2,0(%2)\n"
"lgr 4,%3\n"
"o 4,%6\n"
@ -73,11 +73,11 @@ int __cpcmd(const char *cmd, char *response, int rlen, int *response_code)
: "a" (cpcmd_buf), "d" (cmdlen),
"a" (response), "d" (rlen), "m" (mask)
: "cc", "2", "3", "4", "5" );
#endif /* CONFIG_ARCH_S390X */
#endif /* CONFIG_64BIT */
EBCASC(response, rlen);
} else {
return_len = 0;
#ifndef CONFIG_ARCH_S390X
#ifndef CONFIG_64BIT
asm volatile ( "lra 2,0(%1)\n"
"lr 3,%2\n"
"diag 2,3,0x8\n"
@ -85,7 +85,7 @@ int __cpcmd(const char *cmd, char *response, int rlen, int *response_code)
: "=d" (return_code)
: "a" (cpcmd_buf), "d" (cmdlen)
: "2", "3" );
#else /* CONFIG_ARCH_S390X */
#else /* CONFIG_64BIT */
asm volatile ( "lrag 2,0(%1)\n"
"lgr 3,%2\n"
"sam31\n"
@ -95,7 +95,7 @@ int __cpcmd(const char *cmd, char *response, int rlen, int *response_code)
: "=d" (return_code)
: "a" (cpcmd_buf), "d" (cmdlen)
: "2", "3" );
#endif /* CONFIG_ARCH_S390X */
#endif /* CONFIG_64BIT */
}
spin_unlock_irqrestore(&cpcmd_lock, flags);
if (response_code != NULL)
@ -105,7 +105,7 @@ int __cpcmd(const char *cmd, char *response, int rlen, int *response_code)
EXPORT_SYMBOL(__cpcmd);
#ifdef CONFIG_ARCH_S390X
#ifdef CONFIG_64BIT
int cpcmd(const char *cmd, char *response, int rlen, int *response_code)
{
char *lowbuf;
@ -129,4 +129,4 @@ int cpcmd(const char *cmd, char *response, int rlen, int *response_code)
}
EXPORT_SYMBOL(cpcmd);
#endif /* CONFIG_ARCH_S390X */
#endif /* CONFIG_64BIT */
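A hedged usage sketch of the interface whose 31 bit and 64 bit paths are unified above (the caller, command string and buffer size are illustrative assumptions):
/* Hypothetical caller: issue a CP command; buf receives the
 * ASCII-converted response and rc the diagnose 8 response code. */
static void example_cpcmd(void)
{
	char buf[128];
	int rc = 0;

	cpcmd("QUERY USERID", buf, sizeof(buf), &rc);
	printk(KERN_INFO "cpcmd rc=%d\n", rc);
}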


@ -213,7 +213,7 @@ sysc_nr_ok:
mvc SP_ARGS(8,%r15),SP_R7(%r15)
sysc_do_restart:
larl %r10,sys_call_table
#ifdef CONFIG_S390_SUPPORT
#ifdef CONFIG_COMPAT
tm __TI_flags+5(%r9),(_TIF_31BIT>>16) # running in 31 bit mode ?
jno sysc_noemu
larl %r10,sys_call_table_emu # use 31 bit emulation system calls
@ -361,7 +361,7 @@ sys_clone_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
jg sys_clone # branch to sys_clone
#ifdef CONFIG_S390_SUPPORT
#ifdef CONFIG_COMPAT
sys32_clone_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
jg sys32_clone # branch to sys32_clone
@ -383,7 +383,7 @@ sys_execve_glue:
bnz 0(%r12) # it did fail -> store result in gpr2
b 6(%r12) # SKIP STG 2,SP_R2(15) in
# system_call/sysc_tracesys
#ifdef CONFIG_S390_SUPPORT
#ifdef CONFIG_COMPAT
sys32_execve_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs
lgr %r12,%r14 # save return address
@ -398,7 +398,7 @@ sys_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys_sigreturn # branch to sys_sigreturn
#ifdef CONFIG_S390_SUPPORT
#ifdef CONFIG_COMPAT
sys32_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys32_sigreturn # branch to sys32_sigreturn
@ -408,7 +408,7 @@ sys_rt_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys_rt_sigreturn # branch to sys_sigreturn
#ifdef CONFIG_S390_SUPPORT
#ifdef CONFIG_COMPAT
sys32_rt_sigreturn_glue:
la %r2,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys32_rt_sigreturn # branch to sys32_sigreturn
@ -429,7 +429,7 @@ sys_sigsuspend_glue:
la %r14,6(%r14) # skip store of return value
jg sys_sigsuspend # branch to sys_sigsuspend
#ifdef CONFIG_S390_SUPPORT
#ifdef CONFIG_COMPAT
sys32_sigsuspend_glue:
llgfr %r4,%r4 # unsigned long
lgr %r5,%r4 # move mask back
@ -449,7 +449,7 @@ sys_rt_sigsuspend_glue:
la %r14,6(%r14) # skip store of return value
jg sys_rt_sigsuspend # branch to sys_rt_sigsuspend
#ifdef CONFIG_S390_SUPPORT
#ifdef CONFIG_COMPAT
sys32_rt_sigsuspend_glue:
llgfr %r3,%r3 # size_t
lgr %r4,%r3 # move sigsetsize parameter
@ -464,7 +464,7 @@ sys_sigaltstack_glue:
la %r4,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys_sigaltstack # branch to sys_sigreturn
#ifdef CONFIG_S390_SUPPORT
#ifdef CONFIG_COMPAT
sys32_sigaltstack_glue:
la %r4,SP_PTREGS(%r15) # load pt_regs as parameter
jg sys32_sigaltstack_wrapper # branch to sys_sigreturn
@ -1009,7 +1009,7 @@ sys_call_table:
#include "syscalls.S"
#undef SYSCALL
#ifdef CONFIG_S390_SUPPORT
#ifdef CONFIG_COMPAT
#define SYSCALL(esa,esame,emu) .long emu
.globl sys_call_table_emu


@ -30,7 +30,7 @@
#include <asm/thread_info.h>
#include <asm/page.h>
#ifdef CONFIG_ARCH_S390X
#ifdef CONFIG_64BIT
#define ARCH_OFFSET 4
#else
#define ARCH_OFFSET 0
@ -539,7 +539,7 @@ ipl_devno:
.word 0
.endm
#ifdef CONFIG_ARCH_S390X
#ifdef CONFIG_64BIT
#include "head64.S"
#else
#include "head31.S"


@ -85,7 +85,7 @@ kexec_halt_all_cpus(void *kernel_image)
pfault_fini();
#endif
if (atomic_compare_and_swap(-1, smp_processor_id(), &cpuid))
if (atomic_cmpxchg(&cpuid, -1, smp_processor_id()) != -1)
signal_processor(smp_processor_id(), sigp_stop);
/* Wait for all other cpus to enter stopped state */

Some files were not shown because too many files have changed in this diff.