/*
 * Ultra Wide Band
 * Neighborhood Management Daemon
 *
 * Copyright (C) 2005-2006 Intel Corporation
 * Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License version
 * 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
 * 02110-1301, USA.
 *
 *
 * This daemon takes care of maintaining information that describes the
 * UWB neighborhood that the radios in this machine can see. It also
 * keeps tabs on which devices are visible, makes sure each HC sits
 * on a different channel to avoid interfering, etc.
 *
 * Different drivers (radio controller, device, any API in general)
 * communicate with this daemon through an event queue. The daemon wakes
 * up, takes the list of events and handles them one by one; the handling
 * function is looked up in a table based on the event's type and
 * subtype. Events are freed only if the handling function says so.
 *
 *   . The lock protecting the event list has to be a spinlock, taken
 *     with IRQSAVE, because it might be used from interrupt
 *     context (ie: when events arrive and the notification drops
 *     down from the ISR).
 *
 *   . UWB radio controller drivers queue events to the daemon using
 *     uwbd_event_queue(). They take the event, massage it into the
 *     shape UWBD expects and pass it in a buffer allocated with
 *     uwb_event_alloc().
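 *
 *     A rough illustration of that driver-side path (not compiled in;
 *     'rc', 'rceb' and the allocation of 'evt' with uwb_event_alloc()
 *     are assumed here, and only the fields this file itself uses are
 *     filled in):
 *
 *       evt->rc = __uwb_rc_get(rc);     // reference put back by uwbd_event_handle()
 *       evt->type = UWB_EVT_TYPE_NOTIF;
 *       evt->notif.rceb = rceb;         // the received RC event block
 *       uwbd_event_queue(evt);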
 *
 * EVENTS
 *
 * Events have a type, a subtype, a length, some other stuff and the
 * data blob, which depends on the event. The header is 'struct
 * uwb_event'; for payloads, see 'struct uwbd_evt_*'.
 *
 * EVENT HANDLER TABLES
 *
 * To find a handling function for an event, the type is used to index
 * a subtype-table in the type-table. The subtype-table is indexed
 * with the subtype to get the function that handles the event. Start
 * with the main type-table 'uwbd_urc_evt_type_handlers'.
 *
 * DEVICES
 *
 * Devices are created when a bunch of beacons have been received and
 * it is established that the device has stable radio presence. CREATED
 * only, not configured. Devices are ONLY configured when an
 * Application-Specific IE Probe is received, in which the device
 * declares which Protocol ID it groks. Then the device is CONFIGURED
 * (and the driver->probe() stuff of the device model is invoked).
 *
 * Devices are considered disconnected when a certain number of
 * beacons have not been received within a given amount of time.
 *
 * Handler functions are normally named uwbd_evt_handle_*().
 */
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/freezer.h>

#include "uwb-internal.h"

/*
 * UWBD Event handler function signature
 *
 * Return !0 if the event should not be freed (i.e. the handler
 * takes/took care of it); return 0 and the daemon code will free
 * the event.
 *
 * @evt->rc is already referenced and guaranteed to exist. See
 * uwbd_event_handle().
 */
typedef int (*uwbd_evt_handler_f)(struct uwb_event *);
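
/*
 * Purely illustrative, not used anywhere in this file: the sketch
 * below shows what a handler matching uwbd_evt_handler_f looks like.
 * The name is made up; returning 0 tells the daemon it may free the
 * event, !0 means the handler kept ownership.
 *
 *   static int uwbd_evt_handle_example(struct uwb_event *evt)
 *   {
 *           struct uwb_rc *rc = evt->rc;   // already referenced by the caller
 *
 *           dev_dbg(&rc->uwb_dev.dev, "example event handled\n");
 *           return 0;                      // let the daemon free 'evt'
 *   }
 */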

/**
 * Properties of a UWBD event
 *
 * @handler: the function that will handle this event
 * @name: text name of event
 */
struct uwbd_event {
        uwbd_evt_handler_f handler;
        const char *name;
};

/* Table of handlers for and properties of the UWBD Radio Control Events */
static struct uwbd_event uwbd_urc_events[] = {
        [UWB_RC_EVT_IE_RCV] = {
                .handler = uwbd_evt_handle_rc_ie_rcv,
                .name = "IE_RECEIVED"
        },
        [UWB_RC_EVT_BEACON] = {
                .handler = uwbd_evt_handle_rc_beacon,
                .name = "BEACON_RECEIVED"
        },
        [UWB_RC_EVT_BEACON_SIZE] = {
                .handler = uwbd_evt_handle_rc_beacon_size,
                .name = "BEACON_SIZE_CHANGE"
        },
        [UWB_RC_EVT_BPOIE_CHANGE] = {
                .handler = uwbd_evt_handle_rc_bpoie_change,
                .name = "BPOIE_CHANGE"
        },
        [UWB_RC_EVT_BP_SLOT_CHANGE] = {
                .handler = uwbd_evt_handle_rc_bp_slot_change,
                .name = "BP_SLOT_CHANGE"
        },
        [UWB_RC_EVT_DRP_AVAIL] = {
                .handler = uwbd_evt_handle_rc_drp_avail,
                .name = "DRP_AVAILABILITY_CHANGE"
        },
        [UWB_RC_EVT_DRP] = {
                .handler = uwbd_evt_handle_rc_drp,
                .name = "DRP"
        },
        [UWB_RC_EVT_DEV_ADDR_CONFLICT] = {
                .handler = uwbd_evt_handle_rc_dev_addr_conflict,
                .name = "DEV_ADDR_CONFLICT",
        },
};

struct uwbd_evt_type_handler {
        const char *name;
        struct uwbd_event *uwbd_events;
        size_t size;
};

/* Table of handlers for each UWBD Event type. */
static struct uwbd_evt_type_handler uwbd_urc_evt_type_handlers[] = {
        [UWB_RC_CET_GENERAL] = {
                .name = "URC",
                .uwbd_events = uwbd_urc_events,
                .size = ARRAY_SIZE(uwbd_urc_events),
        },
};

static const struct uwbd_event uwbd_message_handlers[] = {
        [UWB_EVT_MSG_RESET] = {
                .handler = uwbd_msg_handle_reset,
                .name = "reset",
        },
};

/*
 * Handle an URC event passed to the UWB Daemon
 *
 * @evt: the event to handle
 * @returns: 0 if the event can be kfreed, !0 otherwise (somebody
 *           else took ownership) [coincidentally, returning a <0
 *           errno code will free it :)].
 *
 * Looks up the two indirection tables (one for the type, one for the
 * subtype) to decide which function handles it and then calls the
 * handler.
 *
 * The event structure passed to the event handler has the radio
 * controller in @evt->rc referenced. The reference will be dropped
 * once the handler returns, so if it needs it for longer (async),
 * it'll need to take another one.
 */
static
int uwbd_event_handle_urc(struct uwb_event *evt)
{
        int result = -EINVAL;
        struct uwbd_evt_type_handler *type_table;
        uwbd_evt_handler_f handler;
        u8 type, context;
        u16 event;

        type = evt->notif.rceb->bEventType;
        event = le16_to_cpu(evt->notif.rceb->wEvent);
        context = evt->notif.rceb->bEventContext;

        if (type >= ARRAY_SIZE(uwbd_urc_evt_type_handlers))
                goto out;
        type_table = &uwbd_urc_evt_type_handlers[type];
        if (type_table->uwbd_events == NULL)
                goto out;
        if (event >= type_table->size)
                goto out;
        handler = type_table->uwbd_events[event].handler;
        if (handler == NULL)
                goto out;

        result = (*handler)(evt);
out:
        if (result < 0)
                dev_err(&evt->rc->uwb_dev.dev,
                        "UWBD: event 0x%02x/%04x/%02x, handling failed: %d\n",
                        type, event, context, result);
        return result;
}

static void uwbd_event_handle_message(struct uwb_event *evt)
{
        struct uwb_rc *rc;
        int result;

        rc = evt->rc;

        if (evt->message < 0 || evt->message >= ARRAY_SIZE(uwbd_message_handlers)) {
                dev_err(&rc->uwb_dev.dev, "UWBD: invalid message type %d\n", evt->message);
                return;
        }

        result = uwbd_message_handlers[evt->message].handler(evt);
        if (result < 0)
                dev_err(&rc->uwb_dev.dev, "UWBD: '%s' message failed: %d\n",
                        uwbd_message_handlers[evt->message].name, result);
}

static void uwbd_event_handle(struct uwb_event *evt)
{
        struct uwb_rc *rc;
        int should_keep;

        rc = evt->rc;

        if (rc->ready) {
                switch (evt->type) {
                case UWB_EVT_TYPE_NOTIF:
                        should_keep = uwbd_event_handle_urc(evt);
                        if (should_keep <= 0)
                                kfree(evt->notif.rceb);
                        break;
                case UWB_EVT_TYPE_MSG:
                        uwbd_event_handle_message(evt);
                        break;
                default:
                        dev_err(&rc->uwb_dev.dev, "UWBD: invalid event type %d\n", evt->type);
                        break;
                }
        }

        __uwb_rc_put(rc); /* for the __uwb_rc_get() in uwb_rc_notif_cb() */
}

/**
 * UWB Daemon
 *
 * Listens to all UWB notifications and takes care to track the state
 * of the UWB neighborhood for the kernel. On each run we take the
 * spinlock only long enough to pop the next event off the list, then
 * release it and handle the event with the lock dropped; hold it as
 * little as possible. Not a conflict: once an event is off the list
 * we own it.
 *
 * FIXME: should change so we don't have a 1HZ timer all the time, but
 * only if there are devices.
 */
static int uwbd(void *param)
{
        struct uwb_rc *rc = param;
        unsigned long flags;
        struct uwb_event *evt;
        int should_stop = 0;

        while (1) {
                wait_event_interruptible_timeout(
                        rc->uwbd.wq,
                        !list_empty(&rc->uwbd.event_list)
                          || (should_stop = kthread_should_stop()),
                        HZ);
                if (should_stop)
                        break;

                spin_lock_irqsave(&rc->uwbd.event_list_lock, flags);
                if (!list_empty(&rc->uwbd.event_list)) {
                        evt = list_first_entry(&rc->uwbd.event_list, struct uwb_event, list_node);
                        list_del(&evt->list_node);
                } else
                        evt = NULL;
                spin_unlock_irqrestore(&rc->uwbd.event_list_lock, flags);

                if (evt) {
                        uwbd_event_handle(evt);
                        kfree(evt);
                }

                uwb_beca_purge(rc);     /* Purge devices that left */
        }
        return 0;
}

/** Start the UWB daemon */
void uwbd_start(struct uwb_rc *rc)
{
        struct task_struct *task = kthread_run(uwbd, rc, "uwbd");
        if (IS_ERR(task)) {
                rc->uwbd.task = NULL;
                printk(KERN_ERR "UWB: Cannot start management daemon; "
                       "UWB won't work\n");
        } else {
                rc->uwbd.task = task;
                rc->uwbd.pid = rc->uwbd.task->pid;
        }
}

/* Stop the UWB daemon and free any unprocessed events */
void uwbd_stop(struct uwb_rc *rc)
{
        if (rc->uwbd.task)
                kthread_stop(rc->uwbd.task);
        uwbd_flush(rc);
}

/*
 * Queue an event for the management daemon
 *
 * When some lower layer receives an event, it uses this function to
 * push it forward to the UWB daemon.
 *
 * Once you pass the event, you don't own it any more; the daemon
 * does. It will free it when done, so make sure you
 * uwb_event_alloc()ed it or bad things will happen.
 *
 * If the daemon is not running, we just free the event.
 */
void uwbd_event_queue(struct uwb_event *evt)
{
        struct uwb_rc *rc = evt->rc;
        unsigned long flags;

        spin_lock_irqsave(&rc->uwbd.event_list_lock, flags);
        if (rc->uwbd.pid != 0) {
                list_add(&evt->list_node, &rc->uwbd.event_list);
                wake_up_all(&rc->uwbd.wq);
        } else {
                __uwb_rc_put(evt->rc);
                if (evt->type == UWB_EVT_TYPE_NOTIF)
                        kfree(evt->notif.rceb);
                kfree(evt);
        }
        spin_unlock_irqrestore(&rc->uwbd.event_list_lock, flags);
        return;
}

void uwbd_flush(struct uwb_rc *rc)
{
        struct uwb_event *evt, *nxt;

        spin_lock_irq(&rc->uwbd.event_list_lock);
        list_for_each_entry_safe(evt, nxt, &rc->uwbd.event_list, list_node) {
                if (evt->rc == rc) {
                        __uwb_rc_put(rc);
                        list_del(&evt->list_node);
                        if (evt->type == UWB_EVT_TYPE_NOTIF)
                                kfree(evt->notif.rceb);
                        kfree(evt);
                }
        }
        spin_unlock_irq(&rc->uwbd.event_list_lock);
}