/*
 * linux/ipc/sem.c
 * Copyright (C) 1992 Krishna Balasubramanian
 * Copyright (C) 1995 Eric Schenk, Bruno Haible
 *
 * /proc/sysvipc/sem support (c) 1999 Dragos Acostachioaie <dragos@iname.com>
 *
 * SMP-threaded, sysctl's added
 * (c) 1999 Manfred Spraul <manfred@colorfullife.com>
 * Enforced range limit on SEM_UNDO
 * (c) 2001 Red Hat Inc
 * Lockless wakeup
 * (c) 2003 Manfred Spraul <manfred@colorfullife.com>
 * (c) 2016 Davidlohr Bueso <dave@stgolabs.net>
 * Further wakeup optimizations, documentation
 * (c) 2010 Manfred Spraul <manfred@colorfullife.com>
 *
 * support for audit of ipc object properties and permission changes
 * Dustin Kirkland <dustin.kirkland@us.ibm.com>
 *
 * namespaces support
 * OpenVZ, SWsoft Inc.
 * Pavel Emelianov <xemul@openvz.org>
 *
 * Implementation notes: (May 2010)
 * This file implements System V semaphores.
 *
 * User space visible behavior:
 * - FIFO ordering for semop() operations (just FIFO, not starvation
 *   protection)
 * - multiple semaphore operations that alter the same semaphore in
 *   one semop() are handled.
 * - sem_ctime (time of last semctl()) is updated in the IPC_SET, SETVAL and
 *   SETALL calls.
 * - two Linux specific semctl() commands: SEM_STAT, SEM_INFO.
 * - undo adjustments at process exit are limited to 0..SEMVMX.
 * - namespaces are supported.
 * - SEMMSL, SEMMNS, SEMOPM and SEMMNI can be configured at runtime by writing
 *   to /proc/sys/kernel/sem.
 * - statistics about the usage are reported in /proc/sysvipc/sem.
 *
 * Internals:
 * - scalability:
 *   - all global variables are read-mostly.
 *   - semop() calls and semctl(RMID) are synchronized by RCU.
 *   - most operations do write operations (actually: spin_lock calls) to
 *     the per-semaphore array structure.
 *     Thus: Perfect SMP scaling between independent semaphore arrays.
 *     If multiple semaphores in one array are used, then cache line
 *     thrashing on the semaphore array spinlock will limit the scaling.
 * - semncnt and semzcnt are calculated on demand in count_semcnt()
 * - the task that performs a successful semop() scans the list of all
 *   sleeping tasks and completes any pending operations that can be fulfilled.
 *   Semaphores are actively given to waiting tasks (necessary for FIFO).
 *   (see update_queue())
 * - To improve the scalability, the actual wake-up calls are performed after
 *   dropping all locks. (see wake_up_sem_queue_prepare())
 * - All work is done by the waker, the woken up task does not have to do
 *   anything - not even acquiring a lock or dropping a refcount.
 * - A woken up task may not even touch the semaphore array anymore, it may
 *   have been destroyed already by a semctl(RMID).
 * - UNDO values are stored in an array (one per process and per
 *   semaphore array, lazily allocated). For backwards compatibility, multiple
 *   modes for the UNDO variables are supported (per process, per thread)
 *   (see copy_semundo, CLONE_SYSVSEM)
 * - There are two lists of the pending operations: a per-array list
 *   and per-semaphore list (stored in the array). This allows to achieve FIFO
 *   ordering without always scanning all pending operations.
 *   The worst-case behavior is nevertheless O(N^2) for N wakeups.
 */
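
/*
 * Editor's illustrative sketch (not part of the original file): a minimal
 * userspace sequence of single-sop semop() calls of the kind the fast
 * paths below are optimized for. Error handling is omitted and, as
 * semctl(2) requires, union semun is defined by the caller; the names are
 * only for illustration.
 *
 *	#include <sys/types.h>
 *	#include <sys/ipc.h>
 *	#include <sys/sem.h>
 *
 *	union semun { int val; };
 *
 *	int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
 *	union semun arg = { .val = 1 };
 *	semctl(semid, 0, SETVAL, arg);		- set the initial count
 *
 *	struct sembuf op = { .sem_num = 0, .sem_op = -1, .sem_flg = SEM_UNDO };
 *	semop(semid, &op, 1);			- single-sop "P", sleeps FIFO if 0
 *	op.sem_op = 1;
 *	semop(semid, &op, 1);			- single-sop "V", wakes a sleeper
 */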
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/init.h>
#include <linux/proc_fs.h>
#include <linux/time.h>
#include <linux/security.h>
#include <linux/syscalls.h>
#include <linux/audit.h>
#include <linux/capability.h>
#include <linux/seq_file.h>
#include <linux/rwsem.h>
#include <linux/nsproxy.h>
#include <linux/ipc_namespace.h>

#include <linux/uaccess.h>

#include "util.h"

/* One semaphore structure for each semaphore in the system. */
struct sem {
	int	semval;		/* current value */
	/*
	 * PID of the process that last modified the semaphore. For
	 * Linux, specifically these are:
	 *  - semop
	 *  - semctl, via SETVAL and SETALL.
	 *  - at task exit when performing undo adjustments (see exit_sem).
	 */
	int	sempid;
	spinlock_t	lock;	/* spinlock for fine-grained semtimedop */
	struct list_head pending_alter; /* pending single-sop operations */
					/* that alter the semaphore */
	struct list_head pending_const; /* pending single-sop operations */
					/* that do not alter the semaphore */
	time_t	sem_otime;	/* candidate for sem_otime */
} ____cacheline_aligned_in_smp;

/* One queue for each sleeping process in the system. */
struct sem_queue {
	struct list_head	list;	 /* queue of pending operations */
	struct task_struct	*sleeper; /* this process */
	struct sem_undo		*undo;	 /* undo structure */
	int			pid;	 /* process id of requesting process */
	int			status;	 /* completion status of operation */
	struct sembuf		*sops;	 /* array of pending operations */
	struct sembuf		*blocking; /* the operation that blocked */
	int			nsops;	 /* number of operations */
	bool			alter;	 /* does *sops alter the array? */
	bool			dupsop;	 /* sops on more than one sem_num */
};

/* Each task has a list of undo requests. They are executed automatically
 * when the process exits.
 */
struct sem_undo {
	struct list_head	list_proc;	/* per-process list: *
						 * all undos from one process
						 * rcu protected */
	struct rcu_head		rcu;		/* rcu struct for sem_undo */
	struct sem_undo_list	*ulp;		/* back ptr to sem_undo_list */
	struct list_head	list_id;	/* per semaphore array list:
						 * all undos for one array */
	int			semid;		/* semaphore set identifier */
	short			*semadj;	/* array of adjustments */
						/* one per semaphore */
};

/* sem_undo_list controls shared access to the list of sem_undo structures
 * that may be shared among all a CLONE_SYSVSEM task group.
 */
struct sem_undo_list {
	atomic_t		refcnt;
	spinlock_t		lock;
	struct list_head	list_proc;
};

#define sem_ids(ns)	((ns)->ids[IPC_SEM_IDS])

#define sem_checkid(sma, semid)	ipc_checkid(&sma->sem_perm, semid)

static int newary(struct ipc_namespace *, struct ipc_params *);
static void freeary(struct ipc_namespace *, struct kern_ipc_perm *);
#ifdef CONFIG_PROC_FS
static int sysvipc_sem_proc_show(struct seq_file *s, void *it);
#endif

#define SEMMSL_FAST	256 /* 512 bytes on stack */
#define SEMOPM_FAST	64  /* ~ 372 bytes on stack */

/*
 * Locking:
 * a) global sem_lock() for read/write
 *	sem_undo.id_next,
 *	sem_array.complex_count,
 *	sem_array.complex_mode
 *	sem_array.pending{_alter,_const},
 *	sem_array.sem_undo
 *
 * b) global or semaphore sem_lock() for read/write:
 *	sem_array.sem_base[i].pending_{const,alter}:
 *	sem_array.complex_mode (for read)
 *
 * c) special:
 *	sem_undo_list.list_proc:
 *	* undo_list->lock for write
 *	* rcu for read
 */

#define sc_semmsl	sem_ctls[0]
#define sc_semmns	sem_ctls[1]
#define sc_semopm	sem_ctls[2]
#define sc_semmni	sem_ctls[3]

void sem_init_ns(struct ipc_namespace *ns)
{
	ns->sc_semmsl = SEMMSL;
	ns->sc_semmns = SEMMNS;
	ns->sc_semopm = SEMOPM;
	ns->sc_semmni = SEMMNI;
	ns->used_sems = 0;
	ipc_init_ids(&ns->ids[IPC_SEM_IDS]);
}
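
/*
 * Editor's note (illustrative, not part of the original file): the four
 * limits initialised above are the ones exported, in this order, through
 * /proc/sys/kernel/sem, and they can be changed at runtime, e.g.
 *
 *	$ cat /proc/sys/kernel/sem
 *	<SEMMSL> <SEMMNS> <SEMOPM> <SEMMNI>
 *	# echo "250 32000 32 128" > /proc/sys/kernel/sem
 */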

#ifdef CONFIG_IPC_NS
void sem_exit_ns(struct ipc_namespace *ns)
{
	free_ipcs(ns, &sem_ids(ns), freeary);
	idr_destroy(&ns->ids[IPC_SEM_IDS].ipcs_idr);
}
#endif

void __init sem_init(void)
{
	sem_init_ns(&init_ipc_ns);
	ipc_init_proc_interface("sysvipc/sem",
				" key semid perms nsems uid gid cuid cgid otime ctime\n",
				IPC_SEM_IDS, sysvipc_sem_proc_show);
}

/**
 * unmerge_queues - unmerge queues, if possible.
 * @sma: semaphore array
 *
 * The function unmerges the wait queues if complex_count is 0.
 * It must be called prior to dropping the global semaphore array lock.
 */
static void unmerge_queues(struct sem_array *sma)
{
	struct sem_queue *q, *tq;

	/* complex operations still around? */
	if (sma->complex_count)
		return;
	/*
	 * We will switch back to simple mode.
	 * Move all pending operation back into the per-semaphore
	 * queues.
	 */
	list_for_each_entry_safe(q, tq, &sma->pending_alter, list) {
		struct sem *curr;
		curr = &sma->sem_base[q->sops[0].sem_num];

		list_add_tail(&q->list, &curr->pending_alter);
	}
	INIT_LIST_HEAD(&sma->pending_alter);
}

/**
 * merge_queues - merge single semop queues into global queue
 * @sma: semaphore array
 *
 * This function merges all per-semaphore queues into the global queue.
 * It is necessary to achieve FIFO ordering for the pending single-sop
 * operations when a multi-semop operation must sleep.
 * Only the alter operations must be moved, the const operations can stay.
 */
static void merge_queues(struct sem_array *sma)
{
	int i;

	for (i = 0; i < sma->sem_nsems; i++) {
		struct sem *sem = sma->sem_base + i;

		list_splice_init(&sem->pending_alter, &sma->pending_alter);
	}
}

static void sem_rcu_free(struct rcu_head *head)
{
	struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
	struct sem_array *sma = ipc_rcu_to_struct(p);

	security_sem_free(sma);
	ipc_rcu_free(head);
}

/*
 * Enter the mode suitable for non-simple operations:
 * Caller must own sem_perm.lock.
 */
static void complexmode_enter(struct sem_array *sma)
{
	int i;
	struct sem *sem;

	if (sma->complex_mode) {
		/* We are already in complex_mode. Nothing to do */
		return;
	}

	/* We need a full barrier after setting complex_mode:
	 * The write to complex_mode must be visible
	 * before we read the first sem->lock spinlock state.
	 */
	smp_store_mb(sma->complex_mode, true);

	for (i = 0; i < sma->sem_nsems; i++) {
		sem = sma->sem_base + i;
		spin_unlock_wait(&sem->lock);
	}
	/*
	 * spin_unlock_wait() is not a memory barrier, it is only a
	 * control barrier. The code must pair with spin_unlock(&sem->lock),
	 * thus just the control barrier is insufficient.
	 *
	 * smp_rmb() is sufficient, as writes cannot pass the control barrier.
	 */
	smp_rmb();
}

/*
 * Try to leave the mode that disallows simple operations:
 * Caller must own sem_perm.lock.
 */
static void complexmode_tryleave(struct sem_array *sma)
{
	if (sma->complex_count) {
		/* Complex ops are sleeping.
		 * We must stay in complex mode
		 */
		return;
	}
	/*
	 * Immediately after setting complex_mode to false,
	 * a simple op can start. Thus: all memory writes
	 * performed by the current operation must be visible
	 * before we set complex_mode to false.
	 */
	smp_store_release(&sma->complex_mode, false);
}

#define SEM_GLOBAL_LOCK	(-1)
/*
 * If the request contains only one semaphore operation, and there are
 * no complex transactions pending, lock only the semaphore involved.
 * Otherwise, lock the entire semaphore array, since we either have
 * multiple semaphores in our own semops, or we need to look at
 * semaphores from other pending complex operations.
 */
static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
			      int nsops)
{
	struct sem *sem;

	if (nsops != 1) {
		/* Complex operation - acquire a full lock */
		ipc_lock_object(&sma->sem_perm);

		/* Prevent parallel simple ops */
		complexmode_enter(sma);
		return SEM_GLOBAL_LOCK;
	}

	/*
	 * Only one semaphore affected - try to optimize locking.
	 * Optimized locking is possible if no complex operation
	 * is either enqueued or processed right now.
	 *
	 * Both facts are tracked by complex_mode.
	 */
	sem = sma->sem_base + sops->sem_num;

	/*
	 * Initial check for complex_mode. Just an optimization,
	 * no locking, no memory barrier.
	 */
	if (!sma->complex_mode) {
		/*
		 * It appears that no complex operation is around.
		 * Acquire the per-semaphore lock.
		 */
		spin_lock(&sem->lock);

		/*
		 * See 51d7d5205d33
		 * ("powerpc: Add smp_mb() to arch_spin_is_locked()"):
		 * A full barrier is required: the write of sem->lock
		 * must be visible before the read is executed
		 */
		smp_mb();
|
|
|
|
2016-10-11 20:54:50 +00:00
|
|
|
if (!smp_load_acquire(&sma->complex_mode)) {
|
|
|
|
/* fast path successful! */
|
|
|
|
return sops->sem_num;
|
2013-05-01 02:15:44 +00:00
|
|
|
}
|
2013-09-30 20:45:04 +00:00
|
|
|
spin_unlock(&sem->lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* slow path: acquire the full lock */
|
|
|
|
ipc_lock_object(&sma->sem_perm);
|
2013-05-01 02:15:44 +00:00
|
|
|
|
2013-09-30 20:45:04 +00:00
|
|
|
if (sma->complex_count == 0) {
|
|
|
|
/* False alarm:
|
|
|
|
* There is no complex operation, thus we can switch
|
|
|
|
* back to the fast path.
|
|
|
|
*/
|
|
|
|
spin_lock(&sem->lock);
|
|
|
|
ipc_unlock_object(&sma->sem_perm);
|
|
|
|
return sops->sem_num;
|
2013-05-01 02:15:44 +00:00
|
|
|
} else {
|
2013-09-30 20:45:04 +00:00
|
|
|
/* Not a false alarm, thus complete the sequence for a
|
|
|
|
* full lock.
|
2013-05-01 02:15:44 +00:00
|
|
|
*/
|
2016-10-11 20:54:50 +00:00
|
|
|
complexmode_enter(sma);
|
|
|
|
return SEM_GLOBAL_LOCK;
|
2013-05-01 02:15:44 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void sem_unlock(struct sem_array *sma, int locknum)
|
|
|
|
{
|
2016-10-11 20:54:50 +00:00
|
|
|
if (locknum == SEM_GLOBAL_LOCK) {
|
2013-07-08 23:01:24 +00:00
|
|
|
unmerge_queues(sma);
|
2016-10-11 20:54:50 +00:00
|
|
|
complexmode_tryleave(sma);
|
2013-07-08 23:01:11 +00:00
|
|
|
ipc_unlock_object(&sma->sem_perm);
|
2013-05-01 02:15:44 +00:00
|
|
|
} else {
|
|
|
|
struct sem *sem = sma->sem_base + locknum;
|
|
|
|
spin_unlock(&sem->lock);
|
|
|
|
}
|
|
|
|
}
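As a hedged illustration of how callers are expected to pair these helpers (the object lookup helpers appear just below, and the caller holds the RCU read lock, as the next comment notes):
	/* Illustrative caller pattern, not a function from this file. */
	rcu_read_lock();
	sma = sem_obtain_object_check(ns, semid);
	if (IS_ERR(sma)) {
		rcu_read_unlock();
		return PTR_ERR(sma);
	}
	locknum = sem_lock(sma, sops, nsops);	/* per-semaphore or global lock */
	/* ... perform the semaphore operations ... */
	sem_unlock(sma, locknum);
	rcu_read_unlock();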
|
|
|
|
|
2007-10-19 06:40:54 +00:00
|
|
|
/*
|
2013-09-11 21:26:24 +00:00
|
|
|
* sem_lock_(check_) routines are called in the paths where the rwsem
|
2007-10-19 06:40:54 +00:00
|
|
|
* is not held.
|
2013-05-04 17:47:57 +00:00
|
|
|
*
|
|
|
|
* The caller holds the RCU read lock.
|
2007-10-19 06:40:54 +00:00
|
|
|
*/
|
ipc,sem: do not hold ipc lock more than necessary
Instead of holding the ipc lock for permissions and security checks, among
others, only acquire it when necessary.
Some numbers....
1) With Rik's semop-multi.c microbenchmark we can see the following
results:
Baseline (3.9-rc1):
cpus 4, threads: 256, semaphores: 128, test duration: 30 secs
total operations: 151452270, ops/sec 5048409
+ 59.40% a.out [kernel.kallsyms] [k] _raw_spin_lock
+ 6.14% a.out [kernel.kallsyms] [k] sys_semtimedop
+ 3.84% a.out [kernel.kallsyms] [k] avc_has_perm_flags
+ 3.64% a.out [kernel.kallsyms] [k] __audit_syscall_exit
+ 2.06% a.out [kernel.kallsyms] [k] copy_user_enhanced_fast_string
+ 1.86% a.out [kernel.kallsyms] [k] ipc_lock
With this patchset:
cpus 4, threads: 256, semaphores: 128, test duration: 30 secs
total operations: 273156400, ops/sec 9105213
+ 18.54% a.out [kernel.kallsyms] [k] _raw_spin_lock
+ 11.72% a.out [kernel.kallsyms] [k] sys_semtimedop
+ 7.70% a.out [kernel.kallsyms] [k] ipc_has_perm.isra.21
+ 6.58% a.out [kernel.kallsyms] [k] avc_has_perm_flags
+ 6.54% a.out [kernel.kallsyms] [k] __audit_syscall_exit
+ 4.71% a.out [kernel.kallsyms] [k] ipc_obtain_object_check
2) While on an Oracle swingbench DSS (data mining) workload the
improvements are not as exciting as with Rik's benchmark, we can see
some positive numbers. For an 8 socket machine the following are the
percentages of %sys time incurred in the ipc lock:
Baseline (3.9-rc1):
100 swingbench users: 8,74%
400 swingbench users: 21,86%
800 swingbench users: 84,35%
With this patchset:
100 swingbench users: 8,11%
400 swingbench users: 19,93%
800 swingbench users: 77,69%
[riel@redhat.com: fix two locking bugs]
[sasha.levin@oracle.com: prevent releasing RCU read lock twice in semctl_main]
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Acked-by: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Emmanuel Benisty <benisty.e@gmail.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
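Not the semop-multi.c benchmark itself, just a minimal user-space loop in the same spirit (hammering semop() on a single set) to show the path these numbers measure:
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main(void)
{
	int semid = semget(IPC_PRIVATE, 128, IPC_CREAT | 0600);
	struct sembuf op = { .sem_num = 0, .sem_op = 1, .sem_flg = 0 };
	long i;

	if (semid < 0) {
		perror("semget");
		return 1;
	}
	for (i = 0; i < 1000000; i++) {
		op.sem_op = 1;
		semop(semid, &op, 1);	/* simple op: increment semaphore 0 */
		op.sem_op = -1;
		semop(semid, &op, 1);	/* simple op: decrement it again */
	}
	semctl(semid, 0, IPC_RMID);	/* remove the set */
	return 0;
}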
2013-05-01 02:15:29 +00:00
|
|
|
static inline struct sem_array *sem_obtain_object(struct ipc_namespace *ns, int id)
|
|
|
|
{
|
2015-06-30 21:58:42 +00:00
|
|
|
struct kern_ipc_perm *ipcp = ipc_obtain_object_idr(&sem_ids(ns), id);
|
2013-05-01 02:15:29 +00:00
|
|
|
|
|
|
|
if (IS_ERR(ipcp))
|
|
|
|
return ERR_CAST(ipcp);
|
|
|
|
|
|
|
|
return container_of(ipcp, struct sem_array, sem_perm);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline struct sem_array *sem_obtain_object_check(struct ipc_namespace *ns,
|
|
|
|
int id)
|
|
|
|
{
|
|
|
|
struct kern_ipc_perm *ipcp = ipc_obtain_object_check(&sem_ids(ns), id);
|
|
|
|
|
|
|
|
if (IS_ERR(ipcp))
|
|
|
|
return ERR_CAST(ipcp);
|
IPC: fix error check in all new xxx_lock() and xxx_exit_ns() functions
In the new implementation of the [sem|shm|msg]_lock[_check]() routines, we
use the return value of ipc_lock() in container_of() without any check.
But ipc_lock may return an errcode. The use of this errcode in
container_of() may alter this errcode, and we don't want this.
And in xxx_exit_ns, the pointer returned by idr_find is of type 'struct
kern_ipc_per'...
Today, the code will work as is because the member used in these
container_of() is the first member of its container (offset == 0), the
errcode isn't changed then. But in the general case, we can't count on
this assumption and this may lead later to a real bug if we don't correct
this.
Again, the proposed solution is simple and correct. But, as pointed by
Nadia, with this solution, the same check will be done several times (in
all sub-callers...), what is not very funny/optimal...
Signed-off-by: Pierre Peiffer <pierre.peiffer@bull.net>
Cc: Nadia Derbey <Nadia.Derbey@bull.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-06 09:36:23 +00:00
|
|
|
|
2007-10-19 06:40:51 +00:00
|
|
|
return container_of(ipcp, struct sem_array, sem_perm);
|
2007-10-19 06:40:51 +00:00
|
|
|
}
|
|
|
|
|
2008-04-29 08:00:46 +00:00
|
|
|
static inline void sem_lock_and_putref(struct sem_array *sma)
|
|
|
|
{
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_lock(sma, NULL, -1);
|
2016-08-02 21:03:07 +00:00
|
|
|
ipc_rcu_putref(sma, sem_rcu_free);
|
2008-04-29 08:00:46 +00:00
|
|
|
}
|
|
|
|
|
2007-10-19 06:40:48 +00:00
|
|
|
static inline void sem_rmid(struct ipc_namespace *ns, struct sem_array *s)
|
|
|
|
{
|
|
|
|
ipc_rmid(&sem_ids(ns), &s->sem_perm);
|
|
|
|
}
|
|
|
|
|
2007-10-19 06:40:53 +00:00
|
|
|
/**
|
|
|
|
* newary - Create a new semaphore set
|
|
|
|
* @ns: namespace
|
|
|
|
* @params: ptr to the structure that contains key, semflg and nsems
|
|
|
|
*
|
2013-09-11 21:26:24 +00:00
|
|
|
* Called with sem_ids.rwsem held (as a writer)
|
2007-10-19 06:40:53 +00:00
|
|
|
*/
|
2007-10-19 06:40:49 +00:00
|
|
|
static int newary(struct ipc_namespace *ns, struct ipc_params *params)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
int id;
|
|
|
|
int retval;
|
|
|
|
struct sem_array *sma;
|
|
|
|
int size;
|
2007-10-19 06:40:49 +00:00
|
|
|
key_t key = params->key;
|
|
|
|
int nsems = params->u.nsems;
|
|
|
|
int semflg = params->flg;
|
2009-12-16 00:47:32 +00:00
|
|
|
int i;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
if (!nsems)
|
|
|
|
return -EINVAL;
|
2006-10-02 09:18:22 +00:00
|
|
|
if (ns->used_sems + nsems > ns->sc_semmns)
|
2005-04-16 22:20:36 +00:00
|
|
|
return -ENOSPC;
|
|
|
|
|
2014-01-28 01:07:04 +00:00
|
|
|
size = sizeof(*sma) + nsems * sizeof(struct sem);
|
2005-04-16 22:20:36 +00:00
|
|
|
sma = ipc_rcu_alloc(size);
|
2014-01-28 01:07:06 +00:00
|
|
|
if (!sma)
|
2005-04-16 22:20:36 +00:00
|
|
|
return -ENOMEM;
|
2014-01-28 01:07:06 +00:00
|
|
|
|
2014-01-28 01:07:04 +00:00
|
|
|
memset(sma, 0, size);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
sma->sem_perm.mode = (semflg & S_IRWXUGO);
|
|
|
|
sma->sem_perm.key = key;
|
|
|
|
|
|
|
|
sma->sem_perm.security = NULL;
|
|
|
|
retval = security_sem_alloc(sma);
|
|
|
|
if (retval) {
|
ipc: fix race with LSMs
Currently, IPC mechanisms do security and auditing related checks under
RCU. However, since security modules can free the security structure,
for example, through selinux_[sem,msg_queue,shm]_free_security(), we can
race if the structure is freed before other tasks are done with it,
creating a use-after-free condition. Manfred illustrates this nicely,
for instance with shared mem and selinux:
-> do_shmat calls rcu_read_lock()
-> do_shmat calls shm_object_check().
Checks that the object is still valid - but doesn't acquire any locks.
Then it returns.
-> do_shmat calls security_shm_shmat (e.g. selinux_shm_shmat)
-> selinux_shm_shmat calls ipc_has_perm()
-> ipc_has_perm accesses ipc_perms->security
shm_close()
-> shm_close acquires rw_mutex & shm_lock
-> shm_close calls shm_destroy
-> shm_destroy calls security_shm_free (e.g. selinux_shm_free_security)
-> selinux_shm_free_security calls ipc_free_security(&shp->shm_perm)
-> ipc_free_security calls kfree(ipc_perms->security)
This patch delays the freeing of the security structures after all RCU
readers are done. Furthermore it aligns the security life cycle with
that of the rest of IPC - freeing them based on the reference counter.
For situations where we need not free security, the current behavior is
kept. Linus states:
"... the old behavior was suspect for another reason too: having the
security blob go away from under a user sounds like it could cause
various other problems anyway, so I think the old code was at least
_prone_ to bugs even if it didn't have catastrophic behavior."
I have tested this patch with IPC testcases from LTP on both my
quad-core laptop and on a 64 core NUMA server. In both cases selinux is
enabled, and tests pass for both voluntary and forced preemption models.
While the mentioned races are theoretical (at least no one has reported
them), I wanted to make sure that this new logic doesn't break anything
we weren't aware of.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Acked-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
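A hedged sketch of the deferred-free pattern this change introduces: the security blob is released from an RCU callback together with the rest of the array, so RCU readers cannot see it disappear underneath them (sem_rcu_free() elsewhere in this file is the real version):
static void sem_rcu_free_sketch(struct rcu_head *head)
{
	struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
	struct sem_array *sma = ipc_rcu_to_struct(p);

	security_sem_free(sma);		/* safe now: a grace period has elapsed */
	ipc_rcu_free(head);
}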
2013-09-24 00:04:45 +00:00
|
|
|
ipc_rcu_putref(sma, ipc_rcu_free);
|
2005-04-16 22:20:36 +00:00
|
|
|
return retval;
|
|
|
|
}
|
|
|
|
|
|
|
|
sma->sem_base = (struct sem *) &sma[1];
|
2009-12-16 00:47:32 +00:00
|
|
|
|
2013-05-01 02:15:44 +00:00
|
|
|
for (i = 0; i < nsems; i++) {
|
2013-07-08 23:01:23 +00:00
|
|
|
INIT_LIST_HEAD(&sma->sem_base[i].pending_alter);
|
|
|
|
INIT_LIST_HEAD(&sma->sem_base[i].pending_const);
|
2013-05-01 02:15:44 +00:00
|
|
|
spin_lock_init(&sma->sem_base[i].lock);
|
|
|
|
}
|
2009-12-16 00:47:32 +00:00
|
|
|
|
|
|
|
sma->complex_count = 0;
|
2016-10-11 20:54:50 +00:00
|
|
|
sma->complex_mode = true; /* dropped by sem_unlock below */
|
2013-07-08 23:01:23 +00:00
|
|
|
INIT_LIST_HEAD(&sma->pending_alter);
|
|
|
|
INIT_LIST_HEAD(&sma->pending_const);
|
2008-07-25 08:48:04 +00:00
|
|
|
INIT_LIST_HEAD(&sma->list_id);
|
2005-04-16 22:20:36 +00:00
|
|
|
sma->sem_nsems = nsems;
|
|
|
|
sma->sem_ctime = get_seconds();
|
2014-12-02 23:59:34 +00:00
|
|
|
|
|
|
|
id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
|
|
|
|
if (id < 0) {
|
|
|
|
ipc_rcu_putref(sma, sem_rcu_free);
|
|
|
|
return id;
|
|
|
|
}
|
|
|
|
ns->used_sems += nsems;
|
|
|
|
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_unlock(sma, -1);
|
ipc: move rcu_read_unlock() out of sem_unlock() and into callers
The IPC locking is a mess, and sem_unlock() unlocks not only the
semaphore spinlock, it also drops the rcu read lock. Unlike sem_lock(),
which just gets the spin-lock, and expects the caller to get the rcu
read lock.
This all makes things very hard to follow, and it's very confusing when
you take the rcu read lock in one function, and then release it in
another. And it has caused actual bugs: the sem_obtain_lock() function
ended up dropping the RCU read lock twice in one error path, because it
first did the sem_unlock(), and then did a rcu_read_unlock() to match
the rcu_read_lock() it had done.
This is just a totally mindless "remove rcu_read_unlock() from
sem_unlock() and add it immediately after each caller" (except for the
aforementioned bug where we did too many rcu_read_unlock(), and in
find_alloc_undo() where we just got the rcu_read_lock() to correct for
the fact that sem_unlock would immediately drop it again).
We can (and should) clean things up further, but this fixes the bug with
the minimal amount of subtlety.
Reviewed-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-03 22:04:40 +00:00
|
|
|
rcu_read_unlock();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2007-10-19 06:40:48 +00:00
|
|
|
return sma->sem_perm.id;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
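A hedged user-space follow-up: newary() leaves every semval at 0 (the whole allocation is memset above), so the creator normally initializes the counters before first use, for example:
#include <sys/ipc.h>
#include <sys/sem.h>

union semun {				/* caller-defined, as semctl(2) requires */
	int val;
	struct semid_ds *buf;
	unsigned short *array;
};

static int init_counting_sem(int semid, int initial)
{
	union semun arg = { .val = initial };

	return semctl(semid, 0, SETVAL, arg);	/* semval = initial */
}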
|
|
|
|
|
2007-10-19 06:40:49 +00:00
|
|
|
|
2007-10-19 06:40:53 +00:00
|
|
|
/*
|
2013-09-11 21:26:24 +00:00
|
|
|
* Called with sem_ids.rwsem and ipcp locked.
|
2007-10-19 06:40:53 +00:00
|
|
|
*/
|
2007-10-19 06:40:51 +00:00
|
|
|
static inline int sem_security(struct kern_ipc_perm *ipcp, int semflg)
|
2007-10-19 06:40:49 +00:00
|
|
|
{
|
2007-10-19 06:40:51 +00:00
|
|
|
struct sem_array *sma;
|
|
|
|
|
|
|
|
sma = container_of(ipcp, struct sem_array, sem_perm);
|
|
|
|
return security_sem_associate(sma, semflg);
|
2007-10-19 06:40:49 +00:00
|
|
|
}
|
|
|
|
|
2007-10-19 06:40:53 +00:00
|
|
|
/*
|
2013-09-11 21:26:24 +00:00
|
|
|
* Called with sem_ids.rwsem and ipcp locked.
|
2007-10-19 06:40:53 +00:00
|
|
|
*/
|
2007-10-19 06:40:51 +00:00
|
|
|
static inline int sem_more_checks(struct kern_ipc_perm *ipcp,
|
|
|
|
struct ipc_params *params)
|
2007-10-19 06:40:49 +00:00
|
|
|
{
|
2007-10-19 06:40:51 +00:00
|
|
|
struct sem_array *sma;
|
|
|
|
|
|
|
|
sma = container_of(ipcp, struct sem_array, sem_perm);
|
|
|
|
if (params->u.nsems > sma->sem_nsems)
|
2007-10-19 06:40:49 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2009-01-14 13:14:27 +00:00
|
|
|
SYSCALL_DEFINE3(semget, key_t, key, int, nsems, int, semflg)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2006-10-02 09:18:22 +00:00
|
|
|
struct ipc_namespace *ns;
|
2014-06-06 21:37:36 +00:00
|
|
|
static const struct ipc_ops sem_ops = {
|
|
|
|
.getnew = newary,
|
|
|
|
.associate = sem_security,
|
|
|
|
.more_checks = sem_more_checks,
|
|
|
|
};
|
2007-10-19 06:40:49 +00:00
|
|
|
struct ipc_params sem_params;
|
2006-10-02 09:18:22 +00:00
|
|
|
|
|
|
|
ns = current->nsproxy->ipc_ns;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2006-10-02 09:18:22 +00:00
|
|
|
if (nsems < 0 || nsems > ns->sc_semmsl)
|
2005-04-16 22:20:36 +00:00
|
|
|
return -EINVAL;
|
2007-10-19 06:40:48 +00:00
|
|
|
|
2007-10-19 06:40:49 +00:00
|
|
|
sem_params.key = key;
|
|
|
|
sem_params.flg = semflg;
|
|
|
|
sem_params.u.nsems = nsems;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2007-10-19 06:40:49 +00:00
|
|
|
return ipcget(ns, &sem_ids(ns), &sem_ops, &sem_params);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
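A hedged user-space illustration of the two paths ipcget() dispatches to above: creating a new set (newary) versus attaching to an existing one (sem_security/sem_more_checks):
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main(void)
{
	key_t key = ftok("/tmp", 'S');		/* any agreed-upon key */
	int semid;

	/* First caller creates the set; newary() runs with nsems = 4. */
	semid = semget(key, 4, IPC_CREAT | IPC_EXCL | 0600);
	if (semid < 0) {
		/* Set already exists: attach.  Asking for more than the
		 * existing sem_nsems makes sem_more_checks() return -EINVAL. */
		semid = semget(key, 4, 0600);
	}
	if (semid < 0) {
		perror("semget");
		return 1;
	}
	printf("semid = %d\n", semid);
	return 0;
}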
|
|
|
|
|
ipc/sem.c: avoid overflow of semop undo (semadj) value
When trying to understand semop code, I found a small mistake in the check
for semadj (undo) value overflow. The new undo value is not stored
immediately and next potential checks are done against the old value.
The failing scenario is not very practical. One semop call has to do more
operations on the same semaphore. Also semval and semadj must have
different values, so there has to be some operations without SEM_UNDO
flag. For example:
struct sembuf depositor_op[1];
struct sembuf collector_op[2];
depositor_op[0].sem_num = 0;
depositor_op[0].sem_op = 20000;
depositor_op[0].sem_flg = 0;
collector_op[0].sem_num = 0;
collector_op[0].sem_op = -10000;
collector_op[0].sem_flg = SEM_UNDO;
collector_op[1].sem_num = 0;
collector_op[1].sem_op = -10000;
collector_op[1].sem_flg = SEM_UNDO;
if (semop(semid, depositor_op, 1) == -1)
{ perror("Failed to do 1st deposit"); return 1; }
if (semop(semid, collector_op, 2) == -1)
{ perror("Failed to do 1st collect"); return 1; }
if (semop(semid, depositor_op, 1) == -1)
{ perror("Failed to do 2nd deposit"); return 1; }
if (semop(semid, collector_op, 2) == -1)
{ perror("Failed to do 2nd collect"); return 1; }
return 0;
It passes without error now but the semadj value has overflowed in the 2nd
collector operation.
[akpm@linux-foundation.org: restore lessened scope of local `undo']
[davidlohr@hp.com: correct header comment for perform_atomic_semop]
Signed-off-by: Petr Mladek <pmladek@suse.cz>
Acked-by: Davidlohr Bueso <davidlohr@hp.com>
Acked-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
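In short, the fix (visible in the code further below) computes the new semadj value first, range-checks it, and only then stores it, so later sops in the same call test against the updated value:
	/* Sketch of the corrected per-sop undo handling. */
	if (sop->sem_flg & SEM_UNDO) {
		int undo = un->semadj[sop->sem_num] - sem_op;

		if (undo < (-SEMAEM - 1) || undo > SEMAEM)
			goto out_of_range;	/* exceeding the undo range is an error */
		un->semadj[sop->sem_num] = undo;
	}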
2014-01-28 01:07:00 +00:00
|
|
|
/**
|
ipc/sem: optimize perform_atomic_semop()
This is the main workhorse that deals with semop user calls such that
the waitforzero or semval update operations, on the set, can complete or
not as the sma currently stands. Currently, the set is iterated twice
(setting semval, then backwards for the sempid value). Slowpaths, and
particularly SEM_UNDO calls, must undo any altered sem when it is
detected that the caller must block or has errored-out.
With larger sets, there can occur situations where this involves a lot
of cycles and can obviously be a suboptimal use of cached resources in
shared memory. Ie, discarding CPU caches that are also calling semop
and have the sembuf cached (and can complete), while the current lock
holder doing the semop will block, error, or does a waitforzero
operation.
This patch proposes still iterating the set twice, but the first scan is
read-only, and we perform the actual updates afterward, once we know
that the call will succeed. In order to not suffer from the overhead of
dealing with sops that act on the same sem_num, such (rare) cases use
perform_atomic_semop_slow(), which is exactly what we have now.
Duplicates are detected before grabbing sem_lock, using a simple
32/64-bit hash array variable based on the sem_num we are working on.
In addition, add some comments about when we expect the caller to block.
[akpm@linux-foundation.org: coding-style fixes]
[colin.king@canonical.com: ensure we left shift a ULL rather than a 32 bit integer]
Link: http://lkml.kernel.org/r/20161028181129.7311-1-colin.king@canonical.com
Link: http://lkml.kernel.org/r/20160921194603.GB21438@linux-80c1.suse
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-12-14 23:06:37 +00:00
|
|
|
* perform_atomic_semop[_slow] - Attempt to perform semaphore
|
|
|
|
* operations on a given array.
|
2013-07-08 23:01:26 +00:00
|
|
|
* @sma: semaphore array
|
2014-06-06 21:37:49 +00:00
|
|
|
* @q: struct sem_queue that describes the operation
|
2013-07-08 23:01:26 +00:00
|
|
|
*
|
2016-12-14 23:06:37 +00:00
|
|
|
* Caller blocking are as follows, based the value
|
|
|
|
* indicated by the semaphore operation (sem_op):
|
|
|
|
*
|
|
|
|
* (1) >0 never blocks.
|
|
|
|
* (2) 0 (wait-for-zero operation): semval is non-zero.
|
|
|
|
* (3) <0 attempting to decrement semval to a value smaller than zero.
|
|
|
|
*
|
2013-07-08 23:01:26 +00:00
|
|
|
* Returns 0 if the operation was possible.
|
|
|
|
* Returns 1 if the operation is impossible, the caller must sleep.
|
2016-12-14 23:06:37 +00:00
|
|
|
* Returns <0 for error codes.
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
2016-12-14 23:06:37 +00:00
|
|
|
static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2014-06-06 21:37:49 +00:00
|
|
|
int result, sem_op, nsops, pid;
|
2005-04-16 22:20:36 +00:00
|
|
|
struct sembuf *sop;
|
2014-01-28 01:07:04 +00:00
|
|
|
struct sem *curr;
|
2014-06-06 21:37:49 +00:00
|
|
|
struct sembuf *sops;
|
|
|
|
struct sem_undo *un;
|
|
|
|
|
|
|
|
sops = q->sops;
|
|
|
|
nsops = q->nsops;
|
|
|
|
un = q->undo;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
for (sop = sops; sop < sops + nsops; sop++) {
|
|
|
|
curr = sma->sem_base + sop->sem_num;
|
|
|
|
sem_op = sop->sem_op;
|
|
|
|
result = curr->semval;
|
2014-01-28 01:07:00 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
if (!sem_op && result)
|
|
|
|
goto would_block;
|
|
|
|
|
|
|
|
result += sem_op;
|
|
|
|
if (result < 0)
|
|
|
|
goto would_block;
|
|
|
|
if (result > SEMVMX)
|
|
|
|
goto out_of_range;
|
2014-01-28 01:07:00 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
if (sop->sem_flg & SEM_UNDO) {
|
|
|
|
int undo = un->semadj[sop->sem_num] - sem_op;
|
2014-01-28 01:07:00 +00:00
|
|
|
/* Exceeding the undo range is an error. */
|
2005-04-16 22:20:36 +00:00
|
|
|
if (undo < (-SEMAEM - 1) || undo > SEMAEM)
|
|
|
|
goto out_of_range;
|
2014-01-28 01:07:00 +00:00
|
|
|
un->semadj[sop->sem_num] = undo;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
2014-01-28 01:07:00 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
curr->semval = result;
|
|
|
|
}
|
|
|
|
|
|
|
|
sop--;
|
2014-06-06 21:37:49 +00:00
|
|
|
pid = q->pid;
|
2005-04-16 22:20:36 +00:00
|
|
|
while (sop >= sops) {
|
|
|
|
sma->sem_base[sop->sem_num].sempid = pid;
|
|
|
|
sop--;
|
|
|
|
}
|
2014-01-28 01:07:00 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
out_of_range:
|
|
|
|
result = -ERANGE;
|
|
|
|
goto undo;
|
|
|
|
|
|
|
|
would_block:
|
2014-06-06 21:37:49 +00:00
|
|
|
q->blocking = sop;
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
if (sop->sem_flg & IPC_NOWAIT)
|
|
|
|
result = -EAGAIN;
|
|
|
|
else
|
|
|
|
result = 1;
|
|
|
|
|
|
|
|
undo:
|
|
|
|
sop--;
|
|
|
|
while (sop >= sops) {
|
2014-01-28 01:07:00 +00:00
|
|
|
sem_op = sop->sem_op;
|
|
|
|
sma->sem_base[sop->sem_num].semval -= sem_op;
|
|
|
|
if (sop->sem_flg & SEM_UNDO)
|
|
|
|
un->semadj[sop->sem_num] += sem_op;
|
2005-04-16 22:20:36 +00:00
|
|
|
sop--;
|
|
|
|
}
|
|
|
|
|
|
|
|
return result;
|
|
|
|
}
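A hedged user-space illustration of the blocking rules listed in the function comment above: a wait-for-zero op (sem_op == 0) cannot proceed while semval is non-zero, and IPC_NOWAIT turns the sleep into -EAGAIN:
#include <errno.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main(void)
{
	int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
	struct sembuf up    = { .sem_num = 0, .sem_op = 1, .sem_flg = 0 };
	struct sembuf zwait = { .sem_num = 0, .sem_op = 0, .sem_flg = IPC_NOWAIT };

	if (semid < 0)
		return 1;
	semop(semid, &up, 1);				/* semval is now 1 */
	if (semop(semid, &zwait, 1) < 0 && errno == EAGAIN)
		printf("wait-for-zero would block\n");	/* expected outcome */
	semctl(semid, 0, IPC_RMID);
	return 0;
}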
|
|
|
|
|
2016-12-14 23:06:37 +00:00
|
|
|
static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
|
|
|
|
{
|
|
|
|
int result, sem_op, nsops;
|
|
|
|
struct sembuf *sop;
|
|
|
|
struct sem *curr;
|
|
|
|
struct sembuf *sops;
|
|
|
|
struct sem_undo *un;
|
|
|
|
|
|
|
|
sops = q->sops;
|
|
|
|
nsops = q->nsops;
|
|
|
|
un = q->undo;
|
|
|
|
|
|
|
|
if (unlikely(q->dupsop))
|
|
|
|
return perform_atomic_semop_slow(sma, q);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We scan the semaphore set twice, first to ensure that the entire
|
|
|
|
* operation can succeed, therefore avoiding any pointless writes
|
|
|
|
* to shared memory and having to undo such changes in order to block
|
|
|
|
* until the operations can go through.
|
|
|
|
*/
|
|
|
|
for (sop = sops; sop < sops + nsops; sop++) {
|
|
|
|
curr = sma->sem_base + sop->sem_num;
|
|
|
|
sem_op = sop->sem_op;
|
|
|
|
result = curr->semval;
|
|
|
|
|
|
|
|
if (!sem_op && result)
|
|
|
|
goto would_block; /* wait-for-zero */
|
|
|
|
|
|
|
|
result += sem_op;
|
|
|
|
if (result < 0)
|
|
|
|
goto would_block;
|
|
|
|
|
|
|
|
if (result > SEMVMX)
|
|
|
|
return -ERANGE;
|
|
|
|
|
|
|
|
if (sop->sem_flg & SEM_UNDO) {
|
|
|
|
int undo = un->semadj[sop->sem_num] - sem_op;
|
|
|
|
|
|
|
|
/* Exceeding the undo range is an error. */
|
|
|
|
if (undo < (-SEMAEM - 1) || undo > SEMAEM)
|
|
|
|
return -ERANGE;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
for (sop = sops; sop < sops + nsops; sop++) {
|
|
|
|
curr = sma->sem_base + sop->sem_num;
|
|
|
|
sem_op = sop->sem_op;
|
|
|
|
result = curr->semval;
|
|
|
|
|
|
|
|
if (sop->sem_flg & SEM_UNDO) {
|
|
|
|
int undo = un->semadj[sop->sem_num] - sem_op;
|
|
|
|
|
|
|
|
un->semadj[sop->sem_num] = undo;
|
|
|
|
}
|
|
|
|
curr->semval += sem_op;
|
|
|
|
curr->sempid = q->pid;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
would_block:
|
|
|
|
q->blocking = sop;
|
|
|
|
return sop->sem_flg & IPC_NOWAIT ? -EAGAIN : 1;
|
|
|
|
}
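The q->dupsop flag tested at the top of perform_atomic_semop() is computed before sem_lock is taken; a hedged sketch of such duplicate detection (the real check lives in the semtimedop() path and may differ in detail):
	/* Each sem_num sets one bit in a word-sized mask (modulo the word
	 * size); a bit seen twice means the call may touch the same
	 * semaphore more than once, so the slow path must be used.
	 */
	unsigned long mask = 0;
	bool dupsop = false;
	int i;

	for (i = 0; i < nsops; i++) {
		unsigned long bit = 1UL << (sops[i].sem_num % BITS_PER_LONG);

		if (mask & bit) {
			dupsop = true;
			break;
		}
		mask |= bit;
	}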
|
|
|
|
|
2016-12-14 23:06:34 +00:00
|
|
|
static inline void wake_up_sem_queue_prepare(struct sem_queue *q, int error,
|
|
|
|
struct wake_q_head *wake_q)
|
2010-05-26 21:43:41 +00:00
|
|
|
{
|
2016-12-14 23:06:34 +00:00
|
|
|
wake_q_add(wake_q, q->sleeper);
|
|
|
|
/*
|
|
|
|
* Rely on the above implicit barrier, such that we can
|
|
|
|
* ensure that we hold reference to the task before setting
|
|
|
|
* q->status. Otherwise we could race with do_exit if the
|
|
|
|
* task is awoken by an external event before calling
|
|
|
|
* wake_up_process().
|
|
|
|
*/
|
|
|
|
WRITE_ONCE(q->status, error);
|
2009-12-16 00:47:30 +00:00
|
|
|
}
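/*
 * Hypothetical sketch, not a verbatim excerpt from sem.c: the intended
 * split between preparing a wakeup (while sem_lock is held) and issuing
 * it (after unlocking) via the generic wake_q API.
 */
static void example_complete_and_wake(struct sem_queue *q, int error)
{
	DEFINE_WAKE_Q(wake_q);		/* on-stack lockless wake queue */

	/* ...with sem_lock held: the operation's outcome is decided... */
	wake_up_sem_queue_prepare(q, error, &wake_q);
	/* ...sem_unlock... */

	wake_up_q(&wake_q);		/* the deferred wake_up_process() calls */
}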
|
|
|
|
|
2009-12-16 00:47:32 +00:00
|
|
|
static void unlink_queue(struct sem_array *sma, struct sem_queue *q)
|
|
|
|
{
|
|
|
|
list_del(&q->list);
|
2013-05-01 02:15:39 +00:00
|
|
|
if (q->nsops > 1)
|
2009-12-16 00:47:32 +00:00
|
|
|
sma->complex_count--;
|
|
|
|
}
|
|
|
|
|
2010-05-26 21:43:40 +00:00
|
|
|
/** check_restart(sma, q)
|
|
|
|
* @sma: semaphore array
|
|
|
|
* @q: the operation that just completed
|
|
|
|
*
|
|
|
|
* update_queue is O(N^2) when it restarts scanning the whole queue of
|
|
|
|
* waiting operations. Therefore this function checks if the restart is
|
|
|
|
* really necessary. It is called after a previously waiting operation
|
2013-07-08 23:01:23 +00:00
|
|
|
* modified the array.
|
|
|
|
* Note that wait-for-zero operations are handled without restart.
|
2010-05-26 21:43:40 +00:00
|
|
|
*/
|
2016-12-14 23:06:40 +00:00
|
|
|
static inline int check_restart(struct sem_array *sma, struct sem_queue *q)
|
2010-05-26 21:43:40 +00:00
|
|
|
{
|
2013-07-08 23:01:23 +00:00
|
|
|
/* pending complex alter operations are too difficult to analyse */
|
|
|
|
if (!list_empty(&sma->pending_alter))
|
2010-05-26 21:43:40 +00:00
|
|
|
return 1;
|
|
|
|
|
|
|
|
/* we were a sleeping complex operation. Too difficult */
|
|
|
|
if (q->nsops > 1)
|
|
|
|
return 1;
|
|
|
|
|
2013-07-08 23:01:23 +00:00
|
|
|
/* It is impossible that someone waits for the new value:
|
|
|
|
* - complex operations always restart.
|
|
|
|
* - wait-for-zero are handled separately.
|
|
|
|
* - q is a previously sleeping simple operation that
|
|
|
|
* altered the array. It must be a decrement, because
|
|
|
|
* simple increments never sleep.
|
|
|
|
* - If there are older (higher priority) decrements
|
|
|
|
* in the queue, then they have observed the original
|
|
|
|
* semval value and couldn't proceed. The operation
|
|
|
|
* decremented the value - thus they won't proceed either.
|
|
|
|
*/
|
|
|
|
return 0;
|
|
|
|
}
|
2010-05-26 21:43:40 +00:00
|
|
|
|
2013-07-08 23:01:23 +00:00
|
|
|
/**
|
2014-01-28 01:07:05 +00:00
|
|
|
* wake_const_ops - wake up non-alter tasks
|
2013-07-08 23:01:23 +00:00
|
|
|
* @sma: semaphore array.
|
|
|
|
* @semnum: semaphore that was modified.
|
2016-12-14 23:06:34 +00:00
|
|
|
* @wake_q: lockless wake-queue head.
|
2013-07-08 23:01:23 +00:00
|
|
|
*
|
|
|
|
* wake_const_ops must be called after a semaphore in a semaphore array
|
|
|
|
* was set to 0. If complex const operations are pending, wake_const_ops must
|
|
|
|
* be called with semnum = -1, as well as with the number of each modified
|
|
|
|
* semaphore.
|
2016-12-14 23:06:34 +00:00
|
|
|
* The tasks that must be woken up are added to @wake_q. The return code
|
2013-07-08 23:01:23 +00:00
|
|
|
* is stored in q->status.
|
|
|
|
* The function returns 1 if at least one operation was completed successfully.
|
|
|
|
*/
|
|
|
|
static int wake_const_ops(struct sem_array *sma, int semnum,
|
2016-12-14 23:06:34 +00:00
|
|
|
struct wake_q_head *wake_q)
|
2013-07-08 23:01:23 +00:00
|
|
|
{
|
2016-12-14 23:06:43 +00:00
|
|
|
struct sem_queue *q, *tmp;
|
2013-07-08 23:01:23 +00:00
|
|
|
struct list_head *pending_list;
|
|
|
|
int semop_completed = 0;
|
|
|
|
|
|
|
|
if (semnum == -1)
|
|
|
|
pending_list = &sma->pending_const;
|
|
|
|
else
|
|
|
|
pending_list = &sma->sem_base[semnum].pending_const;
|
2010-05-26 21:43:40 +00:00
|
|
|
|
2016-12-14 23:06:43 +00:00
|
|
|
list_for_each_entry_safe(q, tmp, pending_list, list) {
|
|
|
|
int error = perform_atomic_semop(sma, q);
|
2013-07-08 23:01:23 +00:00
|
|
|
|
2016-12-14 23:06:43 +00:00
|
|
|
if (error > 0)
|
|
|
|
continue;
|
|
|
|
/* operation completed, remove from queue & wakeup */
|
|
|
|
unlink_queue(sma, q);
|
2013-07-08 23:01:23 +00:00
|
|
|
|
2016-12-14 23:06:43 +00:00
|
|
|
wake_up_sem_queue_prepare(q, error, wake_q);
|
|
|
|
if (error == 0)
|
|
|
|
semop_completed = 1;
|
2013-07-08 23:01:23 +00:00
|
|
|
}
|
2016-12-14 23:06:43 +00:00
|
|
|
|
2013-07-08 23:01:23 +00:00
|
|
|
return semop_completed;
|
|
|
|
}
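/*
 * Hypothetical userspace counterpart, not part of sem.c: a "const"
 * operation. A sem_op == 0 request sleeps on the per-semaphore
 * pending_const list until wake_const_ops() sees that semaphore reach 0.
 */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

static int wait_for_zero(int semid, unsigned short semnum)
{
	struct sembuf op = { .sem_num = semnum, .sem_op = 0, .sem_flg = 0 };

	return semop(semid, &op, 1);	/* blocks until the value hits 0 */
}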
|
|
|
|
|
|
|
|
/**
|
2014-01-28 01:07:05 +00:00
|
|
|
* do_smart_wakeup_zero - wakeup all wait for zero tasks
|
2013-07-08 23:01:23 +00:00
|
|
|
* @sma: semaphore array
|
|
|
|
* @sops: operations that were performed
|
|
|
|
* @nsops: number of operations
|
2016-12-14 23:06:34 +00:00
|
|
|
* @wake_q: lockless wake-queue head
|
2013-07-08 23:01:23 +00:00
|
|
|
*
|
2014-01-28 01:07:05 +00:00
|
|
|
* Checks all required queues for wait-for-zero operations, based
|
|
|
|
* on the actual changes that were performed on the semaphore array.
|
2013-07-08 23:01:23 +00:00
|
|
|
* The function returns 1 if at least one operation was completed successfully.
|
|
|
|
*/
|
|
|
|
static int do_smart_wakeup_zero(struct sem_array *sma, struct sembuf *sops,
|
2016-12-14 23:06:34 +00:00
|
|
|
int nsops, struct wake_q_head *wake_q)
|
2013-07-08 23:01:23 +00:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
int semop_completed = 0;
|
|
|
|
int got_zero = 0;
|
|
|
|
|
|
|
|
/* first: the per-semaphore queues, if known */
|
|
|
|
if (sops) {
|
|
|
|
for (i = 0; i < nsops; i++) {
|
|
|
|
int num = sops[i].sem_num;
|
|
|
|
|
|
|
|
if (sma->sem_base[num].semval == 0) {
|
|
|
|
got_zero = 1;
|
2016-12-14 23:06:34 +00:00
|
|
|
semop_completed |= wake_const_ops(sma, num, wake_q);
|
2013-07-08 23:01:23 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* No sops means modified semaphores not known.
|
|
|
|
* Assume all were changed.
|
2010-05-26 21:43:40 +00:00
|
|
|
*/
|
2013-07-08 23:01:23 +00:00
|
|
|
for (i = 0; i < sma->sem_nsems; i++) {
|
|
|
|
if (sma->sem_base[i].semval == 0) {
|
|
|
|
got_zero = 1;
|
2016-12-14 23:06:34 +00:00
|
|
|
semop_completed |= wake_const_ops(sma, i, wake_q);
|
2013-07-08 23:01:23 +00:00
|
|
|
}
|
|
|
|
}
|
2010-05-26 21:43:40 +00:00
|
|
|
}
|
|
|
|
/*
|
2013-07-08 23:01:23 +00:00
|
|
|
* If one of the modified semaphores got 0,
|
|
|
|
* then check the global queue, too.
|
2010-05-26 21:43:40 +00:00
|
|
|
*/
|
2013-07-08 23:01:23 +00:00
|
|
|
if (got_zero)
|
2016-12-14 23:06:34 +00:00
|
|
|
semop_completed |= wake_const_ops(sma, -1, wake_q);
|
2010-05-26 21:43:40 +00:00
|
|
|
|
2013-07-08 23:01:23 +00:00
|
|
|
return semop_completed;
|
2010-05-26 21:43:40 +00:00
|
|
|
}
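/*
 * Hypothetical userspace trigger, not part of sem.c: the sops == NULL
 * branch above matters because some callers change semaphore values
 * without a sembuf array - the semctl() SETVAL/SETALL paths are the
 * typical case. The union semun definition is left to the caller on Linux.
 */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun {
	int val;
	struct semid_ds *buf;
	unsigned short *array;
};

static int zero_semaphore(int semid, int semnum)
{
	union semun arg = { .val = 0 };

	/* wait-for-zero sleepers on semnum become runnable after this */
	return semctl(semid, semnum, SETVAL, arg);
}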
|
|
|
|
|
2009-12-16 00:47:33 +00:00
|
|
|
|
|
|
|
/**
|
2014-01-28 01:07:05 +00:00
|
|
|
* update_queue - look for tasks that can be completed.
|
2009-12-16 00:47:33 +00:00
|
|
|
* @sma: semaphore array.
|
|
|
|
* @semnum: semaphore that was modified.
|
2016-12-14 23:06:34 +00:00
|
|
|
* @wake_q: lockless wake-queue head.
|
2009-12-16 00:47:33 +00:00
|
|
|
*
|
|
|
|
* update_queue must be called after a semaphore in a semaphore array
|
2013-05-01 02:15:39 +00:00
|
|
|
* was modified. If multiple semaphores were modified, update_queue must
|
|
|
|
* be called with semnum = -1, as well as with the number of each modified
|
|
|
|
* semaphore.
|
2016-12-14 23:06:34 +00:00
|
|
|
* The tasks that must be woken up are added to @wake_q. The return code
|
2010-05-26 21:43:41 +00:00
|
|
|
* is stored in q->status.
|
2013-07-08 23:01:23 +00:00
|
|
|
* The function internally checks if const operations can now succeed.
|
|
|
|
*
|
2010-05-26 21:43:41 +00:00
|
|
|
* The function returns 1 if at least one semop was completed successfully.
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
2016-12-14 23:06:34 +00:00
|
|
|
static int update_queue(struct sem_array *sma, int semnum, struct wake_q_head *wake_q)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2016-12-14 23:06:43 +00:00
|
|
|
struct sem_queue *q, *tmp;
|
2009-12-16 00:47:33 +00:00
|
|
|
struct list_head *pending_list;
|
2010-05-26 21:43:41 +00:00
|
|
|
int semop_completed = 0;
|
2009-12-16 00:47:33 +00:00
|
|
|
|
2013-05-01 02:15:39 +00:00
|
|
|
if (semnum == -1)
|
2013-07-08 23:01:23 +00:00
|
|
|
pending_list = &sma->pending_alter;
|
2013-05-01 02:15:39 +00:00
|
|
|
else
|
2013-07-08 23:01:23 +00:00
|
|
|
pending_list = &sma->sem_base[semnum].pending_alter;
|
2009-12-16 00:47:29 +00:00
|
|
|
|
|
|
|
again:
|
2016-12-14 23:06:43 +00:00
|
|
|
list_for_each_entry_safe(q, tmp, pending_list, list) {
|
2010-05-26 21:43:40 +00:00
|
|
|
int error, restart;
|
2009-12-16 00:47:33 +00:00
|
|
|
|
2009-12-16 00:47:34 +00:00
|
|
|
/* If we are scanning the single sop, per-semaphore list of
|
|
|
|
* one semaphore and that semaphore is 0, then it is not
|
2013-07-08 23:01:23 +00:00
|
|
|
* necessary to scan further: simple increments
|
2009-12-16 00:47:34 +00:00
|
|
|
* that affect only one entry succeed immediately and cannot
|
|
|
|
* be in the per semaphore pending queue, and decrements
|
|
|
|
* cannot be successful if the value is already 0.
|
|
|
|
*/
|
2013-07-08 23:01:23 +00:00
|
|
|
if (semnum != -1 && sma->sem_base[semnum].semval == 0)
|
2009-12-16 00:47:34 +00:00
|
|
|
break;
|
|
|
|
|
2014-06-06 21:37:49 +00:00
|
|
|
error = perform_atomic_semop(sma, q);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/* Does q->sleeper still need to sleep? */
|
2009-12-16 00:47:29 +00:00
|
|
|
if (error > 0)
|
|
|
|
continue;
|
|
|
|
|
2009-12-16 00:47:32 +00:00
|
|
|
unlink_queue(sma, q);
|
2009-12-16 00:47:29 +00:00
|
|
|
|
2010-05-26 21:43:41 +00:00
|
|
|
if (error) {
|
2010-05-26 21:43:40 +00:00
|
|
|
restart = 0;
|
2010-05-26 21:43:41 +00:00
|
|
|
} else {
|
|
|
|
semop_completed = 1;
|
2016-12-14 23:06:34 +00:00
|
|
|
do_smart_wakeup_zero(sma, q->sops, q->nsops, wake_q);
|
2010-05-26 21:43:40 +00:00
|
|
|
restart = check_restart(sma, q);
|
2010-05-26 21:43:41 +00:00
|
|
|
}
|
2010-05-26 21:43:40 +00:00
|
|
|
|
2016-12-14 23:06:34 +00:00
|
|
|
wake_up_sem_queue_prepare(q, error, wake_q);
|
2010-05-26 21:43:40 +00:00
|
|
|
if (restart)
|
2009-12-16 00:47:29 +00:00
|
|
|
goto again;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
2010-05-26 21:43:41 +00:00
|
|
|
return semop_completed;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2013-09-30 20:45:25 +00:00
|
|
|
/**
|
2014-01-28 01:07:05 +00:00
|
|
|
* set_semotime - set sem_otime
|
2013-09-30 20:45:25 +00:00
|
|
|
* @sma: semaphore array
|
|
|
|
* @sops: operations that modified the array, may be NULL
|
|
|
|
*
|
|
|
|
* sem_otime is replicated to avoid cache line trashing.
|
|
|
|
* This function sets one instance to the current time.
|
|
|
|
*/
|
|
|
|
static void set_semotime(struct sem_array *sma, struct sembuf *sops)
|
|
|
|
{
|
|
|
|
if (sops == NULL) {
|
|
|
|
sma->sem_base[0].sem_otime = get_seconds();
|
|
|
|
} else {
|
|
|
|
sma->sem_base[sops[0].sem_num].sem_otime =
|
|
|
|
get_seconds();
|
|
|
|
}
|
|
|
|
}
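/*
 * Hypothetical userspace view, not part of sem.c: the timestamp written by
 * set_semotime() is what userspace later reads back as sem_otime through
 * IPC_STAT; the kernel's read side has to aggregate the replicated
 * per-semaphore copies into that single value.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun {
	int val;
	struct semid_ds *buf;
	unsigned short *array;
};

static void print_last_semop_time(int semid)
{
	struct semid_ds ds;
	union semun arg = { .buf = &ds };

	if (semctl(semid, 0, IPC_STAT, arg) == 0)
		printf("last semop() completed at %ld\n", (long)ds.sem_otime);
}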
|
|
|
|
|
2010-05-26 21:43:41 +00:00
|
|
|
/**
|
2014-01-28 01:07:05 +00:00
|
|
|
* do_smart_update - optimized update_queue
|
2010-05-26 21:43:40 +00:00
|
|
|
* @sma: semaphore array
|
|
|
|
* @sops: operations that were performed
|
|
|
|
* @nsops: number of operations
|
2010-05-26 21:43:41 +00:00
|
|
|
* @otime: force setting otime
|
2016-12-14 23:06:34 +00:00
|
|
|
* @wake_q: lockless wake-queue head
|
2010-05-26 21:43:40 +00:00
|
|
|
*
|
2013-07-08 23:01:23 +00:00
|
|
|
* do_smart_update() does the required calls to update_queue and wakeup_zero,
|
|
|
|
* based on the actual changes that were performed on the semaphore array.
|
2010-05-26 21:43:41 +00:00
|
|
|
* Note that the function does not do the actual wake-up: the caller is
|
2016-12-14 23:06:34 +00:00
|
|
|
* responsible for calling wake_up_q().
|
2010-05-26 21:43:41 +00:00
|
|
|
* It is safe to perform this call after dropping all locks.
|
2010-05-26 21:43:40 +00:00
|
|
|
*/
|
2010-05-26 21:43:41 +00:00
|
|
|
static void do_smart_update(struct sem_array *sma, struct sembuf *sops, int nsops,
|
2016-12-14 23:06:34 +00:00
|
|
|
int otime, struct wake_q_head *wake_q)
|
2010-05-26 21:43:40 +00:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
2016-12-14 23:06:34 +00:00
|
|
|
otime |= do_smart_wakeup_zero(sma, sops, nsops, wake_q);
|
2013-07-08 23:01:23 +00:00
|
|
|
|
2013-07-08 23:01:24 +00:00
|
|
|
if (!list_empty(&sma->pending_alter)) {
|
|
|
|
/* semaphore array uses the global queue - just process it. */
|
2016-12-14 23:06:34 +00:00
|
|
|
otime |= update_queue(sma, -1, wake_q);
|
2013-07-08 23:01:24 +00:00
|
|
|
} else {
|
|
|
|
if (!sops) {
|
|
|
|
/*
|
|
|
|
* No sops, thus the modified semaphores are not
|
|
|
|
* known. Check all.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < sma->sem_nsems; i++)
|
2016-12-14 23:06:34 +00:00
|
|
|
otime |= update_queue(sma, i, wake_q);
|
2013-07-08 23:01:24 +00:00
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* Check the semaphores that were increased:
|
|
|
|
* - No complex ops, thus all sleeping ops are
|
|
|
|
 * decreases.
|
|
|
|
* - if we decreased the value, then any sleeping
|
|
|
|
 * semaphore ops won't be able to run: If the
|
|
|
|
* previous value was too small, then the new
|
|
|
|
* value will be too small, too.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < nsops; i++) {
|
|
|
|
if (sops[i].sem_op > 0) {
|
|
|
|
otime |= update_queue(sma,
|
2016-12-14 23:06:34 +00:00
|
|
|
sops[i].sem_num, wake_q);
|
2013-07-08 23:01:24 +00:00
|
|
|
}
|
2013-05-26 09:08:52 +00:00
|
|
|
}
|
2013-05-01 02:15:39 +00:00
|
|
|
}
|
2010-05-26 21:43:40 +00:00
|
|
|
}
|
2013-09-30 20:45:25 +00:00
|
|
|
if (otime)
|
|
|
|
set_semotime(sma, sops);
|
2010-05-26 21:43:40 +00:00
|
|
|
}
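A minimal caller-side sketch of the contract documented above (hypothetical, not part of this file): the semaphore modifications and do_smart_update() run with the array locked, and the collected wakeups are issued via wake_up_q() only after every lock has been dropped. It assumes the sem_lock()/sem_unlock() helpers used elsewhere in this file and simplifies locking to the global-lock case (-1).

	{
		DEFINE_WAKE_Q(wake_q);

		rcu_read_lock();
		sem_lock(sma, sops, nsops);		/* pin and lock the array */

		/* ... apply the semop changes to the semaphores ... */

		do_smart_update(sma, sops, nsops, 1, &wake_q);	/* only collects the wakeups */

		sem_unlock(sma, -1);
		rcu_read_unlock();

		wake_up_q(&wake_q);			/* safe: no locks are held here */
	}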
|
|
|
|
|
2014-06-06 21:37:48 +00:00
|
|
|
/*
|
ipc/sem.c: make semctl(,,{GETNCNT,GETZCNT}) standard compliant
SUSv4 clearly defines how semncnt and semzcnt must be calculated: A task
waits on exactly one semaphore: The semaphore from the first operation
in the sop array that cannot proceed.
The Linux implementation never followed the standard; it tried to count
all semaphores that might be the reason why a task sleeps.
This patch fixes that.
Note:
a) The implementation assumes that GETNCNT and GETZCNT are rare operations,
therefore the code counts them only on demand.
(If they were not rare, the non-compliance would have
been found earlier.)
b) compared to the initial version of the patch, the BUG_ONs were removed
and it was clarified that the new behavior conforms to SUS.
Back-compatibility concerns:
Manfred:
: - there is no application in Fedora that uses GETNCNT or GETZCNT.
:
: - applications that use only single-sop semop() are also safe, the
: difference only affects complex apps.
:
: - portable applications are also safe, the new behavior is standard
: compliant.
:
: But that's it. The old behavior existed in Linux from 0.99.something
: until now.
Michael:
: * These operations seem to be very little used. Grepping the public
: source contained on the Fedora 20 source DVD, there appear to be no
: uses. Of course, this says nothing about uses in private /
: non-mainstream FOSS code, but it seems likely that the same pattern
: is followed there.
:
: * The existing behavior is hard enough to understand that I suspect
: that no one understood it well enough to rely on it anyway
: (especially as that behavior contradicted both the man page and POSIX).
:
: So, there's a chance of breakage, but I estimate that it's minute.
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-06-06 21:37:51 +00:00
|
|
|
* check_qop: Test if a queued operation sleeps on the semaphore semnum
|
2014-06-06 21:37:48 +00:00
|
|
|
*/
|
|
|
|
static int check_qop(struct sem_array *sma, int semnum, struct sem_queue *q,
|
|
|
|
bool count_zero)
|
|
|
|
{
|
2014-06-06 21:37:51 +00:00
|
|
|
struct sembuf *sop = q->blocking;
|
2014-06-06 21:37:48 +00:00
|
|
|
|
2014-06-06 21:37:52 +00:00
|
|
|
/*
|
|
|
|
* Linux always (since 0.99.10) reported a task as sleeping on all
|
|
|
|
* semaphores. This violates SUS, therefore it was changed to the
|
|
|
|
* standard compliant behavior.
|
|
|
|
* Give the administrators a chance to notice that an application
|
|
|
|
* might misbehave because it relies on the Linux behavior.
|
|
|
|
*/
|
|
|
|
pr_info_once("semctl(GETNCNT/GETZCNT) is since 3.16 Single Unix Specification compliant.\n"
|
|
|
|
"The task %s (%d) triggered the difference, watch for misbehavior.\n",
|
|
|
|
current->comm, task_pid_nr(current));
|
|
|
|
|
2014-06-06 21:37:51 +00:00
|
|
|
if (sop->sem_num != semnum)
|
|
|
|
return 0;
|
2014-06-06 21:37:48 +00:00
|
|
|
|
2014-06-06 21:37:51 +00:00
|
|
|
if (count_zero && sop->sem_op == 0)
|
|
|
|
return 1;
|
|
|
|
if (!count_zero && sop->sem_op < 0)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
return 0;
|
2014-06-06 21:37:48 +00:00
|
|
|
}
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/* The following counts are associated with each semaphore:
|
|
|
|
* semncnt number of tasks waiting on semval being nonzero
|
|
|
|
* semzcnt number of tasks waiting on semval being zero
|
2014-06-06 21:37:51 +00:00
|
|
|
*
|
|
|
|
 * By definition, a task waits only on the semaphore of the first semop
|
|
|
|
 * that cannot proceed, even if additional operations would block, too.
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
2014-06-06 21:37:48 +00:00
|
|
|
static int count_semcnt(struct sem_array *sma, ushort semnum,
|
|
|
|
bool count_zero)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2014-06-06 21:37:48 +00:00
|
|
|
struct list_head *l;
|
2014-01-28 01:07:04 +00:00
|
|
|
struct sem_queue *q;
|
2014-06-06 21:37:48 +00:00
|
|
|
int semcnt;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2014-06-06 21:37:48 +00:00
|
|
|
semcnt = 0;
|
|
|
|
/* First: check the simple operations. They are easy to evaluate */
|
|
|
|
if (count_zero)
|
|
|
|
l = &sma->sem_base[semnum].pending_const;
|
|
|
|
else
|
|
|
|
l = &sma->sem_base[semnum].pending_alter;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2014-06-06 21:37:48 +00:00
|
|
|
list_for_each_entry(q, l, list) {
|
|
|
|
/* all tasks on a per-semaphore list sleep on exactly
|
|
|
|
* that semaphore
|
|
|
|
*/
|
|
|
|
semcnt++;
|
2013-05-09 20:53:28 +00:00
|
|
|
}
|
|
|
|
|
2014-06-06 21:37:48 +00:00
|
|
|
/* Then: check the complex operations. */
|
2014-06-06 21:37:47 +00:00
|
|
|
list_for_each_entry(q, &sma->pending_alter, list) {
|
2014-06-06 21:37:48 +00:00
|
|
|
semcnt += check_qop(sma, semnum, q, count_zero);
|
|
|
|
}
|
|
|
|
if (count_zero) {
|
|
|
|
list_for_each_entry(q, &sma->pending_const, list) {
|
|
|
|
semcnt += check_qop(sma, semnum, q, count_zero);
|
|
|
|
}
|
2014-06-06 21:37:47 +00:00
|
|
|
}
|
2014-06-06 21:37:48 +00:00
|
|
|
return semcnt;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
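To make the standard-compliant counting concrete, here is a small hypothetical userspace illustration (not part of the kernel sources): a task blocked in a multi-op semop() is counted only against the first operation in its sop array that cannot proceed, so GETNCNT reports it for that semaphore only.

	/* illustrative only; build with: cc -o getncnt getncnt.c */
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/types.h>
	#include <sys/ipc.h>
	#include <sys/sem.h>

	int main(void)
	{
		int id = semget(IPC_PRIVATE, 2, IPC_CREAT | 0600);
		struct sembuf ops[2] = {
			{ .sem_num = 0, .sem_op = -1, .sem_flg = 0 },	/* blocks: semval is 0 */
			{ .sem_num = 1, .sem_op = -1, .sem_flg = 0 },	/* would block as well */
		};

		if (fork() == 0) {
			semop(id, ops, 2);	/* child sleeps here */
			_exit(0);
		}
		sleep(1);	/* crude synchronization, good enough for the example */

		/* SUSv4 semantics: the sleeper counts only for semaphore 0 */
		printf("semncnt[0]=%d semncnt[1]=%d\n",
		       semctl(id, 0, GETNCNT), semctl(id, 1, GETNCNT));

		semctl(id, 0, IPC_RMID);	/* wakes the child with EIDRM */
		return 0;
	}

With the compliant behavior this prints semncnt[0]=1 semncnt[1]=0; the pre-3.16 behavior would also have counted the sleeper against semaphore 1.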
|
|
|
|
|
2013-09-11 21:26:24 +00:00
|
|
|
/* Free a semaphore set. freeary() is called with sem_ids.rwsem locked
|
|
|
|
 * as a writer and the spinlock for this semaphore set held. sem_ids.rwsem
|
2007-10-19 06:40:54 +00:00
|
|
|
* remains locked on exit.
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
2008-02-08 12:18:57 +00:00
|
|
|
static void freeary(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2008-07-25 08:48:06 +00:00
|
|
|
struct sem_undo *un, *tu;
|
|
|
|
struct sem_queue *q, *tq;
|
2008-02-08 12:18:57 +00:00
|
|
|
struct sem_array *sma = container_of(ipcp, struct sem_array, sem_perm);
|
2013-05-01 02:15:39 +00:00
|
|
|
int i;
|
2016-12-14 23:06:34 +00:00
|
|
|
DEFINE_WAKE_Q(wake_q);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-07-25 08:48:06 +00:00
|
|
|
/* Free the existing undo structures for this semaphore set. */
|
2013-07-08 23:01:11 +00:00
|
|
|
ipc_assert_locked_object(&sma->sem_perm);
|
2008-07-25 08:48:06 +00:00
|
|
|
list_for_each_entry_safe(un, tu, &sma->list_id, list_id) {
|
|
|
|
list_del(&un->list_id);
|
|
|
|
spin_lock(&un->ulp->lock);
|
2005-04-16 22:20:36 +00:00
|
|
|
un->semid = -1;
|
2008-07-25 08:48:06 +00:00
|
|
|
list_del_rcu(&un->list_proc);
|
|
|
|
spin_unlock(&un->ulp->lock);
|
2011-03-18 04:09:35 +00:00
|
|
|
kfree_rcu(un, rcu);
|
2008-07-25 08:48:06 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/* Wake up all pending processes and let them fail with EIDRM. */
|
2013-07-08 23:01:23 +00:00
|
|
|
list_for_each_entry_safe(q, tq, &sma->pending_const, list) {
|
|
|
|
unlink_queue(sma, q);
|
2016-12-14 23:06:34 +00:00
|
|
|
wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
|
2013-07-08 23:01:23 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
list_for_each_entry_safe(q, tq, &sma->pending_alter, list) {
|
2009-12-16 00:47:32 +00:00
|
|
|
unlink_queue(sma, q);
|
2016-12-14 23:06:34 +00:00
|
|
|
wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
2013-05-01 02:15:39 +00:00
|
|
|
for (i = 0; i < sma->sem_nsems; i++) {
|
|
|
|
struct sem *sem = sma->sem_base + i;
|
2013-07-08 23:01:23 +00:00
|
|
|
list_for_each_entry_safe(q, tq, &sem->pending_const, list) {
|
|
|
|
unlink_queue(sma, q);
|
2016-12-14 23:06:34 +00:00
|
|
|
wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
|
2013-07-08 23:01:23 +00:00
|
|
|
}
|
|
|
|
list_for_each_entry_safe(q, tq, &sem->pending_alter, list) {
|
2013-05-01 02:15:39 +00:00
|
|
|
unlink_queue(sma, q);
|
2016-12-14 23:06:34 +00:00
|
|
|
wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
|
2013-05-01 02:15:39 +00:00
|
|
|
}
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2007-10-19 06:40:48 +00:00
|
|
|
/* Remove the semaphore set from the IDR */
|
|
|
|
sem_rmid(ns, sma);
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_unlock(sma, -1);
|
ipc: move rcu_read_unlock() out of sem_unlock() and into callers
The IPC locking is a mess, and sem_unlock() unlocks not only the
semaphore spinlock, it also drops the rcu read lock. Unlike sem_lock(),
which just gets the spin-lock, and expects the caller to get the rcu
read lock.
This all makes things very hard to follow, and it's very confusing when
you take the rcu read lock in one function, and then release it in
another. And it has caused actual bugs: the sem_obtain_lock() function
ended up dropping the RCU read lock twice in one error path, because it
first did the sem_unlock(), and then did a rcu_read_unlock() to match
the rcu_read_lock() it had done.
This is just a totally mindless "remove rcu_read_unlock() from
sem_unlock() and add it immediately after each caller" (except for the
aforementioned bug where we did too many rcu_read_unlock(), and in
find_alloc_undo() where we just got the rcu_read_lock() to correct for
the fact that sem_unlock would immediately drop it again).
We can (and should) clean things up further, but this fixes the bug with
the minimal amount of subtlety.
Reviewed-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-03 22:04:40 +00:00
|
|
|
rcu_read_unlock();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2016-12-14 23:06:34 +00:00
|
|
|
wake_up_q(&wake_q);
|
2006-10-02 09:18:22 +00:00
|
|
|
ns->used_sems -= sma->sem_nsems;
|
ipc: fix race with LSMs
Currently, IPC mechanisms do security and auditing related checks under
RCU. However, since security modules can free the security structure,
for example, through selinux_[sem,msg_queue,shm]_free_security(), we can
race if the structure is freed before other tasks are done with it,
creating a use-after-free condition. Manfred illustrates this nicely,
for instance with shared mem and selinux:
-> do_shmat calls rcu_read_lock()
-> do_shmat calls shm_object_check().
Checks that the object is still valid - but doesn't acquire any locks.
Then it returns.
-> do_shmat calls security_shm_shmat (e.g. selinux_shm_shmat)
-> selinux_shm_shmat calls ipc_has_perm()
-> ipc_has_perm accesses ipc_perms->security
shm_close()
-> shm_close acquires rw_mutex & shm_lock
-> shm_close calls shm_destroy
-> shm_destroy calls security_shm_free (e.g. selinux_shm_free_security)
-> selinux_shm_free_security calls ipc_free_security(&shp->shm_perm)
-> ipc_free_security calls kfree(ipc_perms->security)
This patch delays the freeing of the security structures after all RCU
readers are done. Furthermore it aligns the security life cycle with
that of the rest of IPC - freeing them based on the reference counter.
For situations where we need not free security, the current behavior is
kept. Linus states:
"... the old behavior was suspect for another reason too: having the
security blob go away from under a user sounds like it could cause
various other problems anyway, so I think the old code was at least
_prone_ to bugs even if it didn't have catastrophic behavior."
I have tested this patch with IPC testcases from LTP on both my
quad-core laptop and on a 64 core NUMA server. In both cases selinux is
enabled, and tests pass for both voluntary and forced preemption models.
While the mentioned races are theoretical (at least no one as reported
them), I wanted to make sure that this new logic doesn't break anything
we weren't aware of.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Acked-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-24 00:04:45 +00:00
|
|
|
ipc_rcu_putref(sma, sem_rcu_free);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static unsigned long copy_semid_to_user(void __user *buf, struct semid64_ds *in, int version)
|
|
|
|
{
|
2014-01-28 01:07:04 +00:00
|
|
|
switch (version) {
|
2005-04-16 22:20:36 +00:00
|
|
|
case IPC_64:
|
|
|
|
return copy_to_user(buf, in, sizeof(*in));
|
|
|
|
case IPC_OLD:
|
|
|
|
{
|
|
|
|
struct semid_ds out;
|
|
|
|
|
2010-09-30 22:15:31 +00:00
|
|
|
memset(&out, 0, sizeof(out));
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
ipc64_perm_to_ipc_perm(&in->sem_perm, &out.sem_perm);
|
|
|
|
|
|
|
|
out.sem_otime = in->sem_otime;
|
|
|
|
out.sem_ctime = in->sem_ctime;
|
|
|
|
out.sem_nsems = in->sem_nsems;
|
|
|
|
|
|
|
|
return copy_to_user(buf, &out, sizeof(out));
|
|
|
|
}
|
|
|
|
default:
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
}
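For reference, a hypothetical userspace sketch of the consumer side of this copy-out path: semctl(IPC_STAT) fills a struct semid_ds; on Linux the caller must define union semun itself, and the C library normally requests the IPC_64 layout handled by the branch above.

	/* illustrative only: query a semaphore set's metadata */
	#include <stdio.h>
	#include <sys/ipc.h>
	#include <sys/sem.h>

	union semun {			/* caller-defined, as required on Linux */
		int val;
		struct semid_ds *buf;
		unsigned short *array;
	};

	int main(void)
	{
		int id = semget(IPC_PRIVATE, 3, IPC_CREAT | 0600);
		struct semid_ds ds;
		union semun arg = { .buf = &ds };

		if (semctl(id, 0, IPC_STAT, arg) == 0)
			printf("nsems=%lu otime=%ld ctime=%ld\n",
			       (unsigned long)ds.sem_nsems,
			       (long)ds.sem_otime, (long)ds.sem_ctime);

		semctl(id, 0, IPC_RMID);
		return 0;
	}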
|
|
|
|
|
2013-07-08 23:01:25 +00:00
|
|
|
static time_t get_semotime(struct sem_array *sma)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
time_t res;
|
|
|
|
|
|
|
|
res = sma->sem_base[0].sem_otime;
|
|
|
|
for (i = 1; i < sma->sem_nsems; i++) {
|
|
|
|
time_t to = sma->sem_base[i].sem_otime;
|
|
|
|
|
|
|
|
if (to > res)
|
|
|
|
res = to;
|
|
|
|
}
|
|
|
|
return res;
|
|
|
|
}
|
|
|
|
|
2008-02-08 12:18:56 +00:00
|
|
|
static int semctl_nolock(struct ipc_namespace *ns, int semid,
|
2013-03-05 20:04:55 +00:00
|
|
|
int cmd, int version, void __user *p)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2009-12-16 00:47:35 +00:00
|
|
|
int err;
|
2005-04-16 22:20:36 +00:00
|
|
|
struct sem_array *sma;
|
|
|
|
|
2014-01-28 01:07:04 +00:00
|
|
|
switch (cmd) {
|
2005-04-16 22:20:36 +00:00
|
|
|
case IPC_INFO:
|
|
|
|
case SEM_INFO:
|
|
|
|
{
|
|
|
|
struct seminfo seminfo;
|
|
|
|
int max_id;
|
|
|
|
|
|
|
|
err = security_sem_semctl(NULL, cmd);
|
|
|
|
if (err)
|
|
|
|
return err;
|
2014-06-06 21:37:37 +00:00
|
|
|
|
2014-01-28 01:07:04 +00:00
|
|
|
memset(&seminfo, 0, sizeof(seminfo));
|
2006-10-02 09:18:22 +00:00
|
|
|
seminfo.semmni = ns->sc_semmni;
|
|
|
|
seminfo.semmns = ns->sc_semmns;
|
|
|
|
seminfo.semmsl = ns->sc_semmsl;
|
|
|
|
seminfo.semopm = ns->sc_semopm;
|
2005-04-16 22:20:36 +00:00
|
|
|
seminfo.semvmx = SEMVMX;
|
|
|
|
seminfo.semmnu = SEMMNU;
|
|
|
|
seminfo.semmap = SEMMAP;
|
|
|
|
seminfo.semume = SEMUME;
|
2013-09-11 21:26:24 +00:00
|
|
|
down_read(&sem_ids(ns).rwsem);
|
2005-04-16 22:20:36 +00:00
|
|
|
if (cmd == SEM_INFO) {
|
2006-10-02 09:18:22 +00:00
|
|
|
seminfo.semusz = sem_ids(ns).in_use;
|
|
|
|
seminfo.semaem = ns->used_sems;
|
2005-04-16 22:20:36 +00:00
|
|
|
} else {
|
|
|
|
seminfo.semusz = SEMUSZ;
|
|
|
|
seminfo.semaem = SEMAEM;
|
|
|
|
}
|
2007-10-19 06:40:48 +00:00
|
|
|
max_id = ipc_get_maxid(&sem_ids(ns));
|
2013-09-11 21:26:24 +00:00
|
|
|
up_read(&sem_ids(ns).rwsem);
|
2014-06-06 21:37:37 +00:00
|
|
|
if (copy_to_user(p, &seminfo, sizeof(struct seminfo)))
|
2005-04-16 22:20:36 +00:00
|
|
|
return -EFAULT;
|
2014-01-28 01:07:04 +00:00
|
|
|
return (max_id < 0) ? 0 : max_id;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
2008-02-08 12:18:56 +00:00
|
|
|
case IPC_STAT:
|
2005-04-16 22:20:36 +00:00
|
|
|
case SEM_STAT:
|
|
|
|
{
|
|
|
|
struct semid64_ds tbuf;
|
ipc,sem: do not hold ipc lock more than necessary
Instead of holding the ipc lock for permissions and security checks, among
others, only acquire it when necessary.
Some numbers....
1) With Rik's semop-multi.c microbenchmark we can see the following
results:
Baseline (3.9-rc1):
cpus 4, threads: 256, semaphores: 128, test duration: 30 secs
total operations: 151452270, ops/sec 5048409
+ 59.40% a.out [kernel.kallsyms] [k] _raw_spin_lock
+ 6.14% a.out [kernel.kallsyms] [k] sys_semtimedop
+ 3.84% a.out [kernel.kallsyms] [k] avc_has_perm_flags
+ 3.64% a.out [kernel.kallsyms] [k] __audit_syscall_exit
+ 2.06% a.out [kernel.kallsyms] [k] copy_user_enhanced_fast_string
+ 1.86% a.out [kernel.kallsyms] [k] ipc_lock
With this patchset:
cpus 4, threads: 256, semaphores: 128, test duration: 30 secs
total operations: 273156400, ops/sec 9105213
+ 18.54% a.out [kernel.kallsyms] [k] _raw_spin_lock
+ 11.72% a.out [kernel.kallsyms] [k] sys_semtimedop
+ 7.70% a.out [kernel.kallsyms] [k] ipc_has_perm.isra.21
+ 6.58% a.out [kernel.kallsyms] [k] avc_has_perm_flags
+ 6.54% a.out [kernel.kallsyms] [k] __audit_syscall_exit
+ 4.71% a.out [kernel.kallsyms] [k] ipc_obtain_object_check
2) While on an Oracle swingbench DSS (data mining) workload the
improvements are not as exciting as with Rik's benchmark, we can see
some positive numbers. For an 8 socket machine the following are the
percentages of %sys time incurred in the ipc lock:
Baseline (3.9-rc1):
100 swingbench users: 8,74%
400 swingbench users: 21,86%
800 swingbench users: 84,35%
With this patchset:
100 swingbench users: 8,11%
400 swingbench users: 19,93%
800 swingbench users: 77,69%
[riel@redhat.com: fix two locking bugs]
[sasha.levin@oracle.com: prevent releasing RCU read lock twice in semctl_main]
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Acked-by: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Emmanuel Benisty <benisty.e@gmail.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-01 02:15:29 +00:00
|
|
|
int id = 0;
|
|
|
|
|
|
|
|
memset(&tbuf, 0, sizeof(tbuf));
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2013-05-04 18:04:29 +00:00
|
|
|
rcu_read_lock();
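/*
 * The stat path runs under the RCU read lock only; the fields copied
 * below are snapshots, so the per-array lock is never taken here.
 */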
|
2008-02-08 12:18:56 +00:00
|
|
|
if (cmd == SEM_STAT) {
|
2013-05-01 02:15:29 +00:00
|
|
|
sma = sem_obtain_object(ns, semid);
|
|
|
|
if (IS_ERR(sma)) {
|
|
|
|
err = PTR_ERR(sma);
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
2008-02-08 12:18:56 +00:00
|
|
|
id = sma->sem_perm.id;
|
|
|
|
} else {
|
2013-05-01 02:15:29 +00:00
|
|
|
sma = sem_obtain_object_check(ns, semid);
|
|
|
|
if (IS_ERR(sma)) {
|
|
|
|
err = PTR_ERR(sma);
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
2008-02-08 12:18:56 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
err = -EACCES;
|
2011-03-23 23:43:24 +00:00
|
|
|
if (ipcperms(ns, &sma->sem_perm, S_IRUGO))
|
2005-04-16 22:20:36 +00:00
|
|
|
goto out_unlock;
|
|
|
|
|
|
|
|
err = security_sem_semctl(sma, cmd);
|
|
|
|
if (err)
|
|
|
|
goto out_unlock;
|
|
|
|
|
|
|
|
kernel_to_ipc64_perm(&sma->sem_perm, &tbuf.sem_perm);
|
2013-07-08 23:01:25 +00:00
|
|
|
tbuf.sem_otime = get_semotime(sma);
|
|
|
|
tbuf.sem_ctime = sma->sem_ctime;
|
|
|
|
tbuf.sem_nsems = sma->sem_nsems;
|
2013-05-01 02:15:29 +00:00
|
|
|
rcu_read_unlock();
|
2013-03-05 20:04:55 +00:00
|
|
|
if (copy_semid_to_user(p, &tbuf, version))
|
2005-04-16 22:20:36 +00:00
|
|
|
return -EFAULT;
|
|
|
|
return id;
|
|
|
|
}
|
|
|
|
default:
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
out_unlock:
|
2013-05-01 02:15:29 +00:00
|
|
|
rcu_read_unlock();
|
2005-04-16 22:20:36 +00:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2013-03-05 20:04:55 +00:00
|
|
|
static int semctl_setval(struct ipc_namespace *ns, int semid, int semnum,
|
|
|
|
unsigned long arg)
|
|
|
|
{
|
|
|
|
struct sem_undo *un;
|
|
|
|
struct sem_array *sma;
|
2014-01-28 01:07:04 +00:00
|
|
|
struct sem *curr;
|
2016-12-14 23:06:34 +00:00
|
|
|
int err, val;
|
|
|
|
DEFINE_WAKE_Q(wake_q);
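/*
 * Tasks made runnable by the new value are collected on wake_q and
 * woken only after all locks have been dropped.
 */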
|
|
|
|
|
2013-03-05 20:04:55 +00:00
|
|
|
#if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
|
|
|
|
/* big-endian 64bit */
|
|
|
|
val = arg >> 32;
|
|
|
|
#else
|
|
|
|
/* 32bit or little-endian 64bit */
|
|
|
|
val = arg;
|
|
|
|
#endif
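/*
 * SETVAL passes the new value in the int member of union semun; on a
 * 64-bit big-endian ABI that int occupies the upper half of the
 * register-sized argument, hence the split above.
 */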
|
|
|
|
|
2013-05-01 02:15:44 +00:00
|
|
|
if (val > SEMVMX || val < 0)
|
|
|
|
return -ERANGE;
|
2013-03-05 20:04:55 +00:00
|
|
|
|
2013-05-01 02:15:44 +00:00
|
|
|
rcu_read_lock();
|
|
|
|
sma = sem_obtain_object_check(ns, semid);
|
|
|
|
if (IS_ERR(sma)) {
|
|
|
|
rcu_read_unlock();
|
|
|
|
return PTR_ERR(sma);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (semnum < 0 || semnum >= sma->sem_nsems) {
|
|
|
|
rcu_read_unlock();
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
if (ipcperms(ns, &sma->sem_perm, S_IWUGO)) {
|
|
|
|
rcu_read_unlock();
|
|
|
|
return -EACCES;
|
|
|
|
}
|
2013-03-05 20:04:55 +00:00
|
|
|
|
|
|
|
err = security_sem_semctl(sma, SETVAL);
|
2013-05-01 02:15:44 +00:00
|
|
|
if (err) {
|
|
|
|
rcu_read_unlock();
|
|
|
|
return -EACCES;
|
|
|
|
}
|
2013-03-05 20:04:55 +00:00
|
|
|
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_lock(sma, NULL, -1);
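/*
 * The array may have been removed (IPC_RMID) between the RCU lookup
 * above and acquiring the lock; ipc_valid_object() detects that and
 * the operation fails with -EIDRM.
 */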
|
2013-03-05 20:04:55 +00:00
|
|
|
|
2014-01-28 01:07:01 +00:00
|
|
|
if (!ipc_valid_object(&sma->sem_perm)) {
|
2013-10-16 20:46:45 +00:00
|
|
|
sem_unlock(sma, -1);
|
|
|
|
rcu_read_unlock();
|
|
|
|
return -EIDRM;
|
|
|
|
}
|
|
|
|
|
2013-03-05 20:04:55 +00:00
|
|
|
curr = &sma->sem_base[semnum];
|
|
|
|
|
2013-07-08 23:01:11 +00:00
|
|
|
ipc_assert_locked_object(&sma->sem_perm);
|
2013-03-05 20:04:55 +00:00
|
|
|
list_for_each_entry(un, &sma->list_id, list_id)
|
|
|
|
un->semadj[semnum] = 0;
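/*
 * Explicitly setting a semaphore's value clears every recorded undo
 * adjustment for it, as SysV semantics require.
 */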
|
|
|
|
|
|
|
|
curr->semval = val;
|
|
|
|
curr->sempid = task_tgid_vnr(current);
|
|
|
|
sma->sem_ctime = get_seconds();
|
|
|
|
/* maybe some queued-up processes were waiting for this */
|
2016-12-14 23:06:34 +00:00
|
|
|
do_smart_update(sma, NULL, 0, 0, &wake_q);
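/*
 * do_smart_update() rescans the pending queues and moves any now
 * satisfiable waiters onto wake_q; they are woken further down, after
 * both the semaphore lock and the RCU read lock are released.
 */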
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_unlock(sma, -1);
|
ipc: move rcu_read_unlock() out of sem_unlock() and into callers
The IPC locking is a mess, and sem_unlock() unlocks not only the
semaphore spinlock, it also drops the rcu read lock. Unlike sem_lock(),
which just gets the spin-lock, and expects the caller to get the rcu
read lock.
This all makes things very hard to follow, and it's very confusing when
you take the rcu read lock in one function, and then release it in
another. And it has caused actual bugs: the sem_obtain_lock() function
ended up dropping the RCU read lock twice in one error path, because it
first did the sem_unlock(), and then did a rcu_read_unlock() to match
the rcu_read_lock() it had done.
This is just a totally mindless "remove rcu_read_unlock() from
sem_unlock() and add it immediately after each caller" (except for the
aforementioned bug where we did too many rcu_read_unlock(), and in
find_alloc_undo() where we just got the rcu_read_lock() to correct for
the fact that sem_unlock would immediately drop it again).
We can (and should) clean things up further, but this fixes the bug with
the minimal amount of subtlety.
Reviewed-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-03 22:04:40 +00:00
|
|
|
rcu_read_unlock();
|
2016-12-14 23:06:34 +00:00
|
|
|
wake_up_q(&wake_q);
|
2013-05-01 02:15:44 +00:00
|
|
|
return 0;
|
2013-03-05 20:04:55 +00:00
|
|
|
}
|
|
|
|
|
2006-10-02 09:18:22 +00:00
|
|
|
static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
|
2013-03-05 20:04:55 +00:00
|
|
|
int cmd, void __user *p)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
struct sem_array *sma;
|
2014-01-28 01:07:04 +00:00
|
|
|
struct sem *curr;
|
2013-05-01 02:15:29 +00:00
|
|
|
int err, nsems;
|
2005-04-16 22:20:36 +00:00
|
|
|
ushort fast_sem_io[SEMMSL_FAST];
|
2014-01-28 01:07:04 +00:00
|
|
|
ushort *sem_io = fast_sem_io;
|
2016-12-14 23:06:34 +00:00
|
|
|
DEFINE_WAKE_Q(wake_q);
|
2013-05-01 02:15:29 +00:00
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
sma = sem_obtain_object_check(ns, semid);
|
|
|
|
if (IS_ERR(sma)) {
|
|
|
|
rcu_read_unlock();
|
2007-10-19 06:40:51 +00:00
|
|
|
return PTR_ERR(sma);
|
2013-05-01 02:15:29 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
nsems = sma->sem_nsems;
|
|
|
|
|
|
|
|
err = -EACCES;
|
2013-05-04 18:04:29 +00:00
|
|
|
if (ipcperms(ns, &sma->sem_perm, cmd == SETALL ? S_IWUGO : S_IRUGO))
|
|
|
|
goto out_rcu_wakeup;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
err = security_sem_semctl(sma, cmd);
|
2013-05-04 18:04:29 +00:00
|
|
|
if (err)
|
|
|
|
goto out_rcu_wakeup;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
err = -EACCES;
|
|
|
|
switch (cmd) {
|
|
|
|
case GETALL:
|
|
|
|
{
|
2013-03-05 20:04:55 +00:00
|
|
|
ushort __user *array = p;
|
2005-04-16 22:20:36 +00:00
|
|
|
int i;
|
|
|
|
|
2013-05-02 23:30:49 +00:00
|
|
|
sem_lock(sma, NULL, -1);
|
2014-01-28 01:07:01 +00:00
|
|
|
if (!ipc_valid_object(&sma->sem_perm)) {
|
2013-10-16 20:46:45 +00:00
|
|
|
err = -EIDRM;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
2014-01-28 01:07:04 +00:00
|
|
|
if (nsems > SEMMSL_FAST) {
|
2013-05-02 23:30:49 +00:00
|
|
|
if (!ipc_rcu_getref(sma)) {
|
|
|
|
err = -EIDRM;
|
2013-10-16 20:46:45 +00:00
|
|
|
goto out_unlock;
|
2013-05-02 23:30:49 +00:00
|
|
|
}
|
|
|
|
sem_unlock(sma, -1);
|
2013-05-03 22:04:40 +00:00
|
|
|
rcu_read_unlock();
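/*
 * ipc_alloc() may sleep, so all locks are dropped first; the reference
 * taken with ipc_rcu_getref() keeps the array alive until
 * sem_lock_and_putref() reacquires the lock below.
 */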
|
2005-04-16 22:20:36 +00:00
|
|
|
sem_io = ipc_alloc(sizeof(ushort)*nsems);
|
2014-01-28 01:07:04 +00:00
|
|
|
if (sem_io == NULL) {
|
2016-08-02 21:03:07 +00:00
|
|
|
ipc_rcu_putref(sma, sem_rcu_free);
|
2005-04-16 22:20:36 +00:00
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
2013-05-04 17:13:40 +00:00
|
|
|
rcu_read_lock();
|
2008-04-29 08:00:46 +00:00
|
|
|
sem_lock_and_putref(sma);
|
2014-01-28 01:07:01 +00:00
|
|
|
if (!ipc_valid_object(&sma->sem_perm)) {
|
2005-04-16 22:20:36 +00:00
|
|
|
err = -EIDRM;
|
2013-10-16 20:46:45 +00:00
|
|
|
goto out_unlock;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
2013-05-02 23:30:49 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
for (i = 0; i < sma->sem_nsems; i++)
|
|
|
|
sem_io[i] = sma->sem_base[i].semval;
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_unlock(sma, -1);
|
2013-05-03 22:04:40 +00:00
|
|
|
rcu_read_unlock();
|
2005-04-16 22:20:36 +00:00
|
|
|
err = 0;
|
2014-01-28 01:07:04 +00:00
|
|
|
if (copy_to_user(array, sem_io, nsems*sizeof(ushort)))
|
2005-04-16 22:20:36 +00:00
|
|
|
err = -EFAULT;
|
|
|
|
goto out_free;
|
|
|
|
}
|
|
|
|
case SETALL:
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
struct sem_undo *un;
|
|
|
|
|
2013-05-01 02:15:44 +00:00
|
|
|
if (!ipc_rcu_getref(sma)) {
|
2013-10-16 20:46:45 +00:00
|
|
|
err = -EIDRM;
|
|
|
|
goto out_rcu_wakeup;
|
2013-05-01 02:15:44 +00:00
|
|
|
}
|
2013-05-01 02:15:29 +00:00
|
|
|
rcu_read_unlock();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2014-01-28 01:07:04 +00:00
|
|
|
if (nsems > SEMMSL_FAST) {
|
2005-04-16 22:20:36 +00:00
|
|
|
sem_io = ipc_alloc(sizeof(ushort)*nsems);
|
2014-01-28 01:07:04 +00:00
|
|
|
if (sem_io == NULL) {
|
2016-08-02 21:03:07 +00:00
|
|
|
ipc_rcu_putref(sma, sem_rcu_free);
|
2005-04-16 22:20:36 +00:00
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-01-28 01:07:04 +00:00
|
|
|
if (copy_from_user(sem_io, p, nsems*sizeof(ushort))) {
|
2016-08-02 21:03:07 +00:00
|
|
|
ipc_rcu_putref(sma, sem_rcu_free);
|
2005-04-16 22:20:36 +00:00
|
|
|
err = -EFAULT;
|
|
|
|
goto out_free;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 0; i < nsems; i++) {
|
|
|
|
if (sem_io[i] > SEMVMX) {
|
2016-08-02 21:03:07 +00:00
|
|
|
ipc_rcu_putref(sma, sem_rcu_free);
|
2005-04-16 22:20:36 +00:00
|
|
|
err = -ERANGE;
|
|
|
|
goto out_free;
|
|
|
|
}
|
|
|
|
}
|
2013-05-04 17:13:40 +00:00
|
|
|
rcu_read_lock();
|
2008-04-29 08:00:46 +00:00
|
|
|
sem_lock_and_putref(sma);
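/*
 * Relock the array and drop the temporary reference; if it was removed
 * while the lock was not held, fail with -EIDRM below.
 */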
|
2014-01-28 01:07:01 +00:00
|
|
|
if (!ipc_valid_object(&sma->sem_perm)) {
|
2005-04-16 22:20:36 +00:00
|
|
|
err = -EIDRM;
|
2013-10-16 20:46:45 +00:00
|
|
|
goto out_unlock;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2016-03-22 21:27:48 +00:00
|
|
|
for (i = 0; i < nsems; i++) {
|
2005-04-16 22:20:36 +00:00
|
|
|
sma->sem_base[i].semval = sem_io[i];
|
2016-03-22 21:27:48 +00:00
|
|
|
sma->sem_base[i].sempid = task_tgid_vnr(current);
|
|
|
|
}
|
2008-07-25 08:48:04 +00:00
|
|
|
|
2013-07-08 23:01:11 +00:00
|
|
|
ipc_assert_locked_object(&sma->sem_perm);
|
2008-07-25 08:48:04 +00:00
|
|
|
list_for_each_entry(un, &sma->list_id, list_id) {
|
2005-04-16 22:20:36 +00:00
|
|
|
for (i = 0; i < nsems; i++)
|
|
|
|
un->semadj[i] = 0;
|
2008-07-25 08:48:04 +00:00
|
|
|
}
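/* as with SETVAL, SETALL invalidates all recorded undo adjustments */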
|
2005-04-16 22:20:36 +00:00
|
|
|
sma->sem_ctime = get_seconds();
|
|
|
|
/* maybe some queued-up processes were waiting for this */
|
2016-12-14 23:06:34 +00:00
|
|
|
do_smart_update(sma, NULL, 0, 0, &wake_q);
|
2005-04-16 22:20:36 +00:00
|
|
|
err = 0;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
2013-03-05 20:04:55 +00:00
|
|
|
/* GETVAL, GETPID, GETNCNT, GETZCNT: fall-through */
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
err = -EINVAL;
|
2013-05-04 18:04:29 +00:00
|
|
|
if (semnum < 0 || semnum >= nsems)
|
|
|
|
goto out_rcu_wakeup;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_lock(sma, NULL, -1);
|
2014-01-28 01:07:01 +00:00
|
|
|
if (!ipc_valid_object(&sma->sem_perm)) {
|
2013-10-16 20:46:45 +00:00
|
|
|
err = -EIDRM;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
curr = &sma->sem_base[semnum];
|
|
|
|
|
|
|
|
switch (cmd) {
|
|
|
|
case GETVAL:
|
|
|
|
err = curr->semval;
|
|
|
|
goto out_unlock;
|
|
|
|
case GETPID:
|
|
|
|
err = curr->sempid;
|
|
|
|
goto out_unlock;
|
|
|
|
case GETNCNT:
|
2014-06-06 21:37:48 +00:00
|
|
|
err = count_semcnt(sma, semnum, 0);
|
2005-04-16 22:20:36 +00:00
|
|
|
goto out_unlock;
|
|
|
|
case GETZCNT:
|
2014-06-06 21:37:48 +00:00
|
|
|
err = count_semcnt(sma, semnum, 1);
|
2005-04-16 22:20:36 +00:00
|
|
|
goto out_unlock;
|
|
|
|
}
|
2013-05-01 02:15:29 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
out_unlock:
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_unlock(sma, -1);
|
2013-05-04 18:04:29 +00:00
|
|
|
out_rcu_wakeup:
|
2013-05-03 22:04:40 +00:00
|
|
|
rcu_read_unlock();
|
2016-12-14 23:06:34 +00:00
|
|
|
wake_up_q(&wake_q);
|
2005-04-16 22:20:36 +00:00
|
|
|
out_free:
|
2014-01-28 01:07:04 +00:00
|
|
|
if (sem_io != fast_sem_io)
|
2016-01-22 23:11:02 +00:00
|
|
|
ipc_free(sem_io);
|
2005-04-16 22:20:36 +00:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2008-04-29 08:00:50 +00:00
|
|
|
static inline unsigned long
|
|
|
|
copy_semid_from_user(struct semid64_ds *out, void __user *buf, int version)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2014-01-28 01:07:04 +00:00
|
|
|
switch (version) {
|
2005-04-16 22:20:36 +00:00
|
|
|
case IPC_64:
|
2008-04-29 08:00:50 +00:00
|
|
|
if (copy_from_user(out, buf, sizeof(*out)))
|
2005-04-16 22:20:36 +00:00
|
|
|
return -EFAULT;
|
|
|
|
return 0;
|
|
|
|
case IPC_OLD:
|
|
|
|
{
|
|
|
|
struct semid_ds tbuf_old;
|
|
|
|
|
2014-01-28 01:07:04 +00:00
|
|
|
if (copy_from_user(&tbuf_old, buf, sizeof(tbuf_old)))
|
2005-04-16 22:20:36 +00:00
|
|
|
return -EFAULT;
|
|
|
|
|
2008-04-29 08:00:50 +00:00
|
|
|
out->sem_perm.uid = tbuf_old.sem_perm.uid;
|
|
|
|
out->sem_perm.gid = tbuf_old.sem_perm.gid;
|
|
|
|
out->sem_perm.mode = tbuf_old.sem_perm.mode;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
default:
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2008-04-29 08:00:49 +00:00
|
|
|
/*
|
2013-09-11 21:26:24 +00:00
|
|
|
* This function handles some semctl commands which require the rwsem
|
2008-04-29 08:00:49 +00:00
|
|
|
* to be held in write mode.
|
2013-09-11 21:26:24 +00:00
|
|
|
* NOTE: the caller must not hold any locks; the rwsem is taken inside this function.
|
2008-04-29 08:00:49 +00:00
|
|
|
*/
|
2008-04-29 08:00:49 +00:00
|
|
|
static int semctl_down(struct ipc_namespace *ns, int semid,
|
2013-03-05 20:04:55 +00:00
|
|
|
int cmd, int version, void __user *p)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
struct sem_array *sma;
|
|
|
|
int err;
|
2008-04-29 08:00:50 +00:00
|
|
|
struct semid64_ds semid64;
|
2005-04-16 22:20:36 +00:00
|
|
|
struct kern_ipc_perm *ipcp;
|
|
|
|
|
2014-01-28 01:07:04 +00:00
|
|
|
if (cmd == IPC_SET) {
|
2013-03-05 20:04:55 +00:00
|
|
|
if (copy_semid_from_user(&semid64, p, version))
|
2005-04-16 22:20:36 +00:00
|
|
|
return -EFAULT;
|
|
|
|
}
|
2006-04-02 21:07:33 +00:00
|
|
|
|
2013-09-11 21:26:24 +00:00
|
|
|
down_write(&sem_ids(ns).rwsem);
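/*
 * IPC_RMID removes the set from the namespace-wide id table and
 * IPC_SET updates its permissions; both are serialized by taking the
 * rwsem for writing up front.
 */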
|
2013-07-08 23:01:12 +00:00
|
|
|
rcu_read_lock();
|
|
|
|
|
2013-05-01 02:15:29 +00:00
|
|
|
ipcp = ipcctl_pre_down_nolock(ns, &sem_ids(ns), semid, cmd,
|
|
|
|
&semid64.sem_perm, 0);
|
2013-07-08 23:01:12 +00:00
|
|
|
if (IS_ERR(ipcp)) {
|
|
|
|
err = PTR_ERR(ipcp);
|
|
|
|
goto out_unlock1;
|
|
|
|
}
|
2006-04-02 21:07:33 +00:00
|
|
|
|
2008-04-29 08:00:54 +00:00
|
|
|
sma = container_of(ipcp, struct sem_array, sem_perm);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
err = security_sem_semctl(sma, cmd);
|
2013-07-08 23:01:12 +00:00
|
|
|
if (err)
|
|
|
|
goto out_unlock1;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2013-07-08 23:01:12 +00:00
|
|
|
switch (cmd) {
|
2005-04-16 22:20:36 +00:00
|
|
|
case IPC_RMID:
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_lock(sma, NULL, -1);
|
2013-07-08 23:01:12 +00:00
|
|
|
/* freeary unlocks the ipc object and rcu */
|
2008-02-08 12:18:57 +00:00
|
|
|
freeary(ns, ipcp);
|
2008-04-29 08:00:49 +00:00
|
|
|
goto out_up;
|
2005-04-16 22:20:36 +00:00
|
|
|
case IPC_SET:
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_lock(sma, NULL, -1);
|
2012-02-08 00:54:11 +00:00
|
|
|
err = ipc_update_perm(&semid64.sem_perm, ipcp);
|
|
|
|
if (err)
|
2013-07-08 23:01:12 +00:00
|
|
|
goto out_unlock0;
|
2005-04-16 22:20:36 +00:00
|
|
|
sma->sem_ctime = get_seconds();
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
err = -EINVAL;
|
2013-07-08 23:01:12 +00:00
|
|
|
goto out_unlock1;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2013-07-08 23:01:12 +00:00
|
|
|
out_unlock0:
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_unlock(sma, -1);
|
2013-07-08 23:01:12 +00:00
|
|
|
out_unlock1:
|
ipc: move rcu_read_unlock() out of sem_unlock() and into callers
The IPC locking is a mess, and sem_unlock() unlocks not only the
semaphore spinlock, it also drops the rcu read lock. Unlike sem_lock(),
which just gets the spin-lock, and expects the caller to get the rcu
read lock.
This all makes things very hard to follow, and it's very confusing when
you take the rcu read lock in one function, and then release it in
another. And it has caused actual bugs: the sem_obtain_lock() function
ended up dropping the RCU read lock twice in one error path, because it
first did the sem_unlock(), and then did a rcu_read_unlock() to match
the rcu_read_lock() it had done.
This is just a totally mindless "remove rcu_read_unlock() from
sem_unlock() and add it immediately after each caller" (except for the
aforementioned bug where we did too many rcu_read_unlock(), and in
find_alloc_undo() where we just got the rcu_read_lock() to correct for
the fact that sem_unlock would immediately drop it again).
We can (and should) clean things up further, but this fixes the bug with
the minimal amount of subtlety.
Reviewed-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-03 22:04:40 +00:00
|
|
|
rcu_read_unlock();
|
2008-04-29 08:00:49 +00:00
|
|
|
out_up:
|
2013-09-11 21:26:24 +00:00
|
|
|
up_write(&sem_ids(ns).rwsem);
|
2005-04-16 22:20:36 +00:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
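/*
 * Userspace sketch (illustrative only): IPC_RMID and IPC_SET are the two
 * commands that reach semctl_down(). Removing a set, assuming "semid" came
 * from semget():
 *
 *      if (semctl(semid, 0, IPC_RMID) < 0)
 *              perror("semctl(IPC_RMID)");
 *
 * Removal takes effect immediately: freeary() wakes every task sleeping on
 * the set, and those callers then return with EIDRM rather than keeping the
 * array alive.
 */
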
SYSCALL_DEFINE4(semctl, int, semid, int, semnum, int, cmd, unsigned long, arg)
{
        int version;
        struct ipc_namespace *ns;
        void __user *p = (void __user *)arg;

        if (semid < 0)
                return -EINVAL;

        version = ipc_parse_version(&cmd);
        ns = current->nsproxy->ipc_ns;

        switch (cmd) {
        case IPC_INFO:
        case SEM_INFO:
        case IPC_STAT:
        case SEM_STAT:
                return semctl_nolock(ns, semid, cmd, version, p);
        case GETALL:
        case GETVAL:
        case GETPID:
        case GETNCNT:
        case GETZCNT:
        case SETALL:
                return semctl_main(ns, semid, semnum, cmd, p);
        case SETVAL:
                return semctl_setval(ns, semid, semnum, arg);
        case IPC_RMID:
        case IPC_SET:
                return semctl_down(ns, semid, cmd, version, p);
        default:
                return -EINVAL;
        }
}

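/*
 * Userspace sketch (illustrative only): the switch above fans the classic
 * semctl() commands out to the helpers in this file, e.g. (with union semun
 * as in the sketch further up):
 *
 *      int val = semctl(semid, 0, GETVAL);      handled by semctl_main()
 *
 *      union semun arg = { .val = 1 };
 *      semctl(semid, 0, SETVAL, arg);           handled by semctl_setval()
 */
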
/* If the task doesn't already have an undo_list, then allocate one
 * here.  We guarantee there is only one thread using this undo list,
 * and current is THE ONE
 *
 * If this allocation and assignment succeeds, but later
 * portions of this code fail, there is no need to free the sem_undo_list.
 * Just let it stay associated with the task, and it'll be freed later
 * at exit time.
 *
 * This can block, so callers must hold no locks.
 */
static inline int get_undo_list(struct sem_undo_list **undo_listp)
{
        struct sem_undo_list *undo_list;

        undo_list = current->sysvsem.undo_list;
        if (!undo_list) {
                undo_list = kzalloc(sizeof(*undo_list), GFP_KERNEL);
                if (undo_list == NULL)
                        return -ENOMEM;
                spin_lock_init(&undo_list->lock);
                atomic_set(&undo_list->refcnt, 1);
                INIT_LIST_HEAD(&undo_list->list_proc);

                current->sysvsem.undo_list = undo_list;
        }
        *undo_listp = undo_list;
        return 0;
}

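/*
 * Userspace sketch (illustrative only): the per-task undo list allocated
 * above backs the SEM_UNDO flag. An adjustment recorded this way is rolled
 * back automatically if the task exits without undoing it itself:
 *
 *      struct sembuf op = {
 *              .sem_num = 0,
 *              .sem_op  = -1,           "take" the semaphore
 *              .sem_flg = SEM_UNDO,     record a +1 adjustment for exit time
 *      };
 *      semop(semid, &op, 1);
 */
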
static struct sem_undo *__lookup_undo(struct sem_undo_list *ulp, int semid)
{
        struct sem_undo *un;

        list_for_each_entry_rcu(un, &ulp->list_proc, list_proc) {
                if (un->semid == semid)
                        return un;
        }
        return NULL;
}

static struct sem_undo *lookup_undo(struct sem_undo_list *ulp, int semid)
{
        struct sem_undo *un;

        assert_spin_locked(&ulp->lock);

        un = __lookup_undo(ulp, semid);
        if (un) {
                list_del_rcu(&un->list_proc);
                list_add_rcu(&un->list_proc, &ulp->list_proc);
        }
        return un;
}

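/*
 * Note: lookup_undo() re-queues a hit at the head of ulp->list_proc, a
 * simple most-recently-used ordering based on the assumption that a task
 * usually keeps operating on the same semaphore array, so the next lookup
 * tends to succeed after inspecting a single entry.
 */
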
/**
 * find_alloc_undo - lookup (and if not present create) undo array
 * @ns: namespace
 * @semid: semaphore array id
 *
 * The function looks up (and if not present creates) the undo structure.
 * The size of the undo structure depends on the size of the semaphore
 * array, thus the alloc path is not that straightforward.
 * Lifetime-rules: sem_undo is rcu-protected, on success, the function
 * performs a rcu_read_lock().
 */
static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
{
        struct sem_array *sma;
        struct sem_undo_list *ulp;
        struct sem_undo *un, *new;
        int nsems, error;

        error = get_undo_list(&ulp);
        if (error)
                return ERR_PTR(error);

        rcu_read_lock();
        spin_lock(&ulp->lock);
        un = lookup_undo(ulp, semid);
        spin_unlock(&ulp->lock);
        if (likely(un != NULL))
                goto out;

        /* no undo structure around - allocate one. */
        /* step 1: figure out the size of the semaphore array */
        sma = sem_obtain_object_check(ns, semid);
        if (IS_ERR(sma)) {
                rcu_read_unlock();
                return ERR_CAST(sma);
        }

        nsems = sma->sem_nsems;
        if (!ipc_rcu_getref(sma)) {
                rcu_read_unlock();
                un = ERR_PTR(-EIDRM);
                goto out;
        }
        rcu_read_unlock();

        /* step 2: allocate new undo structure */
        new = kzalloc(sizeof(struct sem_undo) + sizeof(short)*nsems, GFP_KERNEL);
        if (!new) {
                ipc_rcu_putref(sma, sem_rcu_free);
                return ERR_PTR(-ENOMEM);
        }

        /* step 3: Acquire the lock on semaphore array */
        rcu_read_lock();
        sem_lock_and_putref(sma);
        if (!ipc_valid_object(&sma->sem_perm)) {
                sem_unlock(sma, -1);
                rcu_read_unlock();
                kfree(new);
                un = ERR_PTR(-EIDRM);
                goto out;
        }
        spin_lock(&ulp->lock);

        /*
         * step 4: check for races: did someone else allocate the undo struct?
         */
        un = lookup_undo(ulp, semid);
        if (un) {
                kfree(new);
                goto success;
        }
        /* step 5: initialize & link new undo structure */
        new->semadj = (short *) &new[1];
        new->ulp = ulp;
        new->semid = semid;
        assert_spin_locked(&ulp->lock);
        list_add_rcu(&new->list_proc, &ulp->list_proc);
        ipc_assert_locked_object(&sma->sem_perm);
        list_add(&new->list_id, &sma->list_id);
        un = new;

success:
        spin_unlock(&ulp->lock);
        sem_unlock(sma, -1);
out:
        return un;
}

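/*
 * Note on the shape of find_alloc_undo(): the GFP_KERNEL allocation in
 * step 2 may sleep, so it cannot be done under the semaphore or undo-list
 * locks. Hence the sequence above: take a reference, drop the locks,
 * allocate, re-lock, and then re-check both that the array still exists
 * (step 3) and that nobody raced in and linked the same undo entry first
 * (step 4).
 */
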
SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
                unsigned, nsops, const struct timespec __user *, timeout)
{
        int error = -EINVAL;
        struct sem_array *sma;
        struct sembuf fast_sops[SEMOPM_FAST];
        struct sembuf *sops = fast_sops, *sop;
        struct sem_undo *un;
        int max, locknum;
        bool undos = false, alter = false, dupsop = false;
        struct sem_queue queue;
        unsigned long dup = 0, jiffies_left = 0;
        struct ipc_namespace *ns;

        ns = current->nsproxy->ipc_ns;

        if (nsops < 1 || semid < 0)
                return -EINVAL;
        if (nsops > ns->sc_semopm)
                return -E2BIG;
        if (nsops > SEMOPM_FAST) {
                sops = kmalloc(sizeof(*sops)*nsops, GFP_KERNEL);
                if (sops == NULL)
                        return -ENOMEM;
        }

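        /*
         * Note: SEMOPM_FAST keeps the common case of a small operation batch
         * on the kernel stack; only nsops > SEMOPM_FAST pays for the
         * kmalloc() above, and such a heap-allocated sops is freed again on
         * the out_free path.
         */
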
        if (copy_from_user(sops, tsops, nsops * sizeof(*tsops))) {
                error = -EFAULT;
                goto out_free;
        }

        if (timeout) {
                struct timespec _timeout;
                if (copy_from_user(&_timeout, timeout, sizeof(*timeout))) {
                        error = -EFAULT;
                        goto out_free;
                }
                if (_timeout.tv_sec < 0 || _timeout.tv_nsec < 0 ||
                        _timeout.tv_nsec >= 1000000000L) {
                        error = -EINVAL;
                        goto out_free;
                }
                jiffies_left = timespec_to_jiffies(&_timeout);
        }

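        /*
         * Userspace sketch (illustrative only): the timeout parsed above is
         * a relative timespec, and expiry is reported to the caller as
         * EAGAIN:
         *
         *      struct sembuf op = { .sem_num = 0, .sem_op = -1 };
         *      struct timespec ts = { .tv_sec = 2 };
         *
         *      if (semtimedop(semid, &op, 1, &ts) < 0 && errno == EAGAIN)
         *              handle_timeout();        hypothetical helper
         */
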
        max = 0;
        for (sop = sops; sop < sops + nsops; sop++) {
                unsigned long mask = 1ULL << ((sop->sem_num) % BITS_PER_LONG);

                if (sop->sem_num >= max)
                        max = sop->sem_num;
                if (sop->sem_flg & SEM_UNDO)
                        undos = true;
                if (dup & mask) {
                        /*
                         * There was a previous alter access that appears
                         * to have accessed the same semaphore, thus use
                         * the dupsop logic. "appears", because the detection
                         * can only check % BITS_PER_LONG.
                         */
                        dupsop = true;
                }
                if (sop->sem_op != 0) {
                        alter = true;
                        dup |= mask;
                }
        }

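        /*
         * Note on the scan above: "dup" is one unsigned long used as a
         * coarse signature of the sem_nums already altered, so duplicate
         * detection is conservative. On a 64-bit kernel, sem_num 3 and
         * sem_num 67 map to the same bit (67 % BITS_PER_LONG == 3), so
         * operating on both in one call also sets dupsop; the only cost of
         * such a false positive is taking the slower
         * perform_atomic_semop_slow() handling instead of the optimistic
         * two-pass one.
         */
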
        if (undos) {
                /* On success, find_alloc_undo takes the rcu_read_lock */
                un = find_alloc_undo(ns, semid);
                if (IS_ERR(un)) {
                        error = PTR_ERR(un);
                        goto out_free;
                }
        } else {
                un = NULL;
                rcu_read_lock();
        }

        sma = sem_obtain_object_check(ns, semid);
        if (IS_ERR(sma)) {
                rcu_read_unlock();
                error = PTR_ERR(sma);
                goto out_free;
        }

        error = -EFBIG;
        if (max >= sma->sem_nsems) {
                rcu_read_unlock();
                goto out_free;
        }

        error = -EACCES;
        if (ipcperms(ns, &sma->sem_perm, alter ? S_IWUGO : S_IRUGO)) {
                rcu_read_unlock();
                goto out_free;
        }

        error = security_sem_semop(sma, sops, nsops, alter);
        if (error) {
                rcu_read_unlock();
                goto out_free;
        }

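        /*
         * Note: of the two checks above, ipcperms() is why a pure
         * wait-for-zero semop (every sem_op == 0, so "alter" stays false)
         * needs only read permission on the set, while any altering
         * operation needs write permission (S_IRUGO vs S_IWUGO); the LSM
         * hook security_sem_semop() can veto the call on top of that.
         */
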
2013-10-16 20:46:45 +00:00
|
|
|
error = -EIDRM;
|
|
|
|
locknum = sem_lock(sma, sops, nsops);
|
2014-01-28 01:07:01 +00:00
|
|
|
/*
|
|
|
|
* We eventually might perform the following check in a lockless
|
|
|
|
* fashion, considering ipc_valid_object() locking constraints.
|
|
|
|
* If nsops == 1 and there is no contention for sem_perm.lock, then
|
|
|
|
* only a per-semaphore lock is held and it's OK to proceed with the
|
|
|
|
* check below. More details on the fine grained locking scheme
|
|
|
|
* entangled here and why it's RMID race safe on comments at sem_lock()
|
|
|
|
*/
|
|
|
|
if (!ipc_valid_object(&sma->sem_perm))
|
2013-10-16 20:46:45 +00:00
|
|
|
goto out_unlock_free;
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
2008-07-25 08:48:04 +00:00
|
|
|
* semid identifiers are not unique - find_alloc_undo may have
|
2005-04-16 22:20:36 +00:00
|
|
|
* allocated an undo structure, it was invalidated by an RMID
|
2008-07-25 08:48:04 +00:00
|
|
|
* and now a new array with received the same id. Check and fail.
|
2011-03-31 01:57:33 +00:00
|
|
|
* This case can be detected checking un->semid. The existence of
|
2008-07-25 08:48:06 +00:00
|
|
|
* "un" itself is guaranteed by rcu.
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
2013-05-01 02:15:44 +00:00
|
|
|
if (un && un->semid == -1)
|
|
|
|
goto out_unlock_free;
|
2008-07-25 08:48:04 +00:00
|
|
|
|
2014-06-06 21:37:49 +00:00
|
|
|
queue.sops = sops;
|
|
|
|
queue.nsops = nsops;
|
|
|
|
queue.undo = un;
|
|
|
|
queue.pid = task_tgid_vnr(current);
|
|
|
|
queue.alter = alter;
|
ipc/sem: optimize perform_atomic_semop()
This is the main workhorse that deals with semop user calls such that
the waitforzero or semval update operations, on the set, can complete on
not as the sma currently stands. Currently, the set is iterated twice
(setting semval, then backwards for the sempid value). Slowpaths, and
particularly SEM_UNDO calls, must undo any altered sem when it is
detected that the caller must block or has errored-out.
With larger sets, there can occur situations where this involves a lot
of cycles and can obviously be a suboptimal use of cached resources in
shared memory. Ie, discarding CPU caches that are also calling semop
and have the sembuf cached (and can complete), while the current lock
holder doing the semop will block, error, or does a waitforzero
operation.
This patch proposes still iterating the set twice, but the first scan is
read-only, and we perform the actual updates afterward, once we know
that the call will succeed. In order to not suffer from the overhead of
dealing with sops that act on the same sem_num, such (rare) cases use
perform_atomic_semop_slow(), which is exactly what we have now.
Duplicates are detected before grabbing sem_lock, and uses simple a
32/64-bit hash array variable to based on the sem_num we are working on.
In addition add some comments to when we expect to the caller to block.
[akpm@linux-foundation.org: coding-style fixes]
[colin.king@canonical.com: ensure we left shift a ULL rather than a 32 bit integer]
Link: http://lkml.kernel.org/r/20161028181129.7311-1-colin.king@canonical.com
Link: http://lkml.kernel.org/r/20160921194603.GB21438@linux-80c1.suse
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-12-14 23:06:37 +00:00
|
|
|
queue.dupsop = dupsop;
|
2014-06-06 21:37:49 +00:00
|
|
|
|
|
|
|
error = perform_atomic_semop(sma, &queue);
|
ipc/sem: rework task wakeups
2016-12-14 23:06:34 +00:00
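The code below uses the wake_q API that this commit introduces; compressed, the pattern is as follows (a kernel-style sketch, not compilable stand-alone: the lock, list and struct field names are placeholders, only DEFINE_WAKE_Q()/wake_q_add()/wake_up_q() are the real API):

DEFINE_WAKE_Q(wake_q);

spin_lock(&some_lock);			/* placeholder lock */
list_for_each_entry(q, &pending, list) {
	q->status = result;		/* publish the outcome first */
	wake_q_add(&wake_q, q->sleeper);	/* grabs a task reference */
}
spin_unlock(&some_lock);

wake_up_q(&wake_q);			/* actual wakeups, lock no longer held */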
|
|
|
if (error == 0) { /* non-blocking successful path */
|
|
|
|
DEFINE_WAKE_Q(wake_q);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the operation was successful, then do
|
2013-09-30 20:45:25 +00:00
|
|
|
* the required updates.
|
|
|
|
*/
|
|
|
|
if (alter)
|
ipc/sem: rework task wakeups
2016-12-14 23:06:34 +00:00
|
|
|
do_smart_update(sma, sops, nsops, 1, &wake_q);
|
2013-09-30 20:45:25 +00:00
|
|
|
else
|
|
|
|
set_semotime(sma, sops);
|
ipc/sem: rework task wakeups
2016-12-14 23:06:34 +00:00
|
|
|
|
|
|
|
sem_unlock(sma, locknum);
|
|
|
|
rcu_read_unlock();
|
|
|
|
wake_up_q(&wake_q);
|
|
|
|
|
|
|
|
goto out_free;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
ipc/sem: rework task wakeups
2016-12-14 23:06:34 +00:00
|
|
|
if (error < 0) /* non-blocking error path */
|
2013-09-30 20:45:25 +00:00
|
|
|
goto out_unlock_free;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
ipc/sem: rework task wakeups
2016-12-14 23:06:34 +00:00
|
|
|
/*
|
|
|
|
* We need to sleep on this operation, so we put the current
|
2005-04-16 22:20:36 +00:00
|
|
|
* task into the pending queue and go to sleep.
|
|
|
|
*/
|
2009-12-16 00:47:32 +00:00
|
|
|
if (nsops == 1) {
|
|
|
|
struct sem *curr;
|
|
|
|
curr = &sma->sem_base[sops->sem_num];
|
|
|
|
|
2013-07-08 23:01:24 +00:00
|
|
|
if (alter) {
|
|
|
|
if (sma->complex_count) {
|
|
|
|
list_add_tail(&queue.list,
|
|
|
|
&sma->pending_alter);
|
|
|
|
} else {
|
|
|
|
|
|
|
|
list_add_tail(&queue.list,
|
|
|
|
&curr->pending_alter);
|
|
|
|
}
|
|
|
|
} else {
|
2013-07-08 23:01:23 +00:00
|
|
|
list_add_tail(&queue.list, &curr->pending_const);
|
2013-07-08 23:01:24 +00:00
|
|
|
}
|
2009-12-16 00:47:32 +00:00
|
|
|
} else {
|
2013-07-08 23:01:24 +00:00
|
|
|
if (!sma->complex_count)
|
|
|
|
merge_queues(sma);
|
|
|
|
|
2013-05-01 02:15:39 +00:00
|
|
|
if (alter)
|
2013-07-08 23:01:23 +00:00
|
|
|
list_add_tail(&queue.list, &sma->pending_alter);
|
2013-05-01 02:15:39 +00:00
|
|
|
else
|
2013-07-08 23:01:23 +00:00
|
|
|
list_add_tail(&queue.list, &sma->pending_const);
|
|
|
|
|
2009-12-16 00:47:32 +00:00
|
|
|
sma->complex_count++;
|
|
|
|
}
|
|
|
|
|
2016-12-14 23:06:46 +00:00
|
|
|
do {
|
|
|
|
queue.status = -EINTR;
|
|
|
|
queue.sleeper = current;
|
2011-11-02 20:38:52 +00:00
|
|
|
|
2016-12-14 23:06:46 +00:00
|
|
|
__set_current_state(TASK_INTERRUPTIBLE);
|
|
|
|
sem_unlock(sma, locknum);
|
|
|
|
rcu_read_unlock();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2016-12-14 23:06:46 +00:00
|
|
|
if (timeout)
|
|
|
|
jiffies_left = schedule_timeout(jiffies_left);
|
|
|
|
else
|
|
|
|
schedule();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
ipc/sem: rework task wakeups
2016-12-14 23:06:34 +00:00
|
|
|
/*
|
2016-12-14 23:06:46 +00:00
|
|
|
* fastpath: the semop has completed, either successfully or
|
|
|
|
* not; from the syscall's pov that is quite irrelevant to us at this
|
|
|
|
* point; we're done.
|
|
|
|
*
|
|
|
|
* We _do_ care, nonetheless, about being awoken by a signal or
|
|
|
|
* spuriously. The queue.status is checked again in the
|
|
|
|
* slowpath (aka after taking sem_lock), such that we can detect
|
|
|
|
* scenarios where we were awakened externally, during the
|
|
|
|
* window between wake_q_add() and wake_up_q().
|
2010-07-20 20:24:23 +00:00
|
|
|
*/
|
2016-12-14 23:06:46 +00:00
|
|
|
error = READ_ONCE(queue.status);
|
|
|
|
if (error != -EINTR) {
|
|
|
|
/*
|
|
|
|
* User space could assume that semop() is a memory
|
|
|
|
* barrier: Without the mb(), the cpu could
|
|
|
|
* speculatively read in userspace stale data that was
|
|
|
|
* overwritten by the previous owner of the semaphore.
|
|
|
|
*/
|
|
|
|
smp_mb();
|
|
|
|
goto out_free;
|
|
|
|
}
|
2011-07-26 00:11:47 +00:00
|
|
|
|
2016-12-14 23:06:46 +00:00
|
|
|
rcu_read_lock();
|
2017-01-11 00:57:48 +00:00
|
|
|
locknum = sem_lock(sma, sops, nsops);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2016-12-14 23:06:49 +00:00
|
|
|
if (!ipc_valid_object(&sma->sem_perm))
|
|
|
|
goto out_unlock_free;
|
|
|
|
|
|
|
|
error = READ_ONCE(queue.status);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2016-12-14 23:06:46 +00:00
|
|
|
/*
|
|
|
|
* If queue.status != -EINTR we are woken up by another process.
|
|
|
|
* Leave without unlink_queue(), but with sem_unlock().
|
|
|
|
*/
|
|
|
|
if (error != -EINTR)
|
|
|
|
goto out_unlock_free;
|
2011-11-02 20:38:52 +00:00
|
|
|
|
2016-12-14 23:06:46 +00:00
|
|
|
/*
|
|
|
|
* If an interrupt occurred we have to clean up the queue.
|
|
|
|
*/
|
|
|
|
if (timeout && jiffies_left == 0)
|
|
|
|
error = -EAGAIN;
|
|
|
|
} while (error == -EINTR && !signal_pending(current)); /* spurious */
|
2011-11-02 20:38:52 +00:00
|
|
|
|
2009-12-16 00:47:32 +00:00
|
|
|
unlink_queue(sma, &queue);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
out_unlock_free:
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_unlock(sma, locknum);
|
ipc: move rcu_read_unlock() out of sem_unlock() and into callers
The IPC locking is a mess, and sem_unlock() unlocks not only the
semaphore spinlock, it also drops the rcu read lock. Unlike sem_lock(),
which just gets the spin-lock, and expects the caller to get the rcu
read lock.
This all makes things very hard to follow, and it's very confusing when
you take the rcu read lock in one function, and then release it in
another. And it has caused actual bugs: the sem_obtain_lock() function
ended up dropping the RCU read lock twice in one error path, because it
first did the sem_unlock(), and then did a rcu_read_unlock() to match
the rcu_read_lock() it had done.
This is just a totally mindless "remove rcu_read_unlock() from
sem_unlock() and add it immediately after each caller" (except for the
aforementioned bug where we did too many rcu_read_unlock(), and in
find_alloc_undo() where we just got the rcu_read_lock() to correct for
the fact that sem_unlock would immediately drop it again).
We can (and should) clean things up further, but this fixes the bug with
the minimal amount of subtlety.
Reviewed-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-03 22:04:40 +00:00
|
|
|
rcu_read_unlock();
|
2005-04-16 22:20:36 +00:00
|
|
|
out_free:
|
2014-01-28 01:07:04 +00:00
|
|
|
if (sops != fast_sops)
|
2005-04-16 22:20:36 +00:00
|
|
|
kfree(sops);
|
|
|
|
return error;
|
|
|
|
}
|
|
|
|
|
2009-01-14 13:14:27 +00:00
|
|
|
SYSCALL_DEFINE3(semop, int, semid, struct sembuf __user *, tsops,
|
|
|
|
unsigned, nsops)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
return sys_semtimedop(semid, tsops, nsops, NULL);
|
|
|
|
}
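For reference, a minimal userspace caller of this wrapper (an illustrative sketch; error handling kept to the bare minimum):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main(void)
{
	int semid = semget(IPC_PRIVATE, 1, 0600 | IPC_CREAT);
	union semun arg = { .val = 1 };	/* one semaphore, used as a mutex */
	struct sembuf down = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };
	struct sembuf up   = { .sem_num = 0, .sem_op = +1, .sem_flg = 0 };

	if (semid < 0 || semctl(semid, 0, SETVAL, arg) < 0) {
		perror("semget/semctl");
		return 1;
	}

	semop(semid, &down, 1);		/* P(): may sleep in sys_semtimedop() */
	/* ... critical section ... */
	semop(semid, &up, 1);		/* V(): wakes up any sleeper */

	semctl(semid, 0, IPC_RMID, arg);	/* remove the set */
	return 0;
}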
|
|
|
|
|
|
|
|
/* If CLONE_SYSVSEM is set, establish sharing of SEM_UNDO state between
|
|
|
|
* parent and child tasks.
|
|
|
|
*/
|
|
|
|
|
|
|
|
int copy_semundo(unsigned long clone_flags, struct task_struct *tsk)
|
|
|
|
{
|
|
|
|
struct sem_undo_list *undo_list;
|
|
|
|
int error;
|
|
|
|
|
|
|
|
if (clone_flags & CLONE_SYSVSEM) {
|
|
|
|
error = get_undo_list(&undo_list);
|
|
|
|
if (error)
|
|
|
|
return error;
|
|
|
|
atomic_inc(&undo_list->refcnt);
|
|
|
|
tsk->sysvsem.undo_list = undo_list;
|
2014-06-06 21:37:37 +00:00
|
|
|
} else
|
2005-04-16 22:20:36 +00:00
|
|
|
tsk->sysvsem.undo_list = NULL;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* add semadj values to semaphores, free undo structures.
|
|
|
|
* undo structures are not freed when semaphore arrays are destroyed
|
|
|
|
* so some of them may be out of date.
|
|
|
|
* IMPLEMENTATION NOTE: There is some confusion over whether the
|
|
|
|
* set of adjustments that needs to be done should be done in an atomic
|
|
|
|
* manner or not. That is, if we are attempting to decrement the semval
|
|
|
|
* should we queue up and wait until we can do so legally?
|
|
|
|
* The original implementation attempted to do this (queue and wait).
|
|
|
|
* The current implementation does not do so. The POSIX standard
|
|
|
|
* and SVID should be consulted to determine what behavior is mandated.
|
|
|
|
*/
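The semadj handling described above can be observed from userspace with a small self-contained example (a sketch, error handling omitted): the child decrements with SEM_UNDO and exits, and exit_sem() re-applies the recorded adjustment, restoring the semaphore value.

#include <stdio.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>

union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main(void)
{
	int semid = semget(IPC_PRIVATE, 1, 0600 | IPC_CREAT);
	union semun arg = { .val = 1 };
	struct sembuf dec = { .sem_num = 0, .sem_op = -1, .sem_flg = SEM_UNDO };

	semctl(semid, 0, SETVAL, arg);

	if (fork() == 0) {
		semop(semid, &dec, 1);	/* semval 1 -> 0, semadj of +1 recorded */
		_exit(0);		/* exit_sem() adds the semadj back */
	}
	wait(NULL);

	/* prints 1: the undo restored the child's decrement */
	printf("semval after child exit: %d\n", semctl(semid, 0, GETVAL));

	semctl(semid, 0, IPC_RMID, arg);
	return 0;
}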
|
|
|
|
void exit_sem(struct task_struct *tsk)
|
|
|
|
{
|
2008-07-25 08:48:04 +00:00
|
|
|
struct sem_undo_list *ulp;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-07-25 08:48:04 +00:00
|
|
|
ulp = tsk->sysvsem.undo_list;
|
|
|
|
if (!ulp)
|
2005-04-16 22:20:36 +00:00
|
|
|
return;
|
2008-04-29 08:00:57 +00:00
|
|
|
tsk->sysvsem.undo_list = NULL;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-07-25 08:48:04 +00:00
|
|
|
if (!atomic_dec_and_test(&ulp->refcnt))
|
2005-04-16 22:20:36 +00:00
|
|
|
return;
|
|
|
|
|
2008-07-25 08:48:06 +00:00
|
|
|
for (;;) {
|
2005-04-16 22:20:36 +00:00
|
|
|
struct sem_array *sma;
|
2008-07-25 08:48:06 +00:00
|
|
|
struct sem_undo *un;
|
2013-05-01 02:15:44 +00:00
|
|
|
int semid, i;
|
ipc/sem: rework task wakeups
2016-12-14 23:06:34 +00:00
|
|
|
DEFINE_WAKE_Q(wake_q);
|
2008-07-25 08:48:04 +00:00
|
|
|
|
2016-10-11 20:55:05 +00:00
|
|
|
cond_resched();
|
|
|
|
|
2008-07-25 08:48:06 +00:00
|
|
|
rcu_read_lock();
|
2009-04-14 18:17:16 +00:00
|
|
|
un = list_entry_rcu(ulp->list_proc.next,
|
|
|
|
struct sem_undo, list_proc);
|
ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits
The current semaphore code allows a potential use after free: in
exit_sem we may free the task's sem_undo_list while there is still
another task looping through the same semaphore set and cleaning the
sem_undo list at freeary function (the task called IPC_RMID for the same
semaphore set).
For example, with a test program [1] running which keeps forking a lot
of processes (which then do a semop call with SEM_UNDO flag), and with
the parent right after removing the semaphore set with IPC_RMID, and a
kernel built with CONFIG_SLAB, CONFIG_SLAB_DEBUG and
CONFIG_DEBUG_SPINLOCK, you can easily see something like the following
in the kernel log:
Slab corruption (Not tainted): kmalloc-64 start=ffff88003b45c1c0, len=64
000: 6b 6b 6b 6b 6b 6b 6b 6b 00 6b 6b 6b 6b 6b 6b 6b kkkkkkkk.kkkkkkk
010: ff ff ff ff 6b 6b 6b 6b ff ff ff ff ff ff ff ff ....kkkk........
Prev obj: start=ffff88003b45c180, len=64
000: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a .....N......ZZZZ
010: ff ff ff ff ff ff ff ff c0 fb 01 37 00 88 ff ff ...........7....
Next obj: start=ffff88003b45c200, len=64
000: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a .....N......ZZZZ
010: ff ff ff ff ff ff ff ff 68 29 a7 3c 00 88 ff ff ........h).<....
BUG: spinlock wrong CPU on CPU#2, test/18028
general protection fault: 0000 [#1] SMP
Modules linked in: 8021q mrp garp stp llc nf_conntrack_ipv4 nf_defrag_ipv4 ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables binfmt_misc ppdev input_leds joydev parport_pc parport floppy serio_raw virtio_balloon virtio_rng virtio_console virtio_net iosf_mbi crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr qxl ttm drm_kms_helper drm snd_hda_codec_generic i2c_piix4 snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore crc32c_intel virtio_pci virtio_ring virtio pata_acpi ata_generic [last unloaded: speedstep_lib]
CPU: 2 PID: 18028 Comm: test Not tainted 4.2.0-rc5+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.1-20150318_183358- 04/01/2014
RIP: spin_dump+0x53/0xc0
Call Trace:
spin_bug+0x30/0x40
do_raw_spin_unlock+0x71/0xa0
_raw_spin_unlock+0xe/0x10
freeary+0x82/0x2a0
? _raw_spin_lock+0xe/0x10
semctl_down.clone.0+0xce/0x160
? __do_page_fault+0x19a/0x430
? __audit_syscall_entry+0xa8/0x100
SyS_semctl+0x236/0x2c0
? syscall_trace_leave+0xde/0x130
entry_SYSCALL_64_fastpath+0x12/0x71
Code: 8b 80 88 03 00 00 48 8d 88 60 05 00 00 48 c7 c7 a0 2c a4 81 31 c0 65 8b 15 eb 40 f3 7e e8 08 31 68 00 4d 85 e4 44 8b 4b 08 74 5e <45> 8b 84 24 88 03 00 00 49 8d 8c 24 60 05 00 00 8b 53 04 48 89
RIP [<ffffffff810d6053>] spin_dump+0x53/0xc0
RSP <ffff88003750fd68>
---[ end trace 783ebb76612867a0 ]---
NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [test:18053]
Modules linked in: 8021q mrp garp stp llc nf_conntrack_ipv4 nf_defrag_ipv4 ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables binfmt_misc ppdev input_leds joydev parport_pc parport floppy serio_raw virtio_balloon virtio_rng virtio_console virtio_net iosf_mbi crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr qxl ttm drm_kms_helper drm snd_hda_codec_generic i2c_piix4 snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore crc32c_intel virtio_pci virtio_ring virtio pata_acpi ata_generic [last unloaded: speedstep_lib]
CPU: 3 PID: 18053 Comm: test Tainted: G D 4.2.0-rc5+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.1-20150318_183358- 04/01/2014
RIP: native_read_tsc+0x0/0x20
Call Trace:
? delay_tsc+0x40/0x70
__delay+0xf/0x20
do_raw_spin_lock+0x96/0x140
_raw_spin_lock+0xe/0x10
sem_lock_and_putref+0x11/0x70
SYSC_semtimedop+0x7bf/0x960
? handle_mm_fault+0xbf6/0x1880
? dequeue_task_fair+0x79/0x4a0
? __do_page_fault+0x19a/0x430
? kfree_debugcheck+0x16/0x40
? __do_page_fault+0x19a/0x430
? __audit_syscall_entry+0xa8/0x100
? do_audit_syscall_entry+0x66/0x70
? syscall_trace_enter_phase1+0x139/0x160
SyS_semtimedop+0xe/0x10
SyS_semop+0x10/0x20
entry_SYSCALL_64_fastpath+0x12/0x71
Code: 47 10 83 e8 01 85 c0 89 47 10 75 08 65 48 89 3d 1f 74 ff 7e c9 c3 0f 1f 44 00 00 55 48 89 e5 e8 87 17 04 00 66 90 c9 c3 0f 1f 00 <55> 48 89 e5 0f 31 89 c1 48 89 d0 48 c1 e0 20 89 c9 48 09 c8 c9
Kernel panic - not syncing: softlockup: hung tasks
I wasn't able to trigger any badness on a recent kernel without the
proper config debugs enabled, however I have softlockup reports on some
kernel versions, in the semaphore code, which are similar to the above (the
scenario is seen on some servers running IBM DB2 which uses semaphore
syscalls).
The patch here fixes the race against freeary, by acquiring or waiting
on the sem_undo_list lock as necessary (exit_sem can race with freeary,
while freeary sets un->semid to -1 and removes the same sem_undo from
list_proc or when it removes the last sem_undo).
After the patch I'm unable to reproduce the problem using the test case
[1].
[1] Test case used below:
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <errno.h>
#define NSEM 1
#define NSET 5
int sid[NSET];
void thread()
{
struct sembuf op;
int s;
uid_t pid = getuid();
s = rand() % NSET;
op.sem_num = pid % NSEM;
op.sem_op = 1;
op.sem_flg = SEM_UNDO;
semop(sid[s], &op, 1);
exit(EXIT_SUCCESS);
}
void create_set()
{
int i, j;
pid_t p;
union {
int val;
struct semid_ds *buf;
unsigned short int *array;
struct seminfo *__buf;
} un;
/* Create and initialize semaphore set */
for (i = 0; i < NSET; i++) {
sid[i] = semget(IPC_PRIVATE , NSEM, 0644 | IPC_CREAT);
if (sid[i] < 0) {
perror("semget");
exit(EXIT_FAILURE);
}
}
un.val = 0;
for (i = 0; i < NSET; i++) {
for (j = 0; j < NSEM; j++) {
if (semctl(sid[i], j, SETVAL, un) < 0)
perror("semctl");
}
}
/* Launch threads that operate on semaphore set */
for (i = 0; i < NSEM * NSET * NSET; i++) {
p = fork();
if (p < 0)
perror("fork");
if (p == 0)
thread();
}
/* Free semaphore set */
for (i = 0; i < NSET; i++) {
if (semctl(sid[i], NSEM, IPC_RMID))
perror("IPC_RMID");
}
/* Wait for forked processes to exit */
while (wait(NULL)) {
if (errno == ECHILD)
break;
};
}
int main(int argc, char **argv)
{
pid_t p;
srand(time(NULL));
while (1) {
p = fork();
if (p < 0) {
perror("fork");
exit(EXIT_FAILURE);
}
if (p == 0) {
create_set();
goto end;
}
/* Wait for forked processes to exit */
while (wait(NULL)) {
if (errno == ECHILD)
break;
};
}
end:
return 0;
}
[akpm@linux-foundation.org: use normal comment layout]
Signed-off-by: Herton R. Krzesinski <herton@redhat.com>
Acked-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Rafael Aquini <aquini@redhat.com>
CC: Aristeu Rozanski <aris@redhat.com>
Cc: David Jeffery <djeffery@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 22:35:02 +00:00
|
|
|
if (&un->list_proc == &ulp->list_proc) {
|
|
|
|
/*
|
|
|
|
* We must wait for freeary() before freeing this ulp,
|
|
|
|
* in case we raced with last sem_undo. There is a small
|
|
|
|
* possibility that we exit while freeary() didn't
|
|
|
|
* finish unlocking sem_undo_list.
|
|
|
|
*/
|
|
|
|
spin_unlock_wait(&ulp->lock);
|
|
|
|
rcu_read_unlock();
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
spin_lock(&ulp->lock);
|
|
|
|
semid = un->semid;
|
|
|
|
spin_unlock(&ulp->lock);
|
2008-07-25 08:48:04 +00:00
|
|
|
|
ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits
2015-08-14 22:35:02 +00:00
|
|
|
/* exit_sem raced with IPC_RMID, nothing to do */
|
2013-05-01 02:15:44 +00:00
|
|
|
if (semid == -1) {
|
|
|
|
rcu_read_unlock();
|
ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits
The current semaphore code allows a potential use after free: in
exit_sem we may free the task's sem_undo_list while there is still
another task looping through the same semaphore set and cleaning the
sem_undo list at freeary function (the task called IPC_RMID for the same
semaphore set).
For example, with a test program [1] running which keeps forking a lot
of processes (which then do a semop call with SEM_UNDO flag), and with
the parent right after removing the semaphore set with IPC_RMID, and a
kernel built with CONFIG_SLAB, CONFIG_SLAB_DEBUG and
CONFIG_DEBUG_SPINLOCK, you can easily see something like the following
in the kernel log:
Slab corruption (Not tainted): kmalloc-64 start=ffff88003b45c1c0, len=64
000: 6b 6b 6b 6b 6b 6b 6b 6b 00 6b 6b 6b 6b 6b 6b 6b kkkkkkkk.kkkkkkk
010: ff ff ff ff 6b 6b 6b 6b ff ff ff ff ff ff ff ff ....kkkk........
Prev obj: start=ffff88003b45c180, len=64
000: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a .....N......ZZZZ
010: ff ff ff ff ff ff ff ff c0 fb 01 37 00 88 ff ff ...........7....
Next obj: start=ffff88003b45c200, len=64
000: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a .....N......ZZZZ
010: ff ff ff ff ff ff ff ff 68 29 a7 3c 00 88 ff ff ........h).<....
BUG: spinlock wrong CPU on CPU#2, test/18028
general protection fault: 0000 [#1] SMP
Modules linked in: 8021q mrp garp stp llc nf_conntrack_ipv4 nf_defrag_ipv4 ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables binfmt_misc ppdev input_leds joydev parport_pc parport floppy serio_raw virtio_balloon virtio_rng virtio_console virtio_net iosf_mbi crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr qxl ttm drm_kms_helper drm snd_hda_codec_generic i2c_piix4 snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore crc32c_intel virtio_pci virtio_ring virtio pata_acpi ata_generic [last unloaded: speedstep_lib]
CPU: 2 PID: 18028 Comm: test Not tainted 4.2.0-rc5+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.1-20150318_183358- 04/01/2014
RIP: spin_dump+0x53/0xc0
Call Trace:
spin_bug+0x30/0x40
do_raw_spin_unlock+0x71/0xa0
_raw_spin_unlock+0xe/0x10
freeary+0x82/0x2a0
? _raw_spin_lock+0xe/0x10
semctl_down.clone.0+0xce/0x160
? __do_page_fault+0x19a/0x430
? __audit_syscall_entry+0xa8/0x100
SyS_semctl+0x236/0x2c0
? syscall_trace_leave+0xde/0x130
entry_SYSCALL_64_fastpath+0x12/0x71
Code: 8b 80 88 03 00 00 48 8d 88 60 05 00 00 48 c7 c7 a0 2c a4 81 31 c0 65 8b 15 eb 40 f3 7e e8 08 31 68 00 4d 85 e4 44 8b 4b 08 74 5e <45> 8b 84 24 88 03 00 00 49 8d 8c 24 60 05 00 00 8b 53 04 48 89
RIP [<ffffffff810d6053>] spin_dump+0x53/0xc0
RSP <ffff88003750fd68>
---[ end trace 783ebb76612867a0 ]---
NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [test:18053]
Modules linked in: 8021q mrp garp stp llc nf_conntrack_ipv4 nf_defrag_ipv4 ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables binfmt_misc ppdev input_leds joydev parport_pc parport floppy serio_raw virtio_balloon virtio_rng virtio_console virtio_net iosf_mbi crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr qxl ttm drm_kms_helper drm snd_hda_codec_generic i2c_piix4 snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore crc32c_intel virtio_pci virtio_ring virtio pata_acpi ata_generic [last unloaded: speedstep_lib]
CPU: 3 PID: 18053 Comm: test Tainted: G D 4.2.0-rc5+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.1-20150318_183358- 04/01/2014
RIP: native_read_tsc+0x0/0x20
Call Trace:
? delay_tsc+0x40/0x70
__delay+0xf/0x20
do_raw_spin_lock+0x96/0x140
_raw_spin_lock+0xe/0x10
sem_lock_and_putref+0x11/0x70
SYSC_semtimedop+0x7bf/0x960
? handle_mm_fault+0xbf6/0x1880
? dequeue_task_fair+0x79/0x4a0
? __do_page_fault+0x19a/0x430
? kfree_debugcheck+0x16/0x40
? __do_page_fault+0x19a/0x430
? __audit_syscall_entry+0xa8/0x100
? do_audit_syscall_entry+0x66/0x70
? syscall_trace_enter_phase1+0x139/0x160
SyS_semtimedop+0xe/0x10
SyS_semop+0x10/0x20
entry_SYSCALL_64_fastpath+0x12/0x71
Code: 47 10 83 e8 01 85 c0 89 47 10 75 08 65 48 89 3d 1f 74 ff 7e c9 c3 0f 1f 44 00 00 55 48 89 e5 e8 87 17 04 00 66 90 c9 c3 0f 1f 00 <55> 48 89 e5 0f 31 89 c1 48 89 d0 48 c1 e0 20 89 c9 48 09 c8 c9
Kernel panic - not syncing: softlockup: hung tasks
I wasn't able to trigger any badness on a recent kernel without the
proper config debugs enabled, however I have softlockup reports on some
kernel versions, in the semaphore code, which are similar as above (the
scenario is seen on some servers running IBM DB2 which uses semaphore
syscalls).
The patch here fixes the race against freeary, by acquiring or waiting
on the sem_undo_list lock as necessary (exit_sem can race with freeary,
while freeary sets un->semid to -1 and removes the same sem_undo from
list_proc or when it removes the last sem_undo).
After the patch I'm unable to reproduce the problem using the test case
[1].
[1] Test case used below:
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <errno.h>
#define NSEM 1
#define NSET 5
int sid[NSET];
void thread()
{
struct sembuf op;
int s;
uid_t pid = getuid();
s = rand() % NSET;
op.sem_num = pid % NSEM;
op.sem_op = 1;
op.sem_flg = SEM_UNDO;
semop(sid[s], &op, 1);
exit(EXIT_SUCCESS);
}
void create_set()
{
int i, j;
pid_t p;
union {
int val;
struct semid_ds *buf;
unsigned short int *array;
struct seminfo *__buf;
} un;
/* Create and initialize semaphore set */
for (i = 0; i < NSET; i++) {
sid[i] = semget(IPC_PRIVATE , NSEM, 0644 | IPC_CREAT);
if (sid[i] < 0) {
perror("semget");
exit(EXIT_FAILURE);
}
}
un.val = 0;
for (i = 0; i < NSET; i++) {
for (j = 0; j < NSEM; j++) {
if (semctl(sid[i], j, SETVAL, un) < 0)
perror("semctl");
}
}
/* Launch threads that operate on semaphore set */
for (i = 0; i < NSEM * NSET * NSET; i++) {
p = fork();
if (p < 0)
perror("fork");
if (p == 0)
thread();
}
/* Free semaphore set */
for (i = 0; i < NSET; i++) {
if (semctl(sid[i], NSEM, IPC_RMID))
perror("IPC_RMID");
}
/* Wait for forked processes to exit */
while (wait(NULL)) {
if (errno == ECHILD)
break;
};
}
int main(int argc, char **argv)
{
pid_t p;
srand(time(NULL));
while (1) {
p = fork();
if (p < 0) {
perror("fork");
exit(EXIT_FAILURE);
}
if (p == 0) {
create_set();
goto end;
}
/* Wait for forked processes to exit */
while (wait(NULL)) {
if (errno == ECHILD)
break;
};
}
end:
return 0;
}
[akpm@linux-foundation.org: use normal comment layout]
Signed-off-by: Herton R. Krzesinski <herton@redhat.com>
Acked-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Rafael Aquini <aquini@redhat.com>
CC: Aristeu Rozanski <aris@redhat.com>
Cc: David Jeffery <djeffery@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 22:35:02 +00:00
|
|
|
continue;
|
2013-05-01 02:15:44 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits
2015-08-14 22:35:02 +00:00
|
|
|
sma = sem_obtain_object_check(tsk->nsproxy->ipc_ns, semid);
|
2008-07-25 08:48:06 +00:00
|
|
|
/* exit_sem raced with IPC_RMID, nothing to do */
|
2013-05-01 02:15:44 +00:00
|
|
|
if (IS_ERR(sma)) {
|
|
|
|
rcu_read_unlock();
|
2008-07-25 08:48:06 +00:00
|
|
|
continue;
|
2013-05-01 02:15:44 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_lock(sma, NULL, -1);
|
2013-10-16 20:46:45 +00:00
|
|
|
/* exit_sem raced with IPC_RMID, nothing to do */
|
2014-01-28 01:07:01 +00:00
|
|
|
if (!ipc_valid_object(&sma->sem_perm)) {
|
2013-10-16 20:46:45 +00:00
|
|
|
sem_unlock(sma, -1);
|
|
|
|
rcu_read_unlock();
|
|
|
|
continue;
|
|
|
|
}
|
2009-12-16 00:47:28 +00:00
|
|
|
un = __lookup_undo(ulp, semid);
|
2008-07-25 08:48:06 +00:00
|
|
|
if (un == NULL) {
|
|
|
|
/* exit_sem raced with IPC_RMID+semget() that created
|
|
|
|
* exactly the same semid. Nothing to do.
|
|
|
|
*/
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_unlock(sma, -1);
|
ipc: move rcu_read_unlock() out of sem_unlock() and into callers
2013-05-03 22:04:40 +00:00
|
|
|
rcu_read_unlock();
|
2008-07-25 08:48:06 +00:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* remove un from the linked lists */
|
2013-07-08 23:01:11 +00:00
|
|
|
ipc_assert_locked_object(&sma->sem_perm);
|
2008-07-25 08:48:04 +00:00
|
|
|
list_del(&un->list_id);
|
|
|
|
|
2015-08-14 22:35:05 +00:00
|
|
|
/* we are the last process using this ulp, so acquiring ulp->lock
|
|
|
|
* isn't required. Besides that, we are also protected against
|
|
|
|
* IPC_RMID as we hold sma->sem_perm lock now
|
|
|
|
*/
|
2008-07-25 08:48:06 +00:00
|
|
|
list_del_rcu(&un->list_proc);
|
|
|
|
|
2008-07-25 08:48:04 +00:00
|
|
|
/* perform adjustments registered in un */
|
|
|
|
for (i = 0; i < sma->sem_nsems; i++) {
|
2014-01-28 01:07:04 +00:00
|
|
|
struct sem *semaphore = &sma->sem_base[i];
|
2008-07-25 08:48:04 +00:00
|
|
|
if (un->semadj[i]) {
|
|
|
|
semaphore->semval += un->semadj[i];
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
|
|
|
* Range checks of the new semaphore value,
|
|
|
|
* not defined by sus:
|
|
|
|
* - Some unices ignore the undo entirely
|
|
|
|
* (e.g. HP UX 11i 11.22, Tru64 V5.1)
|
|
|
|
* - some cap the value (e.g. FreeBSD caps
|
|
|
|
* at 0, but doesn't enforce SEMVMX)
|
|
|
|
*
|
|
|
|
* Linux caps the semaphore value, both at 0
|
|
|
|
* and at SEMVMX.
|
|
|
|
*
|
2014-01-28 01:07:04 +00:00
|
|
|
* Manfred <manfred@colorfullife.com>
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
2006-03-26 09:37:17 +00:00
|
|
|
if (semaphore->semval < 0)
|
|
|
|
semaphore->semval = 0;
|
|
|
|
if (semaphore->semval > SEMVMX)
|
|
|
|
semaphore->semval = SEMVMX;
|
2007-10-19 06:40:14 +00:00
|
|
|
semaphore->sempid = task_tgid_vnr(current);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
}
|
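A userspace view of the two clamps applied in the undo loop above, as a minimal sketch of my own (values and flow are illustrative, not taken from any commit): the child's pending SEM_UNDO adjustment of -3 is applied at exit against a semval of 1, and the result is pinned at 0 instead of going to -2.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>
#include <unistd.h>

union semun { int val; };

int main(void)
{
	union semun un = { .val = 0 };
	struct sembuf op;
	int sid = semget(IPC_PRIVATE, 1, 0600 | IPC_CREAT);

	if (sid < 0 || semctl(sid, 0, SETVAL, un) < 0) {
		perror("sem setup");
		exit(EXIT_FAILURE);
	}
	if (fork() == 0) {
		/* child: +3 with SEM_UNDO, so its semadj becomes -3 */
		op = (struct sembuf){ .sem_num = 0, .sem_op = 3, .sem_flg = SEM_UNDO };
		semop(sid, &op, 1);
		sleep(1);		/* give the parent time to consume two of them */
		exit(0);		/* exit-time undo now applies semadj = -3 */
	}
	/* parent: blocks until semval >= 2, then takes 2 (semval becomes 1) */
	op = (struct sembuf){ .sem_num = 0, .sem_op = -2, .sem_flg = 0 };
	if (semop(sid, &op, 1) < 0)
		perror("semop in parent");
	wait(NULL);
	/* 1 + (-3) would be -2, but the undo result is clamped at 0 */
	printf("semval after child exit: %d\n", semctl(sid, 0, GETVAL));
	semctl(sid, 0, IPC_RMID);
	return 0;
}

On a stock kernel this should print 0; the SEMVMX check works the same way at the upper bound.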
|
|
|
/* maybe some queued-up processes were waiting for this */
|
2016-12-14 23:06:34 +00:00
|
|
|
do_smart_update(sma, NULL, 0, 1, &wake_q);
|
2013-05-01 02:15:44 +00:00
|
|
|
sem_unlock(sma, -1);
|
2013-05-03 22:04:40 +00:00
|
|
|
rcu_read_unlock();
|
2016-12-14 23:06:34 +00:00
|
|
|
wake_up_q(&wake_q);
|
2008-07-25 08:48:06 +00:00
|
|
|
|
2011-03-18 04:09:35 +00:00
|
|
|
kfree_rcu(un, rcu);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
2008-07-25 08:48:04 +00:00
|
|
|
kfree(ulp);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef CONFIG_PROC_FS
|
2005-09-06 22:17:10 +00:00
|
|
|
static int sysvipc_sem_proc_show(struct seq_file *s, void *it)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2012-02-08 00:54:11 +00:00
|
|
|
struct user_namespace *user_ns = seq_user_ns(s);
|
2005-09-06 22:17:10 +00:00
|
|
|
struct sem_array *sma = it;
|
2013-07-08 23:01:25 +00:00
|
|
|
time_t sem_otime;
|
|
|
|
|
2013-09-30 20:45:07 +00:00
|
|
|
/*
|
|
|
|
* The proc interface isn't aware of sem_lock(), it calls
|
|
|
|
* ipc_lock_object() directly (in sysvipc_find_ipc).
|
2016-10-11 20:54:50 +00:00
|
|
|
* In order to stay compatible with sem_lock(), we must
|
|
|
|
* enter / leave complex_mode.
|
2013-09-30 20:45:07 +00:00
|
|
|
*/
|
2016-10-11 20:54:50 +00:00
|
|
|
complexmode_enter(sma);
|
2013-09-30 20:45:07 +00:00
|
|
|
|
2013-07-08 23:01:25 +00:00
|
|
|
sem_otime = get_semotime(sma);
|
2005-09-06 22:17:10 +00:00
|
|
|
|
2015-04-15 23:17:54 +00:00
|
|
|
seq_printf(s,
|
|
|
|
"%10d %10d %4o %10u %5u %5u %5u %5u %10lu %10lu\n",
|
|
|
|
sma->sem_perm.key,
|
|
|
|
sma->sem_perm.id,
|
|
|
|
sma->sem_perm.mode,
|
|
|
|
sma->sem_nsems,
|
|
|
|
from_kuid_munged(user_ns, sma->sem_perm.uid),
|
|
|
|
from_kgid_munged(user_ns, sma->sem_perm.gid),
|
|
|
|
from_kuid_munged(user_ns, sma->sem_perm.cuid),
|
|
|
|
from_kgid_munged(user_ns, sma->sem_perm.cgid),
|
|
|
|
sem_otime,
|
|
|
|
sma->sem_ctime);
|
|
|
|
|
2016-10-11 20:54:50 +00:00
|
|
|
complexmode_tryleave(sma);
|
|
|
|
|
2015-04-15 23:17:54 +00:00
|
|
|
return 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
#endif
|
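The ten columns produced by the seq_printf() format above can be read back from /proc/sysvipc/sem. A small reader sketch of my own (assuming the usual single header row in that file):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sysvipc/sem", "r");
	char line[256];
	int key, semid;
	unsigned int perms, nsems, uid, gid, cuid, cgid;
	unsigned long otime, ctim;

	if (!f) {
		perror("fopen");
		return 1;
	}
	fgets(line, sizeof(line), f);		/* skip the header row */
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%d %d %o %u %u %u %u %u %lu %lu",
			   &key, &semid, &perms, &nsems, &uid, &gid,
			   &cuid, &cgid, &otime, &ctim) == 10)
			printf("semid %d: %u semaphore(s), mode %04o, otime %lu\n",
			       semid, nsems, perms, otime);
	}
	fclose(f);
	return 0;
}

Note that the perms field is octal, matching the %4o conversion in the format string above.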