linux/ipc/mqueue.c

/*
 * POSIX message queues filesystem for Linux.
 *
 * Copyright (C) 2003,2004 Krzysztof Benedyczak (golbi@mat.uni.torun.pl)
 *                         Michal Wronski (michal.wronski@gmail.com)
 *
 * Spinlocks:               Mohamed Abbas (abbas.mohamed@intel.com)
 * Lockless receive & send, fd based notify:
 *                          Manfred Spraul (manfred@colorfullife.com)
 *
 * Audit:                   George Wilson (ltcgcw@us.ibm.com)
 *
 * This file is released under the GPL.
 */
#include <linux/capability.h>
#include <linux/init.h>
#include <linux/pagemap.h>
#include <linux/file.h>
#include <linux/mount.h>
#include <linux/fs_context.h>
#include <linux/namei.h>
#include <linux/sysctl.h>
#include <linux/poll.h>
#include <linux/mqueue.h>
#include <linux/msg.h>
#include <linux/skbuff.h>
#include <linux/vmalloc.h>
#include <linux/netlink.h>
#include <linux/syscalls.h>
#include <linux/audit.h>
#include <linux/signal.h>
#include <linux/mutex.h>
#include <linux/nsproxy.h>
#include <linux/pid.h>
#include <linux/ipc_namespace.h>
#include <linux/user_namespace.h>
#include <linux/slab.h>
#include <linux/sched/wake_q.h>
#include <linux/sched/signal.h>
#include <linux/sched/user.h>
#include <net/sock.h>
#include "util.h"
struct mqueue_fs_context {
	struct ipc_namespace	*ipc_ns;
};

#define MQUEUE_MAGIC 0x19800202
#define DIRENT_SIZE 20
#define FILENT_SIZE 80
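
/* Indexes into mqueue_inode_info->e_wait_q[] (see below). */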
#define SEND 0
#define RECV 1
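
/*
 * State of a sleeping ext_wait_queue entry: STATE_NONE while the task is
 * still waiting, STATE_READY once a message (or free space) has been handed
 * to it and it may proceed without retaking info->lock.
 */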
#define STATE_NONE 0
#define STATE_READY 1

struct posix_msg_tree_node {
	struct rb_node		rb_node;
	struct list_head	msg_list;
	int			priority;
};

struct ext_wait_queue {		/* queue of sleeping tasks */
	struct task_struct *task;
	struct list_head list;
	struct msg_msg *msg;	/* ptr of loaded message */
	int state;		/* one of STATE_* values */
};

struct mqueue_inode_info {
	spinlock_t lock;
	struct inode vfs_inode;
	wait_queue_head_t wait_q;
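	/*
	 * Messages live in an rb-tree sorted by priority; msg_tree_rightmost
	 * caches the rightmost (highest-priority) node so the receive path
	 * does not have to walk the tree.
	 */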
	struct rb_root msg_tree;
	struct rb_node *msg_tree_rightmost;
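	/*
	 * One spare tree node kept around so a queue that flips between empty
	 * and a single message can avoid allocating on every send.
	 */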
	struct posix_msg_tree_node *node_cache;
	struct mq_attr attr;

	struct sigevent notify;
	struct pid *notify_owner;
	struct user_namespace *notify_user_ns;
	struct user_struct *user;	/* user who created, for accounting */
	struct sock *notify_sock;
	struct sk_buff *notify_cookie;

	/* for tasks waiting for free space and messages, respectively */
	struct ext_wait_queue e_wait_q[2];

	unsigned long qsize; /* size of queue in memory (sum of all msgs) */
};

static struct file_system_type mqueue_fs_type;
static const struct inode_operations mqueue_dir_inode_operations;
static const struct file_operations mqueue_file_operations;
static const struct super_operations mqueue_super_ops;
static const struct fs_context_operations mqueue_fs_context_ops;
static void remove_notification(struct mqueue_inode_info *info);
static struct kmem_cache *mqueue_inode_cachep;
static struct ctl_table_header *mq_sysctl_table;
static inline struct mqueue_inode_info *MQUEUE_I(struct inode *inode)
{
	return container_of(inode, struct mqueue_inode_info, vfs_inode);
}

/*
 * This routine should be called with the mq_lock held.
 */
static inline struct ipc_namespace *__get_ns_from_inode(struct inode *inode)
{
	return get_ipc_ns(inode->i_sb->s_fs_info);
}
static struct ipc_namespace *get_ns_from_inode(struct inode *inode)
{
	struct ipc_namespace *ns;

	spin_lock(&mq_lock);
	ns = __get_ns_from_inode(inode);
	spin_unlock(&mq_lock);
	return ns;
}
/* Auxiliary functions to manipulate messages' list */
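/*
 * The queue is kept as an rb-tree of posix_msg_tree_node, one node per
 * message priority currently in use; each node carries a FIFO list of the
 * messages at that priority.  msg_insert() appends to an existing node when
 * the priority is already present, otherwise it links a new node into the
 * tree, reusing info->node_cache when possible.
 */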
static int msg_insert(struct msg_msg *msg, struct mqueue_inode_info *info)
{
	struct rb_node **p, *parent = NULL;
	struct posix_msg_tree_node *leaf;
	bool rightmost = true;

	p = &info->msg_tree.rb_node;
	while (*p) {
		parent = *p;
		leaf = rb_entry(parent, struct posix_msg_tree_node, rb_node);

		if (likely(leaf->priority == msg->m_type))
			goto insert_msg;
		else if (msg->m_type < leaf->priority) {
			p = &(*p)->rb_left;
			rightmost = false;
		} else
			p = &(*p)->rb_right;
	}

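	/*
	 * No node exists for this priority yet: reuse the cached spare if we
	 * have one, otherwise allocate.  GFP_ATOMIC because we are called
	 * with info->lock held; callers can pre-fill info->node_cache outside
	 * the lock to avoid this atomic allocation.
	 */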
	if (info->node_cache) {
		leaf = info->node_cache;
		info->node_cache = NULL;
	} else {
		leaf = kmalloc(sizeof(*leaf), GFP_ATOMIC);
		if (!leaf)
			return -ENOMEM;
		INIT_LIST_HEAD(&leaf->msg_list);
	}
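	/*
	 * Link the new leaf for this priority into the tree, keeping the
	 * cached rightmost pointer up to date.
	 */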
leaf->priority = msg->m_type;
if (rightmost)
info->msg_tree_rightmost = &leaf->rb_node;
rb_link_node(&leaf->rb_node, parent, p);
rb_insert_color(&leaf->rb_node, &info->msg_tree);
insert_msg:
info->attr.mq_curmsgs++;
info->qsize += msg->m_ts;
list_add_tail(&msg->m_list, &leaf->msg_list);
return 0;
}
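/*
 * Remove a leaf from the message tree, updating the cached rightmost
 * pointer. The leaf is stashed in the one-deep node cache for quick
 * reuse, or freed if the cache slot is already occupied.
 */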
static inline void msg_tree_erase(struct posix_msg_tree_node *leaf,
struct mqueue_inode_info *info)
{
struct rb_node *node = &leaf->rb_node;
if (info->msg_tree_rightmost == node)
info->msg_tree_rightmost = rb_prev(node);
rb_erase(node, &info->msg_tree);
if (info->node_cache) {
kfree(leaf);
} else {
info->node_cache = leaf;
}
}
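/*
 * Pull the highest priority message off the queue, or return NULL if the
 * queue is empty. A leaf whose message list drains is erased (and possibly
 * cached) via msg_tree_erase().
 */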
static inline struct msg_msg *msg_get(struct mqueue_inode_info *info)
{
struct rb_node *parent = NULL;
struct posix_msg_tree_node *leaf;
struct msg_msg *msg;
try_again:
/*
* During insert, low priorities go to the left and high to the
* right. On receive, we want the highest priorities first, so
* walk all the way to the right.
*/
parent = info->msg_tree_rightmost;
if (!parent) {
if (info->attr.mq_curmsgs) {
pr_warn_once("Inconsistency in POSIX message queue, "
"no tree element, but supposedly messages "
"should exist!\n");
info->attr.mq_curmsgs = 0;
}
return NULL;
}
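	/* The rightmost leaf holds the highest priority messages. */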
leaf = rb_entry(parent, struct posix_msg_tree_node, rb_node);
if (unlikely(list_empty(&leaf->msg_list))) {
pr_warn_once("Inconsistency in POSIX message queue, "
"empty leaf node but we haven't implemented "
"lazy leaf delete!\n");
msg_tree_erase(leaf, info);
goto try_again;
} else {
msg = list_first_entry(&leaf->msg_list,
struct msg_msg, m_list);
list_del(&msg->m_list);
if (list_empty(&leaf->msg_list)) {
msg_tree_erase(leaf, info);
}
}
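	/* A message was removed; update the queue accounting. */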
info->attr.mq_curmsgs--;
info->qsize -= msg->m_ts;
return msg;
}
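/*
 * Allocate and fill in a new mqueue inode. For regular files this also
 * initializes the per-queue state (lock, wait queues, message tree, node
 * cache) and validates the requested attributes against the namespace
 * limits.
 */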
static struct inode *mqueue_get_inode(struct super_block *sb,
struct ipc_namespace *ipc_ns, umode_t mode,
struct mq_attr *attr)
{
struct user_struct *u = current_user();
struct inode *inode;
int ret = -ENOMEM;
inode = new_inode(sb);
if (!inode)
goto err;
inode->i_ino = get_next_ino();
inode->i_mode = mode;
inode->i_uid = current_fsuid();
inode->i_gid = current_fsgid();
inode->i_mtime = inode->i_ctime = inode->i_atime = current_time(inode);
if (S_ISREG(mode)) {
struct mqueue_inode_info *info;
unsigned long mq_bytes, mq_treesize;
inode->i_fop = &mqueue_file_operations;
inode->i_size = FILENT_SIZE;
/* mqueue specific info */
info = MQUEUE_I(inode);
spin_lock_init(&info->lock);
init_waitqueue_head(&info->wait_q);
INIT_LIST_HEAD(&info->e_wait_q[0].list);
INIT_LIST_HEAD(&info->e_wait_q[1].list);
info->notify_owner = NULL;
info->notify_user_ns = NULL;
info->qsize = 0;
info->user = NULL; /* set when all is ok */
info->msg_tree = RB_ROOT;
info->msg_tree_rightmost = NULL;
info->node_cache = NULL;
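		/*
		 * Start from the namespace defaults, capped by the namespace
		 * maxima; an explicit attr from the caller overrides them and
		 * is validated below.
		 */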
memset(&info->attr, 0, sizeof(info->attr));
info->attr.mq_maxmsg = min(ipc_ns->mq_msg_max,
ipc_ns->mq_msg_default);
info->attr.mq_msgsize = min(ipc_ns->mq_msgsize_max,
ipc_ns->mq_msgsize_default);
if (attr) {
info->attr.mq_maxmsg = attr->mq_maxmsg;
info->attr.mq_msgsize = attr->mq_msgsize;
}
/*
* We used to allocate a static array of pointers and account
* the size of that array as well as one msg_msg struct per
* possible message into the queue size. That's no longer
* accurate as the queue is now an rbtree and will grow and
* shrink depending on usage patterns. We can, however, still
* account one msg_msg struct per message, but the nodes are
* allocated depending on priority usage, and most programs
* only use one, or a handful, of priorities. However, since
* this is pinned memory, we need to assume the worst case, which
* means charging min(mq_maxmsg, max_priorities) *
* sizeof(struct posix_msg_tree_node).
*/
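/*
 * Illustrative example (not part of the original source): assuming a
 * hypothetical queue with mq_maxmsg = 10 and mq_msgsize = 8192, and
 * assuming sizeof(struct msg_msg) == 48 and
 * sizeof(struct posix_msg_tree_node) == 56 on a 64-bit build, the
 * worst-case charge computed below works out to
 *	mq_treesize = 10 * 48 + min(10, MQ_PRIO_MAX) * 56 = 1040
 *	mq_bytes    = 10 * 8192 + mq_treesize = 82960
 * bytes of pinned memory. The exact struct sizes depend on the
 * architecture and kernel configuration.
 */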
ret = -EINVAL;
if (info->attr.mq_maxmsg <= 0 || info->attr.mq_msgsize <= 0)
goto out_inode;
if (capable(CAP_SYS_RESOURCE)) {
if (info->attr.mq_maxmsg > HARD_MSGMAX ||
info->attr.mq_msgsize > HARD_MSGSIZEMAX)
goto out_inode;
} else {
if (info->attr.mq_maxmsg > ipc_ns->mq_msg_max ||
info->attr.mq_msgsize > ipc_ns->mq_msgsize_max)
goto out_inode;
}
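/*
 * Note (illustrative, not part of the original source): the
 * unprivileged limits checked above are the per-namespace sysctls
 * exposed as /proc/sys/fs/mqueue/msg_max and
 * /proc/sys/fs/mqueue/msgsize_max, while CAP_SYS_RESOURCE holders are
 * bounded only by the compile-time HARD_MSGMAX / HARD_MSGSIZEMAX
 * constants.
 */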
ret = -EOVERFLOW;
/* check for overflow */
if (info->attr.mq_msgsize > ULONG_MAX/info->attr.mq_maxmsg)
goto out_inode;
mq_treesize = info->attr.mq_maxmsg * sizeof(struct msg_msg) +
min_t(unsigned int, info->attr.mq_maxmsg, MQ_PRIO_MAX) *
sizeof(struct posix_msg_tree_node);
mq_bytes = info->attr.mq_maxmsg * info->attr.mq_msgsize;
if (mq_bytes + mq_treesize < mq_bytes)
goto out_inode;
mq_bytes += mq_treesize;
spin_lock(&mq_lock);
if (u->mq_bytes + mq_bytes < u->mq_bytes ||
u->mq_bytes + mq_bytes > rlimit(RLIMIT_MSGQUEUE)) {
spin_unlock(&mq_lock);
/* mqueue_evict_inode() releases info->messages */
ret = -EMFILE;
goto out_inode;
}
u->mq_bytes += mq_bytes;
spin_unlock(&mq_lock);
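/*
 * Note (illustrative, not part of the original source): the charge
 * above is made against the owning user's RLIMIT_MSGQUEUE. From
 * userspace that limit can be inspected or raised, subject to the
 * usual rlimit rules, for example:
 *
 *	struct rlimit rl;
 *	getrlimit(RLIMIT_MSGQUEUE, &rl);	// current soft/hard limit
 *	rl.rlim_cur = rl.rlim_max;
 *	setrlimit(RLIMIT_MSGQUEUE, &rl);	// raise soft up to hard
 *
 * or via "ulimit -q" in a shell.
 */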
/* all is ok */
info->user = get_uid(u);
} else if (S_ISDIR(mode)) {
inc_nlink(inode);
/* Some things misbehave if size == 0 on a directory */
inode->i_size = 2 * DIRENT_SIZE;
inode->i_op = &mqueue_dir_inode_operations;
inode->i_fop = &simple_dir_operations;
}
return inode;
out_inode:
iput(inode);
err:
return ERR_PTR(ret);
}
static int mqueue_fill_super(struct super_block *sb, struct fs_context *fc)
{
struct inode *inode;
Revert "mqueue: switch to on-demand creation of internal mount" This reverts commit 36735a6a2b5e042db1af956ce4bcc13f3ff99e21. Aleksa Sarai <asarai@suse.de> writes: > [REGRESSION v4.16-rc6] [PATCH] mqueue: forbid unprivileged user access to internal mount > > Felix reported weird behaviour on 4.16.0-rc6 with regards to mqueue[1], > which was introduced by 36735a6a2b5e ("mqueue: switch to on-demand > creation of internal mount"). > > Basically, the reproducer boils down to being able to mount mqueue if > you create a new user namespace, even if you don't unshare the IPC > namespace. > > Previously this was not possible, and you would get an -EPERM. The mount > is the *host* mqueue mount, which is being cached and just returned from > mqueue_mount(). To be honest, I'm not sure if this is safe or not (or if > it was intentional -- since I'm not familiar with mqueue). > > To me it looks like there is a missing permission check. I've included a > patch below that I've compile-tested, and should block the above case. > Can someone please tell me if I'm missing something? Is this actually > safe? > > [1]: https://github.com/docker/docker/issues/36674 The issue is a lot deeper than a missing permission check. sb->s_user_ns was is improperly set as well. So in addition to the filesystem being mounted when it should not be mounted, so things are not allow that should be. We are practically to the release of 4.16 and there is no agreement between Al Viro and myself on what the code should looks like to fix things properly. So revert the code to what it was before so that we can take our time and discuss this properly. Fixes: 36735a6a2b5e ("mqueue: switch to on-demand creation of internal mount") Reported-by: Felix Abecassis <fabecassis@nvidia.com> Reported-by: Aleksa Sarai <asarai@suse.de> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-03-24 16:28:14 +00:00
struct ipc_namespace *ns = sb->s_fs_info;
sb->s_iflags |= SB_I_NOEXEC | SB_I_NODEV;
sb->s_blocksize = PAGE_SIZE;
sb->s_blocksize_bits = PAGE_SHIFT;
sb->s_magic = MQUEUE_MAGIC;
sb->s_op = &mqueue_super_ops;
inode = mqueue_get_inode(sb, ns, S_IFDIR | S_ISVTX | S_IRWXUGO, NULL);
if (IS_ERR(inode))
return PTR_ERR(inode);
sb->s_root = d_make_root(inode);
if (!sb->s_root)
return -ENOMEM;
return 0;
}
static int mqueue_get_tree(struct fs_context *fc)
{
struct mqueue_fs_context *ctx = fc->fs_private;
fc->s_fs_info = ctx->ipc_ns;
return vfs_get_super(fc, vfs_get_keyed_super, mqueue_fill_super);
}
static void mqueue_fs_context_free(struct fs_context *fc)
{
struct mqueue_fs_context *ctx = fc->fs_private;
put_ipc_ns(ctx->ipc_ns);
kfree(ctx);
}
static int mqueue_init_fs_context(struct fs_context *fc)
{
struct mqueue_fs_context *ctx;
ctx = kzalloc(sizeof(struct mqueue_fs_context), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
ctx->ipc_ns = get_ipc_ns(current->nsproxy->ipc_ns);
put_user_ns(fc->user_ns);
fc->user_ns = get_user_ns(ctx->ipc_ns->user_ns);
fc->fs_private = ctx;
fc->ops = &mqueue_fs_context_ops;
return 0;
}
static struct vfsmount *mq_create_mount(struct ipc_namespace *ns)
{
struct mqueue_fs_context *ctx;
struct fs_context *fc;
struct vfsmount *mnt;
fc = fs_context_for_mount(&mqueue_fs_type, SB_KERNMOUNT);
if (IS_ERR(fc))
return ERR_CAST(fc);
ctx = fc->fs_private;
put_ipc_ns(ctx->ipc_ns);
ctx->ipc_ns = get_ipc_ns(ns);
put_user_ns(fc->user_ns);
fc->user_ns = get_user_ns(ctx->ipc_ns->user_ns);
mnt = fc_mount(fc);
put_fs_context(fc);
return mnt;
}
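/*
 * Example (illustrative, not part of the original source): the same
 * filesystem type that backs this kernel-internal mount can also be
 * mounted from userspace, typically as
 *
 *	mount -t mqueue none /dev/mqueue
 *
 * after which queues created with mq_open() in the current IPC
 * namespace appear as files under /dev/mqueue.
 */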
static void init_once(void *foo)
{
struct mqueue_inode_info *p = (struct mqueue_inode_info *) foo;
inode_init_once(&p->vfs_inode);
}
static struct inode *mqueue_alloc_inode(struct super_block *sb)
{
struct mqueue_inode_info *ei;
ei = kmem_cache_alloc(mqueue_inode_cachep, GFP_KERNEL);
if (!ei)
return NULL;
return &ei->vfs_inode;
}
static void mqueue_free_inode(struct inode *inode)
{
kmem_cache_free(mqueue_inode_cachep, MQUEUE_I(inode));
}
static void mqueue_evict_inode(struct inode *inode)
{
struct mqueue_inode_info *info;
struct user_struct *user;
struct ipc_namespace *ipc_ns;
struct msg_msg *msg, *nmsg;
LIST_HEAD(tmp_msg);
clear_inode(inode);
if (S_ISDIR(inode->i_mode))
return;
namespaces: ipc namespaces: implement support for posix msqueues Implement multiple mounts of the mqueue file system, and link it to usage of CLONE_NEWIPC. Each ipc ns has a corresponding mqueuefs superblock. When a user does clone(CLONE_NEWIPC) or unshare(CLONE_NEWIPC), the unshare will cause an internal mount of a new mqueuefs sb linked to the new ipc ns. When a user does 'mount -t mqueue mqueue /dev/mqueue', he mounts the mqueuefs superblock. Posix message queues can be worked with both through the mq_* system calls (see mq_overview(7)), and through the VFS through the mqueue mount. Any usage of mq_open() and friends will work with the acting task's ipc namespace. Any actions through the VFS will work with the mqueuefs in which the file was created. So if a user doesn't remount mqueuefs after unshare(CLONE_NEWIPC), mq_open("/ab") will not be reflected in "ls /dev/mqueue". If task a mounts mqueue for ipc_ns:1, then clones task b with a new ipcns, ipcns:2, and then task a is the last task in ipc_ns:1 to exit, then (1) ipc_ns:1 will be freed, (2) it's superblock will live on until task b umounts the corresponding mqueuefs, and vfs actions will continue to succeed, but (3) sb->s_fs_info will be NULL for the sb corresponding to the deceased ipc_ns:1. To make this happen, we must protect the ipc reference count when a) a task exits and drops its ipcns->count, since it might be dropping it to 0 and freeing the ipcns b) a task accesses the ipcns through its mqueuefs interface, since it bumps the ipcns refcount and might race with the last task in the ipcns exiting. So the kref is changed to an atomic_t so we can use atomic_dec_and_lock(&ns->count,mq_lock), and every access to the ipcns through ns = mqueuefs_sb->s_fs_info is protected by the same lock. Signed-off-by: Cedric Le Goater <clg@fr.ibm.com> Signed-off-by: Serge E. Hallyn <serue@us.ibm.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-07 02:01:10 +00:00
ipc_ns = get_ns_from_inode(inode);
info = MQUEUE_I(inode);
spin_lock(&info->lock);
while ((msg = msg_get(info)) != NULL)
list_add_tail(&msg->m_list, &tmp_msg);
kfree(info->node_cache);
spin_unlock(&info->lock);
list_for_each_entry_safe(msg, nmsg, &tmp_msg, m_list) {
list_del(&msg->m_list);
free_msg(msg);
}
user = info->user;
if (user) {
unsigned long mq_bytes, mq_treesize;
/* Total amount of bytes accounted for the mqueue */
mq_treesize = info->attr.mq_maxmsg * sizeof(struct msg_msg) +
min_t(unsigned int, info->attr.mq_maxmsg, MQ_PRIO_MAX) *
sizeof(struct posix_msg_tree_node);
mq_bytes = mq_treesize + (info->attr.mq_maxmsg *
info->attr.mq_msgsize);
spin_lock(&mq_lock);
user->mq_bytes -= mq_bytes;
/*
* get_ns_from_inode() ensures that the
* (ipc_ns = sb->s_fs_info) is either a valid ipc_ns
* to which we now hold a reference, or it is NULL.
* We can't put it here under mq_lock, though.
*/
if (ipc_ns)
ipc_ns->mq_queues_count--;
spin_unlock(&mq_lock);
free_uid(user);
}
if (ipc_ns)
put_ipc_ns(ipc_ns);
}
static int mqueue_create_attr(struct dentry *dentry, umode_t mode, void *arg)
{
struct inode *dir = dentry->d_parent->d_inode;
struct inode *inode;
struct mq_attr *attr = arg;
int error;
struct ipc_namespace *ipc_ns;
spin_lock(&mq_lock);
ipc_ns = __get_ns_from_inode(dir);
if (!ipc_ns) {
error = -EACCES;
goto out_unlock;
}
if (ipc_ns->mq_queues_count >= ipc_ns->mq_queues_max &&
!capable(CAP_SYS_RESOURCE)) {
error = -ENOSPC;
goto out_unlock;
}
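/*
 * Note (illustrative, not part of the original source): the queue
 * count limit checked above is the per-namespace sysctl exposed as
 * /proc/sys/fs/mqueue/queues_max; once an unprivileged user hits it,
 * creating another queue with mq_open(O_CREAT) fails with ENOSPC.
 */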
ipc_ns->mq_queues_count++;
spin_unlock(&mq_lock);
inode = mqueue_get_inode(dir->i_sb, ipc_ns, mode, attr);
if (IS_ERR(inode)) {
error = PTR_ERR(inode);
spin_lock(&mq_lock);
ipc_ns->mq_queues_count--;
goto out_unlock;
}
put_ipc_ns(ipc_ns);
dir->i_size += DIRENT_SIZE;
dir->i_ctime = dir->i_mtime = dir->i_atime = current_time(dir);
d_instantiate(dentry, inode);
dget(dentry);
return 0;
out_unlock:
spin_unlock(&mq_lock);
if (ipc_ns)
put_ipc_ns(ipc_ns);
return error;
}
static int mqueue_create(struct inode *dir, struct dentry *dentry,
umode_t mode, bool excl)
{
return mqueue_create_attr(dentry, mode, NULL);
}
static int mqueue_unlink(struct inode *dir, struct dentry *dentry)
{
struct inode *inode = d_inode(dentry);
dir->i_ctime = dir->i_mtime = dir->i_atime = current_time(dir);
dir->i_size -= DIRENT_SIZE;
drop_nlink(inode);
dput(dentry);
return 0;
}
/*
* This is the routine used when the queue file is read like a regular
* file. To avoid doing some sort of mq_receive here, we only allow
* reading the queue size & notification info (the only values that are
* interesting from the user's point of view and aren't accessible
* through the standard routines).
*/
static ssize_t mqueue_read_file(struct file *filp, char __user *u_data,
size_t count, loff_t *off)
{
struct mqueue_inode_info *info = MQUEUE_I(file_inode(filp));
char buffer[FILENT_SIZE];
ssize_t ret;
spin_lock(&info->lock);
snprintf(buffer, sizeof(buffer),
"QSIZE:%-10lu NOTIFY:%-5d SIGNO:%-5d NOTIFY_PID:%-6d\n",
info->qsize,
info->notify_owner ? info->notify.sigev_notify : 0,
(info->notify_owner &&
info->notify.sigev_notify == SIGEV_SIGNAL) ?
info->notify.sigev_signo : 0,
pid_vnr(info->notify_owner));
spin_unlock(&info->lock);
buffer[sizeof(buffer)-1] = '\0';
ret = simple_read_from_buffer(u_data, count, off, buffer,
strlen(buffer));
if (ret <= 0)
return ret;
file_inode(filp)->i_atime = file_inode(filp)->i_ctime = current_time(file_inode(filp));
return ret;
}
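/*
 * Example (illustrative, not part of the original source): with the
 * mqueue filesystem mounted, the status line produced above can be
 * read from userspace like any regular file. Assuming a hypothetical
 * queue named "myq" mounted under /dev/mqueue:
 *
 *	int fd = open("/dev/mqueue/myq", O_RDONLY);
 *	char buf[80];
 *	ssize_t n = read(fd, buf, sizeof(buf) - 1);
 *	// buf now holds e.g. "QSIZE:0  NOTIFY:0  SIGNO:0  NOTIFY_PID:0"
 *	close(fd);
 */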
static int mqueue_flush_file(struct file *filp, fl_owner_t id)
{
struct mqueue_inode_info *info = MQUEUE_I(file_inode(filp));
spin_lock(&info->lock);
if (task_tgid(current) == info->notify_owner)
remove_notification(info);
spin_unlock(&info->lock);
return 0;
}
static __poll_t mqueue_poll_file(struct file *filp, struct poll_table_struct *poll_tab)
{
struct mqueue_inode_info *info = MQUEUE_I(file_inode(filp));
__poll_t retval = 0;
poll_wait(filp, &info->wait_q, poll_tab);
spin_lock(&info->lock);
if (info->attr.mq_curmsgs)
retval = EPOLLIN | EPOLLRDNORM;
if (info->attr.mq_curmsgs < info->attr.mq_maxmsg)
retval |= EPOLLOUT | EPOLLWRNORM;
spin_unlock(&info->lock);
return retval;
}
/* Adds current to info->e_wait_q[sr] before element with smaller prio */
static void wq_add(struct mqueue_inode_info *info, int sr,
struct ext_wait_queue *ewp)
{
struct ext_wait_queue *walk;
list_for_each_entry(walk, &info->e_wait_q[sr].list, list) {
if (walk->task->prio <= current->prio) {
list_add_tail(&ewp->list, &walk->list);
return;
}
}
list_add_tail(&ewp->list, &info->e_wait_q[sr].list);
}
/*
 * Puts the current task to sleep until it is woken, interrupted, or the
 * timeout expires.  The caller must hold info->lock; on return the lock
 * is no longer held.
 * sr: SEND or RECV
 */
static int wq_sleep(struct mqueue_inode_info *info, int sr,
ktime_t *timeout, struct ext_wait_queue *ewp)
__releases(&info->lock)
{
int retval;
signed long time;
wq_add(info, sr, ewp);
for (;;) {
__set_current_state(TASK_INTERRUPTIBLE);
spin_unlock(&info->lock);
time = schedule_hrtimeout_range_clock(timeout, 0,
HRTIMER_MODE_ABS, CLOCK_REALTIME);
if (ewp->state == STATE_READY) {
retval = 0;
goto out;
}
spin_lock(&info->lock);
if (ewp->state == STATE_READY) {
retval = 0;
goto out_unlock;
}
if (signal_pending(current)) {
retval = -ERESTARTSYS;
break;
}
if (time == 0) {
retval = -ETIMEDOUT;
break;
}
}
list_del(&ewp->list);
out_unlock:
spin_unlock(&info->lock);
out:
return retval;
}
/*
* Returns the waiting task that should be serviced first, or NULL if none exists
*/
static struct ext_wait_queue *wq_get_first_waiter(
struct mqueue_inode_info *info, int sr)
{
struct list_head *ptr;
ptr = info->e_wait_q[sr].list.prev;
if (ptr == &info->e_wait_q[sr].list)
return NULL;
return list_entry(ptr, struct ext_wait_queue, list);
}
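/*
 * Stores the notification state byte in the last slot of the
 * preallocated skb "cookie" that is handed to the SIGEV_THREAD listener
 * over the netlink socket.
 */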
static inline void set_cookie(struct sk_buff *skb, char code)
{
((char *)skb->data)[NOTIFY_COOKIE_LEN-1] = code;
}
/*
 * This function exists only to keep sys_mq_timedsend() from growing too
 * long; it delivers any registered notification.
 */
static void __do_notify(struct mqueue_inode_info *info)
{
	/*
	 * A notification is delivered when a process has registered for it,
	 * no process is waiting synchronously for a message, and the queue
	 * has just gone from empty to non-empty.  The pipelined-send path
	 * hands messages directly to synchronous waiters, so at this point
	 * we know that nobody is waiting synchronously.
	 */
if (info->notify_owner &&
info->attr.mq_curmsgs == 1) {
struct kernel_siginfo sig_i;
switch (info->notify.sigev_notify) {
case SIGEV_NONE:
break;
case SIGEV_SIGNAL:
/* sends signal */
clear_siginfo(&sig_i);
sig_i.si_signo = info->notify.sigev_signo;
sig_i.si_errno = 0;
sig_i.si_code = SI_MESGQ;
sig_i.si_value = info->notify.sigev_value;
/* map current pid/uid into info->owner's namespaces */
rcu_read_lock();
sig_i.si_pid = task_tgid_nr_ns(current,
ns_of_pid(info->notify_owner));
sig_i.si_uid = from_kuid_munged(info->notify_user_ns, current_uid());
rcu_read_unlock();
kill_pid_info(info->notify.sigev_signo,
&sig_i, info->notify_owner);
break;
case SIGEV_THREAD:
set_cookie(info->notify_cookie, NOTIFY_WOKENUP);
netlink_sendskb(info->notify_sock, info->notify_cookie);
break;
}
/* after notification unregisters process */
put_pid(info->notify_owner);
put_user_ns(info->notify_user_ns);
info->notify_owner = NULL;
info->notify_user_ns = NULL;
}
wake_up(&info->wait_q);
}
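/*
 * Copy the absolute timeout from userspace and reject syntactically
 * invalid timespecs (negative seconds or out-of-range nanoseconds).
 */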
static int prepare_timeout(const struct __kernel_timespec __user *u_abs_timeout,
struct timespec64 *ts)
{
if (get_timespec64(ts, u_abs_timeout))
return -EFAULT;
if (!timespec64_valid(ts))
return -EINVAL;
return 0;
}
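/*
 * Drop the current notification registration.  For SIGEV_THREAD the
 * library helper thread is told to go away by sending the
 * NOTIFY_REMOVED cookie over the netlink socket.  Called with
 * info->lock held.
 */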
static void remove_notification(struct mqueue_inode_info *info)
{
if (info->notify_owner != NULL &&
info->notify.sigev_notify == SIGEV_THREAD) {
set_cookie(info->notify_cookie, NOTIFY_REMOVED);
netlink_sendskb(info->notify_sock, info->notify_cookie);
}
put_pid(info->notify_owner);
put_user_ns(info->notify_user_ns);
info->notify_owner = NULL;
info->notify_user_ns = NULL;
}
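/*
 * Called with the parent directory locked.  If the name does not exist
 * yet, create the queue (O_CREAT is required and the mount must be
 * writable); if it does exist, enforce O_EXCL and check that the
 * requested access mode is permitted.
 */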
static int prepare_open(struct dentry *dentry, int oflag, int ro,
umode_t mode, struct filename *name,
struct mq_attr *attr)
{
static const int oflag2acc[O_ACCMODE] = { MAY_READ, MAY_WRITE,
MAY_READ | MAY_WRITE };
int acc;
if (d_really_is_negative(dentry)) {
if (!(oflag & O_CREAT))
return -ENOENT;
if (ro)
return ro;
audit_inode_parent_hidden(name, dentry->d_parent);
return vfs_mkobj(dentry, mode & ~current_umask(),
mqueue_create_attr, attr);
}
/* it already existed */
audit_inode(name, dentry, 0);
if ((oflag & (O_CREAT|O_EXCL)) == (O_CREAT|O_EXCL))
return -EEXIST;
if ((oflag & O_ACCMODE) == (O_RDWR | O_WRONLY))
return -EINVAL;
acc = oflag2acc[oflag & O_ACCMODE];
return inode_permission(d_inode(dentry), acc);
}
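/*
 * Common implementation of mq_open(2): look the name up in the mqueue
 * mount of the caller's ipc namespace, create or open the queue via
 * prepare_open(), and install the resulting file in a new O_CLOEXEC
 * file descriptor.
 */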
static int do_mq_open(const char __user *u_name, int oflag, umode_t mode,
struct mq_attr *attr)
{
Revert "mqueue: switch to on-demand creation of internal mount" This reverts commit 36735a6a2b5e042db1af956ce4bcc13f3ff99e21. Aleksa Sarai <asarai@suse.de> writes: > [REGRESSION v4.16-rc6] [PATCH] mqueue: forbid unprivileged user access to internal mount > > Felix reported weird behaviour on 4.16.0-rc6 with regards to mqueue[1], > which was introduced by 36735a6a2b5e ("mqueue: switch to on-demand > creation of internal mount"). > > Basically, the reproducer boils down to being able to mount mqueue if > you create a new user namespace, even if you don't unshare the IPC > namespace. > > Previously this was not possible, and you would get an -EPERM. The mount > is the *host* mqueue mount, which is being cached and just returned from > mqueue_mount(). To be honest, I'm not sure if this is safe or not (or if > it was intentional -- since I'm not familiar with mqueue). > > To me it looks like there is a missing permission check. I've included a > patch below that I've compile-tested, and should block the above case. > Can someone please tell me if I'm missing something? Is this actually > safe? > > [1]: https://github.com/docker/docker/issues/36674 The issue is a lot deeper than a missing permission check. sb->s_user_ns was is improperly set as well. So in addition to the filesystem being mounted when it should not be mounted, so things are not allow that should be. We are practically to the release of 4.16 and there is no agreement between Al Viro and myself on what the code should looks like to fix things properly. So revert the code to what it was before so that we can take our time and discuss this properly. Fixes: 36735a6a2b5e ("mqueue: switch to on-demand creation of internal mount") Reported-by: Felix Abecassis <fabecassis@nvidia.com> Reported-by: Aleksa Sarai <asarai@suse.de> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-03-24 16:28:14 +00:00
struct vfsmount *mnt = current->nsproxy->ipc_ns->mq_mnt;
struct dentry *root = mnt->mnt_root;
struct filename *name;
struct path path;
int fd, error;
int ro;
audit_mq_open(oflag, mode, attr);
if (IS_ERR(name = getname(u_name)))
return PTR_ERR(name);
fd = get_unused_fd_flags(O_CLOEXEC);
if (fd < 0)
goto out_putname;
ro = mnt_want_write(mnt); /* we'll drop it in any case */
inode_lock(d_inode(root));
path.dentry = lookup_one_len(name->name, root, strlen(name->name));
if (IS_ERR(path.dentry)) {
error = PTR_ERR(path.dentry);
goto out_putfd;
}
path.mnt = mntget(mnt);
error = prepare_open(path.dentry, oflag, ro, mode, name, attr);
if (!error) {
struct file *file = dentry_open(&path, oflag, current_cred());
if (!IS_ERR(file))
fd_install(fd, file);
else
error = PTR_ERR(file);
}
path_put(&path);
out_putfd:
if (error) {
put_unused_fd(fd);
fd = error;
}
inode_unlock(d_inode(root));
if (!ro)
mnt_drop_write(mnt);
out_putname:
putname(name);
return fd;
}
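/*
 * mq_open(2) syscall entry: copy the optional mq_attr from userspace and
 * hand off to do_mq_open().  Note that the C library wrapper normally
 * strips the leading '/' from the queue name before making the syscall,
 * so u_name is looked up as a single component under the mqueue root.
 */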
SYSCALL_DEFINE4(mq_open, const char __user *, u_name, int, oflag, umode_t, mode,
struct mq_attr __user *, u_attr)
{
struct mq_attr attr;
if (u_attr && copy_from_user(&attr, u_attr, sizeof(struct mq_attr)))
return -EFAULT;
return do_mq_open(u_name, oflag, mode, u_attr ? &attr : NULL);
}
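/*
 * mq_unlink(2): remove the named queue from the mqueue filesystem of the
 * caller's ipc namespace.  As with any unlinked file, the queue itself
 * is destroyed only once the last reference to its inode is dropped.
 */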
SYSCALL_DEFINE1(mq_unlink, const char __user *, u_name)
{
int err;
struct filename *name;
struct dentry *dentry;
struct inode *inode = NULL;
struct ipc_namespace *ipc_ns = current->nsproxy->ipc_ns;
struct vfsmount *mnt = ipc_ns->mq_mnt;
name = getname(u_name);
if (IS_ERR(name))
return PTR_ERR(name);
audit_inode_parent_hidden(name, mnt->mnt_root);
err = mnt_want_write(mnt);
if (err)
goto out_name;
inode_lock_nested(d_inode(mnt->mnt_root), I_MUTEX_PARENT);
dentry = lookup_one_len(name->name, mnt->mnt_root,
strlen(name->name));
if (IS_ERR(dentry)) {
err = PTR_ERR(dentry);
goto out_unlock;
}
inode = d_inode(dentry);
if (!inode) {
err = -ENOENT;
} else {
ihold(inode);
err = vfs_unlink(d_inode(dentry->d_parent), dentry, NULL);
}
dput(dentry);
out_unlock:
inode_unlock(d_inode(mnt->mnt_root));
if (inode)
iput(inode);
mnt_drop_write(mnt);
out_name:
putname(name);
return err;
}
/* Pipelined send and receive functions.
*
* If a receiver finds no waiting message, then it registers itself in the
* list of waiting receivers. A sender checks that list before adding the new
* message into the message array. If there is a waiting receiver, then it
* bypasses the message array and directly hands the message over to the
* receiver. The receiver accepts the message and returns without grabbing the
* queue spinlock:
*
* - Set pointer to message.
* - Queue the receiver task for later wakeup (without the info->lock).
* - Update its state to STATE_READY. Now the receiver can continue.
* - Wake up the process after the lock is dropped. Should the process wake up
* before this wakeup (due to a timeout or a signal) it will either see
* STATE_READY and continue or acquire the lock to check the state again.
*
* The same algorithm is used for senders.
*/
/* pipelined_send() - send a message directly to the task waiting in
* sys_mq_timedreceive() (without inserting message into a queue).
*/
static inline void pipelined_send(struct wake_q_head *wake_q,
struct mqueue_inode_info *info,
struct msg_msg *message,
struct ext_wait_queue *receiver)
{
receiver->msg = message;
list_del(&receiver->list);
wake_q_add(wake_q, receiver->task);
	/*
	 * Rely on the implicit cmpxchg barrier from wake_q_add() so that
	 * updating receiver->state is the last write operation: once
	 * STATE_READY is visible, the receiver may continue and return.
	 * If the wake_q had not yet taken its task reference by then, we
	 * could end up with a use-after-free and a bogus wakeup.
	 */
receiver->state = STATE_READY;
}
/* pipelined_receive() - if a task is waiting in sys_mq_timedsend(),
 * take its message and insert it into the queue (a free slot is
 * guaranteed to exist). */
static inline void pipelined_receive(struct wake_q_head *wake_q,
struct mqueue_inode_info *info)
{
struct ext_wait_queue *sender = wq_get_first_waiter(info, SEND);
if (!sender) {
/* for poll */
wake_up_interruptible(&info->wait_q);
return;
}
if (msg_insert(sender->msg, info))
return;
list_del(&sender->list);
wake_q_add(wake_q, sender->task);
sender->state = STATE_READY;
}
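/*
 * Common implementation of mq_timedsend(2): validate priority and size,
 * copy the message in before taking info->lock, then either hand it
 * straight to a waiting receiver (pipelined_send), insert it into the
 * message tree, or block in wq_sleep() when the queue is full and the
 * descriptor is not O_NONBLOCK.
 */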
static int do_mq_timedsend(mqd_t mqdes, const char __user *u_msg_ptr,
size_t msg_len, unsigned int msg_prio,
struct timespec64 *ts)
{
struct fd f;
struct inode *inode;
struct ext_wait_queue wait;
struct ext_wait_queue *receiver;
struct msg_msg *msg_ptr;
struct mqueue_inode_info *info;
ktime_t expires, *timeout = NULL;
struct posix_msg_tree_node *new_leaf = NULL;
int ret = 0;
DEFINE_WAKE_Q(wake_q);
if (unlikely(msg_prio >= (unsigned long) MQ_PRIO_MAX))
return -EINVAL;
if (ts) {
expires = timespec64_to_ktime(*ts);
timeout = &expires;
}
audit_mq_sendrecv(mqdes, msg_len, msg_prio, ts);
f = fdget(mqdes);
if (unlikely(!f.file)) {
ret = -EBADF;
goto out;
}
inode = file_inode(f.file);
if (unlikely(f.file->f_op != &mqueue_file_operations)) {
ret = -EBADF;
goto out_fput;
}
info = MQUEUE_I(inode);
audit_file(f.file);
if (unlikely(!(f.file->f_mode & FMODE_WRITE))) {
ret = -EBADF;
goto out_fput;
}
if (unlikely(msg_len > info->attr.mq_msgsize)) {
ret = -EMSGSIZE;
goto out_fput;
}
/* First try to allocate memory, before doing anything with
* existing queues. */
msg_ptr = load_msg(u_msg_ptr, msg_len);
if (IS_ERR(msg_ptr)) {
ret = PTR_ERR(msg_ptr);
goto out_fput;
}
msg_ptr->m_ts = msg_len;
msg_ptr->m_type = msg_prio;
/*
* msg_insert really wants us to have a valid, spare node struct so
* it doesn't have to kmalloc a GFP_ATOMIC allocation, but it will
* fall back to that if necessary.
*/
if (!info->node_cache)
new_leaf = kmalloc(sizeof(*new_leaf), GFP_KERNEL);
spin_lock(&info->lock);
if (!info->node_cache && new_leaf) {
/* Save our speculative allocation into the cache */
INIT_LIST_HEAD(&new_leaf->msg_list);
info->node_cache = new_leaf;
new_leaf = NULL;
} else {
kfree(new_leaf);
}
if (info->attr.mq_curmsgs == info->attr.mq_maxmsg) {
if (f.file->f_flags & O_NONBLOCK) {
ret = -EAGAIN;
} else {
wait.task = current;
wait.msg = (void *) msg_ptr;
wait.state = STATE_NONE;
ret = wq_sleep(info, SEND, timeout, &wait);
/*
* wq_sleep must be called with info->lock held, and
* returns with the lock released
*/
goto out_free;
}
} else {
receiver = wq_get_first_waiter(info, RECV);
if (receiver) {
pipelined_send(&wake_q, info, msg_ptr, receiver);
} else {
/* adds message to the queue */
ret = msg_insert(msg_ptr, info);
if (ret)
goto out_unlock;
__do_notify(info);
}
inode->i_atime = inode->i_mtime = inode->i_ctime =
current_time(inode);
}
out_unlock:
spin_unlock(&info->lock);
wake_up_q(&wake_q);
out_free:
if (ret)
free_msg(msg_ptr);
out_fput:
fdput(f);
out:
return ret;
}
static int do_mq_timedreceive(mqd_t mqdes, char __user *u_msg_ptr,
size_t msg_len, unsigned int __user *u_msg_prio,
struct timespec64 *ts)
{
ssize_t ret;
struct msg_msg *msg_ptr;
struct fd f;
struct inode *inode;
struct mqueue_inode_info *info;
struct ext_wait_queue wait;
ktime_t expires, *timeout = NULL;
struct posix_msg_tree_node *new_leaf = NULL;
if (ts) {
expires = timespec64_to_ktime(*ts);
timeout = &expires;
}
audit_mq_sendrecv(mqdes, msg_len, 0, ts);
f = fdget(mqdes);
if (unlikely(!f.file)) {
ret = -EBADF;
goto out;
}
inode = file_inode(f.file);
if (unlikely(f.file->f_op != &mqueue_file_operations)) {
ret = -EBADF;
goto out_fput;
}
info = MQUEUE_I(inode);
audit_file(f.file);
if (unlikely(!(f.file->f_mode & FMODE_READ))) {
ret = -EBADF;
goto out_fput;
}
/* check that the receive buffer can hold the largest possible message */
if (unlikely(msg_len < info->attr.mq_msgsize)) {
ret = -EMSGSIZE;
goto out_fput;
}
/*
 * msg_insert really wants us to have a valid, spare node struct so
 * it doesn't have to fall back to a GFP_ATOMIC allocation under the
 * queue spinlock, but it will do so if necessary.
 */
if (!info->node_cache)
new_leaf = kmalloc(sizeof(*new_leaf), GFP_KERNEL);
spin_lock(&info->lock);
if (!info->node_cache && new_leaf) {
/* Save our speculative allocation into the cache */
INIT_LIST_HEAD(&new_leaf->msg_list);
info->node_cache = new_leaf;
} else {
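/*
 * Either the node cache is already primed or we never allocated a
 * spare above; kfree(NULL) is a harmless no-op.
 */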
kfree(new_leaf);
}
if (info->attr.mq_curmsgs == 0) {
if (f.file->f_flags & O_NONBLOCK) {
spin_unlock(&info->lock);
ret = -EAGAIN;
} else {
wait.task = current;
wait.state = STATE_NONE;
ret = wq_sleep(info, RECV, timeout, &wait);
msg_ptr = wait.msg;
}
} else {
DEFINE_WAKE_Q(wake_q);
msg_ptr = msg_get(info);
inode->i_atime = inode->i_mtime = inode->i_ctime =
current_time(inode);
/* There is now free space in queue. */
pipelined_receive(&wake_q, info);
spin_unlock(&info->lock);
wake_up_q(&wake_q);
ret = 0;
}
if (ret == 0) {
ret = msg_ptr->m_ts;
if ((u_msg_prio && put_user(msg_ptr->m_type, u_msg_prio)) ||
store_msg(u_msg_ptr, msg_ptr, msg_ptr->m_ts)) {
ret = -EFAULT;
}
free_msg(msg_ptr);
}
out_fput:
fdput(f);
out:
return ret;
}
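/*
 * Illustrative userspace sketch (not part of this file; the queue name
 * "/example" is an assumption): the buffer handed to mq_receive() or
 * mq_timedreceive() must be at least mq_msgsize bytes, otherwise the
 * call fails with EMSGSIZE as checked above.  Querying the attributes
 * first avoids guessing the size:
 *
 *	#include <fcntl.h>
 *	#include <mqueue.h>
 *	#include <stdlib.h>
 *
 *	struct mq_attr attr;
 *	unsigned int prio;
 *	mqd_t q = mq_open("/example", O_RDONLY);
 *
 *	mq_getattr(q, &attr);
 *	char *buf = malloc(attr.mq_msgsize);
 *	ssize_t n = mq_receive(q, buf, attr.mq_msgsize, &prio);
 */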
SYSCALL_DEFINE5(mq_timedsend, mqd_t, mqdes, const char __user *, u_msg_ptr,
size_t, msg_len, unsigned int, msg_prio,
const struct __kernel_timespec __user *, u_abs_timeout)
{
struct timespec64 ts, *p = NULL;
if (u_abs_timeout) {
int res = prepare_timeout(u_abs_timeout, &ts);
if (res)
return res;
p = &ts;
}
return do_mq_timedsend(mqdes, u_msg_ptr, msg_len, msg_prio, p);
}
SYSCALL_DEFINE5(mq_timedreceive, mqd_t, mqdes, char __user *, u_msg_ptr,
size_t, msg_len, unsigned int __user *, u_msg_prio,
const struct __kernel_timespec __user *, u_abs_timeout)
{
struct timespec64 ts, *p = NULL;
if (u_abs_timeout) {
int res = prepare_timeout(u_abs_timeout, &ts);
if (res)
return res;
p = &ts;
}
return do_mq_timedreceive(mqdes, u_msg_ptr, msg_len, u_msg_prio, p);
}
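/*
 * Illustrative userspace sketch (not part of this file; "q", "msg" and
 * "len" are assumed to exist): both wrappers above take an absolute
 * deadline (CLOCK_REALTIME, per POSIX) which prepare_timeout() converts
 * and validates before the wait, so a relative wait is built from the
 * current time:
 *
 *	#include <errno.h>
 *	#include <mqueue.h>
 *	#include <stdio.h>
 *	#include <time.h>
 *
 *	struct timespec deadline;
 *
 *	clock_gettime(CLOCK_REALTIME, &deadline);
 *	deadline.tv_sec += 5;
 *	if (mq_timedsend(q, msg, len, 0, &deadline) == -1 && errno == ETIMEDOUT)
 *		fprintf(stderr, "queue stayed full for ~5s\n");
 */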
/*
 * Notes: the case when a user wants us to deregister (with NULL as the
 * pointer) but is not the current owner of the notification is silently
 * ignored.  POSIX does not explicitly define this behaviour.
 */
static int do_mq_notify(mqd_t mqdes, const struct sigevent *notification)
{
int ret;
struct fd f;
struct sock *sock;
struct inode *inode;
struct mqueue_inode_info *info;
struct sk_buff *nc;
audit_mq_notify(mqdes, notification);
nc = NULL;
sock = NULL;
if (notification != NULL) {
if (unlikely(notification->sigev_notify != SIGEV_NONE &&
notification->sigev_notify != SIGEV_SIGNAL &&
notification->sigev_notify != SIGEV_THREAD))
return -EINVAL;
if (notification->sigev_notify == SIGEV_SIGNAL &&
!valid_signal(notification->sigev_signo)) {
return -EINVAL;
}
if (notification->sigev_notify == SIGEV_THREAD) {
long timeo;
/* create the notify skb */
nc = alloc_skb(NOTIFY_COOKIE_LEN, GFP_KERNEL);
if (!nc) {
ret = -ENOMEM;
goto out;
}
if (copy_from_user(nc->data,
notification->sigev_value.sival_ptr,
NOTIFY_COOKIE_LEN)) {
ret = -EFAULT;
goto out;
}
/* TODO: add a header? */
skb_put(nc, NOTIFY_COOKIE_LEN);
/* and attach it to the socket */
retry:
f = fdget(notification->sigev_signo);
if (!f.file) {
ret = -EBADF;
goto out;
}
sock = netlink_getsockbyfilp(f.file);
fdput(f);
if (IS_ERR(sock)) {
ret = PTR_ERR(sock);
sock = NULL;
goto out;
}
timeo = MAX_SCHEDULE_TIMEOUT;
ret = netlink_attachskb(sock, nc, &timeo, NULL);
if (ret == 1) {
sock = NULL;
goto retry;
}
if (ret) {
sock = NULL;
nc = NULL;
goto out;
}
}
}
f = fdget(mqdes);
if (!f.file) {
ret = -EBADF;
goto out;
}
inode = file_inode(f.file);
if (unlikely(f.file->f_op != &mqueue_file_operations)) {
ret = -EBADF;
goto out_fput;
}
info = MQUEUE_I(inode);
ret = 0;
spin_lock(&info->lock);
if (notification == NULL) {
if (info->notify_owner == task_tgid(current)) {
remove_notification(info);
inode->i_atime = inode->i_ctime = current_time(inode);
}
} else if (info->notify_owner != NULL) {
ret = -EBUSY;
} else {
switch (notification->sigev_notify) {
case SIGEV_NONE:
info->notify.sigev_notify = SIGEV_NONE;
break;
case SIGEV_THREAD:
info->notify_sock = sock;
info->notify_cookie = nc;
sock = NULL;
nc = NULL;
info->notify.sigev_notify = SIGEV_THREAD;
break;
case SIGEV_SIGNAL:
info->notify.sigev_signo = notification->sigev_signo;
info->notify.sigev_value = notification->sigev_value;
info->notify.sigev_notify = SIGEV_SIGNAL;
break;
}
info->notify_owner = get_pid(task_tgid(current));
info->notify_user_ns = get_user_ns(current_user_ns());
inode->i_atime = inode->i_ctime = current_time(inode);
}
spin_unlock(&info->lock);
out_fput:
fdput(f);
out:
if (sock)
netlink_detachskb(sock, nc);
else if (nc)
dev_kfree_skb(nc);
return ret;
}
SYSCALL_DEFINE2(mq_notify, mqd_t, mqdes,
const struct sigevent __user *, u_notification)
{
struct sigevent n, *p = NULL;
if (u_notification) {
if (copy_from_user(&n, u_notification, sizeof(struct sigevent)))
return -EFAULT;
p = &n;
}
return do_mq_notify(mqdes, p);
}
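/*
 * Illustrative userspace sketch (not part of this file; the signal
 * choice is an assumption): only one process may own a queue's
 * notification at a time (-EBUSY above), and a NULL argument drops the
 * registration only when the caller is the current owner:
 *
 *	#include <mqueue.h>
 *	#include <signal.h>
 *
 *	struct sigevent sev = {
 *		.sigev_notify = SIGEV_SIGNAL,
 *		.sigev_signo  = SIGUSR1,
 *	};
 *
 *	mq_notify(q, &sev);	// SIGUSR1 fires when a message lands on the empty queue
 *	mq_notify(q, NULL);	// deregister; silently ignored if we no longer own it
 */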
static int do_mq_getsetattr(int mqdes, struct mq_attr *new, struct mq_attr *old)
{
struct fd f;
struct inode *inode;
struct mqueue_inode_info *info;
if (new && (new->mq_flags & (~O_NONBLOCK)))
return -EINVAL;
f = fdget(mqdes);
if (!f.file)
return -EBADF;
if (unlikely(f.file->f_op != &mqueue_file_operations)) {
fdput(f);
return -EBADF;
}
inode = file_inode(f.file);
info = MQUEUE_I(inode);
spin_lock(&info->lock);
if (old) {
*old = info->attr;
old->mq_flags = f.file->f_flags & O_NONBLOCK;
}
if (new) {
audit_mq_getsetattr(mqdes, new);
spin_lock(&f.file->f_lock);
if (new->mq_flags & O_NONBLOCK)
f.file->f_flags |= O_NONBLOCK;
else
f.file->f_flags &= ~O_NONBLOCK;
spin_unlock(&f.file->f_lock);
inode->i_atime = inode->i_ctime = current_time(inode);
}
spin_unlock(&info->lock);
fdput(f);
return 0;
}
SYSCALL_DEFINE3(mq_getsetattr, mqd_t, mqdes,
const struct mq_attr __user *, u_mqstat,
struct mq_attr __user *, u_omqstat)
{
int ret;
struct mq_attr mqstat, omqstat;
struct mq_attr *new = NULL, *old = NULL;
if (u_mqstat) {
new = &mqstat;
if (copy_from_user(new, u_mqstat, sizeof(struct mq_attr)))
return -EFAULT;
}
if (u_omqstat)
old = &omqstat;
ret = do_mq_getsetattr(mqdes, new, old);
if (ret || !old)
return ret;
if (copy_to_user(u_omqstat, old, sizeof(struct mq_attr)))
return -EFAULT;
return 0;
}
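/*
 * Illustrative userspace sketch (not part of this file): the only
 * attribute that can change on an open queue is the O_NONBLOCK flag;
 * any other bit in mq_flags is rejected with -EINVAL above, and
 * mq_maxmsg/mq_msgsize stay fixed after mq_open():
 *
 *	#include <fcntl.h>
 *	#include <mqueue.h>
 *
 *	struct mq_attr cur, old;
 *
 *	mq_getattr(q, &cur);
 *	cur.mq_flags |= O_NONBLOCK;
 *	mq_setattr(q, &cur, &old);	// "old" now holds the previous attributes
 */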
#ifdef CONFIG_COMPAT
struct compat_mq_attr {
compat_long_t mq_flags; /* message queue flags */
compat_long_t mq_maxmsg; /* maximum number of messages */
compat_long_t mq_msgsize; /* maximum message size */
compat_long_t mq_curmsgs; /* number of messages currently queued */
compat_long_t __reserved[4]; /* ignored for input, zeroed for output */
};
static inline int get_compat_mq_attr(struct mq_attr *attr,
const struct compat_mq_attr __user *uattr)
{
struct compat_mq_attr v;
if (copy_from_user(&v, uattr, sizeof(*uattr)))
return -EFAULT;
memset(attr, 0, sizeof(*attr));
attr->mq_flags = v.mq_flags;
attr->mq_maxmsg = v.mq_maxmsg;
attr->mq_msgsize = v.mq_msgsize;
attr->mq_curmsgs = v.mq_curmsgs;
return 0;
}
static inline int put_compat_mq_attr(const struct mq_attr *attr,
struct compat_mq_attr __user *uattr)
{
struct compat_mq_attr v;
memset(&v, 0, sizeof(v));
v.mq_flags = attr->mq_flags;
v.mq_maxmsg = attr->mq_maxmsg;
v.mq_msgsize = attr->mq_msgsize;
v.mq_curmsgs = attr->mq_curmsgs;
if (copy_to_user(uattr, &v, sizeof(*uattr)))
return -EFAULT;
return 0;
}
COMPAT_SYSCALL_DEFINE4(mq_open, const char __user *, u_name,
int, oflag, compat_mode_t, mode,
struct compat_mq_attr __user *, u_attr)
{
struct mq_attr attr, *p = NULL;
if (u_attr && oflag & O_CREAT) {
p = &attr;
if (get_compat_mq_attr(&attr, u_attr))
return -EFAULT;
}
return do_mq_open(u_name, oflag, mode, p);
}
COMPAT_SYSCALL_DEFINE2(mq_notify, mqd_t, mqdes,
const struct compat_sigevent __user *, u_notification)
{
struct sigevent n, *p = NULL;
if (u_notification) {
if (get_compat_sigevent(&n, u_notification))
return -EFAULT;
if (n.sigev_notify == SIGEV_THREAD)
n.sigev_value.sival_ptr = compat_ptr(n.sigev_value.sival_int);
p = &n;
}
return do_mq_notify(mqdes, p);
}
COMPAT_SYSCALL_DEFINE3(mq_getsetattr, mqd_t, mqdes,
const struct compat_mq_attr __user *, u_mqstat,
struct compat_mq_attr __user *, u_omqstat)
{
int ret;
struct mq_attr mqstat, omqstat;
struct mq_attr *new = NULL, *old = NULL;
if (u_mqstat) {
new = &mqstat;
if (get_compat_mq_attr(new, u_mqstat))
return -EFAULT;
}
if (u_omqstat)
old = &omqstat;
ret = do_mq_getsetattr(mqdes, new, old);
if (ret || !old)
return ret;
if (put_compat_mq_attr(old, u_omqstat))
return -EFAULT;
return 0;
}
#endif
#ifdef CONFIG_COMPAT_32BIT_TIME
static int compat_prepare_timeout(const struct old_timespec32 __user *p,
struct timespec64 *ts)
{
if (get_old_timespec32(ts, p))
return -EFAULT;
if (!timespec64_valid(ts))
return -EINVAL;
return 0;
}
SYSCALL_DEFINE5(mq_timedsend_time32, mqd_t, mqdes,
const char __user *, u_msg_ptr,
unsigned int, msg_len, unsigned int, msg_prio,
const struct old_timespec32 __user *, u_abs_timeout)
{
struct timespec64 ts, *p = NULL;
if (u_abs_timeout) {
int res = compat_prepare_timeout(u_abs_timeout, &ts);
if (res)
return res;
p = &ts;
}
return do_mq_timedsend(mqdes, u_msg_ptr, msg_len, msg_prio, p);
}
SYSCALL_DEFINE5(mq_timedreceive_time32, mqd_t, mqdes,
char __user *, u_msg_ptr,
unsigned int, msg_len, unsigned int __user *, u_msg_prio,
const struct old_timespec32 __user *, u_abs_timeout)
{
struct timespec64 ts, *p = NULL;
if (u_abs_timeout) {
int res = compat_prepare_timeout(u_abs_timeout, &ts);
if (res)
return res;
p = &ts;
}
return do_mq_timedreceive(mqdes, u_msg_ptr, msg_len, u_msg_prio, p);
}
#endif
static const struct inode_operations mqueue_dir_inode_operations = {
.lookup = simple_lookup,
.create = mqueue_create,
.unlink = mqueue_unlink,
};
static const struct file_operations mqueue_file_operations = {
.flush = mqueue_flush_file,
.poll = mqueue_poll_file,
.read = mqueue_read_file,
.llseek = default_llseek,
};
static const struct super_operations mqueue_super_ops = {
.alloc_inode = mqueue_alloc_inode,
.free_inode = mqueue_free_inode,
.evict_inode = mqueue_evict_inode,
.statfs = simple_statfs,
};
static const struct fs_context_operations mqueue_fs_context_ops = {
.free = mqueue_fs_context_free,
.get_tree = mqueue_get_tree,
};
static struct file_system_type mqueue_fs_type = {
.name = "mqueue",
.init_fs_context = mqueue_init_fs_context,
.kill_sb = kill_litter_super,
.fs_flags = FS_USERNS_MOUNT,
};
int mq_init_ns(struct ipc_namespace *ns)
{
struct vfsmount *m;
ns->mq_queues_count = 0;
ns->mq_queues_max = DFLT_QUEUESMAX;
ns->mq_msg_max = DFLT_MSGMAX;
ns->mq_msgsize_max = DFLT_MSGSIZEMAX;
ns->mq_msg_default = DFLT_MSG;
ns->mq_msgsize_default = DFLT_MSGSIZE;
m = mq_create_mount(ns);
if (IS_ERR(m))
return PTR_ERR(m);
ns->mq_mnt = m;
return 0;
}
void mq_clear_sbinfo(struct ipc_namespace *ns)
{
ns->mq_mnt->mnt_sb->s_fs_info = NULL;
}
void mq_put_mnt(struct ipc_namespace *ns)
{
kern_unmount(ns->mq_mnt);
}
static int __init init_mqueue_fs(void)
{
int error;
mqueue_inode_cachep = kmem_cache_create("mqueue_inode_cache",
sizeof(struct mqueue_inode_info), 0,
SLAB_HWCACHE_ALIGN|SLAB_ACCOUNT, init_once);
if (mqueue_inode_cachep == NULL)
return -ENOMEM;
/* ignore failures - they are not fatal */
mq_sysctl_table = mq_register_sysctl_table();
error = register_filesystem(&mqueue_fs_type);
if (error)
goto out_sysctl;
spin_lock_init(&mq_lock);
error = mq_init_ns(&init_ipc_ns);
if (error)
goto out_filesystem;
return 0;
out_filesystem:
unregister_filesystem(&mqueue_fs_type);
out_sysctl:
if (mq_sysctl_table)
unregister_sysctl_table(mq_sysctl_table);
kmem_cache_destroy(mqueue_inode_cachep);
return error;
}
device_initcall(init_mqueue_fs);